Is there a way to verify PCI slots are bad/not working via command line?



Brickwizard

Humour me... what colour is the slot that's not working?
 

f33dm3bits

I guess technically this thread is about PCI "slots" so that is helpful. But everything on the PCI bus isn't necessarily in a slot.
As mentioned before, hardware isn't my thing. I figured some other stuff would probably be connected to the PCI slots through some other means, like lanes or something else. From what I read in that PCI Wikipedia article, it seems a lot of components are connected to each other somehow, through a bus, a bridge, or some other kind of connection.
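For what it's worth, that topology is roughly visible from the command line too; a quick sketch using standard lspci options (output will of course differ from board to board):

# Show the PCI/PCIe topology as a tree, including the bridges that
# on-board devices hang off rather than physical slots.
lspci -tv

# Plain listing with vendor/device IDs; on-board controllers (SATA, USB,
# audio, Ethernet) show up here alongside anything in a physical slot.
lspci -nn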
 

dos2unix

One other note here. Sometimes things are "working", but not as well as they could or should.

This is a quote from...

Most 32-bit PCI cards will function properly in 64-bit PCI-X slots, but the bus clock rate will be limited to the clock frequency of the slowest card, an inherent limitation of PCI's shared bus topology. For example, when a PCI 2.3, 66-MHz peripheral is installed into a PCI-X bus capable of 133 MHz, the entire bus backplane will be limited to 66 MHz. To get around this limitation, many motherboards have two or more PCI/PCI-X buses, with one bus intended for use with high-speed PCI-X peripherals, and the other bus intended for general-purpose peripherals.

I have done this before. I mixed a slow 32-bit card with a fast 64-bit card, and so both cards were seen as slow 32-bit cards.
Lesson learned. Be careful where you plug it in.
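If you want to check for that situation from a running system, something along these lines will show what the conventional PCI / PCI-X side reports (the exact fields vary by device and lspci version, so treat this as a starting point rather than a definitive test):

# Full details for every device; on conventional PCI / PCI-X devices the
# Status line and any PCI-X capability section hint at what the bus settled on.
sudo lspci -vv | less

# Narrower view: keep the device header lines plus anything mentioning
# PCI-X or the 66MHz status flag.
sudo lspci -vv | grep -E '^[0-9a-f]{2}:|PCI-X|66MHz'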
 

Brickwizard

@dos2unix
You have hit on something that was going through my head. I think he has 2 PCIe and 1 PCI [legacy standard]. Whilst you can put a peripheral designed for PCI into PCIe and it will work, it won't work the other way around. PCI was single channel and ran at around 264 MB/s; PCIe has one to 16 lanes, each of around 1 GB/s, so it can run more than 16 times faster.
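On the PCIe side you can actually see the negotiated lane count and speed per device; a quick sketch (standard lspci fields, run as root to get the full capability dump):

# LnkCap is what the device/slot is capable of, LnkSta is what the link
# actually trained to; a x16 card stuck at x1, or running at a lower speed,
# shows up as a mismatch between the two lines.
sudo lspci -vv | grep -E '^[0-9a-f]{2}:|LnkCap:|LnkSta:'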
 
OP

Iacceptthelinuxchallenge

@dos2unix
You have hit on something that was going through my head. I think he has 2 PCIe and 1 PCI [legacy standard]. Whilst you can put a peripheral designed for PCI into PCIe and it will work, it won't work the other way around. PCI was single channel and ran at around 264 MB/s; PCIe has one to 16 lanes, each of around 1 GB/s, so it can run more than 16 times faster.
I have 2 PCIe slots... one is x16 and one is x4. The x16 is black and the x4 is white. I also have a PCI-X slot which is unverified either way, because I don't know if the card is good or not (its slot is a little over an inch wide and is black). The lspci was run with a working GPU in the x16, a card of unknown condition in the PCI-X slot, and a verified working TV tuner card in the PCI1 slot. The PCIe x4 slot was empty, as were the PCI2 and PCI3 slots.
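One thing worth running alongside lspci is asking the board's own firmware tables which slots it thinks exist and which are in use; treat the output as a hint, since (as noted further down the thread) this DMI data isn't always complete:

# SMBIOS/DMI view of the physical slots (designation, type, current usage).
sudo dmidecode -t slot

# inxi condenses the same information, if it's installed.
sudo inxi --slots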
 

Brickwizard

You should have a short black PCIe x4 and a black PCIe x16; near to those you should have a legacy PCI slot (white, with a release tail), and one that looks like a PCI slot near the edge of the motherboard without a release tail... this is not a PCI slot. If you have been following what we have been saying, your modern cards will not work in the white PCI slot; it is not fast enough.
 
OP

Iacceptthelinuxchallenge

As mentioned before, hardware isn't my thing. I figured some other stuff would probably be connected to the PCI slots through some other means, like lanes or something else. From what I read in that PCI Wikipedia article, it seems a lot of components are connected to each other somehow, through a bus, a bridge, or some other kind of connection.
My original confusion was about how a GPU card in the PCIe x4 slot could take out the slot above it (PCI-X), skip a slot (PCI1), then take out PCI2 and PCI3. I have made multiple mistakes in this adventure verbalizing what I needed to know, but have just recently realized I also messed up by NOT trying different cards in the PCI-X slot to see if other cards would work..... damn it.
You should have a short black PCIe x4 and a black PCIe x16; near to those you should have a legacy PCI slot (white, with a release tail), and one that looks like a PCI slot near the edge of the motherboard without a release tail... this is not a PCI slot. If you have been following what we have been saying, your modern cards will not work in the white PCI slot; it is not fast enough.
All my PCI cards are from the same year, give or take a few years.
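For the card-swapping test, the simplest way to compare runs is to capture lspci output before and after the swap and diff it; a rough sketch (the file names are just examples):

# Capture what the kernel currently sees.
lspci -nn | sort > /tmp/lspci-before.txt

# ...power off, move a known-good card into the suspect slot, boot back up...

# Capture again and compare; a dead slot shows up as the card simply
# never appearing in the second listing.
lspci -nn | sort > /tmp/lspci-after.txt
diff /tmp/lspci-before.txt /tmp/lspci-after.txt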
 

h2-1

Having dealt with slots data in inxi (inxi --slots) and of course pci bus data, the sad fact is that as far as I know, not only is there no way to detect broken or dead hardware, beyond it not showing up, but, slightly worse, dmidecode data is not always complete for pci slots. I have examples where working pci slots do not show up at all in dmidecode/inxi --slots, something I discovered while testing the --slots feature when it was introduced.

In general it's almost impossible for a running operating system to detect hardware damage, with one exception: you can often detect damaged drives using either file system checks or smartctl. But otherwise, beyond searching for clear fault messages in system logs, you can't generally deduce much about failing hardware, outside of an item, say a usb port or audio port, suddenly being gone from lspci output; that almost always means it's dead. But those are not empty slots, those are occupied slots.

In more extreme failure events, like a failed controller card which is half running but mostly dead, you may detect it by system hangs; the Linux kernel gets very sad and grouchy when something that is telling it that it is there, like a usb controller card, is actually incapable of standard request/response type actions.

There used to be some system utility disks that included things like motherboard testers, but I never had any luck with those working. RAM testers work, but in general, motherboard-level stuff is very difficult to test for.
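To make the "search the logs" and smartctl parts concrete, something along these lines is the usual starting point (the drive name is just a placeholder, swap in whatever you actually have):

# Kernel messages about PCI/PCIe trouble (AER reports, link errors, etc.).
sudo dmesg | grep -iE 'pcieport|aer|pci.*error'

# Same search against the journal for the previous boot, if systemd is in use.
sudo journalctl -k -b -1 | grep -iE 'pcieport|aer|pci.*error'

# Drives are the one component that self-reports health; replace /dev/sda
# with the drive you want to check.
sudo smartctl -H -A /dev/sda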
 

Brickwizard

motherboard-level stuff is very difficult to test for.
Damn near impossible unless you have it on a test bench, and even then I was never more than passably successful.
 
OP

Iacceptthelinuxchallenge

Having dealt with slots data in inxi (inxi --slots) and of course pci bus data, the sad fact is that as far as I know, not only is there no way to detect broken or dead hardware, beyond it not showing up, but, slightly worse, dmidecode data is not always complete for pci slots. ...
THANK YOU!!!
 