How to change from x16 to x8 GPU to free up lanes

Hi Guys

I have a build with an Intel Core i9-12900KF, which is basically a 20-lane CPU.

On my motherboard I have 2 PCIe slots, and when I place my GPU in, it takes up x16 lanes.

However, I have a network card that requires x8 lanes, so that's GPU 16 lanes & network card 8 lanes, totalling 24 lanes, and now the network card doesn't work.

Because I'm 4 lanes short, either device works on its own, but both don't work together.
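As a rough sanity check of that lane arithmetic, here's a sketch; the i9-12900KF's 16 Gen5 + 4 Gen4 CPU lane split is from Intel's spec, the rest is assumed, and how a board actually splits the lanes across slots varies:

```python
# Rough lane-budget sketch. The i9-12900KF exposes 16 PCIe 5.0 lanes
# plus 4 PCIe 4.0 lanes from the CPU (the x4 usually feeds an M.2 slot,
# not a second PCIe slot) -- the split across slots is up to the board.
cpu_lanes = 16 + 4
gpu_lanes = 16
nic_lanes = 8

needed = gpu_lanes + nic_lanes
print(f"needed {needed} lanes, CPU provides {cpu_lanes}")        # 24 vs 20
print(f"shortfall: {needed - cpu_lanes} lanes")                  # 4 lanes short
print("fits if the GPU drops to x8:", 8 + nic_lanes <= cpu_lanes)  # True
```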

Is there a way to switch the top PCIe slot to run at x8 so I can use my network card?

I went into the BIOS and can see there's a PCIe option to change from Gen 4 to either Gen 3 or 2, but I don't think that will change the top slot from x16 to x8.

It's a puzzling one.

Thank you guys
 
As above, every manufacturer wires the lanes differently. Usually the first two PCIe x16 slots are wired to the CPU controller, so when something is plugged into both they'll run at x8 each, but some boards wire the last PCIe slot to the chipset controller instead, usually with only 4 lanes. There's no way around this if that's the case for your board.
 
What motherboard and what network card?
Well, it's a Lenovo gaming PC, they call it a Lenovo Legion T7. These have proprietary motherboards, not like the ones we buy; HWiNFO shows it as Lenovo 3750, but that doesn't mean anything.

The network card is an Intel X540-T2 x8 Ethernet card.

I went into the BIOS and even that is very plain and simple, not many options. I went there to change the PCIe slots from x16 to x8 so I can use both devices.

But all I see is an option to change PCIe from Gen 5 to Gen 4 or 3. I did change it, but my network card is still not detected; I think the Gen setting reduces the PCIe bandwidth, not the number of lanes.

20 lanes for an i9 chip is way too low, considering 16 are eaten by the GPU.
 
Don't forget some PCIe slots are not wired to the CPU; some are wired to the chipset instead. I'm willing to bet the second x16 slot is actually only wired to the chipset with 4 or fewer lanes. There will be no BIOS option to bypass this.
 
You can't reroute lanes unless the motherboard already wired them up that way. The switch is usually automatic based on what is (or rather whether something is) present in each PCI-E slot, not something you can enable in the BIOS.

Without the motherboard manual it's not possible to confirm either way, but as said above, you'll usually only get an 8-lane secondary PCI-E slot on a Z or X board; B boards tend to have it wired to the chipset with 4 lanes.
 
Have you tried the network card in the top slot, with the graphics card in the bottom slot?

Likely, as mentioned above, the bottom slot is only x4, so you will get a performance hit, but at least your NIC should work.
I've actually tried the card in both slots and Device Manager doesn't even see it; no lights on the network card.

I did install the network card in my personal build and it fully worked, sparked up immediately.

Like the guys above have stated, I reckon it's wired to the chipset, hence why the NIC doesn't work.

20 lanes for an i9 is tight when a GPU eats up 16 of them.
 
You have to remember most of the modern "i9"s are still on Intel's "midrange" platform, where users are on a tighter budget and don't need that many lanes. The only reason it's become so "high end" in terms of CPU performance is competition from AMD, which is also why there's been no proper LGA2066 successor yet (and likewise nothing new in AMD's Threadripper line). If you need a lot of PCIe lanes it's unfortunately a bad time, as there's nothing new from either side.
 

That network card, even when maxed out with both ports utilizing a full 10Gb connection at full duplex, will require around 5GB/s of bandwidth. You could achieve that with the following combinations:

PCI-e version 1 = x16 slot (4.0 GB/s) (slight bottleneck)
PCI-e version 2 = x16 slot (8.0 GB/s)
PCI-e version 3 = x8 slot (7.8 GB/s)
PCI-e version 4 = x4 slot (7.8 GB/s)
PCI-e version 5 = x2 slot (7.8 GB/s)
PCI-e version 6 = x1 slot (7.6 GB/s)
PCI-e version 7 = x1 slot (15.1 GB/s)

I'm going to assume you have a modern system, which will mean one of the last four lines above applies. Worst case you would need 4 additional lanes. If all you can change on the motherboard is the PCI-e "version" of the slots, and you think that's what is causing it to try to grab too many lanes, then put the card in the second big x16 slot and limit it to PCI-e version 2. That still shouldn't cause a bottleneck, should it?

EDIT: modified to account for full duplex
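To make that comparison reproducible, here's a rough sketch; the per-lane figures and the 5 GB/s requirement follow the post above, both are approximations, and the requirement itself gets debated later in the thread:

```python
# Approximate usable PCIe bandwidth (GB/s, per direction, after encoding
# overhead) for the generation/width combinations listed above. The
# per-lane figures are the commonly quoted rough values, not exact.
PER_LANE_GBPS = {1: 0.25, 2: 0.5, 3: 0.985, 4: 1.969, 5: 3.938}

def link_bandwidth(gen: int, lanes: int) -> float:
    return PER_LANE_GBPS[gen] * lanes

# The post above counts both 10Gb ports in both directions:
# 2 ports * 10 Gb/s * 2 directions = 40 Gb/s = 5 GB/s.
# (Since PCIe is itself full duplex, one could also argue only
# 2.5 GB/s is needed per direction.)
required = 5.0

for gen, lanes in [(1, 16), (2, 16), (3, 8), (4, 4), (5, 2)]:
    bw = link_bandwidth(gen, lanes)
    verdict = "ok" if bw >= required else "slight bottleneck"
    print(f"Gen{gen} x{lanes}: {bw:5.2f} GB/s -> {verdict}")
```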
 

It shouldn't matter how the lanes are routed if the add-in card is not platform specific. The controller's datasheet says it can support 1, 2, 4 or 8 lane operation, though the card you're using is only listed as 8-lane on Ark. You could try asking Lenovo; if you get lucky they might test it and send you a BIOS update (or unlock a feature you need to run it).
 
Actually, there could be another thing preventing the card from working. Looking at the NIC's release date, it was back in 2012; I wonder if it's actually some sort of weird incompatibility with UEFI boards that requires CSM to be enabled.
 
If it was released in 2012 then it would most probably have been a PCI-e version 3.0 spec card, which seems to fit with it being an x8 card. Have you got a model name for the motherboard in your PC? Have a look at it physically and see if anything is written on it.
 

Seems to be PCIe 2.1: https://ark.intel.com/content/www/u...thernet-converged-network-adapter-x540t2.html

But that shouldn't matter, as PCIe is backwards compatible, and it shouldn't prevent the system from seeing the card, especially since OP tried it in the top slot. I highly suspect it's down to the card being designed for legacy BIOS, which is what CSM on UEFI is for. OP said they tried it in their personal system; if they have CSM enabled, that could be why it works there (assuming it's a fairly recent system). Lenovo will most likely ship with CSM already disabled, since it has to be off for Secure Boot on Windows.

OP can try disabling CSM on their personal system and see if Windows can still see the card. If it can't, then that'll be the reason why it won't work in the Lenovo.
 
That's a good shout, it probably is that. If the card is PCI-e 2.1 and x8... it doesn't quite make sense, as it will run at the weakest link, which is the card's version 2.1. At x8 on 2.1 it has total bandwidth of 4.0GB/s, which would bottleneck running the two ports at full-duplex 10Gb, unless the card was never marketed as supporting that.
I feel like maybe I'm missing something here though.
 
Why on earth would the network card need 8 lanes? Are they definitely all wired up on the connector?
Because servers of the same age had standardised on x8 slots for almost everything, be that a RAID card, network card or anything else. There were generally no x1 or x4 slots in servers, and x16 slots were rare because GPUs used to be a rare use case in servers.
 

Fair one - makes sense. I suggest he spend a fiver on a new 1x network card...
 
10Gb/s = 1.25GB/s.

So a dual 10Gb card requires 2.5GB/s of bandwidth, which is slightly more than a PCIe 2.0 x4 slot could supply (2.0GB/s), hence why it needs x8 lanes.
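A minimal sketch of that conversion, using the same rounded 0.5 GB/s-per-lane figure for PCIe 2.0:

```python
# The conversion from the post above, spelled out.
ports = 2
port_rate_gbit = 10                         # Gb/s per port
line_rate = ports * port_rate_gbit / 8      # 2.5 GB/s for both ports
pcie2_lane = 0.5                            # GB/s per PCIe 2.0 lane (approx.)

print(f"dual 10GbE: {line_rate} GB/s")
print(f"PCIe 2.0 x4: {pcie2_lane * 4} GB/s")   # 2.0 GB/s -- not quite enough
print(f"PCIe 2.0 x8: {pcie2_lane * 8} GB/s")   # 4.0 GB/s -- enough
```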
 