10Gb PCIe NIC

Do bear in mind that SQM configured correctly will reduce your maximum throughput slightly, but you will have better loaded latency.

This only really matters if you regularly saturate the connection and rob other clients of necessary bandwidth. IIRC from your earlier posts, you only have one PC? If so, not sure I'd configure SQM for just one client.
 
SQM is absolutely still worth it. Downloading a Steam game, something from Usenet, or (worse) updating your Linux ISOs over torrent - and also want to play a game, or run a voice chat/call, etc? SQM will keep the line snappy, ensure latency doesn't go crazy and avoid dropped packets where possible. It keeps you from being at the mercy of your ISP's buffers (and policer/shaper).
 
Based on that type of usage, yes I agree. However, for one client QoS locally would likely be just as effective. OP's router may not even support SQM.

I mean, you are always at the mercy of the ISP's bandwidth policer. There is no escaping that policer - all you can do is not hit it.
 
Based on that type of usage, yes I agree. However, for one client QoS locally would likely be just as effective. OP's router may not even support SQM.

I mean, you are always at the mercy of the ISP's bandwidth policer. There is no escaping that policer - all you can do is not hit it.
Yeah, not hitting it is exactly what SQM is for!

What do you mean by 'QoS locally'? Running a plain queue discipline (e.g. fq_codel) and/or congestion control algorithm (e.g. cubic, reno, bbr) on the interface of a device without a shaper is ineffective on its own. It will sense congestion via packets dropped upstream, and crash/seesaw your transfer speeds in response, but it will never control the link or provide real benefit. Unshaped fq_codel is still slightly smarter than FIFO, but without a shaper active (AQM/SQM) it will never activate in the meaningful way we're discussing here.

If you meant to run fq_codel or similar with a shaper, then we're still back to what is effectively AQM/SQM, just not CAKE. You don't really want to make a single device the bottleneck on your link (I'm assuming on-device is what you meant by 'locally'); that job should always belong to the edge router. If your router isn't capable of SQM or at least AQM, it's time to upgrade!
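To make the distinction concrete, here's a minimal sketch of unshaped vs shaped fq_codel on a Linux box (the interface name eth1 and the 950mbit rate are just placeholders; a router's SQM package does the equivalent for you):

# Unshaped: the queue builds upstream at the ISP, not here, so fq_codel rarely gets to act
tc qdisc replace dev eth1 root fq_codel

# Shaped: rate-limit below the plan speed so this box becomes the bottleneck and fq_codel actually manages the queue
tc qdisc replace dev eth1 root handle 1: htb default 10
tc class add dev eth1 parent 1: classid 1:10 htb rate 950mbit
tc qdisc add dev eth1 parent 1:10 fq_codel

(That only covers egress; ingress shaping needs an IFB interface or a proper SQM package.)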

As for the single-user argument: Even with a single person in a home, it's highly unlikely these days that they have only one device connected to the Internet. There's the PC, smart TV, home and/or mobile phone, IPTV streaming, games consoles, maybe a NAS - all kinds of things. Even a single device can cause bloat for itself, much more so with multiple devices running at once (cloud backups, Steam download/game update, file downloads, video streams). I don't know anyone these days who pays for Internet (much less 'good' Internet) and just plugs a single PC into the ONT or modem.

You're not always at the mercy of the ISP's bandwidth shaper/policer in the way you imply. You're talking about the headline speed cap, which - yes - you're always subject to, but you're not considering the bigger picture or the role of {A,S}QM. Whatever the package, the ISP is 99% of the time going to be the bottleneck in the physical link, even when you're not carrying out a large/fast bulk download. For example: 10G NICs locally, a 10Gb ONT, and ~8G effective goodput from the ISP (or 1Gb hardware and any plan up to 'gigabit', or whatever). Your local devices send and receive data (even regular stuff, not just bulk TCP downloads) at line rate, which will be faster than the cap at the ISP end. You don't need to be downloading a 100GB file to hit this - even a web page requested from your PC's 2.5Gb NIC through the router's 2.5Gb WAN port will bottleneck at the ISP's side the moment you hit the plan's cap.

There are now two possibilities:
  • Your ISP has massive buffers on their BNG/edge (common these days) to avoid dropped packets, and it's FIFO. Your packets all hit your router NIC and ONT/modem at line rate (1G, 2.5G, 10G, whatever) and hit the ISP's edge at a slower rate (your plan speed). The ISP is the bottleneck and your packets sit in a huge dumb queue ISP-side, outside your control. Your game, voice chat, FaceTime call and stream packets sit helplessly behind 100ms to 500ms (or worse) of queued bulk TCP data (downloads, JavaScript, etc) that the ISP is ingesting from you *and every other customer in your area* at that time. Whether you are a single person with one PC or a family of ten with 100 devices doesn't change the fact that the ISP-side buffers are likely ingesting thousands of customers' packets at once, and you are now at the mercy of *their* buffers and *their* packet priorities. The result is uncontrollable bloat and a rubbish experience in latency-sensitive applications that you can do nothing about.
  • Your ISP has regular sized or shallow buffers and relies on a dumb policer. Now your packets spew out of your (faster) WAN NIC and hit the too-small ISP buffer. In this scenario you hit the brick wall that is the dumb bandwidth policer, and your packets that exceed the cap (eg you send at 2.5Gb physical line rate but your plan is 2Gb) are simply dropped cold. You'll see saw-tooth download graphs, huge jitter, TCP retransmits and your speed will 'yo-yo' as your local devices crash their speed in response to dropped packets (sensing congestion) and slowly ramp back up.
That's even with just one local user and a single device. Now as I said add in a family, or even just a single user with multiple devices... Well, it's not pretty - and yes, you're relying on (at the mercy of) the ISP shaper/policer. However, when we run AQM or - preferably - SQM we take away those possibilities. We are no longer subject to the ISP shaper/policer in the way you meant, because now *we* control the flow of packets and will never actually bump into said shaper/policer. For example, on my 2Gb ISP plan with 2.5Gb NICs locally, CAKE is set to 1.95Gb and as such I will *never* be subject to the ISP shaper - because now I am the bottleneck in the link, and I have full control over the flow of packets and their (non) delay. That's exactly what fair queueing is.

Also, don't make the mistake of conflating fairness (sharing bandwidth between multiple users on a connection, or even multiple applications on a single user's single device) and responsiveness. Maintaining responsiveness under load, even during microbursts, is paramount to a good user experience. That's equally true of just surfing the web as it is running multiple apps/streams/devices.

Modern protocols and applications are designed to probe and make full use of available capacity (whether it's a torrent, a website loading over QUIC or something else). The 'you'll never use the whole line/speed' argument is fallacious: even if only for a fraction of a second, you'll hit microbursts of traffic that do momentarily saturate the line and cause jitter and delays. Thus, you'll benefit from the responsiveness good SQM provides.

Both of the above ISP-controlled scenarios are independent of the plan's bandwidth cap, and the negative consequence of the ISP being the bottleneck on the link can be avoided by simply making yourself the bottleneck and taking full control of the packet flows. This allows you to avoid delays, jitter and needless drops. For that, we need a shaper (SQM) like CAKE, which provides both fairness *and* maintains responsiveness under load. The best scenario is a good qdisc and congestion control algorithm on the local devices (eg plain fq_codel + bbr), and SQM like CAKE on the edge device.
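For anyone wanting to try that 'best scenario', a minimal sketch (the interface name eth1 and the 1950mbit figure are placeholders for a ~2Gb plan, not a definitive config):

# Edge router (Linux/OpenWrt with sch_cake): shape a little below the plan speed so the ISP buffer/policer is never hit
tc qdisc replace dev eth1 root cake bandwidth 1950mbit besteffort nat
# (ingress shaping needs an IFB interface - or just use OpenWrt's luci-app-sqm, which sets both directions up for you)

# Local Linux clients: fq_codel as the default qdisc plus BBR congestion control
sysctl -w net.core.default_qdisc=fq_codel
sysctl -w net.ipv4.tcp_congestion_control=bbr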
 
Sorry, Rainmaker, your short novel is too early for me at half 6 in the morning!

OP, enable SQM or don't. I'm sure Rainmaker has given you all of the information you need to make a decision if your router supports it.
 
Sorry, Rainmaker, your short novel is too early for me at half 6 in the morning!

OP, enable SQM or don't. I'm sure Rainmaker has given you all of the information you need to make a decision if your router supports it.
It's all a bit too much for my use case. I get the 8Gbps advertised rate via my PC, which is enough for my needs, as was the 1Gbps I had before!

We don't even have massive contention or congestion at peak times (on my 1gb or 10gb lines), so any issues will be at my end tbh. It's all very different to my UK usage, albeit that was years ago, where contention ratios etc played a big part.


rp2000
 
Hey everyone, not sure if this is the right thread to post this or if I should make my own.

I'm trying to enable Jumbo Frames (MTU 9000) on my Windows 11 PC's 10GbE NIC, but I'm running into a wall and could use some help.

It is a 10Gtek 10GbE PCIe network card for Intel X520-DA1 (82599EN chip, single SFP+ port) in my Windows 11 machine. I also have another one inside my TrueNAS system.

What I've done:
  • I set Jumbo Frames in the NIC properties: I went to my network adapter settings, right-clicked my 10GbE card ("Ethernet 3"), went to Properties > Configure > Advanced, and set Jumbo Packet to 9014 Bytes.
  • I tried using netsh: I ran the command netsh interface ipv4 set subinterface "Ethernet 3" mtu=9000 store=persistent. However, this command fails with "The parameter is incorrect."
  • I checked the current MTU: Running netsh interface ipv4 show subinterface "Ethernet 3" still shows my MTU is stuck at 1500, even after setting the Jumbo Packet property and rebooting.
  • I can, however, set the MTU to something smaller than 1500 (like 1499) using the same netsh command.
  • I tried the above again with another 10Gbit NIC (an ASUS XG-C100F PCIe network interface card) and still can't set it higher than 1500.
Specs of windows pc:
  • AMD 9850X3D
  • 96GB ram
  • Asrock X870E Nova
  • Nvidia 5090 gpu
  • Bunch of NVMe SSDs installed.
I've also confirmed that my MikroTik switch (CRS309-1G-8S+IN), router (RB5009UPr+S+IN), and my TrueNAS system all have their MTU settings correctly set to 9000. I've even tried pinging with jumbo packets (ping 192.168.x.x -f -l 8972), and it fails with the "Packet needs to be fragmented but DF set" error, which confirms the MTU isn't being applied.

How its networked:

Fibre internet -> ONT/modem -> Router -> Switch -> (TrueNAS DIY PC, Desktop Windows PC...)

It seems like Windows isn't applying the MTU change to the IPv4 and IPv6 stack, and the netsh command isn't working for me. Has anyone else experienced this with a specific NIC or driver? Is there another method to force the MTU change?

Thanks for any advice!
 
Try setting the MTU to 9014 rather than 9000 using netsh.
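e.g. (assuming the interface is still "Ethernet 3"):

netsh interface ipv4 set subinterface "Ethernet 3" mtu=9014 store=persistent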
 
Yeah, I tried that also and I get the "parameter is incorrect" error. As said, I can't set it above 1500, not even 1501...
In PowerShell run this and post the output here:

Get-NetAdapter "adaptername" | Get-NetAdapterAdvancedProperty | Select-Object * | Format-Table

I'm doing this on my phone from memory and it may not be entirely accurate. Check if PS thinks the jumbo packet property is enabled. PS queries WMI.
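If the jumbo packet property shows as set but the MTU still reads 1500, you could also try setting it directly from PowerShell and then re-checking the interface MTU. The adapter name and the exact "Jumbo Packet" display value are driver-dependent, so treat these as placeholders:

Set-NetAdapterAdvancedProperty -Name "Ethernet 3" -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"
Get-NetIPInterface -InterfaceAlias "Ethernet 3" | Select-Object AddressFamily, NlMtu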
 
Oh wow, I fixed it!!

With the help of Claude AI. Basically I had to uninstall ExpressVPN, even though I have it disabled most of the time!!!

The issue was likely that ExpressVPN's network filter drivers were preventing the MTU change from being applied; once it was uninstalled, the existing jumbo frame settings took effect.
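For anyone who hits the same thing: one way to see what's bound to an adapter (protocols, filters, third-party VPN/firewall components) is something like this, with the adapter name as a placeholder:

Get-NetAdapterBinding -Name "Ethernet 3"

A third-party VPN or firewall component showing up in that list is a hint that something is sitting in the network stack and could be interfering with settings like the MTU.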
 
Quick question: if a 10Gb NIC says it uses a PCIe 3.0 x4 slot, I'm guessing you need that x4 to get the speeds - how does that translate to a PCIe 4.0 slot?
I had a quick look and couldn't find many suitable B850 boards; they all seem to be PCIe 3.0 x1.
So it may actually be cheaper to buy that Gigabyte B850 AI Top, which has 10GbE onboard.
 
Quick question: if a 10Gb NIC says it uses a PCIe 3.0 x4 slot, I'm guessing you need that x4 to get the speeds - how does that translate to a PCIe 4.0 slot?
I had a quick look and couldn't find many suitable B850 boards; they all seem to be PCIe 3.0 x1.
So it may actually be cheaper to buy that Gigabyte B850 AI Top, which has 10GbE onboard.
You can put it in a PCIe 4.0 slot and it will run at PCIe 3.0 x4 speeds.

I have done the exact same thing with the card I bought in this thread.


rp2000
 
Quick question: if a 10Gb NIC says it uses a PCIe 3.0 x4 slot, I'm guessing you need that x4 to get the speeds - how does that translate to a PCIe 4.0 slot?
I had a quick look and couldn't find many suitable B850 boards; they all seem to be PCIe 3.0 x1.
So it may actually be cheaper to buy that Gigabyte B850 AI Top, which has 10GbE onboard.
Yeah, I came across that and got that ASUS one as that's x2 PCIe 3.0.
 
Quick question: if a 10Gb NIC says it uses a PCIe 3.0 x4 slot, I'm guessing you need that x4 to get the speeds - how does that translate to a PCIe 4.0 slot?
I had a quick look and couldn't find many suitable B850 boards; they all seem to be PCIe 3.0 x1.
So it may actually be cheaper to buy that Gigabyte B850 AI Top, which has 10GbE onboard.
PCI-E versions (3.0, 4.0) are backwards compatible. The link is a combination of device and slot: they negotiate to the highest version both support. A 4.0 device in a 5.0 slot results in 4.0; a 4.0 device in a 3.0 slot results in 3.0, and so on.

The lane widths (x1, x4, x8, x16) all downsize: an x8 card, for example, can run at x1 or x4, but an x1 slot can't provide x4. Because PCI-E lanes now come from the CPU as well as the chipset, the versions and widths available are CPU-, board-, and configuration-dependent. Always check the motherboard manual to ascertain what your configuration will result in for PCI-E availability.
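For a rough sense of the numbers (approximate usable bandwidth after encoding overhead, so treat them as ballpark figures): PCIe 3.0 is about 0.985 GB/s per lane and PCIe 4.0 about 1.97 GB/s per lane. So PCIe 3.0 x4 is roughly 3.9 GB/s (~31.5 Gb/s), which is plenty for 10GbE; PCIe 3.0 x2 (~15.8 Gb/s) is still fine; PCIe 3.0 x1 (~7.9 Gb/s) falls just short; and PCIe 4.0 x1 (~15.8 Gb/s) is fine provided the card itself supports 4.0.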
 
Correct, it needs to go into a slot that has at least that number of lanes, i.e. x4, x8, x16.

They will also often run on reduced lanes; my X540 dual 10G NIC is an x8 card but runs fine at x4, which was fortunate as that was all I had on my old motherboard.
 
PCI-E versions (3.0, 4.0) are backwards compatible. The link is a combination of device and slot: they negotiate to the highest version both support. A 4.0 device in a 5.0 slot results in 4.0; a 4.0 device in a 3.0 slot results in 3.0, and so on.

The lane widths (x1, x4, x8, x16) all downsize: an x8 card, for example, can run at x1 or x4, but an x1 slot can't provide x4. Because PCI-E lanes now come from the CPU as well as the chipset, the versions and widths available are CPU-, board-, and configuration-dependent. Always check the motherboard manual to ascertain what your configuration will result in for PCI-E availability.
That's what I'm finding: the cheaper boards give you a fast x16 slot for the GPU and then the rest are all x1.
Spending more, even on a board with an x4 slot, sometimes means it knocks out an M.2 or similar. So spending an extra £100 on a board with 10GbE onboard might not be so expensive after all, and it would be tidier.
 
That's what I'm finding: the cheaper boards give you a fast x16 slot for the GPU and then the rest are all x1.
Spending more, even on a board with an x4 slot, sometimes means it knocks out an M.2 or similar. So spending an extra £100 on a board with 10GbE onboard might not be so expensive after all, and it would be tidier.
The problem with current AMD boards, at least, is the PCIe lane distribution: they're now favouring spending those lanes on USB ports instead!
 