10Gb PCIe NIC

Why? Jumbo frames don't really give any benefit for normal use, with the exception of fringe use cases such as block-level storage. They had their time in 2010 or whatever, with less capable NICs, slower CPUs etc., but you can saturate 25 Gbps NICs using default frame sizes.
Why? Because without enabling it, I could not get anywhere near 10GbE speeds. That's why. You need 9000 set as your MTU. I can provide lots of sources that say this, and it's backed by my own experience.

You honestly think I just set my MTU to 9000 for fun? Nope! I wasn't getting 10GbE speeds without it...
 
ChrisD. definitely knows what he's talking about - don't doubt that. He is right; you shouldn't need jumbo frames to saturate a 10Gb link in modern compute environments. What speeds were you getting without jumbo frames enabled and how were you measuring throughput?

Some food for thought: your ISP likely won't support jumbo frames and you will end up with packet fragmentation on your WAN connection. It depends on what your uses for 10Gb are, really.
 
Without jumbo frames set I was getting around 3-4Gb/s.

Now I more or less get the full bandwidth.
Use case is I have a TrueNAS box and I want to transfer and receive files to and from it. That's also set to jumbo frames.
 
It might be that @jonneymendoza has a hardware configuration that jumbo frames can still make a difference on, much like how the USB-attached device threw the OP off. (Thinking of a different thread here, my bad.)

But I will also echo that jumbo frames likely aren't necessary these days if your network components comprise more modern hardware. My current (onboard) 10Gb/s network, which throws over 1.1GB/s (~9Gb/s), has jumbo frames disabled. But when I was first using the NICGIGA 10Gb/s card I mentioned in a prior post, I did need to mess around a bit with jumbo frames: the card was limited by the PCIe slot I put it in, which sat on a shared PCIe lane bus (DMI 2.0), so it couldn't push the full 10Gb/s with the storage on the same bus section. Jumbo frames did help there and gained me around another 1Gb/s (up from around 5.5-6Gb/s to around 6-7Gb/s).
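For anyone curious, the back-of-the-envelope maths on why a shared DMI 2.0 link gets tight looks roughly like this (a rough sketch, assuming DMI 2.0 is more or less equivalent to a PCIe 2.0 x4 link):
Code:
# Rough numbers for why a 10Gb/s NIC on a shared DMI 2.0 link can fall short
# (assumes DMI 2.0 ~= PCIe 2.0 x4; real-world figures will be a bit lower).
lanes = 4                      # DMI 2.0 is roughly a PCIe 2.0 x4 link
gt_per_lane = 5.0              # PCIe 2.0: 5 GT/s per lane
encoding_efficiency = 8 / 10   # PCIe 2.0 uses 8b/10b encoding

dmi_gbps = lanes * gt_per_lane * encoding_efficiency   # usable Gbit/s, shared
nic_gbps = 10.0                                        # NIC line rate

print(f"DMI 2.0 usable bandwidth: ~{dmi_gbps:.0f} Gb/s (shared with storage)")
print(f"Headroom left for storage while the NIC runs flat out: "
      f"~{dmi_gbps - nic_gbps:.0f} Gb/s, before protocol overhead")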
 
I saturate 10 Gbps using the default 1500 bytes to/from TrueNAS; if you are not reaching that, something along the way is slowing it down. It could be an old/bad NIC (I'm looking at you, Realtek), a bad driver, an old CPU, etc. What I'm getting at is that whilst jumbo frames are clearly helping in your case, ultimately you are masking a problem somewhere.

What could have been happening before is that your VPN software was clamping your connection to below 1500 bytes, and as a result you were getting fragmentation when doing the tests. Do a tcpdump or Wireshark capture if you're interested, or just leave it as is since it's working for you. But generally a 9k MTU on client devices can cause unwanted and unpredictable issues.
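If you'd rather not break out tcpdump or Wireshark straight away, a rough alternative check is to send UDP datagrams with the Don't-Fragment flag set and see which sizes go out cleanly. A minimal, Linux-only sketch (the NAS address/port and the raw Linux option numbers are assumptions for illustration):
Code:
import socket

IP_MTU_DISCOVER = 10   # Linux socket option numbers from <linux/in.h>
IP_PMTUDISC_DO = 2     # always set the Don't-Fragment bit, never fragment locally
IP_MTU = 14            # read back the kernel's cached path MTU for this socket

NAS = ("192.168.1.50", 5201)   # hypothetical address/port of the TrueNAS box

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
s.connect(NAS)

# 1472 and 8972 byte payloads correspond to 1500 and 9000 byte IP packets
# once the 20-byte IP and 8-byte UDP headers are added.
for payload in (1472, 8972):
    try:
        s.send(b"\x00" * payload)
        print(f"{payload + 28}-byte packet accepted without local fragmentation")
    except OSError as exc:
        # EMSGSIZE here means the packet exceeds the interface or cached path MTU
        print(f"{payload + 28}-byte packet not sent: {exc}")

print("Kernel path MTU estimate:", s.getsockopt(socket.IPPROTO_IP, IP_MTU))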
Some food for thought: your ISP likely won't support jumbo frames and you will end up with packet fragmentation on your WAN connection. It depends on what your uses for 10Gb are, really.
PMTUD should take care of that and negotiate a suitable L2 frame size (it's a bit more complex than that, but I don't like writing essays on forums!). Also, most routers will use MSS clamping, which limits the connection to fit a defined size, usually 1492 bytes in the UK where PPPoE is in use.
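For anyone wondering where those numbers come from, a quick worked example (IPv4/TCP, assuming the standard 8 bytes of PPPoE overhead):
Code:
# Where the usual UK PPPoE numbers come from (worked example, IPv4/TCP).
ethernet_mtu = 1500
pppoe_overhead = 8          # 6-byte PPPoE header + 2-byte PPP protocol ID

wan_mtu = ethernet_mtu - pppoe_overhead      # 1492: the MTU figure quoted above
ip_header = 20
tcp_header = 20
mss = wan_mtu - ip_header - tcp_header       # 1452: what MSS clamping advertises

print(f"PPPoE WAN MTU: {wan_mtu} bytes, clamped TCP MSS: {mss} bytes")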
 
"Should" is the word that never makes me comfortable... I've been in too many situations where some spanner sets jumbo frames on every interface on the box and then we can't understand why that node is doing weird ****! :D
 
Hi. Did you read my initial post above? I tried two different NICs and provided my system's details. My CPU is certainly not old lol.

My friend, who is a network engineer, was helping me too, and he said that yeah, for 10GbE you should set the MTU higher than 1500.

Plenty of sources on the web have also said the same.

You are the first person I have come across to say otherwise though.
 
Jumbo frames increase the payload available in each frame. Because frames are larger, fewer packets are generated and transmitted. This reduces load on the CPU, as it doesn't need to generate as many packets, and on the interface card, as it doesn't need to transmit as many packets.

Increasing payload size reduces compute requirements - it doesn't actually increase throughput. If it does increase throughput, something is benefitting from that reduced compute overhead.
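To put rough numbers on that (a simple illustration, counting only IP and TCP headers and ignoring Ethernet framing overhead):
Code:
# Packet rate needed to carry 10 Gb/s of TCP payload at each MTU.
link_gbps = 10
payload_bytes_per_sec = link_gbps * 1e9 / 8

for mtu in (1500, 9000):
    payload_per_packet = mtu - 40            # minus IP (20) and TCP (20) headers
    pps = payload_bytes_per_sec / payload_per_packet
    print(f"MTU {mtu}: ~{pps / 1e6:.2f} million packets/sec")
# Roughly 0.86 Mpps at 1500 vs 0.14 Mpps at 9000: same throughput, far less per-packet work.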

I will comment that Windows isn't great at utilising 10Gb and upwards throughput without the most optimal configuration.

As for whether it's possible: I've personally maxed out Brocade, Intel, and Mellanox 10Gb and 25Gb cards using a 1500 MTU back in my engineering days. Even VMs with paravirtual NICs can top out 10Gb on a 1500 MTU if it's just transferring files. When you start really amping up the PPS requirement, however, that's when the likes of jumbo frames, RDMA, and SR-IOV can come together into a wonderful harmony.
 
Fwiw I've never needed jumbo frames to max out 10Gb.

My experience has been limited to Windows Server 2008R2 and newer, with either Intel or Mellanox cards.


The last time I remember actually using jumbo frames was back in the 98se/win 2000 days in order to max out gigabit on slower CPUs.
 
Hi. Did you read my initial post above? I tried two different NICs and provided my system's details. My CPU is certainly not old lol.
I did, briefly. The X520 is over 10 years old and from memory lacks some of the more modern capabilities which are crucial for high network throughput. Asus are a joke in the networking world. You didn't mention your TrueNAS hardware, or if you did I missed it.
My friend, who is a network engineer, was helping me too, and he said that yeah, for 10GbE you should set the MTU higher than 1500.
I'm happy for him. If we're playing top trumps, I write white papers, design documentation, and guide engineers in some of the largest IT estates in the world on how to configure networks, storage and virtual infrastructure. Moving packets, HPC, and making fast IT go faster is kinda my thing.
Plenty of sources on the web have also said the same.
I forgot that the internet is never wrong, or outdated, or regurgitated information that is (re)posted without proper research.
You are the first person I have come across to say otherwise though.
Apart from the others in this thread for a start? And from a quick search countless posts on TrueNAS forums (including by their own devs/employees) stating it's not really required.
 
Which card do you consider new and up to date for 10Gb these days?

My friend may not have written white papers about networks, but he does work for a multi-billion dollar company and is one of the lead engineers in his division, if we're chalking up points here.

But as I said, I've looked at various sources all pointing out the same thing, and that is to increase the MTU.

Apologies for trusting multiple sources vs the one source of OcUK here... My fault for trusting and listening to the heaps of advice from multiple sources.
 
Then I'm not sure what is going on, or why multiple places I've read all said I need to enable jumbo frames.

Don't think I just enabled it for fun. I just plugged it all in, ran some benchmarks and never got close to 10Gb, and when I asked lots of people and checked lots of places, they all said to enable jumbo frames.
 
What exactly were you using to test the network performance of your 10Gb network/card? File transfer? iPerf? Something else?

File transfers require both sides and the network itself to support 10Gb/s (~1GB/s) to see those speeds. If you are rocking HDDs in a NAS, with encryption, that could be a reason why you're only seeing lower throughput. I normally test with RAM disks on both sides, or with NVMe or SATA SSDs in RAID 0 (or equivalent), to avoid any bottleneck slowing either side down.
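If iPerf isn't to hand, something like this crude memory-to-memory sketch does a similar job: nothing touches disk, so it only exercises the network path. The script name, port and IP are made up, and there's no TCP window tuning, so treat the result as a rough floor rather than a definitive benchmark:
Code:
import socket, sys, time

PORT = 5001                         # arbitrary test port
CHUNK = b"\x00" * (1 << 20)         # 1 MiB buffer, sent repeatedly straight from RAM
DURATION = 10                       # seconds of sending

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        total, start = 0, time.time()
        while True:
            data = conn.recv(1 << 20)
            if not data:
                break
            total += len(data)
        secs = time.time() - start
        print(f"Received {total / 1e9:.2f} GB in {secs:.1f}s "
              f"= {total * 8 / secs / 1e9:.2f} Gb/s")

def client(host):
    with socket.create_connection((host, PORT)) as conn:
        end = time.time() + DURATION
        while time.time() < end:
            conn.sendall(CHUNK)

if __name__ == "__main__":
    # e.g. "python net_check.py server" on the NAS end,
    #      "python net_check.py client 192.168.1.50" on the desktop (made-up name/IP)
    client(sys.argv[2]) if sys.argv[1] == "client" else server()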
 
iPerf mainly.

My NAS has two pools: one with two mirrored pairs of NVMe drives for fast data access (this is what I also use to test speeds), and another pool with 7 HDDs, two of them as redundancy.

I tested using iPerf and by sending and receiving files to/from the NVMe pool.

Out of the box, with no MTU tweaking, I was not getting even half of 10GbE speed.

So what I did was use Google, even AI, and I had my mate spend hours on a voice call with me, screen-sharing my system, to try and figure out what the issue was.

It's been an issue for the last couple of months, but it's only recently that I decided to revisit it and post my issue here and in other forums with more context and updates on all the things I tried. And you know what helped me find the issue?

Claude AI. It helped me find out that uninstalling ExpressVPN solved it, as somehow it was overriding the MTU size I was trying to set on the NIC.

I can share past commands and outputs if you want, to show you.
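For anyone who hits the same thing, a quick way to check whether a VPN client or anything else has quietly changed an interface's MTU is to list them all. A rough sketch, assuming the third-party psutil package is installed (pip install psutil):
Code:
import psutil

# Print every interface and its current MTU; flag anything that isn't
# one of the values you expect (1500 default, 9000 jumbo).
for name, stats in psutil.net_if_stats().items():
    flag = "  <-- not the usual 1500/9000" if stats.mtu not in (1500, 9000) else ""
    print(f"{name:<30} MTU {stats.mtu}{flag}")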
 