I've never fully seen 10Gbit on mine, and it's always on Windows that I see speed issues. Switching to jumbo frames isn't an ideal option either: any traffic to the internet, or to devices on your network that don't support them, will need packet fragmentation.
A loopback test will show what the CPU can handle, as you are only exercising the network stack rather than the NIC.
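To illustrate the idea, here's a minimal sketch of a loopback throughput test in Python: it pushes data over 127.0.0.1, so the number it reports is bounded by the CPU and network stack, never the NIC. (This is an assumption-laden toy, not a replacement for iperf3 or NTttcp; the function name and sizes are made up for the example.)

```python
# Minimal loopback throughput sketch: traffic never leaves the machine,
# so the measured rate reflects CPU/network-stack limits, not the NIC.
import socket
import threading
import time

def run_loopback_test(total_bytes=64 * 1024 * 1024, chunk=1 << 20):
    """Send total_bytes over a localhost TCP socket; return Gbit/s."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))  # port 0 = let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def sink():
        # Accept one connection and drain it until the sender closes.
        conn, _ = srv.accept()
        while conn.recv(chunk):
            pass
        conn.close()

    t = threading.Thread(target=sink)
    t.start()

    cli = socket.socket()
    cli.connect(("127.0.0.1", port))
    buf = b"\x00" * chunk
    start = time.perf_counter()
    sent = 0
    while sent < total_bytes:
        cli.sendall(buf)
        sent += chunk
    cli.close()
    t.join()
    srv.close()

    elapsed = time.perf_counter() - start
    return sent * 8 / elapsed / 1e9  # bits per second -> Gbit/s

if __name__ == "__main__":
    print(f"{run_loopback_test():.1f} Gbit/s over loopback")
```

Single-threaded Python adds its own overhead, so real tools will post higher numbers, but the principle is the same: if loopback can't do 10Gbit, the wire never will.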
This is what I could get on Linux.
The same test and hardware on Windows could barely edge out 10Gbit. (Yes, I tried NTttcp and multiple instances; all came out the same.)
This is the best I could get out of mine during real-world testing, a shade over 7Gbit from a local NVMe SSD to my server running 7x Western Digital SE hard disks in RAID 5 (which is the limiting factor here).
Some good info can be found here on checks you can do in PowerShell. Get-NetAdapterHardwareInfo will tell you the PCIe version and link width the card is actually negotiating, which is quite useful.