Why can't I saturate my gigabit link with iperf?

I was moving some files around over the network to my FreeNAS server yesterday and noticed slower than normal transfers.

After running iperf between the FreeNAS server and my PC, I'm seeing 500-600Mbits/sec transfer rates, along with ~56% NIC usage on the Windows PC in Task Manager.

If I run the same iperf command from two separate Windows machines at the same time, they each push around 450Mbits/sec to the server, so I assume it's safe to say the FreeNAS server is more than capable of fully utilizing its gigabit connection. I've tried using jumbo frames, which makes no difference. The Windows machines are using onboard Realtek NICs and the FreeNAS server has an Intel PRO/1000 GT.
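For reference, the setup is just a stock iperf listener on the FreeNAS box (192.168.1.250) with a client on each Windows machine, roughly:

Code:
# On the FreeNAS server: TCP listener on the default port (5001)
iperf -s

# On each Windows machine: stream to the server
iperf -c 192.168.1.250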

Is this simply a case of the Realtek NICs being crap?
 
Yeah, all drivers are the latest. I've tried directly connecting the two machines, which made no difference either, along with a fresh install of Windows.
 
What packet sizes are you using with iperf? Are you using TCP or UDP?

I generally find the limiting factor when saturating links is the packet rate, so if you up the packet size you can push a higher bit rate at the same packet rate. Secondly, use UDP; that way TCP's feedback loop won't cause the source to throttle back its sending rate.
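One thing that catches people out: for a UDP test the listener has to be running in UDP mode as well, i.e. something like:

Code:
# Server side, plain iperf2: UDP listener on the default port
iperf -s -u

# Client side: UDP stream aimed at gigabit line rate
iperf -c <server-ip> -u -b 1000m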
 
By packet size do you mean the window size? If so, 64k.

The iperf command I'm using is:

Code:
iperf -c 192.168.1.250 -w 64k -t 15 -i 1

edit: Two parallel TCP streams using the -P 2 flag ups throughput to around 650Mbits/sec.
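i.e. the same command as above with the parallel flag added:

Code:
iperf -c 192.168.1.250 -w 64k -t 15 -i 1 -P 2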

Running UDP with the following command:

Code:
iperf -c 192.168.1.250 -u -b 1000m -t 15 -i 1

Gives me 850Mbits/sec average with a peak NIC usage of ~89%.
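For what it's worth, the client-side figure is only what was sent; running the listener with interval reports shows the rate and any packet loss actually seen at the FreeNAS end, something like:

Code:
# On the server: per-second receive rate, jitter and loss
iperf -s -u -i 1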
 
I'm not sure what the default packet size is, but on the UDP stream you can use the '-l' switch to define the size:

Code:
iperf -c 10.8.1.120 -u -b 1000m -t 15 -i 1 -l 1500

would send 1500-byte datagrams. If you are using jumbo frames you can take this higher. Typically the network card has to do less work (in terms of packets per second) if you use a larger packet size.


For example, using a 20-byte packet size:

Code:
root@hici:~# iperf -c 10.8.1.120 -u -b 1000m -t 15 -i 1 -l 20
WARNING: option -l has implied compatibility mode
------------------------------------------------------------
Client connecting to 10.8.1.120, UDP port 5001
Sending 20 byte datagrams
UDP buffer size: 110 KByte (default)
------------------------------------------------------------
[  3] local 10.8.1.120 port 48960 connected with 10.8.1.120 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec  5.24 MBytes  43.9 Mbits/sec
[  3]  1.0- 2.0 sec  5.27 MBytes  44.2 Mbits/sec


Using a 1500-byte size:

Code:
root@hici:~# iperf -c 10.8.1.120 -u -b 1000m -t 15 -i 1 -l 1500
------------------------------------------------------------
Client connecting to 10.8.1.120, UDP port 5001
Sending 1500 byte datagrams
UDP buffer size: 110 KByte (default)
------------------------------------------------------------
[  3] local 10.8.1.120 port 52333 connected with 10.8.1.120 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec   130 MBytes  1.09 Gbits/sec

The VM my test box runs in doesn't support jumbo frames, but you get the idea.


Update: the default size is 1470, so try increasing this to something closer to your jumbo frame size.
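For a 9000-byte jumbo MTU, the largest datagram that still fits in one frame is 8972 bytes (9000 minus 20 for the IPv4 header and 8 for the UDP header), so something like:

Code:
iperf -c 10.8.1.120 -u -b 1000m -t 15 -i 1 -l 8972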
 