Help me check my VM/SAN throughput is optimal!

Hi all,

Following Zarf's comment here in the Microserver thread, I've been doing some testing on my SAN and VMware environment using IPerf between two virtual machines.

I have a NetApp SAN with 28 x 15k SAS drives in total across the two controllers. It services 3 x VMware ESX 4.0 hosts, with virtual machines spread evenly over the SAN via NFS volumes.

What sort of throughput do you think I could expect between two virtual machines on the same SAN volume and the same ESX host, and is IPerf actually the best way to test this? I want to make sure I'm getting the most from my setup.
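In case it matters, the test itself is nothing fancy - roughly the below, with the IP just an example:

iperf -s                           # on the first VM, listening
iperf -c 192.168.10.20 -t 30 -i 5  # on the second VM: 30 second run, report every 5 seconds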
 
Iperf tests network throughput - it's got nothing to do with storage. If you want to see how fast two hosts can fling data at each other, Iperf is great for that - two VMs on the same host should see near gigabit speeds through Iperf.

To measure IOPS performance, use IOMeter on the guest VMs. That'll tell you how fast the guest can read/write from the storage.
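If you just want a quick and dirty sequential number from inside a Linux guest before you get IOMeter set up, something along these lines will do (the path and sizes are just examples; oflag=direct/iflag=direct bypass the guest page cache so you're actually hitting the storage):

dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct   # sequential write of a 1GB file, direct I/O
dd if=/tmp/ddtest of=/dev/null bs=1M iflag=direct              # sequential read of the same file
rm /tmp/ddtest                                                 # tidy up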
 

Thanks for the quick response. I think you're right that I need to test more than one thing here, but

"two VMs on the same host should see near gigabit speeds through Iperf"

that's exactly my problem - I'd have thought it should be at least gigabit, but I'm not getting that. Even with nearly a 1MB window on IPerf I'm only getting about 800Mb/s. I'm going to try IOMeter now.
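For reference, the exact client line is roughly the below (IP is just an example) - I might also try a few parallel streams to see if that gets any closer to line rate:

iperf -c 192.168.10.20 -w 1M -t 30       # single stream, ~1MB TCP window
iperf -c 192.168.10.20 -w 1M -t 30 -P 4  # same again with 4 parallel streams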
 
I'm guessing you're accessing the SAN from within the VM not a host HBA/Datastore?

There could be several things affecting throughput. Key questions are:

How many NICs do you have in each VM?
How many NICs do you have in the vHost? and How many per vSwitch?
Do you have jumbo frames enabled on both SAN, SAN switching and NICs?
800Mbit isn't bad for some types of transfer over gigabit adaptors.
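If you're not sure of any of that, you can list what the host actually has from the service console - on ESX 4.0 it's something like this (going from memory, so double-check the syntax):

esxcfg-nics -l     # physical NICs, link speed and duplex
esxcfg-vswitch -l  # vSwitches, port groups, uplinks and MTU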
 
I'm guessing you're accessing the SAN from within the VM not a host HBA/Datastore? I am

There could be several things affecting throughput. Key questions are:

How many NICs do you have in each VM? 1 vNIC per VM.
How many NICs do you have in the vHost? and How many per vSwitch? 3 physical NICs per vHost for the production network, 4 for the storage network, and 3 more for other stuff (DMZ, management, etc.).
[attached screenshot: vswitches.png - vSwitch configuration]

Do you have jumbo frames enabled on both SAN, SAN switching and NICs? No
800Mbit isn't bad for some types of transfer over gigabit adaptors.
 
Well, ideally you should have two virtual NICs per VM - one in the storage vSwitch and one in the production vSwitch - to keep your storage traffic and production traffic separated and stop them competing with each other.

It's also worth running jumbo frames of at least 4k, as that's going to be your minimum I/O size to storage, so it makes sense to get as much of each I/O into a single frame as you can - all of it, ideally. At a standard 1500-byte MTU a 4k read/write gets split across three frames, which means three times the protocol overhead. Most kit that supports jumbo frames goes up to 9k.

You should see an improvement with that.
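On ESX 4.0 you have to set jumbo frames from the service console rather than the vSphere client - from memory it's something along these lines (the vSwitch name, port group, interface names and addresses below are just examples, and you have to remove and re-add the vmkernel port to change its MTU, so do it out of hours):

# on each ESX host (service console)
esxcfg-vswitch -m 9000 vSwitch2                                            # storage vSwitch up to 9000 MTU
esxcfg-vmknic -d "VMkernel-Storage"                                        # remove the existing storage vmkernel port
esxcfg-vmknic -a -i 10.0.0.11 -n 255.255.255.0 -m 9000 "VMkernel-Storage"  # re-add it with a 9000 MTU
vmkping -d -s 8972 10.0.0.50                                               # don't-fragment ping to the filer to prove jumbos work end to end

# on the NetApp
ifconfig e0a mtusize 9000

You'll want the physical switch ports on the storage network set for jumbo frames as well, otherwise that vmkping will fail.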
 