iSCSI MPIO VMware

Hi all,

I'm just re-setting up my home lab. I have Server 2012 R2 Standard installed on hardware, with 5x 2TB drives in RAID-5 (I get around 400MB/s read/write from the array). I have an iSCSI virtual disk and target set up on the array, which I want to use to store VMs for my two ESXi servers (a PowerEdge 1950 and a 2950). Now, I have multiple NICs in all the boxes and I have followed guides on setting up MPIO, however I only get 94MB/s read/write from within a virtual machine stored on the array.
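For reference, this is roughly how the target side is set up on the 2012 R2 box; the paths, sizes, target name and initiator IQNs below are just placeholders for my lab, not the exact values:

Code:
# Windows Server 2012 R2 iSCSI Target Server sketch (iSCSI Target Server role installed)
# Create a VHDX-backed virtual disk on the RAID-5 array
New-IscsiVirtualDisk -Path "E:\iSCSIVirtualDisks\vmstore.vhdx" -SizeBytes 2TB

# Create a target and allow the two ESXi hosts' initiator IQNs
New-IscsiServerTarget -TargetName "esxi-lab" `
    -InitiatorIds "IQN:iqn.1998-01.com.vmware:pe1950","IQN:iqn.1998-01.com.vmware:pe2950"

# Map the virtual disk to the target
Add-IscsiVirtualDiskTargetMapping -TargetName "esxi-lab" -Path "E:\iSCSIVirtualDisks\vmstore.vhdx"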

I'm not sure what more I have to do. I have enabled MPIO on the target server and also configured it on the ESXi hosts so that I can see multiple paths to the device; it is also configured for round robin and all the network adapters say active.
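This is roughly how I've been checking it from the ESXi side with PowerCLI (the host name and the vendor filter for the Microsoft iSCSI target LUN are assumptions for my setup):

Code:
# PowerCLI sketch - check multipath policy and path state for the iSCSI LUN
Connect-VIServer -Server esx1.lab.local

$lun = Get-ScsiLun -VmHost (Get-VMHost "esx1.lab.local") -LunType disk |
       Where-Object { $_.Vendor -eq "MSFT" }           # the LUN from the 2012 R2 target

$lun | Select-Object CanonicalName, MultipathPolicy    # expecting RoundRobin
$lun | Get-ScsiLunPath | Select-Object Name, State     # expecting all paths Active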

Anything else I need to do to get MPIO working?

Thanks
 
OK. If I take it as read that you have gigabit connections but no LACP, then ~100MB/s is not far off what to expect with a standard load balancing / round robin policy.

VMware loses me at this point I'm afraid, but from a Server 2012 perspective try a different policy from the MPIO settings under Device Manager (I think it's still there).
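If you'd rather not dig through Device Manager, mpclaim from an elevated prompt will show and set the same thing (the disk number below is just an example):

Code:
# Check and change the load-balance policy per MPIO disk (run elevated)
mpclaim.exe -s -d          # list MPIO disks and their current LB policy
mpclaim.exe -s -d 0        # show the individual paths for disk 0

# Switch disk 0 to Round Robin (policy 2) if it is sitting on Fail Over Only (1)
mpclaim.exe -l -d 0 2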
 
Surely I could achieve those speeds using a single gigabit link? I have 3; isn't round robin with MPIO meant to utilise all 3 links and load balance across them?
 
Nope, divide it all by 8.

So the max on gigabit (not gigabyte) is about 125MB/s, and ~95MB/s once you account for overheads is about right for a single link.
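Rough numbers, assuming standard gigabit Ethernet and typical TCP/iSCSI overheads:

\[
\frac{1000\ \text{Mbit/s}}{8\ \text{bits/byte}} = 125\ \text{MB/s theoretical} \quad\Rightarrow\quad 125 \times (0.75\text{–}0.9) \approx 95\text{–}110\ \text{MB/s per link in practice}
\]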

Your links are fine, but what policy is set? It all depends on whether it's active-active etc.

You have the following policies to choose from:

Fail Over Only - Policy that does not perform load balancing. This policy uses a single active path, and the rest of the paths are standby paths. The active path is used for sending all I/O. If the active path fails, then one of the standby paths is used. When the path that failed is reactivated or reconnected, the standby path that was activated returns to standby.

Round Robin - Load balancing policy that allows the Device Specific Module (DSM) to use all available paths for MPIO in a balanced way. This is the default policy that is chosen when the storage controller follows the active-active model and the management application does not specifically choose a load-balancing policy.

Round Robin with Subset - Load balancing policy that allows the application to specify a set of paths to be used in a round robin fashion, and with a set of standby paths. The DSM uses paths from a primary pool of paths for processing requests as long as at least one of the paths is available. The DSM uses a standby path only when all the primary paths fail. For example, given 4 paths: A, B, C, and D, paths A, B, and C are listed as primary paths and D is the standby path. The DSM chooses a path from A, B, and C in round robin fashion as long as at least one of them is available. If all three paths fail, the DSM uses D, the standby path. If paths A, B, or C become available, the DSM stops using path D and switches to the available paths among A, B, and C.

Least Queue Depth - Load balancing policy that sends I/O down the path with the fewest currently outstanding I/O requests. For example, consider that there is one I/O that is sent to LUN 1 on Path 1, and the other I/O is sent to LUN 2 on Path 1. The cumulative outstanding I/O on Path 1 is 2, and on Path 2, it is 0. Therefore, the next I/O for either LUN will process on Path 2.

Weighted Paths - Load balancing policy that assigns a weight to each path. The weight indicates the relative priority of a given path. The larger the number, the lower ranked the priority. The DSM chooses the least-weighted path from among the available paths.

Least Blocks - Load balancing policy that sends I/O down the path with the least number of data blocks currently being processed. For example, consider that there are two I/Os: one is 10 bytes and the other is 20 bytes. Both are in process on Path 1, and both have completed on Path 2. The cumulative outstanding amount of I/O on Path 1 is 30 bytes. On Path 2, it is 0. Therefore, the next I/O will process on Path 2.

Source http://technet.microsoft.com/en-us/library/dd851699.aspx

Transfer rates will vary massively depending on which policy is selected.
 
Also, if you don't have a DSM configured, MPIO will almost certainly default to Fail Over Only and only use one link.

I have used PowerShell to set it to Round Robin mode.
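For reference, this is roughly what that looks like on a 2012 box acting as an iSCSI initiator with the Microsoft DSM (MPIO feature installed, run elevated); the RR value makes Round Robin the default for newly claimed devices:

Code:
# Let the Microsoft DSM automatically claim iSCSI devices for MPIO
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Make Round Robin the default load-balance policy for MSDSM-claimed devices
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR

# Confirm the setting
Get-MSDSMGlobalDefaultLoadBalancePolicy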

If by bonding you mean using more than one link to obtain a higher overall read/write speed, then yes.
 
Do your speed test from more than one VM at a time. You should see your ~gigabit throughput on each one, up to the point you saturate your network or disk IO.
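i.e. kick off the same benchmark in two or three VMs at the same time and add the results up. Something like diskspd in a Windows guest works (the file path and sizes here are just examples), or whatever benchmark you normally use:

Code:
# Run the same test simultaneously in each VM and compare the combined totals
# 10GB test file, 30 seconds, 64K blocks, 4 threads, 8 outstanding I/Os, 50% writes, caching off
.\diskspd.exe -c10G -d30 -b64K -t4 -o8 -w50 -Sh D:\iotest.dat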
 
MPIO is not worth the hassle. Even top VMware guys say there is no benefit to it. Just use NFS and create multiple datastores, i.e. if you have 2x gigabit for storage and you don't want redundancy, then create 2 datastores. VMware does not even support LACP over the network interfaces, never mind storage.
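If you go that route, creating the datastores is one line per export in PowerCLI (the host, NFS server addresses and export paths below are made up); pointing each datastore at a target IP on a different subnet is what keeps the traffic on separate NICs:

Code:
# PowerCLI sketch - one NFS datastore per storage NIC/export, no LACP involved
$vmhost = Get-VMHost "esx1.lab.local"
New-Datastore -Nfs -VMHost $vmhost -Name "nfs-ds1" -NfsHost 192.168.10.1 -Path "/export/ds1"
New-Datastore -Nfs -VMHost $vmhost -Name "nfs-ds2" -NfsHost 192.168.20.1 -Path "/export/ds2"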
 
I have to agree. If it's raw throughput you need then you need to look at LACP/bonding/NIC teaming as an easier, better option.

Server 2012 can do NIC teaming without any requirement or dependency on the switch itself, but obviously if you can create a LACP group on the switch this would still be the recommended way.
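On the 2012 side a switch-independent team is a single cmdlet; the team and adapter names below are just examples:

Code:
# Server 2012 native NIC teaming, no switch configuration required
New-NetLbfoTeam -Name "StorageTeam" -TeamMembers "Ethernet 2","Ethernet 3" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts

# If the switch supports it, an LACP team instead:
# New-NetLbfoTeam -Name "StorageTeam" -TeamMembers "Ethernet 2","Ethernet 3" -TeamingMode Lacp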
 
So it doesn't support MPIO very well and doesn't support LACP very well.

Erm, what does it support for bonding/bandwidth enhancement then?
 
If you speak to VMware people they still suggest against using it, even though it says it's supported, and I think you have to have Enterprise licensing and a dvSwitch. The NIC teaming is just passive failover; it is not joining two 1 gigabit links into a two gigabit link.
 
The NIC teaming is just passive failover; it is not joining two 1 gigabit links into a two gigabit link.

No, it's not.

What is NIC Teaming?

A solution commonly employed to solve the network availability and performance challenges is NIC Teaming. NIC Teaming (aka NIC bonding, network adapter teaming, Load balancing and failover, etc.) is the ability to operate multiple NICs as a single interface from the perspective of the system. In Windows Server 2012, NIC Teaming provides two key capabilities:

1. Protection against NIC failures by automatically moving the traffic to remaining operational members of the team, i.e., Failover, and
2. Increased throughput by combining the bandwidth of the team members as though they were a single larger bandwidth interface, i.e., bandwidth aggregation.

Source: TechNet http://blogs.technet.com/b/privatecloud/archive/2012/06/19/nic-teaming-in-windows-server-2012-brings-simple-affordable-traffic-reliability-and-load-balancing-to-your-cloud-workloads.aspx
 
http://kb.vmware.com/selfservice/mi...nguage=en_US&cmd=displayKC&externalId=2051307

http://wahlnetwork.com/2014/03/25/avoid-lacp-iscsi-port-binding-multi-nic-vmotion/

There are some more articles. I get my info mainly from #vmware on Freenode, where we have discussed LACP a few times, and often people say it causes more problems than it's worth.

There are some more recent articles referring to 10 gigabit connections and using them for storage and network. Technically I think it is possible with 5.5, and if your setup can't avoid it (if you have 10 gigabit) then it's probably worth the effort. If you have just gigabit NICs then everyone seems to recommend assigning two NICs to a VM port group but not setting up LACP.

I have my vSwitches all set up with two NICs, but none of them are LACP, roughly like the sketch below.
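PowerCLI sketch (the host, switch and vmnic names are placeholders); both uplinks are active, but there is no LACP anywhere:

Code:
# Standard vSwitch with two active uplinks, no LACP
$vmhost = Get-VMHost "esx1.lab.local"
$vs = New-VirtualSwitch -VMHost $vmhost -Name "vSwitch1" -Nic vmnic2,vmnic3
New-VirtualPortGroup -VirtualSwitch $vs -Name "VM Network 2"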

In VMware ESXi, adding two NICs to a vSwitch will never utilise more than 1 gigabit of traffic unless you enable LACP on the distributed switch (which needs Enterprise licensing) and on the physical switch ports. The term NIC teaming, in HP servers and in general, is used to combine the two NICs at a software level, but that is still not LACP, as the switches aren't configured for LACP.

More info about 10 gigabit with LACP and 5.5: https://connect.nimblestorage.com/servlet/JiveServlet/downloadBody/1242-102-1-1173/LACP.pdf
 