iSCSI MPIO VMware

The HP implementation allows you to select the mode to use, and Microsoft NIC teaming will use 802.3ad if it's available at the switch; otherwise it falls back to a software XOR, from what I can see.
 
Interesting to know. I wasn't aware it was using the 802.3ad teaming protocol. I'll have to look at that.

NIC teaming on ESXi doesn't use that, I don't think. Based on what I've seen, there is no way to get more than 1 gigabit out of a vSwitch for a single stream. Even if you add 3 gigabit NICs to a VM port group with 10 guests, each guest will still use one individual gigabit link at a time, i.e. it won't give all 10 guests 3 gigabit. It just spreads guests across the links, so you might end up with (this is done automatically) 3 guests over NIC 1, 2 guests over NIC 2, and the remaining guests over NIC 3, even though they are all on the same VM port group.
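From what I understand, that pinning behaviour comes from the default load-balancing policy ("Route based on originating virtual port ID"), which maps each VM port to a single uplink. You can check which policy a vSwitch is using from the ESXi shell (vSwitch0 is just an example name, substitute your own):

```shell
# show the teaming/failover policy for a standard vSwitch
# (vSwitch0 is an example -- list yours with: esxcli network vswitch standard list)
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0
# a load-balancing value of "srcport" means each VM port is pinned to one
# uplink, so a single guest never exceeds one physical NIC's bandwidth
```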

http://kb.vmware.com/selfservice/mi...nguage=en_US&cmd=displayKC&externalId=2007467

This is the configuration I use for vMotion. It essentially utilises two gigabit links by creating two vmkernel ports on one vSwitch and then setting an active/standby and standby/active configuration. Any single vMotion will still only go over one of the gigabit links, though; this is mainly for redundancy, I think.
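For reference, that active/standby split can be set per port group from the ESXi shell. The port-group and vmnic names below are just examples, match them to your own setup:

```shell
# vMotion-1 runs on vmnic1 with vmnic2 as standby; vMotion-2 is the mirror
# image (all names here are examples)
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=vMotion-1 --active-uplinks=vmnic1 --standby-uplinks=vmnic2
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=vMotion-2 --active-uplinks=vmnic2 --standby-uplinks=vmnic1
```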
 
That makes sense with the higher bandwidths.

You hit similar limits with Hyper-V, but since they added Single Root I/O Virtualization (SR-IOV) and the NDIS 6.30 functionality, the limit has sort of gone away. However, it is still suggested to use a higher-bandwidth interface rather than bonding, as the bonding takes CPU time, and the higher the speed, the more CPU it will take.

Back to the original post though: if the RAID can handle 400 MB/s on writes, you would need at least four bonded gigabit NICs to max out the write speed (a gigabit link only carries roughly 110-120 MB/s in practice), which is unrealistic. The realistic choice is 10GbE.
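To put rough numbers on that (the ~115 MB/s per usable gigabit link is an estimate after protocol overhead):

```shell
# back-of-envelope: how many gigabit links to carry a 400 MB/s write?
TARGET=400    # MB/s the array can sustain (figure from the post above)
PER_LINK=115  # assumed usable MB/s per gigabit NIC after overhead
echo $(( (TARGET + PER_LINK - 1) / PER_LINK ))   # ceiling division -> 4
```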
 
This link might point you in the right direction. It's fairly old, so you'll need to confirm it's still relevant, but there are various comments suggesting it can improve performance.
I've never used iSCSI, only FC, but we have done very similar tuning to what's described there, and it gives measurable performance increases.

I think people have got derailed by the talk of LACP. If you were talking about server-to-client transfers that would make sense, but I'm pretty sure you're talking about your iSCSI storage, in which case MPIO is definitely the direction to be looking.
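For completeness, on ESXi the MPIO side is iSCSI port binding plus a round-robin path policy. A sketch only: the adapter name vmhba33, the vmk numbers, and the naa device ID are placeholders, so check your own with `esxcli iscsi adapter list` and `esxcli storage nmp device list`:

```shell
# bind two vmkernel ports (each backed by its own physical NIC) to the
# software iSCSI adapter so each becomes a separate storage path
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
# set the path selection policy for the LUN to round robin so I/O is
# spread across both paths (naa.xxxx is a placeholder device ID)
esxcli storage nmp device set --device=naa.xxxx --psp=VMW_PSP_RR
```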
 