Hi,
I'm wondering what the best practices are for increasing bandwidth on a VMware virtual machine network, for traffic between VMs (and possibly vMotion too). At present our iSCSI traffic is nicely balanced, but our VM traffic is not.
I suppose our ultimate goal would be to have 4 NICs configured for the virtual machine network on each hypervisor. What we would then like to see is a VM on esxi1 with a 10 Gbit virtual NIC talking to a VM on esxi2, also with a 10 Gbit virtual NIC, at 4 Gbit/s (the combined 4 x 1 Gbit NICs on the hypervisor). Is this possible?
Google searches have turned up loads of articles, which I've been reading, but none have really explained this clearly.
Our setup is as follows.
OK, so we have multiple ESXi hypervisors, each with ten 1-gigabit NICs. These use Dell EqualLogic SANs for storage, with 4 iSCSI NICs.
We have set it all up with iSCSI MPIO as per the Dell best practices, and looking at the NIC statistics on the SAN we can see that traffic is evenly distributed across all the EqualLogic NICs, so we're confident we have 4 Gbit/s of iSCSI bandwidth available rather than 1 Gbit/s, and it's all balanced nicely.
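For reference, this is how I picture the round-robin MPIO behaviour we're seeing on the iSCSI side. A minimal Python sketch, with the path names and I/O count invented for illustration (this isn't the actual EqualLogic/ESXi code):

```python
from itertools import cycle
from collections import Counter

# Hypothetical iSCSI paths: one per host NIC bound to the software initiator.
paths = ["vmk1", "vmk2", "vmk3", "vmk4"]

# Round-robin MPIO: each I/O goes down the next path in turn, so load
# evens out across all four 1 Gbit links regardless of how many flows exist.
next_path = cycle(paths)
usage = Counter(next(next_path) for _ in range(10_000))

for path, ios in sorted(usage.items()):
    print(f"{path}: {ios} I/Os")   # 2500 each, i.e. evenly distributed
```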
The question is, how do we balance the VM traffic just as nicely? We obviously have 6 NIC ports left on each hypervisor. Right now, for example, if we copy a file from a VM on esxi1 to a VM on esxi2, the copy runs at 1 Gbit/s. But if we simultaneously copy another file from another VM on esxi1 to another VM on esxi2, each copy only runs at 500 Mbit/s, since both copies go out over the same gigabit NIC on the hypervisor, which bottlenecks them.
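As far as I can tell, that's because the default "route based on originating virtual port ID" teaming pins each vNIC to a single uplink, so two busy VMs can land on the same 1 Gbit NIC. A rough Python sketch of how I understand that pinning (the uplink names and port numbers are made up for illustration):

```python
# Default vSwitch teaming ("route based on originating virtual port ID"):
# each VM's vNIC is pinned to one uplink chosen from its virtual port
# number, so a single VM never exceeds one physical NIC, and two busy
# VMs can land on the same 1 Gbit uplink and halve each other's throughput.
uplinks = ["vmnic4", "vmnic5", "vmnic6", "vmnic7"]   # hypothetical names

def uplink_for(virtual_port: int) -> str:
    return uplinks[virtual_port % len(uplinks)]

vms = {"vm-a": 8, "vm-b": 12}   # invented port numbers; 8 % 4 == 12 % 4
for vm, port in vms.items():
    print(f"{vm} -> {uplink_for(port)}")   # both pinned to vmnic4
```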
I see that VMware 5.1 supports LACP, and our switches are managed and will also support it, so am I best off creating a LACP group on each hypervisor with, say, 4 NICs? Would this then give 4 Gbit/s of bandwidth for the virtual machine network between hypervisors?
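From the reading I have done so far, even with LACP the physical uplink is chosen per flow by hashing the source and destination addresses, so a single VM-to-VM stream would still be capped at one 1 Gbit link, with the 4 Gbit/s only available in aggregate across multiple flows. Here's a little Python sketch of an IP-hash style selection as I understand it (the XOR-and-modulo scheme and all the addresses are my own illustration, not VMware's exact code). Is this understanding right?

```python
import ipaddress

uplinks = ["vmnic4", "vmnic5", "vmnic6", "vmnic7"]   # hypothetical LAG members

def ip_hash_uplink(src: str, dst: str) -> str:
    # "Route based on IP hash" style selection: XOR the two addresses,
    # then take the result modulo the number of uplinks in the LAG.
    s = int(ipaddress.ip_address(src))
    d = int(ipaddress.ip_address(dst))
    return uplinks[(s ^ d) % len(uplinks)]

# One flow between two VMs always hashes to the same link (1 Gbit cap)...
print(ip_hash_uplink("10.0.0.11", "10.0.0.21"))
print(ip_hash_uplink("10.0.0.11", "10.0.0.21"))   # same uplink every time

# ...but several flows between different address pairs can spread out.
for dst in ("10.0.0.21", "10.0.0.22", "10.0.0.23", "10.0.0.24"):
    print(dst, "->", ip_hash_uplink("10.0.0.11", dst))
```

If that's accurate, I assume two simultaneous copies between different VM pairs would spread across the LAG, but a single copy would still run at 1 Gbit/s. Is that right, and is LACP still the best option here?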