VMware vSwitch and physical uplinks.

We're trying to improve our storage connections in VMware.

We've had a consultant in who's given us a design to implement.

We've got 5 ESXi 5.5 hosts, with 4 10GbE ports each.

The advice is to split our traffic into 4 VLANs on different subnets - NFS, iSCSI_A, iSCSI_B, and vMotion. Seems sensible to me - it segregates the traffic and allows MPIO for iSCSI.
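On the MPIO side, once the two iSCSI VMkernel ports exist they get bound to the software iSCSI adapter. A rough sketch of the binding step - the names vmk2, vmk3, and vmhba33 are assumptions, so substitute your own (check with esxcli iscsi adapter list and esxcli network ip interface list):

```shell
# Bind each iSCSI VMkernel port to the software iSCSI adapter for MPIO.
# vmk2/vmk3/vmhba33 are placeholder names - adjust for your hosts.
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk3

# Rescan so both paths are discovered.
esxcli storage core adapter rescan --adapter=vmhba33
```

With both portals bound, each LUN should show two paths and you can set the round-robin path policy per device.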

The bit I'm struggling with is the physical uplink port config and the standby links. The design states:

Port 1 - NFS Active, vMotion Standby
Port 2 - iSCSI_A
Port 3 - iSCSI_B
Port 4 - vMotion Active, NFS Standby

So I need 4 VMkernel ports (vmk's) and 4 vSwitches, but 2 of the physical uplinks would need to be present on 2 vSwitches so I can set up the standby links. I can't configure this with standard vSwitches, because once an uplink is assigned to a vSwitch it disappears from the list for new vSwitches. If I put ports 1 and 4 on the same vSwitch with the NFS and vMotion vmk's, I get a team that will work but doesn't give the true traffic separation we're after.

It appears that to create this config we would need to use distributed vSwitches. One of my colleagues is against this because our hosts are not identical, so the vmnics have different numbers. From what I've seen this is not a problem, as you assign physical ports to the dvUplinks for each host?
 
Your config looks straightforward.

You need two vSwitches with two port groups each (four in total), using per-port-group failover-order overrides.

vSwitch0: ports 1 & 4

Two port groups on vSwitch0 - NFS (port 1 active, port 4 standby) and vMotion (port 4 active, port 1 standby).

vSwitch1: ports 2 & 3

Two port groups on vSwitch1 - iSCSI_A (port 2 only) and iSCSI_B (port 3 only), as iSCSI binding needs a single active uplink each.

If you're confused, search for the vMotion best-practice PDF, which tells you how to set up exactly what your consultant has recommended for vMotion only (your chap has just applied that logic to other types of traffic - perfectly acceptable and doable without distributed switches).
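For reference, the whole thing can be scripted with esxcli on each host. A sketch assuming vmnic0-vmnic3 correspond to ports 1-4 and made-up VLAN IDs - both are assumptions, so adjust per host:

```shell
# vSwitch0: NFS and vMotion sharing uplinks 1 (vmnic0) and 4 (vmnic3).
esxcli network vswitch standard add --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic3

# NFS port group: vmnic0 active, vmnic3 standby (VLAN 100 is a placeholder).
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=NFS
esxcli network vswitch standard portgroup set --portgroup-name=NFS --vlan-id=100
esxcli network vswitch standard portgroup policy failover set --portgroup-name=NFS \
    --active-uplinks=vmnic0 --standby-uplinks=vmnic3

# vMotion port group: the mirror image - vmnic3 active, vmnic0 standby.
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=vMotion
esxcli network vswitch standard portgroup set --portgroup-name=vMotion --vlan-id=101
esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMotion \
    --active-uplinks=vmnic3 --standby-uplinks=vmnic0

# vSwitch1: iSCSI_A and iSCSI_B on uplinks 2 (vmnic1) and 3 (vmnic2).
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2

# Each iSCSI port group gets exactly one active uplink; the other uplink is
# left out of the list entirely (unused), which port binding requires.
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI_A
esxcli network vswitch standard portgroup set --portgroup-name=iSCSI_A --vlan-id=102
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI_A \
    --active-uplinks=vmnic1

esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI_B
esxcli network vswitch standard portgroup set --portgroup-name=iSCSI_B --vlan-id=103
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI_B \
    --active-uplinks=vmnic2
```

The vmk creation and IP addressing would follow the same pattern per port group; the key point is that the active/standby split lives on the port group, not the vSwitch.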
 
It appears that to create this config we would need to use distributed vSwitches. One of my colleagues is against this because our hosts are not identical, so the vmnics have different numbers. From what I've seen this is not a problem, as you assign physical ports to the dvUplinks for each host?

If you have the required Enterprise Plus licensing for distributed switches then use them. Our hosts aren't identical either, and we use them with no problems whatsoever.

Migrating from standard to distributed switches is an absolute breeze using the wizards and involves no downtime at all - so once you've done some testing and change control there is no real barrier to using them.
 