VMware: Standard or Distributed vSwitches?

I'm just going through the VTSP course notes at the moment (to refresh my VMware knowledge, as I haven't used it since VI3) because we have an install at a customer's site coming up soon, and I've noticed VMware now recommends using "Distributed vSwitches" rather than the old-style Standard vSwitches. The install coming up consists of:

2x ESX Hosts with 4 Guest OSs per server
2x vCenter Servers
2x iSCSI SANs

What I would normally do with the standard vSwitches is physically attach vmnic0 to the customer's desktop-facing network, which would host the Guest OSs and Service Console, then physically attach vmnic1 to the separate (no physical path to the desktop network) iSCSI network, which would deal with iSCSI and vMotion traffic, with each NIC on a different vSwitch.
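For reference, that layout scripted up looks roughly like the pyVmomi sketch below - the host name, credentials and switch/port-group names are just placeholders, and on a default install vSwitch0 already exists, so in practice you'd probably only be adding vSwitch1 and the extra port groups:

```python
# Rough pyVmomi sketch of the two-standard-vSwitch layout described above.
# Host name, credentials and switch/port-group names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; use proper certificates in production
si = SmartConnect(host="esx01.example.local", user="root", pwd="password", sslContext=ctx)

# First datacenter, first compute resource, first host (fine for a standalone ESX host)
host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
net = host.configManager.networkSystem

# vSwitch0 -> vmnic0: desktop-facing network (Service Console + guest VMs)
net.AddVirtualSwitch(vswitchName="vSwitch0", spec=vim.host.VirtualSwitch.Specification(
    numPorts=128,
    bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic0"])))

# vSwitch1 -> vmnic1: isolated storage network (iSCSI + vMotion)
net.AddVirtualSwitch(vswitchName="vSwitch1", spec=vim.host.VirtualSwitch.Specification(
    numPorts=128,
    bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic1"])))

# Port groups per traffic type
for pg_name, vswitch in [("VM Network", "vSwitch0"),
                         ("iSCSI", "vSwitch1"),
                         ("vMotion", "vSwitch1")]:
    net.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
        name=pg_name, vlanId=0, vswitchName=vswitch, policy=vim.host.NetworkPolicy()))

Disconnect(si)
```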

But how would I replicate this setup with a Distributed vSwitch? Would I just create two Distributed vSwitches, assign each host's uplink ports for the iSCSI and desktop networks to the relevant vSwitch, and then create the port groups for the Service Console, guests, vMotion and iSCSI?
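Something along those lines, I think - here's a rough pyVmomi sketch driven from vCenter, where the switch and port-group names are placeholders and joining each host's vmnics to the uplinks is a separate step I've only noted in a comment:

```python
# Rough pyVmomi sketch of the same layout as two distributed vSwitches, driven from
# vCenter. Names and the datacenter lookup are placeholders; attaching each host and
# its vmnic to the uplinks is a separate step (DVSConfigSpec.host) not shown here.
import ssl
from pyVim.connect import SmartConnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.local", user="administrator", pwd="password",
                  sslContext=ctx)
dc = si.content.rootFolder.childEntity[0]  # first datacenter

def create_dvs(name):
    """Create an empty distributed vSwitch with a single named uplink per host."""
    spec = vim.DistributedVirtualSwitch.CreateSpec()
    spec.configSpec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
    spec.configSpec.name = name
    spec.configSpec.uplinkPortPolicy = vim.DistributedVirtualSwitch.NameArrayUplinkPortPolicy(
        uplinkPortName=["uplink1"])
    task = dc.networkFolder.CreateDVS_Task(spec)
    WaitForTask(task)
    return task.info.result

desktop_dvs = create_dvs("dvSwitch-Desktop")  # uplink: each host's vmnic0
iscsi_dvs = create_dvs("dvSwitch-iSCSI")      # uplink: each host's vmnic1

# Distributed port groups for each traffic type
for dvs, pg_name in [(desktop_dvs, "Management"), (desktop_dvs, "Guests"),
                     (iscsi_dvs, "iSCSI"), (iscsi_dvs, "vMotion")]:
    pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        name=pg_name, type="earlyBinding", numPorts=32)
    WaitForTask(dvs.AddDVPortgroup_Task([pg_spec]))
```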

Sorry if this is a totally dumb question but I wanted to make sure what the score is rather than flying by the seat of my pants :)
 
No point in a DVS if you're just using a "flat LAN" with no VLANs etc., imo :)

Also, if I were you I would be looking at a minimum six-NIC config for resilience: 2 for SC, vMotion and maybe FT if you're using it, 2 for VMs, and 2 for iSCSI (which should ideally be on a segregated VLAN).
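To illustrate the split (purely a sketch - the vmnic numbering and switch names are my own assumptions), the six NICs would map onto three standard vSwitches, each bonded to a pair of physical NICs:

```python
# Sketch of the suggested six-NIC split: three standard vSwitches, each backed by a
# two-NIC team for resilience. vmnic numbering and switch names are assumptions.
from pyVmomi import vim

def teamed_vswitch(net, name, nics):
    """net is the host's configManager.networkSystem, as in the earlier sketch."""
    net.AddVirtualSwitch(vswitchName=name, spec=vim.host.VirtualSwitch.Specification(
        numPorts=128,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=nics)))

# Example usage (uncomment once "net" is obtained as in the earlier sketch):
# teamed_vswitch(net, "vSwitch-Mgmt",  ["vmnic0", "vmnic1"])  # SC, vMotion, maybe FT
# teamed_vswitch(net, "vSwitch-VM",    ["vmnic2", "vmnic3"])  # guest VM traffic
# teamed_vswitch(net, "vSwitch-iSCSI", ["vmnic4", "vmnic5"])  # iSCSI, segregated VLAN
```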

Unfortunately we are restricted by their existing hardware, which they want to re-use as ESX hosts - they have 2x Gigabit NICs onboard with no spare PCI-X slots. I'd have preferred to at least separate the vMotion traffic from the iSCSI traffic, but we have to make do with what we are given!

Not worth it for 2 hosts - are they even licensed for it?

I don't think vDSs are only available in certain versions - according to the latest VTSP course, it's the way VMware wants it set up from now on.

Have a read here on the basics of a vDS - http://www.no-x.org/?p=252

As said above though, there's no real advantage to using one with just a couple of hosts. If you're planning on adding more in future then maybe.

That link was most useful, thanks :)

We have enough kit here to test out the entire set-up beforehand, so I'll give it a try with both scenarios. They don't plan on adding more capacity soon - it's more for redundancy purposes than anything else!

Cheers for all the help chaps :)
 
If I were you, I would be mitigating any circumstances where design decisions are going to impact performance and stability (basically, cover yourself and caveat everything). Get the customer to agree to what you are doing and explain why you're doing it that way.

vMotion isn't going to be too much of a killer on the iSCSI LAN as long as you don't vMotion within business hours. vMotion causes heavy spikes in traffic, which is not what you want on an iSCSI LAN, and you should also be looking at utilising jumbo frames with iSCSI where possible. That said, if you don't have an enterprise iSCSI storage array, you will probably saturate disk I/O before network I/O and circumvent any need for jumbo frames.
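For the jumbo frames point, this is roughly what's involved on the host side (names carried over from the sketch earlier in the thread; the physical switch ports and the array need to be at 9000 end-to-end too, otherwise it gains you nothing):

```python
# Rough pyVmomi sketch: raise the MTU to 9000 on the iSCSI vSwitch and its VMkernel
# interface. "vSwitch1" and the "iSCSI" port group are the placeholder names used in
# the earlier sketch.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="esx01.example.local", user="root", pwd="password", sslContext=ctx)
host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
net = host.configManager.networkSystem

# Jumbo frames on the standard vSwitch carrying iSCSI traffic
vswitch = next(v for v in net.networkInfo.vswitch if v.name == "vSwitch1")
vswitch_spec = vswitch.spec
vswitch_spec.mtu = 9000
net.UpdateVirtualSwitch(vswitchName="vSwitch1", spec=vswitch_spec)

# ...and on the iSCSI VMkernel interface itself
vmk = next(v for v in net.networkInfo.vnic if v.portgroup == "iSCSI")
vmk_spec = vmk.spec
vmk_spec.mtu = 9000
net.UpdateVirtualNic(device=vmk.device, nic=vmk_spec)

Disconnect(si)
```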

I'm going to have a talk with the customer early next week, as this solution was initially designed by a developer and I was brought in in the latter stages - I've a few points to bring up that the devs missed :)

[Attached: vSphere edition feature comparison chart showing the Distributed vSwitch as an Enterprise Plus feature]

I looked at that feature comparison list earlier and didn't even notice that it was an Enterprise Plus feature! I'm not firing on all cylinders today!
 
Wow, what sort of servers are these? What else is using the PCI-X slots?

You'd be crazy to do this with 2 NICs.

It's going on PowerEdge R210s - as standard they have 2x Broadcom NICs, and the PCI-Express slot is taken up with an array controller at the moment. I've had the lid off one of them this morning and the motherboard has 2x SATA ports on it, so I'll be removing the array controller and attaching the disks directly to the motherboard controller. I'll then be sticking an Intel quad-port NIC in the spare PCI-Express slot :)
 