2 Node Hyper-V Cluster: Changing NIC Config

We're currently running a 2-node 2012 R2 Hyper-V cluster with 1GbE NICs assigned as follows:

1 x 1GbE - Host Management
1 x 1GbE - CSV/Heartbeat
1 x 1GbE - Live Migration
3 x 1GbE - Virtual Switch for VMs (Teamed)
3 x 1GbE - SAN MPIO (Separate)
1 x 1GbE - Spare

We're planning to install a 2-port 10GbE card in each host, team the two ports, then migrate some, if not most, of the above roles to the faster network.
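
For reference, I'm assuming the team itself would be built something like this on 2012 R2 (the adapter and team names below are just placeholders, not our actual naming):

Code:
# Create an LBFO team from the two 10GbE ports
# "10G-Port1" / "10G-Port2" are placeholder adapter names
New-NetLbfoTeam -Name "Team-10G" -TeamMembers "10G-Port1", "10G-Port2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic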

I've read so many conflicting articles about the best config and I'm no closer to deciding what we'll actually do yet.

Option 1
Everything but SAN MPIO on to 10GbE? (A converged setup - rough sketch after the options below.)

Option 2
Separate 2 x 1GbE teams for Host Management, CSV/Heartbeat and Live Migration, 2 x 10GbE team for the Hyper-V Virtual Switch and keep the SAN MPIO NICs as they are?

Option 3
Separate 2 x 1GbE team for Host Management, separate 2 x 1GbE crossover teams for CSV/Heartbeat and Live Migration, a 10GbE team for the Hyper-V Virtual Switch and keep the SAN MPIO NICs as they are?

Option 4
A mixture of the above?
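
To make Option 1 concrete, here's roughly how I understand a converged setup on the 10GbE team would look on 2012 R2. The switch name, VLAN IDs and bandwidth weights are placeholders I've made up for illustration, not settings we've decided on:

Code:
# Hyper-V switch on top of the 10GbE team, with QoS by relative weight
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "Team-10G" -MinimumBandwidthMode Weight -AllowManagementOS $false

# Host vNICs for the roles currently on dedicated 1GbE NICs
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "CSV" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"

# VLAN per role (IDs are placeholders)
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 10
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "CSV" -Access -VlanId 20
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 30

# Minimum bandwidth weights so no single role can starve the others
# (the remaining weight is left for VM traffic through the switch)
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "CSV" -MinimumBandwidthWeight 20
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30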

I know there are so many different options, so does anyone have any suggestions or recommendations based on your own experience or knowledge?

Thanks :)
 
Depends on traffic for each current NIC?

You have a SAN? With only two hosts, local storage is the recommended setup. If the SAN goes down, nothing works.
 
You have a SAN? With only two hosts, local storage is the recommended setup. If the SAN goes down, nothing works.

That'd be pretty unlucky/crap hardware!

Pop everything onto the 2 x 10GbE! Using 2 x 10GbE NICs teamed with iSCSI on top is perfectly supported by Microsoft and exactly the setup we use at work. In extreme setups it might be worth putting iSCSI on its own NICs, but if you're moving from 1GbE you should be fine.

Here's a link to a Technet article about the MS stance on it.

https://blogs.technet.microsoft.com...not-supported-for-iscsi-that-is-the-question/
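
For what it's worth, once the team and converged switch are in place we just sanity-check them with the usual cmdlets (names here match the placeholder examples above, swap in your own):

Code:
# Confirm the team is up and both 10GbE members are active
Get-NetLbfoTeam -Name "Team-10G"
Get-NetLbfoTeamMember -Team "Team-10G"

# Confirm the switch QoS mode and the host vNICs hanging off it
Get-VMSwitch -Name "ConvergedSwitch" | Format-List Name, BandwidthReservationMode
Get-VMNetworkAdapter -ManagementOS | Format-Table Name, SwitchName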
 