ESXi 5.5: How does it handle NICs on hosts clustered but not using HA?

Hi,

I'm currently looking into ESXi to replace my XenServer 6.5 setup, as XenServer doesn't handle different numbers of NICs per host in a pool. Basically, do you need the same number of NICs per host in a non-HA cluster, or does it not matter the way it does in XenServer? I have a number of hosts with different numbers of built-in ports (some ex-lease servers and some custom built) and I would like to know how it's handled.

Thanks,

Chris
 
If there is no vCenter in use, then I can't see how having different NIC configurations on different hosts would matter.

The only time NIC configuration matters is when you have vCenter, as then you need the same configuration on each host in order for HA to work, i.e. for a guest to move from one host to another without any problems.

Without vCenter, each host is just seen as independent and they don't even communicate with each other.

You could use differing hosts with vCenter as well and even utilise HA. For example, one host could have five NICs and another only two. As long as the two NICs are configured the same on both hosts, you can use HA for guests that utilise those interfaces. Any guests using the other three interfaces would just get an error when you try to migrate them to the host that doesn't have that NIC configuration. Of course, the main requirement for HA is shared storage as well.
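To make the rule above concrete, here is a toy model (not the real vCenter logic, and the host/port-group names are made up for illustration) of the compatibility check described: hosts may have different NIC counts, and a guest can move between two hosts as long as every port group its virtual NICs use exists on both.

```python
def can_migrate(guest_port_groups, src_host, dst_host):
    """Return True if every port group the guest uses exists on both hosts."""
    return all(pg in src_host["port_groups"] and pg in dst_host["port_groups"]
               for pg in guest_port_groups)

# One host with five NICs, one with only two, as in the example above.
host_a = {"nics": 5, "port_groups": {"Mgmt", "VM-LAN", "DMZ", "Backup", "iSCSI"}}
host_b = {"nics": 2, "port_groups": {"Mgmt", "VM-LAN"}}

print(can_migrate({"VM-LAN"}, host_a, host_b))  # guest on the shared network: True
print(can_migrate({"DMZ"}, host_a, host_b))     # guest on a DMZ-only network: False
```

The differing NIC counts themselves never appear in the check; only the port groups the guest actually uses matter, which is why the three extra interfaces on the bigger host are irrelevant.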
 
Basically this.

You'll need vCenter configured to get vMotion working, and HA is also controlled through vCenter.
 
Hi and thanks. What I meant was that I do not want or need HA enabled, just clustering if possible. Obviously I can create multiple datacenters in vCenter to control this if I cannot cluster.
 
What do you mean by cluster? Adding them to vCenter?

Sure, you can add different host configurations to the same datacenter. The main factor there would be ESXi host versioning: if they are all compatible with the vCenter version that you select, then there are no other considerations that I can think of, really.

We have one site that was poorly managed, with about six ESXi hosts on different hardware, all with different ESXi versions, NICs and local storage configurations. They were all added to the same vCenter for management.
 
You can't get clustering or HA as it's known in ESXi without a vCenter server (sort of). However, once the cluster is configured you don't need a vCenter server for the operation of the cluster. So in theory you can build it and configure it with a vCenter server, then shut down your vCenter server (assuming you use the 60-day trial version to set it all up), and the cluster will still work as configured. You just can't do anything manually.
 
Morning all,

OK, so I will rephrase everything to try and make my question clearer. This is also a home lab, so I don't need to follow the official ways if necessary.

In XenServer 6.5 you have the option of adding standalone servers, pooling servers, or pooling with HA. The issue is that when you pool a number of servers that have different numbers of NICs, XenServer simply takes the largest NIC count and replicates that across all member servers, leading to issues with servers that have only one NIC. The advantage of pooling in XenServer is the ability to access a single shared network storage device and to migrate/move VMs across the servers, and this can be achieved without the need for HA.

So, with that in mind, does ESXi (using vCenter) allow you to 'pool' hosts into a single cluster (without HA) with different numbers of NICs, with the ability to move VMs around from server to server?

If the above doesn't work and you don't 'pool', can you still migrate VMs from one host to another using vCenter, or move them to some form of shared storage that all the hosts can access?

Thanks,

Chris
 
You can vMotion between standalone hosts, even using local storage. Hosts do not need to be in a cluster for this, and if you're on version 6 you have even more freedom.

Switching and physical uplinks can differ between hosts, but clearly you would want as similar a configuration between hosts as possible to ensure vMotion is suitable and possible.
 
That's not vMotion, which 100% requires a VC.

No VC, no vMotion.

Where did I say a VC was not required? And it is vMotion (to be precise, Enhanced vMotion). If you're referring to my comment about standalone hosts, I mean hosts which are not in a cluster. A cluster does not create the vMotion capability; standalone, non-clustered hosts can vMotion, but yes, a VC is required.
 
OK, I see what you mean by cluster; I was being thick. As long as the vCenter version is compatible with the hosts, you can add them to the same cluster. The requirements for moving guests between hosts manually are that the hosts have shared storage and that the NIC configuration used by the guest you are moving matches on each host. NICs on the hosts that are not in use by the guest you are migrating are irrelevant, i.e. they can be configured differently. All that matters is that the NIC configured on the guest has the same configuration on the other host.

So, with that in mind, does ESXi (using vCenter) allow you to 'pool' hosts into a single cluster (without HA) with different numbers of NICs, with the ability to move VMs around from server to server?

Short answer: yes.


You can even use different physical NICs on different hosts; all that matters is the configuration of the vSwitch (VM port group) used by that specific guest. The names have to match up, and ideally the networking should be configured the same. In theory you could have a vSwitch with the same name on a different host but configured on a different subnet, and it would still probably migrate, although I wouldn't recommend that.


In vCenter, go to Inventory → Networking. It will show you the VM port groups that are available for guests across all the hosts. Once you have a valid port group (valid for migrations) across hosts, it will change from listing multiple VM port groups to listing only one, even though the port group is on every host.
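A hypothetical sketch of that aggregation behaviour (the host and port-group names are invented for illustration): collapse per-host port groups into one inventory entry per name, and a port group that exists on every host is the one that is valid for migrations.

```python
hosts = {
    "esxi-01": {"Mgmt", "VM-LAN", "DMZ"},
    "esxi-02": {"Mgmt", "VM-LAN"},
}

# One inventory entry per port-group name, remembering which hosts carry it.
inventory = {}
for host, groups in hosts.items():
    for pg in groups:
        inventory.setdefault(pg, set()).add(host)

# Port groups present on every host are valid targets for migration.
migratable = {pg for pg, on in inventory.items() if on == set(hosts)}
print(sorted(migratable))  # ['Mgmt', 'VM-LAN'] -- 'DMZ' exists on only one host
```

This mirrors what the inventory view shows: "Mgmt" and "VM-LAN" each collapse to a single entry, while "DMZ" stays tied to the one host that has it.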

Let me know if that makes sense.
 
Thank you very much, that makes perfect sense and appears to be exactly what I wanted to hear! I'm not looking for high availability here, just the ability to move VMs around and start them on a server manually if necessary. As I generally have to pass through some devices, I don't do much migration, but the ability to move VMs to a general store if I need to take down a host or do something is good.

Again, thank you!
 
HA uses the same migration mechanism (vMotion = migration). If you right-click and migrate a guest to another host, that is no different from what HA does; it's just that HA will do it automatically if it is alerted to a problem with a host. HA, I think, is only available with Essentials Plus licensing and up; I am not sure about trials, because I think everything is enabled on a trial anyway for testing.
 
HA = High Availability: if a host totally fails, then the VM process is dead. Other members of the fault domain will attempt to bring the failed VMs back online, assuming there are sufficient free resources on a surviving host.

If you're creating HA clusters then you should really be reserving some failover capacity, and ensuring it is always available with an admission control policy. Most of the recommendations are based on the percentage of the cluster you can sustain losing (so for a 3-node cluster configured for N+1 you'd reserve 34% CPU and 34% RAM). However, unless you are using reservations, admission control doesn't work too well, as it will just use the bare minimum resources needed to bring a VM online, which leads nicely into the correct usage of resource pools and reservations.
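The percentage figures above can be worked out directly: for an N-node cluster sized to tolerate one host failure (N+1), reserve roughly 1/N of the cluster's CPU and RAM, rounded up. A quick sketch (the function name is mine, not a vSphere API):

```python
import math

def failover_reserve_pct(nodes, host_failures=1):
    """Percentage of cluster CPU/RAM to reserve so the cluster can
    absorb the given number of host failures."""
    return math.ceil(100 * host_failures / nodes)

print(failover_reserve_pct(3))  # 34 -- the 3-node N+1 figure quoted above
print(failover_reserve_pct(4))  # 25
```

Rounding up is what turns 33.3% into the 34% recommendation for a 3-node cluster; rounding down would leave the cluster fractionally short of one full host's worth of capacity.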

I personally would only ever use HA with DRS (DRS is a function of the VC; HA itself will work without a functioning VC). Without DRS you can have situations where the HA mechanism can't bring VMs online due to resource issues which DRS might have been able to resolve, but then you'd have to be running some very hot clusters.

Set the restart priority so the largest machines power on first, to aid this.
 