How much breaks when vCenter Server is not running?

My home lab will soon be using shared storage and 2 identical hosts. I have vCenter Server running on Server 2012 in a VM, but I'm concerned about how much will stop working if a host fails while it's running the vCenter VM. Does HA still kick in during a failure? Will DRS still work?

I would consider running it fault tolerant but then I can't take snapshots (or use enough vCPUs).
 
HA and DRS are reliant on vCenter so they will stop if vCenter is not available.
 
That's a shame, especially about HA. I thought the agents might work it out themselves in the absence of vCenter.

Is there a good way to have a fault-tolerant vCenter VM that can still have snapshots? I guess I could script something... bit of a faff though.
 
I'm not a VCP, but I believe HA will still work (as long as it has been configured) and this is why it's possible to virtualize vCenter.
 
I'm not a VCP, but I believe HA will still work (as long as it has been configured) and this is why it's possible to virtualize vCenter.

This. vCenter is required to configure HA but is not required for HA to function.

I also hold a current VCP5 and that's one of the questions on the exam. ;)
 
My home lab will soon be using shared storage and 2 identical hosts. I have vCenter Server running on Server 2012 in a VM, but I'm concerned about how much will stop working if a host fails while it's running the vCenter VM. Does HA still kick in during a failure? Will DRS still work?

I would consider running it fault tolerant but then I can't take snapshots (or use enough vCPUs).

vCenter is required for CONFIGURING, but not for RUNNING. You can run pretty much indefinitely without vCenter, as long as nothing changes in your environment. Obviously it can be scary to run a production system in the dark, but the VMs and distributed switches will quite happily keep running.

HA most definitely doesn't require vCenter to work (the ESXi hosts talk amongst themselves to figure out when a host has gone down). Obviously vCenter is required to configure any changes to HA, but once HA is configured, vCenter is no longer required.

dvSwitches also do not require vCenter to run, only to configure.

If you reboot an ESXi host while vCenter is down, some things may not come up, dvSwitches being one of them; although e.g. the Cisco Nexus 1000V can be set up to come up in a bare-bones configuration in the absence of vCenter (to facilitate a cold start of the environment).
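
If anyone wants to sanity-check that on their own cluster, something along these lines should do it with pyVmomi (a rough, untested sketch; the vCenter address, credentials and names are placeholders, not anything from this thread). It reads the cluster's HA (das) configuration and the FDM/HA agent state each host reports; that agent state lives on the hosts themselves, which is why HA can still act while vCenter is down:

Code:
# Rough pyVmomi sketch (untested): check a cluster's HA config and each
# host's FDM (HA agent) state. Address, user and password are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only - don't skip cert checks in production
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        das = cluster.configurationEx.dasConfig
        print(cluster.name, "- HA enabled:", das.enabled)
        for host in cluster.host:
            # dasHostState is reported by the FDM agent running on the host itself
            state = host.runtime.dasHostState
            print("   ", host.name, state.state if state else "n/a")
    view.DestroyView()
finally:
    Disconnect(si)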
 
Thanks rotor, great explanation. I've held off using dvSwitches because I assumed they would die if vCenter was down, but haven't had time to test/read up on it. More good news :)
 
rotor is spot on with his comments in this thread regarding HA and DRS.

One thing I would add is that you should consider pegging your vCenter to a particular host. That way, if you end up with an outage you know whether vCenter is affected or not and, crucially, you won't have to try and "find" it if DRS has punted it around your hosts, especially if you don't have VMs set to power on automatically in your configuration.
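
If you'd rather do that programmatically than click through the DRS settings, a rough pyVmomi sketch along these lines should work (untested; the VM name, address and credentials are placeholders). It prints which host the vCenter VM currently lives on and adds a per-VM DRS override so DRS stops moving it around:

Code:
# Rough pyVmomi sketch (untested): find the vCenter VM, report its current host,
# and disable automated DRS moves for just that VM. Names/credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vc_vm = next(vm for vm in view.view if vm.name == "vcenter01")  # your vCenter VM's name
    view.DestroyView()
    print("vCenter VM is currently on host:", vc_vm.runtime.host.name)

    cluster = vc_vm.runtime.host.parent  # assumes the host sits in a DRS cluster
    override = vim.cluster.DrsVmConfigSpec(
        operation="add",
        info=vim.cluster.DrsVmConfigInfo(key=vc_vm, enabled=False))  # opt this VM out of DRS
    spec = vim.cluster.ConfigSpecEx(drsVmConfigSpec=[override])
    cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
finally:
    Disconnect(si)

(A "should run on host" VM-to-host rule would do much the same job if you'd rather keep DRS in the picture for that VM.)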

If you're super-worried, peg it to one host and have FT keeping it up simultaneously on another host. I personally don't worry about my vCenter VM to that extent but it depends on what you are comfortable with.
 
If you're super-worried, peg it to one host and have FT keeping it up simultaneously on another host. I personally don't worry about my vCenter VM to that extent but it depends on what you are comfortable with.

I wouldn't recommend FT with vCenter, as FT has a set of limitations and the main one for vCenter is that FT VMs are limited to a single vCPU, while vCenter requires 2 or more.
 
[RXP]Andy said:
I wouldn't recommend FT with vCenter, as FT has a set of limitations and the main one for vCenter is that FT VMs are limited to a single vCPU, while vCenter requires 2 or more.

Ah yeah, forgot about that. It was just a flippant remark anyway - I don't know why anyone would ever want to do it because it's just not that important...
 
Slightly old thread, but just to chime in with our recent experiences..

We're having issues with (we believe) our CNA cards which connect the hosts to the storage. Sometimes they'll lose connection, dropping all the VMs and marking them as inaccessible. When this happens to the host containing vCenter, we can't manually move the VMs to another host, as the dvSwitch hasn't got any ports allocated for them, so we can't bring the vC back up, so we can't move VMs, so we can't bring the vC... etc.

We've created a port group on the standard vSwitch now just for the vC, but having had this happen twice now, management and some colleagues are sick of it and we're now planning to junk the dvSwitch :(

(And also trialling some other CNAs, as the current manufacturer's tech support is rubbish...)
 
Slightly old thread, but just to chime in with our recent experiences..

We're having issues with (we believe) our CNA cards which connect the hosts to the storage. Sometimes they'll lose connection, dropping all the VMs and marking them as inaccessible. When this happens to the host containing vCenter, we can't manually move the VMs to another host, as the dvSwitch hasn't got any ports allocated for them, so we can't bring the vC back up, so we can't move VMs, so we can't bring the vC... etc.

We've created a port group on the standard vSwitch now just for the vC, but having had this happen twice now, management and some colleagues are sick of it and we're now planning to junk the dvSwitch :(

(And also trialling some other CNAs, as the current manufacturer's tech support is rubbish...)

You shouldn't really be managing the hosts via a dvSwitch; keep management traffic on a standard vSwitch and guest VMs on the dvSwitch, where consistency of port group names is important for HA.
 
Yep. A hybrid setup (standard vSwitch for management, dvSwitch for VM traffic) is the way to do it if you're going to use a dvSwitch at all.

By the way, which CNAs are you using? We're using QLogic 8262 and have no issues at all.

How is your storage presented? FCoE? NFS?
 
We have management running off the internal NICs on a standard vSwitch.

When the NFS store gets marked as inaccessible and the vC drops, we can log on directly to another host and add the VMs to the inventory. However, when switching them on, a port on the dvSwitch isn't allocated as the vC is down, so the host can't talk to the network. Catch-22 when trying to get the vC back up!
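
For what it's worth, the direct-to-host part can be scripted as well. Something like this rough pyVmomi sketch (untested; host address, credentials, port group name, vSwitch and VLAN are placeholders) connects straight to an ESXi host while vCenter is down and adds a standard vSwitch port group you can then move the vC's NIC onto from the host client:

Code:
# Rough pyVmomi sketch (untested): talk directly to an ESXi host (no vCenter)
# and create a standard vSwitch port group for the vCenter VM to fall back to.
# Host address, credentials, port group name, vSwitch and VLAN are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi01.lab.local", user="root", pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    host = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view[0]
    pg_spec = vim.host.PortGroup.Specification(
        name="vCenter-Recovery",          # placeholder port group name
        vlanId=0,                         # set this to your management VLAN
        vswitchName="vSwitch0",
        policy=vim.host.NetworkPolicy())
    host.configManager.networkSystem.AddPortGroup(portgrp=pg_spec)
    # After this, edit the vCenter VM's NIC to use the new port group and power it on.
finally:
    Disconnect(si)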

We're using 2x QLogic 8242s per host - beyond the cards themselves being pants, the only other thing it can be is an incompatibility with the Dell they're in. Dell don't officially support them, I know, but then they were thoroughly unhelpful when we wanted to buy their version... :(
 
When the NFS store gets marked as inaccessible and the vC drops, we can log on directly to another host and add the VMs to the inventory. However, when switching them on, a port on the dvSwitch isn't allocated as the vC is down, so the host can't talk to the network. Catch-22 when trying to get the vC back up!

When running vCenter as a VM, you mustn't run it on a dvSwitch, which becomes a major pain, and to me actually takes away quite a bit of the advantage of running vCenter as a VM (because you need additional NICs to run the vSwitch dedicated to vCenter).

If you use the Nexus 1000v dvSwitch, you can configure a small number of "System VLANs", which are hard-coded into each ESXi host, so in the event of a cold start (where the entire environment is powered down), you can still bring up the environment. i.e. the VLAN the ESXi hosts are on, and the VLAN the vCenter VM is on, are configured as System VLANs on the 1000v, and you're golden. I have tested this extensively (cold start), and it works.
 
If you use the Nexus 1000v dvSwitch, you can configure a small number of "System VLANs", which are hard-coded into each ESXi host, so in the event of a cold start (where the entire environment is powered down), you can still bring up the environment. i.e. the VLAN the ESXi hosts are on, and the VLAN the vCenter VM is on, are configured as System VLANs on the 1000v, and you're golden. I have tested this extensively (cold start), and it works.

We're starting to look at the Nexus 1000v (again, we looked at it ages ago before the VMware project kicked off!) for various reasons. Wasn't aware of that one though! Will have to pass it on...

Got any more little side benefits of it? :)
 