AMD announce EPYC

Man of Honour
Joined
30 Oct 2003
Posts
13,229
Location
Essex
Well, the day is almost here, lads, and I won't lie: I have had a nightmare getting everything upgraded from 5.5 to 6.7 U2 (for that tasty Enhanced vMotion Compatibility), but the estate is now all up and I have a cluster of three with EVC, DRS and HA. Well, that's not quite the truth; it's actually a cluster of two plus one host sitting empty in maintenance mode, ready for the 6.7 install tomorrow, after which I can just vMotion servers back.

It all started last week when I was digging around in 5.5 trying to put a plan together for how the migration was going to work, and somehow I managed to break vCenter. Not just a little bit, either; I broke it in such a way that I couldn't reconnect hosts to any vCenter. I tried everything: reset management networks, went through DNS and hosts files with a fine-tooth comb, but after a day or so of fighting I pulled the plug and started putting a plan into action. Basically the network is now very much ready for some EPYC action come Friday. Some 50 servers have been upgraded and moved over the last two evenings, and apart from a single issue it's been fairly smooth sailing.
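(For anyone curious what each of those moves looks like in API terms, here's a minimal pyVmomi sketch of a live vMotion, assuming shared storage and vMotion networking are already in place. Every address, credential and object name below is a placeholder, not the real estate.)

Code:
# Hypothetical sketch: live vMotion one VM to another host via pyVmomi.
# All names and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab-style; verify certs in production
si = SmartConnect(host='vcenter.example.local',
                  user='administrator@vsphere.local', pwd='...', sslContext=ctx)
content = si.RetrieveContent()

def find(vimtype, name):
    """Look up a managed object by name using a container view."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.DestroyView()

vm = find(vim.VirtualMachine, 'app-server-01')
dest = find(vim.HostSystem, 'esxi-02.example.local')

# A RelocateSpec with only a host set performs a compute-only vMotion.
WaitForTask(vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(host=dest)))
Disconnect(si)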
 
Man of Honour
Joined
20 Sep 2006
Posts
33,887
Older versions can be a nightmare to upgrade. With so few hosts it probably would have been easier to build fresh and seize the hosts.
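(In case anyone wants to see what "seizing" can look like in practice, here's a hedged pyVmomi sketch that force-adds a host into a fresh vCenter. Names and credentials are placeholders, and note vCenter will normally want the host's sslThumbprint in the ConnectSpec as well.)

Code:
# Hypothetical sketch: force-add ("seize") an ESXi host into a fresh vCenter.
# Names/credentials are placeholders; in practice vCenter usually requires the
# host's sslThumbprint in the ConnectSpec unless certificate checking is relaxed.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host='new-vcenter.example.local',
                  user='administrator@vsphere.local', pwd='...', sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == 'prod-cluster')
view.DestroyView()

spec = vim.host.ConnectSpec(
    hostName='esxi-01.example.local',
    userName='root',
    password='...',
    force=True)  # take the host even if another vCenter thinks it owns it
WaitForTask(cluster.AddHost_Task(spec=spec, asConnected=True))
Disconnect(si)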
 
Man of Honour
Joined
30 Oct 2003
Posts
13,229
Location
Essex
Older versions can be a nightmare to upgrade. With so few hosts it probably would have been easier to build fresh and seize the hosts.

That was the odd thing: I built a fresh vCenter in 5.5 and tried to seize the hosts, which didn't work. I called VMware and of course they have little time for 5.5, which is out of support, and advised me to just upgrade. It was just a bit of a pain that you can't manage 5.5 hosts in 6.7, so all the hosts needed a reinstall as well, and of course they all had live VMs on them. I actually have two clusters of three at different sites, so I guess I'll have to do the other site at some point as well.
 
Man of Honour
Joined
20 Sep 2006
Posts
33,887
It's a two-stage upgrade from 5.5 to 6.7. Power is down at home at the moment, but I believe vCenter 6.0 or 6.5 can manage 5.5 hosts. So go to 6.0 U3 first, bring the hosts up to date using VUM, and then upgrade vCenter.
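(Before planning the hop, it's worth an inventory of exactly which builds you're on. Something like this pyVmomi sketch would do it; the vCenter details are placeholders.)

Code:
# Hypothetical sketch: inventory ESXi versions/builds before the
# 5.5 -> 6.0 U3 -> 6.7 two-step. vCenter details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.local',
                  user='administrator@vsphere.local', pwd='...', sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for h in view.view:
    about = h.config.product  # vim.AboutInfo
    print(f'{h.name}: {about.fullName} (build {about.build})')
view.DestroyView()
Disconnect(si)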

It's the Windows-to-Appliance migrations that cause the most issues, due to service accounts and problems surrounding the DB.

I work for VMware, so feel free to drop me a PM if you have any questions.
 
Man of Honour
Joined
30 Oct 2003
Posts
13,229
Location
Essex
It's a two-stage upgrade from 5.5 to 6.7. Power is down at home at the moment, but I believe vCenter 6.0 or 6.5 can manage 5.5 hosts. So go to 6.0 U3 first, bring the hosts up to date using VUM, and then upgrade vCenter.

It's the Windows-to-Appliance migrations that cause the most issues, due to service accounts and problems surrounding the DB.

I work for VMware, so feel free to drop me a PM if you have any questions.

I'm all good now. Went with the "sod it and rebuild" method... I now have all the hosts in one cluster, and a second cluster built and ready for the three EPYC hosts. Out of interest, do you know which EVC option I should go with?
 
Man of Honour
Joined
20 Sep 2006
Posts
33,887
That's a can-of-worms question right there; there are a lot of blogs and opinions about this.

If you are going to live vMotion, then the EVC mode needs to be set at the lowest level supported by any target host the VM can reside on. Do bear in mind that the EVC feature set exposed to the VM is applied at VM boot.

If you're keeping the two clusters separate and are happy to cold vMotion the VMs, then set both clusters at the highest level they support.
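(In API terms, something like this hedged pyVmomi sketch shows where to read each host's highest supported baseline and where the cluster EVC mode gets set. The cluster name is a placeholder, and 'amd-zen2' is an assumed key for the Rome-generation baseline; check what maxEVCModeKey reports on your own hosts first.)

Code:
# Hypothetical sketch: read hosts' supported EVC baselines and set the cluster
# mode. Cluster name is a placeholder; 'amd-zen2' is an assumed key for the
# Rome baseline - confirm via host.summary.maxEVCModeKey before setting it.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.local',
                  user='administrator@vsphere.local', pwd='...', sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == 'epyc-cluster')
view.DestroyView()

for h in cluster.host:
    # The highest EVC baseline each host can run at.
    print(h.name, h.summary.maxEVCModeKey)

evc = cluster.EvcManager()  # vim.cluster.EVCManager
print('current mode:', evc.evcState.currentEVCModeKey)
WaitForTask(evc.ConfigureEvcMode_Task('amd-zen2'))  # assumed Rome-generation key
Disconnect(si)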
 
Man of Honour
Joined
30 Oct 2003
Posts
13,229
Location
Essex
That's a can-of-worms question right there; there are a lot of blogs and opinions about this.

If you are going to live vMotion, then the EVC mode needs to be set at the lowest level supported by any target host the VM can reside on. Do bear in mind that the EVC feature set exposed to the VM is applied at VM boot.

If you're keeping the two clusters separate and are happy to cold vMotion the VMs, then set both clusters at the highest level they support.

I so knew you were going to say this. I'm going from Xeon X5690s to EPYC Rome. I think it's probably best to just shut it all down and move them cold. Probably less aggro in the long run.
 
Man of Honour
Joined
20 Sep 2006
Posts
33,887
I assumed your current cluster was EPYC and not Intel. There is no warm migration path between CPU architectures; the only supported way is a cold vMotion.

Do the clusters have shared storage? If so it's a pretty simple job: power off, vMotion, upgrade hardware, power on, update VMware Tools and so on. If not, perhaps make use of the Veeam 30-day license to replicate the VMs over.
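(That loop is simple enough to script. A hedged pyVmomi sketch of power off / relocate / power on, with every name a placeholder:)

Code:
# Hypothetical sketch: cold-migrate VMs from the Intel cluster to an EPYC host
# over the shared SAN: power off, relocate, power back on. Names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.local',
                  user='administrator@vsphere.local', pwd='...', sslContext=ctx)
content = si.RetrieveContent()

def find(vimtype, name):
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.DestroyView()

dest = find(vim.HostSystem, 'epyc-01.example.local')
for name in ['app-server-01', 'app-server-02']:  # placeholder VM list
    vm = find(vim.VirtualMachine, name)
    if vm.runtime.powerState == vim.VirtualMachine.PowerState.poweredOn:
        WaitForTask(vm.PowerOffVM_Task())  # a guest OS shutdown first is kinder
    spec = vim.vm.RelocateSpec(host=dest, pool=dest.parent.resourcePool)
    WaitForTask(vm.RelocateVM_Task(spec=spec))
    WaitForTask(vm.PowerOnVM_Task())  # VMware Tools update can follow
Disconnect(si)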
 
Man of Honour
Joined
30 Oct 2003
Posts
13,229
Location
Essex
I assumed your current cluster was EPYC and not Intel. There is no warm migration path between CPU architectures; the only supported way is a cold vMotion.

Do the clusters have shared storage? If so it's a pretty simple job: power off, vMotion, upgrade hardware, power on, update VMware Tools and so on. If not, perhaps make use of the Veeam 30-day license to replicate the VMs over.

They do; everything is on a big SAN, an HP EVA 6500 with some 100 TB of disks (my next project). I thought the whole idea of EVC was to offer migration paths between architectures and vendors; I read a recent KB from VMware that suggested this was the case. Like you say, though, it's still fairly easy. I'll just take my time with it and cold migrate all the servers. Hopefully some updates on Rome from me tomorrow, if you are interested?

I also have full Veeam (9.5 Update 4) licenses at both sites, with an HP StoreOnce 4500 24 TB backup appliance at each, but to be honest I think a cold vMotion is probably my favoured route.
 
Man of Honour
Joined
20 Sep 2006
Posts
33,887
Ah, the EVA 6500 - takes me back! You'll need to go 3PAR or Nimble with lots of flash to keep up with your shiny new servers!

EVC allows migration between CPU generations, so you can have a cluster with different Intel or AMD generations and freely vMotion around, provided that the correct EVC level is set and exposed to the VM. The idea is that most customers do a 3-5 year hardware refresh cycle, so they can throw the new servers into the cluster and slowly decommission the old ones once the workloads are migrated, all with zero downtime to the VMs.

I don't think EVC was ever intended for AMD-to-Intel migrations. I'm not sure any other Type 1 hypervisor supports that at present, apart from perhaps Proxmox, and even then I don't think it's intended for production use, as you lose a load of CPU features.
 
Man of Honour
Joined
30 Oct 2003
Posts
13,229
Location
Essex
Ah, the EVA 6500 - takes me back! You'll need to go 3PAR or Nimble with lots of flash to keep up with your shiny new servers!

EVC allows migration between CPU generations, so you can have a cluster with different Intel or AMD generations and freely vMotion around, provided that the correct EVC level is set and exposed to the VM. The idea is that most customers do a 3-5 year hardware refresh cycle, so they can throw the new servers into the cluster and slowly decommission the old ones once the workloads are migrated, all with zero downtime to the VMs.

I don't think EVC was ever intended for AMD-to-Intel migrations. I'm not sure any other Type 1 hypervisor supports that at present, apart from perhaps Proxmox, and even then I don't think it's intended for production use, as you lose a load of CPU features.

Yeah, that SAN is getting on now; I've had it in here for around eight years and it has been absolutely bulletproof. Even when we had multiple sustained disk failures it just soldiered on, and HP support was excellent. It's not even that slow by today's standards, but with the age of the disks and dwindling support, it's certainly time next year to take stock and invest in new storage infrastructure. You probably guessed it, but it will likely be whatever HP have bought up recently, so as you say 3PAR or Nimble are options; I guess Pure are as well, but they're not HP. There might even be a handful of others to look at, but everybody raves about Nimble, so perhaps I'll go with that.
 
Man of Honour
Joined
20 Sep 2006
Posts
33,887
I really like Nimble, and the plugin for vCenter makes things a doddle. I'd recommend looking into VVols and controlling the storage using policies rather than traditional LUN-based management.
 
Man of Honour
Joined
30 Oct 2003
Posts
13,229
Location
Essex
I really like Nimble, and the plugin for vCenter makes things a doddle. I'd recommend looking into VVols and controlling the storage using policies rather than traditional LUN-based management.

Advice taken, and I've jotted it in my notebook. I have some research and learning to do, for sure. I'm getting better VMware support on here than via my support contract :D
 
Man of Honour
Joined
20 Sep 2006
Posts
33,887
Advice taken, and I've jotted it in my notebook. I have some research and learning to do, for sure. I'm getting better VMware support on here than via my support contract :D
Essentially, rather than presenting LUNs using various QoS configurations and RAID types, you do it all in vCenter with the use of storage profiles. You marry them up with what the storage provides and it's all done automatically. No more Storage vMotion required to change the storage policy; just change it on the VMDK and it's all done for you.
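(For the curious, "change it on the VMDK" boils down to a reconfigure with a profile spec attached to the disk. A hedged pyVmomi sketch, where the VM name and policy ID are placeholders you'd pull from your own inventory and SPBM:)

Code:
# Hypothetical sketch: apply a storage policy to one VMDK in place, no Storage
# vMotion. The profileId is a placeholder you'd fetch from the SPBM endpoint.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.local',
                  user='administrator@vsphere.local', pwd='...', sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == 'app-server-01')
view.DestroyView()

# First virtual disk on the VM; loop over all of them in real use.
disk = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk))
policy = vim.vm.DefinedProfileSpec(profileId='placeholder-spbm-profile-id')
change = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
    device=disk,
    profile=[policy])
WaitForTask(vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change])))
Disconnect(si)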
 
Man of Honour
Joined
30 Oct 2003
Posts
13,229
Location
Essex
Essentially, rather than presenting LUNs using various QoS configurations and RAID types, you do it all in vCenter with the use of storage profiles. You marry them up with what the storage provides and it's all done automatically. No more Storage vMotion required to change the storage policy; just change it on the VMDK and it's all done for you.

Sounds like the future to me. I see no reason not to adopt the latest and greatest technology if it has a significant benefit, and this sounds like it might. I don't tend to do a huge amount of Storage vMotion; if I'm honest, rather than shifting too much about, I just buy another tray, expand the EVA live and expose that to my hosts. Anything that automates storage profiles is a good thing in my book.
 
Soldato
Joined
5 Sep 2006
Posts
3,553
Location
West Ewell, Surrey
On the subject of storage, we're currently in the middle of a Hyper-V to VMware migration and have Nimble as our backend storage for the VMware cluster, and it has been pretty good so far.

As ChrisD says, the integration it has with VMware makes creating and managing the storage a lot easier. This is from someone coming from a Dell estate with PS6100 SANs.

Hope the EPYC migration goes well; I'll be interested in the results and the uplift in performance.
 
Man of Honour
Joined
30 Oct 2003
Posts
13,229
Location
Essex
Well, a bit of an inconvenience today! My servers are held in French customs :( They should have been released for delivery today, but now it will be Tuesday :( Not what you want when you have put all the legwork in.

Properly disappointed; I've barely slept in a week with the overnight work, and I came in today just to get these racked and ready for me to play with. Meh, time to go home, and we go again on Tuesday.
 
Man of Honour
Joined
30 Oct 2003
Posts
13,229
Location
Essex
For those interested:

[server photos]
Rome is ready to rock. I've only been back at work a few days and I'm working from home tomorrow, but I expect to have it singing by the weekend. So much upgrade room left: three empty sockets and loads of space for memory and CPU upgrades. Should be a fun few months.
 