Hyper-threading on ESXi - include in CPU capacity calcs?

I'm putting together a design for a small vSphere / ESXi cluster which will need to run 6 or 9 VMs. The spec' required for each VM is 16 vCPUs and 32GB RAM.

The Xeon Silver 4216 is a 16C/32T CPU, so a dual-socket 2U box would give me 32C and 64T. Three of the VMs on a single host would need 48 vCPUs, which exceeds the physical core count under heavy load, so I'm wondering whether HT helps here or not. I've seen a figure somewhere that HT can add around 30% performance.
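
As a rough back-of-envelope check (treating the ~30% HT uplift as an assumption, since it varies heavily by workload):

```python
# Rough capacity check. The 1.3x hyper-threading uplift is an assumed
# figure, not a guarantee; real gains depend heavily on the workload.
sockets, cores_per_socket = 2, 16             # dual Xeon Silver 4216
physical_cores = sockets * cores_per_socket   # 32 cores, 64 threads
ht_uplift = 1.30                              # assumption
effective_cores = physical_cores * ht_uplift  # ~41.6 "core equivalents"

vcpus_needed = 3 * 16                         # three 16-vCPU VMs
ratio = vcpus_needed / physical_cores         # 1.5:1 vCPU:pCore

print(f"{vcpus_needed} vCPUs vs ~{effective_cores:.0f} core equivalents "
      f"({ratio:.1f}:1 oversubscription)")
```

So even crediting HT the full 30%, 48 vCPUs under sustained load is more than ~42 core equivalents can deliver; HT narrows the gap but doesn't close it.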
 
Yes, there should be data going into the VMs constantly (if not, something else is broken!). Not sure how the traffic coming in is load balanced across the cluster of VMs at the moment though.

Will have a look at the Rome EPYC chips, though availability could be an issue.
 
I've done small clusters before, but they've run out of disk / RAM before becoming CPU-constrained.

Reading around I've found articles like this one:

http://www.vmwarebits.com/content/vcpu-and-logical-cpu-sizing-hyper-threading-explained (my bold)

"This entire article can be summarized in one sentence: never assign more vCPUs than the number of physical cores your host has. You could also argue that you should leave some headroom in not maximizing the number of physical cores in your VMs, but that is beyond the scope of this article."

Dated 2015, so old, but it illustrates my question / concern. I've pinged an email over to a vendor SE to double-check their stance on physical vs logical cores.
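
The article's rule of thumb is easy enough to encode as a sanity check; here it is applied per host (my reading of it, since what matters for me is the aggregate):

```python
def within_physical_cores(vm_vcpus: list[int], host_physical_cores: int) -> bool:
    """One reading of the article's rule: keep the total vCPUs of the
    VMs on a host at or below that host's physical core count."""
    return sum(vm_vcpus) <= host_physical_cores

print(within_physical_cores([16, 16], 32))      # True:  two VMs per host fit
print(within_physical_cores([16, 16, 16], 32))  # False: three oversubscribe
```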
 
A couple of things to look out for when speccing up this cluster:

Memory population performance with Intel Xeon Scalable systems: if you're not populating the channels correctly, you can take a huge hit on performance:
https://www.thomas-krenn.com/en/wiki/Optimize_memory_performance_of_Intel_Xeon_Scalable_systems
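
As a quick illustration, "balanced" means every channel gets the same number of DIMMs; 1st/2nd-gen Xeon Scalable has 6 channels per socket (the DIMM counts below are assumed for illustration):

```python
def population_is_balanced(dimms_per_socket: int, channels: int = 6) -> bool:
    """Balanced population = every memory channel holds the same number
    of DIMMs. Xeon Scalable (Skylake/Cascade Lake) has 6 channels/socket."""
    return dimms_per_socket % channels == 0

print(population_is_balanced(6))  # True:  12x 16GB = 192GB, one DIMM per channel
print(population_is_balanced(4))  # False: 8x 16GB = 128GB leaves 2 channels empty
```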

Enable hyper-threading, but think of it as a bonus: don't count on it in your calculations, and exclude it from your ratios.
https://www.vmware.com/content/dam/...nter-server-67-performance-best-practices.pdf

Read up on NUMA, as local memory is faster than memory accessed from the other CPU:
https://itnext.io/vmware-vsphere-why-checking-numa-configuration-is-so-important-9764c16a7e73
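
Worth noting for this build: a 16-vCPU / 32GB VM happens to fit inside a single NUMA node on a dual 4216 (per-socket figures below assume the 192GB config discussed in this thread):

```python
def fits_in_numa_node(vm_vcpus: int, vm_ram_gb: int,
                      cores_per_socket: int, ram_per_socket_gb: int) -> bool:
    """A VM stays NUMA-local if its vCPUs and RAM both fit within a
    single socket's cores and locally attached memory."""
    return vm_vcpus <= cores_per_socket and vm_ram_gb <= ram_per_socket_gb

# Dual 4216 with 192GB: 16 cores and 96GB local to each socket.
print(fits_in_numa_node(16, 32, 16, 96))  # True
```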

And if you're going EPYC:
https://frankdenneman.nl/2019/02/19/amd-epyc-and-vsphere-vnuma/

Thanks, especially for the DIMM population article, which elaborates on a note in the Dell R740 Technical Guide (below). Will see if 192GB can be had within budget, even if it's over-specced.

"Populate six memory modules per processor (one DIMM per channel) at a time to maximize performance."
 
Whilst the below is based on VMC on AWS, the information is relevant to your sizing.
https://docs.vmware.com/en/VMware-C...98/GUID-F3C4C0FA5C36FC67FBF918030728DD22.html

You may not be running NSX and vSAN, so your overheads will be a little lower, but it does show the effect of your VM sizing vs the physical cores available vs how many servers of that size you can run, based on similarly sized workloads.

Are you also sizing for HA, i.e. leaving sufficient resources available to cover a host failure or maintenance?

There are many considerations when sizing, depending on the business requirements.
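
On the HA question above, the usual rough check is whether the surviving hosts can absorb a failed host's VMs at an acceptable ratio (a simplification of vSphere HA admission control, assuming identically sized hosts):

```python
def survives_host_failure(hosts: int, cores_per_host: int,
                          total_vcpus: int, max_ratio: float = 1.0) -> bool:
    """With one host down, can the remaining hosts carry every VM at or
    below the chosen vCPU:pCore ratio? (Assumes identical hosts.)"""
    remaining_cores = (hosts - 1) * cores_per_host
    return total_vcpus <= remaining_cores * max_ratio

# 3 hosts x 32 cores, six 16-vCPU VMs = 96 vCPUs:
print(survives_host_failure(3, 32, 96))  # False at 1:1 - would need 1.5:1
```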

Another good link, thanks again.

No NSX, vSAN or HA. Budget at present will cover 3 hosts, so the normal workload will be two VMs per host, plus a small VCSA and one other VM. Will see where the pricing sits when it comes to placing an order, but we could trade memory bandwidth (128GB instead of 192GB) for additional CPU cores (18C Gold instead of 16C Silver).
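
For what it's worth, that trade looks like this under normal load (two 16-vCPU VMs per host; 16GB DIMMs and 6 channels per socket are assumed for illustration):

```python
# Comparing the two options above. DIMM size (16GB) and channel count (6)
# are assumptions for illustration, not confirmed specs.
vcpus_normal = 2 * 16  # two 16-vCPU VMs per host under normal load
options = [
    ("2x 16C Silver + 192GB", 32, 6),  # (name, total cores, DIMMs/socket)
    ("2x 18C Gold   + 128GB", 36, 4),
]
for name, cores, dimms_per_socket in options:
    print(f"{name}: {vcpus_normal / cores:.2f}:1 vCPU:pCore, "
          f"channels balanced: {dimms_per_socket % 6 == 0}")
```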
 