When creating a VM, is it possible to align all cores to a single CCX?

On AMD chips a CCX is a cluster of cores sharing an L3 cache - four cores on Zen/Zen+/Zen 2 parts, eight from Zen 3 onwards - and Ryzen, Threadripper, and Epyc are all built from these blocks. The performance difference between the Ryzen 3 3100 (four cores split 2+2 across two CCXs) and the Ryzen 3 3300X (all four cores on one CCX) shows how much it matters for cores to share a CCX. When creating an 8-core (16-thread with SMT) VM, can you force all 8 virtual cores onto the same CCX?
 
What software are you using? ESX?

And which AMD processor specifically? Though you're not wrong in what you're saying with regards to the hops taken to get to memory - not all the AMD stuff is the same from a NUMA layout perspective.
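If you can get a Linux environment on the box (or the host itself is something Linux-based like KVM/Proxmox rather than ESXi), it's worth checking what the layout actually looks like before pinning anything - sysfs shows which logical CPUs share an L3, which on the Zen parts maps to a CCX/CCD. A rough sketch, assuming a stock kernel where index3 is the L3:

# logical CPUs that share cpu0's L3 cache - i.e. the same CCX/CCD
cat /sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list
# or a per-CPU view of core, NUMA node and cache groupings
lscpu --extended

Whichever IDs come back grouped under the same L3 are the ones you'd want the vCPUs kept to.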
 
You can do physical core affinity in most hypervisors; however, the use case and benefit of doing it vary greatly - whether you see any real benefit is extremely dependent on the hardware and the software load. Limiting the cores the scheduler can use can ultimately be detrimental to performance. I've seen this attempted several times, typically driven by some licensing restriction, but generally it's recommended to avoid this type of configuration as standard, as it can have undesired knock-on effects on other scheduling activities. It is an option, but it falls into the "just because you can, doesn't mean you should" category for me.
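For reference, if you did decide to try it on ESXi, the per-VM knob is "Scheduling Affinity" (Edit Settings > Resources > Advanced CPU), which as far as I recall lands in the .vmx along these lines - the numbers are purely illustrative and assume logical CPUs 0-15 happen to be one CCX/CCD plus its HT siblings on your particular chip:

sched.cpu.affinity = "0-15"

You'd still need to check the host's actual core numbering first, and everything above about it usually not being worth the hassle still applies.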
 
It is an option, but it falls into the "just because you can, doesn't mean you should" category for me.

Absolutely this. VMware and the likes have spent years and years perfecting CPU scheduling, so I wouldn't mess around with it personally unless this is a homelab setup trying to do something really educational, which it could well be!
 
Just dug out this post that pretty much covers what you are after (for ESXi).

https://frankdenneman.nl/2019/02/19/amd-epyc-and-vsphere-vnuma/

For example, the EPYC 7401 contains 24 cores, 6 cores per Zeppelin and thus 6 cores per NUMA node. When using the default setting of numa.vcpu.min=9, an 8 vCPU VM is automatically configured like this.

[Screenshot of the resulting PPD/VPD layout for the 8 vCPU VM, by @AartKenens]
A VPD is the virtual NUMA client that is exposed to the guest OS system, while a PPD is the NUMA client used by the VMkernel CPU scheduler. In this situation, the ESXi scheduler uses two physical NUMA nodes to satisfy CPU and memory requests while the guest OS perceives the layout as a Uniform Memory Access (UMA) system. In a UMA system, the access time to a memory location is independent of which processor makes the request, or which memory chip contains the transferred data. I.e., pretty much the same latency and bandwidth throughout the system. However, this is not the case as reported in this article above. Reading and writing remote CCX cache and remote memory (on-die) is slower than local memory even within the same Zeppelin. By setting numa.vcpu.min=6, two VPDs are created, and thus the guest OS is made aware of the physical layout by the ESXi scheduler. The guest OS and the applications can optimize memory operations to attain consistent performance.
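Worth adding that numa.vcpu.min is just a per-VM advanced configuration parameter, so going by the article, exposing the two NUMA clients to the guest in that 7401 example would be a single entry in the VM's advanced settings / .vmx, roughly:

numa.vcpu.min = "6"

(Values in a .vmx are quoted strings; set it with the VM powered off and sanity-check the resulting layout from inside the guest, since as I understand it the vNUMA topology is worked out at power-on.)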
 