It's not that ancient though, and it's worth revisiting. My post went unreplied to for over 2.5 years until now, but I still think it's valid. I took a look at core scaling in the brand-new Cyberpunk 2077 here:
https://overclock3d.net/reviews/sof...n_hex_edit_tested_-_boosted_amd_performance/1
What you can see from the hex-edited results is that a six-core CPU is extremely competitive, even against the 16c/32t part.
That's an interesting chart.
To my eyes that's showing us possibly two things:
1. Cyberpunk only really scales to six cores/is optimised for six cores
2. The frame-rate is GPU bound by the 3070, so after a certain point more CPU oomph isn't important.
Both could be true, and even if only the second one holds, it doesn't mean that a six-core is a bad idea, because the bottleneck sits somewhere else for the foreseeable future.
My post from back above was arguing that, beyond a certain point, developers (and I am one, and I do this) target an arbitrary level of parallelism rather than a specific core count. When that's done successfully, things tend to just scale to use whatever cores are available; there's a minimal sketch of the idea below. That's hard in something that needs guarantees of responsiveness, like a game, and much easier in batch processing, but we'll get there. Even then it doesn't mean more cores = more frames, because the GPU is the bigger factor (which is why people have been able to get away with Sandy Bridge i5s for so long).
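To make the "arbitrary parallelism" point concrete, here's a minimal C++ sketch, assuming a simple batch workload. The `parallel_for` helper is hypothetical, made up for illustration rather than taken from any engine or from Cyberpunk itself; the point is that it sizes itself off `std::thread::hardware_concurrency()` instead of a hard-coded core count:

```cpp
// Core-count-agnostic parallelism: split a batch job across however many
// hardware threads the machine reports, rather than hard-coding "6 cores"
// or "8 cores". Illustrative sketch only, not code from any real engine.
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Apply `work` to every index in [0, count), chunked across all available
// hardware threads. The caller never mentions a core count.
template <typename Fn>
void parallel_for(std::size_t count, Fn work) {
    // hardware_concurrency() may return 0 on exotic platforms; clamp to 1.
    const std::size_t n_threads =
        std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = (count + n_threads - 1) / n_threads;

    std::vector<std::thread> pool;
    for (std::size_t t = 0; t < n_threads; ++t) {
        const std::size_t begin = t * chunk;
        const std::size_t end = std::min(begin + chunk, count);
        if (begin >= end) break;  // fewer items than threads
        pool.emplace_back([=] {
            for (std::size_t i = begin; i < end; ++i) work(i);
        });
    }
    for (auto& th : pool) th.join();
}

int main() {
    // Toy batch workload: square a million values in place.
    std::vector<double> data(1'000'000);
    std::iota(data.begin(), data.end(), 0.0);

    parallel_for(data.size(), [&](std::size_t i) { data[i] *= data[i]; });

    std::cout << "threads used: " << std::thread::hardware_concurrency()
              << ", data[10] = " << data[10] << '\n';
}
```

A real game job system would use a persistent pool with work-stealing and task priorities rather than spinning up threads per call, precisely because frames need those responsiveness guarantees. This naive fork-join version is the easy batch-processing case I mentioned.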
I would be interested to see the same chart repeated with a 3080 and a 3090 across the various core counts, to see whether the bottleneck moves at all and whether 8C/16T becomes the levelling-off point. Comparing the runs would give us more of a hint as to what's going on.