Have CPUs reached the limit?

Do all programs even use all the cores and threads yet?
A lot of programs don't.

Encoding and decoding do, because it's easy to split that work into N chunks, one per core (rough sketch below).
Games are starting to get there, but they don't really need that much CPU for most stuff.
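
To make that concrete, here's a minimal sketch of the split-into-N-chunks idea applied to a toy "encode" pass that just inverts bytes; encode_chunk, encode_parallel and the buffer size are made up for illustration, not taken from any real codec.

```cpp
// A minimal sketch of "split the work into N chunks, one per core". The
// "encode" step here just inverts bytes; names and sizes are illustrative.
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

// Toy stand-in for a real encode pass over [begin, end).
void encode_chunk(std::uint8_t* begin, std::uint8_t* end) {
    std::transform(begin, end, begin,
                   [](std::uint8_t b) { return static_cast<std::uint8_t>(~b); });
}

void encode_parallel(std::vector<std::uint8_t>& buf) {
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = (buf.size() + cores - 1) / cores;  // ceiling divide
    std::vector<std::thread> workers;
    for (unsigned c = 0; c < cores; ++c) {
        const std::size_t lo = c * chunk;
        if (lo >= buf.size()) break;
        const std::size_t hi = std::min(buf.size(), lo + chunk);
        workers.emplace_back(encode_chunk, buf.data() + lo, buf.data() + hi);
    }
    for (auto& w : workers) w.join();   // each core finishes its own chunk
}

int main() {
    std::vector<std::uint8_t> frame(std::size_t(1) << 24, 0x5A);  // 16 MiB dummy frame
    encode_parallel(frame);
}
```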

What is potentially more problematic is the lack of competent optimisation in applications. The amount of work you can do these days on modern CPUs is staggering. But the insistence on high-level, inefficient languages, and a lack of understanding of the hardware, means a lot of programs run terribly.
 
For games to make better use of the cores, you either need very good low-level programmers who are given enough time in the development process to optimise things - slim chance!

Or better middleware or frameworks which "attempt" to balance things - maybe by having their own schedulers.
A tall order, but if max clocks and IPC become stagnant there may be no other options.
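
For what "frameworks with their own schedulers" could look like at its very simplest, here's a sketch of a thread-pool style job scheduler; JobScheduler and its interface are invented for illustration, and real engine job systems layer work stealing, priorities and dependencies on top of something like this.

```cpp
// A bare-bones sketch of a framework-level scheduler: a fixed pool of worker
// threads pulling jobs from one shared queue.
#include <algorithm>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class JobScheduler {
public:
    explicit JobScheduler(unsigned workers) {
        for (unsigned i = 0; i < workers; ++i)
            threads_.emplace_back([this] { run(); });
    }

    ~JobScheduler() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            stopping_ = true;
        }
        cv_.notify_all();
        for (auto& t : threads_) t.join();  // workers drain the queue, then exit
    }

    void submit(std::function<void()> job) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            jobs_.push(std::move(job));
        }
        cv_.notify_one();
    }

private:
    void run() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                cv_.wait(lock, [this] { return stopping_ || !jobs_.empty(); });
                if (stopping_ && jobs_.empty()) return;
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();  // run outside the lock so other workers keep pulling jobs
        }
    }

    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> jobs_;
    std::vector<std::thread> threads_;
    bool stopping_ = false;
};

int main() {
    JobScheduler scheduler(std::max(1u, std::thread::hardware_concurrency()));
    for (int i = 0; i < 100; ++i)
        scheduler.submit([] { /* physics step, culling, audio mix, ... */ });
}   // destructor drains remaining jobs and joins the workers
```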

Increasingly large CPUs will actually have to dial down the clocks. Maybe something more like big.LITTLE, but with even larger differences between the mainstream little cores and 2 to 4 very fast monster cores.
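
Making use of a split like that in software means explicitly steering work onto the right kind of core. A rough, Linux-only sketch using pthread affinity is below; the core numbers are pure assumptions, and real code would read the topology from /sys/devices/system/cpu first.

```cpp
// A rough sketch (Linux-only, core numbers are assumptions): steering a
// latency-critical thread onto an assumed fast core and background work onto
// an assumed little core via thread affinity.
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <thread>

// Pin the calling thread to a single logical CPU; returns false on failure
// (e.g. the CPU index doesn't exist), in which case the default affinity stays.
bool pin_to_cpu(int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set) == 0;
}

int main() {
    // Assumption: CPU 0 is one of the fast cores and CPU 4 is a little core.
    std::thread critical([] {
        pin_to_cpu(0);
        // ... latency-sensitive simulation/render work would go here ...
    });
    std::thread background([] {
        pin_to_cpu(4);
        // ... asset streaming, audio mixing, telemetry, etc. ...
    });
    critical.join();
    background.join();
}
```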
 
Do others get the impression that the writing is on the wall for x86... Why is it that, in a world where compatibility is king, Windows on ARM has such strong support?... Is WINTEL going to be a historical footnote soon? Surely efficiency doesn't trump other benefits in the desktop space.
 
No I don't think so.

AMD and Intel have teamed up (the x86 Ecosystem Advisory Group) to make sure future x86 instructions stay compatible between the two and get adopted more easily.

There are also proposals to drop 32-bit support and a load of other cruft, which might give up to a 5% per-core improvement. Though it's mostly to reduce the amount of backwards-compatibility validation testing they need to do.
 
They've been saying "The End is Nigh!" for x86 for the last 40 years. x86 was obsolete the day they stopped making the original IBM PC. Everything since then has been all about compatibility, efficiency be damned.
 
I'm no expert, but from what I've read and learnt on this issue, the main thing holding back CPU performance is available memory/cache.

Memory doesn't get the same benefit from a node shrink as processing cores do, so going forwards the focus will be on packaging and on designing ways to bring memory chips closer to the CPU die, like AMD have done with X3D.
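
A quick way to see how much the memory side dominates: sum the same array sequentially and then in cache-line-sized jumps. The arithmetic is identical, only the access pattern changes; the element count and strides here are arbitrary assumptions for illustration.

```cpp
// A small experiment: sum the same array sequentially vs. with a 64-byte
// stride. Element count and strides are arbitrary illustrative choices.
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    const std::size_t n = std::size_t(1) << 26;   // 64M ints (~256 MiB), well past any L3
    std::vector<int> data(n, 1);

    auto time_sum = [&](std::size_t stride) {
        auto start = std::chrono::steady_clock::now();
        std::int64_t sum = 0;
        // Visit every element exactly once, but in `stride`-sized jumps.
        for (std::size_t offset = 0; offset < stride; ++offset)
            for (std::size_t i = offset; i < n; i += stride)
                sum += data[i];
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                      std::chrono::steady_clock::now() - start).count();
        std::printf("stride %zu: sum=%lld, %lld ms\n",
                    stride, (long long)sum, (long long)ms);
    };

    time_sum(1);   // sequential: cache lines and the prefetcher do their job
    time_sum(16);  // 16 * 4-byte ints = 64 bytes: a fresh cache line per element
}
```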
 

Or make CPUs bigger - why be constrained by old ATX standards? Make CPUs as big as GPUs and change the motherboard design, then you'll have plenty of space for memory - maybe the CPU could slot in like a PCIe card? Or maybe that isn't the right solution, but the point is to always ask "why". If we say the issue is too little memory on CPUs, then ask why we can't have more memory. If the answer is that it takes up too much space, then ask why CPUs are so small compared to other PC parts and why we can't make them bigger. Oh, because with ATX there's limited space around the socket - so why are we still using a 30-year-old ATX standard, etc.
 