
Have Intel's Efficiency cores made a difference to you?

The game doesn’t really need to know about the cores.

It’ll issue calls to the OS using the normal commands of the relevant APIs and wait for an answer back from the OS. It's up to the OS to handle all the calls which various programs are putting on it.

How the OS handles all those calls is down to its scheduler, which will filter, sort, and assign the commands to the CPU. It's a massively complicated process. Programs will request resources, request call priorities, etc.; the OS will balance those against whether the calls are coming from the currently active window, a minimised one, a background task and so on … and also consider what cores are available to it. From there it can pass out the work to what it thinks are the most appropriate cores on the CPU.
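The "request priority" part can be illustrated with the simplest such hint: a process lowering its own scheduling priority so the OS favours other work. This is a minimal sketch using Linux's os.nice for illustration (on Windows the rough equivalent would be SetPriorityClass); the scheduler still makes the final placement decision.

```python
import os

def run_as_background_task() -> None:
    """Hint to the OS scheduler that this process is low priority.

    os.nice() adds to the process's niceness (higher = lower priority);
    the scheduler, not the program, still decides which core runs it.
    """
    current = os.nice(0)          # nice(0) just reads the current value
    if current < 10:
        os.nice(10 - current)     # raise niceness to 10 (lower priority)
    # ... do background work; a hybrid-aware scheduler may now
    # prefer to park this process on an E-core
```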

I’d say that Windows has been caught on the hop a little with the sudden jump in core counts over recent years, and with P/E cores, and there have been issues with the scheduler as a result … but as AMD and Intel share their designs and operations with MS, the scheduler can be updated to be more aware of the CPU/core structure and how to get the best out of it.

I watched a video just the other day where Intel was showing how it's getting better at the scheduling within Windows. It's in the interest of AMD and Intel to work with the likes of MS to make the scheduler work better.
Yes, I agree with that. But there were no calls in the Windows APIs to differentiate hybrid cores, and the Windows scheduler itself can't possibly know which threads need to run the fastest in a game. It is up to the game programmers to determine this. But that information didn't exist when a lot of the games we play were made; I don't think it is even available now. The game could manually run some quick (couple-of-second) benchmarks to work out which cores are the P-cores and use those, but again this assumes the developers have coded specifically for hybrid cores.
This is exactly why AVX-512 had to be dropped: because Windows does not know which threads may possibly use it, the instruction set had to be identical between P and E cores. If there were some mechanism to state what features a thread would use, they could have kept it, but there isn't.
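The "quick benchmark" idea mentioned above can be sketched: pin a thread to each logical CPU in turn, time a fixed workload, and rank the cores by speed, with the fastest ones presumably being the P-cores. This is a minimal illustration using Linux's os.sched_setaffinity as a stand-in; a Windows game would use the Win32 affinity APIs instead.

```python
import os
import time

def time_core(cpu: int, iterations: int = 500_000) -> float:
    """Pin the current process to one logical CPU and time a fixed busy loop."""
    os.sched_setaffinity(0, {cpu})             # 0 = this process
    start = time.perf_counter()
    acc = 0
    for i in range(iterations):
        acc += i * i                           # fixed integer workload
    return time.perf_counter() - start

def rank_cores() -> list[int]:
    """Return logical CPU ids sorted fastest-first (P-cores should lead)."""
    cpus = sorted(os.sched_getaffinity(0))
    timings = {cpu: time_core(cpu) for cpu in cpus}
    os.sched_setaffinity(0, set(cpus))         # restore the original affinity
    return sorted(cpus, key=lambda c: timings[c])

if __name__ == "__main__":
    print("cores fastest-first:", rank_cores())
```

In practice a game would also need to repeat each measurement a few times, since one noisy sample can misclassify a core.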
 
But how about during everyday browsing, for instance?
To be honest I'd be concerned if we were in a situation with modern CPUs where E-cores made any difference for everyday browsing. I have a 10-year-old 3570K system in the kitchen which can handle web browsing and video streaming absolutely fine. That's a 4-thread CPU, so I can't imagine any modern quad-core is going to struggle.
 

For me it would be the reverse: a browser should not ordinarily stress a CPU, and therefore the browser process should be shunted onto an E-core until it's time for a browser application to stress the CPU.
 
OK, maybe you get a difference in power consumption from it, but in performance terms it shouldn't make a noticeable difference, because there isn't sufficient load to require offloading away from the P-cores to free up capacity on those.
 
The irony of Intel pushing "efficiency" cores in CPUs which consume up to 400W :rolleyes:

From what I can tell, they make ****** all difference to gaming and the only things that seem to benefit are some "productivity" applications like rendering. So the only scenarios in which the efficiency cores make any real difference are ones where the overall power draw is the highest.

It baffles me why reviews even include productivity benchmarks, let alone give them so much space, as 99.9% of people will never use such applications.
 
They don't draw 400W unless you want them to (unlock power limits etc.), in which case any CPU can draw up to 999 watts if you can cool it. ANY.
 

The key thing is that all Z790 boards will ship without power limits enabled for 13th gen so this is the default behaviour.

Basically, if you buy a 13900K and a Z790 board, do no BIOS tweaks and just fire up an all-core workload, the CPU alone will draw around 400W.
 
How do you know that? Have you tried all Z790 boards? I'm pretty sure the first time you get into the BIOS to enable XMP, boards pop up a window with the power limit options; you have to choose one or they won't let you progress.
 

I've seen at least one reviewer clearly state that there are no more power limits or "Tau" settings by default on the motherboards so the CPUs run at full power indefinitely.

Intel seem to have just accepted that most people disabled the Tau time limits anyway, and indeed some motherboards shipped that way by default, and have just said "screw it" and told all the motherboard manufacturers to remove the limits.

Yes of course you could always do this yourself but the fact Intel has adopted limitless power by default goes against the whole concept of "efficiency cores", which is my point.
 
I'm not sure it's Intel's guidelines, though. Mobo manufacturers, especially for Intel (but they do it with AMD as well; remember the whole PPT fiasco?), are trying to one-up one another by pushing more and more wattage to look better in motherboard reviews. I don't know, man, the latest CPUs from both companies are useless out of the box. Which is fine with me since I always tinker with the BIOS, but for average people I can't recommend anything other than Zen 3. I had a friend recently asking me what to buy; he runs multiple VMs and stuff, and since he wasn't into tinkering with the BIOS, my only recommendation was the 5950X. Everything else is terrible for plug and play, and that includes ADL/RPL and Zen 4.
 
Have they made any difference to you in everyday use? Or are they just a gimmick?


Of course. When gaming, they're used for Discord/YouTube/AV/all other background apps, leaving the P-cores to do the heavy lifting in games.

If you're running a clean W11 OS and just testing a game with no other programs open (atypical of actual use), then of course things like a 5800X3D look amazing. Use them in the above environment, with many common background applications (nothing extreme, just the common apps), and you see a different picture painted.

I'm really hoping that Zen 4 3D cache CPUs have at least 12-core options for this reason, 16-core preferred. That would be an amazing gaming CPU, one I doubt Intel could touch for years.
 
I'm not sure it's Intel's guidelines, though. Mobo manufacturers, especially for Intel (but they do it with AMD as well; remember the whole PPT fiasco?), are trying to one-up one another by pushing more and more wattage to look better in motherboard reviews. I don't know, man, the latest CPUs from both companies are useless out of the box. Which is fine with me since I always tinker with the BIOS, but for average people I can't recommend anything other than Zen 3. I had a friend recently asking me what to buy; he runs multiple VMs and stuff, and since he wasn't into tinkering with the BIOS, my only recommendation was the 5950X. Everything else is terrible for plug and play, and that includes ADL/RPL and Zen 4.

Yes, it's just a benchmark race.

If you watch der8auer's video, it's very interesting what happens if you do enforce a power limit on the 13900K. You can massively reduce the power consumption with a relatively small decrease in performance. At the extreme, he demonstrated it could run almost neck and neck with the 12900K whilst consuming a third of the power!

Basically it's the law of diminishing returns, and Intel have decided to allow the things to draw as much power as they like purely to top the benchmarks and beat AMD. If you take the time to tweak your power limits and, as Roman demonstrated, even underclock the E-cores, you can make the 13900K incredibly efficient.

All I'm saying is that Intel's desire to beat AMD in the benchmarks and the ridiculous power consumption this results in is at odds with the concept of efficiency cores in the first place. All conflicting marketing - efficiency cores look good to some people whilst topping the benchmark tables looks good to others.
 
The biggest disappointment for me, even before the reviews, was the leaks I saw suggesting that the E-cores would be clocked at 4.3+ GHz out of the box. That was a facepalm moment for me, and I made a comment about it a couple of months ago.

But yeah, the funny thing is all of these products (12900K / 13900K / 7950X) are actually incredibly efficient when tuned properly. And personally I don't have a problem with high power consumption, but your design needs to scale with power. Currently none of these CPUs scale past 150W, so I don't see the point of shipping with 230W and 250W power limits. That's absurd. My 12900K is basically running at 170W and scores 28k in CBR23; to take it to 30k I need around 260-270W. Why the hell anyone would do that besides benchmarking is beyond me.
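The diminishing returns described above are easy to put in numbers as points per watt, using the quoted 12900K figures (a rough illustration only; 265W is just the midpoint of the quoted 260-270W range).

```python
def points_per_watt(score: float, watts: float) -> float:
    """Cinebench R23 score delivered per watt of package power."""
    return score / watts

tuned  = points_per_watt(28_000, 170)   # tuned 12900K: ~164.7 pts/W
pushed = points_per_watt(30_000, 265)   # pushed to 30k: ~113.2 pts/W
# ~7% more score costs ~56% more power: efficiency drops by roughly a third.
print(f"tuned: {tuned:.1f} pts/W, pushed: {pushed:.1f} pts/W")
```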
 

Power consumption is high up my list due to rising power costs. With everything I use now, I think about its power consumption, so yeah, I look into it more than before, when it wasn't as much of a concern.
 
Of course it is, but what I'm saying is 250W CPUs wouldn't be bad if the performance actually scaled. Like, let's say going from 150 to 250W gave you a 40-50% performance increase; that's fine. Currently we are getting like 10% for all the extra power.
 