Have Intel's Efficiency cores made a difference to you?

 
The problem is that they're nigh-on useless in gaming, and the extent of their usefulness in production programs is debatable (compared to just adding more regular cores, like AMD does), all of which is compounded by the scheduler not being perfect at maximising their use during multi-tasking (e.g. I remember them tripping up quite easily when running Handbrake in the background; fixable if you know what you're doing, but you shouldn't have to).

To me they'll make a lot of sense once Intel can cram a lot more of them into the CPU for MT gains, so RPL looks more like what it should've been, but ultimately they still can't unseat AMD's 16-core behemoth, and they're still hopelessly behind on PPW. So what's the point? It's a nice gain when you need better MT in the mid-range (12600K/13600K), but at the bottom end they're MIA and at the high end they're still not good enough, so it's just too niche in usefulness right now. If AMD decides to release a 7950X3D, it will absolutely obliterate Intel at the high end, and I don't see what Intel could respond with.

It wouldn't be such a glum situation if they could execute their roadmap better but alas, it's all just coming in too slowly.
 
I've a PC with a 12400 in it and one with a 12600K, and they both seem about the same to me. I don't do much productivity work where the extra E-cores might actually make a difference, though.
 
The way I view this is they're designed to solve the problem of loads of background tasks.
That is a Windows problem.
So there should be a Windows solution. (remove all the bloat) (or use Linux)
And it's worse than that because this CPU design depends on Windows to make use of it correctly. Relying on the problem to make the workaround for the problem work is fundamentally flawed.
 
I guess that's the question: does Windows 11 actually split the load well?

I'm upgrading from a 6700K to a 13700K and I know I'm going to see a significant boost, considering the E-cores are apparently equivalent to Skylake cores, but how well does Windows manage them?
 
But how about during everyday browsing, for instance?
Superfluous. Even for hardcore browsing you'd benefit more from more and faster RAM than from E-cores. In fact, E-cores are the opposite of what you want for tasks like everyday browsing. It's like in phones: for bursty loads you want one or two super-fast cores, and for other MT tasks you add more cores even if they're individually slower.

That's the thing about the E-cores: even though marketing tried to sell people on misinfo, they're really a way for Intel to add more cores (in a given die space) for MT apps that are voracious for multithreading (encoding, rendering, etc.), but they do nothing in things such as gaming or browsing, where even with 8 cores you're at diminishing returns per extra core, if they're even that minimally useful.
It was basically the only way Intel could respond to being so outgunned by AMD in MT without them being forced to sell HEDT processors at a huge discount, because Intel hasn't mastered chiplets like AMD so they can't compete with just pure cores.
This is their stop-gap solution until their other projects pan out (if they do). But I don't know how much they can really push the E-core counts, because even in the leaked HEDT parts, and even on server, they're not actually going that wild on the number of E-cores.

That's why I've said they have a really nice sweet-spot for the 1X600K CPUs in terms of gaming perf/price/power/MT, but they're losing in all the other match-ups, whether client (desktop) or server.
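The die-space argument above can be put into rough numbers. This is a hedged back-of-envelope sketch, not a measurement: the figures below (about four E-cores fitting in one P-core's die area, and one E-core delivering roughly half a P-core's throughput) are commonly cited approximations for Alder Lake, and both are assumptions here.

```python
# Back-of-envelope: MT throughput per unit of die area, P-cores vs E-cores.
# Both figures are assumptions, not measurements:
#   - ~4 E-cores fit in the die area of 1 P-core
#   - one E-core delivers ~0.55x the throughput of one P-core

P_THROUGHPUT_PER_AREA = 1.0   # P-core throughput per unit area (normalised)
E_AREA = 0.25                 # one E-core occupies ~1/4 of a P-core's area
E_PERF = 0.55                 # one E-core's throughput relative to a P-core

e_throughput_per_area = E_PERF / E_AREA   # 4 E-cores in a P-core's footprint

print(f"P-core throughput per area: {P_THROUGHPUT_PER_AREA:.2f}")
print(f"E-core throughput per area: {e_throughput_per_area:.2f}")
# Under these assumptions, E-cores give roughly 2.2x the MT throughput
# per unit of die area, which is the whole rationale for spending area
# on them in heavily threaded workloads.
```

This is also why they do nothing for games: a game's critical threads want one fast core each, and throughput per area is irrelevant to them.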
 
It was basically the only way Intel could respond to being so outgunned by AMD in MT without them being forced to sell HEDT processors at a huge discount, because Intel hasn't mastered chiplets like AMD so they can't compete with just pure cores.

I agree with half of that. Intel has been doing chiplet-related work and the R&D behind it for decades (some early quad-cores, Pentium Ds, and some mobile and Xeon products used chiplet-like designs as well, with varying degrees of success; some had horrible memory latency issues). I don't think they're limited tech-wise there; IMO it's more about them artificially segmenting the market so as to not erode their HEDT segment.
 
Have they made any difference to you in everyday use? Or are they just a gimmick?
They are freaking nice. Yesterday I was unpacking a huge zip file while playing Warzone. I minimised the unzipping program so it all ran on the E-cores, and my game was completely unaffected. I couldn't believe my eyes: there were no frame drops or stutters. When I tried to do something similar with a 3950X or a 10900K, the game kept dropping to single-digit framerates.

With that said, a normal home user probably doesn't need more than 8 of these. Unless you're constantly running these kinds of background tasks, 8 E-cores are fine.
 
They are freaking nice. Yesterday I was unpacking a huge zip file while playing Warzone. I minimised the unzipping program so it all ran on the E-cores, and my game was completely unaffected. I couldn't believe my eyes: there were no frame drops or stutters. When I tried to do something similar with a 3950X or a 10900K, the game kept dropping to single-digit framerates.

With that said, a normal home user probably doesn't need more than 8 of these. Unless you're constantly running these kinds of background tasks, 8 E-cores are fine.
You should be able to do that on any CPU: just limit the number of cores used for unzipping to 1 or 2, depending on how many you have. A game will still run fine on the remaining cores.
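One way to do that limiting on Windows is the built-in `start /affinity <hex mask>` command, which takes a bitmask of allowed logical CPUs. A minimal sketch of building that mask; which logical CPU indices correspond to which physical cores is system-specific, so the `[0, 1]` below is an assumption for illustration, and the `7z.exe` invocation in the comment is hypothetical.

```python
# Build the hex affinity mask that Windows' `start /affinity <mask>` expects,
# e.g. to pin an unzip job to logical CPUs 0 and 1 so a game keeps the rest.

def affinity_mask(cpus):
    """Return a bitmask with one bit set per logical CPU index."""
    mask = 0
    for c in cpus:
        mask |= 1 << c
    return mask

mask = affinity_mask([0, 1])
print(hex(mask))  # 0x3
# Usage from cmd.exe (hypothetical archive/path):
#   start /affinity 3 7z.exe x big-archive.zip
```

The same mask format works with Task Manager's "Set affinity" dialog, which shows the per-CPU checkboxes that the bits correspond to.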
 
The E-cores do seem to be utilised in some games…


[Screenshot: Civilization VI task manager view showing load across all cores, including the E-cores]

Perhaps it varies with how the game is coded and what can be offloaded to the E-cores rather than the performance cores.

There are some games where the E-cores see zero use. Others, like Civ VI above, show that all the cores, including the E-cores, are used.
 
Unless MS has added some new functions to the Windows SDK, how does a game know what is a P-core or an E-core anyway? If a game just scales some tasks across all cores, then it will do just that.
A game is going to need code added to it to handle P- and E-cores separately, or you manually bind the game executable to only the P-cores in Task Manager.

How does Windows know which threads need to run the quickest when you have cores of differing performance? It's all a total minefield. I'm surprised games work as well as they do on the hybrid chips.
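The "manually bind the executable" option can also be done programmatically. A minimal sketch using the Linux affinity API (`os.sched_setaffinity`, Linux-only); on Windows the equivalent is Task Manager's "Set affinity" or the `SetProcessAffinityMask` Win32 call. Which logical CPU indices are P-cores versus E-cores is platform- and chip-specific, so treating `0..7` as the fast cores below is purely an assumption for illustration.

```python
import os

# Pin the current process to a chosen subset of logical CPUs.
# ASSUMPTION: logical CPUs 0-7 are the "fast" cores on this machine;
# check your actual topology before relying on any particular numbering.

fast_cores = {c for c in range(8) if c in os.sched_getaffinity(0)}
if fast_cores:
    os.sched_setaffinity(0, fast_cores)      # restrict to those CPUs only
    print(sorted(os.sched_getaffinity(0)))   # read back what we actually got
```

Games rarely do this themselves; it is exactly the kind of per-machine tweak a user applies when the scheduler gets it wrong.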
 
I think the e-waste cores suck and are useless and cause more problems than they are worth. But Intel has the best 8 P cores there are on same ring/CD for gaming.

To me the Intel Core i7 and i9 chips are 8 core chips when it comes to gaming and the best ones at that.

For more than 8 strong cores, go AMD. Though for gaming, going past 8 cores even on the AMD side means a possible latency penalty for game threads crossing between CCDs. Intel has a different reason for not wanting a game thread on an E-core: those cores are just slow, even though moving work to them carries no such latency penalty. With AMD you get more than 8 good cores, but a latency hit when threads cross from one CCD to the other, which is not good for game threads that need to talk to each other super fast.

Best for productivity is to get a 7950X and lock all game threads to one CCD. You could also do it on a 7900X, but then you're limited to 6 cores and 12 threads on one CCD, which is usually enough for games, though some may be better with 8 cores. Fortunately, for now, no games get any benefit from more than 8 good cores, at least none that I know of or have seen.
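"Lock all game threads to one CCD" boils down to an affinity mask covering that CCD's logical CPUs. A hedged sketch of computing such a mask: it assumes logical CPUs are numbered contiguously per CCD with SMT siblings included (typical for a 7950X under Windows, but verify your own system's numbering before using a mask like this).

```python
def ccd_mask(ccd_index, cores_per_ccd=8, smt=2):
    """Affinity bitmask covering every logical CPU of one CCD.

    ASSUMPTION: logical CPUs are numbered contiguously per CCD,
    e.g. CCD0 = logical 0-15 and CCD1 = logical 16-31 on a 7950X.
    """
    width = cores_per_ccd * smt           # logical CPUs per CCD
    return ((1 << width) - 1) << (ccd_index * width)

print(hex(ccd_mask(0)))  # 0xffff     -> CCD0, logical CPUs 0-15
print(hex(ccd_mask(1)))  # 0xffff0000 -> CCD1, logical CPUs 16-31
```

The resulting value is what you would feed to Task Manager's affinity dialog or `start /affinity` to keep a game off the second CCD.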
 
Unless MS has added some new functions to the Windows SDK, how does a game know what is a P-core or an E-core anyway? If a game just scales some tasks across all cores, then it will do just that.
A game is going to need code added to it to handle P- and E-cores separately, or you manually bind the game executable to only the P-cores in Task Manager.

How does Windows know which threads need to run the quickest when you have cores of differing performance? It's all a total minefield. I'm surprised games work as well as they do on the hybrid chips.
The game doesn’t really need to know about the cores.

It'll issue calls to the OS using the normal commands of the relevant APIs and wait for an answer back from the OS. It's up to the OS to handle all the calls which various programs are putting on it.

How the OS handles all those calls is down to its scheduler, which will filter, sort and assign the commands to the CPU. It's a massively complicated process. Programs will request resources, request priority for their calls, and so on; the OS will balance those against whether the calls are coming from a currently active window, a minimised one, or a background task, and also consider what cores are available to it. From there it can pass out the work to what it thinks are the most appropriate cores on the CPU.

I'd say that Windows has been caught on the hop a little by the sudden jump in core counts over recent years, and by P/E cores, and there have been issues with the scheduler as a result. But as AMD and Intel share their designs and operations with MS, the scheduler can be updated to be more aware of the CPU/core structure and how to get the best out of it.

I watched a video just the other day where Intel was showing how it's getting better at the scheduling within Windows. It's in the interest of AMD and Intel to work with the likes of MS to make the scheduler work better.
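The dispatch decision described above can be caricatured in a few lines. This is a toy sketch only: the rules and thread attributes below are invented for illustration and are vastly simpler than what a real scheduler (with priorities, CPU sets, thread director hints, etc.) actually does.

```python
# Toy model of the decision the OS makes: the game never picks a core
# class itself; the scheduler routes each thread based on what it knows.
# All rules and fields here are invented for illustration.

def pick_core_class(thread):
    """Return 'P' or 'E' for a thread described by a dict of hints."""
    if thread.get("foreground") and thread.get("priority", 0) >= 5:
        return "P"   # latency-sensitive foreground work -> performance core
    if thread.get("background"):
        return "E"   # bulk background work -> efficiency core
    return "P"       # default to a fast core when unsure

threads = [
    {"name": "game render", "foreground": True, "priority": 9},
    {"name": "zip extract", "background": True, "priority": 2},
    {"name": "virus scan",  "background": True, "priority": 1},
]
for t in threads:
    print(t["name"], "->", pick_core_class(t))
```

Even this caricature shows why minimising a heavy app helps on hybrid chips: the window-state hint is one of the signals that pushes its threads onto the E-cores.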
 
I believe Hitman 3 will use E-cores for audio processing and physics. The engine is highly parallelised, and the developers worked with Intel to optimise it for high-core-count CPUs. Whilst the benefit in Hitman is debatable, it is possible for a game to target specific cores.
 
The problem is that they're nigh on useless in gaming,
They are not useless if they function correctly. Many people have two screens and game on the primary screen with other apps open on the second. Like a lot of people, I often play the main game in maximised windowed mode; in fact I'm doing that right now while writing this post on my second screen. It's pretty common, more so with those who play MMORPGs. While I don't have E-cores in my current setup, I often run into situations where E-cores would have stopped stutter and FPS problems if they worked correctly. Even if you only focus on one screen, having E-cores should benefit things like Discord and voice chat, along with any background apps like combat loggers. Have you ever played a game like Stellaris multiplayer where the CPU gets maxed out and Discord voice chat starts breaking up, even on a 6-core, 12-thread CPU? That's when E-cores should benefit.


But how about during everyday browsing, for instance?
If you are just browsing with nothing else major running, they are likely to be unnoticeable. Where they should be noticeable is when you are browsing with background programs running. Say you are video editing and rendering, then alt-tab to browsing. Or you download a game from Steam on a high-speed connection with a virus checker running and a YouTube video playing, possibly while playing around with a gaming-related spreadsheet, which is something I find myself doing every so often while waiting for a game download. Then E-cores should make a difference, if they work correctly.
 