
On Intel Raptor Lake, is there any truth to the rumors that disabling all e-cores hurts single-threaded performance of the P-cores?

https://www.overclock.net/threads/o...and-discussion.1799628/page-252#post-29071588

https://www.reddit.com/r/intel/comments/y9z92t/comment/iwhtura/?context=3



It would not make any sense, and I sure hope there is a workaround, because I want zero e-cores enabled. I purchased the Core i9-13900K to be a powerhouse 8-core/16-thread chip, since Intel's P-cores outperform AMD's Ryzen 7000 Zen 4 cores at the same clock speed. I hate the hybrid architecture, but I love the P-cores Intel has.



So is there really any truth to what Falkentyne says at overclock.net, or what Glum-Shape-6516 says in the Intel subreddit thread? Or is there any possible workaround besides having to enable a single e-core? This is on Windows 10. I imagine it may only be a Windows 11 problem, since Windows 11 is Thread Director-aware.



https://fox-laptop.com/pc-component...es-but-without-them-everything-is-only-worse/



That link above suggests it would also be a problem on Windows 11 because of Thread Director, but not on Windows 10, which is not Thread Director-aware. Is there any difference with 13th gen I need to be aware of? Because if this is an issue and I want only an 8-core/16-thread powerhouse chip, I can return it and get a 7700X. I hope I do not have to do that, as Intel has the better P-cores, but I want no e-cores enabled at all.



And before anyone tells me the e-cores cause no harm on 13th gen: that is not the question. I simply do not want them; I want the 8 P-cores, and Intel does not sell a separate SKU with 36 MB or more of L3 cache and 8 well-binned P-cores.
 
Interesting: in the testing Hardware Unboxed did with a 12900K, you can see that e.g. in CS:GO and Starcraft at lower resolutions, it performed better with the E-cores enabled, but it wasn't consistent across all games.

He said that Starcraft is well known for putting a big load on a single core, and I assume CS:GO would be as well, so perhaps they are losing access to additional cache when the E-cores are disabled?

TechPowerUp also did some game testing with a 13900K, but I really don't understand the games tested well enough to speculate which games might be affected most by, e.g., more cache, more cores, or how effectively they're coded for multi-threading.

I know that with the 9th/10th gen CPUs, there were issues with inter-core latency, but with the E-cores disabled I'd assume that if anything, latency would be improved?

Edit: Hmm, could that be it?
- Games that make good use of several cores, but not lots, prefer lower latency (E-cores disabled).
- Games that use one or two cores prefer more cache (E-cores enabled).
- Games that make good use of many cores prefer more cores (E-cores enabled).

I don't know :o
 

Interesting you mention that, because how would extra cache from e-cores benefit games if that cache is private to the e-cores themselves? I mean, L1 and L2 cache are private to a core and not shared among cores. Only L3 is shared, and disabling the e-cores does not reduce the L3 cache.

And single-threaded games would want to use the P-cores, which is why the Hardware Unboxed results are interesting. Though they did use Windows 11, which per the article has issues with e-cores disabled due to Thread Director behavior. Windows 10 has no such issue, as it does not support Thread Director.

Though you would think having even one e-core on would really screw things up on Windows 11, and maybe more so on Windows 10.

TechPowerUp had a review of the 13900K with more mixed results, though they do not say what the CPU boosted to in each configuration.
 

Is there a reliable way of testing if L3 cache is lost?

My logic was that if L3 cache is not lost, then why does performance improve in some single-core heavy games when E-cores are enabled?

In Intel's thread director presentation, they said that depending on the stage of work, a thread might be directed to an E-Core (mid-processing, so to speak), so maybe there's some degree of parallelisation that E-cores enable, which a single P-core (doing a single-thread heavy workload) still benefits from, but that this depends on how the game is coded and how effectively it distributes the workload?

With CS:GO and Starcraft, if they're not coded to effectively distribute the work, then maybe they benefit from the thread director doing this, whereas games that are designed to distribute their work, don't need the thread director?

If that's the case, maybe the thread director is specifically designed with offloading to the E-cores in mind, so with no E-cores enabled, it doesn't distribute a single P-core's work as effectively (e.g. to other P-cores), hence it can lose some performance.

If it wasn't obvious, I'm just throwing poorly-informed theories around here :p
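One rough way to probe whether the effective L3 capacity actually changes (not a definitive test, just a sketch): time a dependent pointer chase over growing buffer sizes and look for the latency step where the working set spills out of L3. In Python the absolute numbers are noisy (a C version would be cleaner), but the shape of the curve is what matters; run it with e-cores on, then off, and compare where the step lands:

```python
import random
import time
from array import array

def chase_ns(size_bytes, iters=200_000):
    """Average ns per access of a dependent pointer chase over a
    buffer of size_bytes (8-byte slots, visited in a random cycle)."""
    n = size_bytes // 8
    order = list(range(n))
    random.shuffle(order)                  # random order defeats the prefetcher
    nxt = [0] * n
    for a, b in zip(order, order[1:] + order[:1]):
        nxt[a] = b                         # one cycle visiting every slot
    buf = array('q', nxt)                  # contiguous 8-byte slots
    i = 0
    t0 = time.perf_counter_ns()
    for _ in range(iters):
        i = buf[i]                         # each load depends on the previous one
    return (time.perf_counter_ns() - t0) / iters

if __name__ == "__main__":
    for mib in (4, 16, 32, 48, 64):        # 13900K has a 36 MB L3
        print(f"{mib:3d} MiB: {chase_ns(mib * 2**20):6.1f} ns/access")
```

If the latency jump still happens around the same buffer size with e-cores disabled, the full L3 is still in play.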
 
Is Windows Task Manager reliable? It lists the same L3 cache amount whether e-cores are enabled or disabled.

It's the same situation as on Comet Lake, when Hardware Unboxed tested the 10900K with some P-cores disabled: the 10900K still performed better than the 10700K and 10600K because of its extra L3 cache, even at the lower core counts.

I do not see how it would be any different with Raptor Lake.
 

I have no idea why it might be different, except that the E-Cores are distinct from the P-Cores, whereas with Comet Lake the cores were all the same. I'd imagine it's something fairly easy to test, so you're probably right; somebody would have noticed by now.
 


They are distinct, but the cache hierarchy works the same way: each E-core cluster has its own L2 cache (shared by the four E-cores in the cluster), just as each P-core has its own private L2, while the L3 is shared among everything.
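For what it's worth, the sharing can be checked directly rather than argued about. On Linux (not the OP's Windows 10, but it's the same silicon) sysfs exposes, per cache level, which logical CPUs share it; a private L2 lists only one core's threads, while the L3 lists every core in the package. A quick sketch:

```python
import glob
from pathlib import Path

def cache_topology(cpu=0):
    """List each cache level visible to `cpu` and which CPUs share it
    (Linux sysfs; returns [] where the sysfs tree is unavailable)."""
    entries = []
    for idx in sorted(glob.glob(f"/sys/devices/system/cpu/cpu{cpu}/cache/index*")):
        p = Path(idx)
        entries.append({
            "level": (p / "level").read_text().strip(),       # 1, 2, 3
            "type": (p / "type").read_text().strip(),         # Data/Instruction/Unified
            "size": (p / "size").read_text().strip(),         # e.g. "36864K"
            "shared_with": (p / "shared_cpu_list").read_text().strip(),
        })
    return entries
```

On Windows the equivalent data comes from `GetLogicalProcessorInformationEx` or a tool like CPU-Z.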
 
I did my test on Windows 11, and yes: with all e-cores off and all P-cores at 5.6 GHz, the CPU-Z score was 875 almost every time, with one rare run scoring around 913. With even one e-core on, it scored 913-914 every time.

On Windows 10 it does not matter: it scores 914 every time, whether one e-core is on or all are disabled. So it looks like the theory holds on Windows 11 only, and Windows 10 is unaffected, which means keeping all e-cores disabled will have no effect on me, as I use only Windows 10.

I can only assume the Windows 11 issues when disabling all e-cores are explained below:

 
For the love of god, not this again. There is no reason to disable the e-cores. None. Zero. Nada. How many times do we have to go through this? LOL


I do not want them on. I am using Windows 10, not Windows 11. Windows 10 cannot even use them properly on its own; Windows 11 is flat-out required.

I want them disabled, and that is my own business. If Intel had an 8-core-only Raptor Cove CPU with 36 MB of L3 cache, I would have gotten it. But they do not, so the 13900K it is, with the e-cores shunned.

Stop with the "there is no reason to disable them." There is: more thermal headroom, and it is the only way to get an 8-core/16-thread CPU from Intel on 13th gen.

I hate the hybrid-arch crap, and they are staying disabled, especially on an OS (Windows 10) that cannot even deal with it correctly.

And do not tell me I should have gone with an AMD Ryzen 7700X. The Intel Core i9-13900K, and even the i7-13700K, with e-cores off are actually better CPUs than the 7700X, as Raptor Cove has somewhat better IPC and much better latency than Zen 4. And the price of a 13700K is exactly the same as the 7700X anyway.
 
There is nothing wrong with using e-cores on Windows 10, and there is nothing wrong with using Windows 11 either, but whateva. You decided to hate on the hybrid arch for no reason whatsoever, so keep on hating.
 
And it is patently false that there is nothing wrong with e-cores and the hybrid arch on Windows 10. There most certainly can be if you set and forget it without using Process Lasso, since Windows 10 is not Thread Director-aware. On Windows 11, in a perfect world, there should not be.
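What a tool like Process Lasso automates can also be done by hand with CPU affinity. Here is a minimal sketch of the idea using the Linux stdlib call (on Windows, Process Lasso or psutil's `cpu_affinity` serves the same purpose). The assumption that logical CPUs 0-15 are the eight P-core threads matches how a 13900K with HT typically enumerates, but verify on your own system:

```python
import os

# Assumption: on a 13900K with HT, logical CPUs 0-15 are the P-core
# threads and 16-31 are the E-cores. Verify this before relying on it.
P_CORE_CPUS = set(range(16))

def pin_to_p_cores(pid=0):
    """Restrict a process (0 = the calling process) to whichever of the
    assumed P-core CPUs actually exist on this machine."""
    available = os.sched_getaffinity(0)
    target = P_CORE_CPUS & available
    os.sched_setaffinity(pid, target or available)  # fall back if no overlap
```

This keeps the e-cores enabled (so nothing about clocks or cache changes) while ensuring the pinned process's threads only ever land on P-cores.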
 
The whole notion that there is nada reason to disable e-cores on anything is completely false, as everyone's use scenario is different. You get better thermal headroom, a better all-P-core overclock, and a better ring clock, even on 13th gen, though the difference is not as pronounced as on 12th gen.
 
E-cores can help the performance of the P-cores by handling all the standard background OS tasks while the P-cores work 100% on the main task you're using your computer for.
Seems nonsensical to disable them on a 13900K... If you're going to do that, you could just get the 13700K, which has fewer E-cores, disable them, and then overclock the P-cores to match 13900K speeds lol
 

The extra L3 cache on the 13900K is reason enough to get it and disable the e-cores.
 
Better thermal headroom in what? CBR23 and Prime95? But your CPU will still be slower at those tasks with e-cores off, so why does it matter?
 

For tasks like gaming, which can use 6 or even 8 cores but no more: very few games use even 6 or 8 cores, and basically none use more, so of course disable the e-waste cores. Much better core and ring clocks, and better stability.
 
But how exactly do you have more headroom in tasks that use 6 cores? Lol, that doesn't make any sense. If your task is using only 6 cores, then the e-cores should be parked and doing basically nothing, consuming no wattage...

Have you actually tried it, or are you talking hypothetically? Because I have, and it makes absolutely no difference in games. Please stop spreading misinformation.
 

I assume that with E-cores enabled, Windows interrupts and background tasks can run on those rather than interrupting the P-cores running the game. This could give a couple of percent performance increase.
 