On Intel Raptor Lake, is there any truth to the rumors that disabling all E-cores hurts single-threaded performance of the P-cores?

Absolutely, and good for you. You can even turn the P-cores off if that makes your day. But let's stop pretending that it performs better, because it doesn't.

Ichirou's power consumption figure was from Cinebench. In Cinebench, turning off E-cores is stupid regardless, since it's both slower and less efficient, lol. Good job, you turned a Ferrari into a Lada.


Lower power consumption, and you don't have to account for E-cores in stability testing for manual overclock settings. That is more than enough reason to turn them off.

There is no reason to turn off P-cores, because you can buy chips with fewer of them. No chips exist without those E-cores, and some of us want only P-cores.

I never turned a Ferrari into a Lada. I got the chip for those 8 awesome P-cores alone. More like turning a Cadillac Escalade into a Chevy Corvette.
 
Lower power consumption, and you don't have to account for E-cores in stability testing for manual overclock settings. That is more than enough reason to turn them off.
So the reason you turn off E-cores is to be able to run stability tests... Have you thought about turning them off, stressing your P-cores, and then turning them back on? Wow, right? :D

Anyway, show me your Corvette in Cyberpunk, the Tom's Diner area. I bet my 12900K, which I'm using now, is faster than your 13900K with E-cores off. Corvette my ass.
 
So the reason you turn off E-cores is to be able to run stability tests... Have you thought about turning them off, stressing your P-cores, and then turning them back on? Wow, right? :D

Anyway, show me your Corvette in Cyberpunk, the Tom's Diner area. I bet my 12900K, which I'm using now, is faster than your 13900K with E-cores off. Corvette my ass.


Oh yeah, turn them back on without testing to make sure the whole config is stable. All cores that are on have to be confirmed stable at 100% load, every one of them!! Not just stress the P-cores and leave the E-cores on untested. Turning cores on can induce instability, and the ring clock has to be tested as well.

And yeah, the Corvette is a great analogy. It's not always about raw speed. The Escalade can actually carry so much more cargo and also has a higher starting MSRP than a Corvette. With E-cores and P-cores on, lots of data can be processed fast. The Corvette can rocket, but it is small and cannot carry much cargo.

With E-cores off, it's lighter, has more thermal headroom, and there's less to account for in stability testing and ring clock tuning. Turning a 6.2L V8 Chevy Suburban into a 5.7L V8 Jeep Grand Cherokee is more like it.
 
That is probably never gonna happen. There are already games that benefit from turning off HT, then there are games that benefit from turning on HT, then there are games that despise E-cores, then there are games that love them.

Realistically, a 12-core single-CCD 3D Zen 5 would be the best bet for gaming. You turn off HT, and it still has enough cores not to lose performance, with the massive cache and no cross-CCD penalty. When and if such a part gets released, I'll kiss Intel goodbye. Until then I have to stick with them for gaming :D


I was hoping Zen 5 would be that way, finally with more than 8 cores on a single CCD. Many early reports almost a year ago, in Spring 2022, said it would be, with maybe up to 16 cores per chiplet. Apparently not going to happen, if this is to be believed:


Still only 8 cores per chiplet, which means a severe cross-CCD latency penalty for any game situations that can potentially use more than 8 cores.

Intel did have 10 good cores on the 10900K and 10850K, but no more than 8 good cores on modern architectures.


Wish they would build a 12-P-core 13th Gen. I mean, 4 E-cores take the same die space as 1 P-core, so they could build one.

And some would say that is what Sapphire Rapids HEDT is for, as you can now get a modern arch from Intel with more than 8 P-cores. To which I say no, because Sapphire Rapids CPU cores are on a mesh topology, and mesh topology sucks for latency and thus gaming. Wake me up when Intel has more than 8 P-cores on a ring bus with a modern architecture, be it HEDT or consumer.

Maybe they will have a special Sapphire Rapids 10-core CPU on a ring, but I'm not holding my breath, as I actually doubt it. They just bin their mesh CPUs and sell lower core counts that have the same build as the higher core counts, but with the defective cores disabled. It would cost a whole bunch of money to create another brand-new die on a ring with 10-12 P-cores, which I do not think they will do, sadly.
 
I was hoping Zen 5 would be that way, finally with more than 8 cores on a single CCD. Many early reports almost a year ago, in Spring 2022, said it would be, with maybe up to 16 cores per chiplet. Apparently not going to happen, if this is to be believed:


Still only 8 cores per chiplet, which means a severe cross-CCD latency penalty for any game situations that can potentially use more than 8 cores.

Intel did have 10 good cores on the 10900K and 10850K, but no more than 8 good cores on modern architectures.


Wish they would build a 12-P-core 13th Gen. I mean, 4 E-cores take the same die space as 1 P-core, so they could build one.

And some would say that is what Sapphire Rapids HEDT is for, as you can now get a modern arch from Intel with more than 8 P-cores. To which I say no, because Sapphire Rapids CPU cores are on a mesh topology, and mesh topology sucks for latency and thus gaming. Wake me up when Intel has more than 8 P-cores on a ring bus with a modern architecture, be it HEDT or consumer.

Maybe they will have a special Sapphire Rapids 10-core CPU on a ring, but I'm not holding my breath, as I actually doubt it. They just bin their mesh CPUs and sell lower core counts that have the same build as the higher core counts, but with the defective cores disabled. It would cost a whole bunch of money to create another brand-new die on a ring with 10-12 P-cores, which I do not think they will do, sadly.

Intel is stuck with two piles of garbage. One pile is maxed out, and the other is pretty much irrelevant for building games consoles.
 
Intel is stuck with two piles of garbage. One pile is maxed out, and the other is pretty much irrelevant for building games consoles.

Intel P-cores are actually really good.

The E-cores suck, on the other hand.

4 E-cores take the same die space as 1 P-core, maybe slightly more. And a 4-E-core cluster draws somewhat more power than one P-core.

So they could make a 10- to 12-P-core 13th Gen.

In your view, do any games you've noticed benefit from more than 8 cores?
 
Intel P-cores are actually really good.

The E-cores suck, on the other hand.

4 E-cores take the same die space as 1 P-core, maybe slightly more. And a 4-E-core cluster draws somewhat more power than one P-core.

So they could make a 10- to 12-P-core 13th Gen.

In your view, do any games you've noticed benefit from more than 8 cores?

Intel has tried pushing to 10-12 cores with a ring bus but failed. As per-core performance increases, the bus gets saturated more frequently and performance degrades more. If Intel keeps pushing IPC and clocks, they could end up in a situation where the number of Skylake cores needs to be reduced.

Plenty of game engines scale past 8 cores.
 
I think the E-cores are the most promising part of the strategy. Intel's revival of the Atom core has allowed a stay of execution on the desktop. I'm not sure about the push into other markets though…
 
Intel has tried pushing to 10-12 cores with a ring bus but failed. As per-core performance increases, the bus gets saturated more frequently and performance degrades more. If Intel keeps pushing IPC and clocks, they could end up in a situation where the number of Skylake cores needs to be reduced.

Plenty of game engines scale past 8 cores.


They actually tried that and failed? Did the 10850K and 10900K fail with 10 P-cores and drag down the ring clock?

I mean, look at the e-waste cores. They drag the ring clock down a lot, especially on 12th Gen, and to a lesser extent on 13th Gen.

And which game engines scale past 8 cores and 16 threads? I read something less than a year ago saying it takes a heroic effort and is extremely hard to code a game to even use 8 threads. So how in the world do plenty of games scale past 8 cores and 16 threads?
 
They actually tried that and failed? Did the 10850K and 10900K fail with 10 P-cores and drag down the ring clock?

I mean, look at the e-waste cores. They drag the ring clock down a lot, especially on 12th Gen, and to a lesser extent on 13th Gen.

And which game engines scale past 8 cores and 16 threads? I read something less than a year ago saying it takes a heroic effort and is extremely hard to code a game to even use 8 threads. So how in the world do plenty of games scale past 8 cores and 16 threads?
The Last of Us uses them, Cyberpunk uses them, Spider-Man uses them, Warzone 2 uses them. Need more? Stop with the nonsensical bashing.
 
They actually tried that and failed? Did the 10850K and 10900K fail with 10 P-cores and drag down the ring clock?

I mean, look at the e-waste cores. They drag the ring clock down a lot, especially on 12th Gen, and to a lesser extent on 13th Gen.

And which game engines scale past 8 cores and 16 threads? I read something less than a year ago saying it takes a heroic effort and is extremely hard to code a game to even use 8 threads. So how in the world do plenty of games scale past 8 cores and 16 threads?
It really isn't. You can write C++ code to scale to as many cores/threads as you want in just a few lines of code. Any additional cores/threads can just be assigned into a threadpool to split complex work over.
 
It really isn't. You can write C++ code to scale to as many cores/threads as you want in just a few lines of code. Any additional cores/threads can just be assigned into a threadpool to split complex work over.

It isn't the ability to create/handle threads that is the problem with games; it is that a lot of the core logic and some aspects of game rendering are highly serial in nature - sure, you can lean on threading for stuff like AI and physics, etc., but there are certain bottlenecks in game processing where it is difficult to leverage multi-threading effectively.
 
It isn't the ability to create/handle threads that is the problem with games; it is that a lot of the core logic and some aspects of game rendering are highly serial in nature - sure, you can lean on threading for stuff like AI and physics, etc., but there are certain bottlenecks in game processing where it is difficult to leverage multi-threading effectively.
Sure, I agree with that. But there are quite a few chunks that can be split into an arbitrary number of threads, and those parts can easily run on as many cores as you have. It all depends on what libraries you're using, though, and how they have been written.
I suspect the main reason we don't see much scaling beyond 8 cores is that games are tuned for console CPUs and the developers just haven't made any provision for more cores on PC. Or they think there's no point, since it runs well enough on the console's 8-core CPU.
 
Sure, I agree with that. But there are quite a few chunks that can be split into an arbitrary number of threads, and those parts can easily run on as many cores as you have. It all depends on what libraries you're using, though, and how they have been written.
I suspect the main reason we don't see much scaling beyond 8 cores is that games are tuned for console CPUs and the developers just haven't made any provision for more cores on PC. Or they think there's no point, since it runs well enough on the console's 8-core CPU.

One problem is that many games are built using libraries based on highly single-threaded code that no one has ever bothered to update for modern architectures, and it is often prohibitive in terms of developer resources to build their own multi-thread-optimised libraries.

Another issue I find is that threading game code for the sake of it doesn't always produce a good "feel" for some reason - some games, like the older Hitman ones for instance, where they threaded everything possible whether it gave a performance advantage or not, can feel a bit inconsistent response-wise on keyboard and mouse, a microstutter kind of feel.
 
One problem is that many games are built using libraries based on highly single-threaded code that no one has ever bothered to update for modern architectures, and it is often prohibitive in terms of developer resources to build their own multi-thread-optimised libraries.

Another issue I find is that threading game code for the sake of it doesn't always produce a good "feel" for some reason - some games, like the older Hitman ones for instance, where they threaded everything possible whether it gave a performance advantage or not, can feel a bit inconsistent response-wise on keyboard and mouse, a microstutter kind of feel.
Yeah, you need to use threading in the right places, or it can introduce extra latency, especially if the core it switches to is in a power-down state. Before Intel improved the speed scaling, scrolling browsers up and down used to be jittery due to the CPU cores downclocking when you stopped.
 
It's likely graphics drivers and associated API layers that cause many bottlenecks. Since the advent of Mantle, CPU scaling has massively improved, and this has taken the gaming industry to a much healthier place, but I'd imagine developers' hands are still very much tied when it comes to the graphics vendors' SDKs.
 
They actually tried that and failed? Did the 10850K and 10900K fail with 10 P-cores and drag down the ring clock?

I mean, look at the e-waste cores. They drag the ring clock down a lot, especially on 12th Gen, and to a lesser extent on 13th Gen.

And which game engines scale past 8 cores and 16 threads? I read something less than a year ago saying it takes a heroic effort and is extremely hard to code a game to even use 8 threads. So how in the world do plenty of games scale past 8 cores and 16 threads?

You seem to assume Intel builds its desktop chips with a real focus on pairing with an expensive graphics card; that's simply not the case. Intel's focus is to convince the likes of Dell to commit to long-term orders. The only chance Intel has to win over the Dells is with a combination of Skylake and Atom cores, so each can offset the other's shortcomings.

Intel is struggling to offer 8 P-cores on the desktop, never mind 12. If you want more than 8 (high-performance) cores from Intel, the price is over £100 per core, probably twice that TBH.
 
The Last of Us uses them, Cyberpunk uses them, Spider-Man uses them, Warzone 2 uses them. Need more? Stop with the nonsensical bashing.

Thanks for the info, and yes, I have seen lots of reports of these games using more than 8 cores, even lots of cores. Though is that at all resolutions, or only at high FPS?

I have considered turning the E-cores on, as reports of games actually using more and more cores are starting to spread a lot, while searching turns up 3+-year-old articles reporting games not using many cores, which leads me to believe now is the time when more cores are really starting to matter.

Though at the same time, many say even 6 cores is still fine. Results are all over the place.

You had mentioned that the perfect CPU would be a Zen 5 with 12 cores on a single CCD; then you could turn off SMT and get the best gaming performance possible.

Though it appears Zen 5 is still going to max out at 8 cores per CCD.

As for E-cores on Intel, you state they do great and cause no issues at all, and only help games or make no difference, even with Windows 10? How is that possible when those cores run a different IPC/arch than the P-cores? When more than 8 threads are scheduled and some land on E-cores, does the CPU know how to use the E-cores' compute resources properly, utilizing a larger percentage of each E-core to compensate for their much lower IPC, so that a thread runs as efficiently as if it were on an additional P-core? I imagine that if a thread had an additional P-core, it would use much less of it, unless it needed the whole thing, in which case trouble would abound, since E-cores are much weaker than P-cores. Most threads probably don't need that. But still, does it get it right?

That has been my fear and concern all along, as I want things to work super smoothly with no issues for games from the past 20 years, including modern games and a couple of older ones.

Do you need to use Process Lasso, or can you just set and forget?

And do you use Win10 or Win11? And do you have Hyper-Threading on or off?

Thanks again for your help. I am sure you are very surprised to hear I am considering it, given my bashing of E-cores in the past, and even sort of now.
 