
On Intel Raptor Lake, is there any truth to the rumors that disabling all e-cores hurts single-threaded performance of the p-cores?

Intel is stuck with two piles of garbage. 1 pile is maxed out and the other pretty much irrelevant for building games consoles.

Intel p cores are actually really good.

The e-cores suck, on the other hand.

Four e-cores take the same die space as one p-core, maybe slightly more. And a 4 e-core cluster draws somewhat more power than one p-core.

So they could make a 10 to 12 p core 13th gen.

In your view, do any games you notice benefit from more than 8 cores?
 
Intel has tried pushing to 10-12 cores on a ring bus and failed. As per-core performance increases, the bus saturates more frequently and the more performance degrades. If Intel keeps pushing IPC and clocks, they could end up in a situation where the number of Skylake-class cores has to be reduced.

Plenty of game engines scale past 8 cores.


They actually tried that and failed? Did the 10850K and 10900K fail with 10 P cores and drag down ring clock?

I mean, look at the e-waste cores. They drag ring clock down a lot, especially on 12th Gen, and to a lesser extent on 13th Gen.

And which game engines scale past 8 cores and 16 threads? I read something not even 1 year ago that it takes a heroic effort and is extremely hard to code a game to even use 8 threads. So how in the world do plenty of games scale past 8 cores and 16 threads?
 
The Last of Us uses them, Cyberpunk uses them, Spider-Man uses them, Warzone 2 uses them. Need more? Stop with the nonsensical bashing.

Thanks for the info and yes I have seen lots of reports of these games using more than 8 cores and even lots of cores. Though is that at all resolutions or only high FPS?

I have considered turning the e-cores on, as reports of games actually using more and more cores are starting to spread a lot, while reports of games using few cores only pop up when I search articles 3+ years old. That leads me to believe now is the time more cores really start to matter.

Though at the same time, many say even 6 cores is still fine. Results are all over the place.

You had mentioned that the perfect CPU would be Zen 5 with 12 P cores on a single CCD then you could turn off SMT and get best gaming performance possible.

Though it appears Zen 5 is still going to max out at 8 cores per CCD.

As for e-cores on Intel, you state they do great, cause no issues at all, and either help games or make no difference, even with Windows 10? Like how is that possible when those cores run a different IPC/arch than the P cores? When a game scales past 8 threads onto the e-cores, does the CPU know how to schedule the work so the e-core's compute resources are used properly, i.e. using a larger percentage of the core to compensate for the much lower IPC, so the thread runs efficiently as if it were on an additional P core? I imagine if the thread had an additional P core it would use much less of it, unless it needed the whole thing, in which case trouble would abound since e-cores are much weaker than P cores. Though I do not think most secondary threads need to do that. But still, does it get this right?

That has been my fear and concern all along, as I want things to work super smooth with no issues for games from the past 20 years, including modern games and a couple of older ones.

Do you need to use Process Lasso or can you just set and forget?

And do you use WIN10 or WIN11? And do you have hyper threading on or off?

Thanks again for your help and I am sure you are very surprised to hear I am considering it given my bashing of e-cores in the past and even sort of now.
 
You seem to assume Intel builds its desktop chips with a real focus on pairing with an expensive graphics card; that's simply not the case. Intel's focus is to convince the likes of Dell to commit to long-term orders. The only chance Intel has to win over the Dells is with a combination of Skylake and Atom cores, so each can offset the other's shortcomings.

Intel is struggling to offer 8 P cores on the desktop, never mind 12. If you want more than 8 high-performance cores from Intel, the price is over £100 per core, probably twice that TBH.


Well, really? If that were the case, why do they have the 13900K and 13700K? Those are CPUs many would pair with an expensive GPU.

And yeah, I would pay premium dollar for 12 P cores. But they do not offer it, so I do not have that option. Well, Sapphire Rapids is here, but it is only available to OEMs, and it uses a mesh arch, which is awfully bad for gaming.

The e-cores for the additional threads have to work right and my fears about the slower IPC and different arch for secondary game threads beyond 8 need to be alleviated.

If games can successfully saturate more and more P cores fully, then how would e-cores keep up? Or do game threads beyond 8 never fully saturate the other cores, so their weaker IPC should not matter at all as long as the CPU, OS, and hardware know how to handle it right?
 
I have Windows 11, and I don't use Process Lasso or anything else. The game uses the 8 physical P-cores first; if it needs more it goes to the 8 physical e-cores, and if it needs even more resources it goes to hyper-threading. When you turn off e-cores you force the game to use hyper-threading instead of the e-cores, and that drops 1% lows significantly in lots of cases.

From the amount of CPU usage in games like The Last of Us, I would assume 8+8 would be faster than 10 P cores. 12 P cores would probably be faster, assuming they could get away with a ring bus, which I've no idea is possible or not.

If Zen 5 is still 8 cores then I guess I'm skipping Zen 5; what else do you want me to say about it?


Would it work fine with Win10 or do you have to use Win11?

I know you stated there is zero reason to disable e-cores even on Win10. So it should work well, right, since Thread Director inside the CPU knows how to handle it? Though Win11 uses Thread Director at the OS level whereas Win10 does not, but if it is internal at the hardware level it may not matter much?
 
Intel possibly could scale past 8 cores with its ringbus design, but not with the current levels of IPC. Mesh has more capacity to scale than Ringbus and supports faster cores without suffering limited scaling. Between the two options Mesh is the more promising tech IMO, but both have downsides, hence the dilemma Intel is currently in.

Sapphire Rapids is the closest part to what you describe, but if gaming is your only reason to own a PC then buying all AMD is the way to go. If AMD sees enough demand for X3D and it generates enough money, they will likely put more resources into reasonably priced desktop parts tailored to gaming.

If enough people buy Sapphire rapids maybe Intel would start similar development to AMD’s X3D parts.

Buy all AMD including GPU or just CPU?

And X3D is best. Though is the 7800X3D being only 8 cores enough, or does the big L3 cache matter most?
 
@Wolverine2349 Look at the below; imagine what would have happened if I had turned off e-cores.


Yes, I see a pretty good load on each core. It would be bad if e-cores were off in that situation, as all P cores are pegged at almost 100%, or maybe 100%.

What game is that? Is that The Last of Us Part 1? Or is it a recent (as in the last 3-5 years) Battlefield game?

And do you know if there is a way to stop core parking with e-cores on? I noticed on both 12th and 13th Gen that with e-cores on, Windows 10 parks any P core that is idle. In Win11 it parks all idle cores, both P and E. If the e-cores are all disabled, no CPU cores ever get parked, just like none ever get parked on Intel CPUs prior to 12th Gen (or the no-e-core 12th Gen versions), nor on any AMD Ryzen 7000, 5000, or 3000 series CPUs.

Have you noticed it parks cores if you hold your mouse cursor over an idle core in Task Manager? Any way to disable that so cores are always active regardless?
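For reference, the knob I have been looking at is powercfg's hidden core-parking minimum. A sketch of what I mean, assuming Windows and the standard CPMINCORES setting alias (run from an elevated prompt; setting the minimum to 100% should keep all cores unparked on the active plan):

```shell
# Unhide the "processor performance core parking min cores" setting
# so it shows up in Power Options and in `powercfg /q`.
powercfg -attributes SUB_PROCESSOR CPMINCORES -ATTRIB_HIDE

# Keep 100% of cores unparked on AC and DC for the current plan.
powercfg -setacvalueindex SCHEME_CURRENT SUB_PROCESSOR CPMINCORES 100
powercfg -setdcvalueindex SCHEME_CURRENT SUB_PROCESSOR CPMINCORES 100

# Re-apply the plan so the change takes effect.
powercfg -setactive SCHEME_CURRENT
```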
 
I'd rather have a 64 e-core CPU for 50 quid.


Intel has neither. All high-end SKUs have 8 P cores and a bunch of e-cores.

The low end SKUs like 12th Gen Core i5 12400 have 6 P cores only.

They should look into making more options on the market.

A 10-12 P core Raptor Lake.

And a 48 e-core Raptor Lake. Well, I say 48 because 8 P cores take the same space as 32 e-cores, and there are 16 e-cores on the 13900K, so 16+32 is 48. Not sure if they could fit any more on there.

Well there is HEDT Sapphire Rapids, but on a mesh and mesh has horrible latency.
 
In every regard. Intel is lacking in all areas, and in trying to look competitive against AMD they have managed to paint themselves into a corner, if you will.


Like how have they painted themselves into a corner against AMD on the desktop side?

I mean, take a Ryzen 7700X vs a Core i7 13700K or 13900K and disable the e-cores, so we have 8 P cores vs 8 P cores from each company.

Fix the clock speed of both at the same value.

In most tests Intel comes out on top by like 6% which means their IPC is better. And even Golden Cove is on par or slightly ahead of Zen 4 when using CPU-Z and Cinebench R23 in above scenario. And Intel CPUs also can clock higher with the 6% higher IPC giving advantage there.

Though Intel is lacking in terms of server/enterprise, as they cannot get lots of big cores onto CPUs as cost-efficiently as AMD can.

And they still top out at 8 on desktop and need to add more e-cores to trade blows with the 16 Big cores from AMD in productivity monster apps.

So yes, Intel is lacking in many regards, but in every regard? Not really, as they still have their advantages.
 
Sorry I'm on the phone now, I'll send you when I'm home.

Power draw is from HWiNFO. I calibrated the motherboard so it's as accurate as it can be (AC/DC loadline set so VID = vcore, etc.; you can see it in the above screenshot actually). The only thing I can't fully account for is VRM losses, but those should be minimal since I have a high-end board, and they are kinda irrelevant anyway if you are comparing between CPUs.

I had a 13900K briefly but I swapped back to the 12900K. The 13th Gen part was a power hog in gaming unless you downclocked it; it consumed around 60 to 80% more power than the 12900K. It was fast, too fast, but the power draw was nutty.


Is it the CPU package power sensor under the Core i9 12900K in HWInfo64?

That is the only reading I have on mine for power draw.
 
Well I gave up on static all core clocks being something I desire and sold off 13900K and now on a 7800X3D.

So much better and more stable, with so much lower power usage and much less heat dumped into the case as a result.



And the 7800X3D has only 8 strong cores and a bunch of L3 cache (96MB), and it spanks the 13900K in gaming anyway. So much better than dealing with Intel's high heat and power consumption and having to disable those useless-for-gaming e-waste cores.
 
What board and memory did you go for with your 7800x3d CPU?

Had you previous experience with a recent AMD build, and did all go well for you when putting it all together?

Thanks


Went with MSI MPG X670E Carbon

Memory: G.Skill Flare X DDR5 6000 CL30 2x16GB Hynix M die with EXPO enabled.

Build went very well and no issues

Did mild manual tuning by enabling EXPO and using Buildzoid's RAM timings:


Also set SOC voltage to 1.23 and VDDIO to 1.23, as the default is 1.3 with my mobo on 6000 EXPO, and I read it should be below 1.3 and maybe even below 1.25. So far perfectly stable, no issues.

Though I went more conservative than his timings to ensure stability, even if all of his would have worked. It lowered my AIDA64 memory latency from 69ns at default EXPO to 62ns with the tweaked timings.

Also set Curve Optimizer to -20 all cores. I had tried higher like -28 and it passed all tests and was fully stable, but I wanted to back off just to ensure unconditional stability, as I heard stability issues can pop up at idle especially with too high a negative CO.

And I can say so far there is perfect stability at idle and under load at -20. Maybe I could go further, but I am not going to push it and risk instability, as stress tests often do not catch it.

Case in point: my reason for switching in particular is that my Intel Raptor Lake CPU, when manually tuned, passed every rough stability/stress test and shader compilation multiple times with no WHEAs, crashes, or errors, which made me think I was all set and good to go. Yet a few weeks later, doing shader compilation, running Cinebench R23, or in a game, a random CPU WHEA would show up; it was intermittent and would not always happen, which made me give up on Raptor Lake. Maybe stock settings would have worked fine, but I did not want Raptor Lake at stock with those e-waste cores enabled and lower-clocked P cores. With dynamic clocks anyway, I figured go with the best gaming chip, AMD.

It was a shame with Intel, as on Coffee Lake and prior, passing lots of stress/stability tests seemed to guarantee real-world stability. That sadly was not the case with Raptor Lake, as the WHEAs were very random, indicating that easy and fast degradation on Raptor Lake may be very real. So I sold the chip, mobo, and RAM, and am now much happier with the more reliable and stable 7800X3D, even somewhat tweaked, though obviously with no static clocks.
 
You really should have got a 7800X3D from the start, as it is perfect for people like yourself who just want to plug and play. I've tested my overclocked/tuned 13700K against my CO-tuned 7800X3D, and it was faster than my 7800X3D in all games but one and way faster in general software usage. It's overclocked to 5.8GHz all-core with a 6.2GHz boost and has been for months, and I get no WHEAs etc.

Raptor Lake overclocking really isn't for novices or people who don't have the time to learn and do it properly; much better to get a 7800X3D for gaming instead.


Yes I should have but too late for that now.

Hindsight is 20/20.

I had thought overclocking Raptor Lake to reasonable clocks with e-cores off on air would be easy like Coffee Lake was. Turned out not to be the case.

Speaking of which, do you use voltage offsets or a static vcore with LLC?

I had always used a static voltage with LLC6. It seems static voltages are maybe harder to get fully stable and require more patience, unlike on Coffee Lake and prior Intel gens, and now you need to learn and have the patience to do more testing and use voltage offsets and such? It was easy and super stable on my Coffee Lake 9900K. Raptor Lake is so much harder and I had no patience.

So should have just gone 7800X3D off the bat. I have it now and have not looked back and it is great. No need for superior perf in other workloads. I got just as good of gaming performance as mild tuned Raptor Lake and almost as good as heavily tuned Raptor Lake.
 
I use adaptive voltage. I have only ever used constant voltage the very first time when overclocking, to establish roughly what voltage a CPU needs for a certain clock speed.
For 5.8Ghz all core load it will use ~1.394v

53150620080_64fc8b89d4_o.png


53149610282_a8b125c25f_o.png

Yikes, 1.394V under an all-core workload. No way to cool that on the best air coolers, which are the dual-tower ones with 120 to 140mm fans like the NH-D15, Dark Rock Pro 3, and similar.

I imagine you must have insanely good AIO or custom loop water cooling for that.
Never mind if you also have the 16 e-cores on (4 clusters of 4 each, about the same die space as 4 extra P cores, since 4 e-cores take roughly the same space as 1 P core), which adds another 35% power and heat. You must have insane cooling for that.
 
I have been doing lots and lots of testing, quite extreme actually.
I have been tinkering with various hidden power settings that manipulate the CPU scheduler, using the CPU-Z bench as a quick and simple test.

So when we think about single-threaded: by default these hybrid CPUs keep all p-cores parked. I don't yet know exactly what determines when a p-core gets unparked, but I do know it will always prioritise the 2 preferred cores, which clock higher than the rest. If all the p-cores are unparked, which can be done by adjusting the scheduler settings to force them always unparked (or likely via software calls; as an example, an all-core Cinebench run unparks them), then single-threaded performance will be hurt, as it seems to be the parking mechanism that routes single-threaded load to the preferred cores; once more cores are unparked, it is at the mercy of the standard CPU scheduler.

If the e-cores are disabled in the BIOS, then some p-cores will always be unparked, and it increases the chance of non-preferred p-cores being unparked. I have yet to test with e-cores disabled, as personally I don't think that's the best way to use these chips, but I will at some point. The second likely problem with disabling e-cores is that you can't move background tasks off the p-cores; here I observed a noticeable impact on the single-threaded CPU-Z score.

So using something like Process Hacker, move svchost, browser processes, Afterburner, HWiNFO, Discord, and other background apps to the e-cores via affinity settings. There is also the CPU scheduler master setting, which has 5 options:
Automatic
efficient cores
prefer efficient cores
prefer performant cores
performant cores
Now when I tested 'performant cores', it scored lower than 'prefer performant cores', likely because with 'prefer' it moves lower-demand tasks that conflict on the core over to e-cores when a heavy task is running. I hadn't moved every single background binary to e-cores, only the biggest ones.

The e-cores don't just offer more raw grunt; they also offer more cores to reduce scheduling bottlenecks (which can cause stutters in games). This is the primary reason why I think it's better to have them available.

The downside is that if anything interactive runs on them instead of the p-cores, that might give slower interactive performance, and it can potentially be less power efficient: as an example, if 'any' p-core is unparked, all e-cores stay at max clocks due to default power scheduler settings (a hidden setting), and this also raises vcore significantly. So I expect there is no perfect solution to cover all bases, but it is still something I am experimenting with. I have noticed things occasionally run on them when I don't want them to; e.g. the UAC prompt always seems to use e-cores to process the prompt box, unless 'performant cores' forces it over, and I haven't found a way to adjust that via Process Hacker.

Via hidden settings the minimum number of unparked p-cores is adjustable, so you can force them to stay awake. Note that 'prefer p-cores' or forced p-cores do NOT force cores to unpark, so they only actually work if p-cores are available. Luckily, keeping both preferred p-cores always unparked has no noticeable effect on vcore or power consumption; however, keeping 'all' p-cores always unparked does have an effect, especially when p-core clocks are increased, and as mentioned earlier it no longer ensures single-threaded work goes to a preferred p-core.

If you don't care about the scheduling bottlenecks from having fewer cores, another approach is to disable all e-cores and then set all cores to the same clocks as the preferred p-cores (if the CPU can handle it; it might need a voltage bump). Changing to an all-core clock removes any concern about needing the preferred p-cores for max single-threaded performance, but you will still lose some from having background tasks running on the p-cores (my single-thread score went down by about 4-5% when not moving tasks to e-cores).

I have also confirmed, as a quick and dirty adjustment, that simply using the High Performance or Ultimate power profile, combined with routing background stuff specifically to e-cores and setting the thread scheduler to 'prefer performant cores', gives insane performance all over the shop, easily beating my 9900K at everything, though it is a bit power inefficient this way under light load, e.g. 30 watts CPU package power to watch YouTube.
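For anyone wanting to try the hidden-settings part of this, a sketch run from an elevated prompt on Windows. The GUIDs are the commonly reported ones for the processor subgroup and the "heterogeneous policy in effect" setting; verify them on your own machine with `powercfg /q`, and the mapping of index 3 to "prefer performant cores" is my assumption:

```shell
# Processor power subgroup and "heterogeneous policy in effect" setting
# (GUIDs as commonly reported; check with `powercfg /q` first).
$sub = "54533251-82be-4824-96c1-47b60b740d00"
$het = "7f2f5cfa-f10c-4823-b5e1-e93ae85f46b5"

# Unhide the scheduler policy setting so it is visible and settable.
powercfg -attributes $sub $het -ATTRIB_HIDE

# Indexes 0-4 correspond to the five options listed earlier;
# 3 is assumed here to be "prefer performant cores".
powercfg -setacvalueindex scheme_current $sub $het 3

# Re-apply the active scheme so the change takes effect.
powercfg -setactive scheme_current
```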

For what it's worth, I think it's inevitable AMD will bring out a variant of e-cores; on their server chips I think they are just cores with less cache, so we might see the same on future desktop chips.


I debloat Windows using NTLite. Well, I do not remove components, but I disable unnecessary services and such, which makes it pretty light with regard to background stuff running. I have Windows Update completely shut down so it cannot start, and thus no high background process usage, as Windows Update does take a lot. I also never have Chrome or any browser open in the background. So no scheduling bottlenecks in games that way. Certainly not in Win10.

In WIN11 its another matter. I have seen reduced performance with e-cores off even for single thread in WIN11. The scheduler is much more complex and what you describe is likely why. WIN10 never parks any P cores by default. Perhaps your method of disabling e-cores in WIN11 to still get best P core performance works.

In Win10 you can disable e-cores, set it and forget it, and there is no reduced single-thread performance; it is the same either way. Win11 once again works differently due to the way it uses Thread Director, whereas Win10 does not use Thread Director.

And yeah, I had always used a fixed all-core clock with all P cores at the same speed, so I should have been covered by your last point anyway, at least on Win10 and partly on Win11 if the scheduler was set up right.

Anyways I got tired of Raptor Lake tweaking and went with 7800X3D where I only put a mild -20 CO and tuned the RAM without touching any other CPU settings and could not be happier with its much better stability and much less heat inside the case and just as good or better gaming performance than RL.

And no, I doubt AMD ever goes with the hybrid approach. They have already said they are not, certainly not on mainstream to high-end desktop CPUs.

What I do want to see AMD do is offer more than 8 good cores on a single CCD. Hopefully Zen 5 provides it, though I am not holding my breath.
 

I take all these predictions with a pinch of salt but it does not seem that it is beyond the realms of possibility.


Well yeah, AMD has said they are considering this approach on mobile and APUs, which are not really the enthusiast mid-to-high-end parts.

APUs are more for those who do not want a discrete video card

Those are not high end desktop CPU SKUs. I have read that AMD is not going that route on high end desktop SKUs.

Those leaks are 4 P cores and 8 e-cores.

AMD's mainstream highest-end parts with Zen 3 and 4 have a minimum of 6 cores and no 4-core parts.

So AMD not doing it on all CPUs.

With Intel, shame on them that they do it on their Core i9 and i7 parts. There is no option to skip the e-waste cores if you want 8 P cores, which is just not right.

With AMD there is no evidence they will be on anything other than APUs, lower-tier desktop SKUs, and mobile parts, which is a great thing.
 
Not sure what cooling you're using, but mine was easy to cool when testing with my D15, and even under load it was in the 80s, though my normal cooling is an AFII 420mm and it barely if ever leaves the 70s.

Also, as has been mentioned many times, the best way to set these up for gaming is to leave e-cores on and disable HT, which is the way I have mine.

Adaptive voltage is always the way to go and right now while typing this I'm using 0.75v. ;) (you'll notice the idle 7800X3D power usage is probably double that of your raptorlake CPU, mine is)


I used an NH-D15S with P cores at 5.6GHz and vcore set at 1.31; under load vcore was around 1.25V and it was mostly stable, but temps got into the mid-to-high 90s and could even hit 100C. If I ran P95 Small FFTs with AVX on, or Y-Cruncher SFT, there was no way temps would not hit 100 right away and throttle.

And it turned out not to be fully stable, as CPU-related WHEAs would come down the road running Cinebench again or during shader compilation in TLOU Part 1. It did not always happen either; it was random. And only CPU WHEAs, nothing memory- or PCIe-related, CPU only.

I had described in more detail above.
 
That was your first mistake. :) I stopped using P95 or Y-Cruncher for CPU stability testing years ago. If you are going to use a program then you're much better off using Realbench, as that also stresses the GPU subsystem at the same time as the CPU, so it is much more realistic to gaming performance.


I also used Realbench and it passed with flying colors. I also use OCCT, Linpack XTREME. I used multiple programs to confirm it and it was confirmed.

Yet a random internal CPU WHEA appeared running Cinebench R23 a few weeks later, and also during TLOU Part 1 shader compilation. And the game sometimes crashed, confirming I was not actually fully stable after all.

Have you tried The Last of Us Part 1 shader compilation with no WHEAs?

And power consumption was like 230 watts during some of these loads, and like 250 watts under Cinebench R23, with the 5.6GHz / 5GHz ring chip.

With another 13900K chip I clocked lower: 5.4GHz all P-core and 4.8GHz ring at only 1.225V with LLC6. Load vcore under the toughest tests dropped to around 1.18 or so. I even ran P95 Small FFTs with AVX enabled, which I refused to do on the other chip as it would thermally throttle, and it passed with flying colors at only 210 watts. Same with Y-Cruncher SFT. I also ran all the other, less tough tests, and peak power usage was around 170 to 180 watts even in Cinebench R23, with peak CPU temps in the low 80s. So I was finally fully stable, or so I thought. I even ran Realbench another 3-4 times to confirm, with a peak temp of 84C, the average only in the high 70s, and CPU power consumption around 160 watts, as opposed to 225 watts and so much heat at the 5.6GHz clock. So I was finally stable, with much less power usage to boot, and I confirmed it more thoroughly this time.

Then weeks later, boom, a CPU internal WHEA after TLOU Part 1 shader compilation, and I threw in the towel on Intel Raptor Lake.
 