On Intel Raptor Lake, is there any truth to the rumors that disabling all E-cores hurts single-threaded performance of the P-cores?

Your second mistake was getting a 13900K when you really should have gone for a 13700K or even a 13600K. :) You had no use for so many E-cores on a 13900K, and they only add heat and power draw, no wonder you couldn't cool it, especially with HT on as well!

The maximum power usage I see on my overclocked CPU is about 140-150W (the same as my overclocked 9700K). I played Jedi Survivor yesterday with HWiNFO running from just after I started the game, and after 3 hours my average CPU power consumption was 98W. Not as good as the ~65W of my 7800X3D, but nowhere near the Raptor Lake power-consumption horror stories you often read about.
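If anyone wants to average their own session the same way, here is a rough Python sketch that reads a HWiNFO sensor CSV log. The file name and the "CPU Package Power [W]" column name are assumptions on my part, so check them against your own log header:

```python
# Rough sketch: average a package-power column from a HWiNFO CSV sensor log.
# Assumes logging was enabled in HWiNFO and the column is named
# "CPU Package Power [W]" -- adjust both to match your own log.
import csv

LOG_FILE = "hwinfo_log.csv"            # hypothetical path to your log
COLUMN = "CPU Package Power [W]"       # check your log header for the real name

values = []
with open(LOG_FILE, newline="", encoding="utf-8", errors="ignore") as f:
    for row in csv.DictReader(f):
        try:
            values.append(float(row.get(COLUMN, "").strip()))
        except ValueError:
            continue  # skip blanks and any trailing summary rows

if values:
    print(f"samples: {len(values)}")
    print(f"average: {sum(values) / len(values):.1f} W")
    print(f"peak:    {max(values):.1f} W")
```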

Your third mistake was thinking that because your system passes X or Y synthetic "stability" program, your system is stable. If you were getting WHEA errors, your system simply isn't stable and you have to spend extra time refining it. The 7800X3D is so much easier in this regard, as the potential overclocking headroom is so much smaller since AMD already "do it for you" by maxing out the silicon.

This is a key difference with Intel, as historically there can be decent overclocking headroom depending on the silicon lottery. Even now with Raptor Lake I often saw people with a 13600K, which has a default maximum turbo of 5.1GHz, hitting 6GHz (Linus for one).


I wanted 8 good cores, so a 13600K was out for me as it only has 6. While 6 is enough for gaming for now, 8 is a more future-proof solution.

Yes that is why I am now on 7800X3D.
 
Your third mistake was thinking that because your system passes X or Y synthetic "stability" program, your system is stable. If you were getting WHEA errors, your system simply isn't stable and you have to spend extra time refining it. The 7800X3D is so much easier in this regard, as the potential overclocking headroom is so much smaller since AMD already "do it for you" by maxing out the silicon.

To be fair, passing all the synthetic benchmarks and stability/stress tests did prove full stability on my Coffee Lake and earlier setups, as I never had any real-world stability issues on those.

I also tested TLOU Part 1 shader compilation after the other synthetic tests, as well as Cinebench, multiple times. Everything passed with no issues and no WHEAs. Then 3-4 weeks later, after a clean Win10 install, I go to game and boom, a WHEA internal CPU error during TLOU Part 1 shader compilation.

But my 9900K was perfectly stable overclocked in anything real-world and passed all benchmarks/stability/stress tests. Not the case with Raptor Lake.

I was not getting any WHEA CPU internal errors at first. They came 3-4 weeks later, despite passing Y-Cruncher, Cinebench R23, OCCT, Realbench, Linpack Xtreme, Prime95 and even TLOU Part 1 shader compilation a couple of times. It was 3-4 weeks later, when I went to do shader compilation in TLOU Part 1 after another fresh Windows install (as I decided to reorganize my PC and start gaming on it), that the first WHEA showed up, right when I thought I was rock stable.

And yes, I had a fresh Windows install when I first ran all the stress tests that passed with no WHEAs. For all my main gaming builds going back to 2006, I have always done another fresh Windows install after validating my overclock/tuning on the previous fresh install, and this one was no exception.
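For anyone wanting to see whether WHEAs have been quietly piling up between stress-test sessions, here is a small Python sketch that pulls recent WHEA-Logger entries out of the Windows System event log using the built-in wevtutil tool. It is just an illustration; the same query can be run straight from an admin command prompt:

```python
# Small sketch: list the most recent WHEA-Logger events from the System log,
# which is where "WHEA internal CPU error" entries end up.
# Uses the built-in wevtutil tool; run from an elevated prompt if needed.
import subprocess

query = "*[System[Provider[@Name='Microsoft-Windows-WHEA-Logger']]]"
result = subprocess.run(
    ["wevtutil", "qe", "System", f"/q:{query}",
     "/c:20", "/rd:true", "/f:text"],   # newest 20 entries, readable text
    capture_output=True, text=True,
)
print(result.stdout.strip() or "No WHEA-Logger events found.")
```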
 
Then the 13700K(F) would have been way better for you than that power monster 13900K! The 7800X3D is a splendid CPU and should keep you happy for many years to come. ;)



Your CPU was ultimately on the edge of stability, and even something like a BIOS update or a driver/software update can push it over the edge.

I only ever use Realbench or Cinebench as a preliminary baseline but the real stability test starts when I actually use it properly. So I will run my most intensive games or software and only after extensive use and time without issue will I consider it stable.


I got the 13900K primarily for its better-binned IMC and P-cores. I always intended to shut off the E-cores anyway.

I did try a 13700K I got for a good price after selling my 13900KS, which I had given up on trying to cool at higher clocks. Its IMC was horrible, even though it seemed to run OK at 5.3GHz core and 4.8GHz ring. Actually, maybe it was the ring that was unstable at 4.8GHz and not the RAM after all, which was unacceptable given that a 12700K from last gen did 4.8GHz ring easily. So I gave up on it, sold the 13700KF at a $25 loss, and went back to a used 13900KF I got for a good deal. The 13900KF seemed to have a very good IMC.

But that was back when I still wanted to tune clocks manually, which once again did not work out. I no longer wanted to put in the effort and patience to test it further, having already wasted so much time doing it with multiple 13900Ks and one 13700K.

Now I have given up on that and am much happier with the 7800X3D.
 
The P-core architecture is actually good, but the reliability, power consumption and signal balancing of the DDR5 IMC leave a lot to be desired.

But the hybrid arch is absolute garbage and has no place on the high-end desktop.

It's an absolute shame and embarrassment that Intel puts those e-waste cores in all their chips, and you have no option to buy an 8-core part without them!

I imagine if Intel actually made a die on the latest arch with only 8 P-cores, they would be bigger and more reliable, as QA would be easier.
 
It depends on what you're looking for in a chip. For the gaming-only system the OP seems interested in, 8-10 big cores would probably be a better solution. The issue is Intel would get destroyed in tech reviews when compared to AMD.

Which is why I went to the 7800X3D. I get 8 big cores, no e-waste cores, and a whole bunch of L3 cache, which almost all games love.
 
Yeah, for gaming you just want as many big cores as you can afford, but most other workloads scale really well across cores, and the odd one or two can benefit from improved memory performance. That's where disabling one type of core might help. Although with mixed cores like Intel uses, it will always probably be a compromise.


Yeah, normally you want memory performance, though the 3D cache really takes the load off the memory and is so fast that it does the job.

I also tuned the memory a bit for a slight extra boost. Even though it doesn't make much difference on a 3D chip, every little bit helps, and why not, if it's easy extra performance with rock-solid, unconditional stability, which I have.
 
Sounds as if you have found a really good solution. Intel probably should consider a cost-down desktop chip with 8 big cores for those looking for such. Forcing a chip on people who see it as paying over the odds for 50% weak sauce just isn't good optics. Maybe they should take another stab at 10 cores.

Yes totally agree.

They need to make another die with only 8-10 P cores and no e-waste cores.

Another die is needed because, apparently, there are not enough chips with fully defective E-core clusters for them to harvest and sell a part with only 8 P-cores and no e-waste cores.

Plus the binning will be garbage.

Also, Intel needs to get their process node issues fixed. They had issues with 10nm for a while until they finally got it working, though there still appear to be quality-control problems and bad power consumption, and it still holds them back from other improvements.
Heck, Arrow Lake is supposedly still going to be on 10nm, with at best a 21% performance improvement and at worst more like 5%, whereas if their node were sorted it would be a 25% improvement across the board and lower power consumption too.

They need to get their node straight. Their current 10nm, even though it works now, is kind of a patch over their old 10nm problems from when they could not get proper yields and had to stay on 14nm. Their 10nm seems much weaker and more susceptible to degradation, especially on Raptor Lake above 5GHz. They brute-force these chips to get great performance, but that causes bad heat and thermals, and maybe degradation, inconsistent stability, WHEAs, or quality problems???
 
Yeah, but Intel getting production issues ironed out is hit and miss. As you point out, 10nm is still maybe a little iffy, and that's maybe 10 years on now.


Yeah, true. Intel is so arrogant they refuse to use TSMC even though that would solve their problems. They insist it must be in-house, even though their own process node has lots of issues.
 
The OP title says single thread, and the few pages I've looked at are all about games running on 8 cores.

Can anyone run CB R23 single thread at full stock, with and without E-cores?

Just to see the numbers.


When I had a 13900K (or two) last fall, I ran the single-thread CPU-Z benchmark with E-cores both on and off. In Win10 it made zero difference and the score was always the same within the margin of error. I also did the same with Cinebench and I think the results were the same.

In Win11, the single-thread score was worse by around 5-10% with all E-cores off, and it was inconsistent. With even one E-core on, there was no inconsistency and the score was always the same within the margin of error. Same with Cinebench, though it took much longer to run and I do not remember the exact results or how inconsistent it was, but I think it was similar.
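To put "within margin of error" versus a 5-10% drop on firmer ground, the simplest thing is several runs per configuration and a look at the spread. A quick sketch; the scores below are made-up placeholders, not my results:

```python
# Quick sketch: compare repeated single-thread scores with E-cores on vs off.
# The example numbers are placeholders -- substitute your own CPU-Z or
# Cinebench runs before drawing any conclusion.
from statistics import mean, stdev

ecores_on  = [871, 869, 873, 872, 870]   # placeholder scores
ecores_off = [842, 868, 801, 855, 869]   # placeholder scores

for label, scores in (("E-cores on", ecores_on), ("E-cores off", ecores_off)):
    print(f"{label:12s} mean {mean(scores):6.1f}  +/- {stdev(scores):4.1f}  (n={len(scores)})")
```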

My conclusion is that because Win11 is Thread Director aware, it does not know how to distinguish between hyper-threads and real cores when no E-cores are enabled, and that causes issues.

Win10 is not aware of Thread Director, so it treats the 13900K as a normal 8-core, 16-thread CPU, and thus there is no single-thread performance loss with the E-cores off.
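One way to take the scheduler (Thread Director on Win11, the plain scheduler on Win10) out of the comparison is to pin the benchmark to a single P-core thread yourself. A rough sketch using psutil; the benchmark command is a placeholder, and it assumes logical CPU 0 is the first hyper-thread of a P-core, which is the usual enumeration on Raptor Lake but worth verifying first:

```python
# Rough sketch: launch a single-thread benchmark pinned to one logical CPU so
# the OS scheduler cannot move it between P-cores, E-cores, or hyper-threads.
# Requires psutil (pip install psutil); BENCH_CMD is a placeholder.
import subprocess
import psutil

BENCH_CMD = ["st_benchmark.exe"]            # placeholder: your benchmark binary

proc = subprocess.Popen(BENCH_CMD)
psutil.Process(proc.pid).cpu_affinity([0])  # pin to logical CPU 0 only
proc.wait()
```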


That link explains the Win10 vs Win11 difference.

Though there may be more to the story. I suppose it's possible that having E-cores off on 13th gen hurts even in Win10, despite not hurting the single-thread score.

In theory, 8 cores / 16 threads should be no different than with the E-cores on, except you lose the extra throughput of the E-cores, and the P-cores should perform better or at least not be hurt, right?? On 12th gen that is true for sure.

On 13th gen it is probably true too, at least in Win10, but maybe there is something hidden we do not know about that caused my problems and WHEAs, or maybe a hidden single-thread performance regression because Intel made the design dependent on the E-cores so the ring bus could run at higher speeds on 13th gen with E-cores on, and thus some weird unknown dependency??? Falkentyne at overclock.net theorized that, but they were using Win11. So maybe there is more to the story???? Or maybe not?????
 