
On Intel Raptor Lake, is there any truth to the rumors that disabling all E-cores hurts single-threaded performance of the P-cores?
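(For anyone who would rather check this directly than rely on rumors: a rough, hypothetical sketch in Python with psutil is below. It pins the process to one core and times a fixed workload; run it with E-cores enabled, then again after disabling them in the BIOS, and compare. It assumes logical CPU 0 is a P-core thread, which is worth confirming first in Task Manager or HWiNFO.)

```python
# Rough single-thread timing sketch: pin this process to one core and time a
# fixed CPU-bound workload. Run it with E-cores enabled, then again after
# disabling them in the BIOS, and compare the two timings.
# Assumption: logical CPU 0 is a P-core thread on this system (check first).
import time
import psutil

def workload(n: int = 20_000_000) -> int:
    # Simple CPU-bound loop; the absolute result only matters for comparison.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    proc = psutil.Process()
    proc.cpu_affinity([0])          # pin to the assumed P-core thread
    start = time.perf_counter()
    workload()
    elapsed = time.perf_counter() - start
    print(f"Pinned single-thread run took {elapsed:.3f} s")
```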

Yikes, 1.394V for an all-core workload. There is no way to cool that on the best air coolers, which are the dual-tower ones with 120 to 140mm fans like the NH-D15, Dark Rock Pro 3 and similar.

I imagine you must have an insanely good AIO or custom-loop water cooling for that.
Never mind if you also have the 16 E-cores enabled (4 clusters of 4, which take roughly the die space of 4 extra P-cores, since 4 E-cores occupy about the same area as 1 P-core), which adds another ~35% power and heat. You must have insane cooling for that.
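(For what it's worth, the rough area and power figures above can be put together like this; the 4-E-cores-per-P-core area ratio and the ~35% extra power are the post's own ballpark claims, not measurements.)

```python
# Back-of-the-envelope numbers from the post above (rough claims, not measurements):
# 16 E-cores arranged as 4 clusters of 4, with ~4 E-cores per P-core of die area,
# and roughly +35% package power when the E-cores are enabled.
e_cores = 16
e_cores_per_p_core_area = 4          # claimed area ratio (assumption from the post)
p_core_equivalent_area = e_cores / e_cores_per_p_core_area
print(f"{e_cores} E-cores ~ die area of {p_core_equivalent_area:.0f} P-cores")

p_core_only_power_w = 250            # illustrative all-P-core load power
extra_fraction = 0.35                # claimed extra power with E-cores on
print(f"~{p_core_only_power_w * (1 + extra_fraction):.0f} W with E-cores enabled")
```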
Not sure what cooling you're using, but mine was easy to cool when testing with my D15, and even under load it was in the 80s, though my normal cooling is an AFII 420mm and it barely if ever leaves the 70s.

Also, as has been mentioned many times, the best way to set these up for gaming is to leave E-cores on and disable HT, which is the way I have mine.
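(If you go the E-cores-on / HT-off route, a quick way to confirm the HT toggle actually took effect is to compare logical and physical core counts; a minimal sketch using psutil is below. On a 13900K with HT off, both should read 24.)

```python
# Quick check that Hyper-Threading is actually off: with HT disabled the
# logical and physical core counts should match (e.g. 8P + 16E = 24 both ways
# on a 13900K with HT off). Requires psutil.
import psutil

logical = psutil.cpu_count(logical=True)
physical = psutil.cpu_count(logical=False)
print(f"logical CPUs: {logical}, physical cores: {physical}")
print("HT appears to be", "disabled" if logical == physical else "enabled")
```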

Adaptive voltage is always the way to go, and right now while typing this I'm using 0.75V. ;) (You'll notice the idle 7800X3D power usage is probably double that of your Raptor Lake CPU; mine is.)
 
And no, I doubt AMD will ever use a hybrid approach. They have already said they are not...


I take all these predictions with a pinch of salt, but it does not seem beyond the realms of possibility.
 



Well, yeah, AMD has said they are considering this approach for mobile and APUs, which are not really the enthusiast mid-to-high-end parts.

APUs are more for those who do not want a discrete video card.

Those are not high-end desktop CPU SKUs. I have read that AMD is not going that route on high-end desktop SKUs.

Those leaks show 4 P-cores and 8 E-cores.

On its mainstream high-end Zen 3 and Zen 4 parts, AMD has a minimum of 6 cores and no 4-core SKUs.

So AMD is not doing it on all CPUs.

With Intel, shame on them for doing it on their Core i9 and i7 parts. There is no option to avoid the e-waste cores if you want 8 P-cores, which is just not right.

With AMD there is no evidence that they will be on anything other than APUs, lower-tier desktop SKUs, and mobile parts, which is a great thing.
 


I used an NH-D15S with P-cores at 5.6GHz and vcore set at 1.31V; under load VCORE was around 1.25V and it was mostly stable, but temps got into the mid-to-high 90s and could even hit 100C. If I ran P95 Small FFTs with AVX on, or Y-Cruncher SFT, there was no way temps would not hit 100C right away and throttle.

And it turned out not to be fully stable, as CPU-related WHEAs would come down the road running Cinebench again or shader compilation in TLOU Part 1. It did not always happen either; it was random. And only CPU WHEAs, nothing related to memory, PCIe or anything else, CPU only.

I described it in more detail above.
 
That was your first mistake. :) I stopped using P95 or Y-Cruncher etc. for CPU stability testing years ago. If you are going to use a program then you're much better off using Realbench, as that also stresses the GPU subsystem at the same time as the CPU, so it is much more realistic for gaming performance.
 


I also used Realbench and it passed with flying colors, along with OCCT and Linpack Xtreme. I used multiple programs to confirm stability and they all passed.

Yet a random internal CPU WHEA appeared running Cinebench R23 a few weeks later, and also during TLOU Part 1 shader compilation. The game sometimes crashed too, confirming I was not actually fully stable after all.

Have you tried The Last of Us Part 1 shader compilation with no WHEAs?

And power consumption was around 230 watts during some of these loads, and around 250 watts under Cinebench R23 with the chip at 5.6GHz core and 5GHz ring.

With another 13900K chip I clocked lower: 5.4GHz all-P-core and 4.8GHz ring at only 1.225V with LLC6. Load VCORE in tough stress tests dropped to about 1.18V. I even ran P95 Small FFT with AVX enabled, which I refused to do on the other chip as it would thermally throttle, and it passed with flying colors at only 210 watts. Same with Y-Cruncher SFT. I also ran all the other, less demanding tests, and peak power usage was around 170 to 180 watts even in Cinebench R23, with peak CPU temps in the low 80s. So I was finally fully stable, or so I thought. I even ran Realbench another 3-4 times to confirm, with a peak temp of 84C, an average only in the high 70s, and CPU power consumption around 160 watts as opposed to 225 watts and all that heat at the 5.6GHz clock. So I was finally stable, with much less power usage to boot, and I confirmed it more thoroughly this time.
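(As a rough sanity check on those two configurations: dynamic CPU power scales roughly with V² × f. A quick sketch with the set voltages and clocks quoted above; treat it as an estimate only, since workload, leakage and temperature matter too.)

```python
# First-order dynamic-power estimate: P is roughly proportional to V^2 * f.
# Using the set voltages/clocks from the two chips above (1.31 V @ 5.6 GHz vs
# 1.225 V @ 5.4 GHz). Real numbers also depend on workload, leakage and
# temperature, so this is only a rough sanity check, not a prediction.
v1, f1, p1 = 1.31, 5.6, 250.0   # first chip: volts, GHz, ~watts in Cinebench R23
v2, f2 = 1.225, 5.4             # second chip settings

scale = (v2 / v1) ** 2 * (f2 / f1)
print(f"scaling factor ~ {scale:.2f}")
print(f"estimated power at the lower settings ~ {p1 * scale:.0f} W")
# The post reports an even bigger real-world drop (roughly 170-210 W depending
# on the test), which is plausible once lower temperatures reduce leakage too.
```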

Then weeks later, boom, a CPU Internal WHEA after TLOU Part 1 shader compilation, and I threw in the towel on Intel Raptor Lake.
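(For anyone chasing the same kind of intermittent errors: WHEA events land in the Windows System event log under the Microsoft-Windows-WHEA-Logger provider. A minimal sketch that lists the most recent entries via the built-in wevtutil tool is below; the query string is standard XPath filtering on the provider name.)

```python
# Minimal sketch: list the most recent WHEA-Logger entries from the Windows
# System event log using the built-in wevtutil tool. Corrected and fatal
# hardware errors (the "CPU Internal" WHEAs mentioned above) are reported
# under this provider.
import subprocess

QUERY = "*[System[Provider[@Name='Microsoft-Windows-WHEA-Logger']]]"
cmd = [
    "wevtutil", "qe", "System",
    f"/q:{QUERY}",
    "/c:10",        # last 10 matching events
    "/rd:true",     # newest first
    "/f:text",
]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout or "No WHEA-Logger events found.")
```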
 
@MartinPrince IIRC I remember reading one of your posts where you mention memory timings / optimisations. Even though you were running around 100MHz or so faster than Robert (???), he was ahead of your performance due to his memory timings, or something like that.

What memory is it that you are using on your DDR4 board, oh and what board is it..?

Thanks.
 
Your 2nd mistake was getting a 13900K when you really should have got a 13700K or even a 13600K. :) You had no use for so many E-cores on a 13900K and they only result in more heat and power, no wonder you couldn't cool it, plus you also have HT on!

The maximum power usage I see on my overclocked CPU is about 140-150W (the same as my overclocked 9700K). I played Jedi Survivor yesterday and had HWiNFO running from just after I started the game, and after 3 hours my average CPU power consumption was 98W - not as good as the ~65W of my 7800X3D, but nowhere near the Raptor Lake power-consumption horror stories you often read about.
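(If you log a session like that to CSV from HWiNFO, averaging the package power afterwards is trivial. A rough sketch is below; the file name and the exact column header are assumptions, so adjust them to match whatever your log actually contains.)

```python
# Rough sketch for averaging CPU package power from an HWiNFO sensor CSV log
# over a gaming session. The column name varies by system / HWiNFO version,
# so adjust POWER_COLUMN to whatever your log actually calls it.
import csv

LOG_PATH = "hwinfo_log.csv"              # hypothetical log file name
POWER_COLUMN = "CPU Package Power [W]"   # assumption: check your CSV header

readings = []
with open(LOG_PATH, newline="", encoding="utf-8", errors="ignore") as f:
    for row in csv.DictReader(f):
        value = row.get(POWER_COLUMN, "").strip()
        if value:
            try:
                readings.append(float(value))
            except ValueError:
                pass  # skip non-numeric footer rows the logger sometimes writes

if readings:
    print(f"samples: {len(readings)}, average: {sum(readings)/len(readings):.1f} W")
```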

Your 3rd mistake was thinking that because your system passes X or Y synthetic "stability" program your system is stable. If you were getting WHEA errors your system simply isn't stable and you have to spend extra time refining it. The 7800X3D is so much easier in this regard, as the potential overclocking headroom is so much less because AMD already "do it for you" by maxing out the silicon.

This is a key difference with Intel, as historically there can be some decent headroom for overclocking depending on the silicon lottery. Even now with Raptor Lake I often saw people with a 13600K, with a default maximum turbo of 5.1GHz, hitting 6GHz (Linus for one).
 


I wanted 8 good cores, so a 13600K was out for me as it only has 6. While 6 is enough for gaming for now, 8 is a more future-proof solution.

Yes, that is why I am now on a 7800X3D.
 
@MartinPrince IIRC I remember reading one of your posts where you mention memory timings / optimisations. Even though you were running around 100MHz or so faster than Robert (???), he was ahead of your performance due to his memory timings, or something like that.
Good memory! :D (no pun intended!)
It was this thread here... https://forums.overclockers.co.uk/threads/amd-vs-intel-single-threading.18872467/page-3

and the results were here. https://forums.overclockers.co.uk/t...ingle-threading.18872467/page-3#post-33385315

It was when Humbug, bless his cotton socks, said his 3600X (6C/12T) was faster than my overclocked 9700K (8C/8T) in photo editing, only for it to turn out to be nearly 40% slower when actual tests were done rather than looking at a chart or graph on TechPowerUp. ;)

Yes, originally Robert's was faster than mine despite 100MHz slower clocks on the CPU, but after I "saw the light" and got some faster memory and tuned it, they came out to the same result.


What memory is it that you are using on your DDR4 board, oh and what board is it..?

Thanks.

The board is an Asus ROG Strix Z690-A D4 and the memory is G.Skill Trident Z Neo F4-4000C16D-32GTZNA that I'm running at 4200C15. It will do 4300C15 stably but it needs a tad more voltage than I really want to use even though I've got fans directly cooling the memory.

 
Your 3rd mistake was thinking that because your system passes X or Y synthetic "stability" program your system is stable. If you were getting WHEA errors your system simply isn't stable and you have to spend extra time refining it. The 7800X3D is so much easier in this regard, as the potential overclocking headroom is so much less because AMD already "do it for you" by maxing out the silicon.

To be fair, passing all synthetic benchmarks and stability stress tests proved full stability on my Coffee Lake and prior setups, as I never had any real-world stability issues.

I also tested TLOU Part 1 shader compilation after the other synthetic tests, as well as Cinebench multiple times. No issues, it passed, and no WHEAs. Then 3-4 weeks later, after a clean Win10 install, I went to game and then boom, a WHEA internal CPU error during TLOU Part 1 shader compilation.

But my 9900K was perfectly stable in anything real-world with an overclock that passed all benchmark/stability/stress tests. Not the case with Raptor Lake.

I was not getting any WHEA CPU Internal errors at first. They came 3-4 weeks later despite passing Y-Cruncher, Cinebench R23, OCCT, Realbench, Linpack Xtreme, Prime95, and even TLOU Part 1 shader compilation a couple of times. 3-4 weeks later, when I went to do shader compilation in TLOU Part 1 after another fresh Windows install (as I decided to reorganize my PC and start gaming on it), is when the first WHEA showed up, when I thought I was rock stable.

And yes, I had a fresh Windows install at first when I ran all the stress tests that passed with no WHEAs. I have always done a fresh Windows install after validating my overclocked/tuned stability on the prior fresh Windows install I tested overclock stability with, for all my main gaming builds going back to 2006, and this one was no exception.
 

Then the 13700K(F) would have been way better for you than that power monster 13900K! The 7800X3D is a splendid CPU and should keep you happy for many years to come. ;)


Your CPU was ultimately on the edge of stability, and even something like a BIOS update or driver/software update can push you over the edge.

I only ever use Realbench or Cinebench as a preliminary baseline, but the real stability test starts when I actually use the machine properly. So I will run my most intensive games or software, and only after extensive use and time without issue will I consider it stable.
 


I got the 13900K primarily for a better-binned IMC and P-cores. I always intended to shut off the E-cores anyway.

I did try a 13700K I got for a good price after selling my 13900KS, which I gave up on trying to cool at higher clocks, but its IMC was horrible even though it seemed to run OK at 5.3GHz core and 4.8GHz ring. Well, actually, maybe the ring was unstable and not the RAM after all, even at 4.8GHz ring, which was unacceptable given a 12700K last gen did 4.8GHz ring easily. So I gave up on it and went back to a used 13900KF for a good deal after selling the 13700KF at a $25 loss. The 13900KF seemed to have a very good IMC.

But that was still when I wanted to manually clock tune, which once again did not work out, and I no longer wanted to put in the effort and patience to test it further; I had already wasted so much time doing it with multiple 13900K chips and one 13700K.

Now I have given up on that and am much happier with the 7800X3D.
 
@MartinPrince So, that was your lightbulb moment then, in terms of memory potential, interesting.
I suppose a DDR5 board and memory haven't been considered due to the cost vs gains with what you have already, and the longevity of the platform in terms of the socket?

Another "lightbulb" moment for me, IIRC, is that you once noted that your Intel rig was closer to an AMD X3D rig in games than they would be in application-type performance.
No, I'm not stalking you :D

That was kinda interesting, interpreting it as: if you can put the effort in, the potential within an RL platform can be pretty decent, whilst the "out of the box" experience of an AMD X3D rig is pretty much provided for you.
 
As you said, hindsight is 20/20, and the silicon lottery can just as easily go against you as for you. I generally advise folk to avoid the 13900K unless you like to go extreme; for me it was Intel's way of trying to reclaim the multi-threaded crown (already having the single-threaded one), but the way they went about it was inelegant and an unwanted power hog. You probably would have had a better outcome going through three 13700Ks rather than trying out 13900Ks.

BTW, what motherboard were you using?

I've overclocked two of my friends' 13700Ks and they both did 5.6GHz all-core at relatively low voltages.
 
@MartinPrince So, that was your lightbulb moment then, in terms of memory potential, interesting.
I suppose a DDR5 board and memory haven't been considered due to the cost vs gains with what you have already, and the longevity of the platform in terms of the socket?

You're correct; DDR5 was a non-starter for me for now, as a lot of the programs/games I use are memory-latency sensitive (not bandwidth sensitive), and disappointingly I realised that DDR5's latency was not the improvement over DDR4 I was hoping for. If the improvement were there then cost wouldn't have been much of an issue, but in reality DDR5 would have been slower for me in many instances, so I couldn't really consider it.

This is also why a 13700K is much better for me than a 7800X3D etc. Below is a very well tuned (better than mine) 7800X3D (thanks to @Brizzles); compare its memory latency etc. to my 13700K.

Well tuned 7800X3D DDR5

Mine. 13700K DDR4


Only the write speed is way better, but the all-important latency (for my usage) is much worse. Hopefully as DDR5 matures and increases in speed we will see the improvements that will make a change easy, but for now I'm better off with DDR4. I only really bother to upgrade once I see a ~50% performance increase in the tasks I do, which normally works out to around 4 years and takes me close to or past the lifetime of any socket, so socket upgradeability doesn't really factor for me.
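(The latency point can be put into rough numbers: first-word CAS latency in nanoseconds is about 2000 × CL / data rate in MT/s. That is only one component of the loaded latency a memory benchmark reports, but it illustrates why tightly tuned DDR4 can still undercut typical DDR5. The DDR5-6000 C30 entry below is an illustrative example, not a kit from this thread.)

```python
# Rough CAS-latency comparison: first-word latency (ns) ~ 2000 * CL / data rate (MT/s).
# This is only the CAS component, not the full round-trip latency a memory
# benchmark reports, but it illustrates the DDR4 vs DDR5 point made above.
def cas_ns(data_rate_mts: int, cl: int) -> float:
    return 2000 * cl / data_rate_mts

kits = {
    "DDR4-4200 C15 (the kit above)": (4200, 15),
    "DDR4-4000 C16 (rated XMP)":     (4000, 16),
    "DDR5-6000 C30 (typical)":       (6000, 30),   # illustrative DDR5 example
}
for name, (rate, cl) in kits.items():
    print(f"{name}: ~{cas_ns(rate, cl):.2f} ns")
```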

Another "lightbulb" moment for me, IIRC, is that you once noted that your Intel rig was closer to a AMD X3D rig in terms of games than they would be in their capability for application type performance.
No, I'm not stalking you :D

That was kinda interesting, interpretating it as if you can put the effort into it the potential within a RL platform can be pretty decent, whilst the "out of the box" experience for a AMD x3D rig is pretty much provided for you.

You're welcome to stalk me as long as you stop short at sending love letters! :D

Yes, that was the case for me and in my usage of both CPUs. This is when both CPUs are at stock, but because the 13700K generally has greater scope for overclocking the CPU and tuning the memory, it swings much more in its favour once both CPUs are overclocked and tuned for an all-purpose system.

CPU-Z single-thread score is a very good performance indicator for the software that I use. With my overclocked 7800X3D I'm scoring just over ~700 and with the 13700K I'm scoring ~1000.

Oh and to stay near to on topic, E-cores should generally always be left on! :)
 
DDR4 can be about 7-8x the latency of DDR3. I don’t recall much moaning about Skylake moving to DDR4.

The issue is that Intel packing Atom cores and Lake cores on the same chip brings issues that require workarounds and compromise. Intel's Ringbus + Mesh chips are based on a DDR2/3 (Atom) and DDR3/4 (Skylake) family of designs.
 

Tuned DDR4 was 30-31ns for me on my 9900K.

By your 7-8x measure, DDR3 would be around 4-5ns?

"Can be" is a useless metric, as it assumes hundreds of thousands of different combinations and factors.

Stop posting in technical threads, for everyone's sake, please.
 
The issue is that Intel packing Atom cores and Lake cores on the same chip brings issues that require workarounds and compromise. Intel's Ringbus + Mesh chips are based on a DDR2/3 (Atom) and DDR3/4 (Skylake) family of designs.

Let me know where DDR2 or DDR3 support is on Gracemont/Tremont/Goldmont Plus?

We get that you don't like these Intel chips, but continuing to troll by calling them Atom/Skylake? Either get over this already, or you'll likely find yourself not being able to "discuss" such topics.
 