On Intel Raptor Lake, is there any truth to the rumors that disabling all E-cores hurts the single-threaded performance of the P-cores?

It depends what you're looking for in a chip. For a gaming-only system, which is what the OP seems interested in, 8-10 big cores would probably be a better solution. The issue is that Intel would get destroyed in tech reviews when compared to AMD.

Which is why I went to the 7800X3D. Got 8 big cores and no e-waste cores, and a whole bunch of L3 cache, which almost all games love.
 
I was answering a direct question asked of me. Which question were you answering again?
So please demonstrate or prove the 7-8x difference in latency.

Edit: That's the last thing I write in this thread as I see where you're going to take it; typical you.

Yeah, you’re getting a bit rabid as usual.
 
Which is why I went to the 7800X3D. Got 8 big cores and no e-waste cores, and a whole bunch of L3 cache, which almost all games love.

Yeah, for gaming you just want as many big cores as you can afford, but most workloads scale really well across cores, and the odd one or two can benefit from improved memory performance. That's where disabling one type of core might help. Although with mixed cores like Intel uses, it will always probably be a compromise.

I actually have more use cases for the “E-waste” cores. Lots of slower cores with better power efficiency would suit a few jobs I do. Fingers crossed the C-cores filter down from server parts soon, or Intel releases a ~40 E-core part.
 
Yeah, for gaming you just want as many big cores as you can afford, but most workloads scale really well across cores, and the odd one or two can benefit from improved memory performance. That's where disabling one type of core might help. Although with mixed cores like Intel uses, it will always probably be a compromise.


Yeah, you want memory performance, though the 3D cache really takes the load off memory and is so fast it does the job.

I also tuned the memory a bit for a slight extra bit of help. Even though it doesn't make much difference on a 3D chip, every little bit helps, and why not, if it's easy extra performance with rock-solid unconditional stability, which I have.
 
Yeah, you want memory performance, though the 3D cache really takes the load off memory and is so fast it does the job.

I also tuned the memory a bit for a slight extra bit of help. Even though it doesn't make much difference on a 3D chip, every little bit helps, and why not, if it's easy extra performance with rock-solid unconditional stability, which I have.

Sounds as if you have found a really good solution. Intel probably should consider a cost-down 8-big-core desktop chip for those looking for such. Forcing a chip on people who see it as paying over the odds for 50% weak sauce just isn't good optics. Maybe they should take another stab at 10 cores.
 
Sounds as if you have found a really good solution. Intel probably should consider a cost-down 8-big-core desktop chip for those looking for such. Forcing a chip on people who see it as paying over the odds for 50% weak sauce just isn't good optics. Maybe they should take another stab at 10 cores.

Yes totally agree.

They need to make another die with only 8-10 P cores and no e-waste cores.

Another die is needed, as apparently they do not get enough dies with all the E-core clusters defective to sell a part with only 8 cores and no e-waste cores.

Plus the binning will be garbage.

Also, Intel needs to get their process node issues fixed. They had issues with 10nm for a while until they finally got it working, though there still appear to be quality control problems and bad power consumption, and it still holds them back from other improvements.
Heck, Arrow Lake is supposedly still going to be on 10nm, with at best a 21% performance improvement and at worst more like 5%, whereas if their node was proper it would be a 25% improvement across the board and lower power consumption too.

They need to get their node straight. Their current 10nm, even though it works now, is kind of a patch and fix for their old 10nm problems, when they could not get proper yields and had to stay on 14nm. Their 10nm seems much weaker and more susceptible to degradation, especially on Raptor Lake above 5GHz. They brute force these chips to get great performance, but it causes bad heat and thermals, and maybe degradation or inconsistency with stability and WHEAs or quality problems?
 
Let’s see if your attitude improves. Or are you now about to argue latency isn't important?
So you are ignoring the question.

You made this statement.
DDR4 can be about 7-8x the latency of DDR3.

I believe it to be absolute codswallop, but I'm willing to entertain the fact that I could be wrong (wouldn't be the first time!), so I'm waiting for some proof from you to back it up and disprove my notion.
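
For what it's worth, one common way to sanity-check a claim like that is first-word (CAS) latency: nanoseconds = CL x 2000 / transfer rate in MT/s. A rough sketch below just runs that formula over a few typical retail kit specs (the kits are illustrative examples, not anyone's actual sticks in this thread):

# First-word (CAS) latency in nanoseconds: CL * 2000 / data rate (MT/s).
# The kits below are common retail specs, used purely as examples.
kits = {
    "DDR3-1600 CL9":  (9, 1600),
    "DDR3-2133 CL11": (11, 2133),
    "DDR4-3200 CL16": (16, 3200),
    "DDR4-3600 CL18": (18, 3600),
}

for name, (cl, rate_mts) in kits.items():
    latency_ns = cl * 2000 / rate_mts
    print(f"{name}: ~{latency_ns:.2f} ns")

By that measure typical DDR3 and DDR4 kits come out within a couple of nanoseconds of each other, since DDR4's higher CL numbers are offset by its higher clocks.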
 
Yes totally agree.

They need to make another die with only 8-10 P cores and no e-waste cores.

Another die is needed, as apparently they do not get enough dies with all the E-core clusters defective to sell a part with only 8 cores and no e-waste cores.

Plus the binning will be garbage.

Also, Intel needs to get their process node issues fixed. They had issues with 10nm for a while until they finally got it working, though there still appear to be quality control problems and bad power consumption, and it still holds them back from other improvements.
Heck, Arrow Lake is supposedly still going to be on 10nm, with at best a 21% performance improvement and at worst more like 5%, whereas if their node was proper it would be a 25% improvement across the board and lower power consumption too.

They need to get their node straight. Their current 10nm, even though it works now, is kind of a patch and fix for their old 10nm problems, when they could not get proper yields and had to stay on 14nm. Their 10nm seems much weaker and more susceptible to degradation, especially on Raptor Lake above 5GHz. They brute force these chips to get great performance, but it causes bad heat and thermals, and maybe degradation or inconsistency with stability and WHEAs or quality problems?

Yeah, but Intel getting production issues ironed out is hit and miss. As you point out, 10nm is still maybe a little iffy, and that's maybe 10 years on now.
 
Yeah, but Intel getting production issues ironed out is hit and miss. As you point out, 10nm is still maybe a little iffy, and that's maybe 10 years on now.


Yeah, true. Intel is so arrogant they refuse to use TSMC even though that would solve their problems. They insist it must be in-house even though their own process node has lots of issues.
 
The OP title says single thread, and the few pages I've looked at are all about games running 8 cores.

Can anyone run CB R23 single thread at full stock, with and without E-cores?

Just to see the numbers
 
The OP title says single thread, and the few pages I've looked at are all about games running 8 cores.

Can anyone run CB R23 single thread at full stock, with and without E-cores?

Just to see the numbers


When I had a 13900K (or two) last fall, I did single thread with the CPU-Z benchmark, with e-cores both on and off. In WIN10 it made zero difference and the score was always the same within margin of error. I also did the same with Cinebench and I think it was the same result.

In WIN11, the single thread score was worse by like 5-10% with all e-cores off, and it was inconsistent. With even 1 e-core on there was no inconsistency and it was always the same within margin of error. Same with Cinebench, though I do not remember how much the inconsistency was, as it took so much longer to run, and I do not remember the exact results, but it was similar I think.

My conclusion is that because WIN11 is Thread Director aware, it does not know how to distinguish between hyper-threads and real threads when there are no e-cores enabled, and that causes issues.

WIN10 is not aware of the Thread Director, so it treats the 13900K as a normal 8-core/16-thread CPU, and thus there is no performance loss in single thread with e-cores off.


That link explains why WIN10 vs WIN11

Though there may be more to the story. I suppose it's possible having e-cores off on 13th Gen hurts even in WIN10, despite not hurting the single thread score.

In theory 8 cores/16 threads should be no different than with e-cores on, except you lose the power of the e-cores, but the P-cores should perform better, or at least not be hurt, right? On 12th Gen that's true for sure.

On 13th Gen it's probably true, at least in WIN10, but maybe there's something hidden we do not know about which caused me problems and WHEAs, or maybe a hidden single thread performance regression due to Intel making it dependent on the e-cores so the ring bus could run at higher speeds on 13th Gen with e-cores on, and thus some weird unknown dependency? Falkentyne at overclock.net theorized that, but they were using WIN11. So there may be more to the story, or maybe not.
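
If anyone wants to retest this with the scheduler taken out of the picture, one option is to pin the test process to a single P-core so neither Thread Director nor the Windows scheduler can move it mid-run. A rough sketch using psutil is below; treating logical CPU 0 as a P-core hardware thread is an assumption (it's the usual layout on a 13900K, but check your own topology):

# Pin this process to one logical CPU so scheduler behaviour (WIN10 vs WIN11,
# Thread Director, e-cores on/off) cannot move the work around mid-run.
# Assumes psutil is installed and that logical CPU 0 is a P-core hardware
# thread, which is the usual layout on a 13900K but worth verifying.
import time
import psutil

psutil.Process().cpu_affinity([0])   # restrict this process to logical CPU 0

def spin(seconds: float) -> int:
    # Trivial integer busy loop standing in for a real single-thread test.
    end = time.perf_counter() + seconds
    iterations = 0
    while time.perf_counter() < end:
        iterations += 1
    return iterations

print("Iterations in 5 s pinned to CPU 0:", spin(5.0))

The same idea works for the real benchmarks too, e.g. setting affinity on the Cinebench or CPU-Z process from Task Manager (or launching it with "start /affinity"), so any WIN10 vs WIN11 difference that remains comes from the core itself rather than from where the thread got scheduled.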
 
In the first post I read, when disabling the E cores he overclocked the ring, which is not a fair comparison. I'd like to see just on and off with everything at stock.

In benchmarks the scores can give a false reading, as you get a full 24-thread score. But for the full heavy load comparison they should be off, so it should be a 16-thread result.
 