
Raptor Lake Leaks + Intel 4 developments

I have looked at reviews, and most gaming benchmarks show the Zen 4 7700X barely beating the 12700K despite its cores being clocked 400-700MHz higher than the 12700K's P cores.

I mean, if they were at the same clock speed, I'm not sure which would win.

Though I did see a 7950X at 5GHz trading blows with or beating a 12900K at 4900MHz.

So they seem all over the place.


In this one the 12700K's P cores are clocked at 4700MHz and the 7700X's cores at 5400-5500MHz, and they trade blows. Ignore the e-cores; they do nothing for games and are crap, so they do not count.


In this review the 12900K's P cores are clocked around 4900MHz and the 7950X's cores are all clocked around 5000MHz. Ignore the Intel CPU's overall clock, as it averages in the e-waste cores, which do not count and do nothing for games; look at the P-core clock instead. Since the 7950X has only full-size cores, look at its overall core clock, which is around 5000MHz.

They trade blows, which proves your point here.



Ironically, though, this one trades blows with the 7950X at 5500MHz and the 12900K's P cores around 4900MHz, so it again proves the point of IPC being better on Golden Cove??

Strange all around.

They aren’t really tests of IPC.
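For what it's worth, what the clock-for-clock comparisons above measure is a crude "performance per clock" ratio, not IPC. A minimal sketch, with placeholder numbers rather than real benchmark data:

```python
# Crude "performance per clock": normalise gaming FPS by core clock.
# This is NOT true IPC -- games are heavily sensitive to cache and
# memory latency, so the ratio only hints at per-clock throughput.

def perf_per_clock(fps: float, clock_ghz: float) -> float:
    """FPS delivered per GHz of core clock."""
    return fps / clock_ghz

# Placeholder figures in the spirit of the comparisons above:
fps_7700x, clk_7700x = 150.0, 5.4    # 7700X cores at ~5.4GHz
fps_12700k, clk_12700k = 148.0, 4.7  # 12700K P cores at ~4.7GHz

ratio = perf_per_clock(fps_12700k, clk_12700k) / perf_per_clock(fps_7700x, clk_7700x)
print(f"12700K does {ratio:.2f}x the per-clock work of the 7700X in this title")
```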
 
What if the 95°C chip was doing productivity tasks faster at equal power usage?

I couldn't care less which side it was from; I'd go with the 95°C chip.
Agreed, but that's not the issue. The issue is that some users, who seem to support one particular company over another for unknown reasons, make up lies and spread them around the forums. A page ago one of those users tried to pretend that the 7950X (and all of Zen 4 in general) operates at the same temperatures as any Intel chip, which is just a lie.
 
I have looked at reviews, and most gaming benchmarks show the Zen 4 7700X barely beating the 12700K despite its cores being clocked 400-700MHz higher than the 12700K's P cores.

I mean, if they were at the same clock speed, I'm not sure which would win.

Though I did see a 7950X at 5GHz trading blows with or beating a 12900K at 4900MHz.

So they seem all over the place.


In this one the 12700K's P cores are clocked at 4700MHz and the 7700X's cores at 5400-5500MHz, and they trade blows. Ignore the e-cores; they do nothing for games and are crap, so they do not count.


In this review the 12900K's P cores are clocked around 4900MHz and the 7950X's cores are all clocked around 5000MHz. Ignore the Intel CPU's overall clock, as it averages in the e-waste cores, which do not count and do nothing for games; look at the P-core clock instead. Since the 7950X has only full-size cores, look at its overall core clock, which is around 5000MHz.

They trade blows, which proves your point here.



Ironically, though, this one trades blows with the 7950X at 5500MHz and the 12900K's P cores around 4900MHz, so it again proves the point of IPC being better on Golden Cove??

Strange all around.
Games aren't testing IPC; it's mainly latency. That's why Intel was always better in gaming. RPL will be the last monolithic chip from Intel as well.

And for the love of god, do not turn off the e-cores, lol. I've run a benchmark, fully CPU-bound, on Spider-Man. E-cores are freaking great and they help with performance.
 
Games aren't testing IPC; it's mainly latency. That's why Intel was always better in gaming. RPL will be the last monolithic chip from Intel as well.

And for the love of god, do not turn off the e-cores, lol. I've run a benchmark, fully CPU-bound, on Spider-Man. E-cores are freaking great and they help with performance.



E-cores do not do crap for performance in any game unless you are running intensive stuff in the background on a bloated Windows install that constantly checks for Windows updates while gaming, or you are streaming. And even then, extra P cores are way, way better than peasant cores. Picture a 7950X with the game threads locked to one CCD and the other CCD handling all the intensive background stuff, so there is no cross-CCD latency penalty for game threads.
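If anyone wants to try that CCD-pinning idea, here's a minimal sketch using psutil. It assumes a 7950X where logical CPUs 0-15 map to CCD0 and 16-31 to CCD1 (verify the mapping on your own system), and "game.exe" is a hypothetical process name:

```python
# Sketch of pinning a game's threads to one CCD with psutil.
import psutil

GAME_EXE = "game.exe"        # hypothetical process name
CCD0 = list(range(0, 16))    # game threads stay here (assumed mapping)
CCD1 = list(range(16, 32))   # background work goes here (assumed mapping)

for p in psutil.process_iter(["name"]):
    try:
        if p.info["name"] == GAME_EXE:
            p.cpu_affinity(CCD0)  # pin the game to one CCD
        # heavy background processes could be pinned to CCD1 the same way
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass  # processes can vanish or be protected; skip them
```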

And as for games not testing IPC: isn't IPC related to latency, since lower latency means better effective IPC?
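There is something to that: in the textbook stall model, effective IPC does drop as memory latency rises. A quick sketch with illustrative numbers, not measurements of any real CPU:

```python
# Textbook CPI stall model: effective CPI = base CPI +
# misses-per-instruction * miss penalty in cycles; IPC is the reciprocal.

def effective_ipc(base_ipc: float, misses_per_instr: float,
                  miss_penalty_cycles: float) -> float:
    base_cpi = 1.0 / base_ipc
    return 1.0 / (base_cpi + misses_per_instr * miss_penalty_cycles)

# Same hypothetical core, two memory latencies:
print(effective_ipc(4.0, 0.005, 200))  # ~0.80 IPC with 200-cycle misses
print(effective_ipc(4.0, 0.005, 100))  # ~1.33 IPC with 100-cycle misses
```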
 
In fact, Zen 4 approaches the temperature of boiling water.

Which is relevant how? Or did you just want to tack some random fact onto the end of a statement for a bit of 'wow' factor?

It was 17°C and sunny outside today, far too hot for snow, but hey, luckily it wasn't as hot as the plasma inside a nuclear fusion tokamak, which is 150 million Celsius, over 10x hotter than the centre of the Sun!!!111!!!!!11!!! :cry:
 
The main issue with temps is that they're your first barrier to performance. For the majority of consumers, power draw is only as relevant as its impact on temps. If power draw were consumers' main objection, people would not be buying high-end GPUs at an alarming rate.

Undervolting is great if it leads to a net performance gain, but I'm not someone who buys the flagship model of a product only to run it below its performance spec, and certainly not well below its full capabilities.

Given a choice between a 300W chip that runs at 80°C and a 200W chip that runs at 95°C, I'll take the 300W/80°C chip all day. I have more room to play with and can get more out of it. To each his own, though.
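As a back-of-envelope check on that trade-off: the cooler you need is set by the required case-to-ambient thermal resistance, (T_chip - T_ambient) / P. A rough sketch, assuming 25°C ambient (my assumption) and ignoring the die-to-cooler path:

```python
# Required thermal resistance, in °C per watt, to hold a chip at a
# target temperature. Lower numbers mean a beefier cooler is needed.

def required_theta(t_chip_c: float, t_ambient_c: float, power_w: float) -> float:
    return (t_chip_c - t_ambient_c) / power_w

print(required_theta(80, 25, 300))  # ~0.18 °C/W -> needs a serious cooler
print(required_theta(95, 25, 200))  # ~0.35 °C/W -> much easier to hit
```

In other words, the 300W/80°C target demands roughly twice the cooling capability; that headroom isn't free.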
 
The issue with Zen 4 is that unless you purchase a 360 AIO, you're not going to get the performance shown in reviews, which is especially relevant for those buying the lower-end SKUs, as most of them are paired with cheaper air coolers.
 
Given a choice between a 300W chip that runs at 80°C and a 200W chip that runs at 95°C, I'll take the 300W/80°C chip all day. I have more room to play with and can get more out of it. To each his own, though.

As I have said before, given that I design systems (as a job/profession) that are primarily 1U, I am used to being constrained by space, power, heat and noise, and explaining basic physics to people who are buying kit is something I am used to. "So why can't we have a 64-core EPYC part in the tiny 30cm-deep chassis? Can't you just put a bigger fan in it?"

The issue is that everyone understands things the way they want to, rather than going by the facts, and I know that getting rid of 100W of extra heat, and supplying 100W of extra power, is simply impossible in certain circumstances. Obviously this doesn't include people who like to spend their time tweaking systems to get more performance out of them; I do the same to some extent, except my 5600G runs under 25W and can be cooled by a bird flapping its wings somewhere nearby. :p
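To put a number on that 100W: the airflow needed to carry a heat load out of a chassis follows Q = P / (ρ · c_p · ΔT) for air. A rough sketch, assuming a 10°C allowed air-temperature rise (and ignoring the static-pressure losses that make 1U fans even less forgiving):

```python
# Airflow needed to remove a given heat load from a chassis.
RHO_AIR = 1.2    # kg/m^3 at ~20°C
CP_AIR = 1005.0  # J/(kg*K)

def airflow_m3s(power_w: float, delta_t_k: float) -> float:
    return power_w / (RHO_AIR * CP_AIR * delta_t_k)

flow = airflow_m3s(100.0, 10.0)     # the extra 100W in question
print(f"{flow * 2118.88:.1f} CFM")  # ~17.6 CFM on top of existing airflow
```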
 
As I have said before, given that I design systems (as a job/profession) that are primarily 1U, I am used to being constrained by space, power, heat and noise, and explaining basic physics to people who are buying kit is something I am used to. "So why can't we have a 64-core EPYC part in the tiny 30cm-deep chassis? Can't you just put a bigger fan in it?"

The issue is that everyone understands things the way they want to, rather than going by the facts, and I know that getting rid of 100W of extra heat, and supplying 100W of extra power, is simply impossible in certain circumstances. Obviously this doesn't include people who like to spend their time tweaking systems to get more performance out of them; I do the same to some extent, except my 5600G runs under 25W and can be cooled by a bird flapping its wings somewhere nearby. :p

Your use case is perfectly valid, and dissipating 100W of heat in that package is a challenge.

Now, the reality is that you're on a DIY enthusiast site where people build 'normal' systems with dedicated GPUs and CPUs cooled by anything from large air coolers and AIOs to custom loops.

Your use case just doesn't apply in the DIY enthusiast space, regardless of how many sleepless nights it might give you in your daily work.
 
E-cores do not do crap for performance in any game unless you are running intensive stuff in the background on a bloated Windows install that constantly checks for Windows updates while gaming, or you are streaming.
Great, now show us the data proving the point. No matter how hard I tried, I couldn't even once produce measurably better numbers with e-cores off and the cache clocked to 4.9GHz. In fact, in a bunch of games the results were worse. So, where is your data?
 
Great, now show us the data proving the point. No matter how hard I tried, I couldn't even once produce measurably better numbers with e-cores off and the cache clocked to 4.9GHz. In fact, in a bunch of games the results were worse. So, where is your data?


It's not just average FPS. Games run stuttery with the e-waste cores on. Command & Conquer: Generals in particular has slight lag and pauses in the videos. With them off, it's smooth as silk and the best 8-core CPU.


Now Intel needs a well-binned 8 P-core Raptor Lake with no e-waste cores and lots more L3 cache, specifically for gamers!
 
It's not just average FPS. Games run stuttery with the e-waste cores on. Command & Conquer: Generals in particular has slight lag and pauses in the videos. With them off, it's smooth as silk and the best 8-core CPU.


Now Intel needs a well-binned 8 P-core Raptor Lake with no e-waste cores and lots more L3 cache, specifically for gamers!

What about general Windows use? I mean just basic tasks, browsing / watching YouTube etc. What power is used with P and E cores enabled versus P cores only?
 
What about general Windows use? I mean just basic tasks, browsing / watching YouTube etc. What power is used with P and E cores enabled versus P cores only?


Much smoother with the e-waste cores off, and no scheduling issues. Unless you are running highly threaded productivity apps, where they help a lot (though the same number of extra P cores would help so much more, and without the bogginess of the hybrid arch).

If you overclock manually, much less power is used with the e-waste cores off. If you leave it on auto and disable them, then it may use more power trying to brute-force the P cores much higher, with more vcore than needed.
 
It's not just average FPS. Games run stuttery with the e-waste cores on. Command & Conquer: Generals in particular has slight lag and pauses in the videos. With them off, it's smooth as silk and the best 8-core CPU.


Now Intel needs a well-binned 8 P-core Raptor Lake with no e-waste cores and lots more L3 cache, specifically for gamers!
That's a game from 2003. That's your example? OK, so I'm convinced, e-cores on it is.
 
Much smoother with the e-waste cores off, and no scheduling issues. Unless you are running highly threaded productivity apps, where they help a lot (though the same number of extra P cores would help so much more, and without the bogginess of the hybrid arch).

If you overclock manually, much less power is used with the e-waste cores off. If you leave it on auto and disable them, then it may use more power trying to brute-force the P cores much higher, with more vcore than needed.
Do you actually have an Alder Lake CPU? Because none of what you are saying makes any sense.
 
Much smoother with the e-waste cores off, and no scheduling issues. Unless you are running highly threaded productivity apps, where they help a lot (though the same number of extra P cores would help so much more, and without the bogginess of the hybrid arch).

If you overclock manually, much less power is used with the e-waste cores off. If you leave it on auto and disable them, then it may use more power trying to brute-force the P cores much higher, with more vcore than needed.

Huh, I thought general Windows use would draw low power because of the e-cores.
 
Yes, I have had a couple, and yes, this is my experience. I use Windows 10. Win11 is hot garbage!!
I'm sorry, but I highly doubt that. I asked you for data and you pointed me to a 2003 game, which I'll also test just to be thorough.

I have actually tested, with videos which I'll link when I'm back home, Cyberpunk and Spider-Man at fully CPU-bound settings, with stock cache and e-cores on versus e-cores off with a 4.7GHz cache. Wanna take a guess which one performs worse in EVERY metric, be it averages, lows or power consumption? The only game where e-cores off was better was The Riftbreaker, but the difference was literally 3 fps out of 200 in averages.

I'm sorry, but you are just wrong, and no reviewer on the planet agrees with your findings. The fact that your only non-validated example is a game from 2003 says it all.
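For anyone wanting to reproduce this kind of comparison, here's a rough sketch of how averages and 1% lows are usually derived from a frame-time capture. The file name and one-float-per-line format are my assumptions; tools like CapFrameX or PresentMon export richer CSVs:

```python
# Compute average FPS and 1% lows from frame times in milliseconds.
# "1% lows" here means the average FPS over the slowest 1% of frames,
# which is one common definition.

def fps_metrics(frametimes_ms: list[float]) -> tuple[float, float]:
    avg_fps = 1000.0 * len(frametimes_ms) / sum(frametimes_ms)
    slowest = sorted(frametimes_ms, reverse=True)   # longest frames first
    worst_1pct = slowest[: max(1, len(slowest) // 100)]
    low_1pct_fps = 1000.0 * len(worst_1pct) / sum(worst_1pct)
    return avg_fps, low_1pct_fps

with open("frametimes.csv") as f:   # hypothetical one-column capture
    times = [float(line) for line in f if line.strip()]

avg, lows = fps_metrics(times)
print(f"avg {avg:.1f} fps, 1% lows {lows:.1f} fps")
```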
 
I'm sorry, but I highly doubt that. I asked you for data and you pointed me to a 2003 game, which I'll also test just to be thorough.

I have actually tested, with videos which I'll link when I'm back home, Cyberpunk and Spider-Man at fully CPU-bound settings, with stock cache and e-cores on versus e-cores off with a 4.7GHz cache. Wanna take a guess which one performs worse in EVERY metric, be it averages, lows or power consumption?

I'm sorry, but you are just wrong, and no reviewer on the planet agrees with your findings. The fact that your only non-validated example is a game from 2003 says it all.


Maybe it is different in Win11, but Win10 has no awareness of the hybrid arch and Thread Director. Are you using Win10 or Win11?
 