Raptor Lake Leaks + Intel 4 developments

Well, it looks like Spiderman is keeping the e cores busy compiling shaders in real time and this is why the power usage is so high.

The console version had precompiled shaders, but for some reason they decided to change that for the PC.

Obviously a game which actually keeps most cores utilised is going to use more power.

Or: saying Alder Lake's potential high power usage is not a problem for games is only true while games are lightly threaded.

That's why PC games need DirectStorage: take the CPU and RAM out of the loop.
 
Well, it looks like Spiderman is keeping the e cores busy compiling shaders in real time and this is why the power usage is so high.

The console version had precompiled shaders, but for some reason they decided to change that for the PC.
PC games always compile the shaders, as the game has no idea what GPU and drivers you are using.
Consoles are fixed, so developers can optimise for a single hardware config.

This is why drivers implemented shader caching, so there shouldn't be any need to recompile more than once, unless you update drivers or change gpu.
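
Driver-managed caches are transparent, but a similar application-side mechanism exists in Vulkan (VkPipelineCache). Here is a minimal C++ sketch of persisting one between runs, assuming a valid VkDevice already exists; the cache file path is arbitrary:

```cpp
// Minimal sketch: persisting a Vulkan pipeline cache so shaders compiled on the
// first run can be reused on later runs. Assumes a valid VkDevice already
// exists; the cache file path is arbitrary.
#include <vulkan/vulkan.h>
#include <fstream>
#include <iterator>
#include <vector>

VkPipelineCache loadPipelineCache(VkDevice device, const char* path) {
    // Read any previously saved cache blob from disk (empty on first launch).
    std::vector<char> blob;
    std::ifstream in(path, std::ios::binary);
    if (in) blob.assign(std::istreambuf_iterator<char>(in),
                        std::istreambuf_iterator<char>());

    VkPipelineCacheCreateInfo info{};
    info.sType = VK_STRUCTURE_TYPE_PIPELINE_CACHE_CREATE_INFO;
    info.initialDataSize = blob.size();
    info.pInitialData = blob.empty() ? nullptr : blob.data();

    VkPipelineCache cache = VK_NULL_HANDLE;
    vkCreatePipelineCache(device, &info, nullptr, &cache);
    return cache;  // pass this to vkCreateGraphicsPipelines() so cache hits skip recompilation
}

void savePipelineCache(VkDevice device, VkPipelineCache cache, const char* path) {
    // Ask the driver for the serialised cache and write it out for the next run.
    size_t size = 0;
    vkGetPipelineCacheData(device, cache, &size, nullptr);
    std::vector<char> blob(size);
    vkGetPipelineCacheData(device, cache, &size, blob.data());
    std::ofstream(path, std::ios::binary).write(blob.data(), blob.size());
}
```

The driver checks the blob's header (vendor, device, driver version) and silently ignores anything stale, which is exactly the "recompile after a driver update or GPU change" behaviour described above.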
 
PC games always compile the shaders, as the game has no idea what GPU and drivers you are using.
Consoles are fixed, so developers can optimise for a single hardware config.

This is why drivers implemented shader caching, so there shouldn't be any need to recompile more than once, unless you update drivers or change gpu.
Yes, but doing so during install / first launch makes more sense, rather than seeing that the computer has spare cores (not even sure if it knows they are E cores, but I guess it might) and using those to compile while running.

Now it is possible that, the way reviewers work - with short runs - the game never gets a chance to build up a cache of compiled shaders, and actual gamers won't see that after the initial compilation. Well, not unless they too keep changing GPUs and/or certain settings, as I guess the shaders could need re-compiling for some settings.
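
For what it's worth, a rough sketch of what "compile everything up front across spare cores" could look like. compileShader() and the shader list are purely hypothetical stand-ins, not how this particular port actually works:

```cpp
// Illustrative sketch of front-loading shader compilation at install / first
// launch: split a known shader list across all available cores instead of
// compiling mid-game. compileShader() is a hypothetical stand-in for whatever
// the engine really calls.
#include <algorithm>
#include <functional>
#include <future>
#include <string>
#include <thread>
#include <vector>

void compileShader(const std::string& src) {
    // Stand-in for the real driver/engine compile call; just burn a little work.
    volatile std::size_t sink = std::hash<std::string>{}(src);
    (void)sink;
}

void precompileAll(const std::vector<std::string>& shaders) {
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::future<void>> jobs;

    for (unsigned w = 0; w < workers; ++w) {
        jobs.push_back(std::async(std::launch::async, [&shaders, w, workers] {
            // Each worker takes every workers-th shader (simple static partition).
            for (std::size_t i = w; i < shaders.size(); i += workers)
                compileShader(shaders[i]);
        }));
    }
    for (auto& j : jobs) j.wait();  // block until the whole cache is warm
}
```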
 
It doesn't work like that; games don't just scale onto different cores infinitely.

Some stuff can go on extra cores, sure, but they can't split everything equally; one part of the process will always limit the rest's speed as well (see the sketch after this post).
Splitting hairs now. It's true that single-core performance counts for more in games.

The point I was making is that it's easy to push CPUs to their limits in modern strategy games with lots of units (e.g. Total War).
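
The "one part limits the rest" point is basically Amdahl's law. A quick sketch, with a purely illustrative 30% serial fraction, shows how fast extra cores stop helping:

```cpp
// Amdahl's law: if a fraction of the frame is serial (e.g. the main game
// thread), extra cores only speed up the remaining parallel portion. The 30%
// serial fraction below is purely illustrative.
#include <cstdio>

int main() {
    const double serial = 0.30;  // fraction that cannot be split across cores
    for (int cores : {1, 2, 4, 8, 16, 20, 50}) {
        const double speedup = 1.0 / (serial + (1.0 - serial) / cores);
        std::printf("%2d cores -> %.2fx speedup\n", cores, speedup);
    }
    // Even with 50 cores the speedup tops out near 1/0.30, about 3.3x.
}
```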
 
That's why PC games need DirectStorage: take the CPU and RAM out of the loop.
Watch Dogs: Legion is another CPU-intensive game that has high utilization across 8 threads / physical cores. Turning on hyperthreading definitely benefits this game.

I wonder if that's true for all games with high utilization on 8 CPU threads/physical cores?
 
but it's not because they won't use 20 cores or 50 cores
Yeah, tbf WH3 really doesn't benefit from hyperthreading at all (it can cause framerate dips). It doesn't seem to make a difference whether more than 8 threads are used or not. Also, they removed DX12 support due to stability problems, which doesn't help either.

Increasingly, L3 cache (and likely the cache ratio / clock rate) seems to be the thing that makes the most difference to framerates, particularly as IPC and clock rates have improved.
 
There are some absolutely comical power draw figures being thrown about here for consumer-grade chips. I've got 64-core EPYC chips with smaller TDP requirements. If this is the future, I do not like it.

Pushing the power envelope rather than efficiency to "win", given the current climate, seems mad and in the end costs us all.
Actually efficiency goes up drastically. You know you don't have to run the CPU at whatever power limits it ships with, right?
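
For anyone curious, on Linux the power limits don't even need a BIOS trip. A rough sketch using the intel_rapl powercap interface follows; the sysfs path and the 125 W target are assumptions for illustration, and on a typical desktop the same PL1/PL2 dials live in the BIOS or Intel XTU:

```cpp
// Linux-only sketch: capping the CPU package power limit (PL1) through the
// intel_rapl powercap interface instead of the BIOS. The sysfs path assumes the
// common "intel-rapl:0" package domain and the 125 W target is just an example.
// Writing the limit needs root.
#include <fstream>
#include <iostream>
#include <string>

int main() {
    const std::string limit = "/sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw";
    const long long target_uw = 125LL * 1000 * 1000;  // 125 W in microwatts

    long long current_uw = 0;
    std::ifstream in(limit);
    if (!(in >> current_uw)) {
        std::cerr << "RAPL powercap interface not available\n";
        return 1;
    }
    std::cout << "Current long-term limit: " << current_uw / 1e6 << " W\n";

    std::ofstream out(limit);
    out << target_uw << std::flush;
    if (!out) {
        std::cerr << "Could not write new limit (run as root?)\n";
        return 1;
    }
    std::cout << "New long-term limit: " << target_uw / 1e6 << " W\n";
}
```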
 
"Very little" is of course not a constant. That new Spiderman game has reports of really high CPU power draws: a 5800X3D at 100W while gaming and a 12900K at around 200W.

Sure, Cinebench probably still draws more, but that game is pretty heavy.
I just tried it; not true. A 12900K in Spiderman with everything maxed out (at 800x600 to push the CPU) draws around 90-95W. There are some peaks here and there to 108W, but that's it. We are talking about framerates all the way up to 180fps, although it usually sits at around 110-120.
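
Those in-game wattage figures presumably come from a monitoring tool reading the CPU's own energy counters. On Linux the same number can be derived by sampling the RAPL energy counter over an interval; a minimal sketch, with the sysfs path assuming the usual intel-rapl:0 package domain:

```cpp
// Linux-only sketch: estimating CPU package power by sampling the RAPL energy
// counter twice and dividing by the elapsed time - the same quantity the usual
// monitoring tools report. Assumes the "intel-rapl:0" package domain exists.
#include <chrono>
#include <fstream>
#include <iostream>
#include <thread>

long long readEnergyUj() {
    std::ifstream f("/sys/class/powercap/intel-rapl:0/energy_uj");
    long long uj = 0;
    f >> uj;
    return uj;
}

int main() {
    using namespace std::chrono;
    const auto t0 = steady_clock::now();
    const long long e0 = readEnergyUj();

    std::this_thread::sleep_for(seconds(1));  // sample window; run the game meanwhile

    const long long e1 = readEnergyUj();
    const double secs = duration<double>(steady_clock::now() - t0).count();

    // energy_uj eventually wraps around; a real tool would handle that case.
    std::cout << "Package power: " << (e1 - e0) / 1e6 / secs << " W\n";
}
```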
 
Actually efficiency goes up drastically. You know you don't have to run the CPU at whatever power limits it ships with, right?

We all know you like to fiddle and tune. We aren't talking about that; we are talking about the 99% of people who will buy a CPU, slot it in running stock, and never touch it again. That is basically what everybody who buys these chips does. Remember, you are the 0.1%.

We disagreed on your measure of efficiency in another thread. Let's not bring those same efficiency arguments into this thread as well.
 
We all know you like to fiddle and tune. We aren't talking about that; we are talking about the 99% of people who will buy a CPU, slot it in running stock, and never touch it again. That is basically what everybody who buys these chips does. Remember, you are the 0.1%.

We disagreed on your measure of efficiency in another thread. Let's not bring those same efficiency arguments into this thread as well.
No, we are not talking about the 99% of people. We are talking about you, since you seem to care about efficiency. I'm not talking about tuning anything, just setting your power limits. You can't even get into the BIOS to enable XMP without the motherboard forcing you to choose your power limits.

Regarding the other 99.9% of people: they either care about efficiency, in which case they will set their power limits accordingly, or they don't, in which case it's not an issue. You think there is a single person on planet Earth who buys a 13900K or something and doesn't know that he can change the power limits if he wants to? I don't believe so.
 
It's likely somewhere in the middle. There are plenty of people who buy halo products and don't bother with any tweaks, but for products like the 12900K, the percentage of people who will tweak them will be far greater than 0.1%.
 
That was more aimed at the 12900 as a whole; I forgot the K was the unlocked variant. It still stands that a very, very small percentage of these chips will ever be run power limited.
 
No, we are not talking about the 99% of people. We are talking about you, since you seem to care about efficiency. I'm not talking about tuning anything, just setting your power limits. You can't even get into the BIOS to enable XMP without the motherboard forcing you to choose your power limits.

Regarding the other 99.9% of people: they either care about efficiency, in which case they will set their power limits accordingly, or they don't, in which case it's not an issue. You think there is a single person on planet Earth who buys a 13900K or something and doesn't know that he can change the power limits if he wants to? I don't believe so.

99% of people don't build their own machines, 99% of people never enter the BIOS... 99% of people who own a PC probably don't know what a BIOS is... 99% of people don't know what XMP is... I could go on, but I've wasted enough time in other threads arguing the toss. Raising a power limit to crazy levels to eke out performance doesn't raise efficiency, it raises power draw.
 
Adding more E-cores doesn't make CPUs more efficient. P-cores running at their base frequency (same as Zen 3/4 cores) will give the best performance per watt. E-cores are what Intel does when they want more multithreaded performance for laptops, which can't handle more P-cores due to cooling constraints. The Sapphire Rapids server CPUs will mostly be run at, or near, their base clocks.

Nuff said. So yeah, Intel needs a high-end design without E-cores. Also, P-cores on 10nm ESF lose their efficiency at very high clocks (probably around 5GHz) and start chewing up a ton of power.

Intel is going to need their 7nm EUV ('Intel 4') technology to improve the efficiency of future CPUs, so prioritizing the Meteor Lake release will be very important for their future progress.
 