• Competitor rules

    Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.

These guys have no idea what they are talking about.

The 5950X with power limits removed would increase in efficiency, as I know it can score > 31,000 points.
Without power limits the 5950X would be approaching the 5800X in both clocks and efficiency.
Twice the cores, twice the score, double the power consumption.

You'd need to go the other way, down to 90W, to hit peak efficiency on the 5950X.
 
TPU already did similar testing using a 12900K.

cinebench-multi.png
power-multithread.png


Granted, this is full-system power, but the only differences are CPU, motherboard and RAM, of which the CPU and motherboard are inevitably going to differ, so it is not an unfair comparison IMO.

The 100W PL1/PL2 limited 12900K gets 86.157 points / watt.
The 125W PL1/PL2 limited 12900K gets 93.495 points / watt.
The 190W PL1/PL2 limited 12900K gets 110.925 points / watt.
The stock 5800X gets 90.0457 points / watt.
The stock 5950X gets 144.2 points / watt.
In the 5700X review it scored 13474pts @ 126W for a score of 106.937 points / watt.
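For clarity, every figure in that list is just the Cinebench score divided by the measured full-system wall power; a minimal sketch using the 5700X numbers quoted above:

```python
def points_per_watt(score: float, watts: float) -> float:
    """Cinebench R23 multi score divided by measured full-system wall power."""
    return score / watts

# 5700X figures quoted above: 13474 pts at 126 W from the wall
eff = points_per_watt(13474, 126)
print(round(eff, 3))  # 106.937, matching the figure in the list
```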

So, as @Vince very correctly points out, there are settings you can use where the 12900K is more efficient than the 5800X, but this kind of tuning for the 12900K can also be done on the 5800X: you can easily increase the points/watt by setting it to 65W, which will get it closer to the 5700X score.

I can also tell you that my 5800X3D with a -30 all-core offset scores ~15,100 while using less power than stock.

It would be interesting to see if the 5950X with power limits removed would increase in efficiency, as I know it can score > 31,000 points, but I'm not sure what the system power draw would be. If it came in at 215W or less it would beat the stock 5950X's 144.2, but I don't think it would.
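As a sanity check on that 215W figure, the break-even system draw falls straight out of the numbers already quoted in this thread (144.2 pts/W stock, ~31,000 pts unlocked); nothing new is measured here:

```python
stock_efficiency = 144.2     # stock 5950X points/watt, from the list above
unlocked_score = 31000       # claimed score with power limits removed

# The unlocked run beats stock efficiency only if system draw stays below this:
break_even_watts = unlocked_score / stock_efficiency
print(round(break_even_watts, 1))  # ~215.0 W, hence the 215 W threshold
```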
TPU's test is horribly flawed though. He is using a locked voltage while dropping the power limits. That's how he ended up with the 12900K being less efficient than a 12600K at the same wattage, which is absurd.
 
TPU's test is horribly flawed though. He is using a locked voltage while dropping the power limits. That's how he ended up with the 12900K being less efficient than a 12600K at the same wattage, which is absurd.

It does not say that in the intro; he just changed the PL1/PL2 settings and let the rest sort itself out, so no idea where you get this 'locked voltage' nonsense from, aside from not liking the results.*

Another site did some testing as well, and they took power draw from the 12V rail feeding the CPU; there it looks even worse, with the 8-P-core-only config being behind the stock config in efficiency.

ComputerBase also did some good testing, but in Corona. P-cores only is worse for efficiency in applications than 8P + 1E. I think it might be related to the ring bus speed being limited with E-cores enabled; turn them off and it can clock higher, which increases power draw. Great for gaming, where that increases performance as can be seen elsewhere, but for apps like Corona and CBR23 it makes little difference.

*EDIT: I will say, if you check his 'efficiency' charts they do seem flawed, because on the basis of those scores you get a different ranking to what he gets in his efficiency charts. It might be that he is taking the energy required to complete the CB23 benchmark without factoring in that it is a fixed 10 minutes + current scene, rather than working out the energy required to render each scene.
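If that hypothesis is right, the two metrics can genuinely rank chips differently: over a fixed-length run, "energy to complete the benchmark" collapses to average power alone, while points/watt also credits the work done. A made-up illustration (all numbers hypothetical, not from any review):

```python
RUN_MIN = 10  # CB23 multi loops its scene for a fixed 10 minutes

def run_energy_wh(avg_watts: float) -> float:
    """Energy for the fixed-length run; blind to how much work got done."""
    return avg_watts * RUN_MIN / 60

def points_per_watt(score: float, avg_watts: float) -> float:
    """Throughput per watt, i.e. the metric used in the comparison above."""
    return score / avg_watts

# Hypothetical configs: B scores 50% higher for 30% more power.
a_score, a_w = 20000, 150
b_score, b_w = 30000, 195

print(points_per_watt(b_score, b_w) > points_per_watt(a_score, a_w))  # True: B is the efficient chip
print(run_energy_wh(b_w) > run_energy_wh(a_w))  # True: yet B "costs" more energy per run
```

So a chart built on fixed-run energy would rank A above B even though B does more work per watt, which would produce exactly the kind of mismatched ranking described above.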
 
Right, so you have posted one thing about one game, but stock out-of-the-box performance for multiple games is as linked etc. Being efficient out of the box and running games at the stock settings should be the baseline. Running something at 25W is irrelevant. I am home tonight, so I can put my 5950X at 25W for FC6 with my 6900 XT and see what I get if you want, but I don't have a 3090 and other factors would be involved; unless you happen to have an AMD CPU to normalise against, that is worthless info.
Sure, the 3D in games is insanely efficient out of the box, no arguments there. With that said, I personally don't care about out of the box for anything, so for me it's kinda irrelevant. I never run any CPU out of the box. Since I can max out my 3090 with the 12900K at 25W, that's what I do.
 
It does not say that in the intro; he just changed the PL1/PL2 settings and let the rest sort itself out, so no idea where you get this 'locked voltage' nonsense from, aside from not liking the results.

Another site did some testing as well, and they took power draw from the 12V rail feeding the CPU; there it looks even worse, with the 8-P-core-only config being behind the stock config in efficiency.

ComputerBase also did some good testing, but in Corona. P-cores only is worse for efficiency in applications than 8P + 1E. I think it might be related to the ring bus speed being limited with E-cores enabled; turn them off and it can clock higher, which increases power draw. Great for gaming, where that increases performance as can be seen elsewhere, but for apps like Corona and CBR23 it makes little difference.
He doesn't say it because he didn't realise it himself. As you can see from the results, the test doesn't make sense. How is it possible for the 12600K to score the same as the 12900K at the same wattage? It doesn't make sense; something is very wrong with his results. Anyone with a 12900K can confirm what I'm saying: a stock 12900K at 125W scores way higher than what he is getting. I think it scores around 23 to 24k.
 
Sure, the 3D in games is insanely efficient out of the box, no arguments there. With that said, I personally don't care about out of the box for anything, so for me it's kinda irrelevant. I never run any CPU out of the box. Since I can max out my 3090 with the 12900K at 25W, that's what I do.
But that isn't how anything is done; the vast majority run anything they purchase at stock. And so efficiency, when being shown, is in that form, and that is where Intel fail.

With that, I also do not believe that your 12900K at 25W maxes out your 3090 in all gaming cases. If that were the case then everyone would be doing it, and they are not, because lots of games offer more performance with CPUs at higher power levels.

If you only play a few games and so can optimise for them yourself, that is a very, very specific situation that no company would test or produce for and no review would look at, because it isn't realistic. As mentioned, I will happily load up FC6 with my 5950X set to 25W, with one of the CCXs off to keep it reasonable, and see what FPS I get, but it would be on a 6900 XT.

What I do know is that out of the box, for 1440p gaming on the 6900 XT with graphical settings always set to max, the 5950X averages anything from 40W to 80W depending on the game. I am sure I could make all of that more efficient without much issue if needed.
 
But that isn't how anything is done; the vast majority run anything they purchase at stock. And so efficiency, when being shown, is in that form, and that is where Intel fail.

With that, I also do not believe that your 12900K at 25W maxes out your 3090 in all gaming cases. If that were the case then everyone would be doing it, and they are not, because lots of games offer more performance with CPUs at higher power levels.

If you only play a few games and so can optimise for them yourself, that is a very, very specific situation that no company would test or produce for and no review would look at, because it isn't realistic. As mentioned, I will happily load up FC6 with my 5950X set to 25W, with one of the CCXs off to keep it reasonable, and see what FPS I get, but it would be on a 6900 XT.

What I do know is that out of the box, for 1440p gaming on the 6900 XT with graphical settings always set to max, the 5950X averages anything from 40W to 80W depending on the game. I am sure I could make all of that more efficient without much issue if needed.
Well, even out of the box, only the 3D surpasses Alder Lake in gaming efficiency. It goes kinda like this: 3D > Alder Lake > Comet Lake > Zen 3 >>>> Rocket Lake.
 
Well, even out of the box, only the 3D surpasses Alder Lake in gaming efficiency. It goes kinda like this: 3D > Alder Lake > Comet Lake > Zen 3 >>>> Rocket Lake.
I'm still surprised by what the 3D can do at that power draw and clock speed. I think once all the Intel and AMD CPUs are running extra cache we're going to see some very nice increases in performance, especially with more cache and high clock speeds.
 
Well, even out of the box, only the 3D surpasses Alder Lake in gaming efficiency. It goes kinda like this: 3D > Alder Lake > Comet Lake > Zen 3 >>>> Rocket Lake.

Except that just isn't true. The 5800X sat at 65W average compared to the 12900K at 87W. Take that and compare the FPS figure of the 5800X to the 12900K/KS and you can see that just isn't true either; and we already knew the 5800X was the most efficient for gaming compared to the 5900X/5950X due to their extra cores, so why is that a surprise?

So no, if you take the power usage and FPS out of the box then Zen 3 is the most efficient. Again, you could make both more efficient if needed etc. The 5800X3D is just hugely more efficient than anything else released to date.
 
He doesn't say it because he didn't realise it himself. As you can see from the results, the test doesn't make sense. How is it possible for the 12600K to score the same as the 12900K at the same wattage? It doesn't make sense; something is very wrong with his results. Anyone with a 12900K can confirm what I'm saying: a stock 12900K at 125W scores way higher than what he is getting. I think it scores around 23 to 24k.

The hardwareluxx result? There is no explanation of how they set the power limit, and they are using package power for their tests, which could very well be from HWiNFO or similar rather than measuring the 12V rail or taking full-system draw from the wall.

I see far more potential for flaws in the hardwareluxx testing.

Also worth noting this.

218067-9609a8ae1e7d00d56f9a875c64cb0f25.jpg

218068-7c26e61c6158a862f102d0027c28ed14.jpg


If these are mismatched then power measurement reporting can be off, which is why just using wall power or reading directly from the 12V rail is by far the preferred way to compare. 125W package power with a mismatch could be drawing more power than you think, hence a higher score.

EDIT: After going through the thread there, I like how someone mentions that fixed voltages could have been set and shows a chart where the results do not scale in a linear fashion, and from that you conclude that Wizzard did in fact use fixed voltages.

It is also entirely plausible that on that particular combination of motherboard and BIOS, lower-power configs were pushing more to the E-cores and ring, as the same poster observed with their 12700K, which would very easily explain a power-limited 12900K matching a 12600K in performance and power consumption.
 
The hardwareluxx result? There is no explanation of how they set the power limit, and they are using package power for their tests, which could very well be from HWiNFO or similar rather than measuring the 12V rail or taking full-system draw from the wall.

I see far more potential for flaws in the hardwareluxx testing.

Also worth noting this.

218067-9609a8ae1e7d00d56f9a875c64cb0f25.jpg

218068-7c26e61c6158a862f102d0027c28ed14.jpg


If these are mismatched then power measurement reporting can be off, which is why just using wall power or reading directly from the 12V rail is by far the preferred way to compare. 125W package power with a mismatch could be drawing more power than you think, hence a higher score.

EDIT: After going through the thread there, I like how someone mentions that fixed voltages could have been set and shows a chart where the results do not scale in a linear fashion, and from that you conclude that Wizzard did in fact use fixed voltages.

It is also entirely plausible that on that particular combination of motherboard and BIOS, lower-power configs were pushing more to the E-cores and ring, as the same poster observed with their 12700K, which would very easily explain a power-limited 12900K matching a 12600K in performance and power consumption.
I don't know what the problem is, but it's obvious that there is a problem. The 12600K cannot match a 12900K at the same wattage in CBR23; that's a given. Again, I can assure you, anyone with a 12900K can find out for themselves: the CPU scores over 23k at 125W at completely stock settings. Maybe an early BIOS was the issue with his test, I don't know, but it's painfully obvious it's flawed.

When I get back home I'll post a CBR23 run at stock at 125W; believe me, it's not going to score 18k.
 
I don't know what the problem is, but it's obvious that there is a problem. The 12600K cannot match a 12900K at the same wattage in CBR23; that's a given. Again, I can assure you, anyone with a 12900K can find out for themselves: the CPU scores over 23k at 125W at completely stock settings. Maybe an early BIOS was the issue with his test, I don't know, but it's painfully obvious it's flawed.

When I get back home I'll post a CBR23 run at stock at 125W; believe me, it's not going to score 18k.

Without a way to read power draw that does not involve reading values provided by the motherboard, I won't believe it is actually limited to 125W.
 
Except that just isn't true. The 5800X sat at 65W average compared to the 12900K at 87W. Take that and compare the FPS figure of the 5800X to the 12900K/KS and you can see that just isn't true either; and we already knew the 5800X was the most efficient for gaming compared to the 5900X/5950X due to their extra cores, so why is that a surprise?

So no, if you take the power usage and FPS out of the box then Zen 3 is the most efficient. Again, you could make both more efficient if needed etc. The 5800X3D is just hugely more efficient than anything else released to date.
That's definitely not how you compare efficiency though. That's actually a surefire way to end up at the wrong conclusions, just like you did. You have to normalize either for performance or for power. Say you lock both CPUs at X watts and see which one gives more FPS, or you lock the FPS to X and see how much power each uses.
 
Without a way to read power draw that does not involve reading values provided by the motherboard, I won't believe it is actually limited to 125W.
I think you are just being irrational at this point. We know, and I hope you agree, that the 12900K should vastly outperform the 12600K at the same wattage, right? I mean, the core-count difference and the bin difference are pretty large. Therefore the 12900K should score a lot higher than the 18k that the 12600K scores, right? I mean, that's just logic 101.

BTW, there are lots of threads on various forums (Tom's Hardware for example) saying that the TPU review is horribly flawed as well.
 
That's definitely not how you compare efficiency though. That's actually a surefire way to end up at the wrong conclusions, just like you did. You have to normalize either for performance or for power. Say you lock both CPUs at X watts and see which one gives more FPS, or you lock the FPS to X and see how much power each uses.
But you can do that with the results shown. You do it by taking the results and working out the % difference to compare; it is already shown as they're stock to stock. So if the 5800X has lower FPS by, say, 5% but is using 30% less power, then we still know that it is the more efficient CPU of those two in that table.*

*Figures made up as I couldn't be bothered to work out the exact % difference, but the premise stands.
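For what it's worth, the ratio maths behind that comparison is straightforward; a quick sketch using the same made-up 5%/30% figures from the post:

```python
def relative_efficiency(fps_ratio: float, power_ratio: float) -> float:
    """FPS-per-watt of CPU A relative to CPU B, from the two % differences."""
    return fps_ratio / power_ratio

# Made-up figures from the post above: 5% lower FPS at 30% less power.
r = relative_efficiency(0.95, 0.70)
print(round(r, 2))  # 1.36: ~36% more frames per watt despite the lower FPS
```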
 
I'm still surprised by what the 3D can do at that power draw and clock speed. I think once all the Intel and AMD CPUs are running extra cache we're going to see some very nice increases in performance, especially with more cache and high clock speeds.
Yeah, the 3D out of the box is a beast in gaming. But the 12900K still has the capacity to surpass it if you throw money at it, namely RAM speed. I don't think it can keep up with an OC'd 12900K with a 7000C30 kit, for example.

The issue I have with the 3D is the inconsistent performance: in one game it's faster than the not-yet-released 14900K, in other games it loses to a 5800X.
 
Yeah, the 3D out of the box is a beast in gaming. But the 12900K still has the capacity to surpass it if you throw money at it, namely RAM speed. I don't think it can keep up with an OC'd 12900K with a 7000C30 kit, for example.

The issue I have with the 3D is the inconsistent performance: in one game it's faster than the not-yet-released 14900K, in other games it loses to a 5800X.
I have no doubt a tuned 12900K with very fast DDR5 can be faster; that's where enthusiasts come in. I'm a bit too lazy, and short of time, these days, so out of the box is my go-to now. I also agree on the inconsistency, but it's actually inconsistency in the games: some benefit from cache, others don't. As I always maintain, decide on the job you need doing and pick the right tool. The real match-up I want to see is the 14900K against the 7800X3D; I think everyone is going to be happy with the performance.
 
But you can do that with the results shown. You do it by taking the results and working out the % difference to compare; it is already shown as they're stock to stock. So if the 5800X has lower FPS by, say, 5% but is using 30% less power, then we still know that it is the more efficient CPU of those two in that table.*

*Figures made up as I couldn't be bothered to work out the exact % difference, but the premise stands.
Sure you can, I'm just saying the results you get are flawed. Higher performance usually results in higher power consumption, so obviously the CPU that performs better will seem like it's less efficient when that's not the case. That's why you need to normalize for something.
 
I think you are just being irrational at this point. We know, and I hope you agree, that the 12900K should vastly outperform the 12600K at the same wattage, right? I mean, the core-count difference and the bin difference are pretty large. Therefore the 12900K should score a lot higher than the 18k that the 12600K scores, right? I mean, that's just logic 101.

BTW, there are lots of threads on various forums (Tom's Hardware for example) from 12900K owners saying that the TPU review is horribly flawed as well.

Relying on motherboard readouts for power consumption is horribly flawed because mobo vendors will do odd stuff to win benchmarks.
 