
5800X3D vs 12900K tuned gaming benchmarks

I'm part of a small group of PC enthusiasts who enjoy tuning our systems so they perform at their best at all times. This also helps us identify bottlenecks and tune around them where possible. For example, we're happy to give up rendering performance for gaming if gaming is going to be the main goal.

I bought the 5800X3D because I haven't owned a Zen system at any point, and its performance in sim racing is untouchable. Right? Well, let's find out.

Firstly, we'll share the configuration and tune being used by each system. Then we'll go through four games that cover different genres and different game engines. The focus here will be 1080p at the low/lowest preset that's available. We will also compare SMT/HT on vs SMT/HT off, as that's something that's sadly not tested much.

Stability: Both systems have gone through extensive stress testing using y-cruncher, TestMem5, Karhu, OCCT Large/AVX2 and CoreCycler AVX2/full FFTs (AMD only). These are not meant to be one-shot suicide runs for the sake of winning a bench, but settings we use 24/7 across a wide range of software. We strongly believe there's no such thing as testing by just playing games and calling it 'game stable', and no such thing as 'X amount of WHEA errors is OK'.
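
For reference, here's a minimal Python sketch (illustrative only, not part of our actual test process) of how the 'zero WHEA errors' check can be automated on Windows. It counts WHEA-Logger entries in the System event log using the built-in wevtutil tool; the 1000-event cap is an arbitrary choice.

```python
# Count WHEA-Logger events in the Windows System event log via wevtutil.
# Illustrative sketch only -- run it after a stress session; anything other
# than 0 means the tune doesn't meet the "no WHEA errors" standard.
import subprocess

QUERY = "*[System[Provider[@Name='Microsoft-Windows-WHEA-Logger']]]"

def count_whea_events(max_events: int = 1000) -> int:
    """Return the number of WHEA-Logger events wevtutil can pull from the System log."""
    result = subprocess.run(
        ["wevtutil", "qe", "System", f"/q:{QUERY}",
         f"/c:{max_events}", "/rd:true", "/f:text"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        raise RuntimeError(result.stderr.strip() or "wevtutil failed")
    # In text format each rendered event starts with a line like "Event[0]:".
    return sum(1 for line in result.stdout.splitlines() if line.startswith("Event["))

if __name__ == "__main__":
    print(f"WHEA-Logger events found: {count_whea_events()}")
```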

AMD System Configuration:
5800X3D
Asus B550 Strix-F II
8Pack B-die 2x8GB 14-15-15 / 3600MT/s 1.45V XMP
3090 FE (no OC on GPU), NVCP settings default

unknown.png


Intel System Configuration:
12900K (E-cores off to keep it 8 vs 8)
MSI Z690i Unify (ITX)
Kingston SK hynix 40-40-40 / 6000MT/s 1.35V XMP
3090 FE (no OC on GPU), NVCP default

unknown.png




Cyberpunk 2077 built-in benchmark. 1080p Low preset.

AMD SMT ON:
unknown.png


Intel HT ON:
CP77_HT_ON.png


AMD SMT OFF:
unknown.png


INTEL HT OFF:
CP77_HT_OFF.png




Assetto Corsa Competizione: 1080p using a custom replay and settings. I can provide the replay and settings if someone would like to test.

AMD SMT OFF and SMT ON:
unknown.png


INTEL HT OFF and HT ON:
Assetto_Corsa.png





SOTTR 1080p Lowest Preset. TAA off

AMD SMT ON:
unknown.png


Intel HT ON:
SOTTR_HT_ON.png


AMD SMT OFF:
unknown.png


Intel HT OFF:

SOTTR_HT_OFF.png




Final Fantasy XIV Endwalker Demo. 1080p Standard (Desktop)

AMD SMT ON:
unknown.png


Intel HT ON:

endwalker_HT_ON_5.3.png


AMD SMT OFF:
unknown.png


Intel HT OFF:

endwalker_HT_OFF.png




Gears of War 5: 1080p Low Preset

AMD SMT ON:
unknown.png


Intel HT ON:

gears_HT_ON.png


AMD SMT OFF:
unknown.png


Intel HT OFF:
gears_HT_OFF.png



Special thanks: To be honest, none of this would have been possible without 'matique' from our group, who was diligent in his benching practices on the 12900K system to ensure we had like-for-like settings in all benchmarks.

Assassin's Creed: Odyssey. A new addition since it's out on Game Pass and has a built-in benchmark:

5800X3D (SMT off): the rest of the tune is in the OP
unknown.png


12900K (HT and E-cores off), 8c/8t @ 5.4GHz, 7000C30 memory

unknown.png
 
I bought the 5800X3D because I haven't owned a Zen system at any point, and its performance in sim racing is untouchable. Right? Well, let's find out.

Is that actually some widely agreed fact?

As I understand it, the respect for the 5800X3D comes from how effectively it can compete against a 12900K system costing A LOT more. I mean, you're running £300 RAM at 7000MT/s, case in point. It's like you're pulling out all the rabbits you can find to stop it from being beaten comprehensively. That's not a complaint, just an observation :)

The 5800X3D also has an ability to monster its way through some edge-case games (Star Citizen, Microsoft Flight Simulator 2020 and a bunch more, see the video below), leaving Alder Lake in its dust.

 
Is that actually some widely agreed fact?

As I understand it, the respect for the 5800X3D comes from how effectively it can compete against a 12900K system costing A LOT more. I mean, you're running £300 RAM at 7000MT/s, case in point.

The 5800X3D also has an ability to monster its way through some edge-case games (Star Citizen, Microsoft Flight Simulator 2020 and a bunch more, see the video below), leaving Alder Lake in its dust.
Apparently the 5800X3D is very popular with Simracers.
 
Apparently the 5800X3D is very popular with Simracers.

Okay, it seems to me it's very popular with anyone who plays games. They all have their own communities, and in the Star Citizen community the 5800X3D was a very hot topic when it hit the scene.

I'm sure the MS Flight Sim community was all over it too.

The Linux gaming community are mad for it.
 
Nice post. I only have one of those benchmarks you ran (SOTTR) so I'll add my contribution.

I'd like to preface this by agreeing with the sentiment above from @Robert896r1: one-and-done suicide runs are all well and good, but whether the system is truly stable should be taken into account.

With that sentiment in mind, here is my daily gaming overclock setup. I want to make it clear that this produces 0 WHEA errors and is stable running y-cruncher overnight (1-7-0 options), 30 runs of TM5 1usmus v3, 100,000% Karhu and 8 hours of OCCT Large. ;)

Settings (timings, voltages and FCLK), 1.535V VDIMM:
3XvDcxA.jpg

Here is the best CPU average score I achieved with a 5800X3D, SMT off (thanks Robert for that tip), on Windows 11 with a 6900 XTXH, using the exact settings shown above.
6MObAi7.png
I also tested SOTTR on Windows 10 and got a worse score vs my best above.

Something I found interesting is Warzone. It is also a decent CPU benchmark, and I get better scores in Warzone on Windows 10 than on 11, so which OS provides better results can definitely vary per game.

I am still tuning and improving my memory/FCLK overclock on the 5800X3D. I have a better SOTTR score saved (higher than 411 :D), but since that profile is not yet stable (it fails TM5 after half a dozen cycles) I won't post it here, to keep with the spirit Robert has outlined in his OP.
 
Interesting!

I've summarised your results, and the SMT on/off behaviour being a bit all over the place is really interesting; it's something most reviewers seem to miss:
nnX9pUT.png

EDIT: I wonder if the SOTTR sub-scores tell us anything useful? Min FPS and the CPU scores just seem more useful than average FPS.
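
If anyone wants to reproduce that kind of summary from their own runs, here's a minimal Python sketch of the percentage-delta calculation behind it. The FPS values are placeholders only, not the measured results; those are in the benchmark screenshots earlier in the thread.

```python
# Percentage change in average FPS going from SMT/HT on to SMT/HT off.
# The numbers below are placeholders -- substitute your own benchmark results.
results = {
    # game: (avg_fps_smt_or_ht_on, avg_fps_smt_or_ht_off)
    "Cyberpunk 2077": (100.0, 104.0),
    "SOTTR": (100.0, 103.0),
    "Gears 5": (100.0, 102.0),
}

for game, (fps_on, fps_off) in results.items():
    delta_pct = (fps_off - fps_on) / fps_on * 100.0
    print(f"{game}: SMT/HT off is {delta_pct:+.1f}% vs SMT/HT on")
```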
 
Game specific. Generally, no. In Warzone it hurts performance to turn it off.
Ok cheers, it's just that that list of games happened to benefit. Useful to know though; I wouldn't have expected that these days, I thought all those SMT issues had been resolved years ago.
 
Ok so for AMD it's definitely better to turn SMT off for all gaming?

So far yes, in those games, but always test your primary games for yourself and confirm. I have the Gears 5 results and they'll show similar when I post them later. The issue isn't SMT but how power-suffocated the X3D is. You quickly run out of power budget, and when you turn on SMT you can see that the core boosting is immediately impacted.

What the X3D lacks is two things: a higher power ceiling and more granular voltage control. With those two things, it'd have way more performance to give.

Case in point, I'm equally stable at 1933 IF running the same timings, but I need to give VSOC and the VDDG rails more voltage to get there. This in turn means the SOC takes more power away from the cores. So it looks good in AIDA but it creates a regression in actual games.

The other option is to put the X3D on a chiller. Lower temps = lower power draw, meaning more power budget for the chip. Obviously that's not a good idea for daily use, but you get my point.
 
I got my 5800X3D on Friday and have finally been able to play BF2042 after 6 months; before, with my 5800X, the fps went from 80-90 to 120, but now it's a solid 120.

I'm also looking forward to when the free-fly event in Star Citizen is over, to get a better understanding of the improvements there too. I tried Star Citizen out over the weekend, but it was too frustrating with all the crashes etc. They got over 110k new account sign-ups over the past week and a bit, so the servers took a hammering and performance generally degraded.

Overall it’s a great CPU and glad I could find a decent UK reseller to get one.

This guide really helped with temps too.

 
So far yes, in those games, but always test your primary games for yourself and confirm. I have the Gears 5 results and they'll show similar when I post them later. The issue isn't SMT but how power-suffocated the X3D is. You quickly run out of power budget, and when you turn on SMT you can see that the core boosting is immediately impacted.

What the X3D lacks is two things: a higher power ceiling and more granular voltage control. With those two things, it'd have way more performance to give.

Case in point, I'm equally stable at 1933 IF running the same timings, but I need to give VSOC and the VDDG rails more voltage to get there. This in turn means the SOC takes more power away from the cores. So it looks good in AIDA but it creates a regression in actual games.

The other option is to put the X3D on a chiller. Lower temps = lower power draw, meaning more power budget for the chip. Obviously that's not a good idea for daily use, but you get my point.
Hence why under-volting can give more performance?
 
Hence why under-volting can give more performance?
It depends on your settings and that varies from system to system.

If you see more performance by reducing voltage to the processor, then you are power or temperature limited somewhere.

Using my settings above, I see no performance gain using curve optimiser; all I see is lower temperatures and power draw. However, I see performance regression if my curve optimiser settings are too aggressive (clock stretching). This becomes more apparent at higher clock speeds for the CPU, memory and FCLK. It's all about finding the right balance, and this takes time and patience.
 
It depends on your settings and that varies from system to system.

If you see more performance by reducing voltage to the processor, then you are power or temperature limited somewhere.

Using my settings above, I see no performance gain using curve optimiser; all I see is lower temperatures and power draw. However, I see performance regression if my curve optimiser settings are too aggressive (clock stretching). This becomes more apparent at higher clock speeds for the CPU, memory and FCLK. It's all about finding the right balance, and this takes time and patience.
Ok just Googled that:

"Clock Stretching is a safety feature that is built into all AMD Ryzen CPUs. When the CPU thinks the actual voltage is too low to sustain a stable system at a given frequency, it will reduce the clock period until the voltage is back at the acceptable level."

Makes sense, there is an optimal voltage and lower is not always better.
 
Ok just Googled that:

"Clock Stretching is a safety feature that is built into all AMD Ryzen CPUs. When the CPU thinks the actual voltage is too low to sustain a stable system at a given frequency, it will reduce the clock period until the voltage is back at the acceptable level."

Makes sense, there is an optimal voltage and lower is not always better.
Happens in GPUs too; you just need to keep an eye out for it, and that's possible using HWiNFO64. Look for the effective clock speed for the CPU and GPU.

I sometimes even have it visible on my overlay when gaming to make sure everything is as it should be under various workloads.
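
If you log to CSV rather than just watch the overlay, something like this minimal Python sketch can flag stretching after the fact. The file name and column headers below are assumptions; HWiNFO64's actual CSV headers vary by version and hardware, so adjust them to match your own log.

```python
# Flag samples where the effective clock falls well below the set clock
# (a sign of clock stretching) in a HWiNFO64 CSV log.
# LOG_FILE, SET_COL and EFF_COL are assumed names -- edit to match your log.
import csv

LOG_FILE = "hwinfo_log.csv"
SET_COL = "Core 0 Clock [MHz]"
EFF_COL = "Core 0 Effective Clock [MHz]"
THRESHOLD = 0.97  # flag samples where effective clock < 97% of the set clock

with open(LOG_FILE, newline="", encoding="latin-1") as f:  # adjust encoding if needed
    for row in csv.DictReader(f):
        try:
            set_clk = float(row[SET_COL])
            eff_clk = float(row[EFF_COL])
        except (KeyError, TypeError, ValueError):
            continue  # skip summary/footer rows or rows missing these columns
        if set_clk > 0 and eff_clk / set_clk < THRESHOLD:
            print(f"possible clock stretching: set={set_clk:.0f} MHz, "
                  f"effective={eff_clk:.0f} MHz")
```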
 