
5800X3D vs 12900k tuned gaming benchmarks



I'm part of a small group of PC enthusiasts who enjoy tuning our systems so they perform at their best at all times. This also helps us identify bottlenecks and tune around them where possible. For example, we're happy to give up rendering performance if gaming is going to be the main goal.

I bought the 5800X3D because I haven't owned a Zen system at any point, and its performance in sim racing is untouchable. Right? Well, let's find out.

Firstly, we'll share the configuration and tune used by each system. Then we'll go through four games that cover different genres and different game engines. The focus here will be 1080p at the low/lowest preset available. We will also compare SMT/HT on vs SMT/HT off, as that's something that's sadly not tested much.
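Since the comparisons below lean on average and minimum fps figures, here's a minimal sketch of how those numbers typically fall out of raw frame times. The function and sample data are illustrative only, not taken from the actual runs.

```python
# Illustrative only: how average FPS and 1%-low FPS are derived from a
# list of per-frame times (in milliseconds). Sample data is made up.

def fps_stats(frametimes_ms):
    """Return (average FPS, 1%-low FPS) for frame times in milliseconds."""
    avg_fps = 1000.0 * len(frametimes_ms) / sum(frametimes_ms)
    # 1% low: average FPS over the slowest 1% of frames
    slowest = sorted(frametimes_ms, reverse=True)
    n = max(1, len(slowest) // 100)
    low_fps = 1000.0 * n / sum(slowest[:n])
    return avg_fps, low_fps

# ~145 fps most of the time, with 1% of frames spiking to 12.5 ms (80 fps)
frames = [6.9] * 990 + [12.5] * 10
avg, low = fps_stats(frames)
print(f"avg: {avg:.1f} fps, 1% low: {low:.1f} fps")  # avg: 143.8 fps, 1% low: 80.0 fps
```

This is why a handful of slow frames barely moves the average but drags the 1% low hard, which matters for the SR vs DR memory discussion later in the thread.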

Stability: Both systems have gone through extensive stress testing using y-cruncher, TestMem5, Karhu, OCCT large/AVX2 and CoreCycler AVX2/full FFTs (AMD only). These are not meant to be one-shot suicide runs for the sake of winning a bench, but tunes we use 24/7 across a wide range of software. We strongly believe there's no such thing as testing by just playing games and calling it 'game stable', and no such thing as 'X amount of WHEA errors is OK.'

AMD System Configuration:
5800x3d
Asus B550 Strix-F II
8Pack B-die 2x8GB 14-15-15/3600 1.45V XMP
3090FE (no oc on gpu) NVCP settings default

unknown.png


Intel System Configuration:
12900k (e-cores off to keep it 8 vs 8)
MSI Z690i Unify (itx)
Kingston (SK hynix) 40-40-40/6000 1.35V XMP
3090FE (no oc on gpu) NVCP Default

unknown.png




Cyberpunk 2077 built-in benchmark. 1080p Low preset.

AMD SMT ON:
unknown.png


Intel HT ON:
CP77_HT_ON.png


AMD SMT OFF:
unknown.png


INTEL HT OFF:
CP77_HT_OFF.png




Assetto Corsa Competizione: 1080p using a custom replay and settings. I can provide the replay and settings if someone would like to test.

AMD SMT OFF and SMT ON:
unknown.png


INTEL HT OFF and HT ON:
Assetto_Corsa.png





SOTTR 1080p Lowest Preset. TAA off

AMD SMT ON:
unknown.png


Intel HT ON:
SOTTR_HT_ON.png


AMD SMT OFF:
unknown.png


Intel HT OFF:

SOTTR_HT_OFF.png




Final Fantasy XIV Endwalker Demo. 1080p Standard (Desktop)

AMD SMT ON:
unknown.png


Intel HT ON:

endwalker_HT_ON_5.3.png


AMD SMT OFF:
unknown.png


Intel HT OFF:

endwalker_HT_OFF.png




Gears of War 5: 1080p Low Preset

AMD SMT ON:
unknown.png


Intel HT ON:

gears_HT_ON.png


AMD SMT OFF:
unknown.png


Intel HT OFF:
gears_HT_OFF.png



Special thanks: To be honest, none of this would have been possible without 'matique' from our group, who was diligent in his benching practices on the 12900k system to ensure we had like-for-like settings in all benchmarks.

Ok so for AMD it's definitely better to turn SMT off for all gaming?

So far yes in those games, but always test your primary games for yourself and confirm. I have the Gears 5 results and they'll show similar when I post them later. The issue isn't SMT itself but how power-starved the X3D is. You quickly run out of power budget, and when you turn on SMT you can see that core boosting is immediately impacted.

What the X3D lacks are two things: a higher power ceiling and more granular voltage control. With those two things, it'd have way more performance to give.

Case in point: I'm equally stable at 1933 IF running the same timings, but I need to give VSOC and the VDDG rails more voltage to get there. This in turn means the SOC takes more power away from the cores. So it looks good in AIDA but creates a regression in actual games.

The other option is to put the X3D on a chiller. Lower temps = lower power draw, meaning more power budget for the chip. Obviously that's not a good idea for daily use, but you get my point.
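To put rough numbers on the shared power budget point, a back-of-envelope sketch. The 142W figure is the stock AM4 PPT; the SOC draw figures are assumed values purely for illustration, not measurements from my chip.

```python
# Back-of-envelope sketch of the shared PPT budget. 142 W is the stock
# AM4 package power limit; the SOC draw numbers below are assumptions
# for illustration, not measurements.

PPT_W = 142.0  # total package power: cores and SOC share one budget

def core_budget(soc_power_w, ppt_w=PPT_W):
    """Watts left for the cores once the SOC has taken its cut."""
    return ppt_w - soc_power_w

soc_stock = 14.0  # assumed SOC draw at the stock IF clock / VSOC
soc_1933 = 19.0   # assumed SOC draw at 1933 IF with raised VSOC/VDDG

print(core_budget(soc_stock))  # 128.0 W left for the cores
print(core_budget(soc_1933))   # 123.0 W -- the difference comes straight
                               # out of core boost headroom, which is why
                               # AIDA can improve while games regress
```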
 
Are you not leaving performance on the table in these CPU-bottlenecked scenarios by only having 2 single-rank DIMMs on the Ryzen?

Is the 12900k losing much having the e-cores turned off?

Edit: 4:02 in this video

E-cores aren't harmful for the most part, but can be left off for primarily gaming setups. One of our guys did extensive testing on the various core configurations for ADL: https://kingfaris.co.uk/blog/12900k-core-configs/summary We also felt 8 vs 8 was more direct, as I mentioned in the OP.

DR B-die would help to a degree, but this was what I'd call a budget build to have some fun on Zen. The current kit I'm using is from my 9900k setup, so no additional build cost. A new DR B-die kit is about 190, so I'm not really up for spending that much. https://www.overclockers.co.uk/team...00c14-3200mhz-dual-channel-kit-my-003-8p.html

If you got one for me to test with, happy to!
 
I'd better not point out my 32GB of 8Pack B-die C14 RAM then :cry: Looks like massive overkill with the 5800X3D ;)

Yeah, that's the RAM I would like. Still bugs me that I only have SR RAM. A lot of scores would be nicely improved, especially the min fps, with DR instead of SR. Running these timings wouldn't be an issue either, since the frequency itself is so low.

Dual-rank B-die is pretty OP.
 
Assassin's Creed: Odyssey. New addition since it's out on Game Pass and has a built-in bench. 1080p Low preset:

5800X3D (SMT off): rest of the tune is in the OP
unknown.png


12900k (HT and e-cores off), 8c8t, 5.4GHz, 7000C30

unknown.png
 
The 5800X3D is smashing it outta the park there: a lower score, but with almost 1GHz less CPU speed and RAM at around 50% of the LGA1700 part's. Crazy what a bit of cache can offer, and the frame time consistency on the X3D is way better, with two weird low spikes vs one on the LGA1700.

Frame time consistency is actually better on the 12900k. Look at the smoothness of the CPU and GPU graphs.

The top graph is deceptive, as its range is based on min/med/avg, so the 12900k looks 'worse' because it's using a smaller range for the data. The smoothness you're seeing is range compression.
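The range compression effect is easy to put numbers on: the same absolute frame-time jitter occupies fewer pixels when a graph auto-picks a wider y-axis range. The figures below are illustrative, not taken from the actual graphs.

```python
# Illustrative numbers for range compression: identical frame-time
# jitter drawn on axes with different auto-picked ranges.

def on_screen_amplitude(jitter_ms, axis_range_ms, plot_height_px=200):
    """Pixel height a given jitter occupies on a graph of fixed height."""
    return jitter_ms / axis_range_ms * plot_height_px

jitter = 1.0  # the same 1 ms of frame-time variation in both graphs
print(on_screen_amplitude(jitter, axis_range_ms=5.0))   # 40.0 px -- looks spiky
print(on_screen_amplitude(jitter, axis_range_ms=20.0))  # 10.0 px -- looks smooth
```

Same data, same variance; the only difference is how much of the axis the plotting tool decided to show.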
 
That's the end of this series of benches. We'll come back, probably around Xmas, with tuned RPL vs Zen 4, both on DDR5 and maxed out for gaming as in the OP.

The main reason for the delay is letting both ecosystems mature in terms of BIOS support, and learning any unique aspects of either platform so we do them justice.
 