AMD Ryzen 7 5800X 3D Cache Eight Core 4.5GHz (Socket AM4) Processor - Retail - Go Go Go xD

Anyone know if Asus BIOS has anything similar to Kombo strike?
There is a setting called Performance Optimizer; if you disable it, it forces the CPU to run at 65 W. I did try it and game performance in God of War was only down about 10%. With other games your mileage will vary.
 
Most pastes these days are good. While there's definitely a difference between them with regard to conductivity, the only time I really notice a stark difference between, say, MX-4 and Kryonaut is under a custom loop. On air I tend to only notice a 1-2 degree difference at most under my use case, and that's probably margin-of-error stuff. I just end up using whatever I have kicking about provided it hasn't dried up, but it's not like I'm tearing everything down and rebuilding constantly these days.

In my case the missus is getting a "safer" 12700, while I'm getting a "new and experimental" 5800X3D...

This was what piqued my curiosity to be honest.

I was expecting a side-grade from a 9900KS with a win-some, lose-some scenario in terms of performance, but was quite surprised that single-threaded and sequential performance was about on par despite the 10% frequency deficit. The biggest difference I've noticed is in applications that involve a lot of memory access and memory streaming, such as simulation and open-world titles. Normally you could have a rough stab at how a new platform would compare to the existing one, as the increases in IPC have ground to a crawl generation to generation and frequency has been the biggest gain. With HEDT platforms you at least knew what you could get from the extra cores, PCIe lanes and cache, but it seems like both Intel and AMD have abandoned those platforms to focus on adding more cores to mainstream platforms now.

Intel having a stab at innovation with E-cores seemed like it might add something, but it turns out it largely boils down to frequency increases again for high performance in gaming. With 3D V-Cache it's a bit of a mystery how it might perform in the future as it's so application dependent. It's clear that sequentially executed single-threaded workloads are going to struggle to keep up as the frequency is lower, but in parallelised multi-threaded applications, especially those that are constantly accessing memory, I have a feeling this CPU might have a bit more longevity than expected.

Plus every other game these days is open world as the industry is creatively bankrupt. It'll be interesting to see how the CPU landscape looks when proper 'next gen' titles are out and some actual modern engines such as Unreal Engine 5 are shipping in commercial products.
 
This was what piqued my curiosity to be honest.
I never had an AMD CPU build (but I'm a happy GPU customer) and she uses her PC for work, so she will replace her 4th gen i7 with a 12th gen one while I'm going to try going AMD for the first time.
Both builds are expected to last >6 years so it will be an interesting experiment in longevity of a "tried and true" vs an "exotic" solution.

I'm betting on the extra-large cache getting an advantage over E-cores in newer games; if I'm wrong, hopefully I'll be able to snatch a 5950X for cheap in 5 years or so.
 
On my ROG Strix X570-E Gaming you can access PBO and it works on the 5800X3D. It is hidden away, but head to:
  • Advanced > AMD CBS > NBIO Common Options > XFR Enhancement > Precision Boost Overdrive
Set it to Manual, and tune away. :)
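
If you do start poking at those PBO limits, a quick way to sanity-check that a change actually applied is to compare the all-core clock under load before and after. Below is a minimal Python sketch, purely an illustration rather than anything from the posts above: it assumes the third-party psutil package is installed, and Windows clock reporting can be coarse, so treat it as a rough indicator rather than a proper monitor like HWiNFO.

Code:
# Rough sanity check that a PBO / Curve Optimizer change actually applied:
# load every physical core for a few seconds and print the average reported
# clock. Run it before and after changing the BIOS setting and compare.
# Needs the third-party psutil package (pip install psutil); Windows clock
# reporting can be coarse, so treat this as a rough indicator only.
import multiprocessing
import time

import psutil


def burn(stop_at: float) -> None:
    """Keep one core busy with pointless integer maths until stop_at."""
    x = 0
    while time.time() < stop_at:
        x += 1


if __name__ == "__main__":
    stop_at = time.time() + 10  # ~10 seconds of all-core load
    workers = [
        multiprocessing.Process(target=burn, args=(stop_at,))
        for _ in range(psutil.cpu_count(logical=False) or 1)
    ]
    for w in workers:
        w.start()

    samples = []
    while time.time() < stop_at:
        freq = psutil.cpu_freq()
        if freq is not None:
            samples.append(freq.current)
        time.sleep(1)

    for w in workers:
        w.join()

    if samples:
        print(f"Average reported clock under load: {sum(samples) / len(samples):.0f} MHz")
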
Ah fantastic, that's the very motherboard that I currently have!
Any pointers from your testing? Also which BIOS do you have installed?
 
1080P 3733 C14 vs 3600 C16
I assume with 4090 this means that GPU is not fully utilized in 1080p and you are testing only CPU right?
I have a 3080 Ti so I ran the test in 720p to also eliminate the GPU as a bottleneck, and I got 184fps, compared to your 200.
I have 3200 CL14 1T memory (using the XMP profile). Could 3200 compared to 3600 really mean a 16fps difference?
Or maybe it is due to me not doing the PBO -30 undervolt thing?
I assume you had the "crowd density" setting at high?

Me, 3090 Ti, Cyberpunk, 1080p, High. 84 low, 188 average

You as well, crowd density is set to high I assume?

I got a 173fps average in 1080p with a 3080 Ti.
 
I assume with 4090 this means that GPU is not fully utilized in 1080p and you are testing only CPU right?
I have a 3080 Ti so I ran the test in 720p to also eliminate the GPU as a bottleneck, and I got 184fps, compared to your 200.
I have 3200 CL14 1T memory (using the XMP profile). Could 3200 compared to 3600 really mean a 16fps difference?
Or maybe it is due to me not doing the PBO -30 undervolt thing?
I assume you had the "crowd density" setting at high?
He's got about 9% more. With faster RAM and the PBO settings, that sounds about right.
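
For anyone curious where that 9% comes from, it's just the ratio of the two averages quoted above (200 fps vs 184 fps); a trivial Python check, purely for illustration:

Code:
# Percentage gap between the two Cyberpunk averages quoted above
# (200 fps vs 184 fps). The numbers come straight from the posts.
def percent_gain(faster_fps: float, slower_fps: float) -> float:
    """How much faster the first result is, as a percentage."""
    return (faster_fps / slower_fps - 1.0) * 100.0


print(f"{percent_gain(200, 184):.1f}%")  # ~8.7%, i.e. roughly 9%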
 
Ah fantastic, that's the very motherboard that I currently have!
Any pointers from your testing? Also which BIOS do you have installed?
To be honest I just use PBO2tuner with -30 CO, which runs as soon as Windows starts.

As my D15 cooler tames the CPU to a 73°C peak under full load, I've just left my CPU to do its thing.
In gaming the CPU is very efficient, but then I am gaming at 4K, so my CPU usage will be lower.
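
If anyone wants to replicate the run-at-startup part, one way (not necessarily how it's set up above) is to register a logon task with Windows' schtasks. Here's a minimal Python sketch where the executable path and task name are just placeholders: point it at wherever the tuning tool actually lives and check the tool's own readme for any command-line options.

Code:
# Register a scheduled task so a tuning utility launches at logon.
# The path below is a placeholder, not the tool's real install location.
import subprocess

TOOL_PATH = r"C:\Tools\PBO2Tuner\PBO2Tuner.exe"  # placeholder path - adjust

subprocess.run(
    [
        "schtasks", "/Create",
        "/TN", "PBO tuner at logon",  # arbitrary task name
        "/TR", TOOL_PATH,             # command to run at logon
        "/SC", "ONLOGON",             # trigger: user logon
        "/RL", "HIGHEST",             # run elevated (create from an admin prompt)
        "/F",                         # overwrite the task if it already exists
    ],
    check=True,
)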
 
I assume with 4090 this means that GPU is not fully utilized in 1080p and you are testing only CPU right?
I have a 3080 Ti so I ran the test in 720p to also eliminate the GPU as a bottleneck, and I got 184fps, compared to your 200.
I have 3200 CL14 1T memory (using the XMP profile). Could 3200 compared to 3600 really mean a 16fps difference?
Or maybe it is due to me not doing the PBO -30 undervolt thing?
I assume you had the "crowd density" setting at high?



You as well, crowd density is set to high I assume?

I got a 173fps average in 1080p with a 3080 Ti.
Yeah, the GPU was not fully utilised at 1080p, so these tests were limited by the CPU and memory.
My CPU is using -30 CO, so an all-core boost of 4450MHz compared to 4225MHz all-core at stock.

Crowd density was set to high.

I did do some Shadow of the Tomb Raider tests with 3200 14-13-13-13-28-42 1T timings a few pages back; 3600 C16 was slightly faster.
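
That result makes some sense if you work out the first-word CAS latency in nanoseconds for both kits: it comes out nearly identical, so the extra bandwidth (and, on AM4, the higher 1:1 fabric clock at 3600, assuming FCLK is running 1:1) presumably tips it. A quick back-of-the-envelope check, purely illustrative:

Code:
# First-word CAS latency in nanoseconds: CAS cycles * clock period.
# For DDR, one clock period in ns is 2000 / transfer rate (MT/s).
def cas_latency_ns(transfer_rate_mts: float, cas: int) -> float:
    return cas * 2000.0 / transfer_rate_mts


for label, rate, cl in [("3200 C14", 3200, 14), ("3600 C16", 3600, 16)]:
    print(f"{label}: {cas_latency_ns(rate, cl):.2f} ns")
# 3200 C14 -> 8.75 ns, 3600 C16 -> ~8.89 ns: basically a wash on latency,
# so the bandwidth/fabric-clock edge of 3600 is the likely difference.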
 