Fallout 4 CPU benchmark thread (need some Zen3 and Zen4 results!)

For what it's worth, this is a video showcasing Strange Brigade's GPU usage with an RTX 3070 on lowest settings at 1024x768, and it is why I argue against 1080p low for the vast majority of users: it is not a CPU-bound resolution for this game.

It's crazy that a 5600X can still fill out an RTX 3070 at VGA resolution in some cases. These CPUs are seriously quick, and I imagine the 12th-gen Intels are just as nuts too.

 
For anyone wanting to get Resident Evil 5 working (a legacy benchmark), you will need to download the files I have provided and install them (a GFWLSETUP workaround; these are legitimate files from MS, not tampered with). (7-Zip required)

https://mega.nz/file/BqQRyKaI#3DgC-iQxzmxh7DY7kqQH3O8Is0LROCyrSgZ4m5g48gU

If you don't own the game, there is a standalone benchmark available.
https://www.techpowerup.com/download/resident-evil-5-benchmark-utility/

I will test to see how RE5 behaves resolution-wise, and what settings suit a CPU benchmark.

The settings below produced a maximum of 60% GPU usage from my RTX 3070, so I would say it is a very safe resolution to use.

re5dx9-2022-02-25-03-14-05-838.png


re5dx9-2022-02-25-03-14-09-647.png


Select variable benchmark.

re5dx9-2022-02-25-03-13-59-263.png


Result.

re5dx9-2022-02-25-03-13-47-815.png
 
@Tired9 @Robert896r1 @humbug As a request, any chance you can make new threads for the other games you want to test, please? I will quite happily contribute to them, and try to buy the games for them too.

There is hardly any information for this game past Zen+, and this thread is to investigate how newer hardware performs with it. The more contributions the merrier, as that can even out any outlier results.

We're back to that IPC word: instructions per cycle. Nice, but not in any way precise, as what we really want is performance while running task X, which is quite different. Let's call it ipcX. The problem is that a CPU (or maybe we should say system, as memory etc. plays a huge role) which is very good at ipcX may not be as good at ipcY (that is, performance while running task Y), or ipcZ.

It may be mostly single-threaded, as the Process Explorer shot I posted in #12 shows one thread being used far more than the others. For some reason, the engine used for Fallout 4 (and Skyrim etc.) doesn't do that well on certain processors.

At the end of the day, this benchmark measures one thing only: the performance you get when running Fallout 4 at those low GPU settings. So ipcFO4, or even ipcFO4low. And this may not even be the same as ipcSkyrim, despite both games being based on the same engine.
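To make the ipcX idea concrete, here is a minimal C sketch (purely illustrative) that turns a benchmark result into a per-clock figure, using the Zen 3 and Alder Lake numbers posted further down this thread:

[CODE]
/* "ipcFO4" as a crude per-clock metric: benchmark result divided by core
 * clock. FPS and clock figures are the Zen 3 / ADL results quoted later
 * in this thread. */
#include <stdio.h>

int main(void) {
    double zen3_fps = 86.0, zen3_ghz = 4.85;
    double adl_fps  = 96.0, adl_ghz  = 5.0;

    double zen3_perclk = zen3_fps / zen3_ghz; /* FPS per GHz */
    double adl_perclk  = adl_fps  / adl_ghz;

    printf("Zen 3: %.1f FPS/GHz\n", zen3_perclk);
    printf("ADL:   %.1f FPS/GHz\n", adl_perclk);
    printf("ADL per-clock advantage: %+.1f%%\n",
           (adl_perclk / zen3_perclk - 1.0) * 100.0);
    return 0;
}
[/CODE]

The same arithmetic gives a different answer for a different game, which is exactly the point: the number is ipcFO4, not IPC.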

I think people are missing this, but some of us seriously want to use this thread to get a rough ballpark of performance in a game which hasn't been tested for a few years.

Yes, a game which is seriously poorly optimised. It's not about proving whether your CPU or my CPU is better; it's about seeing how they perform, and the more results the merrier. ATM, most information about it is very anecdotal.

Also, TBF the Zen3 results are not that bad.

Icu2kU4.png

A tuned Ryzen 9 5950X with an RTX 3080 Ti isn't that much slower than a similar-clockspeed Core i5 12600K with the same memory and a marginally slower RTX 3080 - about 12% slower overall. The Ryzen 9 5950X is 21% faster than my semi-tuned Ryzen 7 3700X. I pushed the RAM as much as I could; if anything I could get away with better memory settings, but Fallout 4 would throw up errors.

@humbug Also, some comments about the Core i9 9900K. With extreme-tuned memory at 4400MHz and a 5.2GHz overclock, its 92FPS and 113FPS are close to the lower of the two Alder Lake results. That is a huge increase in memory performance there.

Performance then drops to 77FPS and 101FPS when the memory is dialled back to 3600CL14, which puts it below the top Zen3 result (even with the 5.2GHz overclock). Compared to the top Zen3 result it is running at higher clockspeeds, so if you want to use "IPC", Zen3 is better than CFL.

It also shows how memory-dependent this game is, which is itself useful. My Ryzen 7 3700X is running 3600MHz RAM at CL16-19-20-38-60 1T timings. KompuKare has a Ryzen 5 3600 running only about 100-200MHz slower than my Ryzen 7 3700X, and the game really only uses six cores AFAIK, so that difference is down to the RAM timings IMHO.

It sort of confirms some information which Techspot showed a while back, but it does show that tuning memory is a way to extract extra performance in games.
 

Zen 3 @ 4.85GHz, 3800MT/s CL16: 86 FPS (100%)
ADL @ 5.0GHz, 3800MT/s CL16: 96 FPS (111%)
ADL @ 5.3GHz, 4166MT/s CL15: 113 FPS (131%)

That suggests that whatever this game is doing, it is limited by your memory speed.
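A quick sanity check on those three rows (a throwaway C sketch using only the figures quoted above): pure core-clock scaling from 5.0GHz to 5.3GHz would explain about 6%, but the measured jump is nearer 18%, so the memory change is doing most of the work.

[CODE]
/* Compare the two ADL rows above: how much of the FPS gain can the core
 * clock alone explain, versus the memory going 3800CL16 -> 4166CL15? */
#include <stdio.h>

int main(void) {
    double fps_gain = 113.0 / 96.0 - 1.0;    /* measured          */
    double clk_gain = 5.3 / 5.0 - 1.0;       /* core clock alone  */
    double mem_gain = 4166.0 / 3800.0 - 1.0; /* raw memory clock  */
    printf("measured FPS gain:  %+.1f%%\n", fps_gain * 100.0);
    printf("core clock gain:    %+.1f%%\n", clk_gain * 100.0);
    printf("memory clock gain:  %+.1f%%\n", mem_gain * 100.0);
    return 0;
}
[/CODE]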
 
@CAT-THE-FIFTH Memory tuning, done properly, has always led to a performance bump in games. If you just OC your cores and cache, the moment the pipeline has to wait on memory you have a performance bottleneck. The whole point of tuning memory is to speed it up and limit the waiting the CPU has to do. Unfortunately, it takes a lot of time and built-up knowledge to tune memory to its maximum potential, but it's a fair amount of performance that people leave on the table while often focusing on the wrong areas.
 
I still don't know why you only got 66FPS in the first test on your Ryzen 7 5800X, but the second test was around 25% better than my Ryzen 7 3700X. I suspect if you fiddled about with your Ryzen 7 5800X you could get better scores. The slightly higher clockspeed and slightly higher IPC of ADL are probably contributing to that 11% extra performance, but memory tuning does help in this game.



I agree - but Fallout 4 seems quite sensitive to it, which is why I tried my best to tune the memory on my Ryzen 7 3700X. Sadly I didn't end up with the best kit, and I am stuck on a B450 motherboard (and I have an SFF system too). I could test better-tuned settings, but it was always this game which would bug out!
 
https://kingfaris.co.uk/blog/intel-ram-oc-impact/summary - my buddy did memory scaling testing a while back on CML (Comet Lake); you can see the summary results there. Depending on the game and where the bottlenecks are, you can gain a lot, and even in games where the bump in average FPS isn't as big, minimum FPS still shows a notable gain. Games such as CS:GO and F1 2019, which can mostly run in cache, naturally see the smallest gains.
 
@CAT-THE-FIFTH

It was a cheap 32GB kit (£90) and it's not been tested by my motherboard vendor, so XMP is running in a safe mode: normally it's 18-21-21 on the main timings. I have set those to 16-18-18 (which is what XMP should be), but I have not gone into the sub-timings, which no doubt are also excessively loose.

I can't be #### to tune it, although I probably should. Normally I don't think it makes much difference: I do play high-FPS games, as most of the games I play tend to be older, and with a 2070 Super the frame rates blow out to many hundreds and don't seem to bottleneck the GPU. Those frame rates are also a lot higher than they were with the 3600.

I have also tested it against other Zen 3 CPUs, and while mine is not the fastest Zen 3, it's also not lagging far behind, so again I don't feel like my crap memory matters much. :)
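For reference, the main-timing change can be put in nanoseconds with ns = 2000 × CL / MT/s. A quick C sketch; note the 3600MT/s kit speed is an assumption for illustration, as the post doesn't state the kit's rated speed:

[CODE]
/* True CAS latency in nanoseconds: ns = 2000 * CL / (MT/s).
 * Kit speed of 3600 MT/s is assumed; only the CL values (18 "safe mode"
 * XMP vs 16 set manually) come from the post above. */
#include <stdio.h>

int main(void) {
    double mts = 3600.0; /* assumed rated speed */
    printf("CL18 @ %.0f MT/s: %.2f ns\n", mts, 2000.0 * 18.0 / mts);
    printf("CL16 @ %.0f MT/s: %.2f ns\n", mts, 2000.0 * 16.0 / mts);
    return 0;
}
[/CODE]

That's 10ns vs 8.9ns on the first word, before the sub-timings mentioned above are even touched.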
 

That is a really interesting set of results. I wonder if the Ryzen 7 5800X3D with its 96MB of L3 cache will be a monster in some games, then?



I tuned mine specifically because of prior experience with the game; I took a bet on getting B-die but got Hynix CJR instead. My results are also from an SFF system running a Noctua L12S cooler in an NCase M1, so the CPU is probably running a bit hotter than it should.

I personally think Starfield should be better optimised though!
 

Yes - in 2022, or 2024, or whenever Starfield is due out, a game being so heavily CPU-bottlenecked while a modern high-performance CPU sits at 10% load would be a joke; you just can't get away with pretending it's 1997 anymore.
 
For cache-bound games, or those on the edge of being able to live entirely in cache, yes. For "bigger, more intense" games which need memory access, not really.

If you think of memory as an L4 cache, it starts to make more sense how these things are related.
 
That's kind of right.

It's a performance hierarchy:

L1
L2
L3
Mem

What doesn't fit in L1 gets evicted to L2, which is slower than L1... and so on.

The larger the L3, the more hits you get in it; the speed difference between L3 and DDR4, on Zen 3 at least, is about 1,200 GB/s for L3 vs about 60 GB/s for DDR4.
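To put numbers on that hierarchy on your own machine, a rough pointer-chase sketch like the one below works (sizes, iteration count and the use of clock() are arbitrary choices; absolute results will vary by CPU):

[CODE]
/* Pointer-chase microbenchmark sketch: walk a randomly shuffled ring of
 * pointers. As the working set grows past L1, L2 and L3 into DRAM, the
 * average time per access jumps at each level. Build with e.g. gcc -O2. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    for (size_t kb = 16; kb <= 64 * 1024; kb *= 4) {
        size_t n = kb * 1024 / sizeof(void *);
        void **ring = malloc(n * sizeof(void *));
        size_t *idx = malloc(n * sizeof(size_t));

        /* random cyclic permutation, so the prefetcher can't help */
        for (size_t i = 0; i < n; i++) idx[i] = i;
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = (size_t)rand() % (i + 1);
            size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
        }
        for (size_t i = 0; i < n; i++)
            ring[idx[i]] = &ring[idx[(i + 1) % n]];

        void **p = &ring[idx[0]];
        long iters = 20 * 1000 * 1000;
        clock_t t0 = clock();
        for (long i = 0; i < iters; i++)
            p = (void **)*p;
        double ns = (double)(clock() - t0) / CLOCKS_PER_SEC / iters * 1e9;

        printf("%8zu KiB: %6.2f ns/access\n", kb, ns);
        if (p == NULL) return 1; /* keep the chase from being optimised out */
        free(ring);
        free(idx);
    }
    return 0;
}
[/CODE]

On a Zen 3 part you would expect steps roughly at the L1 (32KB), L2 (512KB) and L3 (32MB per CCD) boundaries before DRAM latency takes over.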
 
More interesting are RDNA2 GPUs.

What AMD calls "Infinity Cache" is basically an L3 for GPUs: a large pool (128MB) of memory slower than L2 but very much faster than GDDR6. It catches enough hits that, by AMD's estimation, it makes a 256-bit memory bus architecture as fast as a 512-bit architecture.

The reason AMD gives for doing this is that a 512-bit bus would use a lot more power than their architecture, which is true. But the L3 cache is part of the memory subsystem AMD uses to make Infinity Fabric work - it's the "glue" of AMD's MCM designs - and therefore, in my view, RDNA2 is actually the prototype for the world's first MCM gaming GPU: RDNA3.
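As a back-of-envelope model of that claim (a rough C sketch; the hit rates are assumptions for illustration, not AMD's figures): if a fraction h of requests hit the on-die cache, the GDDR6 only sees the misses, so effective bandwidth is roughly dram_bw / (1 - h), until the cache's own bandwidth becomes the limit.

[CODE]
/* Cache bandwidth amplification, modelled crudely: DRAM serves only the
 * misses, so effective bandwidth ~ dram_bw / (1 - hit_rate). The 512 GB/s
 * figure is 256-bit GDDR6 at 16 Gbps; the hit rates are assumed. */
#include <stdio.h>

int main(void) {
    double dram_bw = 512.0; /* GB/s */
    for (double h = 0.0; h < 0.76; h += 0.25)
        printf("hit rate %3.0f%%: effective ~%4.0f GB/s\n",
               h * 100.0, dram_bw / (1.0 - h));
    return 0;
}
[/CODE]

At a 50% hit rate the 256-bit bus behaves like a 1024 GB/s (512-bit-class) one, which is the comparison made above.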
 

It actually does scale to six cores, but it loads just one or two of them a huge amount. Being a DX9/DX11-era engine, that's no wonder. What the game needs is more effective load balancing, which I suspect only DX12 or Vulkan enables. It also ties into why some of the AMD dGPU results are not as good as the Nvidia dGPUs: Nvidia made an effort to make their DX11 drivers more multi-threaded.
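One way to see why loading only one or two threads caps performance is Amdahl's law: with a fraction p of the frame parallelisable, the speedup on n cores is 1 / ((1 - p) + p/n). A tiny C sketch; the p values are assumptions for illustration, not measurements of this engine:

[CODE]
/* Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n). A heavy main thread
 * means a small parallel fraction p, so cores beyond a few add little. */
#include <stdio.h>

int main(void) {
    double ps[] = { 0.5, 0.8, 0.95 }; /* assumed parallel fractions */
    for (int i = 0; i < 3; i++) {
        printf("p = %.2f:", ps[i]);
        for (int n = 2; n <= 16; n *= 2)
            printf("  %2d cores: %.2fx",
                   n, 1.0 / ((1.0 - ps[i]) + ps[i] / n));
        printf("\n");
    }
    return 0;
}
[/CODE]

Even at p = 0.8, sixteen cores only manage about a 4x speedup, which is why the scaling stalls around six cores.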

It will be interesting to see which games benefit most from the extra cache! I suspect it will be the cache-bound type you describe.

To me this game has the hallmarks of x87 instructions: an ancient, long-obsolete (direct-to-memory operand) compiler target.

It did, a decade ago, until they moved to SSE-type instructions with the newer versions of the engine. If you think performance is not great now, it was even worse a decade ago!

Having said that, the Phenom II X4 980 was the fastest AMD CPU in Skyrim for years. An FX-8350 could barely match it.
 
Some modders did that first, and it made such a big difference that eventually, for FO4 and Skyrim SE, Bethesda were embarrassed into supporting it too!
OK, it might have been a bit more than flipping a compiler flag, but x87 instructions were already antiquated even when Skyrim came out. You'd hope that the next time they re-use this old engine, they make it scale better.

GPU scaling and NPC scaling are of course not necessarily the same thing. Scaling across many threads to get great draw-call performance is one thing; scaling the AI across many threads is another.

Still disappointed that the consoles didn't come with any neural-net unit (NPU) in their SoC designs. After all, almost all phones have one. An NPU plus a good AI framework should really be in the next-gen consoles.
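For anyone curious, the x87-vs-SSE difference described above can be reproduced in miniature with GCC's 32-bit code generation. This is just the general technique with stock GCC flags, not Bethesda's or the modders' actual build process:

[CODE]
/* Build the same loop both ways and time them:
 *   gcc -m32 -O2 -mfpmath=387        dot.c -o dot_x87
 *   gcc -m32 -O2 -msse2 -mfpmath=sse dot.c -o dot_sse
 * The x87 build keeps values on the 80-bit x87 register stack; the SSE
 * build uses flat XMM registers, which compilers optimise far better. */
#include <stdio.h>

int main(void) {
    float a[1024], b[1024], sum = 0.0f;
    for (int i = 0; i < 1024; i++) { a[i] = i * 0.5f; b[i] = i * 0.25f; }
    for (int r = 0; r < 100000; r++)       /* repeat to make it timeable */
        for (int i = 0; i < 1024; i++)
            sum += a[i] * b[i];
    printf("%f\n", sum);
    return 0;
}
[/CODE]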
 

Skyboost! Yeah, having some neural net would be fantastic for running NPC AI.
 
Insurgency, an old game from New World Interactive, a tiny indie dev, scales infinitely better than this.

There are no excuses for this; it's really bad, and it shouldn't be blamed on DX11. There are a huge number of DX11 games far more complex than this which don't choke the GPU at 10% CPU load.

You said it yourself, CAT: this will not fly on consoles :) They will have to hire someone who knows how to code, because using 10-20% of a console's CPU before performance chokes will not work. This game will not run on the PS5 or Xbox Series X at the required minimum 60Hz - not even close; it would be lucky to get more than 30.
 
This is what ED Odyssey failed to do.

The ships have no interiors, so there are no physics grids within physics grids to worry about.
There is a disguised load screen transitioning from space to planet surface - another reason you can't have ship interiors in Odyssey: you can't have your mate walking about in your ship while you are loading into the planet-surface instance.
On foot on surfaces you have limited oxygen and none of the planets have an atmosphere, so you can't travel too far from where you landed; this gets around having to render infinite draw distances or load the rest of the planet into memory.
Every surface you go to in Odyssey is a barren moon with crude outpost points of interest; again, this saves on rendering resources.
On getting in the ship you press a button, fade to black and wake up in the cockpit seat - a consequence of not having ship interiors.
No EVA, because there is no world physics grid.

CIG from day 0 wanted to avoid all of that ^^^ and it has taken a long time and a lot of proprietary tech, but they have done it.

All ships have full interiors, some 200+ metres long. No instancing, no load screens, ever, anywhere; fully traversable planets on foot, if you have a few weeks.
Some are larger than Earth (though not the one below - it's much smaller). Still, there is no skybox, no baked lighting, no distant props and textures, because you can always go there and see it: it's always rendered no matter how far away it is - the shading, the lighting, the volumetric clouds - and they didn't skimp on quality to do it.

All on DX11. It's a powerful API if you know how to code and put the time in.

Yes, frame rates are low surface-side here; this was the Ryzen 3600, which is why I upgraded again. And now they've wrapped the whole planet in volumetric clouds..... :o

Watch at least to 11 minutes; this is what I hope Starfield is - competition is good... But this on this engine in the state it's in now..... 1 FP-Hour.

https://youtu.be/gkhoYqs9imw?t=139
 