Starfield CPU performance reviews

Now that a few reviews have come out, I thought it would be useful to have the information in one thread.

CPU benchmarks

1.) PCGamesHardware (05/09/23)


CPUs run at default RAM speeds


[PCGamesHardware CPU benchmark chart]


2.) Hardware Unboxed (05/09/23)


AM5 CPUs (6000MHz DDR5), Intel 12th/13th-gen CPUs (7200MHz DDR5), the rest 3600MHz DDR4.

[Hardware Unboxed CPU benchmark charts]


3.) Gamers Nexus (05/09/23)


6000MHz DDR5 for all supported CPUs and 3200MHz DDR4 for the other CPUs.




[Gamers Nexus CPU benchmark chart]
 
Conclusions


So far, from the initial data, the scaling looks similar to Fallout 4:

Basically, Zen 3 performs around the same as the Intel 10th/11th-gen parts.

The main difference is that the X3D parts don't show as much of an uplift over the normal Zen 3/Zen 4 parts, and memory scaling seems less extreme. On DDR5 platforms, performance does scale with higher frequencies, but not as steeply as in Fallout 4.

BTW, all the charts are here:

So the takeaway notes so far:
1.) Intel does better in the game
2.) Decent-speed RAM is important, but less so than in Fallout 4
3.) Even 4 cores can run the game with reasonable averages, but with poor minimum FPS
4.) 8 cores are needed for good average and minimum FPS, though six cores might suffice
5.) X3D CPUs are better than their normal Zen counterparts, but not by as much as they were in Fallout 4

The CPU benchmarks won't be a worst-case scenario. Once people start building more outposts, expect even worse performance! :(
 
@KompuKare


It appears memory scaling isn't as much of an issue, but Starfield does run better on Intel CPUs. It looks eerily similar to the chart in this thread.
Without someone with the hardware, time, and skill to fully trace and debug this, we will probably never know. AMD engineers could probably have done so, but sponsorship doesn't seem to mean they do anything that deep!

Could be anything:
a compiler flag - though Bethesda no longer leave things in x87 fallback mode!
Maybe even a single instruction Intel does better with, and which is used a lot by, e.g., the Papyrus runtime.
Or some branch where Intel's branch predictor does better.

Just hope that whatever it is doesn't end up getting "fixed" to prevent some speculative execution exploit!
 

It does seem there is some RAM scaling. Also, the performance profile is similar to Fallout 4, except the X3D CPUs do relatively worse now.

Edit!!

The Ryzen 7 5800X3D is still 15% to 20% faster than the Ryzen 7 5800X.
 
Seems to me like it just loves single-core performance, hence the 13900K is well ahead in all the benches.

Certainly looks like RAM speed has an impact, but not as significant as expected.

Clearly doesn't give a **** about threads/cores that much, looking at the 7600 - 7950 spread...
 

It shows similar scaling to the results in the Fallout 4 thread, except:
1.) Less extreme RAM scaling
2.) X3D CPUs have less of a performance uplift
3.) 8 cores have an effect, i.e., better 1% lows, but six cores might be enough
 
Good work @CAT-THE-FIFTH with the thread. :)

The biggest mistake reviewers made was not using the fastest GPU to test the game. In my testing the tuned XTX is faster than my tuned 4090, even with the 4090 running a 660W BIOS (XTX locked to 464W) and the 4090 clocked to 3.125GHz on the core.

Here's the XTX running 1080p High settings in various areas throughout the world.

I also downloaded the PCGamesHardware save-game file and took a screenshot right at the spot where they start benchmarking. I used 720p, lowest settings, but with crowd density on high and FSR on at 50%. The game looks like trash, but it's about as heavy as you can get on the CPU - just look at the CPU power draw and utilisation on my 7950X3D. It's almost double the power draw of most other games. :cry: @Poneros
[Screenshot: 7950X3D power draw and CPU utilisation at the PCGamesHardware benchmark spot]
 
It's interesting because it seems to like spilling out slightly onto the second CCD of the 7950X3D. That's potentially why it's performing slightly worse than the 7800X3D, I guess - might need to just tell it not to spill out.

Either way, I have a 3080, so I'm getting severely GPU-bound before my CPU is an issue, lol.
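
If anyone does want to try pinning it, here's a minimal sketch using Python's psutil - nothing the game itself provides, and the assumption that the V-Cache CCD is logical CPUs 0-15 on a 7950X3D should be checked against your own topology first:

```python
import psutil

# Assumption: on a 7950X3D the V-Cache CCD is logical CPUs 0-15.
# Verify with Task Manager / HWiNFO before relying on this.
VCACHE_CPUS = list(range(16))

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == "Starfield.exe":
        proc.cpu_affinity(VCACHE_CPUS)  # stop threads spilling onto CCD1
        print(f"Pinned PID {proc.pid} to CPUs {VCACHE_CPUS}")
```

Process Lasso or Task Manager's affinity dialog does the same thing without any code.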
 
Even the 8700K is giving a spanking to many of the chips there. I knew it was at least similar to, if not better than, Zen 2 in games, but I didn't expect it to do quite as well as that today.

If you followed the Fallout 4 benchmark thread I created, it's not as surprising as you would think:

[Fallout 4 CPU benchmark chart]


A tweaked Core i9 9900K beat almost all the Zen 3 CPUs on the list. I am more surprised by how "badly" the X3D chips do.
 
BTW, if anyone sees more CPU benchmarks, please put them in here. At some point maybe @KompuKare and I will attempt to make a community benchmark. But I first need to play the game and see what others find out about the most CPU-intensive areas.

I expect user-made outposts might end up being that situation, just like in Fallout 4.
 
The exact spot that PCGamesHardware shared with their save-game file is ideal if you decide to use that. It's extremely CPU-heavy, and as long as the user doesn't move the mouse, it should be fit for purpose.
 
In our previous benchmark thread we used an ENB profiler to normalise draw calls, but we can use that save file.
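
If we standardise on that save, comparing runs just needs a frame-time log (PresentMon, CapFrameX, etc.) and a bit of maths for the average and 1% low FPS. A rough sketch, assuming a CSV with PresentMon's MsBetweenPresents column and taking "1% low" as the average FPS over the slowest 1% of frames:

```python
import csv

def fps_stats(csv_path):
    """Average and 1% low FPS from a PresentMon-style frame-time log."""
    with open(csv_path, newline="") as f:
        frametimes_ms = [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]

    avg_fps = 1000.0 / (sum(frametimes_ms) / len(frametimes_ms))

    # 1% low: average FPS across the slowest 1% of frames (definitions vary).
    slowest = sorted(frametimes_ms, reverse=True)[: max(1, len(frametimes_ms) // 100)]
    low_1pct_fps = 1000.0 / (sum(slowest) / len(slowest))
    return avg_fps, low_1pct_fps

avg, low = fps_stats("starfield_run.csv")  # hypothetical log file name
print(f"Average: {avg:.1f} FPS, 1% low: {low:.1f} FPS")
```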
 
That PCGH save might indeed do as a quick test that anyone could run, almost without needing their own save game.

However, as long as the stock game comes with outposts, it makes more sense to take one, go to the console to spawn enough resources to build big outposts, cram or herd as many NPCs in there as possible, and offer that as a save game which people can verify against.

With Fallout 4 that would have been harder, as it required DLCs. And for a heavily modded Skyrim it would have been far harder, even in the age of Wabbajack modlists.
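
For anyone building such a save, the usual Creation Engine console commands should cover it - a sketch only, with placeholder FormIDs that you'd look up in-game first:

```
help aluminum 4
player.additem <resource_formid> 500
player.placeatme <npc_formid> 20
```

help searches for the FormID, player.additem grants the building materials, and player.placeatme spawns NPCs at your position to load up the CPU.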
 
Is the 13100's near-20% lead over the 5600X in HUB's testing mostly down to its DDR5?

Judging by the memory test on Raptor Lake in their video, the memory difference would likely only account for about 10%.

It is far more likely down to the fact that it is faster on a single core/thread than the 5600X.

It doesn't seem to care about cores/threads as much as per-core performance (look at the 7xxx-series spread).
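
As a rough sanity check on that split - assuming the gains multiply rather than add, and taking HUB's ~10% memory-scaling figure as a given:

```python
total_gain = 1.20   # assumed: 13100 vs 5600X overall, ~20% from HUB's chart
memory_gain = 1.10  # assumed: ~10% from the Raptor Lake memory-scaling test

# Whatever remains after factoring out memory is down to the core itself.
core_gain = total_gain / memory_gain
print(f"Residual per-core advantage: ~{(core_gain - 1) * 100:.0f}%")  # ~9%
```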
 