Need some help identifying where a bottleneck (if any) exists

As per the title, I think I am running into a bottleneck in my system, and I need some help identifying where it is occurring, if it exists at all.

I have done a fair bit of research on the subject, so I believe I have a fairly decent understanding of the principles in question. However, the results I have obtained seem to contradict what I would expect to see, which is why I am here for some third-party input.

So, first of all, the computer specs...

MSI X99S SLI Plus Motherboard
Intel Core i7-5930K OC'd to 3.9 GHz
16GB 2133 MHz DDR4 - Stock
GTX 1080 - Stock speeds

The reason I suspect there may be a bottleneck is that I play a lot of Satisfactory, and I have noticed it gets very chuggy in situations where I would expect the CPU to be stressed. Now, I know that Satisfactory is in Early Access and in no way optimised, so poor performance could well be expected, but it has made me curious.

As such, I have done a bunch of benches in Shadow of the Tomb Raider, as it provides some useful data about the system's performance during the benchmarks. I did 3 benches, all at the same settings with V-sync and G-sync off, at 3 resolutions, to try to ascertain where any bottleneck occurs. Please see the links below...

1080p - https://www.dropbox.com/s/poluvonyyhbklat/SOTTR 1080p (no G-sync).png?dl=0
1440p - https://www.dropbox.com/s/xda9v1w445hvxyo/SOTTR 1440p (no G-sync).png?dl=0
2160p - https://www.dropbox.com/s/5sdskkag4ckqopx/SOTTR 2160p (no G-sync).png?dl=0
Settings - https://www.dropbox.com/s/gsh4td2avjiq7yy/SOTTR Settings.png?dl=0

I ran in DX11, as DX12 crashed whenever I attempted to load the game. This, apparently, is a known issue.

At 1080p, it seems clear that the CPU is the limit, which makes sense. The thing I find odd is that, during the benchmark, the CPU doesn't max out in Task Manager. The highest I saw it hit was about 80% on a few cores, and even that wasn't consistent. If it were truly CPU limited, I would expect to see the cores pegged at 95-100% the whole time.

At 1440p, the results indicate that the system is 18% GPU limited, which makes sense, as towards the end of the bench the frame rate still appears to be bound by the CPU rather than the GPU. This increase in GPU limitation is what I would expect to see as the resolution goes up. GPU utilisation was hitting 95% according to Task Manager, which indicates the GPU was being fully used.

However, at 4K, things get odd. The bench indicates there is no GPU limitation, even though CPU utilisation is nowhere near as high as GPU utilisation, which would say to me it is entirely GPU bound? GPU utilisation was also hitting 100% in Task Manager, which is what I would expect.
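For what it's worth, here is the rough logic I am applying, as a minimal Python sketch; the fps values are placeholders for illustration, not my actual benchmark numbers:

```python
# Rough sketch of the resolution-sweep logic: if average fps barely moves as
# the resolution rises, the CPU is the ceiling; if fps falls roughly in step
# with the pixel count, the GPU is. The fps values below are placeholders.

def classify_bottleneck(fps_by_res: dict[str, float], tolerance: float = 0.10) -> str:
    """fps_by_res maps resolution label -> average fps, lowest resolution first."""
    fps = list(fps_by_res.values())
    drop = (fps[0] - fps[-1]) / fps[0]  # relative fps lost from lowest to highest res
    if drop < tolerance:
        return "CPU-bound (fps roughly flat across resolutions)"
    return "GPU-bound (fps falls as resolution rises)"

# Placeholder numbers for illustration only:
print(classify_bottleneck({"1080p": 120.0, "1440p": 95.0, "2160p": 55.0}))
```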

As I mentioned before, in none of these tests did I see the CPU really get hammered. Only at 1080p did it approach anything close to what I would call "heavy" utilisation. As such, I would say it is a GFX bottleneck (I tend to prefer higher resolutions over maxed-out FPS, especially as my G-sync monitor is an older one and only works up to 60 fps), but I am not sure based on these results.

Please throw your advice into the ring, as I am perplexed by these results, and it is quite important I get this right, as I will probably upgrade the guilty party once I can identify what the issue is.

If you need any more information from me, then please ask, and if I can provide it, I certainly will.

Thanks for your time :)
 
At 1080p, it seems clear that the CPU is the limit, which makes sense. The thing I find odd is that, during the benchmark, the CPU doesn't max out in Task Manager. The highest I saw it hit was about 80% on a few cores, and even that wasn't consistent. If it were truly CPU limited, I would expect to see the cores pegged at 95-100% the whole time.
The problem is that, unlike synthetic benchmarks or workloads like video encoding/3D rendering, games have lots of momentary load variation.
So even if the longer-term averaged/"smoothed" usage shows spare room, momentary load spikes can still reach 100%.
That can cause hiccups/stutter, which doesn't show up in average fps.
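As a quick illustration of how a hitch hides in an average, here is a minimal Python sketch with made-up frame times:

```python
import statistics

# 99 smooth 60 fps frames (16.7 ms each) plus one 120 ms hitch: the mean
# barely moves, but the worst 1% of frames tells the real story.
# All numbers are invented for illustration.
frame_times_ms = [16.7] * 99 + [120.0]

avg_fps = 1000.0 / statistics.mean(frame_times_ms)
one_percent_low_fps = 1000.0 / max(frame_times_ms)  # fps during the worst 1% of frames

print(f"average fps: {avg_fps:.1f}")           # ~56.4 - looks fine on paper
print(f"1% low fps:  {one_percent_low_fps:.1f}")  # ~8.3 - the visible stutter
```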

The same really applies to GPU load.
For example, in an fps game an explosion momentarily stresses both the CPU and GPU a lot more than the average.

Also, while that CPU still has a very decent 6 cores/12 threads, the Meltdown/Spectre "family" of security patches has caused a bigger performance penalty on pre-Skylake CPUs.
So especially if you have lots of stuff running in the background, that could be part of the issue.
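If you want to catch the spikes that Task Manager smooths away, polling per-core load at a faster rate shows them. A minimal sketch, assuming the third-party psutil package is installed:

```python
import psutil  # assumes `pip install psutil`

# Task Manager averages over roughly a second, so a brief 100% spike on one
# core gets smoothed away. Sampling every 100 ms makes such spikes visible.
for _ in range(100):  # ~10 seconds of sampling
    per_core = psutil.cpu_percent(interval=0.1, percpu=True)
    pegged = [i for i, load in enumerate(per_core) if load >= 95.0]
    if pegged:
        print(f"cores pegged at >=95%: {pegged}  (all cores: {per_core})")
```

Run it in a console while the game or benchmark is going and watch for cores pinning even briefly.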
 
I don't think you can read too much into these tests. I always remember when I moved from a 4790K to an 8700K, people said it would make very little difference. How wrong they were! I remember the game I was playing at the time suffered glitches at specific points, but the CPU was at 40%, so I didn't think it was the culprit. Even though the 8700K didn't improve much over the 4790K when the game was running well, it made a significant difference when the game was struggling; it completely eliminated the glitches I mentioned. To me, the obvious weak point in your system is the CPU, RAM and chipset. But just because the game I was playing improved with a new CPU doesn't mean your game will! The question is whether you would see it as worthwhile to replace all of that, given that you may not see an overall improvement; rather, it just "may" eliminate some glitches in some games.
 
I don't think you can read too much into these tests. I always remember when I moved from a 4790K to an 8700K, people said it would make very little difference. How wrong they were! I remember the game I was playing at the time suffered glitches at specific points, but the CPU was at 40%, so I didn't think it was the culprit. Even though the 8700K didn't improve much over the 4790K when the game was running well, it made a significant difference when the game was struggling; it completely eliminated the glitches I mentioned.
50% more cores and threads helps with background junk.
If the game itself wants more than two cores dedicated to it, a quad core just doesn't have that much spare resource left to run antivirus and other bloated shovelware at the same time as the game.

Of course, if you also increased the amount of memory, that makes a major difference if it was running out earlier.
 
Thanks for the replies, guys.

My research would seem to indicate that the CPU and GPU are pretty well matched (for want of a better word), so any bottleneck will be slim, if it exists at all.

That may explain the results obtained. I understand the points raised regarding the testing mechanism. I did do a bunch of tests using 3DMark, but the issue I have with that is it tends to lean more towards stressing the GPU rather than the CPU. Yes, I know there are CPU tests, but the core of the tests (AFAIK) don't stress both components together.

Indeed, I struggled to max out the CPU in 3DMark as well, even when running at exceedingly low resolutions (480p). Once again, it could be an issue with the test, rather than the rig, that caused these particular results.

I am a little torn as to how to proceed. I was thinking about jumping on the Ryzen 3xxx bandwagon when they are released; I think it depends on the pricing and performance when they finally launch. It may well be cheaper than an upgrade to a 2080 Ti, which was my preferred GFX upgrade path.

Having said that, I don't know if the upgrade in CPU, RAM and mobo would make a massive improvement, especially as I tend to use higher resolutions (I am not a competitive player and, as I have mentioned, I am usually perfectly happy at lower frame rates; plus, my existing monitor only syncs up to 60 Hz, so I see no need to push past that).

I am currently leaning towards the GFX upgrade, as it would be easier. Not necessarily cheaper, thanks to the ridiculous nVidia tax, but it would produce a bigger improvement than a CPU upgrade.

As always, lean in with your thoughts if you think I may be wrong. I will take whatever advice I can get at this stage!
 