
CPU Longevity?

That's exactly the reason for it. It's not choked like the i7. On game engines that can utilise more threads, the higher core count pulls away despite the clock-speed disadvantage.
The games that Ryzen doesn't do so well in are older titles whose engines are not multicore-aware.
Here's another:
[benchmark screenshot]
Yeah, but why specifically are Ryzen's minimum FPS so much better? The max FPS can be on par with Intel or beaten by it, but from what I've seen Intel can't touch Ryzen's minimum numbers in any game, old or new.
 
Here's another:
[benchmark screenshot]

https://www.youtube.com/watch?v=cbm0nedUuu8

Feel free to dispute these as much as you like; the truth is that it's happening. More cores are the future.
Cool. Makes me even happier now that I didn't bite earlier in the year on an Intel setup. Personally, a gazillion FPS at the upper end has never mattered much to me. Anything above 50-60 is pretty much fine; it's the lower end, namely the minimums, that interests me more, for that smoother experience.
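For what it's worth, the "minimums" reviewers quote are usually a 1% low rather than the single worst frame. Here's a rough sketch of how that's typically computed (the function name and sample data are mine, not from any particular tool):

```python
def one_percent_low(fps_samples):
    """Average of the slowest 1% of frames (at least one frame)."""
    slowest = sorted(fps_samples)
    k = max(1, len(slowest) // 100)
    return sum(slowest[:k]) / k

# One bad frame drags the 1% low far below the average:
samples = [60, 58, 61, 59, 30, 62, 57, 60, 59, 61]
print(one_percent_low(samples))  # 30.0
```

That's why a CPU can post better minimums even while losing on average FPS: this metric is dominated by the worst frames, not the typical ones.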
 
So basically this thread is just a bunch of cherry-picked benchmarks to prove a point either way. Everyone was complaining that void was doing it, but what has been done since is the same thing. I mean, I guess that shows it's a more even fight than some suggest (albeit Ryzen is far more future-proof). A single frame grab of where Ryzen or Intel is ahead proves nothing, and even individual benchmark scores are useless without a lot of context about how the tests were run.
 
So basically this thread is just a bunch of cherry-picked benchmarks to prove a point either way. Everyone was complaining that void was doing it, but what has been done since is the same thing. I mean, I guess that shows it's a more even fight than some suggest (albeit Ryzen is far more future-proof). A single frame grab of where Ryzen or Intel is ahead proves nothing, and even individual benchmark scores are useless without a lot of context about how the tests were run.

Watch the videos. Some even show real-time footage with Afterburner running. You said yourself Ryzen is more future-proof, which was the whole point of this thread.
 
So basically this thread is just a bunch of cherry-picked benchmarks to prove a point either way.

This is all anybody in here is doing. For example that screenshot gavinh87 posted above. Watch the video and you can see the framerate in both systems is highly variable with both processors being the fastest at different points in time. He just chose to share that screen grab because it supports his narrative, not because it's necessarily representative.

Frankly I'm getting rather tired of this stuff on these forums. I come here to learn new things about technology and how it works. I asked a question on page 4 hoping that somebody could explain why some benchmark results didn't make sense to me. That was ignored and instead there have been ~10 pages of people bitching at each other. Nowadays these forums feel only marginally more useful than the YouTube comments section.
 
In several of those benchmarks no cores on the Ryzen chip are hitting 100% and the GPU is also nowhere near 100% usage. So what is likely the bottleneck in these situations? Is it RAM or storage bandwidth / latency? Is there too little CPU cache or is it too slow? Is the "usage" measure simply inaccurate? (I imagine it's a gross simplification of what is going on inside the CPU / GPU). Is a frame limiter being hit? I'd be really interested if anybody could offer some insight.

To answer your question: the Ryzen was bottlenecked by single-thread performance (for the master thread). You don't see 100% on any of its cores because SMT was enabled.
 
If any single core hits 100% you know you have a CPU bottleneck, but as voidshatter says, that's not the only indicator. I see occasional drops below 99% GPU usage on my system, but I never see any core hit 100%.
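Putting the rules of thumb from this thread together (a pegged core, a pegged GPU, or neither), a toy heuristic might look like this; the 99% thresholds are my own arbitrary choice:

```python
def likely_bottleneck(per_core_usage, gpu_usage):
    # A pegged core suggests a CPU (single-thread) limit; a pegged
    # GPU suggests a GPU limit; anything else is ambiguous (sync
    # overhead, memory latency, a frame cap, or SMT spreading the
    # master thread across logical cores).
    if max(per_core_usage) >= 99:
        return "cpu"
    if gpu_usage >= 99:
        return "gpu"
    return "ambiguous"

print(likely_bottleneck([42, 35, 28, 30], 99))  # "gpu"
```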
 
To answer your question: the Ryzen was bottlenecked by single-thread performance (for the master thread). You don't see 100% on any of its cores because SMT was enabled.

Thanks for the response! I realised I didn't understand SMT/HT that well, so I just had a quick read of a layman's explanation again.

Still, it does seem rather odd: often the threads are at really low usage levels. This is especially the case in the screenshot below. The highest usage on any thread is 42% on the 6800K and 53% on the 1800X. If you are right, then I take it the "Usage" measurement is doing a bad job of telling the user what is going on in the CPU. Honestly, I still don't know what "usage" is even measuring.

Also another question: How are threads named within Windows? Are CPU1 and CPU2 both threads on the first core, CPU3 & CPU4 on the second and so on?

[screenshot: per-thread CPU usage on the 6800K and 1800X]
 
Remember that in multithreaded applications the threads need to sync up all the time, and this adds overhead. You may never see any thread run at 100% because it's waiting for others to complete, for example. However, your example just looks GPU-bottlenecked. At resolutions above 1080p the CPU matters less and less (as long as it's not ancient, of course), so it's not surprising that the two CPUs featured produce near-identical FPS figures.
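That sync-overhead point is basically Amdahl's law: the serial part of a frame (e.g. the master thread) caps how much extra cores can help. A quick illustration, with a made-up 50/50 serial/parallel split:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Upper bound on speedup when (1 - parallel_fraction) of the
    frame's work is serial, e.g. stuck on a master thread."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# If half of each frame is serial, even 8 cores can't reach 2x:
print(round(amdahl_speedup(0.5, 8), 2))  # 1.78
```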

Yes, Windows typically groups virtual cores together so in those screenshots CPU1 and CPU2 are the two logical cores on the first physical core.
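Assuming that adjacent grouping (the usual Windows layout, though some platforms enumerate all physical cores first), the mapping is just integer division, sketched here with names of my own:

```python
def physical_core(logical_index, smt_ways=2):
    # 0-based: logical CPUs 0 and 1 share physical core 0,
    # logical CPUs 2 and 3 share physical core 1, and so on.
    return logical_index // smt_ways

print(physical_core(3))  # 1
```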
 
Thanks for the response! I realised I didn't understand SMT/HT that well, so I just had a quick read of a layman's explanation again.

Still, it does seem rather odd: often the threads are at really low usage levels. This is especially the case in the screenshot below. The highest usage on any thread is 42% on the 6800K and 53% on the 1800X. If you are right, then I take it the "Usage" measurement is doing a bad job of telling the user what is going on in the CPU. Honestly, I still don't know what "usage" is even measuring.

Also another question: How are threads named within Windows? Are CPU1 and CPU2 both threads on the first core, CPU3 & CPU4 on the second and so on?

Yes, CPU 1 and 2 (or 0 and 1 in Windows Task Manager) would correspond to the first physical core, and so on.

When you run a single thread on a multi-core CPU, especially with HT/SMT enabled, then unless you set affinity you'll see the 100% usage of one core evenly split among all logical cores (in the 4C8T case, each logical core will oscillate around 12.5%). You'll need to force affinity to get one logical core at 100% while the others sit at 0%.

[screenshots: per-core usage with and without affinity forced]
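The evenly-split effect described above is easy to model. This is a naive sketch (my own, ignoring all scheduler detail) of what a usage graph shows when busy threads are bounced across every logical core:

```python
def apparent_per_core_usage(busy_threads, logical_cores):
    # Naive model: the scheduler bounces each 100%-busy thread
    # evenly across every logical core, so usage graphs show the
    # load smeared out rather than one core pegged.
    return 100.0 * busy_threads / logical_cores

print(apparent_per_core_usage(1, 8))  # 12.5 -- the 4C8T case above
```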
 
As for the Ryzen screenshot case: if you can manage to set the affinity of the master thread to a single logical core, then you'll see a 100% measurement on that logical core. Without manually setting affinity, it is just randomly split among all logical cores without any at 100%. Either way, the master thread is bottlenecked by single-thread performance.
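If you want to try forcing affinity yourself, note that the snippet below uses the Linux-only stdlib call; on Windows the equivalent is Task Manager's "Set affinity" dialog or psutil's Process.cpu_affinity():

```python
import os

# Pin the current process to logical CPU 0 (Linux-only stdlib API;
# on Windows use Task Manager or psutil.Process().cpu_affinity([0])).
os.sched_setaffinity(0, {0})
print(os.sched_getaffinity(0))  # {0}
```

With the busy thread pinned like this, you'd see that one logical core pegged at 100% instead of the load being smeared across all of them.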
 
In several of those benchmarks no cores on the Ryzen chip are hitting 100% and the GPU is also nowhere near 100% usage. So what is likely the bottleneck in these situations? Is it RAM or storage bandwidth / latency? Is there too little CPU cache or is it too slow? Is the "usage" measure simply inaccurate? (I imagine it's a gross simplification of what is going on inside the CPU / GPU). Is a frame limiter being hit? I'd be really interested if anybody could offer some insight.

Sorry for the late reply; I didn't see this (quoted above) until much later.

There are 3 screenshots, and in all of them at least one thread on the Intel side is at 100%. As has been explained, there is a master thread: at least as far as nVidia's drivers go, one thread is always used to prioritise the others, which puts extra load on that one thread, the so-called 'master thread'.

Now, when you look at those images, all threads are over 90%. What that means is the scheduler can no longer find anywhere to divide up the workload: if it moves work from the thread at 100% onto the one at 94%, that one just ends up at 100%, so you have the same problem. It's probably using more than the spare 6% to schedule threads anyway, so it really has nowhere to go; ergo the Intel 4-core on the right is limited by how many compute threads it has.

As for the Ryzen threads not hitting 100% and yet still bottlenecking the GPU, that is thread balancing in Windows. It divides the workload among compute threads, but it doesn't and can't make those extra threads available to the GPU's command processor; only the driver can do that, and only to a limited extent. Windows is simply dividing the workload up where it can so there is less stress on individual CPU threads.

So at the limit, using Metro Last Light as an example, nVidia's driver is able to schedule enough threads for the CPU to push 160 FPS, given the CPU has enough threads to do that; where it doesn't, the GPU is more bottlenecked, as with the 4-thread i5 at 100 FPS.

Given the 60% performance difference and the number of compute threads each CPU has, it looks to me like Metro Last Light is using 8 compute threads; but because the Ryzen 1600 has 12 compute threads, Windows is also able to divide that work up to reduce the stress on individual threads, which would result in smoother gameplay, something else a lot of reviewers are reporting even where Intel is faster in raw FPS terms.

Hope that explains it sufficiently :)
 
These are not max settings. For example, the second one you linked reads "IQ HIGH, SSAA OFF, Normal Tessellation", etc.


So is the one that you posted...? "High detail, normal tessellation." But somehow it's at 30 FPS compared to all the others at 60+. SSAA is supersampling. However, the tomshardware one IS at max settings, which only turns tessellation from normal to high anyway.
 
So is the one that you posted...? "High detail, normal tessellation." But somehow it's at 30 FPS compared to all the others at 60+. SSAA is supersampling. However, the tomshardware one IS at max settings, which only turns tessellation from normal to high anyway.

You should really try the game yourself before commenting on what really counts as "max settings".
 