Nvidia Has a Driver Overhead Problem, GeForce vs Radeon on Low-End CPUs

Associate
Joined
31 Dec 2010
Posts
2,460
Location
Sussex
When Ryzen first came out there were a lot of comments about how something like the 1600 made games feel a lot smoother, with less stutter. I wonder if this Nvidia software scheduling played a part in that even back then.
Another possibility is of course just other non-gaming background tasks, which choke quad cores.
Which reminds me, are there any reviewers who don't use clean Windows installs, to try and replicate typical users' systems?
Possibly a hard ask, as Windows installs and background tasks vary so much.
 
Soldato
Joined
6 Feb 2019
Posts
17,674
Another possibility is of course just other non-gaming background tasks, which choke quad cores.
Which reminds me, are there any reviewers who don't use clean Windows installs, to try and replicate typical users' systems?
Possibly a hard ask, as Windows installs and background tasks vary so much.

Benchmark machines will be clean installs, or lean installs with no background junk running.
 
Permabanned
Joined
21 Feb 2021
Posts
474
Very interesting findings, these, and they do change the landscape somewhat. Having watched videos on this and followed the thread here etc., I find one thing a little lacking in clarity: defining what a "low-end CPU" is these days. It's referred to a lot throughout the videos from HWUB and co, where they say the system needs to be a good balance, and to keep in mind that if you have a lower-end CPU you may look more towards Radeon-based GPUs. Fine... but define "low end".

1: I assume straight away we are saying that almost anything with 4 cores is "low end" and now a bottleneck?
2: What about Ryzen 5 CPUs like the 1600AF? This has 6 cores and is hardly ancient! Is this "low end", and will it cause a bottleneck?
3: What about Ryzen 7 8-core CPUs like the 3700X? What if I assign only 6 of the cores (some people run virtualised gaming VMs with only 4 or 6 cores assigned)?
4: Does it depend more on clock speeds, or cores, or both?

It seems crazy to be talking about "older" CPUs that are barely 1-2 years old.
Well, things have changed, a lot. What is the actual difference between a 2600K and a 7700K across the six years from 2011 to 2017? Roughly 30-35%, and even less if they're overclocked to the same frequency. Most of that difference comes from the increased bandwidth of DDR4 RAM.

In just three years, Zen and Zen+ CPUs became 'low-endish' because they're so slow compared to the new ones.

Imagine this: a 5600X will easily surpass a 2700X by a minimum of 50%, and sometimes it can actually beat the 2700X by a whopping 80%. I won't even talk about the 1600AF; I'm pretty sure a 5600X will easily double what the 1600AF renders in games.

When you look at it from this perspective, it's clear that Zen/Zen+ have practically become the modern "FX"-like CPUs. Remember that the FX chips were beaten by 40-70% by their Intel counterparts, and they were considered really bad for gaming, often not being able to hit a locked 60 FPS and sometimes even dropping below 30 FPS in worst-case situations.

To be honest, the 1600AF and the Zen and Zen+ CPUs already had lackluster performance compared to their Intel counterparts. You can easily find benchmarks where an 8700K surpasses a 2700X by 20-40% margins. They were already much slower than the industry standard Intel set in 2017-2019.

And now that AMD has managed to solve the problems with its own Zen architecture, we have the 5000 series, which is 70-80% faster. This might be the first time since the mid-2000s that we've seen a 50-70% single-core/IPC uplift in a CPU architecture within a timespan of just three years. But there's a catch, of course, and that catch is that Zen/Zen+ was so inferior to Intel that AMD had to engineer amazing IPC uplifts. The result is that the 1600AF is slower than a 5600X by an 80% margin, and you never want your CPU to be that far behind the fastest CPUs.
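Just to be clear what I mean with these percentages, since "X% faster" and "X% slower" are different baselines. A quick sketch with made-up fps numbers (purely illustrative, not benchmarks):

```python
# Made-up fps figures, purely to illustrate the percentage conventions
# used above -- not benchmark results.
fps_5600x = 180.0   # hypothetical
fps_1600af = 100.0  # hypothetical

# "X% faster": ratio of the faster chip to the slower one, minus 1.
uplift = fps_5600x / fps_1600af - 1
print(f"5600X is {uplift:.0%} faster than the 1600AF")    # 80% faster

# That is NOT the same as "80% slower", which would leave only 20%
# of the performance (36 fps here). The correct deficit is:
deficit = 1 - fps_1600af / fps_5600x
print(f"1600AF is {deficit:.0%} slower than the 5600X")   # ~44% slower
```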

They may be three years old, but since they were getting beaten by a 4790K in certain cases, you may as well consider them 7-8 years old, because that's the performance level they will get you.

I'm not an AMD hater, please don't misunderstand me. I understand perfectly, and I'm frustrated myself at having a Zen+ CPU, but I make do with what I have and I'm somewhat content, for now. At least AMD graced the B450 chipset with a great, viable upgrade path, but I need those 5000-series chips at a cheaper price. The current pricing feels a bit scummy.

Now imagine a developer targeting 60 FPS for 5000-series IPC.

And even more: Zen 4 will supposedly bring another 25-30% IPC uplift, along with 10% higher clocks? That's insane.
 
Soldato
Joined
6 Feb 2019
Posts
17,674
Yep, the 1600 may be 3 years old, but its performance is 7 years old, and that's all that matters. And the 1600 was never a good CPU; its gaming performance was already terrible when it launched, so I'm really surprised that people complain they don't get top performance out of graphics cards three generations newer.
 
Caporegime
Joined
4 Jun 2009
Posts
31,156
Ryzen 1xxx CPUs were never really that great for gaming in the first place either; their IPC was poor.

Personally I would class anything around/below a Ryzen 2600 or i7 8700K as old and on the verge of needing an upgrade, but only IF you're not playing at 4K.
 
Soldato
Joined
10 Jul 2008
Posts
7,792
Wow. This is really bad news for PC gamers. They are being priced out. It's bad enough finding and affording a GPU, let alone then realising the upgrade did nothing because their perfectly good CPU is in fact not perfectly good any more, and is in fact inadequate.
 
Soldato
Joined
26 Aug 2004
Posts
5,035
Location
South Wales
Ryzen 1xxx CPUs were never really that great for gaming in the first place either; their IPC was poor.

Personally I would class anything around/below a Ryzen 2600 or i7 8700K as old and on the verge of needing an upgrade, but only IF you're not playing at 4K.
Yeah, I remember seeing the first Zen gaming benchmarks vs Intel and being disappointed. Luckily my X99 board decided to give up right around the time Zen 2 came out; those chips were decent, but still beaten by Intel in games at the time, just not by as much.
 
Man of Honour
Joined
12 Jul 2005
Posts
20,559
Location
Aberlour, NE Scotland
I haven't read all the way through this thread, but does this overhead also apply to stripped-down drivers? That would indicate it's all the bloatware that's causing it. I only install the clean drivers, and even then I remove the audio driver, so I only install the actual GPU driver and PhysX.
 
Permabanned
Joined
21 Feb 2021
Posts
474
Ryzen 1xxx CPUs were never really that great for gaming in the first place either; their IPC was poor.

Personally I would class anything around/below a Ryzen 2600 or i7 8700K as old and on the verge of needing an upgrade, but only IF you're not playing at 4K.
Nah, the 2600 can still be considered bad; it can be up to 40% slower than a lightly OC'd 8700K. The 8700K can practically still beat a 3600, actually, which is a bit funny and shows that anything below AMD's 5000 series wasn't really competitive.
Wow. This is really bad news for PC gamers. They are being priced out. It's bad enough finding and affording a GPU, let alone then realising the upgrade did nothing because their perfectly good CPU is in fact not perfectly good any more, and is in fact inadequate.
Well, they can choose to go with an AMD GPU.
I haven't read all the way through this thread, but does this overhead also apply to stripped-down drivers? That would indicate it's all the bloatware that's causing it. I only install the clean drivers, and even then I remove the audio driver, so I only install the actual GPU driver and PhysX.
I don't think driver bloat would result in a 25% CPU overhead in Fortnite.

It's simply that way because of how Nvidia approached this scheduling.
 
Associate
Joined
20 Aug 2020
Posts
2,041
Location
South Wales
Wonder what difference it would make, if any, with Windows hardware GPU scheduling turned on, as I think for Nvidia only Pascal onwards is supported.

Wonder if it would help the slower CPUs with Nvidia cards.
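If anyone wants to check whether it's actually on without digging through the Settings app, here's a read-only sketch. It assumes the commonly reported HwSchMode registry value (2 = on, 1 = off), which I haven't verified on every Windows build:

```python
# Read-only check of Windows hardware-accelerated GPU scheduling (HAGS).
# Assumes the commonly reported "HwSchMode" registry value: 2 = enabled,
# 1 = disabled; it may be absent on builds/GPUs without HAGS support.
import winreg  # Windows only

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\GraphicsDrivers"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        value, _ = winreg.QueryValueEx(key, "HwSchMode")
    state = "enabled" if value == 2 else "disabled"
    print(f"HAGS is {state} (HwSchMode={value})")
except FileNotFoundError:
    print("HwSchMode not present -- HAGS unsupported or never toggled")
```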
 
Caporegime
Joined
4 Jun 2009
Posts
31,156
Wonder what difference it would make, if any, with Windows hardware GPU scheduling turned on, as I think for Nvidia only Pascal onwards is supported.

I've read it makes little to no difference; it's probably worse having it on, as it can apparently cause some issues in games, so most people said it's better left off.

Nah, the 2600 can still be considered bad; it can be up to 40% slower than a lightly OC'd 8700K. The 8700K can practically still beat a 3600, actually, which is a bit funny and shows that anything below AMD's 5000 series wasn't really competitive.

Depends on the scenario the 2600 is being used in though, i.e. mine is OC'd to 4.1GHz and it's literally only Cyberpunk where it has noticeably bottlenecked my 3080 (mostly in the 0.1% and 1% lows), but as said, I game at either 4K/60 or 3440x1440/144fps with mostly max settings.

As for the non-bloated Nvidia drivers, personally I found they helped ever so slightly by lowering CPU usage in Cyberpunk, but nothing to shout from the rooftops about.
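For anyone wanting to sanity-check their own 0.1%/1% lows, here's a rough sketch of how those figures are usually derived from a frame-time capture. It assumes a PresentMon-style CSV with an MsBetweenPresents column, and note that different tools compute "lows" slightly differently:

```python
# Rough sketch: average fps plus 1% / 0.1% lows from a frame-time capture.
# Assumes a PresentMon-style CSV with an "MsBetweenPresents" column; note
# that different tools compute "lows" in slightly different ways.
import csv

def report(path):
    with open(path, newline="") as f:
        frame_ms = [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]
    frame_ms.sort(reverse=True)  # slowest frames first

    def low_fps(fraction):
        # Average fps across the worst `fraction` of all frames.
        worst = frame_ms[: max(1, int(len(frame_ms) * fraction))]
        return 1000.0 / (sum(worst) / len(worst))

    avg = 1000.0 / (sum(frame_ms) / len(frame_ms))
    print(f"avg {avg:.1f} fps | 1% low {low_fps(0.01):.1f} | 0.1% low {low_fps(0.001):.1f}")

report("capture.csv")  # hypothetical capture file name
```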
 
Permabanned
Joined
21 Feb 2021
Posts
474
I've read it makes little to no difference; it's probably worse having it on, as it can apparently cause some issues in games, so most people said it's better left off.



Depends on the scenario the 2600 is being used in though, i.e. mine is OC'd to 4.1GHz and it's literally only Cyberpunk where it has noticeably bottlenecked my 3080 (mostly in the 0.1% and 1% lows), but as said, I game at either 4K/60 or 3440x1440/144fps with mostly max settings.

As for the non-bloated Nvidia drivers, personally I found they helped ever so slightly by lowering CPU usage in Cyberpunk, but nothing to shout from the rooftops about.
I agree, and that's also me with my 2700X and 3070. The bigger problem is actually the future, not today. I can't envision what kind of optimizations we'll get for PC CPUs now that the consoles have Zen 2 cores clocked at 3.5GHz. It terrifies me a bit, tbh, considering we're struggling to hit 60 FPS in certain games that were designed to hit a rock-solid 30 FPS on the 1.6GHz Jaguar cores in the PS4/Xbone. XD

The amount of CPU performance lost between console ports and PC means that even a 5600X might struggle in future to hold 60 FPS in a game designed to max out the consoles at 60 FPS. That is up to developers, but at some point they won't care about lesser CPUs.

In theory the 3700X (it has 4x the L3 cache and can clock up to 16% higher) should keep pace with the PS5/XSX for the entirety of the generation, but I bet that 4 years later the PS5/XSX will have no trouble hitting a rock-solid 60 FPS in games where, I'm pretty sure, the 3700X will fall flat to 35-45 FPS, maybe even worse. I do wish that it doesn't happen, but with so many bad ports/games/disappointments, that seems like pure fantasy to me. I guess the unified memory bandwidth of the consoles may also be playing a part here; feeding the CPU with huge bandwidth may have performance benefits that cannot be observed on PC. I have no idea why six 1.6GHz Jaguar cores are enough to run Doom Eternal at a locked 60 FPS, while on PC you would need at least a 4GHz 4-core/8-thread chip to ensure you're getting 60 FPS. Consoles really had superior optimization this gen.
 
Soldato
Joined
30 Dec 2013
Posts
6,306
Location
GPS signal not found. (11)
In theory the 3700X (it has 4x the L3 cache and can clock up to 16% higher) should keep pace with the PS5/XSX for the entirety of the generation, but I bet that 4 years later the PS5/XSX will have no trouble hitting a rock-solid 60 FPS in games where, I'm pretty sure, the 3700X will fall flat to 35-45 FPS, maybe even worse.
This will not happen.
 
Soldato
Joined
30 Aug 2014
Posts
5,975
Wonder what difference it would make, if any, with Windows hardware GPU scheduling turned on, as I think for Nvidia only Pascal onwards is supported.

Wonder if it would help the slower CPUs with Nvidia cards.
They tested that in the first video; it made no difference.
 
Soldato
Joined
26 May 2014
Posts
2,959
Consoles really had superior optimization this gen.
People say this every single time there's a new console generation and it initially delivers impressive results. Then a few years down the line PCs will have sailed off into the distance again, as they always do, and the consoles will be dropping resolution and struggling framerate-wise, as they always do. Hell, they struggle with RT enabled even in launch titles, and developers will be shoving that into everything going forwards. Eventually Sony and Microsoft will release new, updated versions to rake in even more money and close the gap a bit, probably with AMD's second-gen RT cores to boost that area in particular.
 
Permabanned
Joined
21 Feb 2021
Posts
474
People say this every single time there's a new console generation and it initially delivers impressive results. Then a few years down the line PCs will have sailed off into the distance again, as they always do, and the consoles will be dropping resolution and struggling framerate-wise, as they always do. Hell, they struggle with RT enabled even in launch titles, and developers will be shoving that into everything going forwards. Eventually Sony and Microsoft will release new, updated versions to rake in even more money and close the gap a bit, probably with AMD's second-gen RT cores to boost that area in particular.
Honestly, Zen 2 at 3.5GHz sets a high standard. You can't compare it to the PS4/Xbone's 1.6GHz Jaguar cores; I do believe developers will have an easier time this generation.
 
Caporegime
Joined
17 Mar 2012
Posts
47,949
Location
ARC-L1, Stanton System
I haven't read all the way through this thread, but does this overhead also apply to stripped-down drivers? That would indicate it's all the bloatware that's causing it. I only install the clean drivers, and even then I remove the audio driver, so I only install the actual GPU driver and PhysX.

No, it's the difference between AMD using a hardware scheduler and Nvidia using a software scheduler; Nvidia is using about 20% of your CPU to do what AMD does on the GPU itself.
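Here's a toy frame-budget model of why that only really bites on slower CPUs. Every millisecond figure in it is made up for illustration, not a measurement:

```python
# Toy frame-budget model of driver scheduling overhead -- every millisecond
# figure below is made up for illustration, not a measurement of real drivers.
# Frame rate is capped by whichever is slower: the CPU (game logic + driver
# work per frame) or the GPU. A driver that schedules on the CPU adds to the
# CPU side of that budget, which only shows up once the CPU is the limit.

def effective_fps(game_ms, driver_ms, gpu_ms):
    cpu_fps = 1000.0 / (game_ms + driver_ms)  # CPU-side ceiling
    gpu_fps = 1000.0 / gpu_ms                 # GPU-side ceiling
    return min(cpu_fps, gpu_fps)

GPU_MS = 8.3  # GPU capable of ~120 fps (assumed)

for label, game_ms in [("fast CPU", 4.0), ("slow CPU", 10.0)]:
    hw = effective_fps(game_ms, driver_ms=0.5, gpu_ms=GPU_MS)  # on-GPU scheduling
    sw = effective_fps(game_ms, driver_ms=2.5, gpu_ms=GPU_MS)  # CPU-side scheduling
    print(f"{label}: {hw:.0f} fps vs {sw:.0f} fps")
# fast CPU: 120 vs 120 (overhead hidden -- GPU-bound either way)
# slow CPU: 95 vs 80  (overhead exposed -- CPU-bound)
```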
 
Caporegime
Joined
17 Mar 2012
Posts
47,949
Location
ARC-L1, Stanton System
Well, things have changed, a lot. What is the actual difference between a 2600K and a 7700K across the six years from 2011 to 2017? Roughly 30-35%, and even less if they're overclocked to the same frequency. Most of that difference comes from the increased bandwidth of DDR4 RAM.

In just three years, Zen and Zen+ CPUs became 'low-endish' because they're so slow compared to the new ones.

Imagine this: a 5600X will easily surpass a 2700X by a minimum of 50%, and sometimes it can actually beat the 2700X by a whopping 80%. I won't even talk about the 1600AF; I'm pretty sure a 5600X will easily double what the 1600AF renders in games.

When you look at it from this perspective, it's clear that Zen/Zen+ have practically become the modern "FX"-like CPUs. Remember that the FX chips were beaten by 40-70% by their Intel counterparts, and they were considered really bad for gaming, often not being able to hit a locked 60 FPS and sometimes even dropping below 30 FPS in worst-case situations.

To be honest, the 1600AF and the Zen and Zen+ CPUs already had lackluster performance compared to their Intel counterparts. You can easily find benchmarks where an 8700K surpasses a 2700X by 20-40% margins. They were already much slower than the industry standard Intel set in 2017-2019.

And now that AMD has managed to solve the problems with its own Zen architecture, we have the 5000 series, which is 70-80% faster. This might be the first time since the mid-2000s that we've seen a 50-70% single-core/IPC uplift in a CPU architecture within a timespan of just three years. But there's a catch, of course, and that catch is that Zen/Zen+ was so inferior to Intel that AMD had to engineer amazing IPC uplifts. The result is that the 1600AF is slower than a 5600X by an 80% margin, and you never want your CPU to be that far behind the fastest CPUs.

They may be three years old, but since they were getting beaten by a 4790K in certain cases, you may as well consider them 7-8 years old, because that's the performance level they will get you.

I'm not an AMD hater, please don't misunderstand me. I understand perfectly, and I'm frustrated myself at having a Zen+ CPU, but I make do with what I have and I'm somewhat content, for now. At least AMD graced the B450 chipset with a great, viable upgrade path, but I need those 5000-series chips at a cheaper price. The current pricing feels a bit scummy.

Now imagine a developer targeting 60 FPS for 5000-series IPC.

And even more: Zen 4 will supposedly bring another 25-30% IPC uplift, along with 10% higher clocks? That's insane.

Even Zen 2 isn't that great. It's not bad, but against its equivalent, the 3600, a 10600K is 20% faster in games.

Zen 3, though, is a monster: 46% ahead of Zen 2, and 22% ahead of the 10600K. Even the 9900K is left standing in its dust.

46% in one generation...

[attached benchmark chart]
 
Permabanned
Joined
21 Feb 2021
Posts
474
Even Zen 2 isn't that great. It's not bad, but against its equivalent, the 3600, a 10600K is 20% faster in games.

Zen 3, though, is a monster: 46% ahead of Zen 2, and 22% ahead of the 10600K. Even the 9900K is left standing in its dust.

46% in one generation...

[attached benchmark chart]
This is just the beginning. I'm sure that in games developed between 2017 and 2020, developers tried to offset the inter-CCX latency present in Zen/Zen+/Zen 2 CPUs.

Zen 3 brings a unified L3 cache; all cores are interconnected this time, so zero, actually zero, inter-CCX latency...

If developers stop caring about spreading thread load evenly between separate CCXs, really troubling times await these CPUs.

So I really advise people to go with Radeon. These CPUs can already be problematic in CPU-bound titles, and Nvidia doesn't help any (this is a funny case of "do as I say, not as I do" :D).
 