Nvidia Has a Driver Overhead Problem, GeForce vs Radeon on Low-End CPUs

From my observations this approach seems to be somewhat sensitive to overall RAM latency/throughput. The quad-channel, higher-than-mainstream-frequency RAM with tuned timings on my setup seems to help offset it a bit versus other platforms of the era which used dual channel but otherwise perform the same or better overall. Possibly ties into the above point about Ryzen memory as well.

There are some advantages to doing it this way, though it can hinder performance on older CPUs.

There are just so many areas we still don't have much data on. I clearly remember there were a couple of small reviewers who tested 16GB vs 32GB on the 2080 Ti, and 2 DIMMs vs 4 DIMMs, and found consistently that 32GB and 4-DIMM kits offered 10% higher minimum frame rates, and that's the particular area where CPU performance and driver overhead would be a problem. I wonder what the outcome would be if we repeated those tests on a 3090?
 
Wanna know what's actually really funny? I just thought about this.

The bazillion laptops out there that use RTX GPUs and a 1080p screen with nerfed Intel CPUs, hahah. Think of all those laptops that would see higher performance if they had a 4K screen or a better CPU.
 
Wanna know what's actually really funny? I just thought about this.

The bazillion laptops out there that use RTX GPUs and a 1080p screen with nerfed Intel CPUs, hahah. Think of all those laptops that would see higher performance if they had a 4K screen or a better CPU.

Couldn't you just increase the resolution of the game? I guess then you would need to find the sweet spot for performance.
 
Wanna know what's actually really funny? I just thought about this.

The bazillion laptops out there that use RTX GPUs and a 1080p screen with nerfed Intel CPUs, hahah. Think of all those laptops that would see higher performance if they had a 4K screen or a better CPU.
He he LOL. That's pretty dark humour Grim! Class. :D
 
The RX 580 is officially faster than the RTX 3080. :cool:

fortnite9ojv4.png
 
So essentially, it's only a serious problem if:

- you play at 1080P (bit pointless having a 3080 etc. at this res.)
- run less than max/ultra/very high settings (seems a bit odd to do this if you have the likes of a 3080....)

People who are playing at 1440p and/or 4k with max settings need not worry too much.... I skipped through the video and couldn't see any tests at 1440P or higher with max settings???

Guess my 2600 @ 4.1 will do until AMD get their act together and match Intel for price to performance, but I play most games at 4K 60 now anyway, so most of the work is on the 3080, and it can maintain a locked 60 @ 4K here now except for Cyberpunk 2077.

Still though, Nvidia clearly have a problem here, which needs addressing.
 
So essentially, it's only a serious problem if:

- you play at 1080P (bit pointless having a 3080 etc. at this res.)
- run less than max/ultra/very high settings (seems a bit odd to do this if you have the likes of a 3080....)

People who are playing at 1440p and/or 4k with max settings need not worry too much.... I skipped through the video and couldn't see any tests at 1440P or higher with max settings???

Guess my 2600 @ 4.1 will do until AMD get their act together and match Intel for price to performance, but I play most games at 4K 60 now anyway, so most of the work is on the 3080, and it can maintain a locked 60 @ 4K here now except for Cyberpunk 2077.

Still though, Nvidia clearly have a problem here, which needs addressing.

An RX 580 is 15% faster than a GTX 1080 with a 4790K, don't tell me that's not a problem.
------------------

I submitted a bug report in Star Citizen a couple of days ago. Out of curiosity I've had a look to see what other people who are confirming the bug are running; only 5 other people responded to it, and two of them are running configurations that would be affected by this, especially in Star Citizen, which will easily load up a 24-thread Ryzen 3900X.

It has to be said AMD's hardware-based thread scheduling is clearly better than Nvidia's software solution, and no, I just don't think it's good enough to say "well, use a faster CPU". Why, when clearly a better solution is possible, should the cost of getting around Nvidia's worse solution be laid on the consumers buying these GPUs?

Nvidia must fix this.

n0xt4sO.png


w2LcXyr.png
 
Read my post again, and all of it.....

Also, given that this is because Nvidia are lacking the hardware to improve/fix it, it is unlikely that they'll be able to do much, if anything, to address the issue. The same way AMD won't be able to match DLSS with their "software" FSR approach.
 
Read my post again, and all of it.....

I did. An RX 580 / GTX 1080 with a 4790K is not an uncommon combination, and those users are unlikely to be playing at 1440P or 4K.

The GTX 1080 is a much faster GPU than the RX 580, about 50% faster, yet with a 4790K the RX 580 is 15% faster.
 
I did. An RX 580 / GTX 1080 with a 4790K is not an uncommon combination, and those users are unlikely to be playing at 1440P or 4K.

The GTX 1080 is a much faster GPU than the RX 580, about 50% faster, yet with a 4790K the RX 580 is 15% faster.
Yeah, but a GTX 1080 at medium is a bit overkill. A 1080/2060 can still drive 1080p high-ultra in most games.

Even my friend with a GTX 1060 refuses to go medium. Most PC gamers, especially the ones who would buy a 1080, would not go below high settings with such a GPU.

Overhead caused the RX 580 and the 1080 to perform the same, at 72 FPS, in Watch Dogs: Legion at medium settings.

But in reality the GTX 1080 can, in theory, run the game at 70 FPS with much higher settings (theoretically, you have roughly 60-65% more GPU performance to spend on graphical fidelity on the 1080).

I can respect it if you want 100+ frames at medium settings, and that's perfectly valid. Some gamers will want that; others will want different things. At high-ultra settings the RX 580 would melt to below 60 FPS, while the GTX 1080 would stay above 60 FPS. I hope I was clear enough?
 
Yeah, but a GTX 1080 at medium is a bit overkill. A 1080/2060 can still drive 1080p high-ultra in most games.

Even my friend with a GTX 1060 refuses to go medium. Most PC gamers, especially the ones who would buy a 1080, would not go below high settings with such a GPU.

Overhead caused the RX 580 and the 1080 to perform the same, at 72 FPS, in Watch Dogs: Legion at medium settings.

But in reality the GTX 1080 can, in theory, run the game at 70 FPS with much higher settings (theoretically, you have roughly 60-65% more GPU performance to spend on graphical fidelity on the 1080).

I can respect it if you want 100+ frames at medium settings, and that's perfectly valid. Some gamers will want that; others will want different things. At high-ultra settings the RX 580 would melt to below 60 FPS, while the GTX 1080 would stay above 60 FPS. I hope I was clear enough?

Yes that was clear enough :)

The GPU my 2070S replaced was a GTX 1070, a good GPU, but early on I ran it with an overclocked 4690K and it was bloody awful. There was far too much load on the CPU and it just wouldn't run smooth in a few games; for example, getting 150 FPS in the original Insurgency but feeling like I was moving around across a cheese grater.
It's the reason I got a Ryzen 1600 and eventually a Ryzen 3600.

I do like to get the highest FPS I can where I can; I don't like playing first-person shooters at anything less than 150 FPS, and while I have a high-end CPU (a 5800X, if you're on mobile and can't see my sig) to drive my 2070S, it shouldn't be a requirement when the reason it's needed is clearly at least partially down to Nvidia's software-based thread scheduling.
 
Yes that was clear enough :)

The GPU my 2070S replaced was a GTX 1070, a good GPU, but early on I ran it with an overclocked 4690K and it was bloody awful. There was far too much load on the CPU and it just wouldn't run smooth in a few games; for example, getting 150 FPS in the original Insurgency but feeling like I was moving around across a cheese grater.
It's the reason I got a Ryzen 1600 and eventually a Ryzen 3600.

I do like to get the highest FPS I can where I can; I don't like playing first-person shooters at anything less than 150 FPS, and while I have a high-end CPU (a 5800X, if you're on mobile and can't see my sig) to drive my 2070S, it shouldn't be a requirement when the reason it's needed is clearly at least partially down to Nvidia's software-based thread scheduling.

Yeah, 2017 was a tough year for quad-core, non-HT CPUs. I think the trend started with AC Origins, or even before that; Battlefield 1 gave serious signals.

I do believe that Nvidia will somewhat optimize their scheduler if this gets more exposure. A huge improvement may not be possible due to the lack of hardware, but who knows, even a 10% improvement would be more than welcome.

I mean, they can do some good from time to time. I was actually shocked when they added support for FreeSync. Even more shocking was that they enabled it for the Pascal series as well (sadly not for Maxwell :( ). I was genuinely surprised that once in a while they did something useful for the benefit of the user. I immediately invested in a FreeSync monitor after this and have been happy ever since.

So I still have some hope, but not that much. Relying on CPU improvements is not good for consumers if they continue with their current software solution.
 
Yeah, 2017 was a tough year for quad-core, non-HT CPUs. I think the trend started with AC Origins, or even before that; Battlefield 1 gave serious signals.

I do believe that Nvidia will somewhat optimize their scheduler if this gets more exposure. A huge improvement may not be possible due to the lack of hardware, but who knows, even a 10% improvement would be more than welcome.

I mean, they can do some good from time to time. I was actually shocked when they added support for FreeSync. Even more shocking was that they enabled it for the Pascal series as well (sadly not for Maxwell :( ). I was genuinely surprised that once in a while they did something useful for the benefit of the user. I immediately invested in a FreeSync monitor after this and have been happy ever since.

So I still have some hope, but not that much. Relying on CPU improvements is not good for consumers if they continue with their current software solution.

I have no doubt Nvidia are more than capable of fixing it. This approach has served them well in getting around Microsoft and their horrendously poorly optimised DX11, which likes to cram everything into a single thread like it's 1993; if you split the work in that one thread between four, you're going to get a lot more performance, even if you're using some of the CPU just to do the splitting.

But times are changing; they changed a few years ago. Modern APIs spread that load across however many cores you have, and these days even DX11 does this. Game developers are using a lot more CPU resources, and it's not good when your driver is now competing for those resources. It's past time for Nvidia to move with the times and integrate the scheduler into the GPU itself; AMD already did this years ago.

Nvidia need to know this just isn't good enough or they will continue to drag their feet.
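
To illustrate the splitting idea, here is a rough conceptual sketch in C++ (this is not Nvidia's actual driver code; the queue, the worker count and the fake "translate" cost are all made up for illustration): the "game thread" only enqueues draw calls, while driver worker threads do the expensive translation that would otherwise all land on that single thread.

[code]
// Conceptual sketch only: a single "game thread" issues draw calls, and
// driver worker threads do the costly translation into GPU commands, so
// that work doesn't all pile onto the game thread. A real driver would
// use condition variables or lock-free queues rather than a spinning
// mutex-guarded std::queue, and the costs here are invented.
#include <atomic>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct DrawCall { int id; };

// Stand-in for the CPU cost of translating one API draw call into GPU
// command packets: this work has to happen somewhere, on some CPU thread.
static void translate(const DrawCall& dc) {
    volatile int sink = 0;
    for (int i = 0; i < 200000; ++i) sink += dc.id ^ i;
}

int main() {
    constexpr int kDrawCalls = 2000;
    constexpr int kWorkers   = 4;   // "split that one thread between four"

    std::queue<DrawCall> pending;
    std::mutex m;
    std::atomic<bool> done{false};

    // Driver worker threads: drain the queue and do the translation.
    std::vector<std::thread> workers;
    for (int w = 0; w < kWorkers; ++w) {
        workers.emplace_back([&] {
            for (;;) {
                DrawCall dc;
                {
                    std::lock_guard<std::mutex> lock(m);
                    if (pending.empty()) {
                        if (done) return;   // no more work coming
                        continue;           // spin-wait (sketch only)
                    }
                    dc = pending.front();
                    pending.pop();
                }
                translate(dc);  // heavy work happens off the game thread
            }
        });
    }

    // "Game thread": just enqueues draw calls, which is cheap. Without the
    // workers it would have to call translate() itself, serially.
    for (int i = 0; i < kDrawCalls; ++i) {
        std::lock_guard<std::mutex> lock(m);
        pending.push(DrawCall{i});
    }
    done = true;

    for (auto& t : workers) t.join();
    std::printf("translated %d draw calls on %d worker threads\n",
                kDrawCalls, kWorkers);
    return 0;
}
[/code]

Either way the translation still costs CPU time somewhere; the worker threads just stop it all landing on the one thread the game itself needs, which is also why the overhead starts to show once the game wants those cores for itself.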
 
I have no doubt Nvidia are more than capable of fixing it. This approach has served them well in getting around Microsoft and their horrendously poorly optimised DX11, which likes to cram everything into a single thread like it's 1993; if you split the work in that one thread between four, you're going to get a lot more performance, even if you're using some of the CPU just to do the splitting.

But times are changing; they changed a few years ago. Modern APIs spread that load across however many cores you have, and these days even DX11 does this. Game developers are using a lot more CPU resources, and it's not good when your driver is now competing for those resources. It's past time for Nvidia to move with the times and integrate the scheduler into the GPU itself; AMD already did this years ago.

Nvidia need to know this just isn't good enough or they will continue to drag their feet.
Exactly. I can't believe people are doing apologetics for a multi-billion-dollar company. With game development being based on the new Zen 2 consoles, the demands on the CPU in games will increase, and this will become a big issue sooner rather than later.
 
If PC gamers are still playing at 1080P and/or using low/medium settings, then they'd be better off getting a next-gen console for a better gaming experience :eek: :D :p

Exactly. I can't believe people are doing apologetics for a multi-billion-dollar company. With game development being based on the new Zen 2 consoles, the demands on the CPU in games will increase, and this will become a big issue sooner rather than later.

With consoles targeting 4k and being either locked to 30 or 60 fps, I don't think it'll be that big of an issue tbh. At 4k or even 1440p with max/high graphic settings, it's more down to the GPU.



Nvidia do have a problem, but they probably can't do much to address it till the next lot of GPUs are out, where they can add back the hardware. A software approach will never be as good as a hardware approach.
 
If PC gamers are still playing at 1080P and/or using low/medium settings, then they'd be better off getting a next-gen console for a better gaming experience :eek: :D :p



With consoles targeting 4k and being either locked to 30 or 60 fps, I don't think it'll be that big of an issue tbh. At 4k or even 1440p with max/high graphic settings, it's more down to the GPU.



Nvidia do have a problem, but they probably can't do much to address it till the next lot of GPUs are out, where they can add back the hardware. A software approach will never be as good as a hardware approach.

In Hitman and Watch Dogs: Legion the 3080 struggled to get over 60 FPS minimums at 1080p medium with the 4790K and the R5 1400. Sure, it could handle high and still get the same frame rate, but if you have an older CPU, why not buy a cheaper 6700 XT, get more frames at 1080p high due to the lower CPU load, and save the money towards a full rig upgrade with the Alder Lake refresh / Zen 5, when the DDR5 premium will be lower and new-platform issues will have been ironed out.
 
Very interesting findings, these, and they do change the landscape somewhat. Having watched videos on this and followed the thread here etc., I do find one thing a little bit lacking in clarity: defining what a "low-end CPU" is these days. This is referred to a lot throughout the videos from HWUB and co., where they will say that the system needs to be a good balance and that you should just keep in mind that if you have a lower-end CPU, then you may want to look more towards Radeon-based GPUs. Fine... but define "low end".

1: I assume straight away we are saying that almost anything with 4 cores is "low end" and now a bottleneck?
2: What about Ryzen 5 CPUs like the 1600AF? This has 6 cores and is hardly ancient! Is this "low end" and will cause a bottleneck?
3: What about Ryzen 7 8-core CPUs like the 3700X? What if I assign only 6 of the cores (some people run virtualized gaming VMs with only 4 or 6 cores assigned)?
4: Does it depend more on clock speeds or cores or both?

It seems crazy to be talking about "older" CPUs that are barely 1-2 years old.
 
Very interesting findings, these, and they do change the landscape somewhat. Having watched videos on this and followed the thread here etc., I do find one thing a little bit lacking in clarity: defining what a "low-end CPU" is these days. This is referred to a lot throughout the videos from HWUB and co., where they will say that the system needs to be a good balance and that you should just keep in mind that if you have a lower-end CPU, then you may want to look more towards Radeon-based GPUs. Fine... but define "low end".

1: I assume straight away we are saying that almost anything with 4 cores is "low end" and now a bottleneck?
2: What about Ryzen 5 CPUs like the 1600AF? This has 6 cores and is hardly ancient! Is this "low end" and will cause a bottleneck?
3: What about Ryzen 7 8-core CPUs like the 3700X? What if I assign only 6 of the cores (some people run virtualized gaming VMs with only 4 or 6 cores assigned)?
4: Does it depend more on clock speeds or cores or both?

It seems crazy to be talking about "older" CPUs that are barely 1-2 years old.

Based on their charts and other data you can get bottlenecking up to the 3600(X). It seems less impactful on 8-core parts, but I have not seen testing on the 1800X or some of the older HEDT 6-core+ Intel parts.

Also, it is a moving target, since the new consoles have only just come out. Give it 12 months, when more games are built around 8-core Zen 2 as the baseline CPU, and I expect more recent and powerful CPUs will also show signs of this issue.
 