
Nvidia Has a Driver Overhead Problem, GeForce vs Radeon on Low-End CPUs

Yeah, even a 10400F + Z490 with highly overclocked and tuned RAM can do some serious things in games, at a much cheaper price.

But I guess 11400F + B560 will be the new "B450 + 3600" combo for gaming, since B560 will allow RAM overclocking this time (I hope above 3000 MHz).

One of the privileges of being a knowledgeable PC enthusiast is knowing that a 10700K will serve you just as well as a 5800X in the real world. Not that there is anything wrong with burning that bit of extra cash going for the best while knowing it's a placebo in the real world; enthusiasts have been doing that for ages.

RE: RTX 3090 vs RTX 3080.
 
I wonder if Nvidia are going to come out with one of those miracle drivers we used to see from them a few years ago. I remember with, I think, the 980 Ti and then the 1080 Ti, they made a big announcement of a super driver that boosted performance massively and got rid of a lot of latent bugs. And it did.
 
I wonder if Nvidia are going to come out with one of those miracle drivers we used to see from them a few years ago. I remember with, I think, the 980 Ti and then the 1080 Ti, they made a big announcement of a super driver that boosted performance massively and got rid of a lot of latent bugs. And it did.

They did that last year, I think, with the big driver with the DX12 improvements.

I think it came out around October last year.
 
Was thinking I'm not really seeing these issues with my 10 year old E5-1650 V2 @ 4.7GHz...



Never mind then... (for most games basically the same as the 1680 unless the game makes use of the extra 2 cores/4 threads).
 
Was thinking I'm not really seeing these issues with my 10 year old E5-1650 V2 @ 4.7GHz...



Never mind then... (for most games basically the same as the 1680 unless the game makes use of the extra 2 cores/4 threads).

From this video.

 
I remember years ago Nvidia launched Maxwell, which was able to take incoming game code and split the GPU draw calls across different CPU cores; that way it avoided bottlenecking the primary CPU core like it did on AMD cards (DX11 did support draw-call splitting, but at the time this had to be enabled in game code, which was rarely the case). Once DX12 became standard and DX11 was being phased out, bottlenecking one CPU core became a thing of the past, and Nvidia's approach was less efficient as it added delay to the CPU sending draw calls to the GPU. I wonder if this 'driver overhead' issue is a hangover from design choices made back in 2014?
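The DX11-era split being described here can be sketched in plain, non-graphics Python: worker threads record draw commands into their own local "command lists", and only one thread replays them in order, which is roughly the deferred-context pattern DX11 offered but games rarely opted into. All names here are invented for illustration; none of this is real D3D11 API.

```python
import threading

# Toy model of DX11-style deferred contexts: each worker thread records
# draw commands into its own local command list instead of talking to the
# driver directly, and a single submission step replays the lists in a
# fixed order so the final stream is deterministic.

def record_commands(thread_id, num_draws, out_lists):
    # Per-thread recording: each thread only writes its own entry.
    out_lists[thread_id] = [f"draw(obj={thread_id}-{i})" for i in range(num_draws)]

def submit_all(out_lists):
    # Only one place talks to the "driver": replay lists in thread order.
    stream = []
    for tid in sorted(out_lists):
        stream.extend(out_lists[tid])
    return stream

def render_frame(num_threads=4, draws_per_thread=3):
    out_lists = {}
    workers = [
        threading.Thread(target=record_commands, args=(t, draws_per_thread, out_lists))
        for t in range(num_threads)
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return submit_all(out_lists)

frame = render_frame()
print(len(frame))   # 12 (4 threads x 3 draws)
print(frame[0])     # draw(obj=0-0)
```

The point of the pattern is that the expensive recording work parallelises while submission order stays stable, which is why it helped relieve the single-core DX11 bottleneck when games actually used it.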
 
I remember years ago Nvidia launched Maxwell, which was able to take incoming game code and split the GPU draw calls across different CPU cores; that way it avoided bottlenecking the primary CPU core like it did on AMD cards (DX11 did support draw-call splitting, but at the time this had to be enabled in game code, which was rarely the case). Once DX12 became standard and DX11 was being phased out, bottlenecking one CPU core became a thing of the past, and Nvidia's approach was less efficient as it added delay to the CPU sending draw calls to the GPU. I wonder if this 'driver overhead' issue is a hangover from design choices made back in 2014?

That code was only enabled for DX11 - it literally hooked into the DX11 API, replaced functions and/or intercepted calls, and re-optimised submissions - DX12 was handled differently.
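The kind of hooking described above, intercepting an API call and substituting an optimised path before forwarding to the real implementation, can be shown in miniature with a plain Python monkey-patch. Every name here is invented for the example; the real driver does this at the DX11 binary interface, not in Python.

```python
# Toy illustration of API interception: wrap an existing function so that
# calls are buffered and re-optimised before reaching the original code.

class FakeAPI:
    def __init__(self):
        self.submitted = []

    def draw(self, obj):
        # The "real" entry point the application thinks it is calling.
        self.submitted.append(obj)

def install_batching_hook(api, batch_size=4):
    original_draw = api.draw  # keep a reference to the real call
    pending = []

    def hooked_draw(obj):
        # Intercept: buffer calls instead of forwarding immediately.
        pending.append(obj)
        if len(pending) >= batch_size:
            flush()

    def flush():
        # Re-optimised commit: forward buffered calls in one burst,
        # sorted here as a stand-in for e.g. reducing state changes.
        for obj in sorted(pending):
            original_draw(obj)
        pending.clear()

    api.draw = hooked_draw  # replace the function in place
    return flush            # caller flushes at end of frame

api = FakeAPI()
flush = install_batching_hook(api)
for obj in [3, 1, 2, 0, 5, 4]:
    api.draw(obj)           # goes through the hook, not the original
flush()
print(api.submitted)        # [0, 1, 2, 3, 4, 5]
```

The application never changes; it still calls `draw`, but the hook now sits in between, which is the general shape of driver-side interception.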
 
I remember years ago Nvidia launched Maxwell, which was able to take incoming game code and split the GPU draw calls across different CPU cores; that way it avoided bottlenecking the primary CPU core like it did on AMD cards (DX11 did support draw-call splitting, but at the time this had to be enabled in game code, which was rarely the case). Once DX12 became standard and DX11 was being phased out, bottlenecking one CPU core became a thing of the past, and Nvidia's approach was less efficient as it added delay to the CPU sending draw calls to the GPU. I wonder if this 'driver overhead' issue is a hangover from design choices made back in 2014?

Thread scheduling: AMD have hardware on the GPU itself to handle that, while Nvidia's is software, i.e. it runs on the CPU, so the CPU load is higher on Nvidia cards, what one might call a "driver overhead". As a result, a game that is by itself heavy on the CPU will bottleneck an Nvidia GPU sooner than an AMD GPU, given the AMD card doesn't use any CPU cycles just to drive itself.

HUB found that, at its most extreme, the difference in performance can be as much as 40%.
 
Thread scheduling: AMD have hardware on the GPU itself to handle that, while Nvidia's is software, i.e. it runs on the CPU, so the CPU load is higher on Nvidia cards, what one might call a "driver overhead". As a result, a game that is by itself heavy on the CPU will bottleneck an Nvidia GPU sooner than an AMD GPU, given the AMD card doesn't use any CPU cycles just to drive itself.

HUB found that, at its most extreme, the difference in performance can be as much as 40%.

I'm afraid this has not been true since Pascal: https://nvidia.custhelp.com/app/ans...-support-hardware-accelerated-gpu-scheduling?

Furthermore, hardware scheduling is disabled by default in Windows 10/11 for AMD and NVIDIA GPUs. It has to be enabled manually under Settings > System > Display > Graphics > Hardware-accelerated GPU scheduling.
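As an aside, on builds that support the feature the same toggle is backed by a registry value, so it can be checked or flipped without the Settings UI. This is a sketch based on the commonly documented `HwSchMode` value (an assumption worth verifying on your own build); a reboot is needed for it to take effect.

```reg
Windows Registry Editor Version 5.00

; Hardware-accelerated GPU scheduling (HAGS) toggle.
; 2 = enabled, 1 = disabled. Requires a reboot to apply.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers]
"HwSchMode"=dword:00000002
```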
 
If it's not true, can you explain the performance difference?

I don't work on NVIDIA's dev team writing their drivers, so alas, I cannot. I don't mean that in a sarcastic manner; quite literally, they are the only people who could explain this problem, and I'm sure they understand it very well. It could be sitting in their backlog or marked as "won't do" - who knows.

My point was that your logic around AMD using hardware scheduling vs NVIDIA using software scheduling has gone astray, as neither uses hardware scheduling by default and both actually support it.

Take a look at this Microsoft blog post for some more insight: https://devblogs.microsoft.com/directx/hardware-accelerated-gpu-scheduling/
 
I don't work on NVIDIA's dev team writing their drivers, so alas, I cannot. I don't mean that in a sarcastic manner; quite literally, they are the only people who could explain this problem, and I'm sure they understand it very well. It could be sitting in their backlog or marked as "won't do" - who knows.

My point was that your logic around AMD using hardware scheduling vs NVIDIA using software scheduling has gone astray, as neither uses hardware scheduling by default and both actually support it.

So basically: "I don't know, but you're wrong because there is a setting inside Windows called hardware scheduling that you can turn on for both AMD and Nvidia" - is that about right?

as neither use hardware scheduling by default

How did you arrive at that?
 
I think Microsoft are using "Hardware Scheduling" as a generic term. It was on in my settings; whether or not it's on by default might depend on what you say you use the system for in initial setup.

I would be interested to know if setting this on or off makes any difference at all to AMD GPUs in DX12 and Vulkan. I suspect it doesn't, but I think it will in DX11.
 
So basically: "I don't know, but you're wrong because there is a setting inside Windows called hardware scheduling that you can turn on for both AMD and Nvidia" - is that about right?



How did you arrive at that?

Well, of course I don't know; neither do you... Nobody in this thread knows unless there's an NVIDIA dev lurking on here somewhere.

That setting is disabled by default - therefore, all software scheduling. According to the MS blog, the WDDM decides who will do the scheduling, and it's toggled on and off via this setting. But if you have some sources or resources to back up your claims and show that AMD always uses hardware scheduling and NVIDIA always uses software, please share them. Is this where I write something like "I don't know, but you're wrong" in anticipation of your reply, in an attempt to be condescending? How exciting.
 
Was thinking I'm not really seeing these issues with my 10 year old E5-1650 V2 @ 4.7GHz...



Never mind then... (for most games basically the same as the 1680 unless the game makes use of the extra 2 cores/4 threads).

Tech Yes City just did a video on this topic.

The old Intel CPUs with high core counts for their time still handle games very respectably; the old AMD CPUs don't, though.

And due to a number of factors, old Intel 6-, 8- and 10-core HEDT CPUs will continue to be good for games for several more years, because they are on par with or even slightly faster than the current consoles.

 
Well, of course I don't know; neither do you... Nobody in this thread knows unless there's an NVIDIA dev lurking on here somewhere.

That setting is disabled by default - therefore, all software scheduling. According to the MS blog, the WDDM decides who will do the scheduling, and it's toggled on and off via this setting. But if you have some sources or resources to back up your claims and show that AMD always uses hardware scheduling and NVIDIA always uses software, please share them. Is this where I write something like "I don't know, but you're wrong" in anticipation of your reply, in an attempt to be condescending? How exciting.

It's common knowledge that's been covered by several tech journalists, hence the video I linked. Maybe watch that.
 
It's common knowledge that's been covered by several tech journalists, hence the video I linked. Maybe watch that.

If it's so common, then why can't you provide a source? If it's as common as you say, it should be easy... But don't worry, I'll do some legwork, as I like to provide information and state claims with supporting references.

The release notes for AMD's driver 20.5.1 beta: https://www.amd.com/en/support/kb/release-notes/rn-rad-win-20-5-1-ghs-beta
  • Windows® May 2020 Update
    • AMD is excited to provide beta support for Microsoft’s Graphics Hardware Scheduling feature. By moving scheduling responsibilities from software into hardware, this feature has the potential to improve GPU responsiveness and to allow additional innovation in GPU workload management in the future. This feature is available on Radeon RX 5600 and Radeon RX 5700 series graphics products.
Microsoft hasn't stated specifically which DirectX versions this change applies to, so which scheduler is actually in use may vary from game to game with the graphics API. It would be reasonable to conclude that all DirectX games used software scheduling prior to the introduction of this feature and setting. My point: NVIDIA has hardware scheduling support on the GPU and in their driver, and Windows 10/11 supports enabling it. Therefore your claim, "Thread scheduling, AMD have hardware on the GPU itself to handle that, Nvidia's is software, i.e. it runs on the CPU, so the CPU load is higher on Nvidia cards, what one might call a 'driver overhead'; as a result a game that is by itself heavy on the CPU will bottleneck an Nvidia GPU sooner than an AMD GPU given it doesn't use any CPU cycles just to drive itself," is false.
 