Nvidia Has a Driver Overhead Problem, GeForce vs Radeon on Low-End CPUs

If it's so common, then why can't you provide a source? If it's as common as you say, it should be easy... But don't worry, I'll do some legwork, as I like to provide information and back up my claims with supported references.

The release notes for AMD's driver 20.5.1 beta: https://www.amd.com/en/support/kb/release-notes/rn-rad-win-20-5-1-ghs-beta
  • Windows® May 2020 Update
    • AMD is excited to provide beta support for Microsoft’s Graphics Hardware Scheduling feature. By moving scheduling responsibilities from software into hardware, this feature has the potential to improve GPU responsiveness and to allow additional innovation in GPU workload management in the future. This feature is available on Radeon RX 5600 and Radeon RX 5700 series graphics products.
Microsoft hasn't stated specifically which DirectX versions this change applies to, so the scheduling behaviour may vary from API to API depending on which one each game uses. It would be reasonable to conclude that all DirectX games used software scheduling prior to the introduction of this feature and setting. My point: NVIDIA supports hardware-accelerated GPU scheduling on the GPU and in their driver, and Windows 10/11 allows it to be enabled. Therefore your claim, "Thread scheduling: AMD have hardware on the GPUs themselves to handle that; Nvidia's is software, i.e. it runs on the CPU, so the CPU load is high on Nvidia cards, what one might call a 'driver overhead'. As a result, a game that would by itself be heavy on the CPU would bottleneck an Nvidia GPU sooner than an AMD GPU, given the AMD GPU doesn't use any CPU cycles just to drive itself." is false.
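If anyone wants to check what their own machine is actually doing, the HAGS toggle is exposed in the Windows graphics settings page and, as far as I'm aware, via the HwSchMode registry value (2 = on, 1 = off); treat those exact values as an assumption and double-check against the settings page. A rough Python sketch, Windows only:

```python
# Rough sketch (Windows only): read the Hardware-Accelerated GPU Scheduling toggle
# from the registry. HwSchMode under GraphicsDrivers is the commonly documented
# switch; the 1/2 semantics below are an assumption, so verify against the
# Settings > Display > Graphics page on your own machine.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\GraphicsDrivers"

def hags_state() -> str:
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            value, _ = winreg.QueryValueEx(key, "HwSchMode")
    except FileNotFoundError:
        return "HwSchMode not present (OS or driver doesn't expose HAGS)"
    return {1: "disabled", 2: "enabled"}.get(value, f"unknown value {value}")

if __name__ == "__main__":
    print("Hardware-accelerated GPU scheduling:", hags_state())
```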

This doesn't prove Nvidia has hardware thread scheduling, or that my statement was false; it's an AMD driver statement. The fact that testing still shows significantly higher CPU overhead on Nvidia hardware would indicate they still don't have it, at least not up to and including Ampere. I don't know about Lovelace.
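For what it's worth, "testing shows higher CPU overhead" in these videos is always an indirect measurement: you run the same GPU on a fast and a slow CPU and look at how much performance it sheds in CPU-bound scenes. A toy sketch of that arithmetic, with completely made-up FPS numbers that don't come from any real benchmark:

```python
# Toy illustration of how CPU-limited "driver overhead" comparisons are framed:
# the same GPU is benchmarked on a fast and a slow CPU, and the share of
# performance it loses is compared across vendors. All numbers are made up.
avg_fps = {
    # (gpu, cpu_tier): average FPS in a CPU-bound test scene (hypothetical)
    ("GPU A", "fast CPU"): 160,
    ("GPU A", "slow CPU"): 110,
    ("GPU B", "fast CPU"): 158,
    ("GPU B", "slow CPU"): 135,
}

for gpu in ("GPU A", "GPU B"):
    fast = avg_fps[(gpu, "fast CPU")]
    slow = avg_fps[(gpu, "slow CPU")]
    loss = (fast - slow) / fast * 100
    print(f"{gpu}: {fast} -> {slow} FPS on the slow CPU ({loss:.0f}% lost)")

# A bigger drop on the slow CPU suggests the driver/runtime costs more CPU time
# per frame, but it doesn't, by itself, say which part of the stack is responsible.
```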
 
Last edited:
This doesn't prove Nvidia has hardware thread scheduling, or that my statement was false; it's an AMD driver statement. The fact that testing still shows significantly higher CPU overhead on Nvidia hardware would indicate they still don't have it, at least not up to and including Ampere. I don't know about Lovelace.

That CPU overhead could be caused by literally anything in the driver: a bad do-while loop, a poorly written function, an inefficient API call, some debug logging left enabled when it shouldn't be (this happened a few months ago)... You cannot see high CPU usage and categorically state that the scheduler is causing it. This is what the debug and RCA processes are for: identifying the root cause of an issue and addressing it. Only NVIDIA can diagnose this issue and state with certainty what you state with confidence verging on arrogance. I get the distinct impression that you do not work in technology and have never written or troubleshot software.
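To make that concrete: the honest way to get from "CPU usage is high" to "X is the culprit" is to profile and attribute the time first. A generic Python sketch of that attribution step (the two functions are invented stand-ins, nothing to do with NVIDIA's actual driver code):

```python
# Generic sketch of root-cause profiling: attribute CPU time to call sites before
# blaming a component. Both "suspects" are invented stand-ins; cProfile simply
# shows which one actually burns the time.
import cProfile
import io
import pstats

def suspected_scheduler_work(n: int) -> int:
    # Invented stand-in for "it must be the scheduler"
    return sum(i % 7 for i in range(n))

def overlooked_logging_path(n: int) -> int:
    # Invented stand-in for a mundane culprit (e.g. stray debug logging)
    total = 0
    for i in range(n):
        total += len(f"frame {i}: submitting command buffer")
    return total

def frame_loop(frames: int = 50_000) -> None:
    for _ in range(10):
        suspected_scheduler_work(frames)
    for _ in range(10):
        overlooked_logging_path(frames)

profiler = cProfile.Profile()
profiler.enable()
frame_loop()
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())  # the breakdown, not a raw CPU% figure, names the culprit
```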

I would suggest putting the shovel down now. Goodnight.
 
That CPU overhead could be caused by literally anything in the driver: a bad do-while loop, a poorly written function, an inefficient API call, some debug logging left enabled when it shouldn't be (this happened a few months ago)... You cannot see high CPU usage and categorically state that the scheduler is causing it. This is what the debug and RCA processes are for: identifying the root cause of an issue and addressing it. Only NVIDIA can diagnose this issue and state with certainty what you state with confidence verging on arrogance. I get the distinct impression that you do not work in technology and have never written or troubleshot software.

I would suggest putting the shovel down now. Goodnight.

Nvidia are a lot of things, but incompetent at software is not one of them, not by a long shot.
 
Last edited:
Nvidia are a lot of things, but incompetent at software is not one of them, not by a long shot.

You mean they write software for millions of devices used in a myriad of configurations, across an extremely wide range of use cases, and do it with a reasonable amount of success? Yes.

I would highly recommend sending in your CV to NVIDIA and heading up their development teams - you could U-turn their terrible software and be replacing Jensen as CEO by 2024! This is too much. I really am calling it a night. Enjoy watching your YouTube videos and making your prophetic (or clairvoyant) claims.
 
You mean they write software for millions of devices used in a myriad of configurations, across an extremely wide range of use cases, and do it with a reasonable amount of success? Yes.

I would highly recommend sending in your CV to NVIDIA and heading up their development teams - you could U-turn their terrible software and be replacing Jensen as CEO by 2024! This is too much. I really am calling it a night. Enjoy watching your YouTube videos and making your prophetic (or clairvoyant) claims.

Your argument is essentially that AMD make better drivers than Nvidia: this problem has existed for three or more years and Nvidia still haven't fixed what you argue is a software problem. That's one hell of a reach.

No.
 
Last edited:
Tech Yes City just did a video on this topic.

The old Intel CPUs with high core counts for their time still handle games very respectably. The old AMD CPUs don't, though.

And due to a number of factors, old Intel 6-, 8- and 10-core HEDT CPUs will continue to be good for games for several more years, because they are on par with or even slightly faster than the current consoles.

But how? I thought the R7 1700 for 300€ was much faster than the 6900x. :D :D :D
 
@humbug @aaronyuri
You guys seem to be talking about different things here with the term 'hardware scheduler'.
One is about a 2020 update to how Windows handles application priority on the GPU; the other is something from way back, referring to instruction scheduling and optimisation for execution within the GPU hardware, which goes back 10 years to the Kepler days (how time flies!). I have no idea how relevant that is to Nvidia's modern GPU architectures; I'd hazard a guess things have changed.
https://en.wikipedia.org/wiki/Kepler_(microarchitecture)


GF100 was essentially a thread level parallelism design, with each SM executing a single instruction from up to two warps. At the same time certain math instructions had variable latencies, so GF100 utilized a complex hardware scoreboard to do the necessary scheduling. Compared to that, GK110 introduces instruction level parallelism to the mix, making the GPU reliant on a mix of high TLP and high ILP to achieve maximum performance. The GPU now executes from 4 warps, ultimately executing up to 8 instructions at once if all of the warps have ILP-suitable instructions waiting. At the same time scheduling has been moved from hardware to software, with NVIDIA’s compiler now statically scheduling warps thanks to the fact that every math instruction now has a fixed latency. Finally, to further improve SMX utilization FP64 instructions can now be paired with other instructions, whereas on GF100 they had to be done on their own.

The end result is that at an execution level NVIDIA has sacrificed some of GF100’s performance consistency by introducing superscalar execution – and ultimately becoming reliant on it for maximum performance. At the same time they have introduced a new type of consistency (and removed a level of complexity) by moving to fixed latency instructions and a static scheduled compiler. Thankfully a ton of these details are abstracted from programmers and handled by NVIDIA’s compiler, but for HPC users who are used to getting their hands dirty with low level code they are going to find that GK110 is more different than it would seem at first glance.
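For anyone skimming that wall of text, here is a deliberately crude toy model of the TLP + ILP scheme it describes: pick up to 4 warps per cycle and dual-issue a second, independent instruction from a warp when ILP allows, capped at 8 instructions per cycle. Purely an illustration, not a model of real GK110 behaviour:

```python
# Toy model of the TLP + ILP issue scheme described above: each cycle, take up to
# 4 warps, issue one instruction per warp, plus a second independent instruction
# from the same warp when one is available (dual issue), capped at 8 per cycle.
# Gross simplification for illustration only.
from collections import deque

# Each warp is a queue of instruction "bundles"; a bundle holds 1 or 2 independent ops.
warps = [
    deque([("mul", "add"), ("ld",), ("fma", "add")]),  # warp 0: some ILP available
    deque([("add",), ("add",), ("add",)]),             # warp 1: no ILP
    deque([("fma", "mul"), ("fma", "mul")]),           # warp 2: plenty of ILP
    deque([("ld",), ("st",)]),                         # warp 3: memory ops, no ILP
]

cycle = 0
while any(warps):
    issued = []
    for warp_id, warp in enumerate(warps[:4]):         # up to 4 warps per cycle
        if warp:
            bundle = warp.popleft()
            issued.extend((warp_id, op) for op in bundle[:2])  # dual issue when ILP exists
    issued = issued[:8]                                # hard cap: 8 instructions per cycle
    print(f"cycle {cycle}: issued {len(issued)} -> {issued}")
    cycle += 1
```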
 
Last edited:
I have one of them on ignore, so I might be getting things a bit mixed up, but I think what they are actually talking about is command lists and/or the compute shader dispatcher.
 
Last edited:
That, and as said, Cyberpunk is just a poorly optimised game.
 
From this video.


He tried a G3258 for the LOLs in a previous video:

What's more interesting is that the RX 6700 10GB is still available in the US as a sub-$300 dGPU, so it's more likely to be paired with a stock-clocked, cheaper CPU.

This comparison is with a Core i3 10100F:

The RTX 2080 Ti is generally slightly faster than an RX 6700 XT:
 
Last edited:
Look back at the chain of posts and the date ;) The game worked well for me, but sadly many had issues on launch. Looking back though, CP2077's launch looks like the gold standard compared to recent games :cry: But you know, it seems OK/acceptable to spend ££££ to avoid/brute-force through issues these days, and it's never the game devs' fault... ;) :p
 
Last edited:
Your argument is essentially that AMD make better drivers than Nvidia: this problem has existed for three or more years and Nvidia still haven't fixed what you argue is a software problem. That's one hell of a reach.

No.
I don't have an argument. I gave you referenced facts and provided some development insights based on my professional experience. The argument you speak of is confined to your head.

What you do with the information is up to you. I have nothing further to say. Happy gaming.
 
  • Like
Reactions: TNA
Look back at the chain of posts and the date ;) The game worked well for me, but sadly many had issues on launch. Looking back though, CP2077's launch looks like the gold standard compared to recent games :cry: But you know, it seems OK/acceptable to spend ££££ to avoid/brute-force through issues these days, and it's never the game devs' fault... ;) :p
The crazy thing is I did, but I guess my mind transformed Mar into May, I saw 20-something and must've thought "close enough".

Oops.. :eek:
 
Tech Yes City just did a video on this topic.

The old Intel CPUs with high core counts for their time still handle games very respectably. The old AMD CPUs don't, though.

And due to a number of factors, old Intel 6-, 8- and 10-core HEDT CPUs will continue to be good for games for several more years, because they are on par with or even slightly faster than the current consoles.


Not sure I'm seeing it in that vid? The 5600 is about as fast versus the 150W old Intel part as you'd usually expect to see, and the 4500 looks to be about as slow as it always has been relative to it. Not seeing anything new there?
 