The CPU determines GPU performance. On Nvidia anyway.

The CPU limit is imposed by Nvidia's GPU design; it's not the CPU that is the problem. A normal* situation for Nvidia would be something like an 11900K/5800X. Less than that, and the scenario becomes distinctly less normal.

The minimum system requirements for Ampere are just incredible. An RX 5600 XT shouldn't be able to beat an RTX 3090 regardless of system specification.

How many people have upgraded from RDNA 1 to Ampere and are unaware they have suffered a performance regression?
 
Did I read that right, in that it suggests a Ryzen 5 3600 is sufficient if paired with GPUs up to the RTX 3070, after which a 5600X would be a better match? And once an appropriate CPU/GPU match is made, the difference in performance between AMD and Nvidia is fairly minimal?

8 cores seem* to offer the least performance hit.
 
Yeah you might be right.

Seems AMD first introduced Asynchronous Compute Engines with the AMD Tahiti GPU line of chips in 2011. Nvidia can't be 10 years behind.

I'm not buying the software scheduler problem, as Nvidia would have simply updated its driver within the last 10 years.
 
Seems AMD first introduced Asynchronous Compute Engines with the AMD Tahiti GPU line of chips in 2011. Nvidia can't be 10 years behind.

I'm not buying the software scheduler problem, as Nvidia would have simply updated its driver within the last 10 years.

Are you serious?

OK, let's ignore the fact that we know the difference in schedulers: what would you suggest is causing the 20-30% higher CPU load on Nvidia GPUs?
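
Just to put rough numbers on what a 20-30% higher CPU load means when you are actually CPU-bound (the frame times below are illustrative assumptions, not measurements):

```python
# Illustrative arithmetic only: if driver work adds 20-30% to the CPU cost of a frame
# and the game is CPU-bound, the frame rate drops by roughly the same proportion.
baseline_cpu_ms = 8.0  # hypothetical CPU frame time with a leaner driver path

for overhead in (0.20, 0.30):
    cpu_ms = baseline_cpu_ms * (1 + overhead)
    print(f"{overhead:.0%} more CPU load: {1000 / baseline_cpu_ms:.0f} fps -> {1000 / cpu_ms:.0f} fps")

# 20% more CPU load: 125 fps -> 104 fps
# 30% more CPU load: 125 fps -> 96 fps
```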
 
Did I read that right, in that it suggests a Ryzen 5 3600 is sufficient if paired with GPUs up to the RTX 3070, after which a 5600X would be a better match? And once an appropriate CPU/GPU match is made, the difference in performance between AMD and Nvidia is fairly minimal?

You should be OK as long as you're running 1440p, i.e. you're not running 1080p because you want very high frame rates. If you do, get a 5600X.
 
Are you serious? You can't think Nvidia aren't aware of this issue.

They are aware of it; up to this point, at least, they haven't wanted to create a hardware scheduler. Why that is, only they know, but it would increase the die size and increase power consumption.

And if most people don't know about this, why should they increase their costs and, up until Ampere vs RDNA2 at least, risk their power-efficient GPU reputation? Nvidia only care about how it looks, not what it actually does, and up until now no one has talked about this. Even now, other than Hardware Unboxed, no one is, just like the fact that no one is talking about the extra input latency that DLSS causes.
 
They are aware of it; up to this point, at least, they haven't wanted to create a hardware scheduler. Why that is, only they know, but it would increase the die size and increase power consumption.

And if most people don't know about this, why should they increase their costs and, up until Ampere vs RDNA2 at least, risk their power-efficient GPU reputation? Nvidia only care about how it looks, not what it actually does, and up until now no one has talked about this. Even now, other than Hardware Unboxed, no one is, just like the fact that no one is talking about the extra input latency that DLSS causes.

That in itself is an architectural issue. I have some suspicions of what the issue is, but exposing the API/driver layer on Nvidia seems impossible.

Nvidia needs a much better design, or at least an honest minimum system requirement. How many people have upgraded to an RTX card and dropped performance because the CPU overhead has increased?
 
That in itself is an architectural issue. I have some suspicions of what the issue is, but exposing the API/driver layer on Nvidia seems impossible.

Nvidia needs a much better design, or at least an honest minimum system requirement. How many people have upgraded to an RTX card and dropped performance because the CPU overhead has increased?

They wouldn't drop performance; at worst, they may not gain performance.

The HUB slides are very deliberately chosen to make the point: the most extreme example. In the slide in this post, for example, the CPU on the 5700 XT is also running at about 90% of its highest level of performance for that GPU. In this case the 5700 XT is 17% faster than the RTX 3090 because the CPU is actually about 50% slower than it needs to be, and only about half of that is down to the software scheduler on the Nvidia card; the other half is just a lack of CPU oomph. Hardware Unboxed have a habit of deliberately exaggerating to make the point. That's not to say it isn't true, it absolutely is, and with the 2600X the 3090 is only capable of 93 FPS in that game, in that particular part of it, and from a 3090 you would want a shed load more than that. You would probably get about 130 from a 5600X, because that CPU has the oomph to deal with the software scheduler and the extra headroom needed, though you might get even more with a 5800X.

This is a problem for modern gaming because games now lean quite heavily on the CPU, and thank #### for high-core-count, high-IPC CPUs, because without them Nvidia's GPUs are not getting any faster.

As for what the real issue is, if it's not as simple as a lack of will on Nvidia's part, I have no idea, but AMD created a scheduler for the modern age and Nvidia have not yet replicated it; there is such a thing as Intellectual Property rights.
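
To put a very rough model on those numbers (everything here is an illustrative assumption except the 93 and ~130 FPS figures above):

```python
# Toy model, not a measurement: when the GPU is waiting on the CPU, the frame rate is
# roughly what the CPU could deliver, divided by the extra cost of the software scheduler.
def cpu_limited_fps(cpu_fps_potential, scheduler_overhead):
    """FPS when CPU-bound; scheduler_overhead is the assumed extra CPU cost per frame."""
    return cpu_fps_potential / (1 + scheduler_overhead)

overhead = 0.25  # assumed ~25% extra CPU work for the software scheduler (illustrative)

print(round(cpu_limited_fps(116, overhead)))  # ~93 FPS: roughly 2600X-class throughput in that scene
print(round(cpu_limited_fps(163, overhead)))  # ~130 FPS: roughly 5600X-class throughput in that scene
```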
 
They wouldn't drop performance; at worst, they may not gain performance.

The HUB slides are very deliberately chosen to make the point: the most extreme example. In the slide in this post, for example, the CPU on the 5700 XT is also running at about 90% of its highest level of performance for that GPU. In this case the 5700 XT is 17% faster than the RTX 3090 because the CPU is actually about 50% slower than it needs to be, and only about half of that is down to the software scheduler on the Nvidia card; the other half is just a lack of CPU oomph. Hardware Unboxed have a habit of deliberately exaggerating to make the point. That's not to say it isn't true, it absolutely is, and with the 2600X the 3090 is only capable of 93 FPS in that game, in that particular part of it, and from a 3090 you would want a shed load more than that. You would probably get about 150 from a 5600X, because that CPU has the oomph to deal with the software scheduler and the extra headroom needed, though you might get even more with a 5800X.

This is a problem for modern gaming because games now lean quite heavily on the CPU, and thank #### for high-core-count, high-IPC CPUs, because without them Nvidia's GPUs are not getting any faster.

As for what the real issue is, if it's not as simple as a lack of will on Nvidia's part, I have no idea, but AMD created a scheduler for the modern age and Nvidia have not yet replicated it; there is such a thing as Intellectual Property rights.

Well, it looks as if Intel have overcome the problem, and the Nvidia software layer is so locked down that they could be infringing on IP, I suppose. If Nvidia are borrowing IP, it would explain the black-box aspect of the SDK. But either way, I'm sure Intel and AMD would license IP to Nvidia, and that would allow all developers to optimise for Nvidia.
 
Well, it looks as if Intel have overcome the problem, and the Nvidia software layer is so locked down that they could be infringing on IP, I suppose. If Nvidia are borrowing IP, it would explain the black-box aspect of the SDK. But either way, I'm sure Intel and AMD would license IP to Nvidia, and that would allow all developers to optimise for Nvidia.

They might, but do Nvidia want to pay AMD or Intel to license technology?

Leather Jacket Man is proud, and by proud I mean narcissistic.

Even Intel are not that proud.
 
Wasn't Maxwell the first without a hardware scheduler, which allowed it to be more power efficient?
I think there was also talk about how nVIDIA drivers made good use of that (software scheduler), enabling them to get better performance out of DX11, while AMD could not do as much since theirs was a HW scheduler.
 
That's relative: if you want high FPS and play at a lower resolution you may be limited by the CPU, or if you want RT without DLSS. Or... just simple variation in frame rate in CPU-intensive areas in some games.

The only time I can see it being a problem is if people are doing the "low pro" thing and running a lot of settings dialled back at a lower resolution. Most people in that situation would be aiming to be on a fast CPU.

Did I read that right, in that it suggests a Ryzen 5 3600 is sufficient if paired with GPUs up to the RTX 3070, after which a 5600X would be a better match? And once an appropriate CPU/GPU match is made, the difference in performance between AMD and Nvidia is fairly minimal?

I'm using a 3070 FE at 1440p (and a bit of 4K) paired with a 2013-era 6-core, 12-thread CPU at 4.4GHz. There is pretty much no instance where I see performance degraded to the point that an older GPU would be no worse or better, except maybe in some older games if I drop settings for silly-high FPS and the CPU becomes the limit, where almost any GPU would be the same or worse.
 
Wasn't Maxwell the first without a hardware scheduler, which allowed it to be more power efficient?
I think there was also talk about how nVIDIA drivers made good use of that (software scheduler), enabling them to get better performance out of DX11, while AMD could not do as much since theirs was a HW scheduler.

I think the GTX 500 series was the last Nvidia GPU with a hardware scheduler so yes probably Maxwell.

Early GCN GPUs like the HD 7000 (GCN 1.0) and R9 290 (GCN 1.1) weren't good with games that had a high main-thread load; this is where Nvidia's software scheduler did a far better job, by splitting the main thread into multiple threads.

You could see that at the time, where there was often quite a large disparity between the two when the CPU was the bottleneck, exactly like you are seeing here but in reverse, and yes, tech journalists made an enormous amount of noise about it back in the R9 290 days. Odd that, now the tables have turned, no one other than HUB wants to earn any content revenue out of it.

AMD refined the scheduler with the next iteration of GCN (GCN 1.2, RX 400 series) to do the same thing at the hardware level with the main thread that Nvidia were doing at the software level. If you look at old DX11 games, for example CS:GO, and one that I personally know, Insurgency, the CPU-limited performance is the same on equivalent post-GCN 1.2 GPUs as it is on Nvidia.
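
Conceptually, and hugely simplified (this is a toy sketch, not real driver code), that splitting works something like this: the game hammers one submission thread, and the driver farms the per-draw work out to a pool of worker threads, which recovers throughput on a heavy main thread at the cost of extra CPU cycles overall.

```python
# Toy sketch of the idea, not an actual driver: one "main thread" produces draw calls,
# and a pool of driver worker threads does the per-draw validation/translation in parallel.
from concurrent.futures import ThreadPoolExecutor

def translate_draw_call(call_id: int) -> str:
    # stand-in for state validation and building a GPU-ready command packet
    return f"packet-{call_id}"

draw_calls = range(10_000)  # produced by the game's single render thread

with ThreadPoolExecutor(max_workers=4) as pool:  # the driver's worker threads
    packets = list(pool.map(translate_draw_call, draw_calls))

print(len(packets), "command packets built")  # 10000 command packets built
```

The trade-off is exactly the one being discussed: the work doesn't disappear, it just gets spread over more CPU cores.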
 
BTW, I think it's a good thing that tech journalists make a fuss about things these hardware companies could improve on; it forces them to pull their fingers out of their behinds and put the work in. And that has worked very well over the years. In that sense they are doing their job.

I just wish they would be a lot more consistent with that, because it's not just one vendor that's needed to make improvements over the years!
 
The only time I can see it being a problem is if people are doing the "low pro" thing and running a lot of settings dialled back at a lower resolution. Most people in that situation would be aiming to be on a fast CPU.

Or on Ultrawide and fast monitors, multiple monitor setups, VR, etc.

Anyway, there is still a significant difference at 1440p Ultra when it comes to the 6900 XT vs the 3090.

https://youtu.be/JLEIJhunaW8?t=743
1440p Ultra Quality in HZD

https://youtu.be/JLEIJhunaW8?t=863
1440p Ultra Quality in WD:L.

RT also requires extra work from the CPU, so even though the 3090 is theoretically better at it, the gap could be maintained in other titles down the road, or in existing ones.
 
Or on Ultrawide and fast monitors, multiple monitor setups, VR, etc.

Anyway, there is still a significant difference at 1440p Ultra when it comes to the 6900 XT vs the 3090.

https://youtu.be/JLEIJhunaW8?t=743
1440p Ultra Quality in HZD

https://youtu.be/JLEIJhunaW8?t=863
1440p Ultra Quality in WD:L.

RT also requires extra work from the CPU, so even though the 3090 is theoretically better at it, the gap could be maintained in other titles down the road, or in existing ones.

From previous tests, the nVidia software driver overhead seems to depend a lot on memory bandwidth and latency as well, which means my 1650 with its quad-channel, tuned 2400MHz DIMMs probably sees less impact than even more recent dual-channel CPUs.
 
From previous tests, the nVidia software driver overhead seems to depend a lot on memory bandwidth and latency as well, which means my 1650 with its quad-channel, tuned 2400MHz DIMMs probably sees less impact than even more recent dual-channel CPUs.

Probably, but people with "lower end" processors, such as those in the tests, don't really have access to that. :)
 