The CPU determines GPU performance. On Nvidia anyway.

For the majority of users, upgrading to Ampere will mean a performance regression.


In simple terms, that Intel CPU or non-5000-series Ryzen running 3600 IF speeds can’t cut the mustard.
 
Ideally I want to game at 4K, so would a 6700 XT be up to the job? I would happily take the recommendation of the 6600, but after using the 8GB 3070 I've noticed some games are right on the 8GB mark when it comes to VRAM usage!

Looks like a case of buy a Radeon card, or a Ryzen 5800X and fast memory to pair with an Nvidia card.
 
Buy a console; graphics cards today are a rip-off, period.

£1200 graphics card, £1000 for the rest of the PC, and it runs Watch Dogs at 1080p/medium with a 112fps average, when £450 consoles run it at 4K/60fps? Not to mention no monitors have good HDR. I'd rather have that than minor texture upgrades and other subtle effects that tank performance for very little visual benefit.

That’s not the case with an all-AMD system, it seems. Apart from the £1200, if you want an RX 6900 XT.
 
I am quite sure that the CPU utilisation never even approaches anything remotely close to 100% :cry:
The problem is with Nvidia's software engineers, and maybe Microsoft's.

But the software design is based on the available hardware to a large extent. It would be nice to see what is happening within the Nvidia driver/API.
 
Hmm, yeah, interesting this. I'm about to make the jump from a 6700K @ 4.5GHz to either a 5800X or 5900X. Wonder if I'll see a decent jump with a 2070S. Mainly hoping for better performance in MSFS.

If you go to 9:15 in the video they have the RTX 2080 Ti, but yeah, it seems all Nvidia cards need the highest-end system to hit the quoted figures.
 
You don't really need the 5800X with the 2070S, a 5600X will be more than enough, unless you're planning to upgrade to a 3080 at some point with the new CPU.

In general though, I would seriously consider upgrading from the 6700K anyway; it's probably on the limit with most things even with the 2070S.

I’ve been playing around with an overclocked Xeon 1260L v5. With an RDNA Radeon card it’s holding up OK under gaming. Switch to an Nvidia card and the performance tanks.

Obviously the Skylake chip gets taken to the cleaners. But yeah, AMD FTW.
 
Thanks. Spent the last couple of weeks getting up to speed with new hardware, and I agree: I think the 5600X for purely gaming sub-4K looks the pick of the bunch, but I would be looking at upgrading the GPU in a year or so. That's how I've often upgraded in the past. I must admit I'm still pretty pleased with how well it handles the vast majority of my games, and I'm mostly just satisfying an itch.

An RX 6600 XT would be the best upgrade :p
 
It worked out well for Nvidia back when games were very main-thread heavy, a problem with the DX10/11 APIs, which dumped everything onto one thread.

Nvidia got around it with a scheduler built into the driver that split the worker thread into 4, first seen with the GTX 700 series. The problem with that is you're using the CPU to do that work.

AMD took a different approach: they built a thread scheduler into the GPU itself. Those are the "ACE" units, or Asynchronous Compute Engines, first seen in GCN 1.2 (Polaris), again 4 of them, and the GPU itself is doing all the work.
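Purely as a toy sketch of the difference being described here, and nothing like the real driver code on either side, the C++ below models the two approaches: a "software scheduled" path where the CPU spawns four worker threads and pays for distributing a frame's draw calls itself, and a "hardware scheduled" path where the CPU just hands the whole list over and the GPU's own scheduler is assumed to do the splitting. All of the names (DrawCall, submit_software_scheduled and so on) are invented for the illustration.

```cpp
// Toy model only: contrasts CPU-side scheduling (split work across 4 driver
// worker threads) with handing the whole command list to the GPU's own
// scheduler. Types and functions are invented for this example.
#include <cstdio>
#include <thread>
#include <vector>

struct DrawCall { int id; };

// Stand-in for the per-draw-call "driver work" the CPU must do when the
// driver schedules in software.
static void cpu_prepare(const DrawCall& dc) {
    volatile int sink = 0;
    for (int i = 0; i < 1000; ++i) sink += dc.id + i;  // burn CPU cycles
}

// Nvidia-style, as described in the post above: the driver splits the
// frame's draw calls across 4 CPU worker threads.
static void submit_software_scheduled(const std::vector<DrawCall>& frame) {
    const int kWorkers = 4;
    std::vector<std::thread> workers;
    for (int w = 0; w < kWorkers; ++w) {
        workers.emplace_back([&, w] {
            for (std::size_t i = w; i < frame.size(); i += kWorkers)
                cpu_prepare(frame[i]);  // the CPU pays for the scheduling
        });
    }
    for (auto& t : workers) t.join();
}

// AMD-style, as described in the post above: the CPU hands the list over and
// the GPU's own scheduler distributes the work (nothing extra to do here).
static void submit_hardware_scheduled(const std::vector<DrawCall>& frame) {
    (void)frame;
}

int main() {
    std::vector<DrawCall> frame(10000);
    for (int i = 0; i < 10000; ++i) frame[i].id = i;

    submit_software_scheduled(frame);
    submit_hardware_scheduled(frame);
    std::puts("software-scheduled path spent CPU time; hardware path did not");
}
```

The point of the toy is only that the first path eats CPU cycles that scale with the amount of GPU work, which is why a weaker CPU hurts more on that path.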

Wasn't it Hawaii or Tonga? Some GPU named after an island in the Pacific.
 
The CPU limit is imposed by Nvidia’s GPU design; it’s not the CPU that is the problem. A normal* situation for Nvidia would be something like an 11900K/5800X. Less, and the scenario becomes distinctly less normal.

The minimum system requirements for Ampere are just incredible. An RX 5600 XT shouldn’t be able to beat an RTX 3090 regardless of system specification.

How many people have upgraded from RDNA 1 to Ampere and are unaware they have suffered a performance regression?
 
Did I read that right, in that it suggests a Ryzen 5 3600 is sufficient if paired with GPUs up to the RTX 3070, after which a 5600X would be a better match? And once an appropriate CPU/GPU match is made, the difference in performance between AMD and Nvidia is fairly minimal?

8 cores seem* to offer the least performance hit.
 
Yeah you might be right.

Seems AMD first introduced Asynchronous Compute Engines with the AMD Tahiti GPU line of chips in 2011. Nvidia can’t be ten years behind.

I’m not buying the software scheduler problem, as Nvidia would have simply updated its driver within the last 10 years.
 
They are aware of it; up to this point at least they haven't wanted to create a hardware scheduler. Why that is, only they know, but it would increase the die size and increase power consumption.

And if most people don't know about this, why should they increase their costs and risk the power-efficient GPU reputation they held, up until Ampere vs RDNA 2 at least? Nvidia only care about how it looks, not what it actually does. Up until now no one has talked about this, and even now no one other than Hardware Unboxed is, just like the fact that no one is talking about the extra input latency that DLSS causes.

That in itself is an architectural issue. I have some suspicions of what the issue is, but exposing the API/driver layer on Nvidia seems impossible.

Nvidia needs a much better design, or at least an honest minimum system requirement. How many people have upgraded to an RTX card and dropped performance because the CPU overhead has increased?
 
They wouldn't drop performance; at worst they may not gain performance.

The HUB slides are very deliberately chosen to make the point, using the most extreme example. In the slide in this post, for example, the CPU on the 5700 XT is also running at about 90% of its highest level of performance for that GPU; in this case it's 17% faster than the RTX 3090 because the CPU is actually about 50% slower than it needs to be, and only about half of that is down to the software scheduler on the Nvidia card, the other half is just lack of CPU oomph. Hardware Unboxed have a habit of deliberately exaggerating to make the point. That's not to say it isn't true, it absolutely is: with the 2600X the 3090 is only capable of 93 FPS in that game, in that particular part of it, and from a 3090 you would want a shed load more than that. You would probably get about 150 from a 5600X, because that CPU has the oomph to deal with the software scheduler and the extra headroom needed, though you might get even more with a 5800X.
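As a rough way to picture where numbers like that come from (the figures below are made up for the example, not Hardware Unboxed's measurements), you can think of the observed frame rate as being capped by whichever of the CPU or GPU ceiling is lower, with the software scheduler eating a slice of the CPU budget:

```cpp
// Illustrative only: a "slowest stage wins" model of CPU vs GPU limits.
// All numbers are invented to mirror the scenario described above.
#include <algorithm>
#include <cstdio>

// Observed FPS is capped by whichever side is slower.
static double observed_fps(double cpu_fps_ceiling, double gpu_fps_ceiling) {
    return std::min(cpu_fps_ceiling, gpu_fps_ceiling);
}

int main() {
    const double gpu_ceiling = 200.0;  // frames/s the GPU alone could manage
    const double cpu_weak    = 140.0;  // frames/s a weaker CPU could feed
    const double cpu_strong  = 230.0;  // frames/s a stronger CPU could feed
    const double driver_tax  = 0.65;   // fraction of the CPU budget left after
                                       // the software scheduler takes its share

    std::printf("weak CPU:   %.0f fps\n",
                observed_fps(cpu_weak * driver_tax, gpu_ceiling));    // ~91
    std::printf("strong CPU: %.0f fps\n",
                observed_fps(cpu_strong * driver_tax, gpu_ceiling));  // ~150
}
```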

This is a problem for modern gaming because games now lean quite heavily on the CPU, and thank #### for high-core-count, high-IPC CPUs, because without them Nvidia's GPUs are not getting any faster.

As for what the real issue is, if it's not as simple as a lack of will on Nvidia's part, I have no idea, but AMD created a scheduler for the modern age and Nvidia have not yet replicated it. There is such a thing as intellectual property rights.

Well, it looks as if Intel have overcome the problem, and the Nvidia software layer is so locked down they could be infringing on IP, I suppose. If Nvidia are borrowing IP, it would explain the black-box aspect of the SDK. But either way, I’m sure Intel and AMD would licence IP to Nvidia, and that would allow all developers to optimise for Nvidia.
 
From previous tests the nVidia software driver overhead seems

What tests? I can’t see what the Nvidia driver is doing. All we can guess is that whatever the driver is doing, it’s doing it wrong.

I'm still not convinced the driver is the issue.
 
I remember well, during the days of owning my 290X and Vega 56, many a fanboy's comments were about how much better Nvidia's power delivery was, and the general sniping/sneering. As you say, it's pushing the narrative loudly and consistently so that the mindshare absorbs it and dances along like the Pied Piper! :cry:

I remember pointing out how my 780 Ti pulled more power, produced more heat and had less performance than my 290. The 780 Ti was also close to double the price. Clearly the days of reasonably priced graphics are over. It’s now all about how many you can hoard to mine with.
 