
The CPU determines GPU performance. On Nvidia anyway.

BTW, I think it's a good thing that tech journalists make a fuss about things these hardware companies could improve on; it forces them to pull their fingers out of their behinds and put the work in. That has worked very well over the years. In that sense they are doing their job.

I just wish they would be a lot more consistent with that, because it's not just one vendor that has needed to make improvements over the years!
That's a very big ask.

As I pointed out elsewhere when someone said that Nvidia seemed to be doing very well with Samsung's cheaper 8nm:

Mostly because they somehow managed to change the message, and now perf/watt has gone the same way "upscalers are for console peasants" went: it's disappeared from the narrative.

It is amazing and almost sinister how well Nvidia are able to set the narrative all the time.
 
Also look at the frame rates: at 1080p Ultra, lows of 80 FPS on the 3090 vs 93 on the 6900 XT, and 74 FPS at 1440p Ultra, all of this with a 5600X.

So even at 1440p Ultra the 5600X is right on the brink with the 3090. OK, you're unlikely to pair a 5600X with a 3090, but I would have been interested to see whether there is still a bottleneck with a 5800X at 1080p Ultra, because there is a 17% gap between the 3090 and the 6900 XT even with the 5600X at 1080p Ultra. This is where future proofing comes in: a 5800X is the sort of CPU people do buy with a 3090, and if they get Nvidia's next latest and greatest they might find it isn't going to be any better.

So again they will have to change the CPU: Alder Lake, assuming it's going to be any faster in games than Zen 3, or Zen 3D, or Zen 4.
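
Quick sanity check on those numbers, using only the figures quoted above (the rest is just arithmetic):

Code:
# Back-of-the-envelope using the 1080p Ultra lows quoted above:
# 80 FPS lows on the 3090 vs 93 FPS on the 6900 XT, both on a 5600X.
fps_3090_low = 80
fps_6900xt_low = 93

gap = (fps_6900xt_low - fps_3090_low) / fps_3090_low * 100
print(f"6900 XT lows are ~{gap:.0f}% higher")  # ~16%, same ballpark as the 17% above

# If that 80 FPS floor is a CPU/driver limit rather than a GPU limit, a faster
# Nvidia GPU on the same 5600X shouldn't lift it much - which is the
# future-proofing worry.

The exact percentage depends on whether you take averages or lows, but the point stands either way.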
 
That's a very big ask.

As I pointed out elsewhere when someone said that Nvidia seemed to be doing very well with Samsung's cheaper 8nm:

Mostly because they somehow managed to change the message, and now perf/watt has gone the same way "upscalers are for console peasants" went: it's disappeared from the narrative.

It is amazing and almost sinister how well Nvidia are able to set the narrative all the time.

Look at how they treated HUB when they didn't follow Nvidia's narrative on DLSS 2 to the letter. People always give Nvidia the benefit of the doubt; you can see that when tech journalists over the years are forced to answer why they aren't saying anything about an Nvidia-related issue, it's always along the lines of "Nvidia are aware of it and promised they will deal with it, so there is no story here", but they never give that courtesy to AMD. They are afraid of Nvidia.
 
From previous tests the nVidia software driver overhead seems

What tests? I can't see what the Nvidia driver is doing. All we can guess is that whatever the driver is doing, it's doing it wrong.

I'm still not convinced the driver is the issue.
 
I remember well, during the days of owning my 290X and Vega 56, many a fanboy's comments were about how much better Nvidia's power delivery was, plus general sniping/sneering. As you say, it's pushing the narrative loudly and consistently so that the mindshare absorbs it and dances along like the pied piper! :cry:

I remember pointing out how my 780 Ti pulled more power, produced more heat and performed worse than my 290. The 780 Ti was also close to double the price. Clearly the days of reasonably priced graphics are over. It's now all about how many you can hoard to mine with.
 
What tests? I can't see what the Nvidia driver is doing. All we can guess is that whatever the driver is doing, it's doing it wrong.

I'm still not convinced the driver is the issue.

It isn't technically a driver issue but the way nVidia uses software for scheduling and multi-thread optimisations, similar to the DX11 intercept - it has the potential to provide big benefits, but also carries penalties if the way a game is developed works against it and/or where CPU resources are under contention.

I largely blame DX12/Vulkan really, as it isn't the approach most developers want for the problem, so they end up using lazy, inefficient ways of working around the parts of DX12/Vulkan where you have to reinvent the wheel, which basically results in having a DX11-like layer between the game and the GPU anyway.
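
To make that "DX11-like layer" idea concrete, here is a toy sketch of the general shape: an immediate-looking front end that quietly records work and submits batches from its own worker thread. This is purely illustrative Python, not real D3D11/D3D12, Vulkan or driver code; every name in it is made up.

Code:
import queue
import threading

class FakeExplicitQueue:
    """Stands in for an explicit-API queue (DX12/Vulkan style). Hypothetical."""
    def submit(self, command_list):
        print(f"submitting a batch of {len(command_list)} commands")

class ImmediateLayer:
    """Looks immediate to the caller, but records and defers submission."""
    def __init__(self, explicit_queue, batch_size=64):
        self.explicit_queue = explicit_queue
        self.batch_size = batch_size
        self.pending = []
        self.work = queue.Queue()
        # The hidden worker thread is where the extra CPU cost lives:
        # it competes with the game's own threads when the CPU is already busy.
        self.worker = threading.Thread(target=self._drain, daemon=True)
        self.worker.start()

    def draw(self, call):
        # The game thinks this is a cheap "immediate" call...
        self.pending.append(call)
        if len(self.pending) >= self.batch_size:
            self.work.put(self.pending)
            self.pending = []

    def flush(self):
        if self.pending:
            self.work.put(self.pending)
            self.pending = []
        self.work.put(None)           # sentinel: tell the worker to stop
        self.worker.join()

    def _drain(self):
        while True:
            batch = self.work.get()
            if batch is None:
                break
            # Reordering/optimisation of the batch would happen here.
            self.explicit_queue.submit(batch)

if __name__ == "__main__":
    layer = ImmediateLayer(FakeExplicitQueue())
    for i in range(200):
        layer.draw(f"draw_{i}")
    layer.flush()

The point being: when a game already saturates the CPU with its own threads, that hidden batching and submission work is exactly the kind of contention described above.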
 
Also look at the frame rates: at 1080p Ultra, lows of 80 FPS on the 3090 vs 93 on the 6900 XT, and 74 FPS at 1440p Ultra, all of this with a 5600X.

So even at 1440p Ultra the 5600X is right on the brink with the 3090. OK, you're unlikely to pair a 5600X with a 3090, but I would have been interested to see whether there is still a bottleneck with a 5800X at 1080p Ultra, because there is a 17% gap between the 3090 and the 6900 XT even with the 5600X at 1080p Ultra. This is where future proofing comes in: a 5800X is the sort of CPU people do buy with a 3090, and if they get Nvidia's next latest and greatest they might find it isn't going to be any better.

So again they will have to change the CPU: Alder Lake, assuming it's going to be any faster in games than Zen 3, or Zen 3D, or Zen 4.

If I was buying now I’d buy the highest end RDNA2 card and Zen3 CPU I could afford.

I have 5 home systems and only one configuration would be suitable for an RTX upgrade. The rest would either end up slower or would be better off with a Radeon RDNA2 card. Hopefully Intel can offer an alternative in Q1
 
It isn't technically a driver issue but the way nVidia uses software for scheduling and multi-thread optimisations, similar to the DX11 intercept - it has the potential to provide big benefits, but also carries penalties if the way a game is developed works against it and/or where CPU resources are under contention.

I largely blame DX12/Vulkan really, as it isn't the approach most developers want for the problem, so they end up using lazy, inefficient ways of working around the parts of DX12/Vulkan where you have to reinvent the wheel, which basically results in having a DX11-like layer between the game and the GPU anyway.

Is that just your opinion of what might be happening? Because what you’re saying makes no sense. You can’t see what is happening under the hood. There is no way to measure or test.

What tests have you seen?

I’d assume Nvidia use an extra layer of abstraction and developers have to guess what Nvidia are doing.
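
The closest anyone gets to a test is indirect: same CPU, same settings, a deliberately CPU-bound scene, swap only the GPU, and compare the frame-time logs. A rough sketch of that comparison (the file names and CSV layout are made up; the output of whatever frame-time capture tool you use would need adapting):

Code:
import csv
import statistics

def load_frametimes(path):
    # Expects a CSV with a "frametime_ms" column - illustrative format only.
    with open(path, newline="") as f:
        return [float(row["frametime_ms"]) for row in csv.DictReader(f)]

def summarise(frametimes):
    avg_fps = 1000 / statistics.mean(frametimes)
    # "1% low": FPS implied by the slowest 1% of frames.
    worst = sorted(frametimes)[int(len(frametimes) * 0.99):]
    low_fps = 1000 / statistics.mean(worst)
    return avg_fps, low_fps

# Hypothetical captures: same CPU, same settings, CPU-bound, one run per card.
for label, path in [("RTX 3090", "3090_cpu_bound.csv"),
                    ("RX 6900 XT", "6900xt_cpu_bound.csv")]:
    avg, low = summarise(load_frametimes(path))
    print(f"{label}: {avg:.0f} FPS avg, {low:.0f} FPS 1% low")

# If the card that should be faster posts the lower numbers here, the extra
# cost is on the CPU side (driver/runtime), because the GPU isn't the limit.

That's essentially how the HUB-style testing gets at it without ever seeing under the hood.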
 
What tests? I can't see what the Nvidia driver is doing. All we can guess is that whatever the driver is doing, it's doing it wrong.

I'm still not convinced the driver is the issue.

Out of genuine interest do you have any ideas on what it might be?

If I was buying now I’d buy the highest end RDNA2 card and Zen3 CPU I could afford.

I have 5 home systems and only one configuration would be suitable for an RTX upgrade. The rest would either end up slower or would be better off with a Radeon RDNA2 card. Hopefully Intel can offer an alternative in Q1

I don't normally spend £400+ on a CPU, I have never spent that much on a CPU, but I was well aware of this problem before HUB made a video on it, was worried 6-core CPUs weren't going to cut it much longer, and had no real confidence in AMD's future GPU performance.

I'm relieved about being wrong about the latter, but it's all moot now thanks to GPU shortages. I don't regret the CPU; it's a 2 or 3 GPU-generation CPU for me, so I don't have to worry about it for the next several years whatever happens.
 
Out of genuine interest do you have any ideas on what it might be?



I don't normally spend £400+ on a CPU, I have never spent that much on a CPU, but I was well aware of this problem before HUB made a video on it, was worried 6-core CPUs weren't going to cut it much longer, and had no real confidence in AMD's future GPU performance.

I'm relieved about being wrong about the latter, but it's all moot now thanks to GPU shortages. I don't regret the CPU; it's a 2 or 3 GPU-generation CPU for me, so I don't have to worry about it for the next several years whatever happens.

I think the point you made could be the issue. Jensen is stuck in DirectX 11 land.
 
We just need faster CPUs, as per-core CPU speed hasn't really advanced much over the last 5 years, so top-end GPUs are now being held back.
 
We just need faster CPUs, as per-core CPU speed hasn't really advanced much over the last 5 years, so top-end GPUs are now being held back.

Only RTX GPUs suffer, that's kind of the point…

Radeon RDNA GPUs are fine. Probably not perfect, but they even work well with all 7 versions of Sandy Bridge.
 
The problem for Nvidia is GPU clock speed: as resolution drops, clocks need to go up, but Samsung 8nm doesn't clock high enough to keep pace.

Imagine a 3090 on TSMC 7nm that uses a lot less power and clocks to 2600MHz; it would be a monster of a GPU, but Nvidia went with Samsung so they could make more money.
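
Just to put a rough number on that (1700MHz is my assumption for a typical stock 3090 boost clock; the 2600MHz is the hypothetical TSMC figure above, not a real part):

Code:
# Rough arithmetic only; the 1700 MHz stock boost is an assumption, and the
# 2600 MHz TSMC 7nm figure is the hypothetical from the post above.
stock_boost_mhz = 1700
hypothetical_tsmc_mhz = 2600

uplift = (hypothetical_tsmc_mhz - stock_boost_mhz) / stock_boost_mhz * 100
print(f"~{uplift:.0f}% higher clocks")  # roughly +53%

Even allowing for how far a real 3090 boosts past its rated clock, that would be a big jump.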
 
I remember pointing out how my 780 Ti pulled more power, produced more heat and performed worse than my 290. The 780 Ti was also close to double the price. Clearly the days of reasonably priced graphics are over. It's now all about how many you can hoard to mine with.
Hawaii vs GK110 was actually mostly a very big win for AMD in everything except sales and mindshare.

So aside from that, Hawaii was a far smaller chip, performed better and aged far better, but somehow the narrative was all about it being hot and loud. Nvidia's marketing is just scary, and there are enough weak "journalists" and fanboys that nothing ever gets said.

As for the thread topic: it really is in Nvidia's interest to fix this. Being able to use cheaper CPUs means people can spend more on Nvidia GPUs!
 
Regardless of whose card is faster, AMD have still failed so far to gain any market share with the best product line they have released in years.
 