
The CPU determines GPU performance. On Nvidia anyway.

Also look at the frame rates: at 1080p Ultra, lows of 80 FPS on the 3090 vs 93 on the 6900XT, and 74 FPS at 1440p Ultra, all with a 5600X.

So even at 1440p Ultra with the 3090 the 5600X is right on the brink. OK, you're unlikely to run a 5600X with a 3090, but I would have been interested to see if there is still a bottleneck with a 5800X at 1080p Ultra. There is a 17% gap between the 3090 and the 6900XT even with the 5600X at 1080p Ultra, and this is where your future proofing comes in, because a 5800X is the sort of CPU people with a 3090 do buy, and if they get Nvidia's next latest and greatest they might find it's not going to be any better.
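As a quick sanity check, that 17% figure follows almost directly from the lows quoted above (this is just arithmetic on the numbers in the post, nothing measured here):

```python
# Arithmetic check on the lows quoted above: 80 FPS on the 3090 vs
# 93 FPS on the 6900XT, both at 1080p Ultra with a 5600X.

def gap_percent(slower_fps: float, faster_fps: float) -> float:
    """How much faster `faster_fps` is than `slower_fps`, in percent."""
    return (faster_fps - slower_fps) / slower_fps * 100

rtx_3090_low = 80   # FPS lows, 1080p Ultra, 5600X (from the post)
rx_6900xt_low = 93  # FPS lows, 1080p Ultra, 5600X (from the post)

print(f"6900XT lead over the 3090: {gap_percent(rtx_3090_low, rx_6900xt_low):.0f}%")
# -> 16%, roughly the "17%" quoted above
```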

So again they will have to change the CPU, assuming Alder Lake is even going to be any faster in games than Zen 3, Zen 3D or Zen 4.

If I were buying now I'd buy the highest-end RDNA2 card and Zen 3 CPU I could afford.

I have 5 home systems and only one configuration would be suitable for an RTX upgrade. The rest would either end up slower or would be better off with a Radeon RDNA2 card. Hopefully Intel can offer an alternative in Q1.
 
It isn't technically a driver issue but rather the way Nvidia uses software for scheduling and multi-threaded optimisations, similar to the DX11 intercept. It has the potential to provide big benefits, but it also carries penalties if the way a game is developed works against it and/or where CPU resources are under contention.

I largely blame DX12/Vulkan, really, as it isn't the approach most developers want for the problem, so they end up using lazy, inefficient ways to work around having to reinvent the wheel with DX12/Vulkan, which basically results in a DX11-like layer between the game and the GPU anyway.
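Purely to illustrate the contention argument (a made-up toy model, not a measurement of what Nvidia's or any driver actually does): if the driver needs its own slice of CPU time per frame, that work is essentially free while there are spare cores, but it lands on the critical path once the game has every core busy.

```python
# Toy model only: made-up numbers to illustrate driver/game CPU contention.
# It is NOT a measurement of what any real driver does.

def cpu_frame_time_ms(game_threads: int, game_ms_per_thread: float,
                      driver_ms: float, cores: int) -> float:
    """CPU-side frame time for a toy game plus a toy software-scheduling driver.

    With a spare core the driver's work overlaps the game's; with every core
    busy it gets shared out across the game threads and slows the frame down.
    """
    if cores > game_threads:
        return max(game_ms_per_thread, driver_ms)
    return game_ms_per_thread + driver_ms / game_threads

for cores in (6, 12):
    t = cpu_frame_time_ms(game_threads=6, game_ms_per_thread=10.0,
                          driver_ms=4.0, cores=cores)
    print(f"{cores} cores: ~{t:.1f} ms CPU time per frame (~{1000 / t:.0f} FPS cap)")
# 6 cores:  ~10.7 ms (~94 FPS cap)  - driver work competes with the game
# 12 cores: ~10.0 ms (~100 FPS cap) - driver work hides on a spare core
```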

Is that just your opinion of what might be happening? Because what you’re saying makes no sense. You can’t see what is happening under the hood. There is no way to measure or test.

What tests have you seen?

I’d assume Nvidia use an extra layer of abstraction and developers have to guess what Nvidia are doing.
 
Out of genuine interest do you have any ideas on what it might be?



I don't normally spend £400+ on a CPU, in fact I have never spent that much on a CPU, but I was well aware of this problem before HUB made a video on it and was worried 6-core CPUs weren't going to cut it much longer, and I had no real confidence in AMD's future GPU performance.

I'm relieved about being wrong about the latter, but it's all moot now thanks to GPU shortages. I don't regret the CPU; it's a 2 or 3 GPU generation CPU for me, so I don't have to worry about it for the next several years whatever happens.

I think the point you made could be the issue. Jensen is stuck in DirectX 11 land.
 
We just need faster CPUs, as CPU speed per core hasn't really advanced much over the last 5 years, so top-end GPUs are now being held back.

Only RTX GPUs suffer, that's kind of the point…

Radeon RDNA GPUs are fine. Probably not perfect, but they even work well with all 7 versions of Sandy Bridge.
 
Clocks are not only about the process node, it's also the architecture. Zen 2 and Zen 3 are on the same node, and outside of LN2 Zen 2 doesn't get anywhere near 5GHz no matter what you do to it, yet Zen 3 will clock past 5GHz out of the box without doing anything to it.

Oh, I almost forgot while writing this: RDNA 1 and RDNA 2 are also on the same node, and the 5700XT can get to 2GHz where the 6700XT can get to 2.8GHz. That's a 40% difference.

Clocks are about architecture as much as the node :)
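The 40% number is just the ratio of the clocks quoted above (same node, different architecture):

```python
# Same-node clock comparison using only the figures quoted in the post.
def clock_gain_percent(old_ghz: float, new_ghz: float) -> float:
    return (new_ghz - old_ghz) / old_ghz * 100

print(f"5700XT (2.0GHz) -> 6700XT (2.8GHz): +{clock_gain_percent(2.0, 2.8):.0f}%")
# -> +40%, despite RDNA 1 and RDNA 2 both being on TSMC 7nm
```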

Nvidia was beating AMD with a node disadvantage: Turing (12nm) vs RDNA 1 (7nm), and in the case of the 1080 Ti (16nm) vs Radeon VII (7nm), IIRC.

So clearly architecture is as important as, if not more important than, the process node. It's most likely a case of designing the architecture to leverage the most from the node.

Anyway, none of the gate sizes are responsible for Nvidia's issues, or answer how Nvidia lost such a monumental advantage to AMD in a single generation of GPUs.

What AMD have achieved with RDNA 2 is more impressive than what they achieved with Ryzen.
 
Out of genuine interest do you have any ideas on what it might be?



I don't normally spend £400+ on a CPU, in fact I have never spent that much on a CPU, but I was well aware of this problem before HUB made a video on it and was worried 6-core CPUs weren't going to cut it much longer, and I had no real confidence in AMD's future GPU performance.

I'm relieved about being wrong about the latter, but it's all moot now thanks to GPU shortages. I don't regret the CPU; it's a 2 or 3 GPU generation CPU for me, so I don't have to worry about it for the next several years whatever happens.

I think Nvidia might have planned to add extra hardware or change the pipeline configuration, and that plan hasn't materialised for some reason.

Nvidia clearly weren't expecting AMD to beat them with RDNA 2, but I don't think anyone outside of AMD was. It's been a very easy entry into the super high end for AMD, and I have a hunch that if it wasn't for the pandemic, or if Nvidia had come out with a better design, AMD would have offered a stronger-looking stack of parts.
 
Nvidia are pushing their cards to the limit to keep up with RDNA2, and that's the real reason for the high power consumption, just like AMD used to have to do. Only this time it's AMD who aren't: they have another 400MHz (20%) of headroom no problem, but you're lucky to get more than 5% out of Ampere.

I can't wait to see RDNA3. If the rumours are true, Nvidia are in deep manure.

After experiencing Ampere I can say Nvidia has pushed its cards way past the limit and exposed its architectural weakness.

If I had to sum up Ampere in one word it would be tardy.
 
It's not AMD's success, it's Intel's failure?

Word it however makes you OK with it in your head, but Intel are about to launch yet another 250-watt CPU and still won't convincingly beat AMD's outgoing, soon-to-be-last-generation CPU.

Intel tried to come back at AMD, but AMD just kept beating Intel and has never stopped opening up the gap since.
 
It was costing more than a Ryzen 7 3700X, which also came with a much better cooler. I found the flip-flop with Zen 3 really weird. People were saying how Zen 2, even though it lost in per-core CPU performance (and gaming performance), offered you more cores for the same price than Intel. Hence, Intel wasn't worth the premium per core.

The moment AMD eked out a lead over Intel in single-core performance (and gaming performance), that all went out the window, and nobody cares about price/performance... apparently. Something like the Core i5 10400F/Core i5 10600KF/Core i5 11400F have all at various times offered much better value than a £250~£280 Ryzen 5 5600X. Once Intel brought out B560, and you had decent boards like the MSI B560M PRO-VDH for £90~£100, even the argument that you needed a very expensive Intel motherboard was not quite true anymore.

But because it was all AMD, so many just forgot about the Core i5 10400F. I got a Core i5 10400 (the one with an IGP) for only £100. With some tweaking it's probably as fast as my Ryzen 7 3700X in many games.

Also, last time I checked, Intel sells far more desktop CPUs overall than AMD does, especially when you look at sales market share.



The Ryzen 5 5600X was the Ryzen 5 3600 replacement: same 65W TDP and same Wraith Stealth CPU cooler. AMD never really released a Ryzen 5 3600X replacement at 95W TDP.

The move from 14/12nm to 7nm was always going to push the price of Zen 3 up, and the difference in performance and power use relative to Intel was always going to increase demand for the 5000-series chips.
 
Even going from a 3600 to a 5800X with a 3080 at 1440p was only about 10% in the best case, and certainly not worth the 400 quid I paid. On the other hand, the £650 I paid for the 3080 was about a 120% boost over my old GTX 1070 Ti, so well worth the cash.
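Putting those two upgrades on the same footing (using only the ~10%/£400 and ~120%/£650 figures quoted above, so very rough):

```python
# Rough value-for-money comparison from the figures quoted above.
def gain_per_100_gbp(gain_percent: float, price_gbp: float) -> float:
    return gain_percent / (price_gbp / 100)

cpu_upgrade = gain_per_100_gbp(gain_percent=10, price_gbp=400)   # 3600 -> 5800X
gpu_upgrade = gain_per_100_gbp(gain_percent=120, price_gbp=650)  # 1070 Ti -> 3080

print(f"CPU upgrade: ~{cpu_upgrade:.1f}% extra performance per £100 spent")  # ~2.5%
print(f"GPU upgrade: ~{gpu_upgrade:.1f}% extra performance per £100 spent")  # ~18.5%
```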

I'm not sure your numbers are correct TBH, or in what context, but if what you say is true a properly tuned 1600AF system would perform within 5-10% of a 5800X for less than half the price.
 
It didn't push up the price of Zen 2 though, even though that was also on 7nm, and you would think that as the process matures and yields increase, the cost to manufacture the CPUs would become cheaper.



The 3600 is faster than a 1600AF.

The 5800X is miles faster than both. 7nm is significantly more expensive than 12nm. Nvidia didn't like the price.
 
Both the 3600 and 5600X are on 7nm, yet going on the prices they released at, there was almost a 50% price increase for the latter.
Sure, the 1600AF was cheaper, but then it was a re-released 2600 to use up some silicon AMD had sitting around, and it also came out after the 3600 had been released, so it's not like they were going to charge much for it. For reference, the 2600 and 3600 launched at $199 while the 5600X launched at $299.
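For reference, the "almost 50%" figure is just the ratio of the launch MSRPs quoted above:

```python
# Launch MSRPs quoted in the post above.
launch_msrp_usd = {"Ryzen 5 2600": 199, "Ryzen 5 3600": 199, "Ryzen 5 5600X": 299}

increase = (launch_msrp_usd["Ryzen 5 5600X"] / launch_msrp_usd["Ryzen 5 3600"] - 1) * 100
print(f"3600 -> 5600X launch price increase: {increase:.0f}%")  # -> 50%
```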

TSMC's pricing has been ever increasing. Google "TSMC 7nm price increase". You should inform yourself about topics you want to argue about.

Edit: Just 4 days ago, TSMC announced another 20% price increase on 7nm and below.
 
While that is the case, I think come Alder Lake the 5600X will be under £200, which will show just how much profit AMD was gobbling up.

What you think is wrong. AMD's prices have tracked TSMC's price increases. AMD are probably moving to 3D V-Cache now to avoid the latest TSMC price increase.

I hope AMD are making a healthy profit. They have the better products.
 
As long as it trickles down in price I can wait. Charging high prices at launch isn't too bad, as sometimes the herd rush in and have to have it, so let them pay to be first. What we don't want to see is prices staying the same or increasing; for the sake of 10%-type gains this is not of interest to some earlier-gen users.

10% performance uplift across a number of cores is the difference between a loss and a profit in some tasks.

In this case (gaming) you gain 20-30% by choosing RDNA2 over Ampere, even with an Intel chip. So if bang for buck is critical, buy a lower-tier RDNA2 card and a lower-priced CPU.
 
I think it depends on your selection of build components in the first place. Then it depends on what res you are gaming at and whether you prefer it to be locked within sync range, or need max FPS/Hz at all costs.

The RTX processing overhead is across the board; that is just how the architecture has been designed. Either way you can get more done with less on an all-AMD system. In no way should I be seeing less performance when upgrading to an RTX 3080 from a Vega 64 or 5700XT, regardless of what machine I'm fitting it in.
 
That's just the way a CPU bottleneck works if you try to run a high-end card on a low-end machine; it just affects Nvidia more than AMD.

Besides, cards like the 3080 are designed for 4K, so if you're running them below that and on an old CPU then you can't expect the full potential. It's the same with AMD, just to a lesser degree.

They are not designed for 4K; that would be the 3090. I've seen over 9GB of VRAM use on the 3080 at 4K.
 
If people are buying and using 3060 Ti/6700s for 1440p with good results, I think it is fair to say (even Jensen's kitchen release championed it) that a 3080 is indeed a 4K card. What do you think DLSS is also for?

You can't rely on DLSS to offset a memory limitation. It's marketing fluff. The 3080 should have more memory. Even the 3060 ships with 12GB. The 3060 and 3090 have more VRAM than is needed and the 3080 is struggling.
 
If VRAM were an issue it would have shown up in reviews, yet the 3080 beats AMD's 16GB 6800XT in the majority of titles at 4K.

Well it is. Do you want some screenshots?

The RX 6800XT is a better product than the RTX 3080. I've used both and RDNA2 is just a better experience.
 