The CPU determines GPU performance. On Nvidia anyway.

So aside from that, Hawaii was far smaller, performed better and aged far better, but somehow the narrative was that it was hot and loud. Nvidia's marketing is just scary, and there are enough weak "journalists" and fanboys that nothing ever gets said.

Correct. I went to upgrade the 290X sometime in 2017 and was shocked that the 580 was barely a sidegrade. It ended up being one of the better GPU purchases, and it definitely got some strange mindshare berating. The only thing I will concede is its poor cooler out of the box: far too loud, and it did get toasty (but I installed a third-party AIO at some point after, which probably made it last so long).
 
Regardless of whose card is faster, AMD have still failed so far to gain any market share with the best product line they have released in years.

While this is true, even if they had stock, would things be much different? In the past, when Nvidia was behind on tech or often crazily poor value, they still outsold AMD by a large margin.

That is some crazy mindshare with which it is hard to compete.

Still, even if it wasn't for consoles using up most of the wafers, does AMD actually want volume?

Because often in the recent past it has looked like, after spending big on the fixed costs of designing, validating, masks, etc., they then didn't really want to price to sell. Which is very strange. Fixed costs are, after all, fixed, so as long as they're not making a loss, why limit your volumes?
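To put some entirely made-up numbers on that reasoning, here's a quick sketch. Once the fixed costs are sunk, anything above the per-unit variable cost is margin, so a lower price at higher volume can easily out-earn a high price at low volume. The figures below are hypothetical, not AMD's actual costs:

```python
# Toy profit model with invented numbers, just to illustrate the point:
# once fixed costs (design, validation, masks) are sunk, every unit sold
# above its variable cost adds profit, so volume is your friend.

FIXED_COSTS = 300e6            # hypothetical: design + validation + mask set
VARIABLE_COST_PER_CARD = 450   # hypothetical: wafer share, memory, board, cooler

def profit(price: float, units: int) -> float:
    """Total profit at a given price and sales volume."""
    return (price - VARIABLE_COST_PER_CARD) * units - FIXED_COSTS

# High price / low volume vs lower price / higher volume:
print(f"£900 x 1M units: £{profit(900, 1_000_000):,.0f}")   # £150,000,000
print(f"£650 x 3M units: £{profit(650, 3_000_000):,.0f}")   # £300,000,000
```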
 
If AMD continue to match Nvidia on price/performance, then what reason would I or others have to switch to AMD?

For me to consider AMD, they need to offer something different to Nvidia.
 
The problem for Nvidia is GPU clock speed: as resolution drops, clocks need to go up, but Samsung 8nm doesn't clock high enough to keep pace.

Imagine a 3090 on TSMC 7nm that uses a lot less power and clocks to 2600MHz; it would be a monster of a GPU, but Nvidia went with Samsung so they could make more money.

Clocks are not only about the process node, it's also the architecture. Zen 2 and Zen 3 are on the same node, and outside of LN2, Zen 2 doesn't get anywhere near 5GHz no matter what you do to it, yet Zen 3 will clock past 5GHz out of the box without doing anything to it.

Oh... I almost forgot while writing this: RDNA 1 and RDNA 2 are also on the same node, and the 5700 XT can get to 2GHz where the 6700 XT can get to 2.8GHz. That's a 40% difference.

Clocks are about architecture as much as the node :)
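Just to sanity-check those figures in Python (the Zen out-of-box boost clocks are my rough assumptions; the GPU clocks are the ones quoted above):

```python
# Same node, different architecture: percentage clock uplift.
pairs = {
    "Zen 2 -> Zen 3 (both TSMC 7nm)": (4.7, 5.05),     # GHz, assumed boosts
    "5700 XT -> 6700 XT (both TSMC 7nm)": (2.0, 2.8),  # GHz, as quoted
}

for label, (old, new) in pairs.items():
    print(f"{label}: {(new / old - 1) * 100:.0f}% higher clocks")
```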
 
7nm is more mature now than it was two years ago though, so you would expect better clocks.
 
Not much better clocks though.

RDNA2's clocks are by design.

In CPU terms, things like pipeline length count towards clocks. Similar things apply to GPUs.
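For what it's worth, the textbook first-order model of why pipeline length sets the clock ceiling looks like this; all the delays are invented for illustration:

```python
# Splitting the same logic into more pipeline stages shortens the critical
# path per stage, so the clock can rise, at the cost of per-stage register
# overhead. Illustrative numbers only.

LOGIC_DELAY_NS = 10.0    # total combinational delay of the pipelined work
STAGE_OVERHEAD_NS = 0.2  # register/clock-skew overhead added per stage

def max_clock_ghz(stages: int) -> float:
    """Highest clock the design could run at with this many stages."""
    cycle_time_ns = LOGIC_DELAY_NS / stages + STAGE_OVERHEAD_NS
    return 1.0 / cycle_time_ns

for stages in (5, 10, 20):
    print(f"{stages:2d} stages -> {max_clock_ghz(stages):.2f} GHz")
```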

What could Nvidia have done with Ampere on TSMC 7nm instead of cheaping out with Samsung's 8nm?

Higher clocks would not be guaranteed; better density and efficiency probably would have been, which often equates to higher clocks anyway. Cheaper coolers and a lower TDP are something AIBs would have loved though.

Hush, hush, but despite the "shortages" and crazy prices, Nvidia have actually sold crazy amounts of Ampere cards, and since TSMC is flat out in terms of capacity, it is just as well they went with Samsung.
 
We will see if Nvidia decide to go with TSMC 5nm next time around, as AMD had quite a large process advantage this time, with Samsung 8nm being closer to TSMC 12nm than 7nm.
 
It isn't technically a driver issue but the way nVidia uses software for scheduling and multi-thread optimisations, similar to the DX11 intercept. It has the potential to provide big benefits, but it also carries penalties if the way a game is developed works against it and/or where CPU resources are under contention.

I largely blame DX12/Vulkan really, as it isn't the approach most developers want for the problem, so they end up using lazy, inefficient workarounds where you have to reinvent the wheel with DX12/Vulkan, which basically results in having a DX11-like layer between the game and the GPU anyway.
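A crude way to picture the contention point (this is a toy model I made up, not how any real driver is implemented): the driver's software scheduling work is effectively free while spare cores exist, but once the game saturates the CPU it competes with game threads and lengthens the frame:

```python
# Toy model: per-frame CPU cost of driver-side scheduling/submission is
# hidden while cores are idle, but adds to frame time once the CPU is
# oversubscribed. All numbers are invented.

CORES = 6
DRIVER_MS = 2.0  # assumed per-frame CPU cost of the software scheduler

def frame_time(game_threads_ms: list[float], driver_ms: float) -> float:
    """Crude model: total CPU work spread across cores, but never faster
    than the single longest game thread (which can't be split)."""
    total = sum(game_threads_ms) + driver_ms
    return max(total / CORES, max(game_threads_ms))

light_load = [8.0, 6.0, 4.0]              # spare cores: driver cost hidden
heavy_load = [9.0] * 6 + [8.0, 7.0, 6.0]  # oversubscribed: driver cost bites

for label, load in (("light load", light_load), ("heavy load", heavy_load)):
    base = frame_time(load, 0.0)
    real = frame_time(load, DRIVER_MS)
    print(f"{label}: {base:.2f} ms -> {real:.2f} ms with driver work")
```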

DX12/Vulkan should allow more than a few lights to actually cast shadows, better feed the GPU and... since ray tracing (RTX) works under DX12, I guess that's needed too.
The problem is that devs (or at least their bosses) want the quickest buck with the least amount of investment. If games were held to the same rigour as a rocket launch, how many would be bug-free enough to pull it off?
We'll probably need an example of a big studio being taken to court and actually losing big before this gets fixed.


Of course they've failed to gain market share: they didn't have a clear marketing direction or consistency in their products over the years, or basically the goal to do it. Things got worse after their previous CEO wanted to cut back on dedicated GPUs, so a period of R&D was lost.
And if they keep their silly pricing with the 6xxx and the next series, they're going to have a hard time taking back market share, more so as Intel enters the market.
 
That failure to gain market share is probably partially down to the epic **** up with drivers in the 5000 series, with the black screens. They were doing so well with drivers before that.
 

Nvidia was beating AMD with a node disadvantage, Turing (12nm) v RDNA 1 (7nm), and in the case of the 1080 Ti (16nm) v Radeon VII (7nm), IIRC.

So clearly architectures are as, if not more, important than process nodes. It's most likely a case of designing the architecture to leverage the most from the node.

Anyway, none of the gate sizes are responsible for Nvidia's issues, or answer how Nvidia lost such a monumental advantage to AMD in a single generation of GPUs.

What AMD have achieved with RDNA 2 is more impressive than what they achieved with Ryzen.
 
AMD's top card was only 40 CUs for RDNA 1, so if they had released an 80 CU card it would have been around 2080 Ti performance. Also, the jump from N7 to N7+ was probably larger than Nvidia's jump from TSMC 12 to Samsung 8.

An RTX 2080 Ti at 260W vs a 3070 Ti at 290W, while being almost the same performance, just shows how bad Samsung 8nm is.
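In perf-per-watt terms that comparison looks like this; treating the 3070 Ti as equal performance to the 2080 Ti is an assumption based on the claim above, not a benchmark:

```python
# Rough perf/W check: performance normalised to the 2080 Ti = 100.
cards = {
    "RTX 2080 Ti (TSMC 12nm)": (100.0, 260.0),    # (relative perf, watts)
    "RTX 3070 Ti (Samsung 8nm)": (100.0, 290.0),  # assumed ~equal perf
}

for name, (perf, watts) in cards.items():
    print(f"{name}: {perf / watts:.2f} perf/W")
```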
 
Out of genuine interest, do you have any ideas on what it might be?

I don't normally spend £400+ on a CPU; I have never spent that much on a CPU. But I was well aware of this problem before HUB made a video on it, was worried that 6-core CPUs weren't going to cut it much longer, and had no real confidence in AMD's future GPU performance.

I'm relieved about being wrong on the latter, but it's all moot now thanks to the GPU shortages. I don't regret the CPU; it's a 2 or 3 GPU-generation CPU for me, so I don't have to worry about it for the next several years, whatever happens.

I think Nvidia might have planned to add extra hardware or change the pipeline configuration, and that plan hasn't materialised for some reason.

Nvidia clearly wasn't expecting AMD to beat them with RDNA 2, but then I don't think anyone outside of AMD was. It's been a very easy entry into the super high end for AMD, and I have a hunch that if it wasn't for the pandemic, or if Nvidia had come up with a better design, AMD would have offered a stronger-looking stack of parts.
 
AMD tried the better-value-than-Nvidia approach and it didn't work.

Having the world's fastest card, convincingly, might do it, but being cheaper doesn't, and AMD are not going to cut their margins for nothing.

AMD got back on their feet by undercutting Intel, but they killed Intel's mindshare by killing them on the performance charts.
 
Nvidia are pushing their cards to the limit to keep up with RDNA 2, and that's the real reason for the high power consumption, just like AMD used to have to. Only this time it's AMD who aren't: they have another 400MHz (20%) of headroom no problem, but you're lucky to get more than 5% out of Ampere.

I can't wait to see RDNA 3. IF the rumours are true, Nvidia are in deep manure.
 
I think that was mostly down to Intel stagnating on their profits and failing to come up with a generational improvement.

It's not AMD's success, it's Intel's failure?

Word it however makes you OK with it in your head, but Intel are about to launch yet another 250-watt CPU and still not convincingly beat AMD's outgoing, soon-to-be last-generation CPU.
 