
It looks like the 'real'/affordable RDNA3 + next-gen NV desktop launch won't happen until September. Thoughts?

Well, since the plain 6800 also had the best perf/watt, I think that was possibly their best card this gen. As we saw last gen, things like the 6700 XT got clocked a good bit past the perf/watt sweet spot (i.e. once the mining boom was in swing, everything released was clocked to the max and pre-scalped; witness also the 3080 vs the 3080 Ti).

The real question for AMD is: after spending millions developing RDNA3, and with no console players to subsidise the R&D, what volume do AMD want with RDNA3?

Ultra-low volume but high margins? Well, with Nvidia at near 90% market share, that is a very risky move. Or far lower margins but much higher volume?

Their "we are a premium brand" nonsense strategy really hasn't worked well, and this year the recession is real, the cost-of-living crisis is almost global, and miscalculations like insisting Zen 4 must be a clean break, available only on the expensive, DDR5-only and mostly PCIe 5.0 AM5 platform, have really eroded their ability to grow in the consumer space. And while Zen 4 should be great in laptops (well, compared to anything that relies on Intel's huge P-cores), they are late to that market. That leaves servers. Well, Intel's designs are terrible there, but server buyers are conservative, and I'm sure Intel can carve out some niches for E-core-only server chips.

A company willing to grow would snap up all the wafers they can, with the quite easy calculation: any margin we make is going to be far better than the console margins, even if it won't please Wall Street.
Why would anyone expand in a recession?
 
Well, the RDNA2-based RX 6600 XT was slower (on average) than the RTX 3060, according to Techspot:

EDIT - Oops, it actually depends on the resolution which one is faster.

[Image: 4K average performance chart]


Spec-wise, it's basically identical to the RDNA3-based RX 7600M XT:

And this is what AMD claims about the 7600M XT's performance:

[Image: AMD slide - RDNA 3 mobile performance claims]


More games compared here:

Seems like it's around 20% faster than an RTX 3060 8GB, depending on the game.

So, I'd assume at least a 20% performance increase for RDNA3 Compute Units. The floating point (FP32) processing power is approx. double.

That's before taking clock speeds into account.

The boost clock on the RX 6800 is just 2100 MHz, so 2500-2600 MHz for Navi32 wouldn't be a surprise.
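
For anyone who wants to sanity-check that, here's a quick back-of-envelope in Python. The +20% per-CU figure and the Navi32 clocks are my guesses from above, not confirmed specs:

Code:
# Rough sketch: combine the assumed per-CU gain with a speculative clock bump
# to estimate a Navi32 part against the 60-CU RX 6800. All inputs are guesses.

CU_GAIN = 1.20               # assumed RDNA3 per-CU performance improvement
RX6800_BOOST_MHZ = 2100      # RX 6800 boost clock
NAVI32_BOOST_MHZ = 2550      # midpoint of the speculated 2500-2600 MHz range

clock_gain = NAVI32_BOOST_MHZ / RX6800_BOOST_MHZ   # ~1.21x
print(f"clock gain: {clock_gain:.2f}x")
print(f"combined estimate: {CU_GAIN * clock_gain:.2f}x the RX 6800")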


Yeah, it's not a 4K card, that one... :)

At 1440p it's pretty damned good; it beats the 3060 12GB. But it's also a dead card, replaced by the slightly faster RX 6650 XT, which is on average £50 cheaper.

Look at this: it sits right between the 3060 12GB and the 3060 Ti, and the Gigabyte Gaming is £309. Good card, good price. All things being equal it should be the best seller, but no, the 3060 is, by far.

[Image: 1440p benchmark chart]
 
To dominate market share in the following upturn, as long as they survive the recession!

Is this supposed to be some kind of fairy tale? They could possibly extend cheap financing options, but that would be unheard of in this space; AMD doing a GE?
Anyhow, if I were tasked with building a strategy for a recession, I would start with an assumption of lower volumes by default and then build a plan to maximise profits under that assumption.
 
So how well is AMD's lower-volume-but-higher-margins strategy working for them?

High margins mean expensive prices for us consumers, and given the choice between expensive and very expensive, it seems most consumers buy the very expensive Nvidia cards. Not helped by yet another poor launch with Navi31.

And I don't really mean the reference cooler - although AMD should have issued a recall on those, as anything else costs them more in reputation - no, I mean releasing, yet again, to some silly arbitrary launch date with drivers that weren't polished. The idle power issue, for example, has now mostly been solved, but launch-day reviews were poorer than they needed to be.

So AMD are left trying to recoup their fixed costs over an ever-smaller volume. Even at the same margin, having 90% of the market share means Nvidia can ride this out far better.
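
A toy example of why shrinking volume hurts (completely made-up numbers, just to show the shape of the problem):

Code:
# The same fixed R&D bill spread over fewer cards eats more of each card's margin.
FIXED_COSTS = 500_000_000  # hypothetical R&D + tooling spend in dollars

for units in (10_000_000, 5_000_000, 1_000_000):
    print(f"{units:>10,} cards -> ${FIXED_COSTS / units:,.0f} of fixed cost per card")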
 
So, it looks like I wasn't far off saying a 20% performance improvement for RDNA3 Compute Units:

[Image: AMD slide - RDNA 3 compute unit pair]


The magic number is 17.4% for the CUs.

AMD also claims a 20% improvement for the overall silicon design:


I think this includes cache system improvements also:

AMD says that there are fewer Infinity Cache hits with RDNA3.

I think we will get another 10 or 20% boost in performance from higher clock speeds.

I think the earlier claims of a 50% performance-per-watt improvement must have assumed large increases in clock frequency, like the rumours suggest we could see with an RDNA3 refresh (and probably conflated things by including compute unit increases in this figure as well).

So, it looks to me like the performance of a 60-compute-unit RDNA3 GPU will be as fast as either an RTX 3080 or an RTX 3090 (closer to the RTX 3090), depending on the max boost clock frequencies.
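
Putting rough numbers on that (the 3080/3090 deltas over the RX 6800 are my ballpark figures, not measured data):

Code:
# 60-CU RDNA3 vs the 60-CU RX 6800: +17.4% per CU, plus 10-20% more clock.
CU_GAIN = 1.174

for clock_gain in (1.10, 1.20):
    total = CU_GAIN * clock_gain
    print(f"clock +{(clock_gain - 1) * 100:.0f}% -> ~{(total - 1) * 100:.0f}% over the RX 6800")

# ~29-41% over an RX 6800 spans roughly RTX 3080 territory at the low end
# and approaches the RTX 3090 at the high end (my rough assumption:
# 3080 ~ +20%, 3090 ~ +35% over the RX 6800 at 4K).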
 

Another AMD slide?

We already know they are not 17.4% faster per compute unit.

RX 6950 XT: 5120 shaders @ 2310 MHz = 100% performance
RX 7900 XT: 5376 shaders @ 2400 MHz = 110% performance

5376 / 5120 = 1.050 (+5.0%)
2400 MHz / 2310 MHz = 1.039 (+3.9%)

1.050 x 1.039 = 1.091, so roughly +9.1% combined (the gains multiply rather than add)
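
The same sums in Python, for anyone who wants to check:

Code:
# Expected 7900 XT uplift over the 6950 XT from shader count and clock alone.
shader_gain = 5376 / 5120          # +5.0%
clock_gain = 2400 / 2310           # +3.9%
combined = shader_gain * clock_gain

print(f"{shader_gain:.3f} x {clock_gain:.3f} = {combined:.3f} (~+{(combined - 1) * 100:.1f}%)")
# ~+9.1% expected from specs vs ~+10% measured: barely any per-shader gain.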

The 7900 XT also has a 320-bit bus vs the 6950 XT's 256-bit.

Don't trust AMD's slides....

[Image: relative performance chart]
 
If the shaders really were 17.4% faster per clock, the 7900 XT would be at 128% on that ^^^ chart, pushing just past the 4080. I have no doubt that's where AMD wanted it, and thought it would be, but it isn't.
 
Shaders can't be faster per clock because there isn't any change in their basic functionality; the bigger question is whether they will be utilised better. The dual-issue approach means greater utilisation, and in the case of the 7900 XT there has been an increase in utilisation, but it wasn't enough to deliver on the intended performance envelopes.
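
A toy throughput model of what dual-issue does to the numbers (my framing, not AMD's; the co-issue rates are invented to show the spread):

Code:
# Peak FP32 doubles when a second op can co-issue, but delivered throughput
# depends on how often the compiler/scheduler actually finds a second op.

def effective_tflops(shaders: int, clock_ghz: float, co_issue_rate: float) -> float:
    # 2 FLOPs per FMA; (1 + co_issue_rate) ops per shader per clock
    return shaders * 2 * (1 + co_issue_rate) * clock_ghz / 1000.0

# 7900 XT-like figures: 5376 shaders at 2.4 GHz
for rate in (0.0, 0.2, 1.0):
    print(f"co-issue {rate:.0%}: {effective_tflops(5376, 2.4, rate):.1f} TFLOPS")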
 

I'm not an expert on this so can't comment; I certainly don't pretend to know more than the people who designed it. If I did, I wouldn't have time to post on this forum.

Having said that, there are glimpses where it does much better than average, or even very well indeed. There is a massive discrepancy between its best and worst performance; the architecture or drivers seem very inconsistent.

[Images: three per-game benchmark charts]
 
A shader is just expected to do one FMA operation per clock; there's no concept of IPC when the functionality is that narrowly defined.
 
Interesting:

This suggests that high clocks in games might be possible on some Navi32/Navi33 cards.
 

Maybe, but we still don't know why game clocks are so much lower than benchmark clocks.

My theory is that benchmarks like this only stress one part of the GPU, which allows the card to push power to that part specifically and run faster, whereas games are dynamic and use the entire GPU, which results in lower clocks.
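
Here's that theory as a crude model (pure speculation on my part, nothing from AMD telemetry): treat dynamic power as proportional to the active fraction of the chip times f^3 (voltage tracking frequency), then see what a fixed power cap does to clocks:

Code:
# Under a fixed board power budget, lighting up more of the chip forces clocks down.
POWER_CAP = 1.0  # normalised power budget

def sustainable_clock(active_fraction: float) -> float:
    # power ~ active_fraction * f^3  =>  f = (cap / active_fraction)^(1/3)
    return (POWER_CAP / active_fraction) ** (1.0 / 3.0)

baseline = sustainable_clock(1.0)  # whole GPU busy, e.g. a heavy game
for frac in (1.0, 0.7, 0.5):
    print(f"{frac:.0%} of the GPU active -> {sustainable_clock(frac) / baseline:.2f}x clock")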
 
Why? Everyone will just buy Nvidia anyway...

Like they do now.
That happens because AMD still believes they can basically price match Nvidia with a minimal discount.
I'm not brand loyal; I've bought Matrox, 3DFX, Nvidia, ATI and AMD in my life. My last few cards have all been AMD for a simple reason: they were the best bang for the buck when I needed to upgrade.
No brand and no feature will persuade me to spend more than €600 on a card, so I'll just get the best combination of raster and VRAM under that price, which right now means either an RX 6700 XT or an Arc A770 (similar price; one wins on speed and drivers, the other on VRAM).

I've been playing since 1988. I don't care about 4K, I think RT is nice but still immature, and more than anything I know that both players are killing the gaming market. Aside from a small minority of enthusiasts, very few people will spend more than a month's pay on an entire system; cards costing a grand are rich kids' toys, just like high-end smartphones. Nvidia is simply trying to pull the low end into their streaming service, and AMD thinks they still profit if the low end goes to console.

Back on topic, I do hope AMD will either try to squeeze the 7800 XT onto the lowest N31 bins or find some trick that lets them build a more efficient N32 chip (maybe fixing something that went wrong with N31?). We desperately need cheaper new-gen GPUs.
 

Darujhistan will just say they should be giving it away, despite them having better margins than Nvidia :cry:
 