It looks like the 'real'/affordable RDNA3 + next-gen NV desktop cards won't launch until September. Thoughts?

I suspect AMD will put less focus and budget into desktop GPUs going forward. They surely realise that no matter how good the hardware, the majority will still buy Nvidia. Not that RDNA3 is good - as a product it's clearly inferior where it matters today (RT & software stack). Nvidia's 83% market share is a very hard nut to crack.

They don't allocate a lot of wafers to dGPUs, but seem to prioritise consoles, it appears.
 
...but seem to prioritise consoles, it appears.
Is it AMD ordering those, or MS & Sony? I thought AMD licensed the IP and designed the chips, but it was MS & Sony who go to TSMC and tell them how many they want.

I could be wrong though, as AMD's semi-custom SoC side of the business can both make chips themselves and/or design/license them for other companies to make. (Obviously they're all made by TSMC, but I thought they were on different order books.)
 
Is it AMD ordering those, or MS & Sony? I thought AMD licensed the IP and designed the chips, but it was MS & Sony who go to TSMC and tell them how many they want.

I could be wrong though, as AMD's semi-custom SoC side of the business can both make chips themselves and/or design/license them for other companies to make. (Obviously they're all made by TSMC, but I thought they were on different order books.)

AMD does, because they supply the chips. They are TSMC's second-largest customer, AFAIK.
 
So, what he says about the top RDNA3 GPUs is true: there's about 2x the number of transistors (compared to RDNA2), but raster performance isn't anything like 2x as fast. I personally found RDNA gen 1 unimpressive, but it doesn't get a mention. At the time there was lots of talk of 'Big Navi' eventually arriving, but AMD basically admitted there were scaling issues with the first generation - it remains a problem in 2023...

The same thing is true for Nvidia's top Ampere and 'Ampere Next' GPUs: more than a doubling of transistors doesn't give you anything like 2x the performance (GA102 - 28.3 billion transistors vs AD102 - 76.3 billion), so he is starting off from a false premise - it just isn't that simple. I think he's being a bit histrionic, considering that prices are going to come down further for RDNA3 GPUs, probably from September onwards.
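
To put rough numbers on that: a quick sketch, where the transistor counts are the ones above but the ~1.65x performance figure is just my assumption of the typically reported 4K raster uplift, not measured data.

```python
# Transistor ratio vs (assumed) performance ratio for GA102 -> AD102.
ga102 = 28.3e9  # GA102 transistor count (from above)
ad102 = 76.3e9  # AD102 transistor count (from above)

transistor_ratio = ad102 / ga102
print(f"Transistor ratio: {transistor_ratio:.2f}x")  # ~2.70x

# Assumed average 4K raster uplift (roughly what reviews reported) -
# swap in your own figure here.
perf_ratio = 1.65
print(f"Assumed performance ratio: {perf_ratio:.2f}x")

# Performance gained relative to the extra transistor budget:
print(f"Scaling efficiency: {perf_ratio / transistor_ratio:.0%}")  # ~61%
```

So even on generous assumptions, performance grew at only ~60% of the rate the transistor count did, which is the point: transistor count alone tells you very little.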

Also, Nvidia already had a significant performance advantage with the RTX 3090 and 3090 Ti when those cards released - the RX 7900 XT beats or matches both.

It's quite possible that Navi32 GPUs will look unimpressive relative to Navi21 - but still a reasonable upgrade compared to Navi22 (cards like the RX 6700 XT).
AMD's main problem with their new GPUs is scaling, contrary to what lots of people have been banging on about (clock speed).

You can see this just by looking at the RDNA2 consoles, which struggle to reach 30 FPS at 4K in some titles (the Series X GPU has 52 Compute Units). They don't have enough Compute Units: I think you'd need at least twice as many to deliver a steady 60 frames per second at a reasonable level of image quality, and this should apply to ray tracing also. 120 CUs seems like it would be a good target for RDNA4.
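
As a sanity check on that claim, here's the naive linear-scaling arithmetic, assuming frame rate scales proportionally with CU count (which ignores bandwidth, clocks and CPU limits, so treat it as a rough upper bound):

```python
# Naive CU-scaling estimate for the 'not enough Compute Units' argument.
series_x_cus = 52   # Xbox Series X GPU (from above)
observed_fps = 30   # struggling titles at 4K (from above)
target_fps = 60

# Assume FPS scales linearly with CU count at fixed clocks.
required_cus = series_x_cus * target_fps / observed_fps
print(f"CUs needed for {target_fps} FPS: {required_cus:.0f}")  # 104
```

That's 104 CUs under perfectly linear scaling, so 120 CUs with some headroom for ray tracing looks about right.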

So people need to look at the functional units of GPUs, not the transistor count - because the architecture is different and is designed to do more tasks, like compute, AI and ray tracing. I believe the RDNA3 CUs are fairly well optimised - the issue is that there just aren't enough of them. They need to make significant power improvements with RDNA4 to allow for much higher CU counts.

AMD is not going to release a GPU which requires an 800 or 1000 watt power supply, presumably because it just isn't worth it (people buy the high-end cards they sell anyway) and it would almost certainly require an 'exotic' cooling solution. Such cards would undoubtedly be much more difficult to produce as well, at least with the current RDNA3 5nm fabrication process.

A final point on ray tracing in 2023:

[Image: watch-dogs-legion-rt-3840-2160.png - Watch Dogs: Legion ray tracing benchmark at 3840x2160]

There's really just one card that does 'well' in ray tracing, and it (still) costs >£1,500 and is recommended with an 850 watt power supply.

But you can see that a linearly scaled-up RDNA3 (or, more likely, RDNA4) GPU ought to be able to match this level, and do it at a cheaper price point - but until that happens, we're a long way from RT becoming a standardised feature in PC games (one that game developers put time and effort into optimising).
 
It looks like the RX 7900 GRE is going to be a limited release - not for the PC building market, sadly...


No wonder it has a silly name, rather than just being the 'RX 7900'.

The only good thing to say about this release, really, is that the 260W TDP sounds quite power efficient.

So that pretty much just leaves Navi32 GPUs to launch in September. I think the main selling point of these will be FSR3, which will likely only work on RDNA3 cards.
 
So that pretty much just leaves Navi32 GPUs to launch in September. I think the main selling point of these will be FSR3, which will likely only work on RDNA3 cards.

I believe this may well be right. With the new AI cores, and all three GPU makers working on texture size reduction technologies, it might not be a long wait before the upscaling tech can be independent from the games. I don't see much hardware improvement over the coming years unless prices increase again and again, so fewer hardware performance improvements and more software/refinement improvements could be a path forward, which would make sense given the AI boom.
This is my confused perception trying to follow all this :D
 
AMD hasn't said anything about FSR3 or frame generation support on consoles, which have RDNA2 GPUs, so I doubt it's possible on existing consoles.

If new consoles or refreshes are released with RDNA3 or maybe RDNA4 GPUs, I imagine FSR3 or similar software will be enabled on future console games.

Ampere lacks proper hardware support for frame generation, so Nvidia decided not to add driver support for it on Ampere. Similarly, RDNA2 has no hardware support for frame gen, so I doubt AMD will add FSR3 driver support. If it's possible at all, it will be something that gets added later, but I'm doubtful it would function well.

AMD ofc won't tell people in advance that FSR3 is unlikely to be supported on older GPUs, because they really need to sell off the remaining stock of RDNA2 GPUs.

Baldur's Gate 3 is getting support for FSR2 in September - it's interesting to see no mention of FSR3. Presumably the developers would rather support FSR2, to make sure there's a widely supported upscaling tech available shortly after release. Or maybe FSR3 is still too new and developers aren't sure how to implement it yet.

I expect that AMD has already done a lot of work on designing RDNA4, as I think they are wrapping up the RDNA3 series now. If RDNA4 GPUs are going into future consoles, the focus on the new series will be even greater.
 
AMD hasn't said anything about FSR3 or frame generation support on consoles, which have RDNA2 GPUs, so I doubt it's possible on existing consoles.
Am hoping we see FSR3 soon, and that it contains some overall improvements. Bit shocking, really, that Intel has overtaken them with XeSS 1.1, which apparently provides better visuals (especially in motion) with equivalent performance!
 
But you can see that a linearly scaled-up RDNA3 (or, more likely, RDNA4) GPU ought to be able to match this level, and do it at a cheaper price point - but until that happens, we're a long way from RT becoming a standardised feature in PC games (one that game developers put time and effort into optimising).

You hit the nail right on the head.

People want RT and DLSS, but those things are not free in the architecture - you get them largely in place of rasterisation performance. In Nvidia's case, quite a chunk of transistors is given over to tensor cores, so you get slightly better image quality than AMD's FSR, but it's still at a lower resolution - and visibly at a lower resolution. If those transistors were given over to shaders, you would not need DLSS to get that performance.

What you're paying for now is DLSS in place of pure performance.
 
I think one of the things I dislike about upscaling and RT is the need to spend even more time tweaking game settings - no longer can you just set the resolution and a graphics preset (e.g. Ultra). I suppose it's less of an issue on 1080p monitors.
 
It's still an issue at 1080p. I have to shorten the FOV, as it murders the GTX 1660 with the rest of the settings at high. I know this won't be an issue on, say, a 3060 Ti / 6700 XT or newer, but it makes me wonder what settings will be needed for, say, Starfield.
 
Those specs don't pass the sniff test for me personally: 60 Compute Units on the 7800 XT, when the 6800 XT has 72 and the 7900 XT has 84, seems wrong. 60 CUs is what I'd expect on a 7800 non-XT card.

And then there's the weird PSU requirement: the 7900 XT has a TDP of 300W with a recommended 700W PSU, but the 7800 XT has the same 300W TDP with a recommended PSU of 800W.
 
Those specs don't pass the sniff test for me personally: 60 Compute Units on the 7800 XT, when the 6800 XT has 72 and the 7900 XT has 84, seems wrong. 60 CUs is what I'd expect on a 7800 non-XT card.

And then there's the weird PSU requirement: the 7900 XT has a TDP of 300W with a recommended 700W PSU, but the 7800 XT has the same 300W TDP with a recommended PSU of 800W.

It's a PowerColor overclocked model, so of course the PSU requirement is higher than for an AMD reference card. That's not rocket science.
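
For what it's worth, the baseline recommendation isn't mysterious either. Here's the rough sizing arithmetic - only the 300W TDP comes from the leaked spec; the CPU, system and headroom figures are my assumptions for illustration:

```python
# Rough PSU sizing for a 300 W card. All figures except the GPU TDP
# are illustrative assumptions, not vendor numbers.
gpu_tdp = 300        # W, from the leaked spec
cpu_power = 150      # W, assumed high-end CPU under load
rest_of_system = 75  # W, assumed board, RAM, drives, fans
headroom = 1.4       # margin for transient spikes / PSU efficiency

recommended = (gpu_tdp + cpu_power + rest_of_system) * headroom
print(f"Suggested PSU: {recommended:.0f} W")  # ~735 W
```

That lands between the two recommendations, and an overclocked partner card drawing above reference power plausibly accounts for the bump from 700W to 800W.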

As for the core count, I suppose they could be wrong, but this is not a Twitter leaker who can say anything - it's an AMD board partner accidentally releasing specs early for an unannounced product on their website, so it's 100 times more likely to be correct than anything coming from someone like MLID. In fact, the odds that the core-count specs are correct are so high, I'll make a $1000 bet with you right now.
 