RDNA 3 rumours Q3/4 2022

I think there's been an almost obsessive focus on MCM graphics cards, with the assumption that there would be multiple GPU dies per card in 2022, and no mention of the performance and efficiency of this approach.

The largest RDNA2 die was 520 mm² on 7nm, so I think AMD could design much larger dies on TSMC's 5nm and 6nm. In theory, shrinking Navi21 down would reduce the die size to ~420 mm² on 6nm EUV. So I think AMD will be constrained by power consumption rather than die size.
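
For what it's worth, here's the back-of-the-envelope arithmetic behind that ~420 mm² figure as a quick Python sketch. The ~0.81x area scaling factor for N7 to N6 is my own illustrative assumption to make the numbers line up, not an official TSMC figure:

```python
# Rough die-shrink estimate for Navi 21 going from 7nm to 6nm.
# The area scaling factor is an assumed illustrative value, not official data.
navi21_area_n7 = 520      # mm^2, largest RDNA2 die (Navi 21) on 7nm
n7_to_n6_scale = 0.81     # assumed area scaling factor, N7 -> N6

navi21_area_n6 = navi21_area_n7 * n7_to_n6_scale
print(f"Estimated Navi 21 on 6nm EUV: ~{navi21_area_n6:.0f} mm^2")  # ~421 mm^2
```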
 
What threw people for a loop was the leak that Navi 31 has 7 chips on the package.

We now know that's because it's 1 GPU chip and 6 memory chips, all connected through a substrate and IF.

This still qualifies as "MCM" because all of these chips sit on the one GPU package, but people thought each chip would be a GPU, the way each chiplet on Ryzen is CPU cores. That's not the case with Navi 31; it's still a single monolithic GPU core.

Here is what Navi 31 looks like so you can understand what I'm saying:

[image: Navi 31 package, one GPU die plus six memory dies]
 
MCM itself is nothing new - if you look at Voodoo cards, etc., they used MCM aspects in their implementation. Where it becomes interesting is where you can use MCM designs to expand the capabilities beyond what can be done by simply shoving 2 discrete cores on there, with all the limitations of SLI/CF technology.

I wonder if there is anything in the way 3dfx did their approach that could be reapplied to modern GPUs...
I also think we should stop playing fanboy for multinational corporations and push them to do something new. Raytracing is nice and all, but it does not really change the way we game.

You know what could do it? Proper voxel acceleration. Imagine the flexibility of Minecraft or Teardown but with actually decent looks and properly detailed worlds... Heck, imagine any gaming genre with properly malleable environments!
 
Early adopters of MCM may be in for a shock; it may be a good move for a manufacturer to be second to market with this tech.
It may also mean monolithic GPUs are coming to the end of their lifespan and that you reap the rewards early with MCM designs. It could mean that too, could it not?

Innovation can sometimes cause disruption, and I don't know why anyone would be against that unless they were negatively affected by the disruption.
 
Ray tracing on the Nvidia 2XXX cards, let's be honest, was pretty useless, and worse still there were very few games that used it during the time those cards were on the market.

This is the reality of being an early adopter.

Having said that, I must start thinking about a Ryzen 7XXX build for an RTX 4XXX card. :D
 
Yes, but Turing wasn't trying to deliver any manufacturing improvements (which frankly it should have); this MCM approach is all about how to build a big GPU with billions more transistors in a way that is more cost efficient. That's why Nvidia are, imho, rattled; they know there is the potential for disruption this time.

As long as it works and delivers benefits, who as a consumer cares? What's not to like about that when you've got a market leader abusing its position?

If Navi 31 comes out and you find it needs special per-game drivers and game developer changes to leverage the power, then yeah, it will be an early adopter's nightmare, but I don't see anyone saying that; AMD must know that wouldn't float. It might be why 2 GCDs are rumoured to be off the table for now, but I don't really care if they land 2X performance without further price inflation.
 
I think we will see a large increase in Streaming Multiprocessors for 'Ampere Next' cards and Compute Units for RDNA3 cards. That's because both of these units appear to be directly linked to the number of ray tracing cores integrated into the GPU.

Logically, RDNA3 GPUs will need a much larger increase in Compute Units, in order to catch up to Nvidia's ray tracing performance, unless they have found a way to increase RT performance per RT core.

I wonder if packaging infinity cache / L3 cache with an MCM design could help to reduce GPU die production costs, or make production easier?

if they land 2X performance without further price inflation.

A 2x performance jump in a single generation is extremely rare. Nvidia managed about a 57% increase in performance from the RTX 2080 Ti to the RTX 3080 Ti / RTX 3090. RDNA2 was a massive improvement vs RDNA, but I think this was only achievable due to the inherent power inefficiencies that existed in RDNA.

These inefficiencies led AMD to hold off from releasing a mega card for RDNA gen 1, as the temps would've been s*** hot. The 5700XT was hitting ~92°C under load, with a 110°C hotspot.
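
To put the 2x claim in context, here's the trivial uplift arithmetic as a Python sketch; the performance index numbers are illustrative stand-ins scaled to the ~57% figure above, not real benchmark data:

```python
# Compare a quoted generational uplift against a true 2x jump.
# Index values are illustrative, scaled to the ~57% figure quoted above.
perf_index = {
    "RTX 2080 Ti": 100.0,  # baseline
    "RTX 3090": 157.0,     # ~57% faster, per the claim above
}

uplift = perf_index["RTX 3090"] / perf_index["RTX 2080 Ti"] - 1
print(f"Gen-on-gen uplift: {uplift:.0%}")  # 57%
print(f"A true 2x jump would need an index of {2 * perf_index['RTX 2080 Ti']:.0f}")  # 200
```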
 
Logically, RDNA3 GPUs will need a much larger increase in Compute Units, in order to catch up to Nvidia's ray tracing performance, unless they have found a way to increase RT performance per RT core.
They are though, aren't they? At least as far as we can speculate. More CUs, plus more cache, plus more clock speed: all of that equals quite a substantial perf increase IF it lands, certainly a lot more than 57%, and that could be a problem for Nvidia.

I wouldn't put too much weight on what Nvidia 'has managed' in the past; they quite blatantly and intentionally restrict performance to control price, market supply and the rollout of technology. That's not tin foil hat territory, it's a given; it's the symptom of having a monopoly...

I find it much more believable at this point that Navi 31 will be 2X and RTX 4xxx will not, and that is not because I have starry eyes for one vendor over the other; it's because it looks feasible, it adds up.
 
It depends really. RDNA2 improved IPC / performance per compute unit too, and had large improvements to power efficiency.

The main things I'd expect from RDNA3 would be a large increase in compute units, and perhaps a 30-40% increase in power efficiency due to using TSMC's much improved 5nm EUV process, probably less for the 6nm-based GPUs. Everything else is likely just speculation and guesswork.

I think they will need to make some further optimizations to power efficiency, in addition to using a new fab process.

Maybe some significant clock speed increases too, but I think this is likely to be a secondary consideration, as it will probably lead to a large increase in power consumption for the amount of performance gained.
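
As a rough illustration of what a 30-40% perf/W gain buys at the same performance level (the 300 W baseline is an illustrative assumption, not a real spec):

```python
# What a perf/W improvement means for power draw at iso-performance.
# The 300 W baseline is an illustrative assumption, not a real spec.
baseline_power_w = 300
for eff_gain in (0.30, 0.40):  # 30-40% better performance per watt
    new_power_w = baseline_power_w / (1 + eff_gain)
    print(f"{eff_gain:.0%} perf/W gain -> ~{new_power_w:.0f} W for the same performance")
```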
 
Remember when we were the first people to get a consumer SSD? Wooo, OCZ Core 120GB for £360! What an amazing new thing! Oh... totally unusable due to caching stutters in Windows, and OCUK wouldn't give a refund because that's just life, woohoo! No one would buy it for even £10, so it got binned. That's what RDNA will be if they use MCM before it's ready. Best they do MCM on something cheap and underpowered this coming gen.

Remember, your hardware store doesn't give a refund if the technology doesn't work properly.
 
Sure, if it's not ready then it will be garbage.

I'm going to suggest that it's not ready right now, cos we can't buy it, but in 5 months' time will it be as ready as a 600W Nvidia GPU? None of us know... we have to wait this one out, is my opinion, i.e. what is AMD going to announce following Nvidia's opening cash grab, and is it credible?
 
We don't know what Nvidia is planning for 'Ampere Next' graphics cards. I think all the rumours so far have just been guesswork, and Nvidia will use Samsung for consumer graphics cards and TSMC for the data centre / compute GPUs, just like they did with Ampere. No reason to think they wouldn't do this again if it improves yields or reduces production costs.

AMD will put out the best product they can, as they are still catching up with Ampere (the RTX 3090 and 3090 Ti), and they won't know what Nvidia has planned either. They particularly need to focus on ray tracing performance.

I think a single-die design is bound to be simpler than any eventual multi-GPU MCM design, regardless of power usage, so it's not a sensible comparison to make. Ampere (GA102) is already 628 mm², so I think Nvidia is likely to be more constrained by die size than AMD is with RDNA3.

Looking at the die size of Ampere, I'd consider the die shrink to 5nm EUV to be fairly essential to Nvidia's RTX 4000 series plans. For AMD, perhaps less so, so it might make sense to utilize TSMC's 6nm EUV for the mid-range cards.
 
MCM itself is nothing new - if you look at Voodoo cards, etc., they used MCM aspects in their implementation. Where it becomes interesting is where you can use MCM designs to expand the capabilities beyond what can be done by simply shoving 2 discrete cores on there, with all the limitations of SLI/CF technology.

Well yeah, captain obvious, SLI or CF is not the sort of MCM anyone is talking about though, is it?
 
Yep, not sure what Rroff thinks MCM is, but having multiple GPU chips on a PCB isn't MCM.

This here is not MCM:

[image: two discrete GPUs mounted on a single PCB]
"A multi-chip module (MCM) is generically an electronic assembly (such as a package with a number of conductor terminals or "pins") where multiple integrated circuits (ICs or "chips"), semiconductor dies and/or other discrete components are integrated, usually onto a unifying substrate, so that in use it can be treated as if it were a single larger IC"

It's in the description: MCM is when you have multiple chips placed onto a substrate (the PCB itself is not a substrate), which allows them to act as a single electronic device.

Sticking two GPUs on a PCB is still two separate GPUs, and Windows will tell you that you have two separate GPUs. This is just SLI, as demonstrated by Linus.
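
If you want to see how Windows enumerates things for yourself, here's a minimal Python sketch (Windows-only, and it assumes PowerShell is on the PATH) that lists the display adapters the OS reports. An SLI/CF setup shows up as two entries, while an MCM package like Navi 31 shows up as one:

```python
# List the display adapters Windows enumerates. A dual-GPU SLI/CF setup
# reports two adapters; an MCM package like Navi 31 reports just one.
import subprocess

result = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "Get-CimInstance Win32_VideoController | Select-Object -ExpandProperty Name"],
    capture_output=True, text=True, check=True,
)
adapters = [line.strip() for line in result.stdout.splitlines() if line.strip()]
print(f"Windows reports {len(adapters)} display adapter(s):")
for name in adapters:
    print(f"  - {name}")
```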

 
Or not; so this thread is not about multi-die MCM GPUs after all?

It is just my opinion, but I do not think RDNA3 will have multiple GPUs per GFX card. For one thing, AMD never said it would, and I think they would have told us by now, or given more info about MCM itself, if that were the case.

I think the problem is that rumour mill people saw some early patents from AMD about MCM and made a lot of assumptions based on those.
 