AMD RDNA 4 thread

But apparently now 12GB of VRAM is fine for 4K for the next few years, because Nvidia wants to sell you an £800 RTX 4070 Ti. Yet none of the people defending 12GB as fine for 4K for years actually own a 12GB dGPU. If these people bought 12GB dGPUs to play games at 4K for the next 3~5 years, it might show some actual belief.
Just like all the RT hypers / evangelists then!

It's all "you must buy Nvidia because of RT", but never a word about how, on cards below the x80 or even x90 tier, RT doesn't really work. There's no point in Nvidia being faster at RT when showcases like Cyberpunk run at 10 FPS or so on Nvidia even if AMD only gets 5 FPS. Both are equally unplayable.

If you hold your GPU just right, then 12GB of special Nvidia VRAM is enough. Just like Apple claiming their v-ram (virtual memory, that is) is so clever that 8GB machines are fine - and no, in magic Apple land swapping to the soldered SSD does not wear out the drive!
Everyone should get an RTX 4090, then nobody would even argue about VRAM. Maybe that is Nvidia's secret plan - it will bring harmony to the PC gaming space.
That's the "eliminate poverty by killing all the poor" approach!

(Also totally ignoring who would be left to do all the jobs the remaining people don't want to do!)
 
But that's not everybody's experience though; not everybody wants to play at 4K ultra + RT, in fact that's actually a tiny minority. And as I implied earlier, if you simply have to play everything at absolute extreme settings then you're never going to have enough GPU, and you're always going to be looking at charts and thinking "Gee, how much have I gotta spend in 2 years for anything playable?"

So it's a bit self-inflicted, it's a bit bad tech implementation (I am Ray Tracing, destroyer of frame rates), it's a bit predatory NV/AMD giving you less for more gen on gen, and there's bad game dev coupled with lousy console ports.

Yeah, you're right, storage is now also getting out of hand and will probably get even worse!
If you lower settings, you lower the VRAM usage.
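A quick back-of-the-envelope illustration of why (my own sketch, not tied to any particular game or engine): an uncompressed RGBA8 texture with a full mip chain costs roughly 4/3 of its base level, so each texture-quality step down roughly quarters the VRAM that asset needs.

```c
/* Sketch: memory for an uncompressed RGBA8 texture plus its full mip chain.
   Illustrative only - real games use compressed formats, but the scaling
   with texture resolution is the point. */
#include <stdio.h>

static double mip_chain_bytes(unsigned width, unsigned height, unsigned bytes_per_texel)
{
    double total = 0.0;
    for (;;) {
        total += (double)width * height * bytes_per_texel;
        if (width == 1 && height == 1)
            break;
        if (width > 1)  width  /= 2;   /* next mip level */
        if (height > 1) height /= 2;
    }
    return total;
}

int main(void)
{
    const unsigned sizes[] = { 4096, 2048, 1024 };
    for (int i = 0; i < 3; i++) {
        double mib = mip_chain_bytes(sizes[i], sizes[i], 4) / (1024.0 * 1024.0);
        printf("%ux%u RGBA8 + mips: ~%.1f MiB\n", sizes[i], sizes[i], mib);
    }
    return 0;   /* prints ~85.3, ~21.3 and ~5.3 MiB respectively */
}
```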

Moreover, Metro Exodus runs at 60 fps on RDNA2 in the consoles with RTGI, so it's a matter of optimisation.

If RDNA4 offers something like 7900 XTX performance at around 400-500 dollars, it will be a good step forward.

The 4080 runs 4K path tracing with DLSS at 60+ fps.

The 2080 was running 30-40 fps at 1080p.
 
No wonder AMD changed plans:



13~20 chiplets?

[Image: AMD-NAVI-4C-HERO.jpg]


As this is the RDNA4 thread and it got sidetracked, I will bump this up again. No wonder it was cancelled. AMD shooting for the moon again, and misfiring.

Here I was thinking they would make another 200mm²~300mm² GCD and double it up.
 
As this is the RDNA4 thread and it got sidetracked, I will bump this up again. No wonder it was cancelled. AMD shooting for the moon again, and misfiring.

Here I was thinking they would make another 200mm²~300mm² GCD and double it up.
It would be nice to have more than one GCD die. Could be a good sign for future cards, APUs, consoles and so on. A good sign with good prices, hopefully! :D
 
It would be nice to have more than one GCD die. Could be a good sign for future cards, APUs, consoles and so on. A good sign with good prices, hopefully! :D

Not if it gets cancelled! :p

Even with their Zen CPUs, it took several generations to get chiplets fully sorted out:
1.) Zen/Zen+ introduced scalable CPU designs with Infinity Fabric. The Threadripper and Epyc CPUs had multiple dies, but power consumption of the IF massively increased.
2.) Zen2 fixed some of the power issues, introduced 8C chiplets and doubled core counts.
3.) Zen3 introduced die stacking and worked on improving latency. The memory controller also improved.
4.) Zen4 finally dropped IF power enough that the chiplet CPUs could be used in laptops.

With their dGPUs:
1.) RDNA1 split compute and gaming functions, was the first to use Infinity Fabric, and also got over GCN's scaling problems.
2.) RDNA2 introduced raytracing, improved scaling and introduced Infinity Cache.
3.) RDNA3 optimised Infinity Cache, introduced machine learning functions and went to chiplets.

AMD did what they did with Zen, Zen+ and Zen2 in the first three generations of RDNA. Were they trying to do a Zen3 and Zen4 technical move in one generation? Maybe they need to be less ambitious on the technical side and concentrate on baby steps first. Ideally, first improve the design with a solid RDNA4 and then move onto more complex arrangements with RDNA5.
 
Not if it gets cancelled! :p

Even with their Zen CPUs, it took several generations to get chiplets fully sorted out:
1.) Zen/Zen+ introduced scalable CPU designs with Infinity Fabric. The Threadripper and Epyc CPUs had multiple dies, but power consumption of the IF massively increased.
2.) Zen2 fixed some of the power issues, introduced 8C chiplets and doubled core counts.
3.) Zen3 introduced die stacking and worked on improving latency. The memory controller also improved.
4.) Zen4 finally dropped IF power enough that the chiplet CPUs could be used in laptops.

With their dGPUs:
1.) RDNA1 split compute and gaming functions, was the first to use Infinity Fabric, and also got over GCN's scaling problems.
2.) RDNA2 introduced raytracing, improved scaling and introduced Infinity Cache.
3.) RDNA3 optimised Infinity Cache, introduced machine learning functions and went to chiplets.

AMD did what they did with Zen, Zen+ and Zen2 in the first three generations of RDNA. Were they trying to do a Zen3 and Zen4 technical move in one generation? Maybe they need to be less ambitious on the technical side and concentrate on baby steps first. Ideally, first improve the design with a solid RDNA4 and then move onto more complex arrangements with RDNA5.
Some of the lessons learned from Ryzen could shorten the steps needed, I guess. But it will depend on how much they're willing to push things forward and get aggressive on the pricing to get things moving properly.
 
Some of the lessons learned from Ryzen could shorten the steps needed, I guess. But it will depend on how much they're willing to push things forward and get aggressive on the pricing to get things moving properly.

But the Navi 4C design sounds very complex... and very expensive, because of the need for multiple layers and packages and also many new things at once. It sounds like this:

You would think the next step would be splitting the GCD function into two chiplets!
 
As this is the RDNA4 thread and it got sidetracked, I will bump this up again. No wonder it was cancelled. AMD shooting for the moon again, and misfiring.

Here I was thinking they would make another 200mm²~300mm² GCD and double it up.

At least they are trying, CAT :) One wouldn't call the Zen architecture a fail; 6 years after AMD started on that road, Intel still can't compete with it.

I would even argue RDNA 3 is the first phase in a revolution in GPU packaging. The Navi 31 logic die is smaller than an RX 6700 XT's while having more than 2X the performance. Nvidia are cutting out IMCs to save on die space; AMD just glue them on, or not, like Lego.

The truth is AMD are taking the difficult design and engineering steps while others chicken out, flogging 192-bit $800 GPUs instead.

I'll grant you the performance of AMD's first MCM GPU is not as good as we would like; it's not as good as AMD would like, or even thought it might be. The same was true for Zen 1, AMD stuck with it, and now look at it... Intel are having to design cores that are 3X the size and 3X the power consumption just to get a 10% IPC advantage.
 
At least they are trying, CAT :) One wouldn't call the Zen architecture a fail; 6 years after AMD started on that road, Intel still can't compete with it.

I would even argue RDNA 3 is the first phase in a revolution in GPU packaging. The Navi 31 logic die is smaller than an RX 6700 XT's while having more than 2X the performance. Nvidia are cutting out IMCs to save on die space; AMD just glue them on, or not, like Lego.

The truth is AMD are taking the difficult design and engineering steps while others chicken out, flogging 192-bit $800 GPUs instead.

I'll grant you the performance of AMD's first MCM GPU is not as good as we would like; it's not as good as AMD would like, or even thought it might be. The same was true for Zen 1, AMD stuck with it, and now look at it... Intel are having to design cores that are 3X the size and 2X the power consumption just to get a 10% IPC advantage.

But they are trying to do something in one generation which took two generations of Zen to do. What they need to do is make sure they take smaller steps, and be able to release these to the market on time and on budget. That is what AMD did with Zen.
 
But they are trying to do something in one generation which took two generations of Zen to do. What they need to do is make sure they take smaller steps, and be able to release these to the market on time and on budget. That is what AMD did with Zen.

They gave it a go, no doubt learned a lot from it and shelved it, for now. That's how you make progress: you take the risks.

What are Nvidia doing? As usual, vastly over-provisioning resources in their sponsored games to (A) make you buy their higher-end GPUs, because it strangles the performance of the hardware so much, and (B) specifically hurt AMD performance more. Well, anyone can do that; AMD can do that to Nvidia.
AMD are far too busy trying to crack an actual problem, with significant success already.

I challenge Nvidia to do anything useful for once....
 
But the Navi 4C design sounds very complex... and very expensive, because of the need for multiple layers and packages and also many new things at once. It sounds like this:

You would think the next step would be splitting the GCD function into two chiplets!
I wouldn't want to be the guy writing drivers for those! :D

There's still hope in me that one day, by some miracle, multi-GPU (of some sort) will make a return!
 
They gave it a go, no doubt learned a lot from it and shelved it, for now. That's how you make progress: you take the risks.

What are Nvidia doing? As usual, vastly over-provisioning resources in their sponsored games to (A) make you buy their higher-end GPUs, because it strangles the performance of the hardware so much, and (B) specifically hurt AMD performance more. Well, anyone can do that; AMD can do that to Nvidia.
AMD are far too busy trying to crack an actual problem, with significant success already.

I challenge Nvidia to do anything useful for once....
Using a dual-GCD design is the logical step after RDNA3. A 13~20 chiplet design sounds like a better fit for RDNA5. This way, at each generation you validate one part of the puzzle. Well, AMD is sponsoring Starfield, so let's hope it means their cards function well in it.
I wouldn't want to be the guy writing drivers for those! :D

There's still hope in me that one day, by some miracle, multi-GPU (of some sort) will make a return!

We are slowly getting there.
 
AMD already have multi-logic-die GPUs in the workstation space.

CDNA 2 has two logic dies, and BTW it's in the world's most powerful supercomputer, not Nvidia - the Frontier supercomputer at Oak Ridge National Laboratory.

AMD were asked if they would be doing this for gaming GPUs; they said it's far more difficult for gaming GPUs, but they are researching it.

They have some experience - remember the R9 295X2?


Around 100% scaling in 100% of games. This completely passed us by, and the tech journalists too - too busy still complaining about the 290X reference cooler and circle #### over whatever Nvidia had.....

AMD engineered a new PCB PCIe bridge design and coded the drivers to effectively make the two cores act as one. Just like that, they eliminated the main problem with multi-GPUs. But how about those new Nvidia cards, eh? Aren't they a much nicer green now????


CDNA 2.

[Image: uliKG5s.jpg]
 
Apologies if this has already been posted/discussed, but obviously AMD not bothering with the high end is a common rumour now, and IDK where they got this sentence, but this is from TPU...
For AMD to continue investing in the development of this GPU, the gaming graphics card segment should have posted better sales, especially in the high-end, which it didn't.
Kind of made me do a double take, as I have no idea why the gaming graphics card segment, especially the cards with an MSRP of $900-1000, hasn't posted better sales. :confused: :p
 
They have some experience - remember the R9 295X2?


Around 100% scaling in 100% of games. This completely passed us by, and the tech journalists too - too busy still complaining about the 290X reference cooler and circle #### over whatever Nvidia had.....

AMD engineered a new PCB PCIe bridge design and coded the drivers to effectively make the two cores act as one. Just like that, they eliminated the main problem with multi-GPUs. But how about those new Nvidia cards, eh? Aren't they a much nicer green now????
It was a nice piece of engineering, with good scaling at high resolutions - expensive, power hungry, but nice. The problem was that it still required profiles for each game (so you'd get about the same experience with 2x 290/290X, but in a larger, perhaps even slightly more power-hungry package if you didn't undervolt the cards), but at least it demonstrated it can be done. With chiplets this has to happen by default at the driver level, with no special driver/profile required, or else it will fail.

I'm a bit surprised the consoles didn't jump on the multi-GPU path. It would have been a fixed configuration, with plenty of performance and good yields too.
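For what it's worth, the explicit half-way house already exists on the PC side: Vulkan 1.1 can report linked GPUs as a "device group" that an application may drive through one logical device, but the application still has to opt in and split the work itself, which is exactly why transparent chiplet scaling has to be solved below the API, in the driver and packaging. A minimal enumeration sketch (my own illustration, not anything AMD has shipped for chiplets):

```c
/* Sketch: list Vulkan device groups. A group with more than one physical
   device is a set of linked GPUs the app could drive through a single
   VkDevice - but the app still has to ask for it explicitly. */
#include <stdio.h>
#include <vulkan/vulkan.h>

int main(void)
{
    VkApplicationInfo app = { 0 };
    app.sType      = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.apiVersion = VK_API_VERSION_1_1;   /* device groups are core in 1.1 */

    VkInstanceCreateInfo ici = { 0 };
    ici.sType            = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    ici.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&ici, NULL, &instance) != VK_SUCCESS) {
        fprintf(stderr, "No Vulkan 1.1 instance available\n");
        return 1;
    }

    uint32_t count = 0;
    vkEnumeratePhysicalDeviceGroups(instance, &count, NULL);

    VkPhysicalDeviceGroupProperties groups[8] = { 0 };
    if (count > 8) count = 8;
    for (uint32_t i = 0; i < count; i++)
        groups[i].sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES;
    vkEnumeratePhysicalDeviceGroups(instance, &count, groups);

    for (uint32_t i = 0; i < count; i++)
        printf("Device group %u: %u physical device(s)\n",
               i, groups[i].physicalDeviceCount);

    vkDestroyInstance(instance, NULL);
    return 0;
}
```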
 
It was a nice piece of engineering, with good scaling at high resolutions - expensive, power hungry, but nice. The problem was that it still required profiles for each game (so you'd get about the same experience with 2x 290/290X, but in a larger, perhaps even slightly more power-hungry package if you didn't undervolt the cards), but at least it demonstrated it can be done. With chiplets this has to happen by default at the driver level, with no special driver/profile required, or else it will fail.

I'm a bit surprised the consoles didn't jump on the multi-GPU path. It would have been a fixed configuration, with plenty of performance and good yields too.
Yes.

Which is why it needs to work in a way where the OS and the game don't even know it's multiple dies.

But damn, that was a monster GPU, wasn't it? Look how much faster it is even compared to the dual-GPU GTX 690.
 