RDNA 3 rumours Q3/4 2022

First preliminary details are starting to leak for AMD's next GPU

Navi 33 features the same 80 CU maximum per die, but it's not known whether cards will use multiple dies in an MCM package or a single monolithic die - only that 80 CUs is the cap, just like RDNA 2.

Built on TSMC 5nm.

The current launch window is Q2 2022, but it has room to shift to Q3 if 5nm supply from TSMC is delayed.

https://wccftech.com/amd-rdna-3-nav...ds-feature-80-compute-units-5120-cores-rumor/
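For anyone wanting to sanity-check the shader maths in that link, here's a quick sketch. It assumes RDNA's usual 64 stream processors per CU, which held for RDNA 1/2 but is only an assumption for RDNA 3:

```python
# Back-of-envelope: shader count from CU count.
# 64 stream processors per CU is the RDNA 1/2 ratio; whether
# RDNA 3 keeps it is an assumption at this point.
def shader_count(cus: int, sp_per_cu: int = 64) -> int:
    return cus * sp_per_cu

print(shader_count(80))   # 5120 -- the rumoured Navi 33 maximum
print(shader_count(160))  # 10240 -- two 80 CU dies, if MCM happens
```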
 
I don't think we are seeing gaming MCM designs any time soon - the groundwork for that just isn't going ahead at the kind of pace I'd expect to see if that were a reality.
 
Just a small update: if the launch window is correct, AMD could have a nice lead over Nvidia.
Kopite7kimi, or however you spell his name, is reporting that Nvidia's Lovelace GPU has a target window of Q4 2022.

So that's 17 months until RTX 4000 and 12 to 15 months until RX 7000.
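Rough month arithmetic behind those figures, taking this post as roughly May 2021 (an assumption) and the start of each rumoured quarter:

```python
from datetime import date

# Whole months between two dates, ignoring the day of month.
def months_between(a: date, b: date) -> int:
    return (b.year - a.year) * 12 + (b.month - a.month)

post = date(2021, 5, 1)  # approximate date of this post (assumption)
print(months_between(post, date(2022, 10, 1)))  # 17 -- a Q4 2022 Lovelace
print(months_between(post, date(2022, 5, 1)))   # 12 -- an early Q2 2022 RDNA 3
print(months_between(post, date(2022, 8, 1)))   # 15 -- if it slips towards Q3
```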
 
I don't think we are seeing gaming MCM designs any time soon - the groundwork for that just isn't going ahead at the kind of pace I'd expect to see if that were a reality.
I wouldn't write AMD off, especially as they managed to make MCM happen in the CPU space on a shoestring budget.

They now have much more cash available for R&D and also have a few years of experience handling MCM designs. Couple that with the fact that they went from around GTX 1080 Ti performance to matching the 3090 in raster within the space of 16 months, and it shows the speed of progress being made in the GPU division.
 
They went from around GTX 1080 Ti performance to matching the 3090 in raster within the space of 16 months, which shows the speed of progress being made in the GPU division.

They achieved that performance on a smaller node with higher clocks while focusing purely on gaming, yet are still a generation behind in raytracing and, 5.5 months after launch, still offer no alternative to DLSS. Where would AMD GPUs be today if Nvidia had also gone to 7nm?
 
They achieved that performance on a smaller node with higher clocks while focusing purely on gaming, yet are still a generation behind in raytracing and, 5.5 months after launch, still offer no alternative to DLSS. Where would AMD GPUs be today if Nvidia had also gone to 7nm?

Yeah, it is worth keeping in mind that while AMD leads in CPUs and competes in GPUs, both divisions have exclusive access to industry-leading process nodes, which gives them a huge advantage.

Both Intel and Nvidia have traditionally only moved to smaller nodes once the costs came down; AMD went straight for the smallest node it could get, regardless of cost.

AMD won't always have a 50% transistor density advantage over its competitors, and they should prepare for that by making further large improvements to their architecture. A good example is Nvidia's HPC Ampere architecture: it shows that when Nvidia has access to TSMC 7nm like AMD, they can produce GPUs with significantly more SMs than the current RTX 3090 (128 vs 82). That means AMD is actually still behind Nvidia, and is only competitive because they are the only ones producing 7nm gaming GPUs at present.
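To put a crude number on that SM gap, here's a naive ratio that assumes identical per-SM throughput and clocks; in reality the HPC and gaming Ampere SMs differ internally, so treat it as a rough sketch only:

```python
# Naive scaling from SM counts alone -- assumes identical per-SM
# throughput and clocks, which is a deliberate oversimplification.
hpc_ampere_sms = 128  # full HPC Ampere die, per the post
rtx_3090_sms = 82     # GA102 as shipped in the RTX 3090
print(f"{hpc_ampere_sms / rtx_3090_sms:.2f}x the SM count")  # ~1.56x
```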
 
They achieved that performance on a smaller node with higher clocks while focusing purely on gaming, yet are still a generation behind in raytracing and, 5.5 months after launch, still offer no alternative to DLSS. Where would AMD GPUs be today if Nvidia had also gone to 7nm?
RDNA was also 7nm, so it was on the same node as RDNA 2, and while TSMC's node is better than Samsung's, the gains between the two generations were impressive.

It was Nvidia's choice not to use TSMC for the gaming line, so while we can play what-if, it doesn't really matter. It's the same with Intel: what if they had got 10nm working back in 2016/17 - where would Zen be today? Probably a long way behind.

This was also their first go at RT, and while it's not as good as Nvidia's, that was to be expected. Even Nvidia cards struggle with RT when DLSS is not available, and they are certainly not yet where they need to be to make RT a mainstream feature.

Nvidia has already had to stretch this generation by moving the 3080 to a 102 die and pushing up power, when previously they would have been content with a 104 for that tier of card.
 
RDNA was also 7nm, so it was on the same node as RDNA 2, and while TSMC's node is better than Samsung's, the gains between the two generations were impressive.

Has no relevance when purchasing a GPU today.

It was Nvidia's choice not to use TSMC for the gaming line, so while we can play what-if, it doesn't really matter. It's the same with Intel: what if they had got 10nm working back in 2016/17 - where would Zen be today? Probably a long way behind.

Yes, we know Nvidia chose Samsung's 8nm to maximise profits. I don't think it's the same with Intel, as Nvidia don't appear to have been napping.

This was also their first go at RT, and while it's not as good as Nvidia's, that was to be expected. Even Nvidia cards struggle with RT when DLSS is not available, and they are certainly not yet where they need to be to make RT a mainstream feature.

What does it matter if it was AMD's first or ninety-ninth attempt at RT? AMD are still a generation behind, up to 50% slower, without even considering DLSS. AMD knew where Nvidia was headed, but rather than choosing to compete, they elected to design for consoles before shoving the results onto a PC card.

Quake 2 RTX is a good example of RT without DLSS. On an ancient Ivy Bridge system I can play at 60+ FPS using a 3080 at 1440p. DLSS should be available on any modern title and has already been added to Unity, Unreal Engine and IW.

Nvidia has already had to stretch this generation by moving the 3080 to a 102 die and pushing up power, when previously they would have been content with a 104 for that tier of card.

While also maintaining a healthy profit. It's a win-win for the consumer when purchasing an Nvidia GPU.

So where would AMD GPUs be today if Nvidia had also gone to 7nm? I'd guess anywhere from 20-30% further behind than they already are.
 
Has no relevance when purchasing a GPU today.

Yes, we know Nvidia chose Samsung's 8nm to maximise profits. I don't think it's the same with Intel, as Nvidia don't appear to have been napping.

What does it matter if it was AMD's first or ninety-ninth attempt at RT? AMD are still a generation behind, up to 50% slower, without even considering DLSS. AMD knew where Nvidia was headed, but rather than choosing to compete, they elected to design for consoles before shoving the results onto a PC card.

Quake 2 RTX is a good example of RT without DLSS. On an ancient Ivy Bridge system I can play at 60+ FPS using a 3080 at 1440p. DLSS should be available on any modern title and has already been added to Unity, Unreal Engine and IW.

While also maintaining a healthy profit. It's a win-win for the consumer when purchasing an Nvidia GPU.

So where would AMD GPUs be today if Nvidia had also gone to 7nm? I'd guess anywhere from 20-30% further behind than they already are.
You do realise you wouldn't even have got the 3080 without AMD? Nvidia would have given you the 3070 rebranded as a 3080 instead, since as you say they like to maximise profits.

AMD might be 50% slower in RT, but what if they were 50% faster in raster? Do you consider RT more important than raster?
You also realise that increasing raster performance automatically increases RT performance.
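That holds if you think of a hybrid RT frame as raster work plus RT work. A toy sketch with invented frame times, purely to show the mechanism:

```python
# Toy model of a hybrid frame: frame time = raster work + RT work.
# All numbers are invented purely for illustration.
raster_ms, rt_ms = 10.0, 8.0
fps_before = 1000 / (raster_ms + rt_ms)

# A 30% raster speedup shrinks the raster share of every RT frame,
# lifting RT frame rates even with the RT hardware unchanged.
fps_after = 1000 / (raster_ms / 1.3 + rt_ms)

print(f"{fps_before:.1f} fps -> {fps_after:.1f} fps with RT on")  # 55.6 -> 63.7
```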

Quake is a poor example of RT since it's a 20-year-old game that the majority of gamers couldn't care less about. Out of modern games I think I've played just two with RT, so I wouldn't consider it a must-have feature, at least right now - and by the time it is, these generations of cards will be too slow anyway.

That said, I still went Nvidia this time as it offers the better package right now, but if AMD pulls out a large rasterisation lead then I would probably go with them next time. Even at 1440p a lot of the newer AAA games are still below 100fps, and since I'd like to jump to 4K, having 144+fps at that resolution is currently more important to me than RT.
 
You do realise you wouldn't even have got the 3080 without AMD? Nvidia would have given you the 3070 rebranded as a 3080 instead, since as you say they like to maximise profits.

Makes no sense; the 3080 arrived as speculated and well in advance of RDNA 2.

AMD might be 50% slower in RT, but what if they were 50% faster in raster? Do you consider RT more important than raster?

Do I consider RT more important than raster performance? Yes! Rasterisation has hit levels above and beyond what is required. I'd have been happy with 1080 Ti raster performance and double the RT cores on my 3080 - I've said this before. Rasterisation should be considered legacy by now, something only iGPUs hold on to.

You also realise that increasing raster performance automatically increases RT performance.

No? Care to explain that one? :cry:

Quake is a poor example of RT since it's a 20-year-old game that the majority of gamers couldn't care less about. Out of modern games I think I've played just two with RT, so I wouldn't consider it a must-have feature, at least right now - and by the time it is, these generations of cards will be too slow anyway.

I didn't mention Quake. I mentioned Quake 2 RTX, a fully path-traced engine and arguably our best example of raytracing within a game so far. I'd agree most gamers couldn't care less about it, as they have no understanding of what it is. Ignorance plagues tech forums.

That said, I still went Nvidia this time as it offers the better package right now, but if AMD pulls out a large rasterisation lead then I would probably go with them next time. Even at 1440p a lot of the newer AAA games are still below 100fps, and since I'd like to jump to 4K, having 144+fps at that resolution is currently more important to me than RT.

Yes, Nvidia offers the better product, which was partly the point of my original post.

I'm not a pro gamer though, so FPS in legacy titles/engines holds no interest for me. Both AMD and Nvidia offer cards that provide more FPS than required in that respect. I simply want new tech that doesn't leave games looking as though they are made from cardboard cutouts.
 
No matter what they do, they cannot compete with Nvidia, because Nvidia also uses dirty tricks and has enough cash and market share to control the market.
Example:
https://twitter.com/JirayD/status/1388971823304425475

"So that's it basically for this short look at the RTX showcase. As expected from nVidia tech demo, it is 100% optimized for their own architecture, which I'm not criticizing. Unfortunately it seems to go out of it's way to remove base UE4 optimizations for AMD."

This also happens in almost every game where Nvidia was, is, or will be involved: as soon as they pay, the game becomes an Nvidia tech demo.
Maybe Lisa Su will make enough billions from selling CPUs and then AMD will sponsor every AAA game released in the future. That way they may have a chance. :)
 
Also, if that's Navi 33, then it means Navi 31 and 32 will be chiplets with 160 CUs max. But again, that doesn't mean anything: if you don't have market share and don't sponsor a ton of games, your hardware will not work as well as it should in the majority of games.
 
Makes no sense; the 3080 arrived as speculated and well in advance of RDNA 2.
Nvidia obviously had an idea of what they were up against performance-wise, as it's not uncommon for competitors to have inside information. The 3080 could easily have been turned into the Ti at release and sold for £1k, with the 3070 bumped up to 3080, and no one would have complained at 2080 Ti performance for £650. As you say, Nvidia wants to maximise profits while giving you as little performance as they can get away with, so you need to upgrade again sooner.

Do I consider RT more important than raster performance? Yes! Rasterisation has hit levels above and beyond what is required. I'd have been happy with 1080 Ti raster performance and double the RT cores on my 3080 - I've said this before. Rasterisation should be considered legacy by now, something only iGPUs hold on to.

Maybe for you, but for the majority of people rasterisation is the most important factor. How many people would have got excited about Ampere if Jensen had said 'we are giving you the same raster performance as Turing but doubling RT', and AMD had then knocked out a card 50% faster in raster?
 
They achieved that performance on a smaller node with higher clocks while focusing purely on gaming, yet are still a generation behind in raytracing and, 5.5 months after launch, still offer no alternative to DLSS. Where would AMD GPUs be today if Nvidia had also gone to 7nm?

The Ryzen 1800X had 95% of Intel's IPC; it was behind in clock speed and gaming, but had better power efficiency on a brand-new 14nm node compared with Intel's very mature 14nm node, and matched the 8-core 6900K for performance at half the price.
The Ryzen 3800X had 110% of Intel's IPC; it was still behind in clock speed and gaming but improved on both fronts, and with the 3900X and 3950X AMD had the fastest mainstream CPUs - AMD's 16-core matched Intel's 18-core HEDT, again at half the power.
The Ryzen 3960X, 3970X and 3990X killed Intel's HEDT stone dead.
The Ryzen 5000 series has much higher IPC than Intel, matches Intel for clock speed, has better gaming performance and single-threaded performance, is twice as fast in productivity, and again draws much less power. Ryzen 5000 is better than Intel in every conceivable way.
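As a crude model of those comparisons, single-thread performance scales roughly with IPC times clock. The 95% and 110% IPC ratios below are from the post; the Zen 3 IPC figure and all the clock ratios are illustrative guesses, not measured numbers:

```python
# Crude single-thread model: performance ~ IPC x clock.
def relative_perf(ipc_ratio: float, clock_ratio: float) -> float:
    return ipc_ratio * clock_ratio

# IPC ratios per the post where given; clock ratios are guesses.
print(f"Zen 1 vs Intel: {relative_perf(0.95, 0.90):.2f}x")  # behind on clocks
print(f"Zen 2 vs Intel: {relative_perf(1.10, 0.95):.2f}x")  # clocks closer
print(f"Zen 3 vs Intel: {relative_perf(1.15, 1.00):.2f}x")  # clock parity
```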

They didn't do it overnight, so to expect them to match or beat Nvidia in one generation is disingenuous at best. AMD matched Nvidia for rasterisation performance, and did it with less power, in one generation - at least give them credit for that.
 