
RDNA 3 rumours Q3/4 2022

Status
Not open for further replies.
Soldato · Joined 6 Feb 2019 · Posts 18,583
First preliminary details are starting to leak for AMD's next GPU

Navi 33 features the same 80CU maximum per die, but it's not known whether cards will use multiple dies via MCM or a single monolithic die; only that 80CU is the per-die maximum, as with RDNA 2.

Built on TSMC 5nm.

The current launch window is Q2 2022, but it has room to shift to Q3 if 5nm supply from TSMC is delayed.

https://wccftech.com/amd-rdna-3-nav...ds-feature-80-compute-units-5120-cores-rumor/
 
Just a small update: if the launch window is correct, AMD could have a nice lead over Nvidia.
Kopite7kimi (or however you spell his name) is reporting that Nvidia's Lovelace GPU has a target window of Q4 2022.

So that's 17 months till RTX 4000 and 12 to 15 months till RX 7000.
 
AMD achieves that performance on a smaller node, with higher clocks, and by focusing purely on gaming, yet is still a generation behind in ray tracing and so far offers no alternative to DLSS 5.5 months after launch. Where would AMD GPUs be today if Nvidia had also gone to 7nm?

Yeah, it's worth keeping in mind that while AMD leads in CPUs and competes in GPUs, both divisions have exclusive access to industry-leading process nodes, which gives them a huge advantage.

Both Intel and Nvidia have traditionally only moved to smaller nodes once the costs came down; AMD went straight for the smallest node it could get, regardless of cost.

AMD won't always have a 50% transistor density advantage over its competitors, and it should prepare for that by making further large improvements to its architecture. A good example is Nvidia's HPC Ampere part: it shows that when Nvidia has access to TSMC 7nm like AMD, it can produce GPUs with significantly more SMs than the current RTX 3090 (128 vs 82). That means AMD is actually still behind Nvidia and is only competitive because it is the only one producing 7nm gaming GPUs at present.
 
Last edited:
Look here at 8K in Doom Eternal:

3090 420W average
6900xt 311W average

So again, AMD does not have a hardware problem. They could easily catch Nvidia by adding a lot more hardware and using the same power Nvidia does. It was probably a bad idea to bet on low power usage while Nvidia went full berserk, but that may help them in the future.
Nvidia needs to work on perf/watt because they can't increase power usage forever, and the transition to 5nm will not be enough for a good performance increase in the next generation.


Ouch Nvidia, more than 100% faster in Tomb Raider :D
 
Makes no sense, the 3080 arrived as speculated and well in advance of RDNA2.



Do I consider RT more important than raster performance? Yes! Rasterisation has hit levels above and beyond what is required. I'd have been happy with 1080 Ti raster performance and double the RT cores on my 3080. I've said this before: rasterisation should be considered legacy by now, something that only iGPUs hold on to.



No? Care to explain that one? :cry:



I didn't mention Quake. I did mention Quake 2 RTX, a fully path-traced engine and arguably our best example of ray tracing within a game so far. I'd agree most gamers couldn't care less about it, as they have no understanding of what it is. Ignorance plagues tech forums.



Yes Nvidia offers the better product, which was partly the point of my original post.

I'm not a pro gamer though, so FPS in legacy titles/engines holds no interest for me. Both AMD and Nvidia offer cards that provide more FPS than required in that respect. I simply want new tech that doesn't leave games looking as though they are made from cardboard cutouts.


For me RT is important because more and more games are releasing with it.

And we're still at a stage where RT takes up most of the frametime of each frame, so in any game that uses RT, RT performance matters much more than rasterisation. That's why I place a lot of value on RT performance, and its importance is only going to increase.

And for people who think we'll soon reach the stage where RT performance doesn't matter: we're not even close. Almost everything in a game can be rendered to lifelike image quality with RT, and we're only at the tip of the iceberg now. RT has incredible scaling: as GPUs get better at RT, the number and quality of RT effects put into games will keep growing, so we won't hit the point where it flattens out for a long time. Maybe in 10 years RT performance won't matter anymore.
 
More rumours; a bunch of leakers are reporting the same thing today:

* Navi 33 is a 80CU monolithic GPU
* Navi 31 is a 2 x 80CU MCM GPU

Performance

* Navi 31 is targeted to be 150% faster than the RX6800 in rasterization
* Navi 31 is targeted to be 100% faster than the RX6800 in ray tracing

Architecture

* RDNA 3 doesn't contain fixed-function hardware for DirectML/Super Resolution/DLSS; Super Resolution/DirectML use the same pipeline as they do on RDNA 2
 


Navi 31 is then targeting Ampere's RT performance while Nvidia moves to gen-3 RT cores. Did they just throw down another white flag?

Well, there are no rumours from Nvidia's side; they're either better at hiding info or they are behind AMD in the design process. We don't yet know if Lovelace actually contains a 3rd-gen RT core or the same core as Ampere; it could just be the same core with more of them thanks to higher transistor density.

I would rather you be conservative in your hopes than disappointed later on.
 
If they go MCM with two 80CU chiplets, then just a 40% raster increase would be a big fail, as I would expect around that much from a single chiplet.

40% higher performance from a 160CU graphics card would, like you said, be a fail and is highly unlikely. The scaling between the two chiplets would have to be historically terrible, or the heat output and power draw would have to be so bad that they'd have to run the chiplets under 2GHz, compared to the 2.9GHz you see now on the 6900XT.
 

Just watching now.

That has a much more reasonable expectation of performance and is very, very different from the previous "leak" about being 300% faster.

From Moore's Law Is Dead:

* The 7900XT (Navi 31) is targeting at least a 40% performance improvement over the 6900XT (Navi 21), though 60% to 80% is also likely.

* The absolute maximum performance improvement would be 100%, though that's unlikely. In summary: at least 40% faster, 60 to 80% more likely, and 100% highly unlikely but not impossible.

* The MCM design has large performance penalties due to inter-chiplet latency; you gain overall performance from having multiple chiplets, but you also pay a penalty, and that penalty is much heavier than on a Ryzen CPU. This explains how you can potentially have a 160CU GPU split across multiple chiplets but see only a 40 to 80% improvement instead of the 100%+ you'd expect if scaling were perfect.
 
If Moore's Law Is Dead is right, the MCM penalties are quite heavy; is it even worth it? I assume a large MCM GPU will cost more than the 6900XT, right?

6900XT: 1 x monolithic 7nm die with 80CU = 100% baseline performance

7900XT: 2 x 5nm chiplets with 80CU each (plus a 7nm IO chiplet) and a 50% performance-per-watt architecture improvement = just 160% to 180% of baseline performance


So you have a very large IPC improvement and you're doubling your core count, for just 60 to 80% gains.
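That arithmetic can be sanity-checked with a quick back-of-envelope sketch. Every number here is the thread's rumoured figure (80CU baseline, two 80CU chiplets, 40-80% rumoured gains), not a confirmed spec:

```python
# Implied chiplet scaling efficiency from the rumoured figures above.
# All inputs are forum speculation, not confirmed specs.

baseline_cu = 80                       # 6900 XT: one monolithic 80CU die
mcm_cu = 2 * 80                        # rumoured 7900 XT: two 80CU chiplets
ideal_scaling = mcm_cu / baseline_cu   # 2.0x if performance tracked CU count

for rumoured_gain in (1.4, 1.6, 1.8):  # +40%, +60%, +80% over the 6900 XT
    efficiency = rumoured_gain / ideal_scaling
    print(f"+{rumoured_gain - 1:.0%} overall -> {efficiency:.0%} of ideal CU scaling")
```

Even the optimistic +80% figure implies each chiplet delivering only ~90% of its monolithic throughput, which is why the interconnect penalty matters so much.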
 
Kopite7kimi says both Lovelace and RDNA 3 are targeting roughly double the performance of Ampere and RDNA 2.

However, both cards are also currently running into cost issues. That's not to say people won't accept higher prices anyway, as we've seen; just that the new cards are more expensive to make than the current ones.

The big kicker is memory bandwidth: GDDR6 and GDDR6X just aren't cutting it. Both Lovelace and RDNA 3 take large performance penalties when paired with G6 and G6X modules, and reaching the estimated doubling of performance requires a large improvement in bandwidth, which is what they are struggling with.

It will be interesting to see what Nvidia and AMD come up with to solve the bandwidth issue. Will AMD increase its L3 game cache for RDNA 3 to work around the bandwidth limitation? What's Nvidia's plan, expensive HBM2 modules? HBM2e supports up to 24GB capacity and up to 2,500GB/s of bandwidth, but it is several times more expensive than G6X.
 
Can't they just increase the bus width to, say, 512-bit?

Apart from increasing the cost of the memory modules, it would also give the GPU a higher TDP, so you'd need better cooling too. That's why 512-bit buses have traditionally been kept for HPC cards, and those have since moved on to HBM2 because it's just better.
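For context on why bus width matters: peak GDDR bandwidth is just pin count times per-pin data rate, divided by eight. A minimal sketch, using an illustrative 16Gbps module speed (an assumption, not a leaked spec):

```python
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: pin count * per-pin rate / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

# A 256-bit GDDR6 bus at 16 Gbps vs a hypothetical 512-bit bus at the same speed:
print(peak_bandwidth_gb_s(256, 16))   # 512.0 GB/s (roughly a 6800 XT class setup)
print(peak_bandwidth_gb_s(512, 16))   # 1024.0 GB/s, with twice the modules to power and cool
```

Doubling the bus does double bandwidth, but every extra 32 bits of width means another memory chip to buy, route, power, and cool, which is the cost/TDP point above.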
 
I wouldn't say RDNA2 beat Nvidia. They drew level on raster performance, although they still have the inferior overall package, and they missed an open goal by not being able to deliver a decent amount of stock and keep prices reasonable. When the dust settles they will probably have lost market share instead of gaining it.


Already losing market share
 
"Alleged AMD Radeon RX 7600 XT Navi 33 Specs Suggest It Could Be Faster Than The 6900 XT"

https://hothardware.com/news/amd-radeon-rx-7600-xt-navi-33-specs-faster-6900-xt

Kind of hard to believe a 150W-180W new card would beat the previous 320W flagship, because that's not just a 50%+ performance improvement but also a 50%+ performance-per-watt improvement.

As far as I'm aware, there has never in the history of discrete GPUs been such a large single-generation leap. Even with Nvidia's famous Pascal architecture, the GTX 1060 could not come close to beating the previous flagship, the GTX 980 Ti.
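A quick sanity check on why that rumour is hard to believe: even if the alleged card merely matched the 6900 XT at 180W, the implied perf/watt gain would already exceed the roughly 50% generational perf/watt improvement RDNA 2 delivered. All figures are taken from the rumour above:

```python
flagship_power = 320    # 6900 XT board power, watts
rumoured_power = 180    # upper end of the alleged 7600 XT range

# Perf/watt gain implied by merely *matching* the 6900 XT at the lower power:
implied_gain = flagship_power / rumoured_power - 1
print(f"{implied_gain:.0%}")   # 78%, before any actual performance lead on top
```

Actually beating the flagship at that power would push the implied perf/watt gain even higher, well beyond anything a single node shrink plus architecture revision has historically delivered.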
 
Greymon55 says the RDNA 3 lineup still includes MCM GPUs.

I'm not sure. I don't think AMD will do it, but if it turns out to be true then it could be a problem for games.

https://mobile.twitter.com/greymon5...a-lovelace-geforce-rtx-4090-with-18432-cores/


The problem that I see: look at the MI200, AMD's new MCM GPU using its brand-new 3rd-gen interconnect. It's so good that it's two GPUs, as in it shows up as two GPUs in Windows. So are we back to CrossFire now?
 
What threw people for a loop was the leak that Navi 31 has 7 chips on the package.

We now know that's because it's 1 GPU chip and 6 memory chips, all connected through a substrate and IF.

This still qualifies as "MCM" because all of these chips sit on the one package, but people thought each chip would be a GPU, like each chiplet on Ryzen is CPU cores. That's not the case with Navi 31; it's still a single monolithic GPU core.

Here is what Navi 31 looks like so you can understand what I'm saying

 
Well yeah, Captain Obvious, SLI or CF is not the sort of MCM anyone is talking about though, is it?

Yep, not sure what Rroff thinks MCM is, but having multiple GPU chips on a PCB isn't MCM

This here is not MCM:





"A multi-chip module (MCM) is generically an electronic assembly (such as a package with a number of conductor terminals or "pins") where multiple integrated circuits (ICs or "chips"), semiconductor dies and/or other discrete components are integrated, usually onto a unifying substrate, so that in use it can be treated as if it were a single larger IC"

It's in the description: MCM is when you have multiple chips placed onto a substrate (the PCB itself is not a substrate in this sense) that allows them to be treated as a single electronic device.

Sticking two GPUs on a PCB is still two separate GPUs, and Windows will tell you that you have two separate GPUs. This is just SLI, as demonstrated by Linus.

 