
*** AMD "Navi 3X" (RDNA 3 discussion thread) ***

Now that we know what RDNA 2 in Navi 21 is capable of, let's speculate and wish for the improvements that could be made.

Ultra HD 2160p gaming with higher framerates looks like one of them.
Shrink to TSMC N5 process?
Adding more Infinity Cache - up to 256 MB or more?
Increasing the width of the memory bus and/or better memory - GDDR-next-gen?
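
Just to put the memory bus item in numbers: peak bandwidth is simply (bus width / 8) x per-pin data rate. A quick what-if calculation - the widths and speeds below are my own guesses, not leaks:

Code:
/* Peak GDDR bandwidth = (bus width in bits / 8) * data rate per pin.
 * The configurations below are hypothetical what-ifs, not leaked specs. */
#include <stdio.h>

static double bw_gbs(int bus_bits, double gbps_per_pin) {
    return bus_bits / 8.0 * gbps_per_pin;
}

int main(void) {
    printf("256-bit @ 16 Gbps (Navi 21-like) : %.0f GB/s\n", bw_gbs(256, 16));
    printf("256-bit @ 18 Gbps (faster GDDR6) : %.0f GB/s\n", bw_gbs(256, 18));
    printf("384-bit @ 18 Gbps (wider bus)    : %.0f GB/s\n", bw_gbs(384, 18));
    return 0;
}

A wider bus and/or faster memory only raise the raw number, of course - the Infinity Cache question is about how much of that traffic never has to leave the die at all.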


AMD Zen 4, RDNA 3 to repeat recently seen generational leaps
https://hexus.net/tech/news/cpu/146824-amd-zen-4-rdna-3-repeat-recently-seen-generational-leaps/

As per our headline, the AMD EVP was then quizzed about product refreshes going forward. "Everything is scrutinized to squeeze more performance out," replied Bergman, who indicated that a long list of optimisation strategies, similar to the one that made the 19 per cent Zen 3 IPC gain possible, is again available – and Zen 4 will be moving to 5nm. On the topic of RDNA 3 GPUs, Bergman later confirmed that AMD was again targeting a 50 per cent plus GPU performance per watt improvement from generation to generation.
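
To put that 50 per cent target into a concrete (entirely made-up) example - only the 1.5x factor comes from the quote, the wattage and framerate are assumptions:

Code:
/* Back-of-the-envelope only: the 1.5x perf-per-watt factor is the quoted
 * target; the 300 W board power and 60 fps baseline are made-up numbers. */
#include <stdio.h>

int main(void) {
    double power_w  = 300.0;  /* assumed board power, kept the same both gens */
    double old_fps  = 60.0;   /* assumed RDNA 2 framerate in some game */
    double ppw_gain = 1.50;   /* the ">50 per cent perf per watt" target */

    double old_ppw = old_fps / power_w;   /* 0.20 fps per watt */
    double new_ppw = old_ppw * ppw_gain;  /* 0.30 fps per watt */
    double new_fps = new_ppw * power_w;   /* 90 fps at the same power */

    printf("%.2f fps/W -> %.2f fps/W, i.e. ~%.0f fps instead of %.0f at %.0f W\n",
           old_ppw, new_ppw, new_fps, old_fps, power_w);
    return 0;
}

Or, flipped around: the same 60 fps at roughly 200 W instead of 300 W.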
 
And so it starts again.... we just had a hard 12 months of RDNA2 speculation, can we have a rest now? :p

Hard? :( Yes, I agree the wait for proper graphics cards from AMD was difficult for everyone - but now that the pressure on AMD is much lower, we can relax and enjoy this discussion.

The news won't wait for us to rest.

It would also be nice if AMD improved the ray-tracing performance, or released new features and better ecosystem support for the existing ones - Contrast Adaptive Sharpening / FidelityFX, perhaps some better AI for upscaling and increasing the framerate?
A next-gen DirectX? DirectX 13?
 
Yeah, very interesting :eek:

It could be only for yields and lower production costs, or it could be a new breakthrough like the Infinity Cache that greatly improves performance?
The so-called "uncore" part of the GPU would move to its own silicon, while the compute units sit on their own die.

I thought the ray-tracing cores could have their own silicon, too.
 
Close the thread for 12 months, at least after that amount of time people can make informed guesses.

Well, by doing so we would miss any leaks or information from AMD about the new generation - there might even be presentations / teases / previews of Navi 31 at some point.

Did anyone get much right about RDNA 2 before its launch?

Actually, yes :)
There was an original leak from China stating that the new Navi 21 would be a large, 505 sq. mm GPU - this is from December 2019:

AMD's high-end Navi GPU details: twice as fast as Radeon RX 5700 XT?!
https://www.tweaktown.com/news/6954...tails-twice-fast-radeon-rx-5700-xt/index.html

Charlie wrote about Navi 21's tape out as early as November 2019:

A new high end GPU just taped out - SemiAccurate
 
AD102 is rumoured to have 18,432 cores, only 75% more than the 10,496 in GA102.
And with such a large shader count, no one guarantees you will see anything even close to linear performance scaling.

But that's normal - N5 is a full die shrink of N7.
 
It's AMD's choice if they want to implement upscaling for the PC games segment.

Technically, AMD has used upscaling in its consoles for many years.
 
On the software side, yes, AMD are very lazy and far behind Nvidia.

It will be years before the developers start implementing DXR in more games.

You probably know that game development targets the lowest common denominator - that is, whatever the average Joe uses in his home entertainment rig.
That is mostly GTX 1660 / Radeon RX 580 and below.
 
The software side means more driver releases, working with game devs on optimisations and having game-ready drivers on release day, an AI upscaling feature, features for streamers, etc. Not (only) the DXR part. Nvidia does these things better. Yes, they have more money to spend, but there is no reason why AMD can't do much better if they want to be competitive.

I have neither the time nor the willingness to update my graphics drivers 3 or 4 times a month. One release per month is enough, as long as it comes with enough fixes and new game support.
Nvidia does some of these things worse - it still keeps a driver user interface that looks untouched since the 90s.
 
It is a question of when, not if :)

Quote:
AMD patents GPU chiplet design for future graphics cards
The patent points out that one of the reasons why MCM GPUs have not been attempted in the past is due to the high latency between chiplets, programming models and it being harder to implement parallelism. AMD's patent attempts to solve all these problems by using an on-package interconnect it calls the high bandwidth passive crosslink. This would enable each GPU chiplet to communicate with the CPU directly as well as other chiplets via the passive crosslink. Each GPU would also feature its own cache. This design appears to suggest that each GPU chiplet will be a GPU in its own right and fully addressable by the operating system.

There have been leaks in the past which suggested AMD is considering the move to an MCM design for its GPUs after RDNA3, and if NVIDIA's Hopper does the same, then AMD would have very little choice but to do so as well. Intel has already achieved success using the MCM design methodology and demoed the first MCM based GPU quite a while back. One thing is for sure: things are about to get very interesting for GPU enthusiasts.

AMD Files MCM Based GPU Patent - Finally Bringing The MCM Approach To Radeon GPUs? (wccftech.com)
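
If each chiplet really did show up as a GPU in its own right, applications would simply see more adapters when enumerating devices. A minimal Vulkan sketch of what "fully addressable by the operating system" would look like from software - this is my own illustration, nothing from the patent:

Code:
/* My own illustration, not from the patent: if every GPU chiplet were exposed
 * as "a GPU in its own right", each one would appear here as a separate
 * VkPhysicalDevice when the application enumerates adapters. */
#include <stdio.h>
#include <vulkan/vulkan.h>

int main(void) {
    VkApplicationInfo app = { .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
                              .apiVersion = VK_API_VERSION_1_1 };
    VkInstanceCreateInfo ci = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
                                .pApplicationInfo = &app };
    VkInstance instance;
    if (vkCreateInstance(&ci, NULL, &instance) != VK_SUCCESS)
        return 1;

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, NULL);
    VkPhysicalDevice devices[16];
    if (count > 16) count = 16;
    vkEnumeratePhysicalDevices(instance, &count, devices);

    for (uint32_t i = 0; i < count; ++i) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(devices[i], &props);
        printf("GPU %u: %s\n", i, props.deviceName);
    }

    vkDestroyInstance(instance, NULL);
    return 0;
}

The interesting part is whether AMD exposes the chiplets like that, or hides them behind the crosslink so the OS still sees one big GPU - the patent text above reads like the former.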
 
We will see how it goes, but these two dies would be quite big - at least 250 mm^2 each - and this would effectively be another dual-GPU card like the legacy R9 295X2, HD 7990, HD 6990, etc.

They could indeed have made smaller, real 'chiplets' under 100 mm^2 and built the cards out of 4, 5 and/or 6 of them.

Now they will still have to design smaller GPUs for the lower tiers.
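
The yield side of the argument is easy to show with the usual Poisson model, Y = exp(-D x A). The defect density below is an assumed number, purely to illustrate the trend towards smaller dies:

Code:
/* Poisson yield model, Y = exp(-D * A). The 0.1 defects/cm^2 figure is an
 * assumption for illustration, not TSMC data. */
#include <math.h>
#include <stdio.h>

static double yield(double defects_per_cm2, double die_area_mm2) {
    return exp(-defects_per_cm2 * die_area_mm2 / 100.0);  /* mm^2 -> cm^2 */
}

int main(void) {
    double d0 = 0.10;  /* assumed defect density, defects per cm^2 */
    printf("500 mm^2 monolithic die : ~%.0f%% yield\n", 100 * yield(d0, 500));
    printf("250 mm^2 die            : ~%.0f%% yield\n", 100 * yield(d0, 250));
    printf(" 80 mm^2 chiplet        : ~%.0f%% yield\n", 100 * yield(d0, 80));
    return 0;
}

At the same assumed defect density the small chiplet yields far better than the big monolithic die, which is exactly why 4-6 tiny chiplets look attractive on paper - the cost moves into the packaging instead.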
 
DLSS equivalent tech

It is not needed if the cards are fast enough as is.

DLSS is just an official enablement from Nvidia of their image quality vs. performance optimisation cheating. It's just that they keep improving it - some people say version 2 is better than version 1.

AMD doesn't need upscaling.
It has never been used in the PC games environment.
 
Well, they are investing heavily in it according to some articles. At the moment the RTX 3060 Ti / 3070 can run games at 4K with DLSS enabled; obviously that will hurt AMD's sales of RX 6000 GPUs.

I can also run some games at 3840x2160 with lowered settings on my Radeon RX 560X 4GB mobile graphics card.

You can lower the settings on any card to boost the frame rate.

DLSS is just that - it lowers the settings.

And as far as I can tell, DirectX 12 doesn't support DLSS as an official standard feature - DLSS is Nvidia's proprietary technology.
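
For what it's worth, the part nobody disputes is the pixel-count arithmetic: the game renders fewer pixels internally and a separate pass fills the output resolution back in. A crude sketch with made-up numbers and a dumb nearest-neighbour filter - not how DLSS reconstructs the image, just the resolution side of it:

Code:
/* Crude illustration of resolution-based upscaling in general (made-up
 * numbers, nearest-neighbour filter) - NOT how DLSS reconstructs detail. */
#include <stdio.h>
#include <stdlib.h>

/* Upscale a grayscale image with nearest-neighbour sampling. */
static void upscale_nn(const unsigned char *src, int sw, int sh,
                       unsigned char *dst, int dw, int dh) {
    for (int y = 0; y < dh; ++y)
        for (int x = 0; x < dw; ++x)
            dst[y * dw + x] = src[(y * sh / dh) * sw + (x * sw / dw)];
}

int main(void) {
    int out_w = 3840, out_h = 2160;   /* target "4K" output */
    double render_scale = 0.5;        /* e.g. a "performance" preset */
    int in_w = (int)(out_w * render_scale);
    int in_h = (int)(out_h * render_scale);

    printf("internal render %dx%d = %.0f%% of the pixels of %dx%d\n",
           in_w, in_h, 100.0 * in_w * in_h / ((double)out_w * out_h),
           out_w, out_h);

    unsigned char *lo = calloc((size_t)in_w * in_h, 1);
    unsigned char *hi = calloc((size_t)out_w * out_h, 1);
    if (lo && hi)
        upscale_nn(lo, in_w, in_h, hi, out_w, out_h);  /* the upscale pass */
    free(lo);
    free(hi);
    return 0;
}

Where DLSS, checkerboarding and the console techniques differ is in how cleverly that upscale pass reconstructs detail, not in the fact that fewer pixels get rendered.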


Also, Sony and Microsoft will want higher framerates on the existing consoles - a DLSS equivalent is probably the only way to achieve this. Could it be done without special hardware? I'm not sure.

Upscaling has been used on consoles for many years - actually, Nvidia stole the idea from the console market.
 
At some point in 2024 more than 300 people worldwide might be able to buy it.

There is a chance the cards will be launched later this year - 2021 - or at some point in Q1 2022.
Also, it is likely the global situation will improve by then and production will return to normal.

Why do you need to troll like that?
 