*** AMD "Navi 3X" (RDNA 3 discussion thread) ***

It's AMD's choice whether they want to implement upscaling for the PC games segment.

Technically, AMD has used upscaling in its consoles for many years.
 
I am curious to see what they do to improve RT performance. The rumours about Lovelace are that it will double down on huge core counts (and consequently RT cores), which means it will further smack any previous-gen cards of any brand around when RT is enabled. And make no mistake about it, two years from now RT will be a staple feature for most AAA games and even some AA studios, even if it's only one or two effects as a toggle.
It depends on how willing the devs are to implement RT in games, and on the level of implementation. It also depends on how much AMD is willing to spend sponsoring its own RT games, like Nvidia does.
I think AMD will never beat Nvidia in Nvidia-sponsored titles, and it would be stupid to join an RT race, as happened with tessellation, if there are no RT-heavy games except those sponsored by Nvidia. I would be more concerned if I had a 3090 or a 3080, since Jensen will sponsor even heavier RT games in the future for his new cards.
Since the new consoles have weaker RT hardware than the 6000 series (and Nvidia's 3000 series), we should see many light RT games in the future, and some of these games will come to PC too.

On the software side, yes, AMD are very lazy and far behind Nvidia. But I think their AI upscaling solution also depends on Microsoft. Anyway, DirectML will become the default in the future for every card manufacturer.
 
I for one would prefer AMD/MS to take their time with a quality solution rather than rush something out half-arsed :)

Also, a few tidbits in here I saw last month:

https://wccftech.com/amd-rdna-3-radeon-rx-gpus-and-zen-4-ryzen-cpus-perf-per-watt-gains-5nm/

Also, for the current gen:

"Finally and most importantly, Rick unveiled that AMD is targetting 1440p as the standard resolution for its raytracing solution. AMD themselves have not revealed much about the raytracing graphics performance of its ray-accelerator core powered Radeon RX lineup which will include the Radeon RX 6900 XT, RX 6800 XT & RX 6800 graphics cards but at least we get a baseline of where to expect the Radeon GPUs to land."
 

It will be years before developers start implementing DXR in more games.

You probably know that game development depends on the lowest common denominator, i.e. what the average Joe uses in his home entertainment rig.
That is mostly GTX 1660 / Radeon RX 580 and below.
 
Considering AMD's recent track record, I guess the question isn't whether they will achieve the 50% perf/watt gain, but by how much they will exceed it and how they will achieve it.
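Just to put that 50% figure in context, here's a rough back-of-the-envelope sketch (Python, with made-up baseline numbers purely for illustration, not real RDNA 2 figures) of what a perf/watt gain buys you at a fixed power budget or at fixed performance:

Code:
# What a perf-per-watt gain means at a fixed board power (or at fixed performance).
# Baseline numbers are invented placeholders, not real RDNA 2 figures.

baseline_perf = 100.0    # arbitrary performance index for the current gen
baseline_power = 300.0   # watts, placeholder board power
ppw = baseline_perf / baseline_power

for gain in (1.5, 1.7, 2.0):                         # +50%, +70%, +100% perf/watt
    perf_same_power = ppw * gain * baseline_power    # same watts -> more performance
    power_same_perf = baseline_perf / (ppw * gain)   # same performance -> fewer watts
    print(f"+{(gain - 1) * 100:.0f}% perf/W: {perf_same_power:.0f} perf at "
          f"{baseline_power:.0f} W, or {baseline_perf:.0f} perf at {power_same_perf:.0f} W")

So a straight 50% uplift at the same ~300 W budget is roughly a 1.5x faster card, or the same performance at ~200 W.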

I hope that AMD pushes to bring at least 1440p to the PC gaming masses with RDNA 3. I would prefer 4K though. It is a joke that 1080p is still hanging around on PCs while consoles are moving on to 4K (even if it is 30 fps).

As for RT, I want 3090 RT performance for around £300 before I take RT seriously. But let's be honest, not even Nvidia could achieve that, so it is not fair to expect it from AMD.
 
The software side means more driver releases, working with game devs on optimizations, having game-ready drivers on release day, an AI upscaling feature, features for streamers, etc. Not (only) the DXR part. Nvidia does these things better. Yes, they have more money to spend, but there is no reason why AMD can't do much better if they want to be competitive.
 

I have neither the time nor the willingness to update my graphics drivers 3 or 4 times a month. One release per month is enough, as long as it brings enough fixes and new support.
Nvidia does some of these things worse: its driver user interface hasn't been touched since the 90s.
 
Why would joining the RT race be stupid? Marketing is just as important to selling a product as the product itself. There's a reason any big-budget release of either hardware or software/media has a huge marketing budget. Unless, of course, AMD doesn't have the ability to supply that many GPUs, or doesn't think it will be able to compete.
As for AI upscaling, that doesn't exist for Nvidia either - that's what DLSS 1.0 was, and it was a dismal failure. Any sort of competent AI upscaling is a decade away at least. DLSS 2.0, like I've said before, is TAA reconstruction plus a little extra on the side, which is the only bit that actually makes use of "AI". In fact, the way forward is keeping textures and general geometric detail at native resolution and only rendering certain effects at lower res, e.g. reflections. Which is pretty much what you can see with DLSS 2.0 in something like Cyberpunk or Watch Dogs: Legion, where the RT reflections are not upscaled at all but instead are kept at the rendering resolution.
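To make the "certain effects at lower res" idea concrete, here's a toy sketch (Python/NumPy, invented buffers and numbers, and nothing to do with how DLSS or any shipping renderer is actually implemented) of compositing a half-resolution reflection buffer over a native-resolution base frame:

Code:
import numpy as np

# Toy illustration only: base geometry/textures stay at native resolution,
# while an expensive effect ("reflections" here) is rendered at half res,
# upscaled, and composited back in. Not a real renderer.

H, W = 8, 8                                            # tiny "native" resolution for the demo
base = np.random.rand(H, W, 3)                         # pretend full-res shaded frame
reflections_half = np.random.rand(H // 2, W // 2, 3)   # pretend half-res reflection buffer
mask = np.zeros((H, W, 1))
mask[H // 2:, :] = 1.0                                 # reflective region, e.g. a floor

# Nearest-neighbour upscale of the half-res effect back to native resolution
reflections_full = np.repeat(np.repeat(reflections_half, 2, axis=0), 2, axis=1)

# Blend the upscaled reflections over the native-res base only where the mask says so
final = base * (1.0 - 0.5 * mask) + reflections_full * (0.5 * mask)
print(final.shape)   # (8, 8, 3) -- the output frame is still native resolution

The point is just that the full-res detail never goes away; only the expensive effect is produced at reduced resolution and blended back in.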

Maybe a 6900 XT OC'd to the gills on water with a custom power limit can do any sort of competent RT at that 1440p target, but it's not likely. For the most part they're all 1080p cards with RT on.

The lowest common denominator is consoles. Everything revolves around them if a game isn't PC-exclusive. What people miss when counting the multitude of 1060 rigs is that those owners are never going to be interested in all these games anyway (and a lot of those rigs were net cafes); those people are playing esports titles or legacy MP games (e.g. WoW). They're not rushing to buy Cyberpunk or Flight Sim, and the devs certainly aren't going to shy away from raising minimum specs on PC because of them.
 
How many RT games are out there that are not sponsored by Nvidia? And how is AMD supposed to compete with Nvidia in their sponsored titles? Nvidia will always have an advantage, because this is what they have always done: focus on a feature and get the devs to use it heavily in the games Nvidia sponsors, ruining performance for everyone else, even owners of their own older cards.
How can you compete with that? The only way is to try to sponsor your own games, and AMD did that this year... well, it is a start anyway. Dirt 5 has a nice market on consoles, and Far Cry will come in 2021. Maybe they will be able to sponsor better games in the future; it looks like they were also involved in UE5 engine optimization, so that might help them down the line.
But again, they will never be even close in Nvidia-sponsored titles.
 
Not really our problem; that's for AMD to figure out. Frankly, Lisa Su should've fired their whole marketing department on day 1, because it's been by far the worst aspect of AMD. In reality AMD has always had good selling points vs Nvidia, but they could never market them properly. That's why Nvidia could make GeForce synonymous with GPUs, or ray tracing with RTX, etc. They're genius-level marketers, and I'm sure their attempts at mimicking Apple are no coincidence.

I shed no tears for billion dollar corporations, up to them to put up a fight if they want to keep making money.
 
It is a question of when, not if :)

Quote:
AMD patents GPU chiplet design for future graphics cards
The patent points out that one of the reasons why MCM GPUs have not been attempted in the past is due to the high latency between chiplets, programming models and it being harder to implement parallelism. AMD's patent attempts to solve all these problems by using an on-package interconnect it calls the high bandwidth passive crosslink. This would enable each GPU chiplet to communicate with the CPU directly as well as other chiplets via the passive crosslink. Each GPU would also feature its own cache. This design appears to suggest that each GPU chiplet will be a GPU in its own right and fully addressable by the operating system.

There have been leaks in the past which suggested AMD is considering the move to an MCM design for its GPUs after RDNA3, and if NVIDIA's Hopper does the same, then AMD would have very little choice but to do so as well. Intel has already achieved success using the MCM design methodology and demoed the first MCM based GPU quite a while back. One thing is for sure: things are about to get very interesting for GPU enthusiasts.

AMD Files MCM Based GPU Patent - Finally Bringing The MCM Approach To Radeon GPUs? (wccftech.com)
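For what it's worth, here's a toy mental model (Python; all the names, the address interleaving and the latency numbers are invented by me, not taken from the patent) of why per-chiplet caches plus a crosslink matter: a local cache hit is cheap, anything that has to go to memory or across to another chiplet pays extra.

Code:
# Purely a toy mental model of the chiplet write-up above, not real AMD hardware.

class Chiplet:
    def __init__(self, cid):
        self.cid = cid
        self.cache = {}          # each chiplet gets its own local cache

    def read(self, addr, crosslink):
        if addr in self.cache:   # local hit: cheap
            return self.cache[addr], 1
        # local miss: go over the passive crosslink to whichever chiplet owns the data
        value, cost = crosslink.remote_read(self.cid, addr)
        self.cache[addr] = value
        return value, cost

class Crosslink:
    """Stand-in for the 'high bandwidth passive crosslink' joining the chiplets."""
    def __init__(self, chiplets, memory):
        self.chiplets = chiplets
        self.memory = memory

    def remote_read(self, requester, addr):
        owner = self.chiplets[addr % len(self.chiplets)]   # trivial address interleaving
        if owner.cid == requester:
            return self.memory[addr], 5                    # miss to memory, no extra hop
        return owner.cache.get(addr, self.memory[addr]), 10  # extra hop over the link

memory = {a: a * 2 for a in range(16)}
chiplets = [Chiplet(i) for i in range(2)]
link = Crosslink(chiplets, memory)
print(chiplets[0].read(3, link))   # first access pays the crosslink/memory cost
print(chiplets[0].read(3, link))   # second access hits the local cache

The hard part the patent is apparently trying to solve is hiding exactly that cross-chiplet cost so software can treat the package as one GPU.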
 
AMD Talks Zen 4 and RDNA 3, Promises to Offer Extremely Competitive Products

"When it comes to RDNA 3, the company has plans to offer an architecture that has a high performance-per-watt. Just like AMD improved performance-per-watt of RDNA 2, it plans to do the same with RDNA 3, bringing the efficiency of the architecture to the first spot and making it very high-performance for any possible task."
Clicking through to the sources, we get this interesting quote:
https://www.anandtech.com/show/1640...on-2021-demand-supply-tariffs-xilinx-and-epyc
‘We’re happy with RDNA2 on performance per watt, and overall performance, and we have a lot of focus on RDNA3. On elements such as AI specific integration, we are making investments. CDNA launched in November, and you will see us adding more AI capability to our CPUs and GPUs.’

https://www.thestreet.com/investing/amds-rick-bergman-talks-about-current-and-next-gen-cpus-and-gpus
Bergman: “Absolutely. In terms of inferencing and AI, and so on. Yes. Again, we have that capability. Certainly, we have our high-end...training or inferencing solutions, to potentially solutions that are more for the client or endpoint devices as well. And again unfortunately...I can't give you details on what we may have up our sleeves or what's coming, but it's certainly a very interesting area.”
 
Watch as I use my patented method to attempt to guess when the first 5nm part will be released. Amaze your friends, astound your co-workers, etc.

So the Apple A12 Bionic was first released in September 2018. The first 7nm AMD GPU was the Radeon VII which came out February 2019. That means a gap of about 5 months between an Apple mobile part and an AMD GPU part.

So the Apple 5nm part, the A14, was released in September 2020, meaning the AMD 5nm parts must be coming... next month! :D:p
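For anyone who wants to plug in their own dates, here's the "patented method" as a few lines of Python (launch dates are approximate, taken from the post above, and the projection is obviously a joke rather than a prediction):

Code:
from datetime import date

# Measure the Apple-mobile-part -> AMD-GPU gap on 7nm, then apply it to 5nm.
apple_7nm = date(2018, 9, 21)   # A12 Bionic launch (approx.)
amd_7nm   = date(2019, 2, 7)    # Radeon VII launch
gap = amd_7nm - apple_7nm       # roughly five months

apple_5nm = date(2020, 9, 15)   # A14 launch (approx.)
print(apple_5nm + gap)          # projected first 5nm AMD GPU: early 2021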
 
You forgot to add a COVID delay into your equation.:p
 
Yeah, very interesting :eek:

It could be only for yields and decreased production costs, or it could be a new breakthrough, like the Infinity Cache, that greatly improves performance.
The so-called "uncore" part of the GPU could go onto its own silicon, while the compute units sit on their own.

I thought the ray-tracing cores could have their own silicon, too.

Hopefully it will come at lower prices, or many people couldn't care less what's in them (except perhaps miners).
 