
RDNA 3 rumours Q3/4 2022

It seems like an easy way to move the goalposts, as there are always more "rays" to be "traced".

So what happens when all mainstream cards can run a given title at max settings including ray tracing? Well, then we get an update: "psycho" ray tracing. What's that? Now cards can run that too? Don't worry, we'll put out a patch with "overdrive" ray tracing.

Soon it will be "super duper, no we really mean it this time" ray tracing.
But isn't that how stuff works in general, raster included?
 
The presentation made AMD's goals clear.
They realised they can't match Nvidia on RT this gen (to be fair, the gap got wider if anything), so they focused on high-refresh-rate "traditional" gaming. Which isn't a bad choice, they have all the arguments there:
- lower driver overhead on the CPU
- cheaper chip to make
- adopted DP 2.1

So if anyone wants to play at very high refresh rates at all costs, I would say the 7900 XTX is the better choice, because it costs roughly 60% of the price while being merely ~20% slower - and possibly less at 1440p, given the massive CPU bottleneck on Nvidia.
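A rough back-of-envelope of that price/performance argument (purely illustrative: the announced $999 / $1,599 US MSRPs and an assumed ~20% raster deficit, not benchmark data):

```python
# Back-of-envelope price/performance comparison. Inputs are assumptions:
# announced US MSRPs plus a guessed ~20% raster deficit, not benchmarks.
msrp = {"7900 XTX": 999, "RTX 4090": 1599}
relative_raster_perf = {"7900 XTX": 0.80, "RTX 4090": 1.00}  # normalised to the 4090

for card in msrp:
    perf_per_dollar = relative_raster_perf[card] / msrp[card]
    print(f"{card}: {perf_per_dollar * 1000:.2f} relative perf per $1000")

# 7900 XTX ~0.80 vs RTX 4090 ~0.63 per $1000 -> roughly 28% more raster
# per dollar for the XTX, which sits at ~62% of the 4090's price.
```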

Not a card for me, as I cannot tell the difference between 120Hz and 175Hz, and I like to play with RT. But I think there is an audience for this card and it's a fair proposition at this price.

PS. One downside I can see is stagnation for RT: if the 7900 XTX is still garbage at RT, there is no hope for better-performing RT chips in the PS5 Pro / next Xbox.
PPS. The gap in RT is so wide that the Adored video saying AMD will certainly win next gen because they will just double the shaders is now proven to be complete BS.
 
Any ideas where the regression in raster performance per SM comes from? You don't expect perfect scaling, but what is shown in the presentation is sublinear.

I'm really interested to see why that might be the case. I can think of a bunch of possible causes: cache hit rates, cache bandwidth, SM specialisation, backend limitations.

I'm super naive when it comes to GPU arches and only properly understand caches, so if anyone has insights it would be great to hear them.
 
I'm really surprised by the regression in raster performance per SM. You don't expect perfect scaling, but what is shown in the presentation is sublinear.

I'm really interested to see why that might be the case. I can think of a bunch of possible causes: cache hit rates, cache bandwidth, SM specialisation, backend limitations.

I'm super naive when it comes to GPU arches and only properly understand caches, so if anyone has insights it would be great to hear them.
It can be attributed to Amdahl's law (strictly speaking it's formulated for CPUs, but some of the principles carry over to GPUs).

That's why Adored's claim that AMD will win because they adopted chiplets and can simply multiply shader count is just plain wrong. You can already see diminishing returns per SM at Nvidia, and now you see diminishing returns at an even faster rate at AMD (if anything, a chiplet µarch is more affected by this phenomenon).
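A minimal sketch of the principle (toy numbers: the 5% non-scaling fraction is made up purely for illustration, and on a GPU the "serial" part is really things like the front end, ROPs and memory rather than a literal serial thread):

```python
# Toy Amdahl's-law illustration: total speedup flattens as parallel units
# are added, so efficiency per unit keeps dropping. The 5% serial fraction
# is an illustrative guess, not a measured GPU figure.
def amdahl_speedup(units: int, serial_fraction: float = 0.05) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / units)

for units in (16, 32, 64, 96, 128):
    s = amdahl_speedup(units)
    print(f"{units:3d} units: {s:5.2f}x speedup, {s / units:6.1%} per-unit efficiency")

# 16 units: ~9.1x (57% efficiency) ... 128 units: ~17.4x (14% efficiency).
# Every extra unit still helps, but by less and less.
```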
 
I think you are being overly optimistic about how close we are to an "end game." I think Ada's ray tracing performance could be considered trash tier for future ray tracing implementations.

I suspect there will be more rays to trace for many, many generations to come.
Obviously, but my point was that RT does matter and it's something AMD have to catch up on at some point. Most high-end displays on the market are 4K 120Hz and the 4090/7900 XTX easily exceed that target, so you need ray tracing for a meaningful upgrade to your gaming experience. I have a 240Hz Samsung Odyssey Neo G8 and I struggle to see differences between 90 and 130 fps, so I opt to use ray tracing instead. With the 7900 XTX you cannot do that, as you will be stuck at 60 fps. Yes, that is playable, but the 4090 would be much more enjoyable.
 
It can be attributed to Amdahl's law (strictly speaking it's formulated for CPUs, but some of the principles carry over to GPUs).

That's why Adored's claim that AMD can win because they adopted chiplets and can simply multiply shader count is just plain wrong. You can already see diminishing returns per SM at Nvidia, and now you see diminishing returns at an even faster rate at AMD (if anything, a chiplet µarch is more affected by this phenomenon).
Sure, that makes sense, but why throw so many SMs in if you can't feed them? As I said, I'm naive when it comes to GPUs, so the question is probably just dumb...

EDIT: The only thing Amdahl's law ?doesn't? take account of is how modern CPUs can take sequential programs, look at the branches, do prediction and perform out-of-order processing in parallel. I had figured GPUs didn't suffer from the single critical-path thread, as their operations are highly parallel by nature, but I guess the interactions between the SMs' workloads (sticking it all together) start to dominate pretty quickly. It's (probably) analogous to the fact that widening CPUs doesn't give anywhere near linear scaling, as there are naturally only so many branches you can execute in parallel, and even then your cache becomes a limitation very quickly.

EDIT 2: FFS, Amdahl took OoO into account in 1967, so ignore the first sentence of my prior edit... Goddamn computer scientists are smart...
 
Loving some of the comments here. Numbers flying out of every orifice and we don't even have any official benchmarks. The anti-AMD brigade crawling out of the woodwork trying to find some fault, and some people trying to justify a melty, fire-risk, coil-whining, oversized, stupidly power-hungry, ridiculously priced 4090. Hilarious.

Almost as funny as people saying I now need to buy a DP 2.1 monitor. How dare you, AMD!! Well, duh, obviously. When Nvidia catch up and announce support, I'm sure everyone will be gushing. :cry:
You totally don't sound like the anti-Nvidia brigade. Totally :p
 
People downplaying RT are straight-up Luddites. Ray tracing is what decades of GPU progress have been building towards. Ray tracing and HDR are transformative for picture quality and put the same game years ahead of the version without them in visual quality and fidelity.

While HDR is "free" (the cost to enjoy it is a high-end display such as the AW3423DW), RT needs a lot of GPU power for now. In the end, we should be encouraging both vendors to be in an arms race to deliver 2x+ generational performance in RT while overtaking each other and swapping places. The more robust RT performance is across the board, the more inclined developers are to use it. You need the raw power to be there, along with the toolset, for game devs to invest their time.

I don’t care about logos. I care about end results. I don’t want to see RT stagnation because one of the two players has decided to phone it in and play the value card.

Agreed. Nvidia has 80% discrete GPU market share for a reason. With the disastrous RX 7000 series launch/reveal (disastrous in that they didn't even compare against the 4090, wonder why...) and 3080-level RT, I can see AMD losing further market share, as it's only going to be fanboys buying these cards.

The majority will buy a cheap/second-hand 3080/3090 (for better RT performance than the RX 7000 series), and those with more to spend will grab an RTX 4000 series card.

Also, all this is before we even get a look at drivers - a new architecture could be a mess for months/years, judging from AMD's past form. I'd certainly not want to risk £1,000 on their driver team.
 
Sure, that makes sense, but why throw so many SMs in if you can't feed them? As I said, I'm naive when it comes to GPUs, so the question is probably just dumb...
Well, you can keep throwing SMs at it because you still see gains; the gains are just getting smaller and smaller.
It's not a badly designed arch; it has plenty of ROPs to match the SMs.
What's a bit contradictory is AMD's claims of innovation. This is just a brute-force approach - throw in a lot of SMs and a lot of ROPs, lower the power target per SM to claim efficiency gains, and that's it. The fact that the MCDs sit outside the main GPU die does not automatically make it innovative.
The last truly innovative GPU arch was Maxwell; Ada to this day is still an evolution of Maxwell without major changes (the only significant changes were the INT/FP32 core reorganisation and of course RT, but that's outside pure raster).
 
AFAIK the presentation didn't mention VR :( 2% of gamers play in VR, which goes up to 15% among simmers (Steam numbers). We often spend quite a lot of money on our hobby. Missed opportunity IMO.
 
The cards are great; for people who don't care about RT, the XTX will be a beast. Sadly, I do care about it, so I went for a 4090, but still, I didn't expect this pricing from AMD. Jensen will have to shove his 4080 12GB and 16GB versions down his...
 
Well, you can keep throwing SMs at it because you still see gains; the gains are just getting smaller and smaller.
It's not a badly designed arch; it has plenty of ROPs to match the SMs.
What's a bit contradictory is AMD's claims of innovation. This is just a brute-force approach - throw in a lot of SMs and a lot of ROPs, lower the power target per SM to claim efficiency gains, and that's it. The fact that the MCDs sit outside the main GPU die does not automatically make it innovative.
The last truly innovative GPU arch was Maxwell; Ada to this day is still an evolution of Maxwell without major changes (the only significant change was the INT/FP32 core reorganisation).
Makes perfect sense. Looking at the 4090, they spent a ton of transistors on a relatively meagre raster performance uplift, but there was obviously a huge transistor budget for RT and ML acceleration.

Re. chiplets and innovation, I think they'll become more interesting when there are truly heterogeneous chiplets, e.g. basic compute, memory controllers, RT cores, encode cores. The width of connectivity required is absolutely terrifying though.

Oh and one thing that does impress me is the power consumption impact of using chiplets. I would have expected a reasonable power uplift just to service those crazily fast and wide buses between dies. But I guess they're getting some savings back by being able to run the different dies at different frequencies without taking the usual cross-clock-domain hits, or at least those latencies are hidden within the interfaces between the dies.
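A quick back-of-envelope on that die-to-die power question (both inputs are assumptions for illustration: the ~5.3 TB/s peak link bandwidth figure from the launch coverage, and a guessed ~0.5 pJ/bit for a short on-package fanout link):

```python
# Rough estimate of die-to-die interconnect power: power = bandwidth * energy per bit.
# Both inputs are illustrative assumptions, not measured figures.
PEAK_BANDWIDTH_TBPS = 5.3   # ~5.3 TB/s peak quoted for the fanout links (assumed)
ENERGY_PER_BIT_PJ = 0.5     # guessed pJ/bit for a short on-package link

bits_per_second = PEAK_BANDWIDTH_TBPS * 1e12 * 8
power_watts = bits_per_second * ENERGY_PER_BIT_PJ * 1e-12

print(f"~{power_watts:.0f} W to drive the links flat out")  # ~21 W with these guesses
```

With those guesses it comes out around 20 W, i.e. noticeable but a small slice of a ~355 W board power budget, which would fit the observation that the chiplet overhead looks modest.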
 
If Nvidia didn’t have DLSS and such a huge RT advantage, these cards would be immense.
I still think AMD need to aim for the mid range and capture the market and the more budget end.
 
It seems like an easy way to move the goalposts, as there are always more "rays" to be "traced".

So what happens when all mainstream cards can run a given title at max settings including ray tracing? Well, then we get an update: "psycho" ray tracing. What's that? Now cards can run that too? Don't worry, we'll put out a patch with "overdrive" ray tracing.

Soon it will be "super duper, no we really mean it this time" ray tracing.
Uhm, isn't that the entire point of buying new GPUs? To get better graphics? Your post was really confusing.
 
Makes perfect sense. Looking at the 4090, they spent a ton of transistors on a relatively meagre raster performance uplift, but there was obviously a huge transistor budget for RT and ML acceleration.

Re. chiplets and innovation, I think they'll become more interesting when there are truly heterogeneous chiplets, e.g. basic compute, memory controllers, RT cores, encode cores. The width of connectivity required is absolutely terrifying though.

Oh and one thing that does impress me is the power consumption impact of using chiplets. I would have expected a reasonable power uplift just to service those crazily fast and wide buses between dies. But I guess they're getting some savings back by being able to run the different dies at different frequencies without taking the usual cross-clock-domain hits, or at least those latencies are hidden within the interfaces between the dies.

Yeah, the power draw is very frugal when you consider the energy needed to run those memory buses. They must have learned a lot from Ryzen and how to get the Infinity Fabric to help out.

A 183-page thread and still not a meaningful benchmark from independent reviewers, so the only relevant detail we actually have is the US pricing.
 
Nvidia have the ultimate performance; AMD is going for what you would call relative value.
 
AFAIK the presentation didn't mention VR :( 2% of gamers play in VR, which goes up to 15% among simmers (Steam numbers). We often spend quite a lot of money on our hobby. Missed opportunity IMO.

The raster bump will apply to VR, but clearly there's no unique tech such as VRSS, SPS etc.

I'd still expect niche players like XTAL, Varjo etc. to be Nvidia-only.
 