RDNA 3 rumours Q3/4 2022

I can.

Look at TPU who measure the % FPS reduction when turning on RT. The delta between what a 3090Ti loses and what the 4090 loses is not that great.

Provided the 7950 does indeed have a 420W TBP, then with 1.5x the perf/watt it is 2.1x faster than a 6900XT, which would put it 10-20% faster than the 4090 depending on which review you use. All AMD need to do is match Ampere's % FPS drop and that raster advantage will be enough to see them tie or beat the 4090 in RT.
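As a rough sketch of where the 2.1x comes from (assuming the 6900XT's 300W reference TBP and taking the rumoured 420W TBP and 1.5x perf/watt at face value):

```python
# Back-of-the-envelope for the 2.1x claim above. Assumes a 300W reference TBP for the
# 6900XT; the 420W TBP and 1.5x perf/watt figures are rumours, not confirmed numbers.
power_ratio = 420 / 300              # ~1.4x the power budget of a 6900XT
perf_vs_6900xt = power_ratio * 1.5   # x1.5 perf/watt uplift -> ~2.1x a 6900XT
print(round(perf_vs_6900xt, 2))      # 2.1
```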

Now everybody is saying it will come in between Ampere and Lovelace, so I presume they mean the % drop-off is greater than Ampere's but the raw raster advantage means the final FPS is still higher. I can see that happening as well.
What about when we see RT games where SER is actually being used, though? I can't remember exactly, but hasn't Nvidia stated a 40-50% improvement with it on 40xx cards?
 
So, from that document, one of the most important bits is: "DLSS is not designed to enhance texture resolution. Mip bias should be set so textures have the same resolution as native rendering". This, together with a few other bits, tells me one thing - AI is only used to do AA and to stack frames (especially with jitter introduced). All the extra pixels come not from AI but from the existing jittered frames; when you stack them together, they fill in the blanks.
You've misread/misunderstood. The jittered frames are one of the inputs to the AI model - it then outputs the AI generated pixels.

Which means you can't get a good image from just one frame; you have to stack a bunch of them first.
Well yes, but that's because you can't get motion vectors from one frame. All techniques that use motion vectors (TAA, SMAA multi, DLSS, FSR2.0 etc.) need input from more than one frame.

When NVIDIA talks about hallucinating pixels in the image they talk about ML in general, and that is how DLSS 1 worked - which was just as bad in quality as all the other AI examples they showed there. That is why they had to redo the whole thing in DLSS 2: instead of letting the AI run wild hallucinating pixels (and usually doing it wrong), they gave it a much simpler and stricter job, which also did not require huge training per game (and is generic now). This is consistent with what I heard from people who supposedly read leaked source code of DLSS 2.

In short, the AI in DLSS 2.0 (as of the time that document was written) doesn't seem to be filling in any missing pixels (except for AA purposes), and it doesn't improve texture quality and detail either (as NVIDIA says themselves). It stacks frames in a slightly better way than temporal upscaling (though that evolved too - I believe that's what FSR 2 is based on, with no AI involved at all) but is still prone to errors (no AI is perfect). Nice-looking textures are the effect of a proper mip bias setting on textures and not of DLSS itself (as per NVIDIA's own words in that doc).
No, that's incorrect. What you're describing is basically TAA type upscale without use of a neural net model. DLSS2.0 absolutely reconstructs/hallucinates - they show that quite clearly - but it does so with many more inputs to the model than DLSS1.

Now, FSR2 doesn't seem much different from DLSS 2, aside from the fact that the latter uses AI for AA (and does a very good job at that, along with regenerating thin lines). For such simple work, generic algorithms, if chosen well, should be good enough.
That's also incorrect. FSR2 is a TAA-type upscale, not a DLSS 2.0 (and XeSS, for completeness) type. Remember DLSS doesn't do AA, unless you downsample afterwards (and then it's called DLSSAA) - but you might get better pixels from the model that look AA'd.
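To make the "stacking jittered frames" idea concrete, here is a deliberately simplified sketch of the TAA-style accumulation that FSR2/TAAU-type upscalers discussed above are built on - not DLSS itself. The function names, nearest-neighbour reprojection and fixed blend factor are illustrative assumptions; real implementations add history clamping/rectification, proper jitter sequences and upsampling filters. The mip-bias helper reflects the common log2(render/display) rule of thumb behind the quoted guidance, not an exact formula from the document.

```python
import math
import numpy as np

# Textures are usually given a negative mip bias when rendering below output resolution,
# so they keep roughly native-resolution detail (the point the quoted guideline makes).
def texture_mip_bias(render_width: int, display_width: int) -> float:
    return math.log2(render_width / display_width)   # e.g. log2(2560/3840) ~ -0.58

def accumulate(history: np.ndarray, current: np.ndarray, motion: np.ndarray,
               alpha: float = 0.1) -> np.ndarray:
    """Blend the current jittered frame into motion-reprojected history.

    history: (H, W, 3) accumulated colour at output resolution
    current: (H, W, 3) this frame's samples scattered/upsampled to output resolution
    motion:  (H, W, 2) per-pixel motion vectors in output-resolution pixels
    """
    h, w = history.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Reproject: look up where each pixel was last frame (nearest neighbour for brevity).
    px = np.clip(np.rint(xs - motion[..., 0]).astype(int), 0, w - 1)
    py = np.clip(np.rint(ys - motion[..., 1]).astype(int), 0, h - 1)
    reprojected = history[py, px]
    # Exponential blend: each new jittered sample contributes a little, history the rest,
    # so detail builds up over several frames - which is why one frame is not enough.
    return alpha * current + (1.0 - alpha) * reprojected
```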
 
amd.jpg

The current leaks/rumors so far.
 

That assessment doesn't make a lot of sense, because that's independent of architecture; it's all about allocating the transistor budget between RT and non-RT. If Nvidia allocates more transistors to RT, that % difference would shrink. You don't need some ground-breaking innovation to reduce the gap; all it needs is a shuffling of transistors. Maybe it's just part of the strategy.


Last I heard, the 7950 could easily hit 4GHz.
 
amd.jpg

The current leaks/rumors so far.

No.

N33 has 16 WGPs and 4096 shaders. It will also be the 7600XT, because AMD would be nuts to release a $530 x700-tier card with just 8GB of RAM.

N32 will cover both the 7800 and 7700 tiers: at the 7800 tier it will be 4 MCDs with a 256-bit bus and 16GB of RAM, and at the 7700 tier 3 MCDs with a 192-bit bus and 12GB of RAM. N32 is also only 30 WGPs with 7680 shaders and 3 shader engines.

N31 will also have a cut version with 5 MCDs and 20GB of RAM somewhere in the mix.

Also worth noting that N33 is on N6 and is around 200mm^2, so smaller than N23 and on a cheaper node. It will be very margin-friendly, so AMD could sell it for $400, and it is probably in the 6800 / 3070 tier performance-wise at 1440p/1080p. Maybe a bit faster depending on the clock speed it hits.
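For what it's worth, the bus widths and VRAM amounts in those configs follow directly from the MCD count, since each RDNA3 MCD is understood to carry a 64-bit GDDR6 interface with a 2GB chip per 32-bit channel. A quick sanity check (the 6-MCD full N31 row is my assumption, not something stated above):

```python
# Each MCD = 64-bit GDDR6 interface; 2GB (16Gbit) chip per 32-bit channel.
def memory_config(mcds: int, gb_per_32bit_channel: int = 2):
    bus_width_bits = mcds * 64
    vram_gb = (bus_width_bits // 32) * gb_per_32bit_channel
    return bus_width_bits, vram_gb

for name, mcds in [("N31 full (assumed)", 6), ("N31 cut", 5), ("N32 full", 4), ("N32 cut", 3)]:
    print(name, memory_config(mcds))
# N31 full (assumed) (384, 24)
# N31 cut (320, 20)
# N32 full (256, 16)
# N32 cut (192, 12)
```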


We don't know what AMD have done RT-wise in RDNA3. They could have allocated a bit more transistor budget to it vs RDNA2 and brought down the impact of turning RT on. All I am saying is that if they have matched Ampere in that, then provided the top card really is 420W and AMD indeed hits their advertised 1.5x perf/watt claim (they exceeded it for Vega -> RDNA and again for RDNA -> RDNA2, so I see no reason they wouldn't match or exceed it a third time), the performance gain over the 6900XT is 2.1x, which would put the raster perf about 10-20% ahead of the 4090.

So the maths would be (based on RT costing the 4090 35% of its frames, and costing the 7950 40% of its frames like it costs Ampere):

       | RT off    | RT on
4090   | 100       | 65
7950   | 110 - 120 | 66 - 72

So even with worse RT performance (ie a larger cost in FPS) with enough of a raster advantage it won't matter.
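Or, expressed as a quick calculation with the same numbers (the 35%/40% RT costs and the 110-120 raster index are the post's assumptions, not measurements):

```python
# Relative FPS with RT on is just the raster index scaled by (1 - RT cost).
def rt_on(raster_index: float, rt_cost: float) -> float:
    return raster_index * (1.0 - rt_cost)

print(rt_on(100, 0.35))                    # 4090: 65
print(rt_on(110, 0.40), rt_on(120, 0.40))  # 7950: 66 - 72
```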
 
AMD are just gonna come in a couple of hundred lower for equal perf, and I'm OK with paying a little more for the Nvidia card: better industry support generally, plus the DLSS bonus.

Don't get me wrong, I've had a multitude of AMD cards and love them; just sold my 6900XT and that was one of the best :)
 

I don't know if you have read the Ada whitepaper, but right now it looks like Nvidia is going to dominate RT. They have added hardware acceleration structures for:
- recursive raycasting
- transparency mapping - this can literally cripple AMD if Nvidia goes full steam with GameWorks or similar kinds of encouragement
and then other claims like a higher number of triangle intersection tests per core, which have now become standard fare, while AMD would perhaps be building their first dedicated RT core. I am not holding my breath.

Though there's a 50:50 chance, with the kind of leaks we have seen, that AMD might match or exceed Nvidia's raster performance.
 
I'd wonder about that - the majority of high-end games coming out now are console ports, which use AMD tech in them. It's not worth developers' time to produce a multi-platform game (which are the big sellers) in a way that locks out one manufacturer. Games are always, as things stand, going to be fine on AMD. The future may change things, but not for this gen.
 
It's relative. Some of those bits, like the recursive structure used for SER, are outside the DX12 spec and will be enabled through NVAPI, so it's not a matter of crippling the competitor in absolute terms; but if devs are encouraged to use the new features, AMD will look crippled in relative terms.
 