
RDNA 3 rumours Q3/4 2022

Yeah - I mean that's the whole basis by which it works! All upscalers are generating pixels - TAA upscaling uses neighbouring and previous frames' pixels to guess at a new pixel, while DLSS uses a neural net model which takes that same input but has been trained on reference images and uses that to guess at a new pixel. You'll see it called things like reconstruction and hallucination.

There's a slide deck here - go to about slide 31 and read onwards:

So, from that document, one of the most important bits is: "DLSS is not designed to enhance texture resolution - Mip bias should be set so textures have the same resolution as native rendering". This, together with a few other bits, tells me one thing - AI is only used to do AA and to stack frames (especially with the jitter introduced). All the extra pixels you get come not from AI but from the existing jittered frames: when you stack them together, they fill in the blanks. Which means you can't get a good image from just 1 frame, you have to stack a bunch of them first.

When NVIDIA talks about hallucinating pixels in the image they are talking about ML in general, and that is how DLSS 1 worked - which was just as bad in quality as all the other AI examples they showed there. That is why they had to redo the whole thing in DLSS 2: instead of letting the AI run wild hallucinating pixels (and usually getting it wrong), they gave it a much simpler and stricter job, which also did not require huge per-game training (it's generic now). This is consistent with what I heard from people who supposedly read leaked source code of DLSS 2.
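For anyone curious what that mip bias instruction amounts to in practice: as I understand NVIDIA's integration docs, the recommended texture LOD bias is roughly log2(render resolution / output resolution), i.e. a negative value so textures are sampled at native-res detail despite the lower internal render. A quick sketch of that sum (function name and Python are my own):

```python
import math

def dlss_mip_bias(render_width: int, output_width: int) -> float:
    # Negative LOD bias so textures keep native-res detail even though
    # geometry is rendered at the lower internal resolution.
    return math.log2(render_width / output_width)

# e.g. Quality mode at 4K: 2560 internal width -> 3840 output width
print(dlss_mip_bias(2560, 3840))   # ~ -0.585
```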

In short, the AI in DLSS 2.0 (as of the time that document was written) doesn't seem to be filling in any missing pixels (except for AA purposes), and it doesn't improve texture quality or detail either (as NVIDIA says themselves). It stacks frames in a somewhat better way than temporal upscaling (though that has evolved too - I believe that's what FSR 2 is based on, with no AI involved at all) but is still prone to errors (no AI is perfect). Nice-looking textures are the effect of a proper mip bias setting on textures and not of DLSS itself (as per NVIDIA's own words in that doc).
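To make the "stacking jittered frames" idea concrete, here's a minimal toy sketch of generic temporal-accumulation upscaling - nothing to do with NVIDIA's or AMD's actual code, and the 1D scene, blend factor and helper names are all made up for illustration:

```python
import numpy as np

def render_lowres(scene_hi, jitter, n_lo):
    # Stand-in for the game's low-res render: point-sample a high-res 1D
    # "scene" at jittered low-res pixel centres (jitter in low-res pixels).
    scale = scene_hi.shape[0] / n_lo
    idx = ((np.arange(n_lo) + 0.5 + jitter) * scale).astype(int)
    return scene_hi[np.clip(idx, 0, scene_hi.shape[0] - 1)]

def scatter_to_output(lowres, jitter, out_n):
    # Place this frame's samples onto the output-res grid; different jitters
    # land on different output pixels, which is what "fills in the blanks".
    scale = out_n / lowres.shape[0]
    pos = np.floor((np.arange(lowres.shape[0]) + 0.5 + jitter) * scale).astype(int)
    out = np.full(out_n, np.nan)
    out[np.clip(pos, 0, out_n - 1)] = lowres
    return out

def accumulate(history, current, alpha=0.2):
    # Exponentially blend new samples into the history buffer. A real
    # TAA-U / FSR 2 / DLSS 2 pass would first reproject the history with
    # motion vectors and reject mismatching samples (to avoid ghosting).
    out = history.copy()
    m = ~np.isnan(current)
    out[m] = (1 - alpha) * history[m] + alpha * current[m]
    return out

# Static-scene toy: 8 jittered low-res samples reconstruct a 16-pixel output.
scene = np.sin(np.linspace(0, 3 * np.pi, 16))
history = np.zeros(16)
for jitter in [-0.25, 0.25] * 20:          # alternate sub-pixel offsets
    frame = scatter_to_output(render_lowres(scene, jitter, 8), jitter, 16)
    history = accumulate(history, frame)
print(np.round(history - scene, 2))        # residual shrinks towards zero
```

The point of the toy is only that, for a static scene, a few jittered low-res frames recover the full-res signal without any AI; the AI part in DLSS 2 is deciding how to blend/reject those samples once things move.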

Now, FSR 2 doesn't seem much different from DLSS 2, aside from the fact that the latter uses AI for AA (and does a very good job at that, along with regenerating thin lines). For such simple work, well-chosen generic algorithms should be good enough.
 
Yeah MSAA is at its best on 3D edges. Foliage and other things that are projections of textures onto 2D planes are not great for MSAA, especially as they nearly all use partially transparent textures, so there's not really an edge, just a transition from visible to transparent within the texture. Again, since it's a well-developed method there are several workarounds - I always set transparency AA to SSAA in the drivers, which helps a lot, but really you need screen-space AA to deal with them best.

I just fired up RDR 2 and forgot how awful MSAA is in motion when going through tree areas :eek: You even have leaves/objects randomly disappearing but yes, it's better than DLSS........ :cry:

PS. Good to see you also agree on the point I am making ;)

Well it's not irrelevant if they can't be used :p But really the comparison should have been to SSAA.. but then that's not native either :p

MSAA is far from brute force - it's extremely elegant and has had a ton of development over the years.
SSAA is still with us though it's now listed under resolution scaling at either game or driver level - it's render agnostic.
Then we've got the bunch of post-process smoothers - MLAA -> FXAA -> single-frame SMAA - and then the 2D-plus-motion-vector ones like multiframe SMAA and TAA.

DLSS is most similar to the SS part of SSAA - it's generating extra pixels, but rather than sampling them relatively simply from neighbouring pixels of a higher-resolution render, it's getting them from an ML model which takes things like neighbouring pixels, motion vectors and its prior training set into account. Downsample back to native resolution and you get DLSSAA - at which point it should definitely be compared with SSAA.
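Rough toy illustration of the SS comparison (my own sketch, not any driver's implementation): SSAA actually renders the extra samples and box-filters them back down, whereas the DLSS route predicts those extra samples instead of rendering them:

```python
import numpy as np

def ssaa_resolve(img_hi, k):
    # Classic ordered-grid SSAA resolve: average each k x k block of the
    # supersampled image down to one native pixel (a simple box filter).
    h, w = img_hi.shape[0] // k, img_hi.shape[1] // k
    return img_hi[:h * k, :w * k].reshape(h, k, w, k).mean(axis=(1, 3))

# 4x4 supersampled render of a diagonal edge, resolved to 2x2 native pixels
hi = np.array([[0, 0, 1, 1],
               [0, 0, 1, 1],
               [0, 1, 1, 1],
               [1, 1, 1, 1]], dtype=float)
print(ssaa_resolve(hi, 2))
# [[0.   1.  ]
#  [0.75 1.  ]]  <- the partially covered pixel gets a softened edge value
# A DLSS-style upscaler flips this around: the extra samples per native pixel
# are not rendered at all, but predicted from jittered history + motion vectors.
```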

But I again agree it's almost always not better than native - however there are edge cases where it can be, for example where there isn't enough information in the native pixels to resolve something, but the predicted pixel out of the ML model might end up being a better choice. But that's super rare.

Are you really asking me for the games I have tested? Even though you directly quoted me referring to WDL as a perfect example? If you play the game with RT off you get overall better IQ if you run SMAA vs TAA+DLSS Quality. SMAA at 4K reduces jaggies sufficiently that shimmering is not a major issue and textures look sharp even into the distance. There is also no other artifacting introduced. DLSS will offer some improvements with aliasing and eliminates shimmering at the cost of distant detail, as it adds a level of blur. It also introduces some very noticeable ghosting at high speed (for example driving), which gets worse if you add sharpening to reduce the blur. So overall DLSS is not better than native with SMAA in WDL; it wins some and loses some, with the preference being down to personal taste.

The only reason I use DLSS in this game is so I can enable RT.

You asked for proof, I gave it (WDL). Now you are moving goalposts to include all elimination of artifacting, shimmering, haloing etc so you can claim a "win". Even DLSS doesn't do that.

Let me make it very simple so you can follow it. DLSS is ONLY better than native when you are comparing to the low bar that is TAA only.

Seems someone's jimmies have been rustled again, why so defensive :cry:

I have already said "ubi" games are a good example of where you can see good AA implementations, and I also listed other games but they are much older...... what else is there recently? You are making it sound like we have good AA options in the majority of games when we simply don't, hence why so many reviewers state that DLSS is better than native...... That's the problem that I and kalniel are referring to, i.e. it's "very rare" to see games with good AA techniques now. How are you not getting this? Surely you should be able to list loads of games, the way you are making out that we have loads of good AA choices?

Have you got any photos/videos to show all these areas of DLSS being so much worse than native like you claim? Because as we have seen time and time again from the majority of end users, including tech press like TPU, DF, gamersnexus, HUB and oc3d.net, they have all shown many times in plenty of games where DLSS does better than native, hence why they also say "DLSS is better than native" in certain areas - unless you can of course upload some proof to "debunk" them?

You still haven't answered which one of those RDR 2 images look better either....

I remembered another game that I tested that had options for FXAA, TAA, or TAA+DLSS: Death Stranding.

TAA again adds excessive blur that really impacts detail.
FXAA gives very good results, shimmering is limited and textures remain sharp.
DLSS adds a level of detail back compared to TAA and also gives a nice performance boost, but again some ghosting and oversharpening (or a mesh-type effect) is evident.

I would say that again it would be personal preference whether you went DLSS or FXAA, but I would certainly not call DLSS better than native overall.

Here is one of those sites that looked at Death Stranding and provides comparisons/evidence backing up why they make this claim:

DLSS is a technology which enables faster framerates without any compromises. DLSS provides better Anti-Aliasing than the game's default AA (in quality mode) and removes many of the artefacts that can be seen with TAA, such as ghosting and grainy transparencies. As a whole, DLSS' quality mode looks better than the Death Stranding's native resolution presentation, which when combined with DLSS' framerate boost is a clear win for Nvidia.

The other AA methods are not good either; FXAA is awful, hence why most sites never use/test it.

To me it seems you value sharpness/clarity far more than anything else, which is OK and will make native plus whatever AA best for you, but again that is not the case for everyone, as attested to by several good review sites and hundreds of end consumers - unless of course you can post some good comparisons to back up your thoughts....


Using best AA on offer vs DLSS Quality (screenshots of):

CP
Spiderman
DL 2
RDR 2 (can't change the DLSS file to the newest one so not getting the best from it here)
Which ones look best then? Even though it's a bit pointless as you are only seeing still shots, i.e. you can't get a real sense of shimmering, aliasing and jaggies (which, again just to make clear, do count towards image quality). Whereas in motion, the issues with the good/native AA are far worse/more noticeable. The only time MSAA looks good in RDR 2 is when standing still, and even then some parts look worse.
 

Also, you say that IQ is not better with dlss, then you say this?

So overall DLSS is not better than native with SMAA in WDL; it wins some and loses some, with the preference being down to personal taste.

:cry:

So now DLSS can be better than native + SMAA, depending on what one values? i.e. what I have said all along:

That's the thing when it comes to these AA methods and upscalers, it is very much a case of pick your poison: some are better in certain areas than others and everyone has different preferences. Most seem to prefer sharpness over any kind of softening, even if it increases artifacts, which is OK - for me it just reminds me of overdone sweetfx/redux presets though. It's why I also put spiderman on hold, as dlss had sharpening turned up with no way to disable it and the game looked awful with it, but most people didn't notice..... Whatever gives the least shimmering/aliasing and jaggies and the best temporal stability will always be my go-to, and this is where dlss is strongest.

:cry:
 
Is it not performance related anyway? Theoretically, if you were running 4K at 10,000 fps (I know...) at native, surely that would deal with jagged edges, shimmering etc? DLSS brings the frame rate up, which then leads to a shorter distance between pixels for each frame of movement. Is there an example of using DLSS to upscale from 1080p locked to 60fps, which is then compared to a 4K locked 60fps image?

I just don't buy that DLSS-generated frames are better than native on image alone, unless it is upscaling to a resolution and framerate higher than the one you are comparing it to. Gameplay? I think that would depend on performance.

Happy to be corrected though.
 

Already mentioned, but running at 4K does not always eliminate shimmering/aliasing and jaggies. It definitely does better than anything below, but DLSS or a good AA technique always does better than simply dialling up the res. Also, TAA and DLSS generally look better at 4K too.

There are plenty of examples out there comparing dlss to native now. Also, this has some great illustrations too:


Those 540/720p examples are crazy good.
 
Does this hold true when comparing the same frame rate? Currently at work, can't have a proper dig around at the moment.
 
Yes.

My 4k oled tv is only 60hz so I use vsync on it and can easily achieve that.

EDIT:

Just to clarify, it depends entirely on the game obviously, e.g. SE 3 and RDR 2 are the worst offenders, and spiderman's temporal stability is terrible with SMAA, but something like Tomb Raider with SMAA is good.
 
76 billion transistors put to good use. Let's hope we hear something from AMD other than radio silence.

AMD haven't learned the art of peeing on your competitor's camp fire yet, despite everyone doing it to them; they will sit in silence like they aren't even here while Nvidia hoovers up most potential customers.
 
Yeah, they should have at least given a teaser of their own benchmarks or something to get people to hold off, but nah..

Ray tracing doesn't seem to be improved in terms of the % performance loss, the 4090 is just a lot more powerful. If RDNA 3 can improve that % loss and get close to, or even exceed, the 4090 in raster, they might get really close.
 
AMD haven't learned the art of peeing on your competitor's camp fire yet, despite everyone doing it to them; they will sit in silence like they aren't even here while Nvidia hoovers up most potential customers.

I had a different experience last time.. wasn't Frank Azor mouthing off on Twitter? We should be able to see something from them right about now.

Maybe they should drop a "poor ada" tag line again :p
Yeah, it seems RT has gone mainstream.. I only saw GN's review, though I didn't like his choice of games, but it's intriguing the kind of performance achieved without even activating the main selling point - DLSS 3.
 
Rasterization more than likely will. Can't see them coming anywhere close to that RT performance though :eek:
Even raster will be lower. Mark my words.
 
Rasterization more than likely will. Can't see them coming anywhere close to that RT performance though :eek:

I can.

Look at TPU who measure the % FPS reduction when turning on RT. The delta between what a 3090Ti loses and what the 4090 loses is not that great.

Provided the 7950 does indeed have a 420W TBP with 1.5x the perf/watt, that works out to roughly 2.1x a 6900XT (about 1.4x the power times 1.5x the efficiency), which would put it 10-20% faster than the 4090 depending on which review you use. All AMD need to do is match the Ampere % FPS drop and that raster advantage will be enough to see them tie / beat the 4090 in RT.
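Putting rough numbers on that back-of-envelope (my own assumptions: ~300W TBP for the reference 6900 XT, perf scaling linearly with power, and purely illustrative RT % drops rather than TPU's actual figures):

```python
# Back-of-envelope check on the rumoured 7950 numbers (all inputs assumed).
tbp_6900xt = 300            # W, reference 6900 XT board power
tbp_7950 = 420              # W, rumoured
perf_per_watt_gain = 1.5    # x, rumoured RDNA3 uplift

raster_vs_6900xt = (tbp_7950 / tbp_6900xt) * perf_per_watt_gain
print(f"raster vs 6900 XT: {raster_vs_6900xt:.2f}x")      # 2.10x

# If that lands ~10-20% ahead of the 4090 in raster, then merely matching
# Ampere's % FPS drop from enabling RT could be enough to tie/beat it in RT.
raster_vs_4090 = 1.15       # assumed mid-point of the 10-20% claim
rt_drop_rdna3 = 0.45        # illustrative fraction of FPS lost with RT on
rt_drop_ada = 0.40          # illustrative, slightly smaller drop for the 4090
print(f"RT FPS (relative): RDNA3 {raster_vs_4090 * (1 - rt_drop_rdna3):.2f} "
      f"vs 4090 {1.0 * (1 - rt_drop_ada):.2f}")           # 0.63 vs 0.60
```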

Now everybody is saying it will come in between Ampere and Lovelace, so what I presume they mean is that the % dropoff is greater than Ampere's but the raw raster means the final FPS is still faster. I can see that happening as well.
 