Even with this less-than-great YouTube image quality you can see the difference.
DLSS 2.0
So, from that document, one of the most important bits is: "DLSS is not designed to enhance texture resolution. Mip bias should be set so textures have the same resolution as native rendering". This, together with a few other bits, tells me one thing - the AI is only used to do AA and to stack frames (especially with the jitter introduced). All the extra pixels come not from the AI but from the existing jittered frames: when you stack them together, they fill in the blanks. Which means you can't get a good image from just one frame, you have to stack a bunch of them first. When NVIDIA talks about hallucinating pixels in the image they are talking about ML in general, and that is how DLSS 1 worked - which was just as bad in quality as all the other AI examples they showed there. That is why they had to redo the whole thing in DLSS 2: instead of letting the AI run wild hallucinating pixels (and usually getting it wrong), they gave it a much simpler and stricter job, which also did not require huge per-game training (it is generic now). This is consistent with what I heard from people who supposedly read leaked source code of DLSS 2.

Yeah - I mean that's the whole basis by which it works! All upscalers are generating pixels - TAA upscaling uses neighbouring and previous frames' pixels to guess at a new pixel, while DLSS uses a neural net model which takes that same input but has been trained on reference images and uses that to guess at a new pixel. You'll see it called things like reconstruction and hallucination.
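Just to picture the jitter-and-accumulate idea, here's a toy Python sketch (purely my own illustration, nothing to do with any leaked code - every function and variable name is made up): each frame gets a small sub-pixel jitter from a low-discrepancy sequence, new samples are blended into a history buffer so the jittered frames fill in the blanks over time, and the mip bias is derived from the render/native ratio so textures keep native sharpness, along the lines that document recommends.

```python
import math

def halton(index, base):
    """Low-discrepancy sequence, commonly used for the per-frame sub-pixel jitter."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def texture_mip_bias(render_width, native_width):
    """Negative LOD bias so textures are sampled at native-resolution detail
    even though geometry is rendered at the lower internal resolution."""
    return math.log2(render_width / native_width)

def accumulate(history, sample, blend=0.1):
    """Exponential blend of the current jittered sample into the history buffer;
    over several frames the jittered samples fill in the missing pixels."""
    return [(1.0 - blend) * h + blend * s for h, s in zip(history, sample)]

# 1440p internal render presented at 4K (roughly the "Quality" ratio)
print(texture_mip_bias(2560, 3840))                 # ~ -0.58
print([(halton(i, 2) - 0.5, halton(i, 3) - 0.5)     # first few sub-pixel jitter offsets
       for i in range(1, 4)])
```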
There's a slide deck here - go to about slide 31 and read onwards:
Yeah, MSAA is at its best on 3D edges. Foliage and other things that are projections of textures onto 2D planes are not great for MSAA, especially as they nearly all use partially transparent textures, so there's not really an edge, just a transition from visible to transparent within the texture. Again, since it's a well-developed method there are several workarounds - I always set transparency AA to supersampling in the drivers, which helps a lot, but really you need screen-space based AA to deal with them best.
Well, it's not irrelevant if they can't be used. But really the comparison should have been to SSAA... but then that's not native either.
MSAA is far from brute force - it's extremely elegant and has had a ton of development over the years.
SSAA is still with us though it's now listed under resolution scaling at either game or driver level - it's render agnostic.
Then we've got the bunch of post-process smoothers - MLAA -> FXAA -> single-frame SMAA - and the 2D-plus-motion-vector methods like multi-frame SMAA and TAA.
DLSS is most similar to the SS part of SSAA - it's generating extra pixels, but rather than sampling them relatively simply from the neighbouring pixels of a higher-resolution render, it's getting them from an ML model which takes things like neighbouring pixels, motion vectors and its prior training set into account. Downsample back to native resolution and you get DLSSAA - at which point it should definitely be compared with SSAA.
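To make that "downsample back to native" point concrete, a toy sketch (my own illustration with made-up names, not Nvidia's pipeline): take a frame that has been reconstructed above the display resolution by whatever means, and box-filter it back down - structurally the same final step SSAA uses.

```python
import numpy as np

def box_downsample(image, factor):
    """Average each factor x factor block of pixels - the same kind of
    downsample step SSAA performs after rendering above native resolution."""
    h, w = image.shape[:2]
    h2, w2 = h // factor, w // factor
    blocks = image[:h2 * factor, :w2 * factor].reshape(h2, factor, w2, factor, -1)
    return blocks.mean(axis=(1, 3))

# Pretend this is a frame reconstructed at 2x the display resolution
# (by an ML model, TAA upscaling, or plain supersampling - the downsample
# step doesn't care where the extra pixels came from).
reconstructed = np.random.rand(216, 384, 3).astype(np.float32)
native = box_downsample(reconstructed, 2)
print(native.shape)   # (108, 192, 3) - back at "native"
```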
But I again agree it's almost always not better than native - however there are edge cases where it can be, for example where there isn't enough information in the native pixels to resolve something, but the predicted pixel out of the ML model might end up being a better choice. But that's super rare.
Are you really asking me for the games I have tested? Even though you directly quoted me referring to WDL as a perfect example? If you play the game with RT off you get overall better IQ if you run SMAA vs TAA+DLSS Quality. SMAA at 4K reduces jaggies sufficiently that shimmering is not a major issue, and textures look sharp even into the distance. There is also no other artifacting introduced. DLSS will offer some improvements with aliasing and eliminates shimmering at the cost of distant detail, as it adds a level of blur. It also introduces some very noticeable ghosting at high speed (for example driving), which gets worse if you add sharpening to reduce the blur. So overall DLSS is not better than native with SMAA in WDL; it wins some and loses some, with the preference being down to personal taste.
The only reason I use DLSS in this game is so I can enable RT.
You asked for proof, I gave it (WDL). Now you are moving the goalposts to include elimination of all artifacting, shimmering, haloing etc. so you can claim a "win". Even DLSS doesn't do that.
Let me make it very simple so you can follow it. DLSS is ONLY better than native when you are comparing to the low bar that is TAA only.
I remembered another game that I tested that had options for FXAA, TAA, or TAA+DLSS: Death Stranding.
TAA again adds excessive blur that really impacts detail.
FXAA gives very good results, shimmering is limited and textures remain sharp.
DLSS adds a level of detail back over TAA and also gives a nice performance boost, but again some ghosting and oversharpening (or a mesh-type effect) is evident.
I would say that again it would be personal preference whether you went DLSS or FXAA, but I would certainly not call DLSS better than native overall.
DLSS is a technology which enables faster framerates without any compromises. DLSS provides better anti-aliasing than the game's default AA (in Quality mode) and removes many of the artefacts that can be seen with TAA, such as ghosting and grainy transparencies. As a whole, DLSS' Quality mode looks better than Death Stranding's native resolution presentation, which when combined with DLSS' framerate boost is a clear win for Nvidia.
So overall DLSS is not better than native with SMAA in WDL; it wins some and loses some, with the preference being down to personal taste.
That's the thing when it comes to these AA methods and upscalers: it is very much a case of pick your poison. Some are better in areas than others, and everyone has different preferences. Most seem to prefer sharpness over any kind of softening, even if it increases artifacts, which is OK; for me it just reminds me of overdone sweetfx/redux presets though. It's why I also put Spider-Man on hold, as DLSS had sharpening turned up with no way to disable it and the game looked awful with it, but most people didn't notice... Whatever gives the least shimmering/aliasing and jaggies and the best temporal stability will always be my go-to, and this is where DLSS is strongest.
Is it not performance related anyway? Theoretically, if you were running 4K at 10,000 fps (I know...) at native, surely that would deal with jagged edges, shimmering etc.? DLSS brings the frame rate up, which then leads to a shorter distance between pixels for each frame of movement (rough numbers on that are sketched after this post). Is there an example of using DLSS to upscale from 1080p locked to 60 fps, which is then compared to a 4K image locked to 60 fps?
I just don't buy that DLSS-generated frames are better than native on image quality alone, unless it is upscaling to a resolution and framerate higher than the one you are comparing it to. Gameplay? I think that would depend on performance.
Happy to be corrected though.
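On the frame-rate point above: per-frame screen-space movement is just speed divided by frame rate, so a higher frame rate does mean smaller steps between frames for temporal methods to track. A trivial back-of-envelope example in Python (numbers made up purely for illustration):

```python
def pixels_per_frame(speed_px_per_sec, fps):
    """How far an on-screen edge moves between two consecutive frames."""
    return speed_px_per_sec / fps

# An edge panning across a 3840-pixel-wide screen in about six seconds (~600 px/s):
print(pixels_per_frame(600, 60))    # 10.0 px per frame
print(pixels_per_frame(600, 120))   #  5.0 px per frame - half the step to track
```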
Does this hold true when comparing the same frame rate? Currently at work, can't have a proper dig around at the moment.

Already mentioned, but running at 4K does not always eliminate shimmering/aliasing and jaggies. It definitely does better than anything below, but DLSS or a good AA technique always does better than simply dialling up the res. Also, TAA and DLSS generally look better at 4K too.
Yes, it holds when comparing at the same frame rate too.
Man, I am just waiting for AMD to rain on Nvidia's parade.
Maybe they should drop a "poor Ada" tag line again.
76 billion transistors put to good use. Let's hope we hear something from AMD other than radio silence.
AMD haven't learned the art of peeing on your competitor's campfire yet, despite everyone doing it to them; they will sit in silence like they aren't even here while Nvidia hoovers up most potential customers.
Yeah, it seems RT has gone mainstream. I only saw GN's review, though I didn't like his choice of games, but it's intriguing the kind of performance achieved without even activating the main selling point - DLSS 3.
I had a different experience last time... wasn't Frank Azor mouthing off on Twitter? We should be able to see something from them right about now.
I thought we were the ones being peed on, because of the pricing.

Not that sort of peeing.
Even raster will be lower. Mark my words.
Rasterization more than likely will. Can't see them coming anywhere close to that RT performance though