
Fidelity Super Resolution in 2021

Indeed, the only people dissing DLSS and saying it can't be better than native are the ones who haven't actually used it.

In Death Stranding, for example, I posted multiple comparisons where even in still images it looked better (due to resolving wires, fences and edges better) while still looking as sharp as native. In motion it was even better, while offering better performance.

No one said it's ALWAYS better than native; there are trade-offs. Sometimes it's better, sometimes not, while always offering better performance, so it's a no-brainer.

This, on the other hand, we'll see. I've got my doubts, but it's funny to see the same 'skeptical' people praising FSR even before any actual tests have been conducted, while in the past they did the opposite with DLSS, and worse, they still do it nowadays even after the tech has proved itself several times over.

Indeed, it is hilarious to see :D Like I always say, it's swings and roundabouts on here ;)

Death Stranding is a very good example. I've linked an Overclock3D article a few times now, as they have a great image slider showing perfectly where DLSS shines:

https://overclock3d.net/reviews/sof...c_performance_review_and_optimisation_guide/6

But they must be shills too..... :p
 
Here's a decent video on what it does for IQ and also (~5:30) how to force higher resolution mipmaps.




Try forcing higher mipmaps if it's just the blur on distant objects that is an issue.
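(For anyone wondering what "forcing higher mipmaps" means at the API level, here's a minimal, illustrative D3D11-style sketch, assuming an existing ID3D11Device*: a negative MipLODBias on a sampler makes the GPU pick sharper, higher-resolution mip levels than it normally would for distant surfaces. End users typically force this through driver tools instead, but the knob being turned is the same.)

```cpp
#include <d3d11.h>

// Create a sampler with a negative LOD bias so distant textures use sharper mips.
ID3D11SamplerState* CreateBiasedSampler(ID3D11Device* device, float mipBias)
{
    D3D11_SAMPLER_DESC desc = {};
    desc.Filter        = D3D11_FILTER_ANISOTROPIC;
    desc.AddressU      = D3D11_TEXTURE_ADDRESS_WRAP;
    desc.AddressV      = D3D11_TEXTURE_ADDRESS_WRAP;
    desc.AddressW      = D3D11_TEXTURE_ADDRESS_WRAP;
    desc.MaxAnisotropy = 16;
    desc.MipLODBias    = mipBias;          // e.g. -1.0f for noticeably sharper mips
    desc.MinLOD        = 0.0f;
    desc.MaxLOD        = D3D11_FLOAT32_MAX;

    ID3D11SamplerState* sampler = nullptr;
    device->CreateSamplerState(&desc, &sampler);
    return sampler;
}
```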

You're trying to fix something they should just have included in the first place. And since they didn't, they didn't think it was necessary, did they? And so, for me, that's a big no-no. It's either there by default or I simply won't bother.
 
Of course it can come across as a "promotional" piece if the likes of DF are pointing out all the strengths of one brand and the weaknesses of other brands/methods, but that's not really their fault if said brand/feature is actually, factually better.

Either way, GN and HW have also stated the same as DF, and it was GN who said DLSS was "better than native" in their Cyberpunk review, so does that mean their piece was one-sided promotional material for Nvidia?



For the people who keep questioning how DLSS can be better than native + AA methods such as TAA: they clearly have never used DLSS themselves, nor watched the videos by GN, HW and DF explaining why DLSS is "often" a better method for games with poor AA (which sadly is a lot of them). With the current popular AA methods we have the following issues:

TAA - a blurry mess in motion (considerably worse than DLSS in my experience) and it can often produce a blurry scene even in static screenshots. Days Gone is probably the best implementation of TAA to date, but sadly it has awful trailing/ghosting issues. Very good for removing jaggies/aliasing, though, and not much of a performance hit.

SMAA - one of the best for retaining sharpness/clarity, however it doesn't do a great job of removing all jaggies, aliasing and shimmering. Depending on the level used, the performance hit can vary from small to substantial.

MSAA - also one of the best for retaining sharpness/clarity, but again it doesn't do a great job of removing jaggies, aliasing and shimmering, and you generally need the 4x or even 8x setting to get good results, with a massive performance loss as a result (see the sketch after this list for what a 4x setting actually requests from the GPU).

FXAA - well, this is just a Vaseline filter thrown on top and it generally doesn't address the aliasing/shimmering issues.

DLSS 2.0 manages to address a lot of these issues, such as shimmering, aliasing and jaggies, whilst also improving performance considerably, so it's a win-win. Sure, in motion it has its trailing/ghosting issues, but in my experience it is far better than TAA's motion issues.
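(Going back to the MSAA point above, here's a rough, illustrative D3D11 sketch of what a "4x MSAA" setting actually asks for: a render target with four samples per pixel, all of which have to be stored and resolved, which is where the memory and bandwidth cost comes from. The helper name is hypothetical and an existing ID3D11Device* is assumed.)

```cpp
#include <d3d11.h>

// Request a 4x multisampled render target, as a typical in-game "4x MSAA" setting would.
ID3D11Texture2D* CreateMsaaTarget(ID3D11Device* device, UINT width, UINT height)
{
    UINT quality = 0;
    device->CheckMultisampleQualityLevels(DXGI_FORMAT_R8G8B8A8_UNORM, 4, &quality);
    if (quality == 0)
        return nullptr; // 4x MSAA not supported for this format

    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width              = width;
    desc.Height             = height;
    desc.MipLevels          = 1;
    desc.ArraySize          = 1;
    desc.Format             = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count   = 4;             // 4 samples per pixel (8 for "8x")
    desc.SampleDesc.Quality = quality - 1;
    desc.Usage              = D3D11_USAGE_DEFAULT;
    desc.BindFlags          = D3D11_BIND_RENDER_TARGET;

    ID3D11Texture2D* target = nullptr;
    device->CreateTexture2D(&desc, nullptr, &target);
    return target;
}
```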


Before people say "oh, but at 4K you don't need any AA".... yes you do; games can still have aliasing and shimmering issues even at 4K.

If FSR doesn't turn off/replace the AA, well, that alone is already a big disadvantage compared to DLSS.

The problem is that neither DLSS nor FSR is just AA, and they can't be considered AA here, hence the whole comparison doesn't make sense. Both do upscaling, image reconstruction and other stuff. All I want is a version of DLSS (or FSR, whatever) that works as a very good new AA algorithm (like DLSS was originally supposed to be), without upscaling and other shenanigans. Just by offloading AA to the tensor cores it would bring a nice FPS improvement, as it would be just like running games with no AA at all. And we know it CAN do it, it's been shown to do it - it's just not offered in games because NVIDIA doesn't want it to be offered. All we get is the whole package or nothing.
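(To illustrate the "AA-only" idea, here's a purely hypothetical sketch; none of these names come from the actual DLSS or FSR SDKs. A temporal upscaler that exposes separate render and output resolutions could act as a pure AA pass simply by being fed a native-resolution input.)

```cpp
// Hypothetical interface, only to illustrate the point above; not taken from any real SDK.
struct UpscalerDesc {
    unsigned renderWidth, renderHeight;   // resolution the game actually shades
    unsigned outputWidth, outputHeight;   // resolution presented to the display
};

// With render == output there is no upscaling left to do: the pass would only
// accumulate and filter samples over time, i.e. behave as a temporal AA solution.
UpscalerDesc MakeAaOnlyDesc(unsigned displayWidth, unsigned displayHeight)
{
    return {displayWidth, displayHeight, displayWidth, displayHeight};
}
```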
 
Indeed, it is hilarious to see :D Like I always say, it's swings and roundabouts on here ;)

Death Stranding is a very good example. I've linked an Overclock3D article a few times now, as they have a great image slider showing perfectly where DLSS shines:

https://overclock3d.net/reviews/sof...c_performance_review_and_optimisation_guide/6

But they must be shills too..... :p

And then DLSS can also produce nice artefacts like this (just one example; there are plenty in CP2077), which simple upscaling and CAS don't cause: Cyberpunk 2077 - DLSS vs CAS and moire artefact - YouTube
 
I'm sure I read a post a few pages back (not you btw) that said the problem is getting developers to use it. Can't remember who posted it now. :p

Yeah, I think it would have been a bit of a concern for the PC side if the consoles didn't adopt it, but that shouldn't be a concern now. I wonder if Sony will adopt it too? Or if they'll stick with their checkerboarding method, which tbf is extremely good.

And then DLSS can also produce nice artefacts like this (just one example; there are plenty in CP2077), which simple upscaling and CAS don't cause: Cyberpunk 2077 - DLSS vs CAS and moire artefact - YouTube

Looks like DLSS was on "auto" there, which IIRC, according to the GN/HW video, can cause issues like that which aren't seen with the "quality" preset. I could be wrong though.... personally I can't say I recall that particular issue in my playthrough with "quality"; the only thing in Cyberpunk which I noticed severely was car taillights trailing.
 
It doesn't need to physically be better, just good enough to trick your eyes at normal viewing distance.
Better is subjective anyway; however, define "normal viewing distance" - I'm not sure anyone can say what that is. Ask 20 people and they'll give you 20 answers.

I think @Rroff summed it up really well.
It is never going to be better than native; native is king. Trails would be a big no-no for me, but if you can get close to native, that's good enough.
It all depends on the person looking at it whether close to native is good enough, I guess; some people are more picky than others, or even more sensitive to the differences.

I don't claim that DLSS 2+ overall produces better-than-native IQ, but it is far more performant than traditional rendering, while doing a better job at removing shimmer from power lines, thin edges, fencing, trees, etc.
I agree it gives more performance. As to the statement about it doing a better job, that's your opinion; there are plenty of others who disagree. You can also get crazy artifacting sometimes with DLSS - does that also make it better?

I think we can all agree that "better" is subjective.
 
You're trying to fix something they should just have included in the first place. And since they didn't, they didn't think it was necessary, did they? And so, for me, that's a big no-no. It's either there by default or I simply won't bother.

I'm in the same boat here, but it's not Nvidia that has to 'fix' it. It's the developers who should be altering the defaults for when DLSS is in use, as it is different for every game. A mipmap slider under the DLSS option would do very nicely in-game.

I do wish Nvidia would make altering such things user-friendly, though.
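(For reference, a commonly cited rule of thumb for upscalers is to bias texture LOD by log2(render resolution / output resolution), so textures are filtered as if the game were rendering at the output resolution; the exact value a given game or SDK recommends may differ, so treat this as a sketch of the arithmetic only.)

```cpp
#include <cmath>
#include <cstdio>

// Rule-of-thumb texture LOD bias for an upscaler: log2(renderRes / displayRes).
// Negative when upscaling, meaning sharper (higher-resolution) mips get used.
float RecommendedMipBias(float renderHeight, float displayHeight)
{
    return std::log2(renderHeight / displayHeight);
}

int main()
{
    // Example: a "Quality" mode at 1440p output rendering internally at ~960p.
    std::printf("%.2f\n", RecommendedMipBias(960.0f, 1440.0f)); // prints -0.58
}
```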
 
No one said it's ALWAYS better than native; there are trade-offs. Sometimes it's better, sometimes not, while always offering better performance, so it's a no-brainer.

Yeah... no. People claim it's ALWAYS better than native all the time, in comments, on forums, all over the internet. The hype is HUGE and we can thank NVIDIA marketing for that - most people who say it have never actually used it, as cards that support it are still rare amongst gamers.

This, on the other hand, we'll see. I've got my doubts, but it's funny to see the same 'skeptical' people praising FSR even before any actual tests have been conducted, while in the past they did the opposite with DLSS, and worse, they still do it nowadays even after the tech has proved itself several times over.

FSR physically can't and won't be better in quality than DLSS 2+. That wasn't the goal anyway. It's been said over and over again: it's aimed at supporting as wide a range of hardware as possible, so it can be used especially on older or just weaker GPUs, APUs, even phones etc., to bring either playable framerates or allow RT to be used. Pascal owners, for example, don't care about DLSS - it does NOT work for them. They can use either simple upscaling (not great) or tech like FSR. Same with older/slower AMD cards, as the current gen usually does NOT need such things as FSR to have good, playable FPS with full details (unless one insists on using RT, but then it's the wrong card for that).

The whole discussion really makes little sense, because the aims of the two technologies are different: one is used as marketing to upsell the newest expensive GPU models, which usually do not even need it. The other has been designed for old/slow GPUs, irrespective of brand or even hardware. Judging by NVIDIA's past closed technologies (and technology evolution in general), I suspect the former will keep growing slowly only as long as NVIDIA actively pushes it with money and resources, and might then just vanish. The latter, considering its super easy adoption (it costs devs NO money at all - it's part of FidelityFX, which lots of games are already using, as it's so simple to implement), including consoles, works on pretty much the whole market and might well become the de facto standard in the future. Time will tell.

One thing to remember is that the market almost never chooses the thing that's objectively better; instead it chooses what's easier to implement, cheaper, has wider adoption and is good enough. Examples: VHS vs Beta, Adaptive-Sync vs G-Sync, etc. There are countless examples of that in the 20th and 21st centuries. A small group of enthusiasts does NOT move the market forward - the masses do.
 
Yeah... no. People claim it's ALWAYS better than native all the time, in comments, on forums, all over the internet. The hype is HUGE and we can thank NVIDIA marketing for that - most people who say it have never actually used it, as cards that support it are still rare amongst gamers.



FSR physically can't and won't be better in quality than DLSS 2+. That wasn't the goal anyway. It's been said over and over again: it's aimed at supporting as wide a range of hardware as possible, so it can be used especially on older or just weaker GPUs, APUs, even phones etc., to bring either playable framerates or allow RT to be used. Pascal owners, for example, don't care about DLSS - it does NOT work for them. They can use either simple upscaling (not great) or tech like FSR. Same with older/slower AMD cards, as the current gen usually does NOT need such things as FSR to have good, playable FPS with full details (unless one insists on using RT, but then it's the wrong card for that).

The whole discussion really makes little sense, because the aims of the two technologies are different: one is used as marketing to upsell the newest expensive GPU models, which usually do not even need it. The other has been designed for old/slow GPUs, irrespective of brand or even hardware. Judging by NVIDIA's past closed technologies (and technology evolution in general), I suspect the former will keep growing slowly only as long as NVIDIA actively pushes it with money and resources, and might then just vanish. The latter, considering its super easy adoption (it costs devs NO money at all - it's part of FidelityFX, which lots of games are already using, as it's so simple to implement), including consoles, works on pretty much the whole market and might well become the de facto standard in the future. Time will tell.

One thing to remember is that the market almost never chooses the thing that's objectively better; instead it chooses what's easier to implement, cheaper, has wider adoption and is good enough. Examples: VHS vs Beta, Adaptive-Sync vs G-Sync, etc. There are countless examples of that in the 20th and 21st centuries. A small group of enthusiasts does NOT move the market forward - the masses do.

so much this.
 
I don't claim that DLSS 2+ overall produces better-than-native IQ, but it is far more performant than traditional rendering, while doing a better job at removing shimmer from power lines, thin edges, fencing, trees, etc.
I agree it gives more performance. As to the statement about it doing a better job, that's your opinion; there are plenty of others who disagree. You can also get crazy artifacting sometimes with DLSS - does that also make it better?

I think we can all agree that "better" is subjective.

I clearly stated exactly where it does a better job.

I've had a great time using DLSS Quality at 1440p, which uses a source resolution of only 960p. Artifacting has not been an issue for me.
 
I'm sure I read a post a few pages back (not you btw) that said the problem is getting developers to use it. Can't remember who posted it now. :p

Yep, one of the resident self-proclaimed experts, who is very pro-Nvidia, said that it wasn't required or even viable on the consoles, because they already used their own upscaling techniques. This was despite the fact that every article I had read around the announcement of FSR, and even HUB, had said it was a big thing for consoles.

I didn't even bother replying because it sounded more like they were hoping it was true, rather than it being based on actual facts.
 
Yep, one of the resident self-proclaimed experts, who is very pro-Nvidia, said that it wasn't required or even viable on the consoles, because they already used their own upscaling techniques. This was despite the fact that every article I had read around the announcement of FSR, and even HUB, had said it was a big thing for consoles.

I didn't even bother replying because it sounded more like they were hoping it was true, rather than it being based on actual facts.

From what we know so far about FSR, it should be better than standard upscaling + CAS. This includes the consoles' checkerboard upscaling. That means it should be a step ahead of DLSS 1.0 too, as it's been shown many times by many different reviewers that plain upscaling + CAS is already superior to that. It might even be an improvement on checkerboard upscaling with CAS and additional filters - we don't know yet. We only know it's not AI-based and won't be doing the fancy image reconstruction that DLSS 2.0 does. But it also means it should be really fast on most GPUs.
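(A minimal sketch of the "plain upscale + sharpen" structure being referred to here, on a grayscale image. This is NOT the real CAS or FSR code; the sharpening pass is a simple fixed-strength unsharp-mask stand-in, whereas CAS adapts its strength per pixel based on local contrast.)

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Image {
    int w = 0, h = 0;
    std::vector<float> px; // row-major, one float per pixel

    float at(int x, int y) const {
        x = std::clamp(x, 0, w - 1);
        y = std::clamp(y, 0, h - 1);
        return px[static_cast<std::size_t>(y) * w + x];
    }
};

// Pass 1: simple bilinear upscale to the target resolution.
Image UpscaleBilinear(const Image& src, int dstW, int dstH) {
    Image dst{dstW, dstH, std::vector<float>(static_cast<std::size_t>(dstW) * dstH)};
    for (int y = 0; y < dstH; ++y) {
        for (int x = 0; x < dstW; ++x) {
            float sx = (x + 0.5f) * src.w / dstW - 0.5f;
            float sy = (y + 0.5f) * src.h / dstH - 0.5f;
            int x0 = static_cast<int>(std::floor(sx));
            int y0 = static_cast<int>(std::floor(sy));
            float fx = sx - x0, fy = sy - y0;
            float top = src.at(x0, y0) * (1 - fx) + src.at(x0 + 1, y0) * fx;
            float bot = src.at(x0, y0 + 1) * (1 - fx) + src.at(x0 + 1, y0 + 1) * fx;
            dst.px[static_cast<std::size_t>(y) * dstW + x] = top * (1 - fy) + bot * fy;
        }
    }
    return dst;
}

// Pass 2: unsharp-mask style sharpening of the upscaled result.
Image Sharpen(const Image& src, float strength) {
    Image dst = src;
    for (int y = 0; y < src.h; ++y) {
        for (int x = 0; x < src.w; ++x) {
            float blur = (src.at(x - 1, y) + src.at(x + 1, y) +
                          src.at(x, y - 1) + src.at(x, y + 1) + src.at(x, y)) / 5.0f;
            float sharpened = src.at(x, y) + strength * (src.at(x, y) - blur);
            dst.px[static_cast<std::size_t>(y) * src.w + x] = std::clamp(sharpened, 0.0f, 1.0f);
        }
    }
    return dst;
}
```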
 
I clearly stated exactly where it does a better job.

I've had a great time using DLSS Quality at 1440p, which uses a source resolution of only 960p. Artifacting has not been an issue for me.

I used DLSS Quality in CP2077 to keep RT on at 4K and there were definite ghosting and trails that were noticeable. Though overall I was still happy to leave it on, I won't delude myself that they weren't there.
 
From what we know so far about FSR, it should be better than standard upscaling + CAS. This includes the consoles' checkerboard upscaling. That means it should be a step ahead of DLSS 1.0 too, as it's been shown many times by many different reviewers that plain upscaling + CAS is already superior to that. It might even be an improvement on checkerboard upscaling with CAS and additional filters - we don't know yet. We only know it's not AI-based and won't be doing the fancy image reconstruction that DLSS 2.0 does. But it also means it should be really fast on most GPUs.

Ultimately I don't care how it's done, just that it does it without too much IQ loss and gets far more adoption than DLSS for both 2D and VR applications.
 
The whole discussion really makes little sense, because the aims of the two technologies are different: one is used as marketing to upsell the newest expensive GPU models, which usually do not even need it. The other has been designed for old/slow GPUs, irrespective of brand or even hardware. Judging by NVIDIA's past closed technologies (and technology evolution in general), I suspect the former will keep growing slowly only as long as NVIDIA actively pushes it with money and resources, and might then just vanish. The latter, considering its super easy adoption (it costs devs NO money at all - it's part of FidelityFX, which lots of games are already using, as it's so simple to implement), including consoles, works on pretty much the whole market and might well become the de facto standard in the future. Time will tell.

One thing to remember is that the market almost never chooses the thing that's objectively better; instead it chooses what's easier to implement, cheaper, has wider adoption and is good enough. Examples: VHS vs Beta, Adaptive-Sync vs G-Sync, etc. There are countless examples of that in the 20th and 21st centuries. A small group of enthusiasts does NOT move the market forward - the masses do.

Remember that people buying expensive GPUs tend to also have expensive monitors, which tend to have higher refresh rates, which DLSS can help make use of.

I really hope DLSS-like tech is adopted by both Intel and AMD. Ideally FSR will go through the same evolution as Vulkan and eventually become the framework that includes solutions from all vendors. There are already rumours that FSR is nothing more than the framework for an ML-capable RDNA3, which does make sense to me.

Today, DLSS > FSR at the high end, while FSR > DLSS at the lower end for obvious reasons; G-Sync > Adaptive-Sync, as I can't find a decent Adaptive-Sync panel that doesn't have issues with flicker; and PhysX > no PhysX, as I was just thinking the other day how good Mafia II looked when I got a 980 Ti.
 
Ultimately I don't care how it's done, just that it does it without too much IQ loss and gets far more adoption than DLSS for both 2D and VR applications.

And that's fair. You might also not be the target of this tech and simply have no need to use it.
 
I used DLSS Quality in CP2077 to keep RT on at 4K and there were definite ghosting and trails that were noticeable. Though overall I was still happy to leave it on, I won't delude myself that they weren't there.

Maybe it's just down to playstyle? I take a slow, stealthy approach whenever possible in games. Perhaps I just don't move the mouse quickly enough to notice ghosting. I've not played competitive FPS titles since the original Unreal Tournament (Freeserve and a 56k modem with £600+ phone bills).
 