FidelityFX Super Resolution in 2021

If, in that example, DLSS is actually working on a solid box because that's what the lower resolution has, then that example deserves to be punted out the window, because it's not an honest example of the tech working.

Also no, we saw FSR UQ at 1662p -> 4K pushing higher fps than DLSS Quality at 1440p -> 4K in Necromunda, so the extra pixels required for FSR UQ managed to be less work than sorting out DLSS on fewer pixels. Other examples may vary but it's certainly curious.


*Put the correct video in; so many virtually identical videos have been spammed in here that it's hard to remember which one it was.

DLSS is recreating the 16k ground truth image, not the 4k one.

DLSS is about reconstructing the image and performance. DLSS has a fixed performance cost based on the number of tensor cores and the resolution.
FSR reduces image quality for increased speed. As FSR only has the image as a source of information, it cannot reconstruct details, so by design it can never match native. There is always a loss in detail which gets worse as the internal resolution drops. This is the cause of the blurring. Even ultra quality is blurred in Godfall, and performance mode is really nasty. The best you can hope for is a dark, low-detail image, one that hides the limitations of FSR, then sharpen the image for all it's worth.

Then argue subjectively about image quality.

DLSS is superior to FSR in every aspect.

I wrote a quick Python script to compare DLSS and FSR in quality/ultra quality mode, respectively, in terms of their mean squared error (MSE) relative to the native image, based on your screenshots, using the area of the image just left of the player character and under the MSI Afterburner stats (so they all display the same area - see this imgur link). Here's the result:

Native - DLSS MSE: 43.53554856321839
Native - FSR MSE: 60.963549712643676

The script I used to generate the results can be found here. To re-run it, download the FSR, DLSS and native images as fsr.jpg, dlss.jpg and native.jpg respectively, and run the script from within the same directory. If you want to compare another part of the image, change the x_start/x_end variables etc.

The result is that FSR has about 40% higher MSE than DLSS, so it's a clear victory for DLSS, at least for this section of this static image in this game.
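
The original script link isn't preserved here, but a minimal sketch of that kind of crop-and-MSE comparison might look like the following (the crop coordinates are placeholders, not the original script's values, and the file names are the ones described above):

```python
# Minimal sketch: crop the same region out of each screenshot and compare the
# mean squared error of DLSS and FSR against the native render.
import numpy as np
from PIL import Image

# Hypothetical region of interest; adjust x_start/x_end etc. to compare another area.
x_start, x_end = 800, 1200
y_start, y_end = 400, 800

def load_crop(path):
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
    return img[y_start:y_end, x_start:x_end]   # rows are y, columns are x

def mse(a, b):
    return np.mean((a - b) ** 2)

native = load_crop("native.jpg")
print("Native - DLSS MSE:", mse(native, load_crop("dlss.jpg")))
print("Native - FSR MSE:", mse(native, load_crop("fsr.jpg")))
```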

If we look at a nice bright image full of detail, DLSS wins; it has more detail and less blur.

 
DLSS is recreating the 16k ground truth image, not the 4k one.

DLSS is about reconstructing the image and performance. DLSS has a fixed performance cost based on the number of tensor cores and the resolution.
FSR reduces image quality for increased speed. As FSR only has the image as a source of information, it cannot reconstruct details, so by design it can never match native. There is always a loss in detail which gets worse as the internal resolution drops. This is the cause of the blurring. Even ultra quality is blurred in Godfall, and performance mode is really nasty. The best you can hope for is a dark, low-detail image, one that hides the limitations of FSR, then sharpen the image for all it's worth.

Then argue subjectively about image quality.

DLSS is superior to FSR in every aspect.

The latest DLSS isn't upscaling to a ground truth with extra fidelity information, so you have to be careful in what you say.

The latest versions of DLSS are instead more general (for example, what is being baked into UE5) and are able to reconstruct using only the underlying image. The neural networks are trained to attempt to achieve a 16K ground truth from a 1440p image, but that doesn't mean they achieve anything close to a 16K image with potentially 16K assets. The idea is that they can get something similar to 4K.

However, if the game in question dynamically changes the quality of underlying textures and fidelity based on resolution, then DLSS is hampered in its ability to upsample. For things like text, edges or tessellated structures or textures it's not too difficult, but it can't create very specific high-quality textures out of nowhere.

FSR is much worse, as it uses far less information and in a far less sophisticated way.
 
DLSS is recreating the 16k ground truth image, not the 4k one.

DLSS is about reconstructing the image and performance. DLSS has a fixed performance cost based on the number of tensor cores and the resolution.
FSR reduces image quality for increased speed. As FSR only has the image as a source of information, it cannot reconstruct details, so by design it can never match native. There is always a loss in detail which gets worse as the internal resolution drops. This is the cause of the blurring. Even ultra quality is blurred in Godfall, and performance mode is really nasty. The best you can hope for is a dark, low-detail image, one that hides the limitations of FSR, then sharpen the image for all it's worth.

Then argue subjectively about image quality.

DLSS is superior to FSR in every aspect.

What are you talking about?

This was us talking about DLSS creating a vastly different image from FSR and native, so different that there is a theory that the resolution it's rendering at uses a completely different model. The reasoning being the "solid" box, while native and FSR are trying to show a semi-transparent mesh with details visible through it.

You're ranting about blurriness as if you didn't read a word but have a quota of highly negative posts about FSR to fulfil.

And for the third time I'm going to say this about this snippet you're posting 3-4 times as if it's gold:

I wrote a quick Python script to compare DLSS and FSR in quality/ultra quality mode, respectively, in terms of their mean squared error (MSE) relative to the native image, based on your screenshots, using the area of the image just left of the player character and under the MSI Afterburner stats (so they all display the same area - see this imgur link). Here's the result:

Native - DLSS MSE: 43.53554856321839
Native - FSR MSE: 60.963549712643676

The script I used to generate the results can be found here. To re-run it, download the FSR, DLSS and native images as fsr.jpg, dlss.jpg and native.jpg respectively, and run the script from within the same directory. If you want to compare another part of the image, change the x_start/x_end variables etc.

The result is that FSR has about 40% higher MSE than DLSS, so it's a clear victory for DLSS, at least for this section of this static image in this game.

Let me continue that from the reddit thread where it came from: https://old.reddit.com/r/Amd/comments/om4910/marvels_avengers_fsr_vs_nvidia_dlss_comparison/h5iphab/



You are posting something extremely incorrect.
 
The latest DLSS isn't upscaling to a ground truth with extra fidelity information, so you have to be careful in what you say.

The latest versions of DLSS are instead more general and are able to reconstruct using only the underlying image. The neural networks are trained to attempt to achieve a 16K ground truth from a 1440p image, but that doesn't mean they achieve anything close to a 16K image. The idea is that they can get something similar to 4K.

However, if the game in question dynamically changes the quality of underlying textures and fidelity based on resolution, then DLSS is hampered in its ability to upsample. For things like text, edges or tessellated structures or textures it's not too difficult, but it can't create very specific textures out of nowhere.

FSR is much worse, as it uses far less information and in a far less sophisticated way.

That's put better than I said it. DLSS is trained with the 16K ground truth as the correct output; it's what the network uses to work out whether it is right or not and to create the feedback that updates the weights and improves the network. When you run the AI network it will try to recreate the image as it was trained to do.
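
To make that training idea concrete, here is a toy sketch of such a loop; it is purely illustrative and is not NVIDIA's actual DLSS architecture, data, or loss, but it shows the pattern described: the network upscales a low-resolution input, the output is scored against the high-resolution ground truth, and that error is backpropagated to update the weights.

```python
# Toy supervised-upscaling loop: low-res input in, high-res ground truth as the
# "correct answer", loss drives the weight updates. Not DLSS itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyUpscaler(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
        )

    def forward(self, x):
        # Rearrange channels into a 2x larger image (sub-pixel upscaling).
        return F.pixel_shuffle(self.body(x), self.scale)

model = ToyUpscaler()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    # Random stand-in for the high-resolution ("16K") ground-truth patches.
    ground_truth = torch.rand(4, 3, 64, 64)
    low_res = F.interpolate(ground_truth, scale_factor=0.5,
                            mode="bilinear", align_corners=False)
    loss = F.mse_loss(model(low_res), ground_truth)  # "is it right or not?"
    opt.zero_grad()
    loss.backward()   # feedback signal
    opt.step()        # weights move towards reproducing the ground truth
```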
 
It's always amusing how far people will go to bat for their favourite team when the tech shows clear signs of heavily distorting the reference image; at that point, why even pretend? Just wear a green hat. Personally, I have no interest in arguing with people who do it in bad faith.
 
It's always amusing how far people will go to bat for their favourite team when the tech shows clear signs of heavily distorting the reference image; at that point, why even pretend? Just wear a green hat. Personally, I have no interest in arguing with people who do it in bad faith.

You're giving DLSS too much credit. It can't fundamentally change an image like that.

There must be a difference in the underlying base rendering. Even then it did a pretty good job with what it was working from.
 
What are you talking about?

This was us talking about DLSS creating a vastly different image from FSR and native, so different that there is a theory that the resolution it's rendering at uses a completely different model. The reasoning being the "solid" box, while native and FSR are trying to show a semi-transparent mesh with details visible through it.

You're ranting about blurriness as if you didn't read a word but have a quota of highly negative posts about FSR to fulfil.

And for the third time I'm going to say this about this snippet you're posting 3-4 times as if it's gold:



Let me continue that from the reddit thread where it came from: https://old.reddit.com/r/Amd/comments/om4910/marvels_avengers_fsr_vs_nvidia_dlss_comparison/h5iphab/



You are posting something extremely incorrect.

I am not sure that it's wrong to use MSE. We just want to know how the two images differ from the native image as data values. If you go here https://en.wikipedia.org/wiki/Structural_similarity and scroll to the bottom, there is a "See also" section, which includes https://en.wikipedia.org/wiki/Mean_squared_error.

Performance comparison
Due to its popularity, SSIM is often compared to other metrics, including more simple metrics such as MSE and PSNR, and other perceptual image and video quality metrics. SSIM has been repeatedly shown to significantly outperform MSE and its derivates in accuracy, including research by its own authors and others.[7][20][21][22][23][24]

A paper by Dosselmann and Yang claims that the performance of SSIM is "much closer to that of the MSE" than usually assumed. While they do not dispute the advantage of SSIM over MSE, they state an analytical and functional dependency between the two metrics.[8] According to their research, SSIM has been found to correlate as well as MSE-based methods on subjective databases other than the databases from SSIM's creators. As an example, they cite Reibman and Poole, who found that MSE outperformed SSIM on a database containing packet-loss–impaired video.[25] In another paper, an analytical link between PSNR and SSIM was identified.[26]

I don't see MSE as being wrong. I would accept that each frame is not 100% the same; how much this affects the result is not something I looked into. This is why I am accepting your argument not to use this again.

From the wiki on SSIM.
In lossy image compression, information is deliberately discarded to decrease the storage space of images and video. The MSE is typically used in such compression schemes. According to its authors, using SSIM instead of MSE is suggested to produce better results for the decompressed images.

We are comparing compressed JPG images, thus MSE is better than SSIM. If the images are uncompressed then SSIM is better. This is why I used the argument; it's the correct method as far as I can tell. I don't see how MSE is just incorrect because someone posted they thought it was wrong. You need to research whether it's wrong and why before you post.
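
For what it's worth, checking whether MSE and SSIM even disagree on these screenshots only takes a few lines with scikit-image; a minimal sketch, assuming the same fsr.jpg/dlss.jpg/native.jpg file names as earlier:

```python
# Compute both MSE and SSIM for each upscaled image against the native one,
# on greyscale copies to keep the SSIM call simple.
import numpy as np
from PIL import Image
from skimage.metrics import mean_squared_error, structural_similarity

def load_gray(path):
    return np.asarray(Image.open(path).convert("L"), dtype=np.float64)

native = load_gray("native.jpg")
for name in ("dlss", "fsr"):
    test = load_gray(f"{name}.jpg")
    print(name.upper(),
          "MSE:", mean_squared_error(native, test),
          "SSIM:", structural_similarity(native, test, data_range=255))
```

Lower MSE and higher SSIM both mean "closer to native", so if the two metrics rank DLSS and FSR the same way, the argument about which metric to use matters less.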

Really nice attempt, keep that coming.
 
That's put better than I said it. DLSS is trained with the 16K ground truth as the correct output; it's what the network uses to work out whether it is right or not and to create the feedback that updates the weights and improves the network. When you run the AI network it will try to recreate the image as it was trained to do.

Yep, let's see it in action:

DLSS Ultra Performance (720p) vs FSR Ultra Quality (1662p)

 
These differences in thin line textures and assets are something DLSS is very good at recreating.

A good example of this can be seen when comparing even close-up images of hair on characters and NPCs in various games - often DLSS contains more detail in the hair than native, and the same goes versus FSR: hair lines look more natural and have more depth to them.
 
The person who made that comparison agreed it was wrong and explained it was a rushed piece of work.

You need to check it yourself. It does not matter who agreed with whom; it only holds weight if it's correct. I am willing to agree with you if you are correct. It's easier to win a debate if you show you can be trusted. If you are correct, I will accept that I am wrong and move on.

From the wiki on SSIM.
In lossy image compression, information is deliberately discarded to decrease the storage space of images and video. The MSE is typically used in such compression schemes. According to its authors, using SSIM instead of MSE is suggested to produce better results for the decompressed images.

We are using compressed images, thus by the wiki MSE is typically used. If the image is decompressed then SSIM is suggested instead of MSE. The wiki is the unbiased arbiter here. If it's wrong then it's wrong, but that is unlikely. You won't accept just my word on the matter and you have people telling you different, so I use the wiki as a source to show that this is not necessarily the wrong way of comparing the images - that you can use MSE, that it is not completely wrong. If I have read it wrong, that's another thing, but I think I have read the wiki correctly.
 
You need to check it yourself. It does not matter who agreed with whom; it only holds weight if it's correct. I am willing to agree with you if you are correct.

From the wiki on SSIM.

We are using compressed images, thus by the wiki MSE is typically used. If the image is decompressed then SSIM is suggested instead of MSE. The wiki is the unbiased arbiter here. If it's wrong then it's wrong, but that is unlikely. You won't accept just my word on the matter and you have people telling you different, so I use the wiki as a source to show that this is not necessarily the wrong way of comparing the images - that you can use MSE, that it is not completely wrong.

It's pretty significant if you're posting a person's comparison but ignoring the same person saying their comparison was not correct!
 
It's pretty significant if you're posting a person's comparison but ignoring the same person saying their comparison was not correct!
No it's not. If I have evidence he is using the right method then I can have a bit of confidence. If he states later that he used MSE but was not sure, that does not matter, because I know he used the right method. I should have checked his work in depth, but like everyone else I am just too lazy. So it's easier just not to use his work in any future arguments, if it proves contentious.

Just to add more sources showing that you can use MSE: this video can help https://youtu.be/RGQ-IXg0REQ - here the video compares the original image to the decrypted image. We get an MSE of 0, which shows we decrypted the image. The whole video covers MSE, so if you learn Python you can check his work and see if he got anything wrong. MSE is simple to work out; the video shows you in Excel.

This video does it in MATLAB; you can just copy and paste. Here MATLAB compares the two images and you get the result. So you can use the same images for DLSS and FSR, then compare to the same native image and see if you get the same values. https://youtu.be/UQm4hk-18_g
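
For reference, the quantity those videos (and the script earlier in the thread) compute is just the average squared per-pixel difference between two images A and B, written here for width W, height H and C colour channels:

```latex
\mathrm{MSE}(A, B) = \frac{1}{W H C} \sum_{x=1}^{W} \sum_{y=1}^{H} \sum_{c=1}^{C} \bigl( A_{x,y,c} - B_{x,y,c} \bigr)^2
```

An MSE of 0 means the two images are identical, which is why the decryption example above reports 0; larger values mean a larger average per-pixel difference from the reference.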
 
It's pretty significant if you're posting a person's comparison but ignoring the same person saying their comparison was not correct!

You should realize by now that zx128k is the most extreme Nvidia zealot on this board. Absolutely everything he says is to praise Nvidia and discredit AMD.

The thread is about FSR but you get all the trolls coming in praising DLSS. Best to ignore such 'fans', mate. The discussion should really be about comparing FSR to native, since that is what AMD is aiming for. DLSS is not comparable at all, since it's only available to about 15% of the total GPU userbase and is also proprietary tech. Anyone without an RTX card does not care if it's better.
 
The person who made that comparison agreed it was wrong and explained it was a rushed piece of work.

No he didn't say it was wrong. MSE is a well-accepted measure of accuracy. The other commenters suggest there may be better metrics, but the idea that they will differ significantly from MSE is a stretch. They will be strongly correlated. Anyone who has trained any kind of model will know this.

The other comment about using a supersampled image is fair, but again that isn't going to make FSR look any better. If anything it will actually benefit DLSS, as it tries to do supersampling beyond the native image.
 
You're giving DLSS too much credit. It can't fundamentally change an image like that.

There must be a difference in the underlying base rendering. Even then it did a pretty good job with what it was working from.

It shouldn't - but I've seen similar with CP2077, where certain textures on distant buildings are filled in wrong with DLSS, which I assume is a problem with the trained model - there are several unfinished parts of CP2077 with low-res/placeholder textures that are similar to existing high-res textures used more widely, which I think is causing some confusion. (EDIT: Though that again could be a LOD thing, given they are distant buildings.)

Though the more likely explanation by far is some kind of LOD issue where for whatever reason the input renderer for DLSS is rendering a different asset.
 
No it's not. If I have evidence he is using the right method then I can have a bit of confidence. If he states later that he used MSE but was not sure, that does not matter, because I know he used the right method. I should have checked his work in depth, but like everyone else I am just too lazy. So it's easier just not to use his work in any future arguments, if it proves contentious.

Just to add more sources showing that you can use MSE: this video can help https://youtu.be/RGQ-IXg0REQ - here the video compares the original image to the decrypted image. We get an MSE of 0, which shows we decrypted the image. The whole video covers MSE, so if you learn Python you can check his work and see if he got anything wrong. MSE is simple to work out; the video shows you in Excel.

This video does it in MATLAB; you can just copy and paste. Here MATLAB compares the two images and you get the result. So you can use the same images for DLSS and FSR, then compare to the same native image and see if you get the same values. https://youtu.be/UQm4hk-18_g

No he didn't say it was wrong. MSE is a well-accepted measure of accuracy. The other commenters suggest there may be better metrics, but the idea that they will differ significantly from MSE is a stretch.

The other comment about using a supersampled image is more accurate, but again that isn't going to make FSR look any better. If anything it will actually benefit DLSS, as it tries to do supersampling.

The reddit thread is linked right there if you want to check such statements, but the bottom line is that posting, as evidence, a comparison that its own creator accepts is flawed is extremely questionable.

Also, making claims that maybe it's OK while doing no work to present it to a better-informed audience than here is not great at all.
 
The reddit thread is linked right there if you want to check such statements, but the bottom line is that posting, as evidence, a comparison that its own creator accepts is flawed is extremely questionable.

The method is not flawed. I just showed you what the wiki stated, and showed you videos where you can check his work using MATLAB. You can get MATLAB as a trial. I can't help you any more than that, apart from doing all the work for you.

One issue remains: are the images all from the same rendered frame? If they are, then the output will produce a value that shows the difference between each image. This does not help DLSS, which adds details not found in the native image; those added details count as a difference between the DLSS image and the native one, making DLSS look worse than it really is. For FSR, it's just the amount by which FSR fails to match the native image.

You have everything you need now to create a constructive argument.
 
The method is not flawed. I just showed you what the wiki stated, and showed you videos where you can check his work using MATLAB. You can get MATLAB as a trial. I can't help you any more than that, apart from doing all the work for you.

One issue remains: are the images all from the same rendered frame? If they are, then the output will produce a value that shows the difference between each image. This does not help DLSS, which adds details not found in the native image; those added details count as a difference between the DLSS image and the native one, making DLSS look worse than it really is. For FSR, it's just the amount by which FSR fails to match the native image.

You have everything you need now to create a constructive argument.

Do my eyes deceive me or are you asking me to do the work to fix your broken evidence source?

The answer is no. If you're convinced the method is fine and the author discredited their comparison for no reason, then fix the comparison and demonstrate it to the same audience you found it in.
 