FidelityFX Super Resolution in 2021

The mighty neural network was unable to solve the ghosting in games, which should be easy as ... If any AI were involved, the AI "knows" the shape of the car and that it is only displaying edges left over from the last frame.
And I suspect they have fixed most of it now by dumping most of the temporal data and focusing more on spatial reconstruction, because people are seeing big improvements even in Death Stranding, and we were told that the motion vectors in that game were poor and that that was why it had a lot of ghosting. Yet the mighty neural network was able to get rid of most of the ghosting only after TSR was released and just before FSR is released.
Plus, what you describe here looks a lot like per-game training, but we are told that DLSS 2.0 no longer works that way: you feed images from CP 2077 into the neural network and that makes Metro Exodus look better. :)



What a weird pile of gumpf.

DLSS 2.1 reduced the amount of ghosting compared to TAA implementations, even if some ghosting still remained.
DLSS 2.2 is a further improvement in the motion model, not just in Death Stranding but in other games and in artificial tests.

The rest of your post is just weird conspiracy-style nonsense without any evidence. If DLSS 2 dropped the temporal accumulation then the result would go back to DLSS 1 quality.


And no, DLSS 2.2 has no per-game training. The training is done by Nvidia on hundreds of different games and settings a priori, producing a generic model, which is easily testable because DLSS 2 works great in any tech demo you can set up in UE or whatever, with your own unique assets.


Good thread here:
https://www.reddit.com/r/nvidia/comments/o1zmev/cyberpunk_2077_dlss_21_vs_22/

Just dropping the latest DLSS 2.2 library into games that support DLSS 2.0 and 2.1 results in a massive improvement, without any change to the game engine, just an improved DL model.
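
(For anyone wanting to try it: the swap is literally just replacing one file. A rough sketch, assuming hypothetical paths and that the game keeps its DLSS runtime, nvngx_dlss.dll, next to the executable; back up the original first.)

```python
from pathlib import Path
import shutil

# Hypothetical example paths -- adjust for your own install.
game_dir = Path(r"C:\Games\Cyberpunk 2077\bin\x64")  # assumed location of the game's nvngx_dlss.dll
new_dll = Path(r"C:\Downloads\nvngx_dlss.dll")        # newer DLSS library taken from another game

target = game_dir / "nvngx_dlss.dll"
backup = target.with_name(target.name + ".bak")

shutil.copy2(target, backup)   # keep a backup of the shipped version
shutil.copy2(new_dll, target)  # drop in the newer library
print(f"Replaced {target} (backup at {backup})")
```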
 
What a weird pile of gumpf.
Oh, mine is a pile of ...
The evidence is there for everyone to see. Yesterday DLSS had a huge temporal problem. Now most of it is fixed. You think they improved the neural network; don't you think it is a big coincidence that the neural network was unable to understand the shape of a car in Cyberpunk but now does it much better, just after another company made its own version of TAA upscaling and just before another company releases a non-temporal upscaler? Which is more plausible: that they took a look at how UE5 and/or FSR work and improved their tech by reducing the temporal factor, or that they fixed it by feeding trillions more images into the neural network in the last month? :)

I said they dumped most of the temporal data, not all of it.
 
Oh, mine is a pile of ...
The evidence is there for everyone to see. Yesterday DLSS had a huge temporal problem...


Stopped reading right here, because that simply isn't true. DLSS 2.1 had a minor ghosting problem in some situations, but the problem was much less severe than what exists in TAA in most games.

No coincidence about anything. Of course Nvidia are continuously improving DLSS. They haven't even announced DLSS 2.2.
 
Stopped reading right here, because that simply isn't true. DLSS 2.1 had a minor ghosting problem in some situations, but the problem was much less severe than what exists in TAA in most games.

No coincidence about anything. Of course Nvidia are continuously improving DLSS. They haven't even announced DLSS 2.2.
Look at that CP footage from DLSS 2.1. The difference vs DLSS 2.2 is big; you can see the ghosting right away, you don't need screenshots for that.
How hard is it for a neural network to understand that some pixels belong to the car as it looked in the previous frame and should not be taken into consideration for the displayed frame? Is this what state-of-the-art AI can do at the moment, or is it proof that there is not much AI involved and that the emphasis is on temporal data?
If the AI can't solve such simple things, how can it solve even more complicated shapes and objects that it has no idea about, since you say the training is generic? A car has a very easy-to-understand shape; a plant or an animal has much more complicated shapes.
 
Look at that CP footage from DLSS 2.1. The difference vs DLSS 2.2 is big; you can see the ghosting right away, you don't need screenshots for that.
How hard is it for a neural network to understand that some pixels belong to the car as it looked in the previous frame and should not be taken into consideration for the displayed frame? Is this what state-of-the-art AI can do at the moment, or is it proof that there is not much AI involved and that the emphasis is on temporal data?
If the AI can't solve such simple things, how can it solve even more complicated shapes and objects that it has no idea about, since you say the training is generic? A car has a very easy-to-understand shape; a plant or an animal has much more complicated shapes.

No one is denying that 2.2 brings big improvements, quite the opposite; it just shows the difficult task FSR will have living up to the competition.

The reduction in ghosting can come from many parts of the architecture. The actual temporal accumulation doesn't use DL; it is just a straight projection of a pixel at time T1 to time Tn given the motion vectors of the triangle at each intermediate frame. Exactly how to achieve that depends on considerations of quality, performance and the handling of edge cases. There is plenty of scope to improve the temporal projection.
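
Conceptually the projection looks something like this (my own minimal sketch, not NVIDIA's code; real implementations filter, detect disocclusions and reject bad history on top of this):

```python
import numpy as np

def reproject(prev_frame, motion_vectors):
    """Warp frame T-1 into frame T using per-pixel screen-space motion vectors.

    prev_frame:     (H, W, 3) colour buffer from the previous frame
    motion_vectors: (H, W, 2) offsets (dx, dy) pointing from each current pixel
                    back to where that surface point was in the previous frame
    """
    h, w = prev_frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.rint(xs + motion_vectors[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.rint(ys + motion_vectors[..., 1]).astype(int), 0, h - 1)
    return prev_frame[src_y, src_x]  # nearest-neighbour fetch; real code filters

def accumulate(current, reprojected_history, alpha=0.1):
    # Exponential blend: keep most of the reprojected history, mix in the new frame.
    # Ghosting appears when stale history survives this blend where it shouldn't.
    return alpha * current + (1.0 - alpha) * reprojected_history
```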

The DL uses the temporally accumulated image as input to a spatial upscaler. This model learns not only standard spatial convolutional upsampling, but additionally corrects for artifacts like ghosting.

It is trivial to train a modern CNN to learn a function that corrects for a distortion; this is used in modern photo processing like PS/LR. DLSS 2 is doing the same thing, learning to invert distortions that occur in temporally accumulated images. DLSS has a huge helping hand in that it knows the motion vectors that were used to do the projection.
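
As a toy illustration of that point (my own sketch, nothing to do with NVIDIA's actual network or data): a small residual CNN can learn to undo a synthetic ghosting-style distortion just from clean/distorted pairs.

```python
import torch
import torch.nn as nn

class DistortionCorrector(nn.Module):
    """Tiny residual CNN: predicts a correction to add to the distorted input."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

model = DistortionCorrector()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

for step in range(200):
    clean = torch.rand(8, 3, 64, 64)              # stand-in for ground-truth frames
    ghost = torch.roll(clean, shifts=4, dims=-1)  # crude "previous frame" shifted sideways
    distorted = 0.7 * clean + 0.3 * ghost         # synthetic ghosting-style distortion
    loss = loss_fn(model(distorted), clean)
    opt.zero_grad()
    loss.backward()
    opt.step()
```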


So, fundamentally, you are misunderstanding what the DL is doing. It isn't specifically recognising pixels belonging to objects; it is recognising distortions in an image given the motion vectors.


e.g., this stuff is already ancient by DL standards,
http://vision.cs.utexas.edu/projects/on_demand_learning/
https://xiaoyu258.github.io/projects/geoproj/
 
No one is denying that 2.2 brings big improvements, quite the opposite; it just shows the difficult task FSR will have living up to the competition...

Let's think for a second: I see you didn't like it when I called ghosting a huge problem, so let's say DLSS had this small problem in some games, even though, if you go and read on Reddit, you see a lot of people claiming that changing the DLSS file brings a big improvement to IQ. But let's call it a small problem.
So DLSS had this problem in CP, for example, for 7 months. An update comes for another game, people take the DLSS file and put it inside the CP folder, and it solves most of the problem with ghosting in CP. Not only there but also in DS and who knows what other games.
What has changed? There were no updates for CP or DS, no new "motion vectors" to be fed into the neural network. The most plausible idea is that the temporal data was in part dumped, especially when we consider the timing of the update.
I don't think it is a bad thing if they took inspiration from UE5 and/or FSR, but I don't understand why you put so much faith in the neural network when there are simpler explanations. How come DLSS was unable to recognise "distortions in an image given the motion vectors" for 7 months (or more if we talk about DS) and suddenly now it can? :)
 
Let's think for a second: I see you didn't like it when I called ghosting a huge problem, so let's say DLSS had this small problem in some games, even though, if you go and read on Reddit, you see a lot of people claiming that changing the DLSS file brings a big improvement to IQ. But let's call it a small problem.
So DLSS had this problem in CP, for example, for 7 months. An update comes for another game, people take the DLSS file and put it inside the CP folder, and it solves most of the problem with ghosting in CP. Not only there but also in DS and who knows what other games.
What has changed? There were no updates for CP or DS, no new "motion vectors" to be fed into the neural network. The most plausible idea is that the temporal data was in part dumped, especially when we consider the timing of the update.
I don't think it is a bad thing if they took inspiration from UE5 and/or FSR, but I don't understand why you put so much faith in the neural network when there are simpler explanations. How come DLSS was unable to recognise "distortions in an image given the motion vectors" for 7 months (or more if we talk about DS) and suddenly now it can? :)
This all makes it seem like DLSS might not be as dependent on AI as we think. But since it is a black-box thing, we can't say for sure.
 
Which is more plausible: that they took a look at how UE5 and/or FSR work and improved their tech by reducing the temporal factor, or that they fixed it by feeding trillions more images into the neural network in the last month? :)

I said they dumped most of the temporal data, not all of it.

Unlikely - nVidia have vast experience in this area, and many techniques for image scaling and transformation in common use are credited to people who either now work at or have worked for nVidia. They wouldn't need to look at FSR, etc. for ideas. The issue is likely more involved than that; in any case, the temporal upscaling in Quake 2 RTX had a lot of nVidia involvement and has no real issue with trails, etc.

What has changed? There were no updates for CP or DS, no new "motion vectors" to be fed into the neural network. The most plausible idea is that the temporal data was in part dumped, especially when we consider the timing of the update.
I don't think it is a b

Doesn't necessarily follow - the game might have been supplying motion data that was unused or used ineffectively previously. Without a deep dive into what is essentially a black box, it is difficult to know. DLSS sits in the same spot in the pipeline as temporal methods, so it would have access to the motion vector data if supplied.
 
Unlikely - nVidia have vast experience in this area, and many techniques for image scaling and transformation in common use are credited to people who either now work at or have worked for nVidia. They wouldn't need to look at FSR, etc. for ideas. The issue is likely more involved than that; in any case, the temporal upscaling in Quake 2 RTX had a lot of nVidia involvement and has no real issue with trails, etc.
Yet CP 2077 is 7 months old. Other games are even older. And DLSS had this problem with ghosting in a lot of games.

Doesn't necessarily follow - the game might have been supplying motion data that was unused or used ineffectively previously. Without a deep dive into what is essentially a black box, it is difficult to know. DLSS sits in the same spot in the pipeline as temporal methods, so it would have access to the motion vector data if supplied.

So:
CP 2077
Death Stranding
F1 2020
WD Legion

Probably more games. All these games come up in the Reddit threads about the so-called DLSS 2.2. People are reporting reduced ghosting in every game.
So a 14 MB file from a newly DLSS-supported game contains new motion vectors and everything DLSS needs for all these games? The games were not updated: use the older DLSS file and you get ghosting, use the newer one and you get much less ghosting.
 
Yet CP 2077 is 7 months old. Other games are even older. And DLSS had this problem with ghosting in a lot of games.



So:
CP 2077
Death Stranding
F1 2020
WD Legion

Probably more games. All these games come up in the Reddit threads about the so-called DLSS 2.2. People are reporting reduced ghosting in every game.
So a 14 MB file from a newly DLSS-supported game contains new motion vectors and everything DLSS needs for all these games? The games were not updated: use the older DLSS file and you get ghosting, use the newer one and you get much less ghosting.

Motion vectors won't be in those files; they are provided by the game based on real-time frame data. Those files may provide a superior reconciliation when using the same game-provided temporal data input.
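
To make that concrete, here is hypothetical pseudocode of where the upscaler sits in a frame (illustrative only, not the real NGX/DLSS API): the colour, depth and motion-vector buffers are handed over by the engine every frame, so a newer library can treat exactly the same inputs differently.

```python
# Hypothetical render loop -- the engine and upscaler objects are assumptions for illustration.
# The per-frame buffers come from the game; the upscaler library only decides
# how to combine them with its internal history.
def render_frame(engine, upscaler, history):
    color_lr = engine.render_color(low_res=True)
    depth_lr = engine.render_depth(low_res=True)
    mvecs_lr = engine.render_motion_vectors(low_res=True)  # same data regardless of DLL version
    output, history = upscaler.evaluate(color_lr, depth_lr, mvecs_lr, history)
    engine.present(output)
    return history
```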
 
Motion vectors won't be in those files; they are provided by the game based on real-time frame data. Those files may provide a superior reconciliation when using the same game-provided temporal data input.
But if the games were not updated, it means there are no new/better motion vectors; that's why I asked you. So the only explanation, if we don't count mine ("they dropped part of the temporal data"), is the "superior reconciliation", or that the data in the file makes DLSS work better with the old motion vectors?
But isn't it still a big coincidence that this happens now, after we've seen TSR and right before we will see FSR?
 
But if the games were not updated, it means there are no new/better motion vectors; that's why I asked you. So the only explanation, if we don't count mine ("they dropped part of the temporal data"), is the "superior reconciliation", or that the data in the file makes DLSS work better with the old motion vectors?
But isn't it still a big coincidence that this happens now, after we've seen TSR and right before we will see FSR?

It is possible that previously DLSS wasn't making as efficient use of the motion data the game was presenting it with, compared to the new files. DLSS is a work in progress, so it isn't that noteworthy to see changes - there is at least one data set they've not released to the public which seems to be optimised for ray-tracing titles, and probably more stuff besides.
 
FSR is now live and can be tested

However, interestingly, none of the games with FSR also have DLSS. Coincidence? Maybe I could go with that, until I saw the AMD review guidelines.

AMD has sent out these guidelines to reviewers to ensure it doesn't get compared to DLSS; hopefully those sites call AMD out for it.

[Attached image: screenshot of AMD's review guidelines]
 