AMD FSR 3.0 has exposed the ugly truth about most PC gamers

By that logic, if I start seeing pink elephants, the alcohol is letting me see things that are missing in real life.

I was once so drunk after a birthday party that, lying in bed on my side, I saw an amazing-looking huge spider meticulously building a web on my nightstand. It then waved to me and toddled off.

I haven't touched absinthe since :D
 
By that logic, if I start seeing pink elephants, the alcohol is letting me see things that are missing in real life.

I reckon he has just picked a frame of the scene where the road was missing the shadow detail from the two lines crossing the road. DLSS or any other temporal upscaler uses data from multiple previous frames to create the current frame, whereas native mode will no doubt have only a single frame's worth of data to work with. I suspect the frames before and after the one chosen for the comparison do have the missing information. It's not as if DLSS is creating detail which does not exist in the native rendering pipeline.

Also, the native side seems to be showing better texture detail on the road than the DLSS side. It should be the other way around if DLSS were actually creating more detail.
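
For anyone unfamiliar with how that multi-frame history works, here is a minimal NumPy sketch of generic temporal accumulation. It's illustrative only: the nearest-neighbour reprojection and the fixed blend factor are simplifying assumptions of mine, not how DLSS (which is proprietary) actually does it:

```python
import numpy as np

def reproject(history, motion_vectors):
    """Fetch last frame's accumulated colour from where each pixel
    was a frame ago (nearest-neighbour, purely illustrative)."""
    h, w, _ = history.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip((xs - motion_vectors[..., 0]).round().astype(int), 0, w - 1)
    src_y = np.clip((ys - motion_vectors[..., 1]).round().astype(int), 0, h - 1)
    return history[src_y, src_x]

def temporal_accumulate(current, history, motion_vectors, alpha=0.1):
    """Blend the newly rendered frame into the reprojected history.
    Detail (e.g. a thin shadow) that drops out of one raw frame can
    survive in the history, which is why judging a temporal method
    from a single frame grab can mislead."""
    prev = reproject(history, motion_vectors)
    return alpha * current + (1.0 - alpha) * prev
```

Each output frame is therefore built from many jittered samples over time, not just the single sample per pixel a native frame contains.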
 
It's not as if DLSS is creating detail which does not exist in the native rendering pipeline.
I mean, if it's creating details that are not there in the native image, it's failed. Like it or not, the native image is the native image, so if image processing techniques add or remove details then they're failing to recreate the image; they're altering it.

They may be altering it for what the person seeing it considers to be the better or worse, but those are subjective. If you copy something like the Mona Lisa you try to copy it flaws and all; you don't make a copy that lets people see the brush strokes better, because then it's no longer a copy of the Mona Lisa.
 
The road is more detailed in the DLSS/FG versions, not native. What you are seeing is the sun moving across the sky creating a softer surface look; there's no way to stop the movement of the weather/sun in the game between screenshots, so lighting will subtly change as screenshots are taken.

You can literally zoom into any occluded part of the road, such as where it meets the kerb, for detail retention that isn't there in the native version - a more worthwhile measure of detail gain/loss than the sunlit areas of road, where RTGI/occlusion etc. plays a part based on the direction of the sun.

It's long been clear that upscaling uses AI to reconstruct lost details; that was one of the marketing points from Nvidia and AMD back in the early DLSS/FSR days. It's just much more obvious now because they have advanced quite a bit and are actually good quality for the most part, versus the low-quality blurfest of old.

I reckon he has just picked a frame of the scene where the road was missing the shadow detail from the two lines crossing the road. DLSS or any other temporal upscaler uses data from multiple previous frames to create the current frame, whereas native mode will no doubt have only a single frame's worth of data to work with. I suspect the frames before and after the one chosen for the comparison do have the missing information. It's not as if DLSS is creating detail which does not exist in the native rendering pipeline.

I didn't pick anything; I took three screenshots with each setting mentioned whilst in-game, and that's as far as "picking" went.

If you're still unconvinced, here's a video. At these follow-camera angles it's clear that the native resolution version doesn't have the definition for distant occlusion/detail on the surface, so it's muted away, whereas DLSS accounts for this: the AI has analysed the scene, noted that there is meant to be detail there, and so puts the texture/shadow detail there with correct occlusion:


Screenshot from the video zoomed in:

Native:
LXsv1Rq.png


DLSS:
5KRXwiY.png



This is actually the first time I've watched this quirk on video and had the time to rewind and fast-forward between the renders, so it's pretty clear what's happening versus just taking a screenshot of any given scene in game. The detail /is/ technically there; it's just not being rendered properly at native when the camera is zoomed out. At an overhead angle, looking directly at that distant area, you'll make it out whether native or not. The purpose of AI reconstruction is to restore the details that would otherwise be there regardless of rendering method, to be realistically convincing. When you get low to the ground in real life you don't suddenly see less detail, so why should game rendering suffer that issue, as it does at native resolution?

I find it amusing, but also in some ways insulting, that no matter what actual proof is posted, someone will always be there to go "oh, that was cherry-picked", or come up with some other excuse to throw shade on it.
 
I find it amusing, but also in some ways insulting, that no matter what actual proof is posted, someone will always be there to go "oh, that was cherry-picked", or come up with some other excuse to throw shade on it.

Yup, it's very tiring tbh. Like I said before, there should be a rule now: if people are going to make sweeping claims that go against several reputable sources, as well as the likes of yourself and me who post evidence to back up our statements, then those who oppose the posts should post something of substance to back up their points too. Otherwise it just comes across as fingers-in-ears and almost baiting/trolling; the onus is on them to debunk said points. And for those who just post one-liners, well, those posts should be deleted instantly if they aren't willing to engage in the discussion. Head over to X/Twitter for that level of discussion if that's your thing, imo.

It's a bit like in the other thread with the poll, where I posted my imgsli comparison of native and DLSS Quality, which showed DLSS better overall than native in terms of rendering the text and some other details better, but alas the fingers-in-ears continued:

cz18yC1.png


And DLDSR with DLSS Perf for comparison:

2xLQ0Xp.png


At the end of the day, we can all pick areas where DLSS and even FSR look better, or pick areas where native (even with TAA) looks better. The key thing is how often each method looks better: if native is only better, say, 40% of the time whereas DLSS/FSR/XeSS is better 60% of the time, then I know what I'll pick. But again, it all comes down to what one values for IQ: sharpness/clarity, fewer jaggies, less shimmering, less softness, less ghosting. This is where the Alex/DF video will be very good (inb4 shill).

3BhElm9.png
 
The road is more detailed in the DLSS/FG versions, not native
If DLSS/FG is adding more details, then it's not recreating the native image correctly. Like I said, you don't make a more detailed copy of something like the Mona Lisa, because when you do, it's no longer the Mona Lisa.
 
Is DLSS/FG perfect? Nope. Does it make the overall experience of playing CP2077 better for me, not being on top-tier hardware? Hell yeah: it massively improves the visuals and the fps I can get, so that's a win in my book.
 
If DLSS/FG is adding more details, then it's not recreating the native image correctly. Like I said, you don't make a more detailed copy of something like the Mona Lisa, because when you do, it's no longer the Mona Lisa.
The details are lost when using native; that's the point. Not lost at all times, but lost at zoom distances or angles where they should still be visible in reality. DLSS restores that lost detail. The data is still there, just like how FG requires motion vectors to work effectively; the AI just has to use frame data to determine what the objects in frame are and then decide whether to put the detail there at that zoom or angle, as it should be, since those details are there when you zoom in or elevate the camera a bit more, as demonstrated by my video above.

Native just doesn't have the smarts to cleanly render everything at every camera zoom or angle; that is the fact of the matter. AI does have that ability, since it has more data to work with, and the higher the internal res (DLSS preset), the more data it has to produce an even better result, again, as demonstrated.

I don't know why this is a surprise to anyone in 2024; this has been the norm for ages.

Edit: And then we have to talk about temporal stability too, which is where native just doesn't match or beat upscaled. I say upscaled, not DLSS specifically, because some games are more stable with XeSS (RoboCop), but the vast majority of games are more stable with DLSS. My video shows this too if you look at things being moved by the wind, like the plants in the background. Native will require DLAA to get similar temporal stability, but then you also introduce a large framerate cost and still have to deal with lost details as described above, since you cannot use DLSS and DLAA at once.
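
As an aside, that temporal stability difference is easy to quantify crudely. Here's a toy sketch, assuming a locked-off static camera shot, where any per-pixel variation across frames on unchanging geometry is flicker/shimmer (the metric is my own simplification, not an industry-standard measure):

```python
import numpy as np

def shimmer_score(frames):
    """Crude stability metric for a static shot: per-pixel standard
    deviation across a stack of captured frames, averaged over the
    image. Higher means more flicker on geometry that isn't moving."""
    stack = np.stack(frames).astype(np.float32)  # shape (n, h, w, c)
    return float(stack.std(axis=0).mean())
```

Captured over the same locked-off scene, a lower score for the upscaled output than for native would reflect the shimmer difference visible in the video.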
 
If DLSS/FG is adding more details, then it's not recreating the native image correctly. Like I said, you don't make a more detailed copy of something like the Mona Lisa, because when you do, it's no longer the Mona Lisa.

DLSS is trained on a 16k ground truth. So you are right, it isn't trying to create the detail of a native 4k image.

I don't think you actually know how DLSS works.

Also any AA is moving away from the 4k native pixel image.
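
To unpack what "trained on a 16k ground truth" means in practice, here's a generic super-resolution training step in Python. Everything here is a placeholder sketch of mine (the `net`, the L1 loss, the area downsample), not Nvidia's actual training setup, which isn't public:

```python
import torch
import torch.nn.functional as F

def training_step(net, optimizer, low_res, ground_truth_16k, out_size):
    """One generic super-resolution training step. The target is the
    16k reference downsampled to the output size, so the network is
    pushed towards detail that a native render at out_size (with its
    one sample per pixel) would not fully resolve."""
    target = F.interpolate(ground_truth_16k, size=out_size, mode="area")
    prediction = net(low_res)             # network's upscaled guess
    loss = F.l1_loss(prediction, target)  # pixel-wise error vs target
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The point is that the reference the network is judged against contains more information than a single native frame at the output resolution, which is why it can plausibly recover detail a native render misses.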
 
I forgot about that, actually; it was mentioned by Digital Foundry way back, and I recall reading about it on r/nvidia. I just googled and found an old thread referring to DLSS v2.0; we are now on DLSS v3.5.10 and have come a long way in reducing the ghosting issues of old for objects in motion:


Given that DLSS uses 16k as the ground truth, and that on static scenes it can carry all of the information from previous frames into the following frame, after a few frames it should be able to render a better-than-native image, which is often evidenced by smaller details. This seems more evident at lower resolutions.
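
The accumulation argument is easy to put numbers on. A toy calculation, assuming DLSS Performance at 4K (1080p internal), perfect reprojection and a static scene, so every jittered frame contributes unique samples:

```python
# Unique jittered samples seen per output pixel after n frames.
# Idealised: assumes perfect reprojection and a static scene.
internal = 1920 * 1080            # DLSS Performance internal res at 4K
output = 3840 * 2160              # output resolution
per_frame = internal / output     # 0.25 samples per output pixel

for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} frames -> {n * per_frame:.2f} samples/pixel "
          "(single native 4K frame = 1.00)")
```

On this idealised model the history passes a native frame's one sample per pixel after four frames, which is why fine detail can settle in after a brief moment of accumulation.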
 
The road is more detailed in the DLSS/FG versions, not native. What you are seeing is the sun moving across the sky creating a softer surface look; there's no way to stop the movement of the weather/sun in the game between screenshots, so lighting will subtly change as screenshots are taken.

You can literally zoom into any occluded part of the road, such as where it meets the kerb, for detail retention that isn't there in the native version - a more worthwhile measure of detail gain/loss than the sunlit areas of road, where RTGI/occlusion etc. plays a part based on the direction of the sun.

It's long been clear that upscaling uses AI to reconstruct lost details; that was one of the marketing points from Nvidia and AMD back in the early DLSS/FSR days. It's just much more obvious now because they have advanced quite a bit and are actually good quality for the most part, versus the low-quality blurfest of old.

I didn't pick anything; I took three screenshots with each setting mentioned whilst in-game, and that's as far as "picking" went.

If you're still unconvinced, here's a video. At these follow-camera angles it's clear that the native resolution version doesn't have the definition for distant occlusion/detail on the surface, so it's muted away, whereas DLSS accounts for this: the AI has analysed the scene, noted that there is meant to be detail there, and so puts the texture/shadow detail there with correct occlusion:


Screenshot from the video zoomed in:

Native:
LXsv1Rq.png


DLSS:
5KRXwiY.png



This is actually the first time I've watched this quirk on video and had the time to rewind and fast-forward between the renders, so it's pretty clear what's happening versus just taking a screenshot of any given scene in game. The detail /is/ technically there; it's just not being rendered properly at native when the camera is zoomed out. At an overhead angle, looking directly at that distant area, you'll make it out whether native or not. The purpose of AI reconstruction is to restore the details that would otherwise be there regardless of rendering method, to be realistically convincing. When you get low to the ground in real life you don't suddenly see less detail, so why should game rendering suffer that issue, as it does at native resolution?

I find it amusing, but also in some ways insulting, that no matter what actual proof is posted, someone will always be there to go "oh, that was cherry-picked", or come up with some other excuse to throw shade on it.

OK, you put all three upscalers in this thread along with native and we will see if DLSS is doing some fancy AI as you claim. I suspect the temporal upscalers will all look about the same, since they all use multiple frames to generate a frame. From my testing I can clearly see that native intermittently loses the shadows as you move back and forth, while the upscalers all render the shadow, since they can take the info from a few frames to render a frame.
 
DLSS is trained on a 16k ground truth. So you are right, it isn't trying to create the detail of a native 4k image.

I don't think you actually know how DLSS works.

Also any AA is moving away from the 4k native pixel image.
Or people simply don't understand what native means. Seriously, people who say upscaling/frame generation is better than native have lost their marbles. If I made a copy of the Mona Lisa and shrunk my copy down to 7x5 cm, it wouldn't be better than native simply because I've made a smaller copy with the same details.

If DLSS or any post-process upscaling technology is using a 16k 'ground truth', then it's not better than native, because the native image in that case was the 16k one. Native is native. :rolleyes:
 
Or people simply don't understand what native means. Seriously, people who say upscaling/frame generation is better than native have lost their marbles. If I made a copy of the Mona Lisa and shrunk my copy down to 7x5 cm, it wouldn't be better than native simply because I've made a smaller copy with the same details.

If DLSS or any post-process upscaling technology is using a 16k 'ground truth', then it's not better than native, because the native image in that case was the 16k one. Native is native. :rolleyes:

Enjoy digging that hole. Because clearly that is what everyone means.

If you have a 4k photo of the Mona Lisa, and I generate a more accurate Mona Lisa picture from a 2k photo because I'm also using information from a 16k photo, then yes, I can produce a better-than-4k photo.
 
So you don't think native means native? :cry:

Native is meaningless. I agree a particular resolution is a particular resolution, and image quality is a particular image quality.

We both know that when people say DLSS brings out detail you wouldn't see in a 4k render, it is because of supersampling.

In fact, SSAA 2x and 4x are exactly that, without even needing deep learning. Just give up; you've failed to understand what DLSS is.
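
For reference, SSAA really is just that. A minimal sketch, assuming a simple box filter (real implementations may use better filters): render at double the target width and height, then average each 2x2 block down to one output pixel, giving four real samples per pixel:

```python
import numpy as np

def ssaa_downsample(supersampled, factor=2):
    """Box-filter a frame rendered at factor x the target width and
    height: each output pixel is the average of a factor x factor
    block of genuinely rendered samples."""
    h, w, c = supersampled.shape
    return supersampled.reshape(h // factor, factor,
                                w // factor, factor, c).mean(axis=(1, 3))
```

Every extra sample here is real geometry/shading information, which is the same "more information than one native sample per pixel" idea that DLSS approximates temporally instead of by brute force.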
 