
AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

Status
Not open for further replies.
Radeon Image Sharpening works in every game without developer input; FidelityFX needs to be implemented per game.

Anybody who says DLSS offers better-than-native image quality is drinking green Kool-aid.

Distant features should not be sharp, so if DLSS is sharpening thin cables and fences placed hundreds of meters away from the camera, then DLSS is killing the realism of the image.
 
I'm sorry, what? How can something be "better" than the original rendered image? For it to be "better", the original must be lacking something.

What does "just as sharp" mean? Why is being "sharp" better? Surely the game devs know how "sharp" they want certain details to be?

Also, how are these comparisons done? How can we be sure that, throughout the game, DLSS is providing the same IQ?
How is this trained? Is there someone at Nvidia HQ playing a version of this game at 8K to help train it?

I would have agreed with you, but try watching the Digital Foundry feature on Death Stranding... they recommend DLSS above native 4K for quality, and it does actually look better. They give several examples, hair for instance, where oddly enough the DLSS version actually gives you more detail and better IQ than native. Strange but true.

humbug - are you using FidelityFX with Star Citizen to run at a lower render resolution? If so, how? I’d like to do it with that and Assetto Corsa Competizione too, but I was under the impression it had to be enabled in-game?

I’m hoping AMD have some plans to combat DLSS, because it is constantly improving... regardless of how you feel about it vs native, I think it can be extremely useful in VR, where performance can be very hard to maintain, particularly at supersampled resolutions.
 
I'm not sure you understand. DLSS is better than native in image quality (bar a few problems with particles).
And that's the crux of it. It's not better than native, because true non-post-process AA doesn't alter or mess up particle effects.

And I would agree that some are taken in and indoctrinated by marketing from Nvidia.
The developer did not create objects that were missing textures or were set to lower-resolution textures based on higher IQ settings. So let's stop being silly here.

What is happening, obviously, is that there is post-processing, weather effects, etc. causing the true native image/object to look distorted. Nothing more, nothing less.
 
At the end of the day, it's impossible to have a better-quality image than 4K AND better performance; logically that doesn't make sense. It's not some kind of black magic.
 
At the end of the day, it's impossible to have a better-quality image than 4K AND better performance; logically that doesn't make sense. It's not some kind of black magic.


I really suggest watching this video... and I say that as someone who is a bit sceptical of DLSS overall. You can skip to around 15 mins to get to the bulk of the DLSS vs native vs FidelityFX stuff.

 
Still not overly sold on DLSS, but it does look (a bit) better there than FidelityFX. A proper implementation can be impressive; it's just a shame they're too few and far between.

Unfortunately, screenshots don't tell the full picture.

Where DLSS 2.0 really excels in Death Stranding is when you actually play the game, because it has far fewer jaggies than FidelityFX.

We should be looking at video comparisons and not static screenshots, because both look clean in a still frame, but when the game starts moving the jaggies start flowing.
 
And that's the crux of it. It's not better than native, because true non-post-process AA doesn't alter or mess up particle effects.

And I would agree that some are taken in and indoctrinated by marketing from Nvidia.
The developer did not create objects that were missing textures or were set to lower-resolution textures based on higher IQ settings. So let's stop being silly here.

What is happening, obviously, is that there is post-processing, weather effects, etc. causing the true native image/object to look distorted. Nothing more, nothing less.

At the end of the day, it's impossible to have a better-quality image than 4K AND better performance; logically that doesn't make sense. It's not some kind of black magic.

I just remembered MKBHD's blind smartphone camera shootout. Something looking better and something looking right are not the same. Weaker cameras were beating out stronger cameras because the post-processing (or weaker hardware) made the image look "better", even though it looked wrong relative to real life.

I wonder if it's the same in this situation. People are mistaking the post-processing for being better in some situations.

I've yet to watch Zeeflyboy's recommendation, so it might change my opinion; we shall see.

The videos, for those interested:
 
Unfortunately, screenshots don't tell the full picture.

Where DLSS 2.0 really excels in Death Stranding is when you actually play the game, because it has far fewer jaggies than FidelityFX.

We should be looking at video comparisons and not static screenshots, because both look clean in a still frame, but when the game starts moving the jaggies start flowing.

I will pay more attention to it when it's available in more than a dozen games.

I am also wondering if it's possibly an issue with the game engine that DLSS corrects, because this is the first time I have actually looked at a DLSS image and thought there was any real improvement.
 
I don't think you answered my question, unless you're alluding to having owned a Vega and then stopped. The weird part was the 15-20 year band: if it is 20 years, then you have owned a Radeon right up till now. If it was 15 years, then it could be from around when they were bought out by AMD.

I had a V64, now a 480, and various GPUs from every release all the way back to the ATI 9600 Pro and even older, preceded only by a 3dfx Voodoo 2 I think; I've kind of forgotten the exact models of the earlier ones. So that's why I say Radeon and not AMD.
 
I will pay more attention to it when it's available in more than a dozen games.

I am also wondering if it's possibly an issue with the game engine that DLSS corrects, because this is the first time I have actually looked at a DLSS image and thought there was any real improvement.
Yes, there is something going on in the game engine. I would further add that it's something being removed more so than fixed.
Because again, assets in the game weren't originally made to look like what we see now: blurry, pixelated, distorted. That is not how the assets look by themselves, i.e. fences, power wires, buildings. The developer didn't put in those assets looking like they do just so that DLSS could clear them up. No, no, no. Those assets were always sharp. There is something in the game engine causing this distortion. We know it's post-processing and dynamic weather effects, to name a few. But I'm sure there is something else going on.


I just remembered MKBHD's blind smartphone camera shootout. Something looking better and something looking right are not the same. Weaker cameras were beating out stronger cameras because the post-processing (or weaker hardware) made the image look "better", even though it looked wrong relative to real life.

I wonder if it's the same in this situation. People are mistaking the post-processing for being better in some situations.

I've yet to watch Zeeflyboy's recommendation, so it might change my opinion; we shall see.

The videos, for those interested:
I'm aware of that. Thanks for the reminder.
 
I really suggest watching this video... and I say that as someone who is a bit sceptical of DLSS overall. You can skip to around 15 mins to get to the bulk of the DLSS vs native vs FidelityFX stuff.
Snip

Just watched the section from 15 mins, and it just seems like an AA comparison. Native was always with TAA, and the issues he points to seem to me to be issues with TAA. If the argument is that DLSS is a new form of AA, then that's a different discussion to be had. Why did he never do a comparison to straight 4K without TAA? It seems odd not to do that.
It does make me wonder: when the image is downscaled and given to the tensor cores, does that image have TAA applied to it? How would that affect the output?

Why was contrast adaptive sharpening run at lower than 4K resolution in his test (from my understanding of his statement just before he begins to discuss CAS)? CAS is supposed to fix the issues of TAA, according to the AMD website, so it would have been interesting to see it run at native 4K.

Edit: It would be interesting to test this. Anyone want to lend me a 2080 Ti? Or do I need to wait to pick them up dirt cheap once Ampere is out :D
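For context on what CAS is doing: AMD describes it as sharpening whose per-pixel strength adapts to how much contrast is already there, so hard edges get less extra sharpening than soft detail. A hypothetical simplified sketch of that idea in NumPy (the weighting formula and parameters here are made up for illustration; the real FidelityFX CAS shader uses a different, carefully tuned one):

```python
import numpy as np

def cas_like_sharpen(img, max_amount=0.5):
    """Toy contrast-adaptive sharpening: the sharpening weight shrinks
    where the 3x3 neighbourhood already has high contrast."""
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    # All nine 3x3 neighbourhood samples, stacked along axis 0.
    stack = np.stack([p[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)])
    contrast = stack.max(axis=0) - stack.min(axis=0)
    # Low existing contrast -> weight near max_amount; hard edges -> less.
    weight = max_amount * (1.0 - contrast)
    # Sharpen by pushing each pixel away from its neighbourhood mean.
    return np.clip(img + weight * (img - stack.mean(axis=0)), 0.0, 1.0)

flat = cas_like_sharpen(np.full((4, 4), 0.5))  # no contrast, so unchanged
edge = cas_like_sharpen(np.where(np.arange(8) < 4, 0.3, 0.7) * np.ones((8, 8)))
```

A flat region passes through untouched, while a soft edge gets its contrast pushed up; that adaptivity is why CAS is pitched as a fix for TAA blur rather than a blanket sharpen.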
 
Just watched the section from 15 mins, and it just seems like an AA comparison. Native was always with TAA, and the issues he points to seem to me to be issues with TAA. If the argument is that DLSS is a new form of AA, then that's a different discussion to be had. Why did he never do a comparison to straight 4K without TAA? It seems odd not to do that.

Why was contrast adaptive sharpening run at lower than 4K resolution in his test (from my understanding of his statement just before he begins to discuss CAS)? CAS is supposed to fix the issues of TAA, according to the AMD website, so it would have been interesting to see it run at native 4K.

Edit: It would be interesting to test this. Anyone want to lend me a 2080 Ti? Or do I need to wait to pick them up dirt cheap once Ampere is out :D
I can answer that question, emphatically. Remember the NDA 2.0 controversy? Well, as you might have forgotten, DF is one of those websites that signed it. So it would make sense for them to toe the Nvidia line, hence all the issues you see them avoid, omit, ignore, etc.
 
I can answer that question, emphatically. Remember the NDA 2.0 controversy? Well, as you might have forgotten, DF is one of those websites that signed it. So it would make sense for them to toe the Nvidia line, hence all the issues you see them avoid, omit, ignore, etc.
I had indeed forgotten about that. I also thought that it was scrapped, or was that just the crap they tried to pull with AIB partners? Is there a list of who else has signed this?
 
I really suggest watching this video... and I say that as someone who is a bit sceptical of DLSS overall. You can skip to around 15 mins to get to the bulk of the DLSS vs native vs FidelityFX stuff.


You do realise a lot of the "native" results are using poor forms of AA, which blur the image. What DLSS is doing is not using blurry AA methods, and instead is using sharpening algorithms. If you don't believe me, go into Photoshop, look at a native DSLR JPEG which is soft, and try using unsharp mask (USM): it suddenly looks "better". Sharpening is basically using edge detection to find edges and then apply a steep local contrast gradient.
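The unsharp-mask trick described above fits in a few lines. A minimal NumPy sketch (a box blur stands in for Photoshop's Gaussian, and the demo image and parameters are made up for illustration):

```python
import numpy as np

def unsharp_mask(img, radius=1, amount=1.0):
    """Classic unsharp masking: subtract a blurred copy to isolate the
    high-frequency detail, then add that detail back scaled by `amount`."""
    h, w = img.shape
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    # Simple box blur as the low-pass filter (Photoshop uses a Gaussian,
    # but a box blur keeps this sketch dependency-free).
    blurred = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= k * k
    # The "mask" is the detail the blur removed; adding it back steepens
    # the local contrast gradient at edges, which reads as "sharper".
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

# A step edge: columns 0-3 dark, columns 4-7 light.
img = np.where(np.arange(8) < 4, 0.2, 0.8) * np.ones((8, 8))
sharp = unsharp_mask(img, radius=1, amount=1.5)
```

On a soft edge the output contrast across the edge ends up higher than the input's; the overshoot is simply clipped, which is also why aggressive sharpening produces visible halos.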
 
I had indeed forgotten about that. I also thought that it was scrapped, or was that just the crap they tried to pull with AIB partners? Is there a list of who else has signed this?
Before NDA 2.0 there was GPP, where Kyle at HardOCP uncovered the nefarious, unscrupulous, nebulous behaviour Nvidia was plotting.

If I recall correctly, there was a thread on this forum about it. I don't remember all the websites. From memory it was Digital Foundry, Gamers Nexus, Hardware Unboxed, Guru3D, TechPowerUp, Tom's Hardware and a few others who did. I think ComputerBase did also, but there was some controversy, so I don't know how that went.

Tom's Hardware in Europe, I think, didn't, along with a few others in the region.
 