• Competitor rules

    Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.

DLSS 5 preview

It is looking at the pixel information for the frame as a whole, which is what enables it to infer what the material is and how it should be lit.

No, it doesn't need any more inputs. The original raster image contains all the required information on light sources, materials and the scene.
Exactly my point. It contains all that information; the AI model needs to see it and "understand" what is what, not just go pixel by pixel through colours and motion vectors. I am glad we finally agree on that. And that IS an input (the whole scene), beyond what you stated.
 
Exactly my point. It contains all that information; the AI model needs to see it and "understand" what is what, not just go pixel by pixel through colours and motion vectors. I am glad we finally agree on that. And that IS an input (the whole scene), beyond what you stated.
The only inputs are the original rendered frame buffer and the motion vector buffer. Two images go in, the DL system crunches them, and out comes a lighting adjustment buffer. There are no additional inputs, contrary to what you kept saying.
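
To put it concretely, what's being described is nothing more exotic than this kind of pipeline. A toy sketch only - obviously not NVIDIA's actual code, and every name in it is made up:

    # Hypothetical sketch of the described data flow: two buffers in,
    # one lighting-adjustment buffer out. Not actual DLSS code.
    import torch
    import torch.nn as nn

    class LightingAdjustNet(nn.Module):
        """Toy stand-in for the deep-learning model being described."""
        def __init__(self):
            super().__init__()
            # 3 colour channels + 2 motion-vector channels in, 3 out.
            self.net = nn.Sequential(
                nn.Conv2d(5, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, 3, padding=1),
            )

        def forward(self, frame, motion):
            # The only claimed inputs: raster frame + motion vectors.
            return self.net(torch.cat([frame, motion], dim=1))

    frame = torch.rand(1, 3, 270, 480)   # rendered frame buffer (RGB)
    motion = torch.rand(1, 2, 270, 480)  # motion vector buffer (x, y)
    adjustment = LightingAdjustNet()(frame, motion)
    output = (frame + adjustment).clamp(0, 1)  # final adjusted frame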
 
Honestly can't believe all the fuss over this, especially when you look at the main Resident Evil character comparison: the non-DLSS 5 character looks like an anime character, with no depth or realism to the face at all.
Considering it's supposedly a PT version and they chose the worst possible look of that character for marketing BS reasons (you can check YouTube videos - she looks way better than in that screenshot!), it's a bad-faith comparison by NVIDIA in that specific case. But it also horribly backfired on them: they tried to make it look like their tech is a night-and-day quality change, but what they instead did was cause a lot of people's uncanny valley instinct to kick in - which is a real human thing; we evolved to instantly spot issues with human faces, and it causes literal fear in many people. To the point that the NVIDIA CEO has already gone into damage control mode, as they know they messed this one up.

The tech could be good, if it's fast enough and trained better. Not on that one example, though. I want to see it in practice myself on my 5090, in motion, in a few examples. I can easily spot all the DLSS issues (even with 4.5) in motion, and this one apparently forces FG on top (because the FPS sucks with it, I can only assume).
 
The only inputs are the original rendered frame buffer and the motion vector buffer. Two images go in, the DL system crunches them, and out comes a lighting adjustment buffer. There are no additional inputs, contrary to what you kept saying.
Can you show me that in code, please? As in, the DLSS 5 code. You seem to know it for sure, like gospel, so I am sure you can show practical evidence for it that isn't just NVIDIA marketing speak. No?

That aside, what you said and what I did not agree with is this:
"It has no knowledge of the underlying geometry at all, the only inputs are 'what colour is this pixel' and 'what direction and speed is this pixel moving'."
It doesn't just look at pixel colours and motion vectors. That's the only thing we don't agree on, I believe.
 
You know, it wouldn't surprise me if Nvidia were originally going to save this tech for the 60xx reveal, as the killer app to get 5090 owners upgrading. But now that that launch has almost certainly been delayed until at least late 2027, they decided to unveil it early as a way of boosting 50xx sales during the extended generation.
 
Can you show me that in code, please? As in, the DLSS 5 code. You seem to know it for sure, like gospel, so I am sure you can show practical evidence for it that isn't just NVIDIA marketing speak. No?
That's precisely how nVidia have stated it works. Why have you got it into your head that that *isn't* how it works?
That aside, what you said and what I did not agree with is this:
"It has no knowledge of the underlying geometry at all, the only inputs are 'what colour is this pixel' and 'what direction and speed is this pixel moving'."
It doesn't just look at pixel colours and motion vectors. That's the only thing we don't agree on, I believe.
Once more....

The inputs are the original raster frame, and the motion vector frame. Nothing else.

Why do you think that isn't true? It makes perfect sense that's how it works.
 
There is a thread on Reddit with screenshots showing it is a frame-buffer filter: it misinterprets elements of the image in ways it wouldn't if it had access to, and was working with, the geometry data. nVidia also talked about the way you can mask parts out to prevent the AI from processing them, which sounds like it is working with a frame buffer.
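
If that's right, the masking they talked about would just be a per-pixel blend between the original frame and the AI-processed one. A purely hypothetical sketch of that idea (not anything from NVIDIA):

    import numpy as np

    def blend_with_mask(frame, ai_frame, mask):
        # mask is 1.0 where the AI output is allowed and 0.0 where the
        # developer has masked the region out (UI elements, faces, etc.).
        return mask[..., None] * ai_frame + (1.0 - mask[..., None]) * frame

    h, w = 270, 480
    frame = np.random.rand(h, w, 3)     # original frame buffer
    ai_frame = np.random.rand(h, w, 3)  # AI-processed frame buffer
    mask = np.ones((h, w))
    mask[:100, :200] = 0.0              # mask one region out entirely
    result = blend_with_mask(frame, ai_frame, mask)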

EDIT: Faces aside, you can get like 90% of the same effect from an intelligent filter fixing Unreal Engine 5's **** lighting/colour-space issues - which, ironically, this filter does fix, and that is a lot of what makes scenes like the screenshot with the fruit pop and look more real.
 
I suggest you read exactly what was actually said, what was stated in that post, and what a straw man is. Then you will not be asking such silly questions.

I'm not convinced that you understand what a strawman argument is.


This is a pure straw man example, as I actually said none of those things. :D

Let's have a little look, shall we?

For context, these are the things that you're now saying do not accurately represent the opinions you've expressed:

GordyR said:
You’ve repeatedly called it AI slop, you’ve repeatedly said that it looks better off, and you’ve repeatedly said that it does not look more realistic on.


And here are a few choice quotes from you...


On realism:

"Instagram-style heavily post-processed photos are realism!" - did you really go there? Amazing indeed. :)

AI hallucinations =/= realism. Instagram-style heavily post-processed photos =/= realism. Light hallucinated by AI that doesn't match the lighting actually simulated by PT =/= realism. It's fake beautification.

Call me crazy but just these two quotes alone suggest to me that you do not think it looks more realistic turned on. Is that correct, or are you now saying that you do think it looks more realistic?



Looks better off:

making things worse is also progress. Just not the one anyone wanted. :)

To me it looks more like a ton of "beautiful" mods one can download for many games. I stay away from those, most just look absolutely fake and unrealistic, because I actually go out and look at people and none of them look like that in reality. :)

Again, just these two quotes of many suggest that you think it looks better turned off. Is that not the case? Or are you now saying that you prefer it turned on?


AI slop:

And then an AI filter on top which does NOT match the simulation of realism - the shadows, light etc. are just very different. And then you call that realism? :)

None of it is real; this AI model is probabilistically "imagining" how light would look in a scene like that, based on the training material of various lit scenes they fed it initially. And that shows: it looks different than actual PT does (i.e. actually calculated light propagation in the scene).


To be honest, most of these quotes suffice as evidence for any of the points. Is your objection purely because you didn't actually say the specific word "slop"?

Are you now saying that you don't think it's AI slop? Is it your opinion that it looks more realistic turned "on" now then? Are you now saying that you think the "on" shots and clips look better?

If the answer isn't a resounding yes to all of those, then it's not a strawman by definition.


I dare you to quote them or you can already save your time and apologise.

Done.

Thank you for showing very clearly to everyone what straw man looks like exactly. :)


Rightio champ...

You know what is actually a good example of a strawman? This, from you:

It's also amazing to me how in almost an instant a lot of people switched from "PT looks the most realistic, it's the true light simulation!" to "PT looks rubbish, AI so much prettier!" - as I've witnessed all over the internet since DLSS 5 was shown.


Let's be very clear here: in a discussion about an emerging rendering technology and its merits and future potential or lack thereof, you've been unnecessarily combative, while absolutely no one else has been. And this latest "I've not said those things" from you is playground levels of dishonesty.

No one cares about a difference of opinion, but at least try to be consistent between posts.

Wow, the sheer level of audacity, dogmatism and arrogance, which makes it sound like "Everyone who doesn't agree with me is wrong, only I am right!", means you lost me and likely a lot of other people on this.

Every accusation is a confession, isn't that the phrase?

Anyway, have you found any comparison shots that you'd like to provide as an example of DLSS 5 looking worse turned on? I haven't seen any examples that make a convincing case yet.
 
Some of it looks pretty good and a lot of it looks flipping awful. Also, pay attention to some of the elements in the background of what you're supposed to be focusing on; you may spot a lot of weird ****

My problem with this is it's just yet another excuse for game devs to be lazy and cheap: just release the game without polishing it, and the user, if they can afford it, can just check a box and make it look exactly the same as every other game, all using the same RTX TikTok filters.....
 
If it is frame-buffer interception, then you can 100% bet that once modders figure this out they will leverage it using spare GPU resources, much like how people got MFG working via one of the two mod variants right now, with good levels of quality among other things.
 
I genuinely don’t think the geometry is being modified in any way at all.

I have no idea who this guy is or what his reputation is like, but there are some good zoomed-in side-by-sides in this video that make it pretty clear that the geometry is identical between DLSS 5 on and off.


The only one he says he doesn't feel 100% certain about is the Grace in the city shot, but if you take a look at the "off" clip in motion you can see that the minor difference appears to be merely that the stills are from different frames, with her mouth slightly opening and closing between them.

No one cares about technically this or technically that; this is just ACTUALLY missing the point... the 15-year-old in Hogwarts looks 36 with DLSS 5 on.
Whatever the ACTUAL reasons for that are, it does not change the facts or make it any more palatable. It's AI slop.
 
No one cares about technically this or technically that; this is just ACTUALLY missing the point...

We would likely need a several-fold increase in GPU power to achieve the same levels of fidelity, lighting and realism using traditional rendering techniques in real-time.

That's why this is an exciting technology. If a developer wants the age to look different, they simply change the geometry, dial down the intensity or modify the settings etc.

Note though, that I still caveat this with the fact that we've yet to see it in fast motion.

the 15-year-old in Hogwarts looks 36 with DLSS 5 on.


In the DLSS 5 off screenshot, the character is completely flat and barely lit.

It looks like a mannequin and I can't make out any kind of potential age whatsoever; just a human shaped blob.


Whatever the ACTUAL reasons for that are, it does not change the facts or make it any more palatable. It's AI slop.

Forget this has anything to do with DLSS or AI and answer this question for me if you wouldn't mind.

Which of the two screenshots below do you think looks more realistic, and which would you say has the better visuals/graphics in the broad, colloquial sense?


[before/after comparison screenshots]


Are you honestly telling me that in the before screenshot you have a clear gauge of the age of the mannequin-like character, so much so that you can perfectly place him at 15, but in the after shot those visuals and the illusion of realism have been completely destroyed for you by the extra detail and better lighting? I mean, if that's really true then I honestly don't know what to say.

I'm beginning to wonder if perhaps the divide is simply due to the Uncanny Valley effect at work:


Perhaps it's just that an awful lot of people are very sensitive to it, and are experiencing a kind of immediate revulsion to the sudden increase in realism.

If that is the case, then maybe the ridiculously overblown and absurdly hyperbolic YouTube community groupthink-like response that I've taken such umbrage with makes a little more sense.
 
Jensen's reply to the OcUK community.

Curious:

"He also stressed that DLSS 5 is not a traditional post-processing effect applied after a frame is rendered. Instead, he described it as a geometry-level system with what NVIDIA calls content-controlled generative AI. Huang added that studios can experiment with different looks, including stylized...

Source: VideoCardz.com
https://videocardz.com/newz/jensen-huang-says-gamers-are-completely-wrong-about-dlss-5-backlash"

The images so far do not indicate a geometry-level system, at least not on the input side - they show interpretation errors that wouldn't happen if it were one - though that doesn't necessarily mean what we are seeing and what will be released are the same thing.
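
To make the distinction concrete: a genuinely geometry-level system would presumably also consume G-buffer data - depth, normals, material IDs - which is exactly the information whose absence would explain those interpretation errors. A purely hypothetical sketch of the difference in inputs (none of this is NVIDIA's code):

    import torch
    import torch.nn as nn

    h, w = 270, 480
    colour  = torch.rand(1, 3, h, w)  # raster frame (RGB)
    motion  = torch.rand(1, 2, h, w)  # motion vectors (x, y)
    depth   = torch.rand(1, 1, h, w)  # G-buffer: depth
    normals = torch.rand(1, 3, h, w)  # G-buffer: surface normals
    mat_id  = torch.rand(1, 1, h, w)  # G-buffer: material IDs

    # Frame-buffer filter: colour + motion vectors only (5 channels).
    screen_space = nn.Conv2d(5, 3, 3, padding=1)
    out_screen = screen_space(torch.cat([colour, motion], dim=1))

    # "Geometry-level" system: the G-buffer as well (10 channels), so
    # e.g. painted-on detail can't be mistaken for real geometry.
    geometry_aware = nn.Conv2d(10, 3, 3, padding=1)
    out_geom = geometry_aware(
        torch.cat([colour, motion, depth, normals, mat_id], dim=1))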
 
CGI and AI = not real and uncanny. As you get closer to real, you just dip further into the uncanny abyss, I guess. Rogue One's CGI actors etc. - our brains know faces and people pretty well, so they are harder to trick. Until one day it's not uncanny, which seems to be the case with AI now - just not with this DLSS 5.0 beta example.
 
I am very excited, because assuming what we’re seeing resembles what we actually end up with (with the strong caveat that it works well in fast motion) then it’s absolutely transformative as a technology from day one, let alone as we see it mature over the years.



These aren’t stylistic disagreements or opinion differences.

This is a counter to the absolutely ridiculous levels of braindead YouTube rage bait we’ve been seeing from so many creators, all mindlessly copying each other with the same phrases and slogans, none of which are grounded whatsoever in reality.



I dislike how Minecraft looks also, but that’s a subjective opinion on art style, not an objective claim on photorealism.

If the yardstick we’re using to measure DLSS 5 is one of realism, definition, light accuracy etc. then there is a clear improvement.

People may dislike the look stylistically, especially if the increase in realism is provoking the “uncanny valley” effect and the psychological discomfort it invokes in some people, but that doesn’t mean it’s not more photorealistic.



Again, this is a conflation of subjective opinions on art style, with things that are objective and measurable in terms of accuracy versus reality.

If you or anyone else truly believe that the off shots look better to your taste, based upon style or whatever reason, that's fine; it's an opinion. But this relentless, mindlessly repeated claim from so many creators, that they look like less realistic slop, is just garbage bandwagon jumping.

Do you have any good examples where you think that the “on” shots look less realistic than the “off” shots? I’ve asked repeatedly and so far no one’s made a convincing case.

If people feel so strongly that this is ugly, unrealistic, AI slop that makes things look worse, then shouldn’t it be fairly easy for them to provide side-by-side shots or clips that clearly demonstrate that to be the case?

I think the fact that people haven’t been forthcoming with examples speaks volumes.

Anyway, for those who are interested, here’s a lot more hands on footage that I’ve not seen elsewhere, that includes some very interesting in motion examples of the technology.


Did you actually read anything I said? I didn't state an opinion on this and now you are arguing with me?

These are my posts in the thread:

My only comment on the tech was that it was "interesting" - also people are going to argue about this for 10000 pages.

Moreover, any early tech has issues - so even if someone highlights issues, why is that a problem? This is an early tech demo, so OFC it has issues. It's not a shipping product. It's pretty much Alpha, or Beta at best.

DLSS1 had issues, as did DLSS2. It was only after people highlighted these issues that DLSS3, DLSS4 and DLSS4.5 arrived. So highlighting issues does not mean you are against the tech, just that you want improvements. That is on top of people's subjective opinions on how they want games to look.



[image: xkcd's "Duty Calls" comic]
 
Overall it's complicated, and I do use DLSS, but I think the AI stuff is actually holding us back in a way. For a long time, new graphics advances were added without compromise, with the next few years spent optimising to get the best result. But maybe about eight years ago we started down the road of adding effects with compromise, e.g. TAA: we gained better effects at a cost to clarity. New techniques were being developed at the time to counter this, but then DLSS, a fancy sharpener, was sold as a quick fix. We stopped optimising the current techniques and jumped to DLSS as the solution. People then started hyping it as better than native, etc. We have now made a couple of jumps that are built on poorly optimised render techniques.
A good example of clarity from good optimisation, versus DLSS on the TAA-based renderer found in most modern games, is Half-Life: Alyx. The game doesn't use DLSS and is noticeably much sharper than any modern game that does. Games that use DLSS are usually blurry, and DLSS makes them appear better, but they will never reach the same clarity as a well-optimised game such as Alyx.
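
By "a fancy sharpener" I mean tricks of the same family as the classic unsharp mask: re-add the detail a blur removed. A toy version, just to illustrate the idea (nothing to do with the actual DLSS internals):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def unsharp_mask(img, sigma=1.5, amount=0.6):
        # Sharpen by re-adding the high-frequency detail lost to a blur.
        blurred = gaussian_filter(img, sigma=(sigma, sigma, 0))
        return np.clip(img + amount * (img - blurred), 0.0, 1.0)

    img = np.random.rand(270, 480, 3)  # stand-in for a TAA-softened frame
    sharpened = unsharp_mask(img)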

How well will that modified TAA hold up if you upscale from 66%... 50%... or 33%?
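
For concreteness, here is what those fractions mean at a 4K output target (roughly the per-axis scales of DLSS's Quality, Performance and Ultra Performance modes, as I understand them):

    # Internal render resolution at a 3840x2160 output per scale factor.
    out_w, out_h = 3840, 2160
    for scale in (0.667, 0.50, 0.333):
        w, h = round(out_w * scale), round(out_h * scale)
        share = (w * h) / (out_w * out_h)
        print(f"{scale:.1%} scale -> {w}x{h} ({share:.0%} of output pixels)")
    # Even at 66.7% the TAA has under half the samples to work with;
    # at 33.3% it has roughly a ninth.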

They do not change - they just have increased clarity/resolution. They are exactly the things that the artist put in, and the same across all platforms when at the same resolution.

The 'generative' changes - the name gives it away - are much more than that. It is actually recreating what it 'thinks' the image should look like. That cannot look the same across platforms, even with identically powerful hardware.

There is a huge difference between upscaling/interpolation and generative manipulation. Whilst some ML upscaling may use AI to 'interpolate' what should be there, and more recent ray reconstruction may slightly alter the reality that the artist intended, this new approach is far more dependent on the specifics of the model and that would not be common between cards.
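
The distinction is easy to demonstrate in miniature: classical interpolation is a pure function of the input image, so any correct implementation produces the identical result, while a generative pass is also a function of the learned weights, so two differently trained models need not agree on the same input. A toy illustration:

    import torch
    import torch.nn.functional as F

    frame = torch.rand(1, 3, 135, 240)

    # Classical upscaling: fully determined by the input and the formula.
    # Any correct bilinear implementation yields the same image.
    up_a = F.interpolate(frame, scale_factor=2, mode="bilinear",
                         align_corners=False)
    up_b = F.interpolate(frame, scale_factor=2, mode="bilinear",
                         align_corners=False)
    print(torch.equal(up_a, up_b))  # True

    # Generative manipulation: the output depends on the model's weights,
    # so two differently initialised/trained models disagree on the exact
    # same input frame.
    model_a = torch.nn.Conv2d(3, 3, 3, padding=1)
    model_b = torch.nn.Conv2d(3, 3, 3, padding=1)
    print(torch.equal(model_a(frame), model_b(frame)))  # False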

I guess that if you are an NVidia shareholder and don't care for game developers to have a mostly consistent platform, this seems like a good thing.
For everyone else, it will be a spiral down to banality and vendor lock-in.

I understand your point and I agree with it. However, can we also agree that maximum settings represent the actual artistic vision?


In other news, apparently the tone mapping is broken: https://www.reddit.com/r/hardware/s/lGzh6s2lFX
 
How well will that modified TAA hold up if you upscale from 66%... 50%... or 33%?
I'm probably being dozy, but I'm unsure what your question is?

Edit: it's much easier to upscale a blurry image and add sharpening; many might even claim the final result is better than native :D I haven't played an upscaled game that has the same clarity as a well-optimised game like Alyx.
 