DLSS 5 preview

Not the same thing, as that is all based upon the lighting that the artists put in place. It may be a difference, but it's not changing the actual geometry/art.

DLSS 5 is allowing just that. Any hardware *could* replicate the path tracing. They could not replicate the same image that DLSS 5 presents because it *makes it up*.
But you do change the geometry, textures and, overall, the art with the usual settings we've had even in raster alone. A game looks and plays differently at low res, low settings and low fps vs top settings, resolution and frame rate.
 
I genuinely don’t think the geometry is being modified in any way at all.

I have no idea who this guy is or what his reputation is like, but there are some good zoomed-in side-by-sides in this video that make it pretty clear that the geometry is identical between DLSS 5 on and off.


The only one he says he doesn’t feel 100% certain on is the Grace in the city shot, but if you take a look at the “off” clip in motion you can see that the minor difference appears to be merely that the stills are taken from different frames, with her mouth slightly opening and closing between them.
 
Overall it's complicated and I do use DLSS, but I think the AI stuff is actually holding us back in a way. For a long time, new graphics advances were added without compromise, and the next few years were spent optimising to get the best result. But maybe about 8 years ago we started down the road of adding effects with compromise, e.g. TAA. We gained better effects at a cost to clarity.

New techniques were being developed at the time to counter this, but then DLSS, a fancy sharpener, was sold as a quick fix. We stopped optimising the current techniques and jumped to DLSS as the solution. People then started hyping it as better than native etc. We have now made a couple of jumps that are built on poorly optimised render techniques.
A good example of the clarity you get from proper optimisation, versus DLSS on the TAA-based render found in most modern games, is Half-Life: Alyx. The game doesn't use DLSS and is noticeably sharper than any modern game that does. Games that use DLSS are usually blurry to begin with; DLSS makes them appear better, but they will never reach the same clarity as a well-optimised game such as Alyx.
 
That's such a sad straw man, mate. :)

I’m not trying to dig at you, you’re welcome to hold whatever opinion you like, but how is it a straw-man argument?

You’ve repeatedly called it AI slop, you’ve repeatedly said that it looks better off, and you’ve repeatedly said that it does not look more realistic on.

So surely that’s, by definition, not a straw man. It’s an accurate representation of your opinions, no?
 
I genuinely don’t think the geometry is being modified in any way at all.

I have no idea who this guy is or what his reputation is like, but there are some good zoomed-in side-by-sides in this video that make it pretty clear that the geometry is identical between DLSS 5 on and off.


The only one he says he doesn’t feel 100% certain on is the Grace in the city shot, but if you take a look at the “off” clip in motion you can see that the minor difference appears to be merely that the stills are taken from different frames, with her mouth slightly opening and closing between them.
It's taking the rendered raster image and a motion-vector image, and building everything off that. It has no knowledge of the underlying geometry at all; the only inputs are 'what colour is this pixel' and 'what direction and speed is this pixel moving'... and the deep learning system is making adjustments based on those inputs and its training data.

It's happening *after* the 3D scene has been projected, so there's no way it can 'alter the geometry'. It *can* alter the pixels such that it looks like the geometry is different, just as ancient techniques like normal maps 'fake' geometry.
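For anyone curious what that pipeline looks like in practice, here's a minimal sketch in Python. The function and model names are hypothetical, purely to illustrate the two screen-space inputs described above; this is not Nvidia's actual API.

```python
import numpy as np

# Hypothetical post-projection pass: the only inputs are the raster colour
# buffer and the per-pixel motion vectors. Geometry has already been
# flattened into pixels by the time this runs.
def enhance_frame(model, color: np.ndarray, motion: np.ndarray) -> np.ndarray:
    """color:  (H, W, 3) raster output -- 'what colour is this pixel'.
    motion: (H, W, 2) screen-space vectors -- 'where is this pixel moving'.
    Returns an (H, W, 3) adjusted frame; no vertices or meshes involved."""
    assert color.shape[:2] == motion.shape[:2]
    x = np.concatenate([color, motion], axis=-1)  # stack into one (H, W, 5) input
    return model(x)
```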
 
I’m not trying to dig at you, you’re welcome to hold whatever opinion you like, but how is it a straw-man argument?

You’ve repeatedly called it AI slop, you’ve repeatedly said that it looks better off, and you’ve repeatedly said that it does not look more realistic on.

So surely that’s, by definition, not a straw man. It’s an accurate representation of your opinions, no?

Dude, seriously, you are coming across as well overexcited with this. That's nice and I am glad you are really hyped up for this! :cool: But when people disagree on what even a "pretty" game is, you can't just expect everyone to agree with you.

Some people love how Minecraft looks, but I dislike it. Some people like the look of Borderlands, others hate it. Some prefer isometric Fallout games over the later 3D versions. Some here obsess about having to play only RT/PT games as they love shiny graphics, but I still play a lot of older games which look like arse.

You are not going to change anybody's opinion by going on a religious crusade about it. We all get caught in this trap online. Once you go past one or two replies, it's not going to change anything if the other person won't concede.

Agree to disagree and move on.
 
I actually like what I've seen in the demos, but I'm skeptical that it will look that good when we actually get to use it in-game.

Also, if it's a feature we can toggle off and on, I don't understand the intensity of the backlash.

I don't like the way DLSS can be used as a crutch by devs to avoid optimizing games, or the way the AI bubble has vacuumed up the wafer supply and backed the world into an economic corner, but I can't deny that I like what I'm seeing in the demos.
 
I’m not trying to dig at you, you’re welcome to hold whatever opinion you like, but how is it a straw-man argument?
I suggest you read exactly what was actually said, what was stated in that post, and what a straw man is. Then you will not be asking such silly questions.
You’ve repeatedly called it AI slop, you’ve repeatedly said that it looks better off
This is a pure straw-man example, as I actually said none of those things. :D I dare you to quote them, or you can save your time and apologise now.
So surely that’s, by definition, not a straw man. It’s an accurate representation of your opinions, no?
Thank you for showing very clearly to everyone what straw man looks like exactly. :)
 
It's taking the rendered raster image and a motion-vector image, and building everything off that. It has no knowledge of the underlying geometry at all; the only inputs are 'what colour is this pixel' and 'what direction and speed is this pixel moving'... and the deep learning system is making adjustments based on those inputs and its training data.
That's not how it works; that's not even consistent with what Nvidia said about how it works. As they described it, it seems to be changing the lighting of the object and the scene: it knows what object it's currently analysing (human, fruit, nature, building etc.), it knows the texture and material (skin, metal etc.), it knows where light sources are supposed to be in the scene, and it knows the motion vectors needed to keep the generated image consistent between frames. Then it "imagines" what the scene and object should be lit like, using the training data it was fed.

The tech itself potentially isn't the problem, but how they trained it seems to be: they seemingly didn't train it on real people, they seem to have fed it AI-generated images most of all, because the output has the very distinct look of that style. I agree it doesn't touch the geometry itself, but geometry isn't that important in games (it never really was); lighting is what gives every 3D game its actual look, and that's what this is changing and what is messing up the image, IMHO. Maybe if they train it better and tweak it better it will look better. Currently it just doesn't look realistic to me at all; it looks wrong and out of place.

I use my 5090 most of all to run various local AI models, including photo and video generators; I see such things daily, I work with them. What Nvidia showed isn't it, to me; much more work is required before it actually looks good. But it has potential, if the performance is sensible (and so far it doesn't seem to be!).
It's happening *after* the 3D scene has been projected, so there's no way it can 'alter the geometry'. It *can* alter the pixels such that it looks like the geometry is different, just as ancient techniques like normal maps 'fake' geometry.
Agreed.
 
But you do change the geometry, textures and, overall, the art with the usual settings we've had even in raster alone. A game looks and plays differently at low res, low settings and low fps vs top settings, resolution and frame rate.
They do not change - they just have increased clarity/resolution. They are exactly the things that the artist put in, and the same across all platforms when at the same resolution.

The 'generative' changes - the name gives it away - are much more than that. It is actually recreating what it 'thinks' the scene should look like. That cannot look the same across platforms, even with identical hardware power.

There is a huge difference between upscaling/interpolation and generative manipulation. Whilst some ML upscaling may use AI to 'interpolate' what should be there, and more recent ray reconstruction may slightly alter the reality that the artist intended, this new approach is far more dependent on the specifics of the model and that would not be common between cards.

I guess that if you are an NVidia shareholder and don't care about game developers having a mostly consistent platform, this seems like a good thing.
For everyone else, it will be a spiral down to banality and vendor lock-in.
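To make that difference concrete, here's a small sketch. Classic upscaling is a pure function of the input pixels, so it is bit-identical whichever card runs it; a generative pass depends on model weights and sampling, so different models (or versions) can't be expected to match. Nearest-neighbour is used below for brevity, and the function name is just illustrative.

```python
import numpy as np

# Deterministic upscaling: the same input always yields the same output,
# regardless of which card runs it -- the 'consistent platform' case.
def upscale_2x(img: np.ndarray) -> np.ndarray:
    # (H, W, C) -> (2H, 2W, C), nearest-neighbour for brevity
    return img.repeat(2, axis=0).repeat(2, axis=1)

# A generative pass, by contrast, is roughly: out = model(img, seed).
# The result depends on the model's weights and the sampling process,
# so it varies between models, versions and potentially vendors.
```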
 
It's exactly how nVidia say it works.
I suggest you read more carefully, and also quote things fully, as what you just did is a complete misunderstanding of what they actually said. The full quote is:
"DLSS 5 inputs the game's color and motion vectors for each frame into the model, anchoring the output in the source 3D content."
That has a completely different meaning from what you described: it talks only about anchoring the output in the scene, not about how it actually changes things or what it sees and analyses for those changes. In their articles they said more about what it DOES, not just how it ANCHORS itself in the scene. Very different things. But granted, they did not reveal many details. Still, knowing how AI models actually generate images (like image2image) helps a lot, as NVIDIA most likely did not reinvent the wheel; they took tech that was already available and repurposed it.
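For anyone who hasn't played with image2image, here's a rough offline sketch using the Hugging Face diffusers library. It's an analogy for the kind of repurposed tech being speculated about, not Nvidia's actual pipeline, and the model ID is just an example.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# image2image: re-render an existing frame, guided by a prompt. 'strength'
# controls how far the model may drift from the source -- loosely analogous
# to staying "anchored in the source 3D content".
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

frame = Image.open("raster_frame.png").convert("RGB")  # a rendered game frame
out = pipe(
    prompt="photorealistic lighting, natural skin",
    image=frame,
    strength=0.3,  # low strength keeps the output close to the input frame
).images[0]
out.save("reimagined_frame.png")
```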
 
Dude,seriously you are coming across as well overexcited with this.

I am very excited because, assuming what we’re seeing resembles what we actually end up with (with the strong caveat that it works well in fast motion), it’s absolutely transformative as a technology from day one, let alone as we see it mature over the years.

That's nice and I am glad you are really hyped up for this! :cool: But when people disagree on what even a "pretty" game is, you can't just expect everyone to agree with you.

These aren’t stylistic disagreements or opinion differences.

This is a counter to the absolutely ridiculous levels of braindead YouTube rage bait we’ve been seeing from so many creators, all mindlessly copying each other with the same phrases and slogans, none of which are grounded whatsoever in reality.

Some people love how Minecraft looks, but I dislike it, but I am not going around telling everyone how wrong they are.

I dislike how Minecraft looks also, but that’s a subjective opinion on art style, not an objective claim on photorealism.

If the yardstick we’re using to measure DLSS 5 is one of realism, definition, light accuracy etc. then there is a clear improvement.

People may dislike the look stylistically, especially if the increase in realism is provoking the “uncanny valley” effect and the psychological discomfort it invokes in some people, but that doesn’t mean it’s not more photorealistic.

Some people like the look of Borderlands, others hate it. Some prefer isometric Fallout games over the later 3D versions. You are not going to change anybody's opinion by going on a religious crusade about it.

Agree to disagree and move on.

Again, this is a conflation of subjective opinions on art style, with things that are objective and measurable in terms of accuracy versus reality.

If you or anyone else truly believes that the off shots look better to your taste, based upon style or whatever reason, that’s fine; it’s an opinion. But this relentless, mindlessly repeated claim from so many creators that the results look like less realistic slop is just garbage bandwagon-jumping.

Do you have any good examples where you think that the “on” shots look less realistic than the “off” shots? I’ve asked repeatedly and so far no one’s made a convincing case.

If people feel so strongly that this is ugly, unrealistic, AI slop that makes things look worse, then shouldn’t it be fairly easy for them to provide side-by-side shots or clips that clearly demonstrate that to be the case?

I think the fact that people haven’t been forthcoming with examples speaks volumes.

Anyway, for those who are interested, here’s a lot more hands-on footage that I’ve not seen elsewhere, which includes some very interesting in-motion examples of the technology.

 
I suggest you read more carefully, and also quote things fully, as what you just did is a complete misunderstanding of what they actually said. The full quote is:
"DLSS 5 inputs the game's color and motion vectors for each frame into the model, anchoring the output in the source 3D content."
That has a completely different meaning from what you described: it talks only about anchoring the output in the scene, not about how it actually changes things or what it sees and analyses for those changes. In their articles they said more about what it DOES, not just how it ANCHORS itself in the scene. Very different things.
I don't think you really understand what you're talking about here to be honest.

nVidia are clear: it's taking two inputs, two images. One is the initial raster-generated frame. One is the motion of each pixel expressed as RGB (a very common technique for encoding spatial data as an image).

The deep learning system uses those inputs to adjust the final output. It doesn't have any deeper knowledge of the scene, lighting etc. beyond those two inputs.
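For reference, the 'motion as RGB' trick mentioned above looks roughly like this; the normalisation range is illustrative.

```python
import numpy as np

# Pack per-pixel motion vectors into an RGB image: x-motion in R,
# y-motion in G, B unused. Displacements are mapped from
# [-max_disp, +max_disp] pixels into the [0, 255] byte range.
def motion_to_rgb(motion: np.ndarray, max_disp: float = 32.0) -> np.ndarray:
    norm = np.clip(motion / (2.0 * max_disp) + 0.5, 0.0, 1.0)  # -> [0, 1]
    rgb = np.zeros((motion.shape[0], motion.shape[1], 3), dtype=np.uint8)
    rgb[..., :2] = (norm * 255.0).astype(np.uint8)
    return rgb
```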
 
It's interesting tech; it works well on environments but less so on faces, where we aren't so easily fooled. It reminds me of the early ray-tracing scenes in the first games that had it (omg thanks, I can play without reflections in cars lol) vs PT today, which looks amazing.

The gamer GPU market is now 5%, tomorrow 3% or less. As long as the AI stuff doesn't burst (it probably needs to), we get AI tech crumbs like DLSS 5.0.

Maybe some games in the future will be a stickman figure made by an indie dev; then you bolt on your AI wrapper and you get AAA PT. You can do stickman-to-character in real time now, btw, with StreamDiffusion + ControlNet; check out the Reddit forums, it's pretty amazing. It's just a shame AI has eaten all the RAM, but when you get into local image/video gen you realise why: it needs loads of RAM, and it's never enough.
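If you want to try the stickman-to-character idea offline, here's a rough sketch with diffusers and an OpenPose ControlNet. The model IDs are examples, and StreamDiffusion's real-time streaming layer is omitted; this is just the pose-conditioned generation step.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image

# A ControlNet trained on pose skeletons constrains the generation to the
# stickman's pose, while the prompt supplies the 'AAA' look.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

stickman = Image.open("stickman_pose.png").convert("RGB")  # the pose skeleton
out = pipe("AAA game character, cinematic lighting", image=stickman).images[0]
out.save("character.png")
```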
 
I don't think you really understand what you're talking about here to be honest.

nVidia are clear: it's taking two inputs, two images. One is the initial raster-generated frame. One is the motion of each pixel expressed as RGB (a very common technique for encoding spatial data as an image).
Sure, and how from that does it know exactly what a human face is, where light sources are supposed to be in the scene, or how to improve e.g. eyebrows or eyes (it has to know what eyebrows or eyes are, which it can't going pixel by pixel!)? It even changes the lighting of the whole scene to a different one from what comes out of PT; that can't be done by looking at the colours of single pixels. It is anchoring itself to pixels, to the general colours of the scene and to motion vectors. It does NOT use just that to generate images; it actually has to look at the whole scene and the objects in it, assign meaning to "skin", "eyes" and "eyebrow", and treat them properly as per the training in the AI model. You look at a few words and take them as gospel, without actually understanding what is needed to do what they showed and described it doing.
The deep learning system uses those inputs to adjust the final output. It doesn't have any deeper knowledge of the scene, lighting etc. beyond those two inputs.
Yes, along with a lot more inputs than just those. Otherwise, I don't see any way for it to be possible. What you describe is like looking at single atoms under an electron microscope and figuring out what the person looks like from that - that's not how reality works.

That said, I know very well that all those models use a probabilistic approach, but this isn't generating random images from noise; it actually uses more data to do it. And because it's a black box (like all DLSS versions are), they might never say exactly how it works, aside from what it takes as input from the devs, and aside from what it takes by itself without devs needing to know about it.
 
Sure, and how from that does it know exactly what a human face is, where light sources are supposed to be in the scene, or how to improve e.g. eyebrows or eyes (it has to know what eyebrows or eyes are, which it can't going pixel by pixel!)? It even changes the lighting of the whole scene to a different one from what comes out of PT; that can't be done by looking at the colours of single pixels. It is anchoring itself to pixels, to the general colours of the scene and to motion vectors. It does NOT use just that to generate images; it actually has to look at the whole scene and the objects in it, assign meaning to "skin", "eyes" and "eyebrow", and treat them properly as per the training in the AI model. You look at a few words and take them as gospel, without actually understanding what is needed to do what they showed and described it doing.
It is looking at the pixel information for the frame as a whole, which is what enables it to infer what the material is and how it should be lit.
Yes, along with a lot more inputs than just those. Otherwise, I don't see any way for it to be possible. What you describe is like looking at single atoms under an electron microscope and figuring out what the person looks like from that - that's not how reality works.
That said, I know very well that all those models use a probabilistic approach, but this isn't generating random images from noise; it actually uses more data to do it. And because it's a black box (like all DLSS versions are), they might never say exactly how it works, aside from what it takes as input from the devs, and aside from what it takes by itself without devs needing to know about it.
No, it doesn't need any more inputs. The original raster image contains all the required information on light sources, materials and the scene.
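A small sketch of why 'two images' isn't the same as 'pixel by pixel': with stacked (here dilated) convolutions, each output pixel's receptive field covers a large chunk of the frame, so frame-wide context like faces, materials and light direction can in principle be learned from screen-space buffers alone. Layer sizes below are arbitrary, purely illustrative.

```python
import torch
import torch.nn as nn

class FramePass(nn.Module):
    """Toy per-frame network: 6 input channels (RGB colour + motion-as-RGB).
    Dilated convs widen the receptive field far beyond a single pixel."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=4, dilation=4), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),  # back to an RGB frame
        )

    def forward(self, x):        # x: (N, 6, H, W)
        return self.net(x)       # -> (N, 3, H, W)

x = torch.randn(1, 6, 270, 480)      # dummy colour+motion buffers
print(FramePass()(x).shape)          # torch.Size([1, 3, 270, 480])
```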
 
These aren’t stylistic disagreements or opinion differences.
Wow, the sheer level of audacity, dogmatism and arrogance, which makes it sound like "Everyone who doesn't agree with me is wrong, only I am right!", means you lost me and likely a lot of other people on this. I am not even going to read the rest of your stuff. To the ignore list you go.
 