DLSS 5 preview

 
If the information is there in the original image, AI these days can extract it, i.e. it can infer light sources in 3D space which aren't explicitly in the original image - albeit not always with perfect results. AI can identify surface types and features in an image to a scary degree - find a picture, say, of cars parked on a street and ask AI to tell you the models, colours, etc. of the vehicles in the picture - it can often even get the year-specific model.
Yes, exactly my point! :)
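For anyone who wants to try the "read the cars out of a photo" bit above, this is roughly what off-the-shelf zero-shot image classification looks like with the Hugging Face transformers library. The image file and candidate labels below are made up for illustration, and this is obviously not what DLSS does internally:

```python
# Rough sketch: zero-shot attribute extraction from a street photo with CLIP.
# The image path and candidate labels are placeholders, purely illustrative.
from transformers import pipeline

classifier = pipeline("zero-shot-image-classification",
                      model="openai/clip-vit-base-patch32")

labels = ["red hatchback", "silver SUV", "black saloon", "white pickup truck"]
results = classifier("street_scene.jpg", candidate_labels=labels)

for r in results:
    print(f"{r['label']}: {r['score']:.2f}")
```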
 
Yes, exactly my point! :)

Oh right, I was thinking you meant the input needed to be 3D-tagged data.

It is a bit confusing what is going on with DLSS 5 as nVidia are claiming it is doing one thing, but the screenshots clearly show an absence of some of those things.
 
I agree with your point that running at native with a tuned TAA can be sharper than upscaled DLSS. DLAA is also native and looks better than the lower presets.
It's a bit off topic here, but TAA was designed to be a temporally stable antialiasing, and that's it - certainly not an upscaler. Initially it was designed to use just 1-2 previous frames for temporal stability. Modern games abuse TAA to mask various rendering artefacts and feed it multiple past frames, which causes smearing (blur, if you will). That's not what it was designed to do, and there are much-improved versions of it too. The problem is that, for AA alone, NVIDIA in a way created this problem and then offered the solution by tying people to their hardware specifics (tensor cores) for DLAA. There was no reason for DLAA to ever exist if TAA (and other forms of AA) were properly implemented in games, but alas, here we are. And then (to get back to the topic and my initial point) NVIDIA is at it again: first pushing RT as RTX (tied to their specific hardware) and now DLSS 5 (again tied to their specific hardware). None of what they do is open or able to run on other hardware, and that's exactly why they do it. They constantly create problems and then offer solutions for them - but all tied to their hardware. It's brilliant really, the best way to get a total monopoly, and it works, as witnessed on the market these days.
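For reference, the core of that "temporally stable" part is just an exponential blend of the current frame with a history buffer reprojected along the motion vectors; the longer the effective history (the smaller the blend weight), the more stable but also the more smeared the result. A rough numpy sketch, not any engine's actual implementation:

```python
import numpy as np

def taa_accumulate(current, history, motion, alpha=0.1):
    """One TAA-style accumulation step (toy version).

    current: HxWx3 colour of the newly rendered frame
    history: HxWx3 accumulated colour from previous frames
    motion:  HxWx2 per-pixel screen-space motion in pixels
    alpha:   weight of the current frame; a smaller alpha means a longer
             effective history - more temporal stability, more smearing.
    """
    h, w, _ = current.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Where was this pixel last frame? (nearest-neighbour reprojection)
    src_x = np.clip((xs - motion[..., 0]).round().astype(int), 0, w - 1)
    src_y = np.clip((ys - motion[..., 1]).round().astype(int), 0, h - 1)
    reprojected = history[src_y, src_x]
    # Exponential moving average over past frames.
    return alpha * current + (1.0 - alpha) * reprojected
```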
 
Because of what I stated already and will state again. And thank you for confirming you have absolutely no evidence of how the system actually works.

Once more indeed... The original raster frame is not "just pixel colours" - that's the point I'm making. For an AI model to know what it needs to improve and how exactly, it needs much more information than that. And Nvidia very clearly listed what it does (texture improvements, eyebrows, eyes, subsurface scattering of the skin, hair, etc.) - ergo it knows where the light sources are (to generate proper light, shadows, etc.), what is an eyebrow, what is skin, what is metal, etc. This is the bit you are just determined to completely ignore. There's much more in that "black box" of theirs than they are saying.

It makes no sense at all the way you describe it. It might as well predict the future by reading tea leaves.
You really aren’t being clear at all. The input is literally just the one image of the original frame (which is just pixel colour) and one image where each pixel encodes motion.

No other inputs whatsoever. No additional material information, no additional lighting information, no tags to label what is an eyebrow, what is skin, what is metal.

The deep learning system infers everything purely from those two input frames: just colour and motion.
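To make that concrete, the claim is essentially that the whole interface is two image-shaped tensors and nothing else. A toy PyTorch sketch of just the input shapes - the network here is a meaningless stand-in, nothing to do with Nvidia's actual model:

```python
import torch
import torch.nn as nn

# Hypothetical illustration of "colour + motion in, enhanced colour out".
# The network is a meaningless stand-in; only the input shapes matter here.
colour = torch.rand(1, 3, 540, 960)   # RGB of the rendered frame
motion = torch.rand(1, 2, 540, 960)   # per-pixel screen-space motion (x, y)

enhancer = nn.Sequential(             # stand-in for the "black box"
    nn.Conv2d(5, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)

enhanced = enhancer(torch.cat([colour, motion], dim=1))
print(enhanced.shape)                 # torch.Size([1, 3, 540, 960])
```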
 
Oh right, I was thinking you meant the input needed to be 3D-tagged data.
Nah, not anything devs would have to tag, just analysing the frame. Which, BTW, takes quite a bit of computational power too, so I am not surprised it currently requires a separate 5090 to run... I have no clue how they will make it faster than that.
It is a bit confusing what is going on with DLSS 5 as nVidia are claiming it is doing one thing, but the screenshots clearly show an absence of some of those things.
Yeah, and because it's a total black box, we will likely never know what exactly it's doing inside. However, what they did describe are the controls they give to devs... and those are pathetic IMHO. NVIDIA calls it "Full detailed, artistic control" - but that's absolute marketing BS, because all they give devs is control over the intensity, colour grading and masking, and not the AI model itself at all. I reckon devs will have to learn how to "guide" it with the generated input frames and hope it looks as desired - if it's stable at all between playthroughs (it likely isn't).
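To illustrate why "intensity, colour grading and masking" is such a thin level of control: it amounts to blending the AI output back over the original frame after the fact, rather than steering the model itself. A hypothetical sketch of that kind of post-hoc control (none of this is Nvidia's actual API):

```python
import numpy as np

def apply_dev_controls(original, ai_output, intensity=0.7, mask=None):
    """Post-hoc blend of an AI-enhanced frame over the original raster frame.

    intensity: 0.0 = original frame untouched, 1.0 = full AI output.
    mask:      optional HxW array in [0, 1] limiting where the effect lands
               (e.g. faces only). Names and behaviour are hypothetical,
               not Nvidia's actual developer controls.
    """
    weight = intensity if mask is None else intensity * mask[..., None]
    return (1.0 - weight) * original + weight * ai_output
```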
 
You really aren’t being clear at all. The input is literally just the one image of the original frame (which is just pixel colour)
"which is just pixel colour" - it's not just pixel colours. It's a 2D flat image consisting of pixels, ergo, each pixel has specific position (x,y) and not just colour as data. And those positions matter a lot, what the neighbours are of those pixels also matter a lot, as AI doesn't look at single pixels and their colours, it looks at patterns and clusters after converting it to a much smaller latent image (which is just bunch of complex vectors and not pixels).

So yes, it could be "one frame", but it's NOT "just pixel colour". And this is how modern AI image generators work: from those vectors it anchors itself to the motion vectors and pixels and starts generating a NEW image to overlay the old one, by figuring out from the frame where the light sources should be (imprecisely, though!), what is hair, what is skin, what is an eye, etc., and how those usually look when lit from that direction with that colour of light. And so we get DLSS 5 - a newly generated image, overlaid on top of the original frame, the way the AI "imagines" it should look based on the training data it was fed. And as a result we get a fancy, dolled-up character in grief looking as if she's going on a date, because devs do NOT have control over the AI model's hallucinations.
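For what it's worth, the "much smaller latent image" idea is easy to see in code: a convolutional encoder collapses patches of pixels (positions and neighbourhoods included) into a far smaller grid of feature vectors, which is what a generator then works on. A toy sketch, not any shipping model:

```python
import torch
import torch.nn as nn

# Toy encoder: a 2D frame of pixels -> a much smaller grid of latent vectors.
# Spatial positions and neighbourhoods survive the downsampling; isolated
# pixel colours on their own do not. Purely illustrative, not a DLSS component.
encoder = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=4, stride=4), nn.ReLU(),    # 1/4 resolution
    nn.Conv2d(64, 256, kernel_size=4, stride=4), nn.ReLU(),  # 1/16 resolution
)

frame = torch.rand(1, 3, 1024, 1024)
latent = encoder(frame)
print(latent.shape)  # torch.Size([1, 256, 64, 64]) - 64x64 vectors of size 256
```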
 
I'm sure they will support fine tuning of the model (for larger customers anyway), and/or provide other means to customise the desired output, such as reference styles etc.

I don't see anyone else pushing the boundaries of what's possible in real-time rendering; it's great to see this stuff coming out, and I'll be incorporating the SDK into my work and trying it out as soon as it's available.
 
I'm sure they will support fine tuning of the model (for larger customers anyway)
Which DLSS version offered "fine tuning" of the model for ANY customer? None. You can adjust strength, colours and a few other small bits and bobs, but not the model itself - unless NVIDIA offers a few versions of the model (like they do with previous DLSS versions) for devs/players to choose from. But that hasn't been stated yet.

I don't see anyone else pushing the boundaries of what's possible in real-time rendering; it's great to see this stuff coming out, and I'll be incorporating the SDK into my work and trying it out as soon as it's available.
It has nothing to do with "real-time rendering". It's a completely different thing, a replacement for it as the target (something NVIDIA's CEO has stated many times over the past few years).
 
As if artists creating a game do not know what settings the game supports? Or maybe, just maybe, they made sure it looks consistent regardless of the presets used by the user? Consistent doesn't mean identical, to be clear.
To paraphrase you: as if artists creating a game with DLSS do not know how the game looks with it on?
 
It's a bit off topic here, but TAA was designed to be a temporally stable antialiasing, and that's it - certainly not an upscaler. Initially it was designed to use just 1-2 previous frames for temporal stability. Modern games abuse TAA to mask various rendering artefacts and feed it multiple past frames, which causes smearing (blur, if you will). That's not what it was designed to do, and there are much-improved versions of it too. The problem is that, for AA alone, NVIDIA in a way created this problem and then offered the solution by tying people to their hardware specifics (tensor cores) for DLAA. There was no reason for DLAA to ever exist if TAA (and other forms of AA) were properly implemented in games, but alas, here we are. And then (to get back to the topic and my initial point) NVIDIA is at it again: first pushing RT as RTX (tied to their specific hardware) and now DLSS 5 (again tied to their specific hardware). None of what they do is open or able to run on other hardware, and that's exactly why they do it. They constantly create problems and then offer solutions for them - but all tied to their hardware. It's brilliant really, the best way to get a total monopoly, and it works, as witnessed on the market these days.

What's stopping AMD or Intel from coming up with their own, open-source, better solutions?

With upscaling and FG? Sure, they do - it's consistent (key word!). With DLSS 5? In its current state they can only guess and hope. :)

Does it look different on other PCs compared to what they see in their development? From what I've seen, no. So if they ship something, it is the developer's own artistic view.
 
What's stopping AMD or Intel from coming up with their own, open-source, better solutions?
The huge backlash NVIDIA got from their approach, so far. But they will do it anyway - PlayStation has already mentioned it too. Because NVIDIA is the market leader by far, and if they push something, all the other corpos just follow, regardless of how stupid the idea is in the first place.
Does it look different on other PCs compared to what they see in their development? From what I've seen, no. So if they ship something, it is the developer's own artistic view.
It remains to be seen, but what you don't seem to understand is how AI models that generate images are trained and used. Generating a 1080p image of normal good quality on a 5090 takes seconds (sometimes many seconds). Not FPS - seconds per frame. Plus a ton of vRAM. Those are the good-quality models. NVIDIA had to prune their model (as in, remove most of the data from it) to cut down vRAM use and increase performance, and even then it required a second 5090 just to run. When they say they will optimise it, the only way to optimise it is to prune it down even more. Ergo, remove even more data, or "intelligence/creativity" (to simplify it), and by that force it to run on total averages of human faces, materials, etc. In effect, it might become more consistent but end up looking mostly the same regardless of the game. In every single game the textures could look almost the same, as could the materials, the approach to lighting a scene, the faces, make-up, eyes, etc. And those models are almost never trained on photos of normal people, but on tons of data "stolen" from Instagram and other social media, where you mostly get heavily filtered, postprocessed photos of supermodel-like people. That is the core of the problem here - completely unrealistic training data, which will then be pruned down to the most generic possible averages, just so it can run at 30 FPS (I highly doubt it will even run at 60 FPS; this is why they force FG to always be on with it) on a 5090. And they call that progress and ultra-realism. :)
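On the pruning point, for anyone unfamiliar: removing weights is a standard way to shrink a network, and PyTorch ships basic utilities for it. A minimal magnitude-pruning sketch (purely illustrative; this says nothing about how Nvidia actually shrinks their model):

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Minimal magnitude-pruning sketch: zero out the smallest 50% of weights in
# every conv layer. A real deployment would then exploit that sparsity to
# save compute and memory - at the cost of model capacity.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)

for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the zeroed weights in permanently
```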
 
The huge backlash NVIDIA got from their approach, so far. But they will do it anyway - PlayStation has already mentioned it too. Because NVIDIA is the market leader by far, and if they push something, all the other corpos just follow, regardless of how stupid the idea is in the first place.

What backlash is there when you come up with a TAA-like solution that works better than the existing one, or a replacement for DLAA, or a way to run RT/PT more efficiently, etc.? RT got into a DX version, and so, it seems, will other pieces of nVIDIA's tech, just like Mantle got into Vulkan and/or inspired DX12.

It remains to be seen, but what you don't seem to understand is how AI models that generate images are trained and used. Generating a 1080p image of normal good quality on a 5090 takes seconds (sometimes many seconds). Not FPS - seconds per frame. Plus a ton of vRAM. Those are the good-quality models. NVIDIA had to prune their model (as in, remove most of the data from it) to cut down vRAM use and increase performance, and even then it required a second 5090 just to run. When they say they will optimise it, the only way to optimise it is to prune it down even more. Ergo, remove even more data, or "intelligence/creativity" (to simplify it), and by that force it to run on total averages of human faces, materials, etc. In effect, it might become more consistent but end up looking mostly the same regardless of the game. In every single game the textures could look almost the same, as could the materials, the approach to lighting a scene, the faces, make-up, eyes, etc. And those models are almost never trained on photos of normal people, but on tons of data "stolen" from Instagram and other social media, where you mostly get heavily filtered, postprocessed photos of supermodel-like people. That is the core of the problem here - completely unrealistic training data, which will then be pruned down to the most generic possible averages, just so it can run at 30 FPS (I highly doubt it will even run at 60 FPS; this is why they force FG to always be on with it) on a 5090. And they call that progress and ultra-realism.

I understand how they work. In the examples presented, the enhanced models were based on the original one and looked alike (well, an Ultra version vs. a Medium or Low settings version, so to speak). How it will run in the end and how stable it will look (I'm skeptical about that myself) remains to be seen.

With that said, it's not like game characters aren't already "beautified" themselves.
 