I would pay 5k for that
Yes, exactly my point! If the information is there in the original image, AI these days can extract it; it can infer light sources in 3D space which aren't in the original image, albeit not always with perfect results. AI can identify surface types and features in an image to a scary degree: find a picture of, say, cars parked on a street and ask AI to tell you the models, colours, etc. of the vehicles in it, and it can often even pin down year-specific models.

Quote: "I agree with your point of running at native with a tuned TAA and being sharper than upscaled DLSS. I agree. DLAA is also native and looks better than lower presets."

It's a bit off topic here, but TAA was designed to be a temporally stable antialiasing, and that's it; certainly not an upscaler. Initially it was designed to use just 1-2 previous frames for temporal stability. Modern games abuse TAA to mask various rendering artefacts and feed it multiple past frames, which causes smearing (blur, if you will). That's not what it was designed to do. There are much-improved versions of it too. The problem is that, for AA alone, NVIDIA in a way created this problem and then offered a solution by tying people to their hardware specifics (tensor cores) for DLAA. There would have been no reason for DLAA to ever exist if TAA (and other forms of AA) were properly implemented in games; alas, here we are. And then (to get back to the topic and my initial point) NVIDIA is at it again, first pushing RT as RTX (tying it to their specific hardware) and now DLSS 5 (again tied to their specific hardware). Nothing they do is open or able to run on any other hardware, and that's exactly why they do it. They constantly create problems and then offer solutions to them, all tied to their hardware. It's brilliant, really; it's the best way to get a total monopoly, and it works, as witnessed on the market these days.
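To make the smearing point concrete, here's a toy sketch of the history accumulation at the core of TAA (my own illustration with numpy, not any engine's or NVIDIA's actual implementation). A small history weight behaves like the original 1-2 frame designs; a large one keeps ghosts of many past frames, which is the smearing described above.

```python
import numpy as np

def taa_accumulate(history, current, history_weight):
    """Blend the (already reprojected) history buffer with the current frame.

    history_weight ~0.5 effectively averages only a couple of frames;
    history_weight ~0.9 retains contributions from many past frames.
    """
    return history_weight * history + (1.0 - history_weight) * current

# A moving white pixel on a black background shows the trade-off.
h, w = 1, 8
frames = []
for x in range(4):                      # the feature moves one pixel per frame
    f = np.zeros((h, w))
    f[0, x] = 1.0
    frames.append(f)

for weight, label in [(0.5, "short history"), (0.9, "long history")]:
    hist = frames[0].copy()
    for f in frames[1:]:
        hist = taa_accumulate(hist, f, weight)
    # Residual brightness left behind at the start pixel = ghosting/smear.
    print(label, "ghost at x=0:", round(float(hist[0, 0]), 3))
```

With weight 0.5 the leftover ghost decays quickly (0.125 after three frames); with weight 0.9 most of the stale value survives (0.729), which is exactly the "feed it multiple past frames" blur.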
Quote: "Because of what I stated already and will state again. And thank you for confirming you have absolutely no evidence of how the system actually works."

You really aren’t being clear at all. The input is literally just the one image of the original frame (which is just pixel colour) and one image where the per-pixel information is motion.
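For what "a colour image plus a motion image" buys you: a minimal 1D sketch (my own illustration, not NVIDIA's pipeline) of how per-pixel motion vectors let you fetch, for every pixel, the matching pixel from the previous frame, which is the reprojection step that feeds any temporal technique.

```python
import numpy as np

def reproject(prev_color, motion):
    """motion[i] = how many pixels the content now at i moved since last frame.

    Subtracting the motion gives the source position in the previous frame;
    clipping handles content that came in from off-screen.
    """
    w = prev_color.shape[0]
    src = np.clip(np.arange(w) - motion, 0, w - 1)
    return prev_color[src]

prev = np.array([9.0, 0.0, 0.0, 0.0])   # bright feature at x=0 last frame
motion = np.array([0, 1, 0, 0])          # the feature moved +1 pixel
warped = reproject(prev, motion)         # feature fetched into x=1
print(warped)
```

Nothing here requires the engine to tag lights or materials; it is pure per-pixel bookkeeping, which is the minimal reading of "colour + motion" as inputs.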
Once more, indeed... The original raster frame is not "just pixel colours"; that's the point I'm making. For an AI model to know what it needs to improve and how exactly, it needs much more information than that. And NVIDIA very clearly listed what it does (texture improvements, eyebrows, eyes, subsurface scattering of the skin, hair, etc.); ergo, it knows where light sources are (to generate proper light, shadows, etc.), what is an eyebrow, what is skin, what is metal, and so on. This is the bit you are just adamant to completely ignore. There's much more in that "black box" of theirs than they are saying.
It makes no sense at all the way you describe it. It might as well predict the future by reading tea leaves.
Quote: "Oh right, I was thinking you were meaning the input needed to be 3D-tagged data."

Nah, nothing devs would have to tag; it's just analysing the frame. Which, BTW, takes quite a bit of computational power too, so I am not surprised it currently requires a separate 5090 to run... I have no clue how they will make it faster than that.
Quote: "It is a bit confusing what is going on with DLSS 5, as nVidia are claiming it is doing one thing, but the screenshots clearly show an absence of some of those things."

Yeah, and because it's a total black box, we will likely never know what exactly it's doing inside. However, what they did describe are the controls they give to devs, and those are pathetic IMHO. NVIDIA calls it "Full detailed, artistic control", but that's absolute marketing BS, because all they give devs is control over intensity, colour grading and masking, not the AI model itself. I reckon devs will have to learn how to "guide" it with the generated input frames and hope it looks as desired, if it's stable at all between gameplays (it likely isn't).
Quote: "You really aren’t being clear at all. The input is literally just the one image of the original frame (which is just pixel colour)"

"Which is just pixel colour"? It's not just pixel colours. It's a 2D flat image consisting of pixels; ergo, each pixel has a specific position (x, y) as data, not just a colour. Those positions matter a lot, and what the neighbours of those pixels are also matters a lot, because AI doesn't look at single pixels and their colours; it looks at patterns and clusters after converting the image to a much smaller latent representation (which is just a bunch of vectors, not pixels).
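The "neighbourhoods, not individual pixels" point can be shown with a toy downsampling step (my own illustration, not a real encoder or NVIDIA's latent space): collapsing an image into a coarse grid of summary values preserves spatial structure, so what survives is where the pixel clusters are, not isolated colour values.

```python
import numpy as np

def average_pool(img, k=2):
    """Collapse each k*k neighbourhood into one value, a crude 'latent'."""
    h, w = img.shape
    return (img[:h - h % k, :w - w % k]
            .reshape(h // k, k, w // k, k)
            .mean(axis=(1, 3)))

# A 4x4 checkerboard of 2x2 blocks: the coarse grid keeps the pattern.
img = np.array([[0., 0., 1., 1.],
                [0., 0., 1., 1.],
                [1., 1., 0., 0.],
                [1., 1., 0., 0.]])
latent = average_pool(img)
print(latent)   # each value summarises a 2x2 neighbourhood
```

Real encoders use learned convolutions rather than averaging, but the principle is the same: position and neighbourhood context are baked into the representation from the start.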
Quote: "I'm sure they will support fine tuning of the model (for larger customers anyway)"

Which DLSS version offered "fine tuning" of the model for ANY customer? None. You can adjust strength, colours and a few other small bits and bobs, but not the model itself, unless NVIDIA offers a few versions of the model (like they do with previous DLSS versions) for devs/players to choose from. But that hasn't been stated yet.
Quote: "I don't see anyone else pushing the boundaries of what's possible in real-time rendering, it's great to see this stuff coming out, and I'll be incorporating the SDK into my work and trying it out as soon as it's available."

It has nothing to do with "real-time rendering". It's a completely different thing, a replacement for it as a target (as NVIDIA's CEO has stated many times over the past few years).
Quote: "As if artists creating a game do not know what settings the game supports? Or maybe, just maybe, they made sure it looks consistent irrelevant of presets used by the user? Consistent doesn't mean identical, to be clear."

To paraphrase you: as if artists creating a game with DLSS do not know how the game looks with it on?
Quote: "To paraphrase you: as if artists creating a game with dlss do not know how the game looks with it on?"

With upscaling and FG? Sure, they do; it's consistent (key word!). With DLSS 5? In its current state, they can guess and hope.
