
DLSS 5 preview

That's not how AI models evolve - they don't get much more powerful on the same hardware; they grow bigger and require more, faster hardware, and that's mostly how they get more powerful. What works in data centres doesn't work the same way on home GPUs, especially when the GPU also has to run the game and the other DLSS features (upscaling and frame generation) at the same time. Unless one is happy with a future where games with AI in them are only playable online via Nvidia's cloud data centres - then sure, very doable, for a price.
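To put the "all at the same time" problem into rough numbers (every cost below is an illustrative assumption, not a measurement):

```python
# Back-of-envelope frame-time budget at 60 fps (all costs are assumed,
# illustrative numbers, not benchmarks).
target_fps = 60
frame_budget_ms = 1000 / target_fps   # ~16.7 ms per frame

game_render_ms = 10.0   # assumed base render cost
upscaling_ms = 1.0      # assumed DLSS upscaling cost
frame_gen_ms = 2.0      # assumed frame generation cost

left_for_ai_ms = frame_budget_ms - game_render_ms - upscaling_ms - frame_gen_ms
print(f"Left for an extra AI pass: {left_for_ai_ms:.1f} ms")  # ~3.7 ms
```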
It's clear you're simply in this thread to froth at nVidia at this point so I'm done correcting your obvious lack of understanding of the subject at hand. You might want to google knowledge distillation though.
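For anyone who doesn't want to google it: distillation trains a small "student" model to mimic a big "teacher", transferring capability down to cheaper hardware. A minimal sketch of the standard loss (PyTorch; the temperature T and weighting alpha are tunable, and the models are whatever you plug in):

```python
# Minimal sketch of knowledge distillation: the student matches the
# teacher's softened output distribution plus the ground-truth labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: KL divergence against the teacher's softened outputs.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```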
 
Hive mind: It's completely changing the vision of the artists and game developers! I can't stand having this extra toggle in my games. Wahhhh!

Game developers & Artists: Actually it's not. We think it's an incredible leap forward and this gets us much closer to our vision. Also, you've got it completely wrong about how this tech works, and you're just mindlessly regurgitating what some rage-baiting YouTuber has told you by screaming the phrase "AI slop" repeatedly in a mob-like frothing stupor.

Hive mind: Game developers and artists know nothing about their own visions and no one cares.

;) :p

Not only that - we've already gone over this with DLSS when it first released, and look how that turned out.
 
It's clear you're simply in this thread to froth at nVidia at this point so I'm done correcting your obvious lack of understanding of the subject at hand. You might want to google knowledge distillation though.
Rubbish. As my signature shows, I own a 5090 (and before that a 4090) and use it 90% of the time with various AI models, along with working extensively with AI models at work. I know well enough how they work and scale up, thank you very much. That you have no actual arguments and just give up is not my fault, so don't put it on me. :)
 
Rubbish. As my signature shows, I own a 5090 (and before that a 4090) and use it 90% of the time with various AI models, along with working extensively with AI models at work. I know well enough how they work and scale up, thank you very much.
You clearly don't, because you say stuff like this, which is just flat-out wrong:
That's not how AI models evolve - they don't get much more powerful on the same hardware; they grow bigger and require more, faster hardware, and that's mostly how they get more powerful.
The opposite is true. Progress in ML is focused on increasing performance with fewer resources, to make models more economically viable. It's the same reason rendering is moving away from just throwing more transistors, electricity, and heat at the problem. ML improvements are focused on achieving more with less, through distillation and various other optimisations.
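Quantisation is the most obvious example of that trend. A rough sketch of the bytes-per-weight arithmetic (weights only, ignoring activations and runtime overheads):

```python
# Rough memory-per-parameter arithmetic: one way "more with less" shows
# up in practice. Sizes are weights only, ignoring all overheads.
def model_vram_gb(params_billion, bytes_per_param):
    return params_billion * 1e9 * bytes_per_param / 1024**3

for name, bpp in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"7B model at {name}: {model_vram_gb(7, bpp):.1f} GB")
# fp16 ~13.0 GB, int8 ~6.5 GB, int4 ~3.3 GB for the weights alone
```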
 
[Image: morgan-freeman-pointing.gif]
 
Hive mind: It's completely changing the vision of the artists and game developers! I can't stand having this extra toggle in my games. Wahhhh!

Game developers & Artists: Actually it's not. We think it's an incredible leap forward and this gets us much closer to our vision. Also, you've got it completely wrong about how this tech works, and you're just mindlessly regurgitating what some rage-baiting YouTuber has told you by screaming the phrase "AI slop" repeatedly in a mob-like frothing stupor.

Hive mind: Game developers and artists know nothing about their own visions and no one cares.

;) :p

This is exactly what's going on here.
 
You clearly don't, because you say stuff like this, which is just flat-out wrong:

The opposite is true. Progress in ML is focused on increasing performance with fewer resources, to make models more economically viable.
In data centres, not on home GPUs. And yes, you can look at the Qwen3.5 LLM for example, which implemented a bunch of new solutions for faster inference, lower memory use, etc. But those changed very little on my 5090, because again - data centres aren't home GPUs, and on such relatively slow devices the gains are usually too small to notice. It's like comparing a big wholesale corporation to a small local shop - very different scales. And most of the time the gains are quickly offset by increased complexity (more parameters): you get bigger, smarter models that require more hardware but are cheaper per token. That doesn't mean you can make models both smarter and easier to run on small devices, as that's not where the main innovation is happening right now. There's a limit to how much you can optimise this tech, and there are always compromises.
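Rough numbers on that scale gap (32 GB is the RTX 5090's VRAM; the model sizes and the 4-bit assumption are illustrative):

```python
# Does a model's weight file even fit on a single home GPU?
VRAM_GB = 32  # RTX 5090

def weights_gb(params_billion, bytes_per_param):
    return params_billion * 1e9 * bytes_per_param / 1024**3

for params in (7, 70, 400):
    need = weights_gb(params, 0.5)  # aggressive 4-bit quantisation
    print(f"{params}B @ int4: {need:6.1f} GB -> fits: {need < VRAM_GB}")
# 7B fits easily, 70B is already just over, 400B isn't remotely close.
```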
It's the same reason rendering is moving away from just throwing more transistors, electricity, and heat at the problem.
Not by choice - we simply hit a physics wall there; otherwise that's where it would still be moving.
ML improvements are focused on achieving more with less, through distillation and various other optimisations.
Yes, but also by increasing the number of parameters to make them "smarter". And in the case of Nvidia's image generator, they can only really shrink it by pruning, which makes it generally dumber. Ergo, by the time they manage to get it running on a single GPU fast enough alongside the game, things might get worse, not better. Though what people mostly dislike right now is simply the effect of very biased training data.
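For reference, pruning in its simplest (magnitude) form looks like this - a minimal PyTorch sketch, with a single Linear layer standing in for part of a model:

```python
# Minimal sketch of magnitude pruning (the "smaller but dumber" trade-off):
# zero out the smallest-magnitude weights of a layer.
import torch
import torch.nn.utils.prune as prune

layer = torch.nn.Linear(512, 512)  # stand-in for part of a model
prune.l1_unstructured(layer, name="weight", amount=0.5)  # drop 50% of weights

sparsity = (layer.weight == 0).float().mean().item()
print(f"Sparsity after pruning: {sparsity:.0%}")  # ~50%
```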
 

Well, as expected, it seems to be nothing more than a glorified AI filter on top of the game's frames, with close to zero real control for game devs, aside from colours, alpha blending (intensity) and masking. It doesn't even replace lighting properly - it still requires proper path tracing for the best effect, so there's no FPS speed-up at all. And that comes straight from NVIDIA's "mouth". Plus, obviously very biased training data, clearly focused on pro photos from social media (or whatever else NVIDIA was able to scrape from the internet, like all the AI companies tend to do).
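In other words, the dev-facing control described above amounts to roughly this kind of blend (a hypothetical NumPy illustration of the intensity and masking knobs, not NVIDIA's actual API):

```python
# Hypothetical frame-level blend: mix an AI-generated frame over the
# rendered one with a per-pixel mask and a global intensity slider.
import numpy as np

def blend(rendered, generated, mask, intensity=0.8):
    # mask: HxW in [0, 1] (e.g. to exclude UI or specific characters)
    # intensity: the global alpha a dev might expose as a slider
    a = intensity * mask[..., None]
    return a * generated + (1.0 - a) * rendered

rendered = np.random.rand(1080, 1920, 3).astype(np.float32)
generated = np.random.rand(1080, 1920, 3).astype(np.float32)
mask = np.ones((1080, 1920), dtype=np.float32)  # apply everywhere
out = blend(rendered, generated, mask, intensity=0.5)
```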
Also, Hideo Kojima and quite a few other devs and artists from various studios have voiced their dislike of DLSS 5 (often calling it AI slop that destroys the artists' intentions).

Also, Jensen Huang keeps directly contradicting the official materials NVIDIA itself has posted, which adds a lot to the confusion. NVIDIA's official claim is as Daniel showed in his video, whilst NVIDIA's CEO claims:
“They’re completely wrong… you can fine tune the generative AI to your artistic style… if you want cartoon, toon shader, made of glass… it’s up to you. It’s not post processing at the frame level. It’s generative control at the geometry level. This is very different than generative AI. It’s content control generative AI.”

So either the engineers showed J.H. something different from what NVIDIA showed the public (and then officially posted details about), or he has no clue what he's talking about.
 
Hideo Kojima retweeted this:

[Image: screenshot of Hideo Kojima reposting a tweet taking a dig at Nvidia DLSS 5]



Some more commentary from a developer on the same game:


As if Nvidia weren't already facing plenty of criticism for its DLSS 5 showcase from earlier this week, Mike York, a veteran animator on a number of major games like God of War Ragnarök and Death Stranding 2: On the Beach, has offered his own thoughts on the technology. In a video reacting to Digital Foundry's coverage of DLSS 5, York analysed one of the major examples used by Nvidia: Resident Evil Requiem protagonist Grace Ashcroft.


On seeing Grace's transformation for the first time, York seemed visibly dumbstruck, saying “Whoa, hold on” as he analysed the changes more carefully. He went on to note that the changes made by DLSS 5 are quite radical rather than simply upgraded lighting. Instead of mere improvements, he referred to it as being “like a complete AI re-render.”


“No, no, no, no, no, no, no, no, no, no,” he exclaimed. “No. This isn’t just some lighting, dude. What the f–… I’m telling you, this is like a complete AI re-render.”


“Who even is that? That’s a different girl,” York continued. “You know why I can tell […] – look, her eyes are no longer looking, like, correctly. That one eye is looking over here, and one eye is looking there. And it rendered the eyes differently. And you can also tell that it’s… how do I say this… it has somehow put wrinkles into her lips that weren’t there before… it’s added all kinds of details that were not there before.”

I can surmise the people working at Kojima Productions are not big fans so far!

 
I’m not a big fan of that weird walking simulator

Lol. Same here.

Sure, great faces can be done. But you need the budget and the game type/engine.

I am glad people are losing their **** as this will force Nvidia to improve it, and to do so faster.

I still like that it is coming and will still use it on some games, especially ones I play more than once. As I said before, it's better to have options. No one is forced to use it.

Devs will have to be very careful about how they implement this, or it could lead to a lot of bad press. I personally would wait and release it 3 months after a game launches, unless it is fantastic and almost flawless from the get-go.

In my head it makes a lot of sense for games that have been out for a long time like Starfield and Hogwarts etc.

Apart from that, maybe indie devs with much smaller budgets - a tool for them in the future, as they would never get Death Stranding 2 quality faces to begin with.
 
So will you need a 5090 for this? From what I've heard, so far it's been done using two of them, lol. I don't think I'll be buying much above a 5070/6070/7070 Ti class GPU in future. Just from what I've seen so far, I've been impressed, I must say. However, I understand the concerns that this may be taking artistic control away from the developers - if that's how it works. As for us, if you don't like it, just don't turn it on. It'll be interesting, though, to see how much it hurts performance and what class of card it requires. I come from the ZX Spectrum generation, so gameplay (sadly lacking these days) rather than fancy graphics is what matters to me. Still, it's nice to see these new technologies.
 
The Death Stranding comparison is interesting. There's not a lot of artistry involved with the characters: they're not the product of an artist (primarily, at least); they're actual scans of actual people (well-known ones at that). Not denying they look good and are very effective, mind.

What I would be more interested in seeing is whether DLSS 5 can fix the issues with facial animation: get rid of the mannequin effect and make the lips and all the incidental little muscle movements feel more lifelike, because no one has really cracked that (Naughty Dog has come closest).
 