DLSS 5 preview

It will eventually be used to good effect. Bin off the face changes and keep it just for the light and shadow enhancements, as those DO look good.

My main question is what the frame render time impact will be. It's not changing assets; it's layering on top of existing assets, so there has to be a render-time cost for everything in the output chain to stay in sync. Average latency is already around 33ms once you factor in modern tech like FG, PT etc. The Nvidia demo shown by DF had them using a controller, which is the go-to input method when you don't want to feel the extra latency of a slower rendering pipeline, and is why consoles feel fine even at 30fps.
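To make the frame-time worry concrete, here's a back-of-envelope sketch. All the numbers are illustrative assumptions, not measured DLSS figures: I'm guessing a few milliseconds for a full-screen neural pass, and treating frame generation as adding roughly one extra frame of queueing delay while it waits for the next real frame to interpolate against.

```python
def frame_time_ms(fps: float) -> float:
    """Render budget per frame in milliseconds."""
    return 1000.0 / fps

def end_to_end_latency_ms(base_fps: float, neural_pass_ms: float,
                          frame_gen: bool) -> float:
    """Very rough pipeline latency: one rendered frame's cost plus the
    neural pass, plus ~one extra frame of delay if frame generation
    holds a frame back for interpolation. Illustrative model only."""
    t = frame_time_ms(base_fps) + neural_pass_ms
    if frame_gen:
        t += frame_time_ms(base_fps)  # interpolated frame waits on the next real one
    return t

# 60fps base render, hypothetical 3ms neural pass, FG on:
print(round(end_to_end_latency_ms(60.0, 3.0, frame_gen=True), 1))  # 36.3
```

Even with made-up inputs, the shape of the result lands near the 33ms ballpark quoted above, which is exactly the range where a controller hides the delay and a mouse doesn't.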
 
Honestly not sure that's as bad a thing as you think it is. The 70s probably wasn't the best era for gaming, but I'd certainly take being stuck with 80s and 90s gaming over today.

I genuinely don’t understand this point of view.

You can still play Pong if you want. You can still play games from the 80s and 90s if you want. Nothing about modern gaming has erased that.

So why would you prefer to have only 80s and 90s games available, instead of having access to the 80s, 90s, 00s, 10s, and 20s as we do now?

More choice doesn’t remove the old. It adds to it.

Actually pushing the limits of the available hardware and innovating with what could be created is far preferable to whatever today looks like.

But that’s exactly what’s happening.

Tech like this is hardware being pushed to its limits. It’s just being pushed in a different direction, towards reconstruction, inference, and smarter rendering rather than brute force alone.

In the 90s, innovation meant fixed-function pipelines and early hardware acceleration. Today, it means neural rendering, frame generation, and AI-assisted reconstruction. It’s still engineers squeezing more out of silicon, just with different tools.

Drawing a line and saying "this counts as innovation, but that doesn’t" feels, frankly, quite arbitrary.

Adding "features" to sell new hardware whilst simultaneously lessening the need for good developers, just so that publishers can push out AI generated slop faster. No thanks.

I don’t see how this lessens the need for good developers at all.

If anything, it raises the ceiling. You still need strong art direction, strong design, and strong engineering, otherwise you just get sharper-looking bad games.

Technology has always been used to sell new hardware. That was true of 3D acceleration, programmable shaders, multi-core CPUs, ray tracing, and now AI-assisted rendering.

I’ve been gaming since around 1989, and I’ve seen this exact pattern play out with every meaningful technological advance. When we moved from sprites to polygons, people said it lost the magic. When fixed-function pipelines gave way to programmable shaders, it was "too artificial." When deferred rendering, temporal AA, and ray tracing arrived, they were dismissed as gimmicks or marketing fluff.

But games have always been simulations. They’re not reality; they’re mathematical approximations of it. Every generation has simply found smarter ways to decide which pixels end up on the screen.

Rasterisation was an approximation. Shadow maps were an approximation. Screen-space reflections are approximations. Even real-time ray tracing is an approximation.

So why are we suddenly nostalgic for older algorithms that placed pixels less convincingly, while treating newer ones that do a far more convincing job of it, as somehow illegitimate?
 
Last edited:
People like complaining. It gives their lives meaning.
The hand-wringing over 'fake frames' and stuff makes me laugh.

None of it is real! It's all just lots of maths and numbers to try and set the colour of each pixel to something that looks good given the frame time budget. Absent some miraculous leap forward in transistor density, just throwing more and more raster units at the problem isn't scalable or efficient.
 
So why would you prefer to have only 80s and 90s games available, instead of having access to the 80s, 90s, 00s, 10s, and 20s as we do now?
You're right: it's not that I'd prefer to have only 80s and 90s games available. I think it's actually that I'd rather go back and live in the 90s. Again, probably just a case of rose-tinted glasses, but every game back then felt largely new.

More choice doesn’t remove the old. It adds to it.
It may not remove it, but it certainly devalues it with the endless stream of remasters, remakes and generally derivative nonsense.

But that’s exactly what’s happening.

Tech like this is hardware being pushed to its limits. It’s just being pushed in a different direction, towards reconstruction, inference, and smarter rendering rather than brute force alone.
There are no limits being pushed with this, though. A new "feature" gets rolled out, we get a couple of tech demos, and then it's forgotten about when the next feature is pushed out with next week's new hardware.

In the past we had longer product cycles (particularly in the console space), so by the end of a "generation" you'd get the best looking games and best performing games as developers optimised every inch for the hardware they had. New generations of hardware generally brought new types of games - mechanics that were now possible with more performance, or new types of hardware.


I don’t see how this lessens the need for good developers at all.

If anything, it raises the ceiling. You still need strong art direction, strong design, and strong engineering, otherwise you just get sharper-looking bad games.
Not sure why you need good artists, when whatever they've painstakingly created is overpainted with AI filters :confused:

I’ve been gaming since around 1989, and I’ve seen this exact pattern play out with every meaningful technological advance. When we moved from sprites to polygons, people said it lost the magic. When fixed-function pipelines gave way to programmable shaders, it was "too artificial." When deferred rendering, temporal AA, and ray tracing arrived, they were dismissed as gimmicks or marketing fluff.
Because most of those weren't gimmicks. Moving from sprites to polygons allowed games that weren't previously possible (or certainly not in a convincing or performant way), programmable shaders allowed a huge step forward in realism and again effects that weren't otherwise possible or practical.

The others mentioned are all something of a nothing burger. Even the much-vaunted ray tracing has, what, a handful of games that really make extensive use of it, and AFAIK none where more accurate lighting has added new gameplay mechanics.

So why are we suddenly nostalgic for older algorithms that placed pixels less convincingly, while treating newer ones that do a far more convincing job of it, as somehow illegitimate?
If the DLSS 5 screenshots are your idea of far more convincing, then I'll take the older algorithms that are less convincing due to their limitations any day :D
 
Sure this is early preview tech, but I can't un-see the generic AI slop boilerplate faces which underpin the enhancements and that kind of ruins the immersion in a game.

Once you've seen it in a few pictures people have generated from AI you instantly recognise it.
 
I would never go back to gaming in the 80s and 90s. I was just chatting to my mate whilst playing ARC Raiders last night, both of us nearly 50 and so pleased we didn't give it up as a hobby. Great time to be alive, tech-wise.

I remember early online gaming around 2003 when Star Wars Galaxies came out. It was painful. Most don't realise how good we have it now.
 
You're right: it's not that I'd prefer to have only 80s and 90s games available. I think it's actually that I'd rather go back and live in the 90s. Again, probably just a case of rose-tinted glasses, but every game back then felt largely new.

The 90s absolutely did feel special. Entire genres were emerging. 3D worlds were new. Online multiplayer was new. Even basic lighting tricks felt groundbreaking because we had not seen them before.

Part of that magic was the technology, but part of it was simply that we were encountering those ideas for the first time. It is very hard to recreate that sense of discovery once the foundations of modern game design are already established.


It may not remove it, but it certainly devalues it with the endless stream of remasters, remakes and generally derivative nonsense.

I understand the fatigue with remasters and safe sequels.

But that feels like a publishing and risk appetite issue, not a rendering technology issue. Big budgets tend to produce safer bets. That has been true since long before neural rendering entered the conversation.

At the same time, we have more genuinely experimental and inventive smaller titles now than at any other point in gaming history and the industry is broader than it used to be, not narrower.


There are no limits being pushed with this, though. A new "feature" gets rolled out, we get a couple of tech demos, and then it's forgotten about when the next feature is pushed out with next week's new hardware.

I strongly disagree here.

What was shown yesterday is not a minor feature toggle. None of this appears to be just another incremental sharpening pass or resolution trick. It is neural rendering being applied directly to lighting and shading. You can see it in the faces. You can see it in indirect light response. It is learning how light behaves in a scene and reconstructing that behaviour in a way traditional pipelines simply could not.

That is not just a cosmetic filter; it's a shift in how parts of the image are being generated at a core level.

We spent decades building increasingly complex deterministic pipelines to approximate global illumination, subsurface scattering, soft shadows and material response. Now we are starting to replace parts of that stack with models trained to reproduce those phenomena more efficiently and, in cases like this, far more convincingly.

That is absolutely pushing limits. It is just pushing them in a different direction than raw raster throughput.


Not sure why you need good artists, when whatever they've painstakingly created is overpainted with AI filters :confused:

Again, this really doesn't seem to be a case of slapping a stylistic filter over finished art. It is reconstructing lighting interaction and shading detail based on scene data. It still depends entirely on geometry, materials, animation, composition and art direction.

If the underlying work is weak, neural rendering will not magically make it strong. It can refine light transport and detail reconstruction, but it cannot invent taste, composition or gameplay.


Because most of those weren't gimmicks. Moving from sprites to polygons allowed games that weren't previously possible (or certainly not in a convincing or performant way), programmable shaders allowed a huge step forward in realism and again effects that weren't otherwise possible or practical.

The others mentioned are all something of a nothing burger. Even the much-vaunted ray tracing has, what, a handful of games that really make extensive use of it, and AFAIK none where more accurate lighting has added new gameplay mechanics.

It doesn't immediately unlock a brand new genre in the way 3D space did, but it dramatically changes the ceiling for visual simulation and realism going forward.

Lighting is not just cosmetic, it drives mood, readability, facial fidelity, atmosphere and immersion.

Not every rendering advance creates a new mechanic. Some advances deepen simulation quality and that's a legitimate part of progression too.

If the DLSS 5 screenshots are your idea of far more convincing, then I'll take the older algorithms that are less convincing due to their limitations any day :D

I simply cannot fathom how anybody could look at the screenshots, let alone at the clips in motion, and not immediately see the gargantuan improvement in realism.

Don't get me wrong, I find the memes very funny... :D

But can you truly point to a specific side-by-side screenshot, or preferably a clip in motion, where you genuinely think the DLSS 5 version looks worse than the original?

Not just "different" but actually worse in terms of lighting response, facial detail, temporal stability, or material definition.

As I said before, games have always been approximations. Now parts of that approximation stack are being solved with statistically trained models instead of entirely hand-authored heuristics.

If someone prefers a particular art style, that is completely fair. But if the claim is that the new lighting and shading reconstruction looks worse in motion, I would genuinely like to see an example, because the material I've seen so far suggests the precise opposite.
 
Sure this is early preview tech, but I can't un-see the generic AI slop boilerplate faces which underpin the enhancements and that kind of ruins the immersion in a game.

Once you've seen it in a few pictures people have generated from AI you instantly recognise it.

It will be refined over multiple versions / years.

Same way DLSS is 4 versions old now.

(5 if you count 4.5 as a version)

At the beginning people were ripping that to pieces for being blurry. Now it's better than native rendering!
 
What was shown yesterday is not a minor feature toggle. None of this appears to be just another incremental sharpening pass or resolution trick. It is neural rendering being applied directly to lighting and shading. You can see it in the faces. You can see it in indirect light response. It is learning how light behaves in a scene and reconstructing that behaviour in a way traditional pipelines simply could not.

That is not just a cosmetic filter; it's a shift in how parts of the image are being generated at a core level.

Is it? I think that's reading too much into it. And cosmetic is an apt word, given the literal added cosmetics.


DLSS 5 takes a game’s color and motion vectors for each frame as input, and uses an AI model to infuse the scene with photoreal lighting and materials that are anchored to source 3D content and consistent from frame to frame.
  • Controllability: Allows game developers to tune intensity, color, and masking to determine where and how enhancements are applied to maintain the game’s unique aesthetic.

Sounds to me that it will be the game devs job to guardrail it via input, and outcomes. It's functioning like a semi-tamed wildcard.
The controllability suggests so.

DLSS 5 honors artistic intent in two ways:


  • Inputting the game’s color and motion vectors for each frame into the model, anchoring the output in the source 3D content.
  • By providing developers with detailed controls such as intensity and color grading. Artists can use these controls to adjust blending, contrast, saturation, and gamma, and determine where and how enhancements are applied to maintain the game’s unique aesthetic. Developers can also mask specific objects or areas to be excluded from enhancement.

I think Nvidia is being quite honest here that it's a sophisticated and clever post-processing enhancement, a last pass before displaying on screen, and not something more fundamental as you seem to suggest.
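Based purely on the controls Nvidia's blurb lists (intensity, colour grading, per-object masking), here's a rough sketch of what the developer-facing knobs might look like. Every name here is invented for illustration; nothing in this snippet is from any actual SDK.

```python
from dataclasses import dataclass, field

@dataclass
class NeuralPassSettings:
    """Hypothetical per-scene settings mirroring the controls Nvidia
    describes: intensity, grading adjustments, and exclusion masks."""
    intensity: float = 1.0                 # 0.0 = pass disabled, 1.0 = full strength
    contrast: float = 0.0                  # grading offsets applied after the model
    saturation: float = 0.0
    gamma: float = 1.0
    masked_objects: set[str] = field(default_factory=set)  # excluded from enhancement

    def applies_to(self, object_id: str) -> bool:
        """The pass skips anything the artist has masked out, or everything
        if the intensity is dialled to zero."""
        return self.intensity > 0.0 and object_id not in self.masked_objects

settings = NeuralPassSettings(intensity=0.5, masked_objects={"hero_face"})
print(settings.applies_to("hero_face"))   # False: the artist keeps faces untouched
print(settings.applies_to("wall_brick"))  # True
```

If the real API looks anything like this, it supports your reading: the devs guardrail a last-pass enhancement rather than the model rewriting the pipeline underneath them.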


If someone prefers a particular art style, that is completely fair. But if the claim is that the new lighting and shading reconstruction looks worse in motion, I would genuinely like to see an example, because the material I've seen so far suggests the precise opposite.

In the DLSS 5 demo, at 13 mins, the woman's face notably changes: it looks different when she's facing towards the left of camera versus when she looks right.
 
@Armageus Looks like John Linneman from DF was blindsided by the video and was not consulted about it:
[Screenshot: post by Mischief on X, "It seems like not everyone from Digital Foundry…"]

He made some negative comments on bluesky but deleted them.
 
Looks like John Linneman from DF was blindsided by the video and was not consulted about it.

He made some negative comments on bluesky but deleted them.

Not surprising, Nvidia is an unofficial sponsor of DF so anything negative will be brushed away quickly.
 
It will be refined over multiple versions / years.

Same way DLSS is 4 versions old now.

(5 if you count 4.5 as a version)

At the beginning people were ripping that to pieces for being blurry. Now it's better than native rendering!

That’s very true, but that didn’t mean we couldn’t point out how terrible it was.
 
For me, realism in a game aids immersion (I'm a VR gamer too, which itself adds immersion). The post-DLSS characters look more like real people. The moaners might not like the style, or that AI has done it, but they're undeniably more realistic looking, which means greater immersion for me. Looking forward to it.
 