
Will DLSS 3 add latency?

[image: 2xqkj26bj5p91.jpg (still of a single DLSS 3 generated frame)]

I'm not sure what the point of looking at a still of a single frame is. The inserted frame is there for 1/120th of a second before being replaced, while the human eye will happily watch a film at 24fps without struggling, so I doubt very much you'll notice a small artefact on a single frame. The question is how it looks when it's actually running, something that's going to be near impossible to evaluate until we've seen the system in person.
 
Are you really that surprised? When it comes to nvidia/dlss, people love to find one scene where it looks worse and lambast it :D :p :cry:

But in all seriousness, 100% agree. I could quite easily go and screenshot a lot of scenes from that same video where dlss looks no different to native, or in a few frames even looks better :cry: There are no doubt going to be some drawbacks/areas where there will be issues, but as long as they aren't severe like dlss 1 and 90+% of the time dlss 3 looks just as good, if not better, then I'm all for it. If it is as good as dlss 2, then AMD will need to act quickly and get something out asap to compete with it, and not 1-2 years later again.....

I've read that apparently the VR headsets use similar frame interpolation for certain fps ranges?
 
That is a terrible argument for why artifacting is okay in a video game.

Problem is there is some kind of "artifacting" in the majority of games regardless of dlss/fsr; that is why fsr 1 was beyond awful, as it enhanced all the issues already present in the native image, e.g. shimmering, aliasing, ghosting/trailing.

e.g. days gone and rdr 2 (native with their best quality AA settings, no dlss)

[screenshots: TTSLtQX.jpg, HP0C3Dl.jpg, 611HEgE.jpg, b6hwo4N.jpg, GHUG0lh.jpg (native AA artefact examples)]

SE 3 shimmering galore, not even the worst case scenario:


That's why I always say to people that you shouldn't be over-sharpening the image: whilst it might look better initially, you are creating/enhancing artifacts as well.

But everyone has different sensitivity to these kinds of things, e.g. I could deal with the issues in that spiderman screenshot above, but shimmering/aliasing like in SE 3 is a complete no-go for me.
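On the sharpening point, here's a tiny sketch (not any game's actual sharpening filter, just a generic unsharp mask on a synthetic edge with made-up amounts) of why cranking sharpening up backfires:

```python
import numpy as np

# Toy illustration: unsharp masking is roughly
#   sharpened = original + amount * (original - blurred)
# and around any hard edge that difference term overshoots, which is the
# halo/ringing you then see stacked on top of whatever shimmer or
# aliasing the image already had. Kernel and amounts are arbitrary.

edge = np.concatenate([np.zeros(10), np.ones(10)])        # clean step edge
blurred = np.convolve(edge, np.ones(3) / 3, mode="same")  # simple box blur

for amount in (0.5, 1.5, 3.0):
    sharpened = edge + amount * (edge - blurred)
    print(f"amount={amount}: min={sharpened.min():.2f}, max={sharpened.max():.2f}")

# The original edge only contains values 0 and 1; as the amount goes up,
# the sharpened result dips further below 0 and climbs further above 1.
# Those out-of-range excursions are the halos.
```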

LOL good luck playing a game at 24FPS with frame interpolation.

According to nvidia, latency will be lowered overall with dlss 3 vs native, although there's no mention as to what exactly that latency refers to; I'm presuming they're using nvidia's overlay/latency measurement?

[image: sLezCYK.png (Nvidia's DLSS 3 latency comparison slide)]

The one that will confirm how much latency is added is the dlss 2 vs dlss 3 comparison though. I'm 100% expecting there to be a considerable latency jump between dlss 2 and dlss 3 with frame generation.

If their measurement is purely from the overlay: iirc, when using max settings with dlss balanced @ 3440x1440 with an fps range of 50-60 in cp 2077, the latency is around 20-30 ms.
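For a rough sense of scale (the 50-60 fps and 20-30 ms figures are the ones quoted above; everything else is just 1000/fps arithmetic, not a claim about how nvidia measures):

```python
# Frame time alone is not end-to-end latency, but it's the natural unit
# to compare overlay readings against. Figures other than the formula
# are just the examples discussed in this thread.

def frame_time_ms(fps: float) -> float:
    """How long a single frame stays on screen, in milliseconds."""
    return 1000.0 / fps

for fps in (50, 60, 120, 170):
    print(f"{fps:>3} fps -> {frame_time_ms(fps):5.1f} ms per frame")

# 50 fps -> 20.0 ms, 60 fps -> 16.7 ms, 120 fps -> 8.3 ms, 170 fps -> 5.9 ms.
# So a 20-30 ms overlay reading at 50-60 fps is roughly one to two frame
# times of total measured latency.
```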
 
How come power draw is lower with DLSS 3? Is it because the GPU is doing less work? Edit: I just read the article; the tensor cores are more efficient at this and offload work from the rest of the GPU (I am not sure whether this is speculation on their part).
2D work on a GPU uses a lot less power than 3D work - far fewer transistors involved. DLSS frame interpolation (or FRUC as they are calling it) pretty much compares two 2D frames and generates a third 2D frame in between them. They have some extra transistors (in tensor) to accelerate this transformation, but that's still far fewer than are involved in a full 3D pipeline.

That by itself wouldn't reduce power draw though - you're generating extra frames rather than reducing the number rendered traditionally, so they must be hitting a cap/limitation somewhere else which means they can't render the same number of traditional frames. This reduction plays out as a drop in power draw.

So basically you can use FRUC + upscale in a number of ways. Imagine you can normally render 50 fps traditionally:
A) Use DLSS upscale (also a 2D method) to reduce rendered resolution, cap at 50fps = lower power draw, same performance
B) Use DLSS upscale to reduce rendered resolution, don't cap = ~same power draw, more 'interactive' frames per second
C) Use DLSS FRUC to frame interpolate, cap at 50 fps = lower power draw, same visual performance, half the 'interactive' frames per second
D) Use DLSS FRUC to frame interpolate, don't cap = ~same power draw, ~twice the visual performance, same 'interactive' frames per second
E) Use DLSS upscale + FRUC to reduce rendered resolution + frame interpolate, cap at 50 fps = very low power draw, same visual performance, half the 'interactive' frames per second
F) Use DLSS upscale + FRUC to reduce rendered resolution + frame interpolate, don't cap = ~same power draw, >twice the visual performance, more 'interactive' frames per second

It's scenario F where I think Nvidia will be boasting about 2x performance - that it's not more like 3x means they're hitting another bottleneck - and that bottleneck is what's reducing the number of traditional frames rendered and thus lowering power draw.
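To put rough numbers on those scenarios, here's a minimal sketch. It assumes a 50 fps traditional baseline, a made-up ~1.6x cost saving from upscaling, "interactive" meaning only the traditionally rendered frames, and it ignores the cost of the FRUC pass itself; none of these figures come from Nvidia, they just show how the trade-offs fall out:

```python
# Rough model of scenarios A-F above. All ratios are illustrative
# placeholders, not measurements.

BASELINE_FPS = 50       # traditional native-res render rate
UPSCALE_SPEEDUP = 1.6   # assumed cost saving per frame from DLSS upscaling

def scenario(upscale, frame_gen, cap=None):
    cost = (1.0 / UPSCALE_SPEEDUP) if upscale else 1.0   # per rendered frame
    rendered = BASELINE_FPS / cost                       # GPU-limited fps
    if cap is not None:                                  # cap applies to displayed fps
        rendered = min(rendered, cap / (2 if frame_gen else 1))
    displayed = rendered * (2 if frame_gen else 1)
    power = rendered * cost / BASELINE_FPS               # vs. traditional baseline
    return rendered, displayed, power

cases = {
    "A) upscale, cap 50":          (True,  False, 50),
    "B) upscale, uncapped":        (True,  False, None),
    "C) FRUC, cap 50":             (False, True,  50),
    "D) FRUC, uncapped":           (False, True,  None),
    "E) upscale + FRUC, cap 50":   (True,  True,  50),
    "F) upscale + FRUC, uncapped": (True,  True,  None),
}
for name, args in cases.items():
    rendered, displayed, power = scenario(*args)
    print(f"{name}: {rendered:.0f} interactive fps, "
          f"{displayed:.0f} displayed fps, ~{power:.0%} of baseline render load")
```

On these assumptions, F comes out around 160 displayed fps for roughly baseline power, so Nvidia only claiming ~2x fits the idea above that another bottleneck is eating into the traditional frame count and, with it, the power draw.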
 
Human eyes are really good at spotting change. If the expected motion is interrupted we will notice it. We might not see exactly what caused the flickering but we will notice it.

The human eye is really bad at noticing lots of things, and really good at noticing others. The only way we will know which DLSS falls under is by seeing it in action; a still of a single frame can't tell us.
 
I'm not sure what the point of looking at a still of a single frame is. The inserted frame is there for 1/120th of a second before being replaced.
The replacement frames are half the overall framerate - if there's artifacting then in a minute's worth of play you'll have 30 seconds of onscreen artifacts.
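Both points are true at the same time, which is worth spelling out: each individual generated frame is gone in a blink, but collectively they fill half the on-screen time. A quick sketch using the 120 fps figure from earlier in the thread:

```python
# Each generated frame is only visible for one frame time, but if every
# other displayed frame is generated they still account for half of all
# on-screen time. 120 fps is just the example used earlier in the thread.

displayed_fps = 120
generated_share = 0.5              # every other frame is interpolated
frame_time_s = 1.0 / displayed_fps

per_minute_s = 60 * displayed_fps * generated_share * frame_time_s
print(f"Each generated frame is on screen for {frame_time_s * 1000:.1f} ms,")
print(f"but together they cover {per_minute_s:.0f} s of every minute.")
# -> 8.3 ms per frame, 30 s of every minute.
```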
 
According to nvidia, latency will be lowered overall with dlss 3 vs native, although there's no mention as to what exactly that latency refers to; I'm presuming they're using nvidia's overlay/latency measurement?

The one that will confirm how much latency is added is the dlss 2 vs dlss 3 comparison though. I'm 100% expecting there to be a considerable latency jump between dlss 2 and dlss 3 with frame generation.
I think if you're rendering 3 frames ahead then there's either no difference (if you hit a traditionally rendered frame) or +1 frame of difference (if you hit an interpolated frame) in latency. So definitely not a reduction if the frames-ahead count is kept the same.

What Nvidia seem to be doing is turning Reflex (= reducing frames rendered ahead) on, which then of course makes a difference to latency, but you can do the same without DLSS3 (or with DLSS2) and get an even bigger reduction in latency since you don't need to render as many frames ahead as you do for FRUC.
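As a toy illustration of that argument (a simplified model of my own, not how the pipeline actually accounts for latency): assume 60 rendered fps, latency of roughly (frames queued ahead + 1) frame times, and up to one extra frame time for frame generation, because an interpolated frame can only be built once the next real frame exists.

```python
# Toy latency model for the argument above. Assumes 60 rendered fps;
# the queue depth and the "wait for the next real frame" penalty are
# the only terms. Purely illustrative.

def latency_ms(rendered_fps, frames_ahead, frame_gen):
    frame_time = 1000.0 / rendered_fps
    base = (frames_ahead + 1) * frame_time        # render-ahead queue
    penalty = frame_time if frame_gen else 0.0    # worst case for an interpolated frame
    return base + penalty

for label, args in [
    ("DLSS 2-style, 3 frames ahead",   (60, 3, False)),
    ("DLSS 3-style, 3 frames ahead",   (60, 3, True)),
    ("DLSS 3-style + Reflex, 1 ahead", (60, 1, True)),
    ("DLSS 2-style + Reflex, 1 ahead", (60, 1, False)),
]:
    print(f"{label}: ~{latency_ms(*args):.0f} ms")

# On these assumptions, frame generation plus Reflex can still undercut
# a deep render queue without Reflex, but Reflex without frame
# generation is lower still, which is the point being made above.
```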
 
When comparing the 60fps against the DLSS3 frame picture above, there is a clear difference. However, the DLSS3 blurb seems to be that the 'calculated' frame is a quarter-resolution frame upscaled to full size … so it'll have the same sort of upscaling artefacts within it as the generated frame has. I'm going to guess that the footage will consistently be artefacted rather than alternating between clear / artefacted.

The issue then being whether the artefacts are acceptable during play I suppose.
 
Also worth bearing in mind that those screenshots are from using the slowest youtube playback speed too; e.g. here are some comparisons from the same video at normal speed:

[screenshots: 091ef1q.png, dutZi67.png, UBzge23.png, jKUORCl.png (comparisons from the same video at normal playback speed)]

Of course, using the slowest speed helps point out the issues but obviously you aren't going to be playing games in slow motion.

The other concern I have with this is potentially enhancing shimmering but given that dlss eliminates shimmering compared to native, I'm hopeful it won't be an issue here.

Hopefully DF will get their videos out this weekend as it is by far the most interesting thing nvidia have shown, well except for maybe their remix tool.... I think "overall", the pros of dlss 3 + frame generation will outweigh the cons, same as is the case with dlss 2 and FSR 2.1.
 
Wait a second? Those are stills from a YouTube video? How do people imagine they can tell the difference between dlss artefacts and video compression artefacts? Explains why those looked so much like mpeg artefacts to me.
 
Yup:


Somewhat agree, but it's still a good/valid comparison, as any issues with the encoding and compression etc. will also apply to the non-dlss 3 footage too (if the footage is captured using the same tools and encoded using the same settings etc., which knowing DF, it will be).

The problem is when you slow the footage right down, then take a screenshot of just 1-2 frames and ignore the rest to come to the conclusion "zOMG DLSS 3 is awful!" :p
 
According to nvidia, latency will be lowered overall with dlss 3 vs native, although there's no mention as to what exactly that latency refers to; I'm presuming they're using nvidia's overlay/latency measurement?

[image: sLezCYK.png (Nvidia's DLSS 3 latency comparison slide)]

The one that will confirm how much latency is added is the dlss 2 vs dlss 3 comparison though. I'm 100% expecting there to be a considerable latency jump between dlss 2 and dlss 3 with frame generation.

If their measurement is purely from the overlay: iirc, when using max settings with dlss balanced @ 3440x1440 with an fps range of 50-60 in cp 2077, the latency is around 20-30 ms.
50ms at 170 FPS????

LMAO

That's 60 FPS-class latency, but hey, as long as the counter tells us it's higher!!!!

In fact it is worse than that LOL.

[gif: 60hz-vs-240hz.gif (60 Hz vs 240 Hz latency comparison)]
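To put that complaint in units (using only the 50 ms and 170 fps figures quoted above; the rest is arithmetic, not a claim about what the slide actually measures):

```python
# End-to-end latency doesn't shrink just because the fps counter grows.
# Expressing the same 50 ms in displayed-frame counts makes that obvious.

latency_ms = 50.0
for fps in (60, 170):
    frames_of_lag = latency_ms / (1000.0 / fps)
    print(f"at {fps} fps displayed, {latency_ms:.0f} ms of latency spans "
          f"~{frames_of_lag:.1f} displayed frames")
# at 60 fps: ~3.0 frames; at 170 fps: ~8.5 frames. The picture is smoother,
# but the input-to-photon delay is whatever it is.
```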
 
this does kind of strike me as making the number go up, without the benefits of the number going up
Yes exactly. Frame doubling in an FPS sense, but still the latency of the lower FPS number.

This on top of screen latency and natural human latency.

LOL!

Good luck with any game requiring an ounce of accuracy from you; you will be too late to react.

Imagine playing UT 2004 like I do here, but you are dead before you even see the enemy.
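For context on why a few tens of milliseconds matter, here's a rough reaction-chain sketch. The display and human-reaction figures are ballpark assumptions for illustration (they vary a lot between monitors and people), not measurements of DLSS 3:

```python
# Rough end-to-end reaction budget. Only the ~50 ms game/render figure
# comes from the discussion above; the other numbers are generic
# ballpark assumptions.

chain_ms = {
    "game/render latency (the ~50 ms discussed above)": 50,
    "display processing + pixel response (assumed)": 10,
    "human visual reaction time (typical ballpark)": 200,
}

for stage, ms in chain_ms.items():
    print(f"{stage}: {ms} ms")
print(f"total see-and-react budget: ~{sum(chain_ms.values())} ms")

# Shaving or adding 10-20 ms at the render stage is invisible in a slow
# game, but in a twitch shooter it's a meaningful slice of the part of
# the chain you can actually control.
```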

 