• Competitor rules

    Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.

Will DLSS 3 add latency?

Nvidia adding fake frames because they know they won't be able to keep up with the raw power of the 7900 XT :cry: couldn't make it up. They might as well just make their own FPS counting software that reports double the frame rate.

I wish AMD shared your enthusiasm; sadly they don't, and their latest blog post reiterated that RDNA 3 is focused on efficiency, not maximum performance.
 
This doesn't follow.

Without frame generation, frame rate is 50Hz, a frame every 20ms and input polled every 20ms.
With frame generation, frame rate is 100Hz, a frame every 10ms and input polled every 20ms.

The latency remains the same.



Have a look at Nvidia's diagram:

[Nvidia's DLSS 3 frame generation pipeline diagram]

The generated frame comes after the rendered one, not before it.
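Spelling that out as a quick back-of-envelope script (this just encodes the assumptions above: input sampled once per rendered frame, and the generated frames being purely additional):

```python
# Quick arithmetic for the 50 Hz example above. Assumes input is sampled once
# per rendered frame and that generated frames are purely additional, i.e.
# they don't delay the rendered ones.
render_hz = 50
render_interval_ms = 1000 / render_hz            # 20 ms between rendered frames

for frame_generation in (False, True):
    display_hz = render_hz * (2 if frame_generation else 1)
    display_interval_ms = 1000 / display_hz
    input_poll_interval_ms = render_interval_ms  # unchanged by generated frames
    print(f"frame generation {'on' if frame_generation else 'off'}: "
          f"a frame every {display_interval_ms:.0f} ms, "
          f"input polled every {input_poll_interval_ms:.0f} ms")
```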



I think this probably depends on how it is integrated with the game engine. If the engine has full control and access to what is happening with these generated frames then it can use the headroom allowed by half the frames being generated to run a physics update on every rendered frame, doubling the number of times it polls input each second. Obviously you won't see this on screen immediately, but you would get the faster response.
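For illustration, here's a bare-bones sketch of the kind of decoupled loop that would make that possible: input/physics step at a fixed rate regardless of how fast frames come out. The poll_input / step_physics / render_frame stubs are placeholders, and nothing here reflects how DLSS 3 actually hooks into an engine; it's just the generic fixed-timestep pattern.

```python
import time

SIM_RATE_HZ = 100              # input + physics stepped at 100 Hz
SIM_STEP = 1.0 / SIM_RATE_HZ

def poll_input(): ...          # placeholder: read controller/mouse state
def step_physics(dt): ...      # placeholder: advance the simulation by dt
def render_frame(): ...        # placeholder: draw (and possibly generate) frames

def run(duration_s=0.5):
    accumulator = 0.0
    previous = time.perf_counter()
    end = previous + duration_s
    while time.perf_counter() < end:
        now = time.perf_counter()
        accumulator += now - previous
        previous = now
        # Run as many fixed-size sim steps as the elapsed time allows, so
        # responsiveness doesn't depend on how long render_frame() takes.
        while accumulator >= SIM_STEP:
            poll_input()
            step_physics(SIM_STEP)
            accumulator -= SIM_STEP
        render_frame()

run()
```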
That image shows the order in which the frames are displayed, not the order in which they are calculated by the GPU.

The way they are calculated and displayed is as follows:
Frame 1 is calculated and displayed
Frame 3 is calculated
Frame 2 is calculated and displayed
Frame 3 is displayed
Frame 5 is calculated
Frame 4 is calculated and displayed
Frame 5 is displayed
etc.

It says so right in the blog "For each pixel, the DLSS Frame Generation AI network decides how to use information from the game motion vectors, the optical flow field, and the sequential game frames to create intermediate frames." It's creating new intermediate frames in between those generated by the traditional DLSS super resolution.
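Written out as a tiny script (this is just my reading of the ordering above, not anything from Nvidia), the pattern looks like this:

```python
# Odd frames are rendered by the GPU; each even frame is generated between its
# two rendered neighbours. A generated frame needs the *next* rendered frame
# to already exist, which is why that rendered frame is held back one slot.
def event_sequence(num_rendered=3):
    events = ["Frame 1 is calculated and displayed"]
    for i in range(2, num_rendered + 1):
        odd = 2 * i - 1        # next rendered frame: 3, 5, 7, ...
        even = odd - 1         # generated frame shown just before it
        events.append(f"Frame {odd} is calculated")
        events.append(f"Frame {even} is calculated and displayed")
        events.append(f"Frame {odd} is displayed")
    return events

print("\n".join(event_sequence()))
```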
 
That image shows the order in which the frames are displayed, not the order in which they are calculated by the GPU.

The way they are calculated and displayed is as follows:
Frame 1 is calculated and displayed
Frame 3 is calculated
Frame 2 is calculated and displayed
Frame 3 is displayed
Frame 5 is calculated
Frame 4 is calculated and displayed
Frame 5 is displayed
etc.
Exactly - I just read the blog on FRUC (as they're calling it) - they give a very clear example of generating a frame between two other frames:

nvidia said:
Using a complete flow vector map between the two frames, the algorithm generates an interpolated frame between the two input frames. Such an image may contain few holes (pixels that don’t have valid color). This figure shows a few small gray regions near the head of the horse and in the sky that are holes.

Holes in the interpolated frame are filled using image domain hole infilling techniques to generate the final interpolated image.
..
The calling application can interleave this interpolated frame with original frames to increase the frame rate of video or game

Which to me suggests they need a minimum amount of render-ahead, but maybe Reflex cutting that to a low number is enough.
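For what it's worth, here's a toy NumPy sketch of the kind of flow-based interpolation that quote describes, just to make the "holes" idea concrete. It's a rough illustration under my own assumptions, not Nvidia's actual algorithm or anything from their Optical Flow SDK:

```python
import numpy as np

def interpolate_midframe(frame_a, frame_b, flow_ab):
    """Toy mid-frame interpolation: splat each pixel of frame A halfway along
    its A->B flow vector, then fill the resulting holes from frame B. The real
    FRUC / DLSS 3 pipeline is far more sophisticated (bidirectional flow,
    proper hole infilling, an AI network deciding per pixel)."""
    h, w, _ = frame_a.shape
    mid = np.zeros_like(frame_a)
    covered = np.zeros((h, w), dtype=bool)

    ys, xs = np.mgrid[0:h, 0:w]
    # Destination of each source pixel after moving halfway along its flow.
    tx = np.clip(np.round(xs + 0.5 * flow_ab[..., 0]).astype(int), 0, w - 1)
    ty = np.clip(np.round(ys + 0.5 * flow_ab[..., 1]).astype(int), 0, h - 1)

    mid[ty, tx] = frame_a[ys, xs]      # forward splat (collisions: last write wins)
    covered[ty, tx] = True

    # "Hole infilling": pixels nothing landed on are simply taken from frame B here.
    mid[~covered] = frame_b[~covered]
    return mid

# Tiny usage example: two 4x4 grey frames and a uniform 2-pixel rightward flow.
a = np.full((4, 4, 3), 0.2)
b = np.full((4, 4, 3), 0.8)
flow = np.zeros((4, 4, 2)); flow[..., 0] = 2.0
print(interpolate_midframe(a, b, flow)[..., 0])   # column 0 is a hole, filled from b
```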
 
It says so right in the blog "For each pixel, the DLSS Frame Generation AI network decides how to use information from the game motion vectors, the optical flow field, and the sequential game frames to create intermediate frames." It's creating new intermediate frames in between those generated by the traditional DLSS super resolution.

I concede you can read it that way. From their live presentation, that diagram, and their claims about the impact on latency, I think that reading is incorrect, but we'll have to wait and see until there's actually some better technical information available.
 
Everyone with a 40 series card can soon test these games that have DLSS 3.


Here’s the complete list:

  • A Plague Tale: Requiem
  • Atomic Heart
  • Black Myth: Wukong
  • Bright Memory: Infinite
  • Chernobylite
  • Conqueror’s Blade
  • Cyberpunk 2077
  • Dakar Rally
  • Deliver Us Mars
  • Destroy All Humans! 2-Reprobed
  • Dying Light 2 Stay Human
  • F1 22
  • F.I.S.T.: Forged In Shadow Torch
  • Frostbite Engine
  • Hitman 3
  • Hogwarts Legacy
  • Icarus
  • Jurassic World Evolution 2
  • Justice
  • Loopmancer
  • Marauders
  • Microsoft Flight Simulator
  • Midnight Ghost Hunt
  • Mount & Blade II: Bannerlord
  • Naraka: Bladepoint
  • Nvidia Omniverse
  • Nvidia Racer RTX
  • Perish
  • Portal with RTX
  • Ripout
  • S.T.A.L.K.E.R. 2: Heart of Chornobyl
  • Scathe
  • Sword and Fairy 7
  • Synced
  • The Lord of the Rings: Gollum
  • The Witcher 3: Wild Hunt
  • Throne and Liberty
  • Tower of Fantasy
  • Unity
  • Unreal Engine 4 & 5
  • Warhammer 40,000: Darktide
 
That's how you do it AMD, sorry I forgot, FSR is free..... ;) :D :p

Hoping DF will get their videos out soon to see if there are any other improvements to DLSS aside from frame generation and Reflex.
 
Suggest you go back and read that again, as that's not how it works. It interpolates between the current and previous frame to insert a new frame in between the two. Therefore the current frame has to be delayed so the newly generated intermediate frame can be drawn on screen first. This is why it operates in conjunction with Reflex, to minimise that latency hit.

Yeah you're right. That's disappointing; I had thought it was something innovative instead of derivative (with improvements).
Predicting would be more challenging than interpolating a frame in the middle of two existing frames. I'd be curious whether that is even technically feasible, or whether there is something fundamental in the GPU rendering process that would prevent it? It would need to sit higher up the pipeline to work with geometry (rough timings for the two approaches are sketched below).

I mean, technically consoles could already have been doing effectively the same thing via TV motion interpolation, depending on the settings used, minus a few tricks NV are using to try and improve the experience.
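Rough numbers for the two approaches, assuming a steady 50 Hz render rate and ignoring the cost of actually generating the frame (these aren't Nvidia figures, just back-of-envelope):

```python
render_interval_ms = 20.0                      # 50 Hz render rate assumed
display_interval_ms = render_interval_ms / 2   # 100 Hz output with generated frames

# Interpolation (what DLSS 3 appears to do): the newest rendered frame is held
# back so the generated frame between it and the previous one can be shown
# first, so each real frame reaches the screen roughly one display slot late.
interp_added_latency_ms = display_interval_ms

# Extrapolation (the hypothetical "predict ahead" approach): a predicted frame
# is shown while the next real frame is still rendering, so there's no
# hold-back, at the cost of visible prediction errors when motion changes.
extrap_added_latency_ms = 0.0

print(f"interpolation: ~+{interp_added_latency_ms:.0f} ms")
print(f"extrapolation: ~+{extrap_added_latency_ms:.0f} ms (but with artefact risk)")
```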
 
Anyone checked out this?



From this link https://wccftech.com/nvidia-geforce...k-2077-dlss-3-cuts-gpu-wattage-by-25-percent/

It doesn't look good: there are bigger swings in minimum FPS and 1% lows compared to native. But then again, the native 1440p performance, even though the settings are all Ultra and Psycho, doesn't look great for the power of the GPU?!

And even if the latency hasn't been hugely increased, a 170 FPS average with over 50 ms of latency would still feel really sluggish, maybe even worse than 60 FPS at 70 ms?
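One way to put those two figures side by side (treating the 50 ms and 70 ms as total end-to-end latency, which is an assumption on my part):

```python
# How many displayed frames go by between an input and its effect appearing.
for fps, latency_ms in [(170, 50), (60, 70)]:
    frame_time_ms = 1000 / fps
    frames_behind = latency_ms / frame_time_ms
    print(f"{fps} fps: {frame_time_ms:.1f} ms per frame, {latency_ms} ms latency "
          f"= ~{frames_behind:.1f} frames behind what's on screen")
# 170 fps: ~5.9 ms per frame, so 50 ms is ~8.5 frames of lag
# 60 fps: ~16.7 ms per frame, so 70 ms is ~4.2 frames of lag
```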
 

But Reflex isn't a new feature of DLSS!!!!! :mad:

:cry:

Any word on when DF will have their videos? I'm hoping there will be performance improvements to "DLSS super resolution" outside of the frame generation, given how FSR 2 is now achieving similar performance without dedicated hardware...
 
Anyone checked out this?



From this link https://wccftech.com/nvidia-geforce...k-2077-dlss-3-cuts-gpu-wattage-by-25-percent/

It doesn't look good: there are bigger swings in minimum FPS and 1% lows compared to native. But then again, the native 1440p performance, even though the settings are all Ultra and Psycho, doesn't look great for the power of the GPU?!

And even if the latency hasn't been hugely increased, a 170 FPS average with over 50 ms of latency would still feel really sluggish, maybe even worse than 60 FPS at 70 ms?
I think lower latency will still feel better regardless of FPS, but overall I don't think DLSS 3 looks that appealing so far. When I ran SLI back in the day, my main beef was the extra frame of latency due to AFR, meaning I really needed a decent frame rate without SLI for the input latency to feel okay with SLI enabled. I hope DLSS frame insertion isn't gonna be the same deal...
 
For the same number of frames rendered ahead, DLSS should have the same kind of latency. Comparing non-DLSS + Reflex (render-ahead of 1, or even 0) to DLSS FRUC + Reflex, the DLSS path may have higher latency due to needing at least a couple of frames to compare.

What I suspect Nvidia will try to do is compare non-Reflex (render-ahead of 3, say) to DLSS + Reflex (render-ahead of 2, perhaps) and claim DLSS lowers latency. But that's apples to oranges.
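Putting some very rough numbers on that (assuming a 60 Hz render rate, one frame time of latency per queued frame, and half a frame time of hold-back for the interpolated frame; the render-ahead depths are the guesses above, not Nvidia figures):

```python
frame_time_ms = 1000 / 60

scenarios = {
    "non-DLSS + Reflex": {"render_ahead": 1, "frame_gen": False},
    "DLSS FG + Reflex":  {"render_ahead": 2, "frame_gen": True},
    "non-Reflex":        {"render_ahead": 3, "frame_gen": False},
}
for name, s in scenarios.items():
    latency = s["render_ahead"] * frame_time_ms
    if s["frame_gen"]:
        latency += frame_time_ms / 2   # hold-back so the interpolated frame shows first
    print(f"{name} (render-ahead {s['render_ahead']}): ~{latency:.0f} ms of queue latency")
```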
 
Anyone checked out this?



From this link https://wccftech.com/nvidia-geforce...k-2077-dlss-3-cuts-gpu-wattage-by-25-percent/

It doesn't look good: there are bigger swings in minimum FPS and 1% lows compared to native. But then again, the native 1440p performance, even though the settings are all Ultra and Psycho, doesn't look great for the power of the GPU?!

And even if the latency hasn't been hugely increased, a 170 FPS average with over 50 ms of latency would still feel really sluggish, maybe even worse than 60 FPS at 70 ms?
Interesting - so, for reference I just tested out CP at Ultra RT/Psycho @1440 on the 3090Ti and I get an average of 38-ish fps wandering around the market at night (one of the heaviest locations) - that seems to tally up with the 4090 being around 1.5 times faster (excluding DLSS).

Two more data points - if I enable DLSS 2.x Quality the FPS goes up to 64 so the 4090 by comparison (DLSS 3.0) is now 2.6 times faster.

I wonder if some enterprising soul is going to hack DLSS 3.0 to run on the 30xx series?
 
From what I've read, DLSS 3.0 is looking like the TruMotion feature found on those HDTVs. It will be great for single-player games, but it's best avoided in competitive e-sports type games, as it has the potential to put you behind the server in latency/lag.
 
 
Anyone checked out this?



From this link https://wccftech.com/nvidia-geforce...k-2077-dlss-3-cuts-gpu-wattage-by-25-percent/

It doesn't look good: there are bigger swings in minimum FPS and 1% lows compared to native. But then again, the native 1440p performance, even though the settings are all Ultra and Psycho, doesn't look great for the power of the GPU?!

And even if the latency hasn't been hugely increased, a 170 FPS average with over 50 ms of latency would still feel really sluggish, maybe even worse than 60 FPS at 70 ms?
How come power draw is lower with DLSS 3? Is it because the GPU is doing less work? Edit: I just read the article; the tensor cores are more efficient at this and offload work from the rest of the GPU (I'm not sure whether that's speculation on their part).
 