NVIDIA RTX 50 SERIES - Technical/General Discussion

So for instance, if you're running 60fps native and then apply FG, your native base fps actually decreases, so it'll drop to, say, 55fps for FG x1, 50fps for FG x2 and 45fps for FG x3.
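
A minimal sketch of that arithmetic, reading "FG xN" as N generated frames per real frame and treating the base-fps penalties above as illustrative rather than measured:

```python
# Sketch of the trade-off described above: FG lowers the "real" base fps a
# little, then multiplies the displayed frame rate. The base-fps penalties
# are the illustrative numbers from the post, not measurements.
native_fps = 60
assumed_base_fps = {"FG x1": 55, "FG x2": 50, "FG x3": 45}

for mode, base in assumed_base_fps.items():
    generated = int(mode.split("x")[-1])   # generated frames per real frame (assumed reading)
    displayed = base * (generated + 1)
    print(f"{mode}: base {base} fps -> ~{displayed} fps displayed "
          f"(base frame time {1000 / base:.1f} ms vs {1000 / native_fps:.1f} ms native)")
```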

There is a penalty for the initial FG x2, but from what I recall the additional latency from x3 and x4 is nominal, practically the same as FG x2.

See the timestamp in this vid:

 

Hardware Unboxed placed it at 3-4ms for x3 and x4. I find for best results I need a base fps of about 80-90, and then my monitor is only 165Hz anyway, so there's no benefit for me from running x3-x4.
 

Yeah, the use cases for x4 are really limited… a 500Hz 1440p OLED when you have a 5070…?

I'm tempted to give x3 a go when I'm hovering at 85fps… I should then get to around 240fps, which will be my monitor's limit.
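
A quick sanity check of that, as a rough sketch (FG overhead ignored; the base fps and refresh rate are the figures from this and the post above):

```python
# Which FG multiplier gets closest to the monitor refresh without
# massively overshooting it. FG overhead is ignored for simplicity.
base_fps = 85        # base frame rate from the post above
refresh_hz = 240     # this poster's monitor limit

for multiplier in (2, 3, 4):
    displayed = base_fps * multiplier
    note = "over the refresh limit" if displayed > refresh_hz else "within refresh"
    print(f"x{multiplier}: ~{displayed} fps on a {refresh_hz} Hz panel ({note})")
# x2 lands at ~170 fps, x3 at ~255 fps (right around the 240 Hz cap),
# and x4 would mostly generate frames the monitor can't show.
```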
 
No, it doesn't wait for the next frame; the input lag would double if it did that, and it doesn't.
Mate, I suggest you do a bit of reading - I initially (over 2 years ago) had similar doubts, but NVIDIA themselves explained exactly how FG works, and yes, it does wait for the new frame: it doesn't display it straight away, but holds it back whilst it generates and displays the frames in between, and THEN it displays that second frame. This is exactly why FG and MFG add lag and are not a performance increase but an animation smoothness increase.
 
Everyone is so negative about these connectors: "oh, it could burn someone's house down!!" But we need to look at the positive. Think of all the AI data centres and crypto farms it could burn down! :)

Nvidia does not use this connector on its data centre cards.

They are not so stupid as to burn their biggest customers.
 
Interpolation, like TVs do, doubles the input lag. DLSS FG must be doing something else, as it doesn't double input lag.

I'm not going to argue with your description as you've not said pre-rendered, but if you're suggesting DLSS FG is just interpolation then I think you'd be wrong.
It is just AI-based interpolation, yes. That's exactly what it is. Nothing more, nothing less. And this is why NVIDIA requires Reflex to be turned on as well, to have any sensible latency, as indeed without Reflex you would get a 2x+ latency increase.
DLSS4 frame generation is using extrapolation according to everything I can find.
Did you just read Reddit or something? :) Seriously, here's one of the first links to articles about it, where they dumb it down for the average Joe to understand: "In the simplest terms possible, frame generation takes two “real” frames and then uses them to approximate a “fake” frame to interject between them. If that sounds temporally weird in the context of a video game, you’re on the right track: frame generation needs to essentially hold a frame hostage until the interim “fake” frame is generated. It then shows you the “fake” generated frame followed by the “real” frame. And so it goes. This happens exceedingly quickly, of course, but it’s where the added latency comes from, and why it’s not technically wrong to say that turning on frame-gen is a performance net negative."

There are more technical articles about it, along with DF interviewing NVIDIA's VP of AI, who describes it well in words too.
The "Extra" in Intel ExtraSS is precisely because Intel claim it's doing extrapolation, and the articles covering it say it's because they borrowed the idea from NVIDIA's frame gen.
Intel do indeed claim their tech does extrapolation, but they underline in their messaging that this is the opposite of what NVIDIA is doing with FG, which is just interpolation. So, in other words, Intel is confirming the above description of NVIDIA's FG. I haven't seen Intel's tech in action, especially not in direct comparison to FG, to judge if it's any better. Article from TechPowerUp: https://www.techpowerup.com/316835/extrass-framework-paper-details-intels-take-on-frame-generation
"ExtraSS is a technology that relies on frame extrapolation, instead of interpolation on FSR 3 and DLSS 3. "
DLSS 4 FG is only adding around 7ms for the first fake frame and another 3-4ms for 3x and 4x, so that's not straight interpolation.
FG adds much more than that; it's just hard to measure the actual latency. When it starts generating frames, you get the first frame, then the second is delayed while the FG frames are shown in between - that's a big latency jump, even with Reflex. But then it drops a bit, because FG only needs one more real frame to add the next interpolated ones: it keeps the previous "second" real frame (which now becomes the first frame of the next pair), so there's no need to wait for another first frame. It still lowers performance, but it's not a 2x hit, thanks to the above and Reflex.
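
One way to picture where the delay comes from, as a toy model (the per-frame generation cost and the pacing are assumptions, not measured figures):

```python
# Toy model of 2x/3x/4x FG as interpolation: a real frame can only be shown
# after the generated frames that precede it, so it is "held" for part of a
# base frame time plus the generation cost. All figures are illustrative.
base_fps = 60
base_frame_ms = 1000 / base_fps        # 16.7 ms between real frames
gen_cost_ms = 1.0                      # assumed per-cycle generation overhead

for fakes in (1, 2, 3):                # FG 2x, 3x, 4x insert 1, 2, 3 frames
    output_interval = base_frame_ms / (fakes + 1)
    held_for = fakes * output_interval + gen_cost_ms
    print(f"{fakes + 1}x: real frame held ~{held_for:.1f} ms "
          f"(output frame every {output_interval:.1f} ms)")
# Here 2x holds the real frame ~9 ms while 3x/4x hold it ~12-13 ms, which is
# why stepping up from 2x only adds a few ms on top of the initial hit.
```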
 
This situation was the same with the 4090 and the angled connector.
And it never happened with the 3090 and an almost identical connector, because back then NVIDIA designed the power delivery on the graphics card side much better, with higher tolerance for failure. Then they made it worse, and here we are. But this time the media aren't as willing to believe NVIDIA that it's user error, especially after NVIDIA claimed it wouldn't be a problem with the 5090 and it's turned out to be even worse than it was with the 4090.
 
It's literally what TVs have been doing for decades: frame interpolation. Nothing new.
The main difference is the frame pacing and the AI used, which make it a bit better quality and more consistent than what TVs have been doing for ages now. The 5000 series apparently improved mostly on the frame pacing, so it's less jarring.
 
It puzzles me that it doesn't violate some kind of electrical standard.
I suspect that it actually does in at least some places, just nobody ever got the idea to actually check. Maybe now they will. :)
I mean, the equation is pretty simple: you know the gauge of the wire, the number of wires, the wattage and the fact there's no balancing or safety mechanism (a rough sketch of the numbers is below). Buyers need to be protected by safety standards because sometimes it's clear that manufacturers just do not care. Buyers implicitly seem to trust that 'if it's on sale, then it's safe'. People still seem to be flat-out refusing to accept that it's not safe and are still ordering 5090s. Tech influencers also (mostly) seem unwilling to just come out and say it.
Big names this time aren't letting it off that easily and are publicly bashing NVIDIA for it. That might finally draw some attention from the people in power too, to take a closer look at such power-hungry consumer devices with regard to consumer safety AND power use as well.
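
As a rough sketch of that "simple equation" (the six live wires and the ~9.5A per-pin rating are the commonly cited figures for this style of connector, so treat them as assumptions rather than spec quotations):

```python
# Back-of-the-envelope current maths for a 12VHPWR / 12V-2x6 style connector.
# The 6 live wires and ~9.5 A per-pin rating are commonly cited figures,
# used here as assumptions for illustration only.
board_power_w = 575          # roughly a 5090 under load
rail_voltage = 12.0
live_wires = 6               # 12 V conductors sharing the current
pin_rating_a = 9.5           # often-quoted per-pin rating

total_current = board_power_w / rail_voltage            # ~48 A
per_wire_if_balanced = total_current / live_wires       # ~8 A each
print(f"Total ~{total_current:.0f} A, ~{per_wire_if_balanced:.1f} A per wire "
      f"if perfectly balanced (rating ~{pin_rating_a} A)")

# With no per-wire sensing or balancing, nothing forces an even split.
# If, say, two wires ended up carrying most of the load between them:
print(f"Two-wire worst case: ~{total_current / 2:.0f} A per wire - well past the rating")
```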
 
The new Linus vid where he’s using a 5090 at 8K is an entertaining watch.

It goes through all of the pros and cons of DLSS, Ray Tracing and Frame Gen… as well as how these things stack together, in a chatty ‘friends on the sofa’ format.


The most interesting thing for me is seeing their reactions and dislike of the artefacts caused by using DLSS Quality. The other guy says something along the lines of:

“See… this is why I don’t want to use DLSS. I get told I have to turn this on to enjoy ray tracing at any sort of playable framerate… but doing so introduces a load of artefacts that I hate… so what’s the point?!”

The major takeaway from them both was that a 5090 takes away the need to rely on DLSS, whereas it's necessary to enjoy a good frame rate on a 4090. On top of the obvious “8K gaming is pointless.”

Cool vid.
 
Worth noting that the input latency in a game can be a decent amount more than the time to render a single frame, so waiting one frame extra for FG wouldn't mean double the latency. Turn off Reflex in Cyberpunk and the latency is pretty nasty, for example, which I'm sure NV use to make the FG latency numbers look better.
 
Hopefully the rumour of the 5090's stock being “stupidly high soon” is true. I haven't been bothered to try and get one yet, but will do as soon as they're available and hopefully at more ‘sensible’ prices. Not sure whether to try and get an FE or go for one of the cheaper AIBs.

Is this melting problem just happening with the FE cards?

Are the FE cards generally better made compared to the cheaper AIB brands?

What would be the better budget brand to go for - Zotac, Palit etc.?
 
Worth noting that the input latency in a game can be a decent amount more than the time to render a single frame, so waiting one frame extra for FG wouldn't mean double the latency. Turn off Reflex in Cyberpunk and the latency is pretty nasty, for example, which I'm sure NV use to make the FG latency numbers look better.
This is why I like to use modern tools to measure not just frame latency but game engine latency, along with GPU Busy. Nothing beats Intel's PresentMon currently (it's open source too), though even that has some trouble with FG. NVIDIA seem to have their own tool based on PresentMon, but they modified it and did not make the changes public, so it's a bit of a black box, hence I avoid it.

An example of what I see in CP2077 just now on my 4090 - exact settings, scene etc. are irrelevant (it's with all settings at max, including PT and DLSS Quality); it's just a pure comparison of Reflex on/off and FG:
Reflex off, FG off: 72FPS, 50ms game latency
Reflex on, FG off: 72FPS, 26ms game latency
Reflex on, FG on: 125FPS, 36ms game latency (and frame time itself is about 16ms)

Reflex by itself cuts game latency in half, then FG adds 10ms on top of it (still with Reflex, which matters a lot with FG). Frame pacing isn't ideal with FG, but it's playable with a base of 72FPS, even though I can already feel the game lagging a bit whilst using my mouse. Add game latency, mouse and KB latency and monitor latency, and we're looking (even with Reflex) at 60ms+ of overall input lag. Humans need on average over 250ms to react to things changing on screen, which puts the total at over 300ms of input lag including the human. Younger gamers are quite a bit below 200ms reaction time, though, apparently. :) But even in my 40s I can instantly feel the difference between FG on and off, and between playing on a modern PC versus older consoles and computers (8 and 16 bit machines), where input lag was close to 0 and it was just down to human reaction.
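
A rough budget of that chain, for illustration (only the 36ms game latency comes from the measurements above; the peripheral and display figures are assumed):

```python
# Rough end-to-end input-lag budget for the FG-on case above. Only the
# 36 ms game latency is a measured figure; the rest are assumptions.
chain_ms = {
    "game latency (Reflex on, FG on)": 36,  # measured above
    "mouse / keyboard":                 8,  # assumed
    "display processing + scanout":    16,  # assumed
}
system_lag = sum(chain_ms.values())          # ~60 ms
human_reaction_ms = 250                      # rough average reaction time

print(f"System input lag: ~{system_lag} ms")
print(f"Including human reaction: ~{system_lag + human_reaction_ms} ms")
```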
 
Hopefully the rumour of the 5090's stock being “stupidly high soon” is true. I haven't been bothered to try and get one yet, but will do as soon as they're available and hopefully at more ‘sensible’ prices. Not sure whether to try and get an FE or go for one of the cheaper AIBs.

Is this melting problem just happening with the FE cards?

Are the FE cards generally better made compared to the cheaper AIB brands?

What would be the better budget brand to go for - Zotac, Palit etc.?
I bet there are many more FE cards out in the wild than AIB 5090s at this point. FEs began being manufactured in the first weeks of January, and I assume AIBs in the weeks after, hence the lack of AIB cards at the moment.

It is tricky to answer the 'which is better made' question. The 5090 FE runs hot compared to AIBs; even the near-MSRP (in theory) 5090 GameRock runs a bit cooler. Does that mean the FE is worse quality? No. But the 5090 FE basically needs to be power limited unless you want to cook the memory in a few years' time, imo.
 
This is why I like to use modern tools to measure not just frame latency but game engine latency, along with GPU Busy. Nothing beats Intel's PresentMon currently (it's open source too), though even that has some trouble with FG. NVIDIA seem to have their own tool based on PresentMon, but they modified it and did not make the changes public, so it's a bit of a black box, hence I avoid it.

An example of what I see in CP2077 just now on my 4090 - exact settings, scene etc. are irrelevant (it's with all settings at max, including PT and DLSS Quality); it's just a pure comparison of Reflex on/off and FG:
Reflex off, FG off: 72FPS, 50ms game latency
Reflex on, FG off: 72FPS, 26ms game latency
Reflex on, FG on: 125FPS, 36ms game latency (and frame time itself is about 16ms)

Reflex by itself cuts game latency in half, then FG adds 10ms on top of it (still with Reflex, which matters a lot with FG). Frame pacing isn't ideal with FG, but it's playable with a base of 72FPS, even though I can already feel the game lagging a bit whilst using my mouse. Add game latency, mouse and KB latency and monitor latency, and we're looking (even with Reflex) at 60ms+ of overall input lag. Humans need on average over 250ms to react to things changing on screen, which puts the total at over 300ms of input lag including the human. Younger gamers are quite a bit below 200ms reaction time, though, apparently. :) But even in my 40s I can instantly feel the difference between FG on and off, and between playing on a modern PC versus older consoles and computers (8 and 16 bit machines), where input lag was close to 0 and it was just down to human reaction.
Thanks for sharing those numbers. Very interesting. I still find FG a non-starter in first-person games (incl. CP2077), even if my base frame rate is >120fps. Maybe if I had a 500fps monitor and a base frame rate of 250fps it would be fine, but then I wouldn't need it. The only perfect use case I have found is Microsoft Flight Sim.
 
Yeah, Cyberpunk has never felt good to me in terms of input latency when played as a shooter, but it's definitely a lot better since Reflex got added. For me it ideally needs to be <30ms for mouse aiming, as far as NVIDIA's monitoring of PC latency goes in that game. I played through the whole of Phantom Liberty using a controller and a melee weapon build though (because PT needed FG for the FPS :p)
 
Worth noting that the input latency in a game can be a decent amount more than the time to render a single frame, so waiting one frame extra for FG wouldn't mean double the latency. Turn off Reflex in Cyberpunk and the latency is pretty nasty, for example, which I'm sure NV use to make the FG latency numbers look better.
So using the 60fps example from above, waiting for a second frame to render before being able to display the frame-gen frame and then the second frame would result in at least a 16.6ms increase in latency - which it still doesn't add.

Yes, Reflex does reduce latency, but it does so with frame gen off as well.
 
I don't see why it needs to be at least 16.6ms, given that a cleverly enough designed system could start displaying a bit early, based on when it expects the next frame (and thus the interpolated frame) to be ready.
 
Because people are saying it's doing simple interpolation only (like a TV), which means the second frame needs to be fully rendered before it can be used for the interpolation. At 60fps the GPU renders a real frame every 16.6ms, so it's not ready to be used for interpolation before then.

As soon as you start doing anything with the frame before it's fully rendered, then it's not just interpolation between two fully rendered frames; it's extrapolating parts of it at least.

VR does a similar thing with asynchronous spacewarp, which does use prediction as part of its function.
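
A toy contrast of the two approaches being discussed, using a single number to stand in for a frame (real FG/ASW work on motion vectors, optical flow and AI prediction, so this is purely conceptual):

```python
# Conceptual contrast only: a single number stands in for "the image".
def interpolate(frame_n, frame_n_plus_1, t=0.5):
    # Needs the NEXT real frame, so the pipeline must wait for it -> added latency.
    return frame_n + (frame_n_plus_1 - frame_n) * t

def extrapolate(frame_n_minus_1, frame_n):
    # Predicts ahead from PAST frames only -> no waiting, but it can guess wrong.
    return frame_n + (frame_n - frame_n_minus_1)

a, b, c = 10.0, 12.0, 15.0   # three consecutive "real" frames
print(interpolate(b, c))     # 13.5 - the in-between value, but frame c had to exist first
print(extrapolate(a, b))     # 14.0 - available before c is rendered, and slightly off
```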
 