NVIDIA RTX 50 SERIES - Technical/General Discussion

Because people are saying it's doing simple interpolation only (like a TV) which means the 2nd frame needs to be fully rendered to be used for the interpolating. At 60fps the GPU is rendering a real frame every 16.6ms, so it's not ready to be used for interpolation before then.
I wouldn't say it's simple interpolation, since the GPU has access to extra data such as motion vectors, and there is a cost to FG, but it looks to me like the input latency is always more than two frames anyway. So the best it can do is streamline the process for minimal added latency, or brute-force it with a higher base frame rate.

105 FPS in Cyberpunk is about 22.2ms latency for me - add FG and it's ~174 FPS at 26.4ms - those are Nvidia's overlay numbers though :)
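For anyone who wants to check the "more than two frames" claim, here's a rough back-of-the-envelope calculation using just the FPS and overlay latency figures quoted above (the 2x FG split is an assumption on my part):

```python
# Rough sanity check of the overlay figures quoted above.
# Assumption: with 2x FG, roughly half of the displayed frames are generated,
# so the "real" frame rate is about half the displayed one.

def frames_of_latency(fps, latency_ms):
    """How many rendered-frame intervals fit into the reported latency."""
    frame_time_ms = 1000.0 / fps
    return latency_ms / frame_time_ms

# FG off: 105 FPS at ~22.2 ms PC latency (overlay figure)
print(f"FG off: {1000/105:.1f} ms frame time, "
      f"{frames_of_latency(105, 22.2):.1f} frames of latency")

# FG on: ~174 FPS displayed at ~26.4 ms, so roughly 87 real FPS
print(f"FG on:  {1000/87:.1f} ms real frame time, "
      f"{frames_of_latency(87, 26.4):.1f} real frames of latency")
```

Both cases come out at roughly 2.3 real frames of latency, which lines up with the "always more than two frames" observation.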
 

Yeah, the convo started because someone claimed FG is exactly the same as what TVs have been able to do for years (which introduces loads of input lag).
 
Because people are saying (...)
These people are NVIDIA themselves... You can go and raise it with them if you don't believe them. :) You can check this article and all the NVIDIA slides about FG: https://hothardware.com/reviews/nvidia-geforce-rtx-40-architecture-overview

For FG to work it needed the Optical Flow Accelerators to avoid big artefacts. These need 2 frames to compare, calculate how things move, and then the AI can quickly generate (interpolate) a frame in between, insert it and then display the second real frame. With the new algorithm they got rid of optical flow and use a pure AI method instead, but it's still just comparing 2 real frames and interpolating frames in between. It just does it in a smarter way than TVs do.
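As a very rough illustration of that "compare two frames, work out the motion, generate one in between" idea (a toy sketch, nothing like NVIDIA's actual optical flow or AI model), the key point being that the second real frame has to exist before anything can be generated:

```python
import numpy as np

def interpolate_frame(frame_a, frame_b, motion_a_to_b, t=0.5):
    """Toy in-between frame: warp frame_a part-way along its motion vectors,
    then blend with frame_b to paper over holes. Real FG is far smarter, but
    the constraint is the same: frame_b must be fully rendered first."""
    h, w, _ = frame_a.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Shift each pixel of frame_a a fraction t of the way towards frame_b.
    src_x = np.clip(np.rint(xs - motion_a_to_b[..., 0] * t).astype(int), 0, w - 1)
    src_y = np.clip(np.rint(ys - motion_a_to_b[..., 1] * t).astype(int), 0, h - 1)
    warped_a = frame_a[src_y, src_x].astype(np.float32)
    # Naive blend; disocclusions are where the artefacts come from.
    return (warped_a * (1 - t) + frame_b.astype(np.float32) * t).astype(frame_a.dtype)
```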
Also, this is a direct link to NVIDIA's AI VP describing it, as one example from that whole interview (there's more in it):
it's doing simple interpolation only (like a TV) which means the 2nd frame needs to be fully rendered to be used for the interpolating. At 60fps the GPU is rendering a real frame every 16.6ms, so it's not ready to be used for interpolation before then.
Its frame interpolation is a bit more complex than what TVs are doing, as it's optimised for latency (hence Reflex is required). TVs don't care about latency at all, so their interpolation is useless for gaming (way too much input lag). Also, games aren't responding to input with 16.6ms latency on every single frame at 60FPS.

Use the PresentMon tool, or at least NVIDIA's built-in stats, to see the game's actual input latency - in my experience, to get it down to 16.6ms the game needs to run natively at about 175FPS, not 60FPS, though that also depends on the specific engine. When you look at how long the GPU took to render a frame with old-school tools, you don't actually measure FG at all, you usually just measure real frame time. Gamers Nexus has a few good videos about this and PresentMon, discussing the difficulty of measuring the real latency introduced by FG - and it gets even harder with MFG.
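If anyone wants to poke at a PresentMon capture themselves, something like this gets you the basics. Hedged: the CSV column names vary between PresentMon versions, so check your header and adjust, and note this still doesn't capture full click-to-photon latency or FG properly:

```python
import csv
from statistics import mean

# Adjust these to match your PresentMon version's CSV header.
FRAME_COL = "msBetweenPresents"    # frame time between presents
DISPLAY_COL = "msUntilDisplayed"   # present-to-display, one slice of the latency chain

def summarise(csv_path):
    frame_times, display_lat = [], []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                frame_times.append(float(row[FRAME_COL]))
                display_lat.append(float(row[DISPLAY_COL]))
            except (KeyError, ValueError):
                continue  # skip dropped frames or rows missing these columns
    print(f"avg frame time: {mean(frame_times):.2f} ms (~{1000 / mean(frame_times):.0f} FPS)")
    print(f"avg present-to-display: {mean(display_lat):.2f} ms")

# summarise("presentmon_capture.csv")
```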
As soon as you start doing anything with the frame before it's fully rendered, then it's not just interpolation between 2 fully rendered frames, it's extrapolating parts of it at least.
That's not how it works, though.
VR does a similar thing with asynchronous spacewarp, which does use prediction as part of its function.
FG and MFG don't work in VR for a reason. Reprojection used in VR is similar to what Reflex 2 is doing, not FG. Completely different things.
 
Looks like the new nvflash 5.867.0 works with the 50 series. I'll be throwing the 600W OC BIOS onto my Palit when it arrives. That 25W and 1% gain would've really bugged me :D
Hi mate, was wondering if you know where to download the Palit 5080 GameRock OC BIOS. Couldn't find it anywhere.
 
Hey, BIOSes usually get uploaded to the TechPowerUp BIOS list. There are only 3 or 4 5090 BIOSes available on there atm, and the Palit GameRock OC BIOS hasn't been uploaded by anyone yet. The ability to extract and flash the 50 series has only been around for a few days, and I'm not even sure I've seen an OC in the wild yet. Over the coming weeks I'm sure it will be uploaded by someone.
 

Agree with all of this, Nvidia don't care about gamers.


Wow, that first video is suggesting that the 5070 Ti, despite being a new generation and costing only $100 less than the RTX 4080 did, still performs worse than the 4080. So no generational improvement at all. This gen is so disappointing. Roll on a die-shrink refresh or a new gen.
 
Welcome to modern era graphics. It has been that way since the 30 series.
 
I don't think so, I paid £800 for my 4070 Ti Super, that's way cheaper than the 3090 was? https://technical.city/en/video/GeForce-RTX-3090-vs-GeForce-RTX-4070-Ti-SUPER

With the refresh cards, hopefully on a die shrink, we'll see some real improvements. In every metric the 5000 series stands apart as probably the worst generation Nvidia have ever launched.
According to TPU it's ~12% faster for a card that came out over 3 years later with much less VRAM. After that length of time, that's hardly outstanding progress, is it?
 

It's faster and cheaper. That's what you want from a new gen.

5080 cards are selling for more than 4090s and are slower (they also have less VRAM than a 4090). If the 5070 Ti comes in at a similar price to the 4080 but is slower, that's woeful.
 
I agree, it's one of the outlying examples, but part of that is just because of how bad value the 3090 was in the first place: ~15% faster than the 3080 for more than double the money.

The 3080 to 4070 Ti Super looks less appealing money-wise too: 14% higher RRP for ~28% more performance after more than 3 years.
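Using just the figures in that comparison (14% higher RRP, ~28% more performance, a bit over 3 years), the value gain works out pretty small:

```python
# Value maths from the figures above (3080 -> 4070 Ti Super).
price_ratio = 1.14   # ~14% higher RRP
perf_ratio = 1.28    # ~28% more performance
years = 3            # roughly three years between launches

value_gain = perf_ratio / price_ratio - 1
per_year = (perf_ratio / price_ratio) ** (1 / years) - 1
print(f"perf per pound: +{value_gain:.1%} overall, ~+{per_year:.1%} per year")
# -> perf per pound: +12.3% overall, ~+3.9% per year
```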
 
Apparently in a month there is going to be an excess of stock?


Here's a snip from that page...

[image: snip from the linked page]


I wonder what combination that is of Nvidia over-estimating datacentre demand and companies like DeepSeek demonstrating that you don't need anywhere near as much hardware as Meta, OpenAI etc. have purchased in order to produce LLMs that are superior. On the second half of that equation, companies like Microsoft, Amazon etc. would all have cut back their orders of Nvidia's datacentre solutions, and so we get this apparent repurposing.
 
This is why I like to use modern tools to measure not just frame latency but game engine latency, along with GPU Busy. Currently nothing beats Intel's PresentMon (it's open source too), though even that has some trouble with FG. NVIDIA has their own tool based on PresentMon, but they modified it and didn't make any of the changes public, so it's a bit of a black box, hence I avoid it.

Example of what I see in CP2077 just now on my 4090 - exact settings, scene etc. are irrelevant (it's with all settings maxed, including PT, and DLSS Quality), just a pure comparison of Reflex on/off and FG:
Reflex off, FG off: 72FPS, 50ms game latency
Reflex on, FG off: 72FPS, 26ms game latency
Reflex on, FG on: 125FPS, 36ms game latency (and frame time itself is about 16ms)

Reflex by itself cuts game latency in half, then FG adds 10ms on top of it (still with Reflex, which matters a lot with FG). Frame pacing isn't ideal with FG, but it's playable with a base 72FPS, even though I can already feel the game lagging a bit whilst using my mouse. Adding up game latency, mouse and keyboard latency and monitor latency, we're looking (even with Reflex) at 60ms+ of overall input lag. Humans need on average over 250ms to react to things changing on the screen, which puts the total at over 300ms of input lag including human reaction. Younger gamers are apparently quite a bit below 200ms reaction time, though. :) But even in my 40s I can instantly feel the difference between FG on and off, and between playing on a modern PC versus older consoles and computers (8- and 16-bit machines), where input lag was close to 0 and it was just down to human reaction.
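Roughly, the chain I'm describing looks like this (the game latency is the measured 36ms above; the peripheral and display numbers are ballpark assumptions, not measurements):

```python
# Rough end-to-end input lag chain. Only game_latency_ms is a measured value
# (Reflex on, FG on, from the numbers above); the rest are ballpark assumptions.
game_latency_ms = 36       # measured in-game latency with Reflex + FG
peripherals_ms = 10        # assumed mouse/keyboard contribution
display_ms = 15            # assumed monitor processing + pixel response
human_reaction_ms = 250    # average human reaction time mentioned above

system_lag = game_latency_ms + peripherals_ms + display_ms
print(f"system input lag: ~{system_lag} ms")                              # ~61 ms
print(f"including human reaction: ~{system_lag + human_reaction_ms} ms")  # ~311 ms
```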
Can I ask what driver and settings you're using to achieve these, please?

Running on my 4090 I play in DLSS Performance mode with path tracing on and everything maxed @ 4K. Once DLSS 4 launched, driver 566.36 started giving me a steady 115fps & 45ms latency with Reflex on.

The two new drivers they've released since the 50 series launch just tanked performance. I'd get about 95fps max and latency around 90-100ms. Even the new 572.42 driver did nothing to improve that.

It's starting to feel like Nvidia launched new drivers without caring in the slightest what they did to the previous top-end card.
 