
DLSS Momentum Continues: 50 Released and Upcoming DLSS 3 Games, Over 250 DLSS Games and Creative Apps Available Now

AMD won't be happy :cry:

 
I think you meant Nvidia. Why would AMD care if you can use DLSS with FG on a 3080?

Read the comment by puredark.....

AMD are blocking other "upscaling" tech from being used with their frame gen. So much for "giving people the choice", eh? But nope, AMD decide what is best for you.

Not that it'll make any difference to me or others who want a smooth and visually good experience anyway, unless puredark can sort out the issues with AMD's frame gen.
 
AMD are blocking other "upscaling" tech being used with their frame gen, so much for "giving people the choice"...
It's all open source; that's what modders do. DLSS + FG on a 3080 (if it worked properly) would only impact Nvidia, as FG is exclusive to the 4000 series. I hope he does get it working and that it works better than the built-in FSR + FG.
 
Intel details its own frame generation

Interesting spin on what Nvidia and AMD are doing - since frame extrapolation only uses previous frames and doesn't have to delay image rendering the way FSR and DLSS frame generation do, it could actually have a smaller latency penalty at the cost of less precision/more artifacts.

Frame extrapolation has less latency but has difficulty handling the disoccluded areas because of lacking information from the input frames. Our method proposes a new warping method with a lightweight flow model to extrapolate frames with better qualities to the previous frame generation methods and less latency comparing to interpolation based methods.

I like that each vendor has different pros and cons to their methods - gives more choice to gamers and should create more competition.
 

Hopefully Intel's option will actually be usable where it is a viable option; they have proven themselves with XeSS, and their approach to these techniques is very good, so I have faith in Intel! :D
 
Intel are now working on frame extrapolation, better than frame generation, so as Alex says, you can probably expect NV/AMD to adopt this technique and see current DLSS/FSR updated with the techniques Intel bring to the table, for even better gains.

 
I hope he does get it working and that it works better than the built in FSR + FG.

Someone's already got FSR3 FG + DLSS working :p.

How to install AMD FSR 3 Frame Generation Mod for Cyberpunk 2077:

1. Download the dlssg-to-fsr3-0.3.zip archive (~4.27 MB). https://github.com/Nukem9/dlssg-to-fsr3/releases

2. Extract the archive into the Cyberpunk 2077 folder so that dbghelp.dll and dlssg_to_fsr3_amd_is_better.dll end up in the same folder as the main EXE file.


3. Very important: Hardware Accelerated GPU Scheduling must be enabled in your Windows settings in order to utilize FSR 3 Frame Generation.

4. Launch the game, go into the graphics options and enable DLSS Frame Generation.

5. Play the game with FSR 3 Frame Generation.
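For anyone happier on a command line, steps 1-2 can be sketched roughly as below. This is an illustration only, not the mod's official instructions: it uses a throwaway temp directory and empty stand-in files so it can run anywhere. For a real install, point GAME_DIR at your actual Cyberpunk 2077\bin\x64 folder and use the real files from the zip on the releases page linked above.

```shell
# Illustrative only: stand-in paths and empty files, not a real install.
GAME_DIR=$(mktemp -d)                 # stand-in for ...\Cyberpunk 2077\bin\x64
touch "$GAME_DIR/Cyberpunk2077.exe"   # stand-in for the real game EXE
mkdir -p extracted                    # pretend this is the unzipped archive
touch extracted/dbghelp.dll extracted/dlssg_to_fsr3_amd_is_better.dll
cp extracted/*.dll "$GAME_DIR/"       # step 2: the DLLs must sit beside the EXE
ls "$GAME_DIR"                        # should list the EXE plus both DLLs
```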


I've tested it and it works well on the 3080 FE.
Also got it to work on The Witcher 3 and Ark: Survival Ascended. Amazing stuff.
 
Intel are now working on frame extrapolation, better than frame generation...
@Grim5 already posted this above. 'Better' remains to be seen - better latency perhaps at the cost of worse image quality (read the article).
 
It's in the very quote you posted:
Our method proposes a new warping method with a lightweight flow model to extrapolate frames with better qualities to the previous frame generation methods and less latency comparing to interpolation based methods.
Examples need to be shown of course, but it's pretty black and white in its function: faster, better quality. XeSS is already really high quality and in various games, like RoboCop, is actually temporally more stable than even DLSS.
 
You snipped the important part:

Frame interpolation and extrapolation are two key methods of Temporal Super Sampling. Usually frame interpolation generates better results but also brings latency when generating the frames.
...
Frame extrapolation has less latency but has difficulty handling the disoccluded areas because of lacking information from the input frames. Our method proposes a new warping method with a lightweight flow model to extrapolate frames with better qualities to the previous frame generation methods and less latency comparing to interpolation based methods.

They're saying their method has better qualities than prior extrapolation methods (which are inferior to interpolation) - hence this'll have better latency than Nvidia/AMD at the cost of image quality (since guessing what the next frame is going to be isn't going to be as accurate as interpolating between two already generated frames).
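To make the latency point concrete, here's a toy 1-D sketch (my own illustration, not any vendor's actual algorithm) of why interpolation has to hold a real frame back while extrapolation doesn't:

```python
# Toy 1-D frame generation: each "frame" is just an object's position.
def interpolate(frame_n, frame_n1):
    # Generated frame sits between two real frames, so displaying frame_n
    # is delayed until frame_n1 has been rendered -> added latency.
    return (frame_n + frame_n1) / 2

def extrapolate(frame_prev, frame_n):
    # Generated frame is a guess at the *next* frame from past frames only,
    # so nothing is held back -> less latency, but guesses can be wrong.
    return frame_n + (frame_n - frame_prev)

frames = [0.0, 1.0, 2.0, 4.0]              # motion speeds up at the end
print(interpolate(frames[1], frames[2]))   # 1.5 - exact, but needed frame 2
print(extrapolate(frames[1], frames[2]))   # 3.0 - the real next frame is 4.0
```

The mispredicted 3.0 versus the real 4.0 is exactly the precision/artifact trade-off being argued about here.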
 
I think we're discussing the same thing but reading it differently.

There is no extrapolation currently, it's all interpolation. Intel's method uses future frame data with no hold, as Alex describes in their take on the news.

What Intel say in the quote you posted again states that, because of the difficulty in the disoccluded areas of the frame, they devised a way to mitigate that difficulty with a new "warping method" for frame extrapolation. As a result, this extrapolation has better qualities than existing frame generation methods.

It's the warping method used to extrapolate future frames which is seemingly giving the boost. It's aligned with Ray Reconstruction, where Nvidia replace several traditional RT denoisers with a single AI pass to predict what the light and ray bounces are doing, which then results in a faster and, at the same time, higher quality image render.
 
There is no extrapolation currently, it's all interpolation

There's no extrapolation on consumer GPUs currently - Intel is discussing methods of temporal supersampling, of which extrapolation is one:

Frame extrapolation is another way to increase the framerate by only using the information from prior frames. Li et al. [2022] proposed an optical flow-based method to predict flow based on previous flows and then warp the current frame to the next frame. ExtraNet [Guo et al. 2021] uses occlusion motion vectors with neural networks to handle dis-occluded areas and shading changes with G-buffers information. Their methods fail when the scene becomes complex and generate artifacts in the disoccluded areas.

mrk said:
What intel say in the quote you posted again states that because of the difficulty in the disoccluded areas of the frame, they devised a way to mitigate that difficulty by using frame extrapolation (this new "warping method"). As a result this extrapolation has better qualities than existing frame generation methods.

You're misinterpreting the article - as we can see above, they're saying their extrapolation method has better qualities than these prior extrapolation methods.
 
The quote literally says though:

lightweight flow model to extrapolate frames with better qualities to the previous frame generation methods
Nowhere does it say previous extrapolation techniques; this is based on current techniques, which is interpolation - which is the exact takeaway Digital Foundry made from the announcement too.

Let's just wait and see when the presentation hits the streams.
 
Nowhere does it say previous extrapolation techniques, this is based on current techniques, which is interpolation...
Extrapolation = Frame Generation - the whole paper is about improvements over prior extrapolation methods, and interpolation (Nvidia/AMD) is only mentioned as an alternate method of Frame Generation (one of generally superior quality to existing extrapolation methods whilst having greater latency).

Direct copy/paste from the paper itself (ExtraSS: A Framework for Joint Spatial Super Sampling and Frame Extrapolation):

Frame extrapolation is another way to increase the frame rate by only using the information from prior frames. Li et al. [2022] proposed an optical flow based method to predict flow based on previous flows and then warp the current frame to the next frame. ExtraNet [Guo et al. 2021] uses occlusion motion vectors with neural networks to handle disoccluded areas and shading changes with G-buffers information. Their methods fail when the scene becomes complex and generate artifacts in the disoccluded areas. Furthermore, it requires higher resolution inputs since they only generate new frames. We are the first one to propose a joint framework to solve both spatial super sampling and frame extrapolation together while staying efficient and high quality. Note that NVIDIA DLSS 3 is a combination of super sampling and interpolation since it generates intermediate frames. The interpolation based method will introduce extra latency for the rendering pipeline, so it requires an additional module NVIDIA Reflex to decrease the latency. However, NVIDIA Reflex decreases the latency by reducing the bottleneck between CPU and GPU, and it doesn't eliminate the latency from the frame interpolation.

They're proposing a method of doing upscaling (spatial SS) and frame extrapolation (temporal SS) and it's better than prior extrapolation methods.
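As a rough illustration of what warping with a flow model means here (a toy sketch of the general idea, not Intel's actual method), extrapolation shifts the pixels of the latest real frame along estimated motion vectors, and any pixel nothing lands on is a disoccluded hole:

```python
# Toy 1-D flow warping: the "image" is a list of brightness values and the
# flow is an estimated per-pixel motion (here a constant +1 shift).
def warp_forward(frame, flow):
    out = [0] * len(frame)             # unwritten pixels stay 0 -> these are
    for x, v in enumerate(frame):      # the "disoccluded areas" the paper
        tx = x + flow[x]               # says are hard to fill in
        if 0 <= tx < len(frame):
            out[tx] = v
    return out

prev = [0, 9, 0, 0]                    # bright pixel at index 1
curr = [0, 0, 9, 0]                    # it moved right by one pixel...
flow = [1, 1, 1, 1]                    # ...so estimated motion is +1 per pixel
print(warp_forward(curr, flow))        # [0, 0, 0, 9] - extrapolated next frame
```

The refinement network described in the paper would then be responsible for filling in the holes this naive warp leaves behind.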

 
No, there is nothing that points to the image quality being lower than FG interpolated frames currently. Direct from Intel:

We introduce ExtraSS, a novel framework that combines spatial super sampling and frame extrapolation to enhance real-time rendering performance. By integrating these techniques, our approach achieves a balance between performance and quality, generating temporally stable and high-quality, high-resolution results.

Leveraging lightweight modules on warping and the ExtraSSNet for refinement, we exploit spatial-temporal information, improve rendering sharpness, handle moving shadings accurately, and generate temporally stable results. Computational costs are significantly reduced compared to traditional rendering methods, enabling higher frame rates and alias-free high resolution results.

Evaluation using Unreal Engine demonstrates the benefits of our framework over conventional individual spatial or temporal super sampling methods, delivering improved rendering speed and visual quality. With its ability to generate temporally stable high-quality results, our framework creates new possibilities for real-time rendering applications, advancing the boundaries of performance and photo-realistic rendering in various domains.
I remain aligned with Digital Foundry's take on this.

Edit: Yeah, to clarify - what the Intel paper says is that, as a baseline, frame extrapolation is lower quality than interpolation, which is why NV and AMD use latency reduction tech to supplement interpolated frame gen. Intel has worked around that by using the new warping method (mentioned earlier) to mitigate both the quality and latency issues with one method. So the end result of ExtraSS is high quality FG without the latency-inducing output that requires Reflex etc. to reduce input lag. The lower quality trait of extrapolation therefore doesn't apply; it's just a reference point describing what extrapolation was like before the new warping method was introduced.

If that works across the board like XeSS generally does regardless of GPU vendor (assuming it just uses the AI cores on whatever GPU is installed rather than a profile specific to Arc GPUs), then for a short time it's check, in chess terms, for NV and AMD until they update FSR3 and DLSS FG to use these new methods too.
 
No there is nothing that points to the image quality being lower than FG interpolated frames currently... I remain aligned with Digital Foundry's take on this.
I'm afraid you're going to be disappointed then. This research is interesting, but it isn't magic.

Nvidia has two perfect frames to interpolate between in addition to motion vectors and whatever buffers the engine exposes for DLSS and frame-gen - and even then, it's far from perfect. AMD has to turn frame-gen off altogether for UI elements as it's unable to resolve them.

Intel's proposed method *is* novel - and if they're able to get anywhere close to the quality of Nvidia/AMD's methods then I'll be impressed, but they're working with significantly less image data and there will be trade-offs.
 
People said the same about Ray Reconstruction with DLSS 3.5 too, but here we are: higher quality ray tracing and reflections with zero trade-offs in performance or image quality. The by-product was about 4-5 fps gained too, which wasn't even part of the design process of RR, just an observed gain thanks to its efficient nature.

OK, Ray Reconstruction relies on Tensor cores to work since it uses the DLSS pipeline, but it looks like ExtraSS works similarly, in that it relies on XeSS as its foundation and then introduces the new add-ons.
 