amd won't be happy

I think you meant Nvidia. Why would AMD care if you can use DLSS with FG on a 3080?
Read the comment by puredark.....

It's all open source; that's what modders do. DLSS + FG on a 3080 (if it worked properly) would only impact Nvidia, as FG is exclusive to the 4000 series. I hope he does get it working, and that it works better than the built-in FSR + FG.
AMD are blocking other "upscaling" tech from being used with their frame gen - so much for "giving people the choice", eh? But nope, AMD decide what is best for you.

Not that it'll make any difference to me or others who want a smooth and visually good experience anyway, unless puredark can sort out the issues with AMD's frame gen.
Interesting spin on what Nvidia and AMD are doing - since frame extrapolation only uses previous frames and doesn't have to delay image rendering the way FSR and DLSS do, it could actually have less of a latency penalty, at the cost of less precision/more artifacts.

Intel details its own frame generation
Intel Frame Generation Technology For XeSS Could Be Coming Soon: ExtraSS With Frame Extrapolation To Boost Game FPS
Intel could be the third major PC player to debut its frame generation technology for its XeSS framework, known as ExtraSS.

wccftech.com
Frame extrapolation has less latency but has difficulty handling the disoccluded areas because of lacking information from the input frames. Our method proposes a new warping method with a lightweight flow model to extrapolate frames with better qualities to the previous frame generation methods and less latency comparing to interpolation based methods.
I like that each vendor has different pros and cons to their methods - gives more choice to gamers and should create more competition.
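The interpolation-vs-extrapolation trade-off discussed above can be sketched in a few lines. This is purely illustrative toy code, not any vendor's algorithm - frames are reduced to short lists of pixel values:

```python
# Toy contrast between the two frame-generation approaches discussed above.
# Real methods use motion vectors, optical flow and neural networks,
# not per-pixel arithmetic like this.

def interpolate(prev_frame, next_frame):
    # Needs the NEXT real frame before it can produce the in-between one,
    # so the display of prev_frame is held back -> extra latency.
    return [(a + b) / 2 for a, b in zip(prev_frame, next_frame)]

def extrapolate(older_frame, prev_frame):
    # Predicts forward from PAST frames only -> no added wait, but newly
    # revealed (disoccluded) areas must be guessed -> artifacts.
    return [b + (b - a) for a, b in zip(older_frame, prev_frame)]

f0, f1 = [0.0, 10.0], [2.0, 12.0]
print(interpolate(f0, f1))  # [1.0, 11.0] - midpoint of two real frames
print(extrapolate(f0, f1))  # [4.0, 14.0] - linear guess at the next frame
```

The latency difference falls straight out of the inputs: `interpolate` cannot run until the next real frame exists, while `extrapolate` never has to wait for one.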
How to install AMD FSR 3 Frame Generation Mod for Cyberpunk 2077:
1. Download the dlssg-to-fsr3-0.3.zip archive (~4.27 MB). https://github.com/Nukem9/dlssg-to-fsr3/releases
2. Extract the archive into the Cyberpunk 2077 folder so that the dbghelp.dll and dlssg_to_fsr3_amd_is_better.dll are in the same folder as the main EXE file.
3. Very important: Hardware-Accelerated GPU Scheduling must be enabled in your Windows settings in order to utilize FSR 3 Frame Generation.
4. Launch the game, go into the graphics options and enable DLSS Frame Generation.
5. Play the game with FSR 3 Frame Generation.
Intel are now working on Frame Extrapolation, better than Frame Generation, so as Alex says, probably expect NV/AMD to adopt this technique and see current DLSS/FSR be updated using techniques Intel bring to the table for even better gains etc.

@Grim5 already posted this above. 'Better' remains to be seen - better latency, perhaps at the cost of worse image quality (read the article).
Our method proposes a new warping method with a lightweight flow model to extrapolate frames with better qualities to the previous frame generation methods and less latency comparing to interpolation based methods.

Examples need to be shown of course, but it's pretty black and white in its function. Faster, better quality. XeSS is already really high quality and in various games, like RoboCop, is actually temporally more stable than even DLSS.
You snipped the important part - it's in the very quote you posted:
Frame interpolation and extrapolation are two key methods of Temporal Super Sampling. Usually frame interpolation generates better results but also brings latency when generating the frames.
...
Frame extrapolation has less latency but has difficulty handling the disoccluded areas because of lacking information from the input frames. Our method proposes a new warping method with a lightweight flow model to extrapolate frames with better qualities to the previous frame generation methods and less latency comparing to interpolation based methods.
There is no extrapolation currently, it's all interpolation
Frame extrapolation is another way to increase the framerate by only using the information from prior frames. Li et al. [2022] proposed an optical flow-based method to predict flow based on previous flows and then warp the current frame to the next frame. ExtraNet [Guo et al. 2021] uses occlusion motion vectors with neural networks to handle disoccluded areas and shading changes with G-buffer information. Their methods fail when the scene becomes complex and generate artifacts in the disoccluded areas.
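The disocclusion failure described in that quote is easy to show with a toy example. This is illustrative only - 1-D "frames" of characters, not a real renderer:

```python
# Toy 1-D scene: a foreground object '#' slides over a background '.'.
# Extrapolation sees only PAST frames, so the pixels the object uncovers
# (disoccluded areas) have no source data and become '?' holes.

BG, OBJ, HOLE = ".", "#", "?"

def render(pos, width=8):
    # A real renderer knows what lies behind the object.
    return [OBJ if i == pos else BG for i in range(width)]

def extrapolate(prev, cur):
    motion = cur.index(OBJ) - prev.index(OBJ)   # observed per-frame motion
    new_pos = cur.index(OBJ) + motion           # predicted next position
    out = []
    for i, pixel in enumerate(cur):
        if i == new_pos:
            out.append(OBJ)
        elif pixel == OBJ:
            out.append(HOLE)  # uncovered region: never seen in any past frame
        else:
            out.append(pixel)
    return out

f0, f1 = render(2), render(3)         # object moving right
print("".join(extrapolate(f0, f1)))   # ...?#... <- '?' is the artifact
```

The '?' cell is exactly the problem ExtraNet-style methods attack with occlusion motion vectors and G-buffer data: something must fill the hole, and guessing wrong produces the artifacts the quote mentions.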
mrk said:
What Intel say in the quote you posted again states that, because of the difficulty in the disoccluded areas of the frame, they devised a way to mitigate that difficulty by using frame extrapolation (this new "warping method"). As a result, this extrapolation has better qualities than existing frame generation methods.
lightweight flow model to extrapolate frames with better qualities to the previous frame generation methods

Nowhere does it say previous extrapolation techniques; this is based on current techniques, which is interpolation - which is the exact takeaway Digital Foundry made from the announcement too.
Extrapolation = Frame Generation - the whole paper is about improvements over prior extrapolation methods, and interpolation (Nvidia/AMD) is only mentioned as an alternate method of Frame Generation (of generally superior quality to existing extrapolation methods, whilst having greater latency).

The quote literally says though:
Nowhere does it say previous extrapolation techniques, this is based on current techniques, which is interpolation, which is the exact takeaway Digital Foundry made from the announcement too.
Let's just wait and see when the presentation hits the streams.
Frame extrapolation is another way to increase the frame rate by only using the information from prior frames. Li et al. [2022] proposed an optical flow-based method to predict flow based on previous flows and then warp the current frame to the next frame. ExtraNet [Guo et al. 2021] uses occlusion motion vectors with neural networks to handle disoccluded areas and shading changes with G-buffer information. Their methods fail when the scene becomes complex and generate artifacts in the disoccluded areas. Furthermore, it requires higher resolution inputs since they only generate new frames. We are the first one to propose a joint framework to solve both spatial super sampling and frame extrapolation together while staying efficient and high quality. Note that NVIDIA DLSS 3 is a combination of super sampling and interpolation since it generates intermediate frames. The interpolation based method will introduce extra latency for the rendering pipeline, so it requires an additional module, NVIDIA Reflex, to decrease the latency. However, NVIDIA Reflex decreases the latency by reducing the bottleneck between CPU and GPU, and it doesn't eliminate the latency from the frame interpolation.
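The latency point in that passage amounts to simple arithmetic: an interpolated frame sits between real frames N and N+1, so N cannot be displayed until N+1 has rendered - roughly one render interval of extra delay, which Reflex cannot remove. A back-of-envelope sketch with illustrative numbers, not vendor measurements:

```python
# Rough added display latency from frame generation, per the quoted passage:
# interpolation must hold back the last real frame until the next one exists;
# extrapolation predicts forward from past frames and adds no such wait.

def added_latency_ms(base_fps: float, method: str) -> float:
    frame_time_ms = 1000.0 / base_fps  # one render interval
    return frame_time_ms if method == "interpolation" else 0.0

print(round(added_latency_ms(60, "interpolation"), 1))  # 16.7 ms at 60 fps base
print(added_latency_ms(60, "extrapolation"))            # 0.0
```

Real pipelines add queueing and pacing overheads on top of this, but the one-frame-interval floor is inherent to interpolation and absent from extrapolation.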
No there is nothing that points to the image quality being lower than FG interpolated frames currently. Direct from Intel:

We introduce ExtraSS, a novel framework that combines spatial super sampling and frame extrapolation to enhance real-time rendering performance. By integrating these techniques, our approach achieves a balance between performance and quality, generating temporally stable and high-quality, high-resolution results.

Leveraging lightweight modules on warping and the ExtraSSNet for refinement, we exploit spatial-temporal information, improve rendering sharpness, handle moving shadings accurately, and generate temporally stable results. Computational costs are significantly reduced compared to traditional rendering methods, enabling higher frame rates and alias-free high resolution results.

Evaluation using Unreal Engine demonstrates the benefits of our framework over conventional individual spatial or temporal super sampling methods, delivering improved rendering speed and visual quality. With its ability to generate temporally stable high-quality results, our framework creates new possibilities for real-time rendering applications, advancing the boundaries of performance and photo-realistic rendering in various domains.

I'm afraid you're going to be disappointed then. This research is interesting, but it isn't magic.

I remain aligned with Digital Foundry's take on this.