Should mean my 6800 lasts a few years.
Yeah, Ultra Quality is probably what I would run, and then if I still needed more frames I'd drop some settings from ultra before going to the next tier.
This is all assuming the tiers below Ultra Quality have a more noticeable drop-off in image quality.
Well, that looks bloody great; if it's as good in motion I can't see a reason not to at least run Ultra Quality. Should mean my 6800 lasts a few years.
The screenshot says native 4K vs Ultra Quality? https://www.overclockers.co.uk/forums/posts/34837742
Thing is, the comparison isn't vs native, it's vs the alternatives. In the case of something like Godfall, which is on UE4, that alternative is TAAU, which is a simple toggle (see the examples here, where FSR looks closer to what a simple TAA upscale looks like: https://docs.unrealengine.com/4.26/en-US/RenderingAndGraphics/ScreenPercentage/). In the future, as projects move to UE5, the comparison will be even more unfavourable because of how good TSR is (see: https://www.reddit.com/r/hardware/comments/nozuvo/testing_unreal_engine_5_temporal_super_resolution/).
So if it's not even as good as that then what's the point? If it were a driver toggle that would be more understandable (and useful), but since it requires per-game integration anyway then meh.
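For context, TAAU in a UE4 title is essentially a two-cvar toggle. The variable names below are the ones the linked Screen Percentage page documents; whether a given shipped game actually exposes them (via Engine.ini or an unlocked console) varies per title, so treat this as a sketch rather than a guaranteed recipe:

```
; UE4 temporal upsampling, per the linked Screen Percentage docs
r.TemporalAA.Upsampling=1   ; reconstruct to display resolution with TAAU instead of a plain spatial upscale
r.ScreenPercentage=66       ; internal render resolution per axis, e.g. ~66% for a "quality"-style tier
```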
So why would you need another upscaling method in UE5 games if it's that good? Just use TSR and that's it. The quality is good enough and no other method will give you more FPS.
Yesterday Hardware Unboxed asked AMD whether FSR produces a higher framerate at a given internal resolution compared to native, and their answer was "you'll have to test it yourself". I.e. either they don't know how well FSR actually works, or they're hiding that it's no better than just using a resolution slider. Main benefit = one click, no thinking and testing needed.
That's a higher setting too, as the 1060 is only on Quality.
At least you don't get ghosting.
I think that is the problem. The Quality mode on FSR seems poor from the material AMD released.
This is likely because FSR only uses a spatial up-sampling algorithm, with no temporal samples. That puts a strict upper bound on the information available for reconstruction, so quality is going to be closer to DLSS version 1.
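To make the "strict upper bound" point concrete, here's a toy numpy sketch (not FSR, DLSS or TSR code, just an illustration of the information argument): a purely spatial upscaler only ever sees one low-res frame, while a temporal method can accumulate sub-pixel-jittered frames and, for a static scene, recover the full-resolution signal exactly.

```
# Toy illustration only: spatial-only upscaling vs temporal accumulation.
# Assumes a static scene and perfect reprojection, which real games obviously don't have.
import numpy as np

rng = np.random.default_rng(0)
hi = rng.random((64, 64))      # stand-in for the full-resolution "ground truth" frame
scale = 2                      # 2x upscale per axis, i.e. 50% screen percentage

# Spatial-only: a single low-res frame, blown back up with nearest-neighbour.
low = hi[::scale, ::scale]
spatial = np.repeat(np.repeat(low, scale, axis=0), scale, axis=1)

# Temporal: accumulate scale*scale jittered low-res frames; each jitter samples a
# different sub-pixel offset, so together they cover every display pixel once.
temporal = np.zeros_like(hi)
for dy in range(scale):
    for dx in range(scale):
        temporal[dy::scale, dx::scale] = hi[dy::scale, dx::scale]

print("spatial-only MSE:", np.mean((spatial - hi) ** 2))  # limited by single-frame information
print("temporal MSE    :", np.mean((temporal - hi) ** 2)) # 0.0 here, since nothing moves
```

In practice motion, disocclusion and ghosting eat into that temporal advantage, which is exactly the trade-off the DLSS/TSR comparisons are about.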
Also, Sapphire AMD cards already have access to FSR, and have for several months now; it's called Trixx Boost.
The only reason AMD didn't use a driver toggle was adoption: if it's limited to the AMD driver then, even if it's open source, Intel and Nvidia won't touch it. AMD has plenty of examples of this in its other open-source software that nobody but AMD uses.
By implementing it in the game, the feature works with any modern graphics card or console, at the expense of having to implement it in games one by one.
So pick your poison: instant implementation that nobody but AMD uses, or getting everyone to use it at the cost of slow, game-by-game rollout.
As for what's the point: simple, not all games are built on UE5, and most games don't have a simple built-in resolution slider that's easy for gamers to use. Gamers would like a one-click button for more FPS, and that's what FSR can offer. Don't expect miracles from FSR and you won't be disappointed - as per the Hardware Unboxed exchange above, even AMD's answer on whether it beats a plain resolution drop was "you'll have to test it yourself".
From the info AMD has provided, there is no temporal component in FSR, at least currently. So TSR and DLSS will be in a separate league.
Well, given that AMD's own video shows an extreme Vaseline look, I don't think you will be looking at anything close to native with the current iteration.