FidelityFX Super Resolution 2.0

Oh look - more PAUSED AND ZOOMED images to `prove a point`.
What's the harm in comparisons? If you don't like it, take it up with the entire tech press... You're on a forum where comparisons are made for every single thing. Also, some things such as the temporal stability are noticeable in normal gameplay too; zoomed in/slowed down footage just emphasises it, the same way HUB do it to illustrate the ghosting in their DLSS scenario.

Obviously it's not going to matter for AMD users since they can only use FSR 2, but for those running Nvidia it's good to see where each one is better or worse, hence the comparisons.

In my mind, it probably takes a roughly fixed (average) amount of computation per frame, so the beefier your GPU, the more of that it can do and still keep up with itself. A slower GPU will end up dedicating a greater % of its time to the upscaling algorithm(s).
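To put rough numbers on that intuition (purely illustrative figures, not measured costs), here is a quick sketch of how a fixed chunk of upscaling work eats a bigger slice of the frame budget on a slower GPU:

```cpp
#include <cstdio>

// Illustrative numbers only: treat the upscale pass as a fixed amount of GPU
// work per frame. Within the same frame budget, a faster GPU clears that work
// sooner and has more of the frame left for actual rendering.
int main() {
    const double frameBudgetMs      = 16.7; // e.g. targeting 60 fps
    const double upscaleMsOnFastGpu = 1.0;  // assumed pass cost on the fastest card

    struct Gpu { const char* name; double relativeSpeed; };
    const Gpu gpus[] = { {"fast GPU", 1.0}, {"mid GPU", 0.5}, {"slow GPU", 0.25} };

    for (const Gpu& g : gpus) {
        double upscaleMs = upscaleMsOnFastGpu / g.relativeSpeed; // same work, lower throughput
        double sharePct  = 100.0 * upscaleMs / frameBudgetMs;    // slice of the frame budget
        std::printf("%s: upscale pass ~%.1f ms = %.1f%% of a %.1f ms frame\n",
                    g.name, upscaleMs, sharePct, frameBudgetMs);
    }
    return 0;
}
```

Same work, different throughput, so the slower card loses proportionally more of its frame to the pass.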

Not entirely sure how DLSS apportions resources to upscaling... does it scale the same way, becoming relatively cheaper on faster hardware?

Either way, I'm interested in FSR 2.0, and especially that they're leaving it open to any hardware. Feels like a game dev would do well to put their time into adapting for this tech, rather than the one that only runs on nvidia cards. AMD being the champion of open standards with this one, and that's good for everyone :)
This looks like a case where having tensor cores dedicated to this kind of work pays off. A bit like the situation with RT performance too.

I think the performance uplift with DLSS is similar going from Turing to Ampere, but it's hard to tell given how much better a 3080 is than a 2080.

For DLSS, if the game engine supports it, it literally takes minutes or hours, if that, to enable it for a game. Not sure about FSR, but I imagine it's similar, and according to AMD, a game that already has DLSS makes integrating FSR 2 even quicker. From my understanding, the majority of the work for implementing both FSR 2 and DLSS comes down to the motion vectors, and that part is entirely on the game developers.
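A hedged sketch of why the motion vectors are the bulk of it: both upscalers are libraries you dispatch once per frame, and the engine-side job is mostly producing per-frame inputs along these lines (hypothetical names, not the real FSR 2 or DLSS structs):

```cpp
// Hypothetical sketch only: the per-frame inputs a temporal upscaler needs
// from the engine. Illustrative names, NOT the real FSR 2 or DLSS SDK types.
struct Texture {};  // stand-in for an engine / graphics-API texture handle

struct UpscalerFrameInputs {
    const Texture* color          = nullptr;  // current frame at render resolution
    const Texture* depth          = nullptr;  // depth buffer at render resolution
    const Texture* motionVectors  = nullptr;  // per-pixel screen-space motion; the bulk of the integration work
    float jitterX = 0.f, jitterY = 0.f;       // sub-pixel camera jitter applied this frame
    float frameTimeDeltaMs = 0.f;             // time since the previous frame
    int   renderWidth = 0,  renderHeight = 0; // internal (lower) resolution
    int   outputWidth = 0,  outputHeight = 0; // display resolution being reconstructed to
};

// The library call itself is a black box; the engine-side effort is producing
// correct motion vectors and jitter, and that work is the same whether the
// upscaler on the other end is DLSS or FSR 2.
void dispatchUpscaler(const UpscalerFrameInputs& /*inputs*/, Texture* /*upscaledOutput*/) {
    // ... hand the inputs to whichever upscaling library is in use ...
}
```

An engine that already ships DLSS has all of this plumbed through, which would explain AMD's claim that adding FSR 2 afterwards is quick.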
 
On the topic of AMD needing to innovate quicker: they pioneered DX12 first, and Nv's DX12 was rubbish for years.

They were first with a working SAM; some even argued that tech sites shouldn't be using it because ReBAR couldn't get close to SAM.

They have a one-click OC in an all-AMD setup.

Their control panel sharpening works, never mind that their front-end software suite is light years ahead of Nv's GeForce Experience.

Maybe not innovative to Nv users, but they are to AMD users.

Back on topic for FSR 2: considering it takes 300% zooming to perceive noticeable differences, with a win some/lose some outcome, most folks are delighted to have access to FSR 2; the rest are arguing "mine's better" just to feel superior.
 
Valid points on SAM/ReBAR (although they're the same thing technically, i.e. removing the 256MB aperture between the CPU and GPU VRAM to allow full access; also, no one has argued for ReBAR/SAM to be disabled because it doesn't benefit Nvidia as much. See HUB's video on why they weren't all for it: too many fluctuations, it depends too much on what you're running, and in "some" cases it harmed performance).

DX12: not so sure you can say that was an "AMD" thing, as Nvidia were also working with Microsoft on it, just far less. If anything Mantle was more of a win for AMD, as it eventually went on to help form Vulkan, but I suspect that was very much for AMD's own benefit more than the "industry's", given how shocking they were in DX11 (I have read they recently improved that considerably in a driver update though? Another example of fixing things years later, once we're onto new things... :p)

I am mainly referring to features which are/were very much "must haves" and changed the game considerably across the board, where people had no choice but to buy into Nvidia if they wanted those features ASAP. Sharpening (which is still done better with ReShade/SweetFX), an auto-OC button (that isn't stable or anywhere near as good as manual tweaking) and a pretty UI (which I agree with, but outside of initial setup, how often do you actually go in there?) are not going to win mindshare compared to the impact of things like G-Sync, DLSS, RT, streaming/recording capabilities etc. (On G-Sync: having personally come from a FreeSync Premium screen to a G-Sync Ultimate one, I find G-Sync slightly better in the low fps ranges, although you still don't want to be dropping below 60 fps either way; I have also read the G-Sync module does better for pixel overdrive tuning as opposed to relying on the monitor's overdrive setting.)



On the topic at hand, I don't think anyone is doing zooms to point out that one is "king" or to feel "superior"... Tim said it best himself: "there are some differences between the two but you'll be hard pressed to notice any differences outside of these comparisons, but if you have Nvidia, use DLSS since it still has the edge overall in both IQ and performance".

EDIT:

And another example, though not one I cared for at all: HairWorks vs TressFX... although the bigger problem with TressFX was the lack of games it was in; again, down to AMD's approach of wanting an over-the-fence solution.
 
You asked for examples of innovation, not excuses. :p

But at least you remember the main one, Mantle. Not many know/remember that MS had already confirmed at that time there would be no more DX advancements; DX was dead in the water.

It kickstarted PC gaming. AMD donated the code to MS and Khronos to evolve, modify and do with as they see fit. You love a bit of RT'ing, don't you?

As you've just arrived on Nv: imo, having used every gen through from Kepler, the 20 series is where Nv finally got to grips with DX12, 3 or 4 architectures in.

Marketing is what AMD need; they've got the cash now. Until they can market like Nv can, especially getting their tech into games, they won't make much inroads into Nv's share unless they actually smash them on performance.

AMD should be getting devs to bust 8GB/10GB cards more, just like Nv know it doesn't take much to bust AMD's RT'ing.

Edit: Instead we have AMD's marketing thinking I'll rush into their ecosystem if I spend over £400 on a CPU/GPU and they reward me with a month of Game Pass, whereas I could just spend a quid for one month of Game Pass. :cry:
 
DX12 was pretty bleh, at least in the Battlefield games it was in. In BFV it massively increased load times to get into a game vs DX11; BF1 was just a stuttery, hitchy mess.

Not sure if the implementation was to blame but at least for those 2 games it just caused issues.
 
Good "unique/worthwhile" innovation.... :p There was nothing on the market that could do what gsync achieved until adaptive/free sync came along (and on release of adaptive/free sync, it wasn't exactly problem/hassle free, took another good few months to get good fps ranges + LFC added as well as drivers up to scratch to avoid black screens/flickering), there was nothing that could match/do what DLSS did until FSR 2/TSR (at least not providing anywhere as good of an experience), there was nothing that could run/use RT until turing, that's the kind of innovation/worthwhile features I mean.

SAM/ReBAR you can take or leave, given that the Nvidia equivalent cards still match/beat AMD "overall" regardless of SAM/ReBAR; see HUB's recent videos showing this (not even factoring in RT). Sharpening: there are plenty of other options you can use which again could be argued to be better, and likewise with the other examples. It's like calling Nvidia Reflex + low latency mode a unique/worthwhile feature when it isn't really, since using RivaTuner to cap fps with vsync turned on in the Nvidia control panel in combination with G-Sync gives similar or in some cases lower input lag, i.e. it's nothing to shout about or something that gets people buying into the product.

I was a big advocate of Mantle as at the time it worked wonders on my weaker/older CPU paired with a mid/high-end GPU in BF4 (a game I spent many hours on), but sadly it was hardly adopted (the only other games I recall being able to use it in were Hardline [which lasted 10 hours] and Sniper Elite). DX12 benefits were definitely better on AMD back then too, but it wasn't exactly a big gain in performance tbf, certainly nothing like what SAM/ReBAR can provide.

But yes, I do agree AMD need to get more sponsored games using their tech, although it won't happen imo. Either way, how much more can they push VRAM? Next-gen cards are around the corner, and unless they go even higher on VRAM with their RDNA 3 cards compared to Nvidia, the consoles will hold it back somewhat. Not to mention it would lock out all GPUs with <10GB VRAM, including AMD's own, i.e. a game developer will not go for it, especially with silly VRAM requirements; it would have to be justified by showing its worth (FC New Dawn with HD textures looked just as good as FC6's textures, if not better imo, and it ran just fine on 8GB cards).
 
Same with the Battlefield DX12 issues; I would put that largely down to the Frostbite engine and DICE.
 
Already posted, Matt... Defo need to go to Specsavers ;) :p


Seems the FSR 2 performance gain depends largely on how powerful the GPU is, so newer GPUs/architectures see more of a performance uplift.

DLSS gives more of a performance gain than FSR 2 on Nvidia GPUs too. Wonder if this is down to the tensor cores or just differences in the implementation.
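A rough, assumed-numbers sketch of how the cost of the upscale pass itself decides how much of the resolution-drop saving you actually keep (the pass costs and the quality-mode scale factor below are assumptions, not measurements):

```cpp
#include <cstdio>

// Illustrative numbers only: the net uplift is the render time saved by the
// lower internal resolution minus the cost of the upscale pass itself, so a
// GPU/architecture that runs the pass cheaply keeps more of the gain.
int main() {
    const double nativeFrameMs = 16.0;            // assumed frame time at native resolution
    const double pixelFraction = 0.667 * 0.667;   // "Quality" mode: roughly 44% of the native pixels
    const double passCostMs[]  = {0.8, 1.5, 3.0}; // assumed pass cost: newer -> older architecture

    for (double passMs : passCostMs) {
        double upscaledMs = nativeFrameMs * pixelFraction + passMs; // crude scales-with-pixels model
        double upliftPct  = 100.0 * (nativeFrameMs / upscaledMs - 1.0);
        std::printf("pass %.1f ms: %.1f ms -> %.1f ms per frame (~%.0f%% more fps)\n",
                    passMs, nativeFrameMs, upscaledMs, upliftPct);
    }
    return 0;
}
```

If tensor cores (or better FP16/compute support on newer architectures) make that pass cheaper, the uplift grows, which would fit what the comparisons show.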
 
However, here is a new one, the one some have been anxiously awaiting..... :p ;) :D


It's no secret that we weren't hugely impressed with FSR 1.0 - but AMD has stepped up its game dramatically with its image reconstruction-based successor. Up there with the best software upscalers out there, comparable to native resolution rendering and DLSS, this is everything FSR 1.0 should have been. Alex Battaglia has this in-depth look at the first supported game: Arkane's wonderful Deathloop.
 
DF take on motion clarity and quite an interesting one:

[DF motion clarity comparison screenshots]


Just like with DLSS, I'll use FSR 2 over native/TAA when DLSS is not available.

Would be good to know exactly what it is that benefits DLSS/Nvidia more: is it the tensor cores/dedicated hardware, and/or purely down to software/implementation?
 
Interesting video. It seems they found a few shortfalls of FSR 2.0's Performance mode vs the DLSS Performance mode. Can't say I would ever use that though; same as with DLSS Performance, the reduction in quality in FSR Performance mode is just too much.

That top example is a good differentiator between the technologies IMO. You have the FSR 2.0 image providing more detail in textures, vs the DLSS image providing a much softer result.

I'm quite happy with that; it's all personal preference. No doubt you think the extra detail is a bad thing because the image is over-sharpened, just as I'd think the DLSS image looks rather blurry and less detailed in comparison. I'm glad FSR has a sharpening slider; that could be useful for fine-tuning how much detail you want.

I think we're now at a stage where they are so close it's literally splitting hairs. You have to go to fairly extreme lengths of zooming in to 300%, strafing backwards and forwards, and looking at a distant fence/object to try and see a noticeable difference between them. Can't say I ever do that when playing a game, to be honest. :cry:
 
Yeah, would have been good to see the quality comparisons. I have used Performance DLSS before with CP2077 at 4K, as it is required for a locked 60 when using max RT; it's pretty good, but then again, when sitting 6/7 feet back from a 55" screen you don't notice the softness. At 1440p I definitely wouldn't be dropping below Balanced at most.

The thing I find interesting with the walkie talkie and the grass in the background is the lack of bokeh/DOF around the walkie talkie in FSR 2, whereas with DLSS and native the grass is blurred out as it should be. I wonder if AMD does it this way to avoid any kind of ghosting when strafing/panning like that? Or would disabling sharpening solve it?

I will definitely be turning the sharpening slider right down for FSR 2 ;) There is a sharpness slider that can/should be exposed alongside DLSS too, but for some reason developers often don't add it; RDR2 has it now, iirc God of War too and one other, but I have never tried it due to my dislike for sharpening :p
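For what it's worth, on the developer side the sharpening looks like a cheap per-frame parameter, which is presumably why exposing a slider costs next to nothing. A hedged fragment, written from memory of the FSR 2 SDK on GPUOpen (field spelling may differ; check the SDK):

```cpp
#include "ffx_fsr2.h"  // FSR 2 SDK header from GPUOpen (FidelityFX-FSR2)

// Hedged fragment, from memory of the FSR 2 SDK: the built-in RCAS sharpening
// appears to be toggled and tuned per dispatch via two fields on the dispatch
// description. All other required fields (colour, depth, motion vectors,
// jitter, render size, ...) are omitted for brevity, and the helper name is
// my own, not part of the SDK.
void applySharpnessSlider(FfxFsr2DispatchDescription& dispatch, float slider01)
{
    dispatch.enableSharpening = (slider01 > 0.0f);
    dispatch.sharpness        = slider01;  // 0.0 = no extra sharpening, 1.0 = maximum
}
```

So turning the in-game slider to zero should, if I have that right, simply skip the extra sharpening on top of the reconstruction.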

Agree, in motion you wouldn't notice, although personally I would notice the lesser temporal stability, shimmering and aliasing/jaggies from FSR 2; it's just something that really bothers me. I've been playing some Forza 5 lately and even with 4x MSAA, the jaggies and aliasing while driving through the world still annoy me; enabling FXAA helps but makes the whole image too blurry.

One thing that could be worth experimenting with alongside FSR 2 is RSR, as DLDSR works very well in combination with DLSS.
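For reference, a quick sketch of the resolution arithmetic that makes the DLDSR + DLSS combo attractive (the 2.25x and Quality-mode factors below are the commonly quoted ones and are assumptions here):

```cpp
#include <cmath>
#include <cstdio>

// Illustrative sketch of the resolution arithmetic: DLDSR raises the output
// target above the display resolution, DLSS then reconstructs that target
// from a lower internal render resolution, so the internal resolution lands
// back at (or near) the display's native.
int main() {
    const int    displayW = 2560, displayH = 1440;
    const double dldsrPixelFactor = 2.25;      // DLDSR "2.25x" = 2.25x the pixel count
    const double dlssQualityAxis  = 1.0 / 1.5; // DLSS Quality renders at ~0.667x per axis

    double axisScale = std::sqrt(dldsrPixelFactor);            // per-axis scale of the DSR target
    int targetW = (int)std::lround(displayW * axisScale);      // resolution the game outputs at
    int targetH = (int)std::lround(displayH * axisScale);
    int renderW = (int)std::lround(targetW * dlssQualityAxis); // internal render resolution
    int renderH = (int)std::lround(targetH * dlssQualityAxis);

    std::printf("display %dx%d -> DLDSR target %dx%d -> DLSS Quality renders %dx%d internally\n",
                displayW, displayH, targetW, targetH, renderW, renderH);
    return 0;
}
```

The game ends up rendering at roughly native resolution internally while the reconstruction and downsample tidy up the final image; whether an RSR/FSR 2 pairing behaves the same way is an open question, as the post says.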
 