
FidelityFX Super Resolution in 2021

Here's an open secret.

There's no such thing as AI, and no software can regenerate and evolve to re-write itself.

They are all just differing algorithms that use complex methods of interpolation and extrapolation to fill in the missing details.

The rest is marketing.
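As a rough illustration of the interpolation such algorithms build on, here is a minimal bilinear upscaler in Python. This is a hypothetical toy, not FSR's actual kernel (FSR 1.0 uses an edge-adaptive, Lanczos-style filter plus a sharpening pass), but it shows what "filling in the missing details" looks like at its simplest:

```python
def bilinear_upscale(img, scale):
    """Upscale a grayscale image (list of rows) by an integer factor
    using bilinear interpolation - a toy stand-in for the spatial
    interpolation that upscalers build on, not any shipping algorithm."""
    h, w = len(img), len(img[0])
    out_h, out_w = h * scale, w * scale
    out = []
    for oy in range(out_h):
        # Map the output pixel back to a fractional source coordinate.
        sy = oy * (h - 1) / (out_h - 1) if out_h > 1 else 0.0
        y0 = int(sy); y1 = min(y0 + 1, h - 1); fy = sy - y0
        row = []
        for ox in range(out_w):
            sx = ox * (w - 1) / (out_w - 1) if out_w > 1 else 0.0
            x0 = int(sx); x1 = min(x0 + 1, w - 1); fx = sx - x0
            # Blend the four neighbouring source pixels.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out
```

Every output pixel is just a weighted blend of existing pixels - no new information is created, which is why plain interpolation looks soft and the interesting part of FSR/DLSS is how they sharpen or reconstruct on top of it.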
 
FSR tested on a GTX970:
https://www.youtube.com/watch?v=Owuxz-ifvcY


I'd have thought that a professional reviewer would have noticed that FSR looked much like native while TAAU looked vastly better than both.

This is what he was looking at, taken from the comparison images from his review:

[attachment: comparison image from the review]


:D
 
Isn't it that FSR 1.0 is only the beginning (as DLSS v1 was)? I'm sure I read that AMD will bring DirectML into it - all free (as DirectML is part of DX12 Ultimate).
 
DLSS 3.0 working in all games was a rumour, it may be a false one but who knows?

That was just something made up by MILD.

So after the dust has settled with FSR...

I think it's worthwhile agreeing it's a successful launch.
So based on this I am going to make some predictions.

I believe before this year is over, 50 games will be supported.
Before 2022 is over, 100+ supported games.

You will see fewer DLSS games being released; for the reason, you only need to look at FreeSync.

Devs will not spend extra time and money to enable DLSS for a small player base when they can add in FSR for FREE and get more players in.

It also allows more players to enjoy higher frame rates than they could without FSR.

See @LtMatt's post above: the 6700 isn't classed as a 4K GPU, but with FSR enabled it jumps a generation in performance.

Honestly, in my opinion FSR is more groundbreaking than DLSS because of my points above.
Not because of which one is doing the better job - just as GSync is also regarded as the better of the two sync technologies - but an open-source, free-to-add solution will always win.

Why pay for little gain?

;)

That is actually quite depressing if you stop and think about it - inferior versions of the tech win out and people see it as a good thing.

Here's an open secret.

There's no such thing as AI, and no software can regenerate and evolve to re-write itself.

They are all just differing algorithms that use complex methods of interpolation and extrapolation to fill in the missing details.

The rest is marketing.

DLSS's trick is that it isn't just trying to fill in missing detail via spatial or temporal interpolation/extrapolation techniques - it uses a model that has been trained over and over to match patterns in the low-res output with potential high-res references, and then builds something which mimics the style of the best matches it "thinks" fit the situation. Though calling it AI is laughable, as it can't break out of its training - it is just a complex algorithm which can't itself learn to do better.
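That pattern-matching idea can be caricatured in a few lines. This is a toy nearest-neighbour lookup with made-up example patches - emphatically not Nvidia's network - but it shows the shape of "match the low-res input against trained references and emit the high-res detail that went with the best match":

```python
def closest_match(patch, examples):
    """Example-based 'super resolution' in miniature: given a low-res
    patch, find the training pair whose low-res half is most similar
    and return its stored high-res half. A toy sketch only - real DLSS
    uses a trained neural network, not a lookup table."""
    def dist(a, b):
        # Squared Euclidean distance between two flat patches.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(examples, key=lambda pair: dist(patch, pair[0]))
    return best[1]

# Hypothetical "training set": flat low-res patches paired with the
# high-res detail we want reproduced for them.
examples = [
    ([0.0, 0.0], [0.0, 0.0, 0.0, 0.0]),  # flat dark -> stays flat
    ([1.0, 0.0], [1.0, 0.8, 0.2, 0.0]),  # hard edge -> detailed ramp
    ([1.0, 1.0], [1.0, 1.0, 1.0, 1.0]),  # flat bright -> stays flat
]
```

A real network generalises instead of memorising, but the limitation in the post above still holds: it can only produce detail that resembles what its training data covered.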
 
The problem is that too many look at it with an enthusiast's attitude, because they always have the latest technology in their systems. The fact is most gamers are on older hardware, on which certain techniques are not well supported, or play games on engines which don't support them. Inferior technology only wins out if the superior technology is only supported by a fraction of the market.

AMD and Nvidia have to be blamed for this, because, like with smartphones, people are keeping GPUs longer and longer as prices rise, availability tanks and the mainstream stagnates.

If everyone was on newer-generation GPUs, you could argue FSR would make much less sense TBH. It's quite clear DLSS is a more advanced technique.

Many here argued about how GSync was so much better than FreeSync/VESA Adaptive Sync, which was "inferior", but forgot the latter was far easier and cheaper to implement. So VESA Adaptive Sync won out in the end.

Tessellation appeared first with the ATI 8000/9000 series as TruForm, but it took Nvidia supporting it too before MS etc. bothered to integrate it properly into DirectX. DirectX also won over other competing graphics APIs because MS put a ton of support behind it.

Creative Labs won over Aureal etc. It's why VHS won over Betamax and Video2000, and why Android has 4X the sales market share of iOS devices, which have the best ARM-based mobile SoCs.

Plus we only need to look at x86 - it only became prominent because Intel lucked out with a design they had in their parts bin for the IBM PC.

There were probably better instruction sets available over time.

Basically, one has to imagine how different the computing landscape would be if the best technologies had won out!
 
That is actually quite depressing if you stop and think about it - inferior versions of the tech win out and people see it as a good thing.
The tech is less important than the people. The difference between FSR UQ and DLSS is small, the difference in user base between 15% and 100% isn't.
 
The tech is less important than the people. The difference between FSR UQ and DLSS is small, the difference in user base between 15% and 100% isn't.

What is the performance gain vs native for FSR UQ vs DLSS Quality? And for FSR vs DLSS at around the same image quality?
 
That is actually quite depressing if you stop and think about it - inferior versions of the tech win out and people see it as a good thing.
Not at all, though I can understand why you might see it that way. AMD's solution is substantially easier and less complicated to implement, and produces extremely good results even now. That it can be applied all the way back to Nvidia's 900 series indicates the approach is generic enough to win out on its own, without necessarily being higher quality. The more developers use it, the more likely it'll be iterated upon and improved. Perhaps it'll outperform DLSS, who knows. I'll be happy that it'll work on whatever card I buy.
 
That is actually quite depressing if you stop and think about it - inferior versions of the tech win out and people see it as a good thing.

It will win out because FSR doesn't keep you locked into mid-to-high-end GPUs from one particular vendor.

As I said before, as long as the technology is good, the only thing I care about is its usability, and the industry cares about the same thing. If Nvidia had a different mindset their technology would actually win, but because their only interest is using it to sell more GPUs at a higher price, it will lose.
 
What is the performance gain vs native for FSR UQ vs DLSS Quality? And for FSR vs DLSS at around the same image quality?
Trick question(s). We neither have them in the same game, nor do they suffer the same cons, so we get into the subjective (more ghosting vs blurrier). Obviously we know FSR is generally half the cost of DLSS at a minimum (more or less depending on the GPU), but that's again just one aspect, which isn't enough to judge its worth on.
 
The other aspect is that AMD is cunning. They can push out FSR as a catch-all technology that works with most modern GPUs, and they might be able to get it integrated to a level where it replaces normal engine-level resolution scaling in games. At a later date they can move to something closer to DLSS, as an additional upscaling/image-reconstruction tool, especially if their future hardware supports this better. They did that with FreeSync - GSync was better at the start but much more expensive and less accessible. Then AMD helped out with the FreeSync Premium branded displays for those requiring more exacting standards.
 
Not at all, though I can understand why you might see it that way. AMD's solution is substantially easier and less complicated to implement, and produces extremely good results even now. That it can be applied all the way back to Nvidia's 900 series indicates the approach is generic enough to win out on its own, without necessarily being higher quality. The more developers use it, the more likely it'll be iterated upon and improved. Perhaps it'll outperform DLSS, who knows. I'll be happy that it'll work on whatever card I buy.

Well ultimately the premise is that the more advanced versions of the tech will almost always lose out - that isn't really a positive ultimately.

Trick question(s). We neither have them in the same game, nor do they suffer the same cons, so we get into the subjective (more ghosting vs blurrier). Obviously we know FSR is generally half the cost of DLSS at a minimum (more or less depending on the GPU), but that's again just one aspect, which isn't enough to judge its worth on.

Not really a trick question. Though it is difficult to precisely measure them side by side at the moment, people seem to have lost focus on the performance side of it. When you have roughly comparable output from both, DLSS is providing around 2 to 3 times the performance uplift that FSR does.
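For a sense of where the uplift comes from, here's a back-of-envelope model (all numbers hypothetical, not benchmarks): render time scales roughly with pixel count, and the upscaler adds a fixed per-frame cost on top.

```python
def upscaled_fps(native_fps, render_scale, upscaler_ms):
    """Back-of-envelope FPS after upscaling, assuming render time is
    roughly proportional to pixel count and the upscale pass adds a
    fixed per-frame cost. Illustrative only - real games are not
    perfectly resolution-bound."""
    native_ms = 1000.0 / native_fps
    # Rendering at e.g. 67% scale per axis costs ~0.67^2 of the pixels.
    new_ms = native_ms * render_scale ** 2 + upscaler_ms
    return 1000.0 / new_ms

# Hypothetical 4K title at 40 fps native, a Quality-ish 67% render
# scale, and a 0.5 ms upscaler pass:
# upscaled_fps(40.0, 0.67, 0.5) -> roughly 85 fps
```

Note the fixed upscaler cost eats proportionally more of the frame on slower cards, which is one reason the relative cost of FSR vs DLSS varies with the GPU.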
 
What is the performance gain vs native for FSR UQ vs DLSS Quality? And for FSR vs DLSS at around the same image quality?
It depends on each game's features and how hard a card gets hit by those features. For example, a Radeon card will probably gain more FPS, since it behaves better at lower resolutions than a Nvidia card and also gets hit harder by RT. But I don't think it will be far from DLSS Quality gains.
 
Well ultimately the premise is that the more advanced versions of the tech will almost always lose out - that isn't really a positive ultimately.

It's useless to those other guys who can't use it. To them it might as well not exist; of course Nvidia's hope is that those people will buy into their ecosystem.

Well now they don't have to.
 
The problem with FSR will be ray-traced reflections. They were already looking worse with DLSS compared with native; they will look even worse with FSR. :D
 
It's useless to those other guys who can't use it.

If you look at one of my posts, it's working on a GTX 960.

The problem with FSR will be ray-traced reflections. They were already looking worse with DLSS compared with native; they will look even worse with FSR. :D

The bigger problem is that mainstream hardware is still crap at doing RT to any level, which is why we are having to use upscaling etc. in the first place. AMD/Nvidia are just hiding GPUs which are not very good at RT, and charging premium pricing.

This is before games actually start using scene-wide effects liberally, as opposed to some reflections etc. here and there.

Once you start going that way, I can see the quality of most of the effects not getting any better.
 