FidelityFX Super Resolution in 2021

Associate
Joined
20 Nov 2020
Posts
1,120
Being open source, FSR can only get better; anyone can improve it. So even if it looks as bad as in the 1060 screen captures, things can only get better from now on.
 
Associate
Joined
13 Jul 2020
Posts
501
Is it just me or is history repeating itself here?

I am getting a Freesync vs Gsync vibe.

I remember Gsync being the better choice because it had £100 of extra hardware.

Well, look how that ship sailed.

If AMD can pull off FSR looking just like native, or thereabouts (because, like DLSS, I am sure it won't be perfect), then does it really matter what approach each graphics manufacturer chooses?

So long as all users get this upsampling feature, I couldn't care less that DLSS gives a slightly better image where, if you zoom in 10x, you can spot the detail on a leaf.

How did it sail? Gsync is still better than Freesync. Not sure what you mean.

Freesync monitors - tons of issues, flickering and crappy in general, also VRR working only above a certain fps.

Gsync - premium monitors, barely any issues, no flickering, VRR working at super low fps compared to Freesync.
 
Soldato
Joined
25 Nov 2011
Posts
20,641
Location
The KOP
How did it sail? Gsync is still better than Freesync. Not sure what you mean.

Freesync monitors - tons of issues, flickering and crappy in general, also VRR working only above a certain fps.

Gsync - premium monitors, barely any issues, no flickering, VRR working at super low fps compared to Freesync.

Think you're still reading about issues from about 5 years ago.

Freesync has come on massively, with high resolutions and wider Freesync windows.

My new monitor coming soon is 4K, 40-160Hz, HDR 600.
With Freesync there are a lot of monitors to choose from, cheap end to premium end; your choice will obviously have limitations if you go for the lower end.
 
Soldato
Joined
4 Feb 2006
Posts
3,223
Also, Sapphire AMD cards already have access to FSR and have for several months now; it's called Trixx Boost :)


You'd have to be a complete numbskull to believe FSR is the same as Trixx Boost. If it were, AMD would implement it in the driver without having to work with devs. Try harder mate.
 
Soldato
Joined
12 May 2014
Posts
5,265
Is it just me or is history repeating itself here?

I am getting a Freesync vs Gsync vibe.

I remember Gsync being the better choice because it had £100 of extra hardware.

Well, look how that ship sailed.

If AMD can pull off FSR looking just like native, or thereabouts (because, like DLSS, I am sure it won't be perfect), then does it really matter what approach each graphics manufacturer chooses?

So long as all users get this upsampling feature, I couldn't care less that DLSS gives a slightly better image where, if you zoom in 10x, you can spot the detail on a leaf.
It really depends on the developers and how much support AMD throws their way to make this seamless, but FSR does have the potential to kill DLSS, especially since it is used on consoles as well as PCs.


P.S. I still don't like DLSS/FSR, before anyone thinks I have changed positions.
 
Caporegime
Joined
18 Oct 2002
Posts
32,621
Being open source, FSR can only get better; anyone can improve it. So even if it looks as bad as in the 1060 screen captures, things can only get better from now on.



I expect it will get better. Given some of the patents AMD have released and the state of the art, I expect AMD will be trying to keep their options open to allow temporal data in the future. This may not be possible on their current hardware as the ML models get much larger, but when AMD have their equivalent of Tensor Cores in RDNA3 they will then be in a position to do their own DLSS 1 to 2 type transition and ramp up the quality.
 
Associate
Joined
13 Jul 2020
Posts
501
Think you're still reading about issues from about 5 years ago.

Freesync has come on massively, with high resolutions and wider Freesync windows.

My new monitor coming soon is 4K, 40-160Hz, HDR 600.
With Freesync there are a lot of monitors to choose from, cheap end to premium end; your choice will obviously have limitations if you go for the lower end.

Sorry, HDR 600 monitor is all I needed to hear.

That is anything but premium.
 
Caporegime
Joined
18 Oct 2002
Posts
32,621
It really depends on the developers and how much support AMD throws their way to make this seamless, but FSR does have the potential to kill DLSS, especially since it is used on consoles as well as PCs.


P.S. I still don't like DLSS/FSR, before anyone thinks I have changed positions.


How many technologies in GPUOpen are widely used? That tells you all you need to know. Especially when you see the new generation of temporal AA and SR algorithms, such as in UE5 and Quake 2 RTX, and the continued growth of DLSS, which is now plug 'n play for many developers. There will be many choices and engine-specific implementations.
 
Soldato
Joined
25 Nov 2011
Posts
20,641
Location
The KOP
Sorry, HDR 600 monitor is all I needed to hear.

That is anything but premium.

And that is the choice I made, because 27" is all I need, plus the higher 160Hz and it being 4K.

If I wanted more HDR, which wasn't the selling point, I would be looking at something like this:

https://www.overclockers.co.uk/asus...cCWwvxLwUdxCLxKYufTFUJ_O-g0y-NbRzbcc8xZAK_sXv

So this monitor not being HDR 1000 means it isn't premium?
Geez, no wonder manufacturers keep increasing the prices, people are easily bent over.
 
Soldato
Joined
25 Nov 2011
Posts
20,641
Location
The KOP
How many technologies in GPUOpen are widely used? That tells you all you need to know. Especially when you see the new generation of temporal AA and SR algorithms, such as in UE5 and Quake 2 RTX, and the continued growth of DLSS, which is now plug 'n play for many developers. There will be many choices and engine-specific implementations.

All the FidelityFX effects are being used. Just look at the game support list, and it wasn't released that long ago.
 
Associate
Joined
20 Nov 2020
Posts
1,120
I expect it will get better. Given some of the patents AMD have released and the state of the art, I expect AMD will be trying to keep their options open to allow temporal data in the future. This may not be possible on their current hardware as the ML models get much larger, but when AMD have their equivalent of Tensor Cores in RDNA3 they will then be in a position to do their own DLSS 1 to 2 type transition and ramp up the quality.
Come on, stop that BS about tensor cores. They are useless and are inherited from the Nvidia Pro line. Better to ask them to add more compute units and leave the tensor cores' job to the compute units... but that would also make the older gens DLSS compatible. :D
What is the connection between temporal data and tensor cores? UE5 proves that you can have good TSR with no dedicated hardware and no ML.
And to be honest, I am beginning to doubt that Nvidia is doing any ML; you wouldn't have ghosting effects if the machine knew what a car looks like. You don't need temporal data for that.
 
Associate
Joined
20 Nov 2020
Posts
1,120
Better to ask how many games we'd have with RTX and DLSS without Nvidia's big wallet. :D
Probably 0, because AMD wouldn't have paid for the RTX either.
 
Caporegime
Joined
18 Oct 2002
Posts
32,621
Come on, stop that BS about tensor cores. They are useless and are inherited from the Nvidia Pro line. Better to ask them to add more compute units and leave the tensor cores' job to the compute units... but that would also make the older gens DLSS compatible. :D
What is the connection between temporal data and tensor cores? UE5 proves that you can have good TSR with no dedicated hardware and no ML.
And to be honest, I am beginning to doubt that Nvidia is doing any ML; you wouldn't have ghosting effects if the machine knew what a car looks like. You don't need temporal data for that.


ehh, tensor cores provide extremely efficient matrix operations as well as hardware support for FP16 ops etc.

It is not a choice between tensor cores or compute units, you can have both. The tensor cores take up about 4% of the die area, which is tiny. Simply adding compute units in place of the tensor cores doesn't automatically lead to any increased performance if those CU are not properly utilized, which always tends to be a problem.
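
For anyone wondering what "extremely efficient matrix operations" actually means, here is a minimal sketch (my own illustration, not anything from DLSS itself) of the 16x16 FP16 multiply-accumulate tile that the tensor cores are exposed for through CUDA's WMMA intrinsics. It handles one 16x16x16 tile per warp and assumes row-major inputs with a leading dimension of 16; a real GEMM would tile this across the full matrices.

Code:
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// One warp computes a single 16x16 tile of C = A*B (FP16 inputs, FP32 accumulate),
// which is exactly the shape of work the tensor cores are built to chew through.
__global__ void tensor_core_tile(const half* A, const half* B, float* C)
{
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);              // start the accumulator at zero
    wmma::load_matrix_sync(a_frag, A, 16);          // leading dimension = 16
    wmma::load_matrix_sync(b_frag, B, 16);
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag); // the actual tensor-core MMA
    wmma::store_matrix_sync(C, c_frag, 16, wmma::mem_row_major);
}
// Launch as tensor_core_tile<<<1, 32>>>(A, B, C); - one warp, one tile.

Doing the same tile on the ordinary compute cores means issuing those 16x16x16 fused multiply-adds a few at a time per thread, which is where the efficiency gap comes from.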


I never said there was a connection between tensor cores and temporal data. I merely speculated that AMD might not be able to have a non-linear image reconstruction technique that can fully exploit the additional data and model complexity of added temporal data without significant computational costs that defeat the purpose.


And yes, you can do a lot with temporal data and no ML, as UE5 shows. That is why it is disappointing to see that FSR currently doesn't appear to leverage temporal data.
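
To make the temporal point concrete, here is a rough sketch of the basic non-ML temporal accumulation idea (my own illustration, not how UE5's TSR or FSR actually work): reproject last frame's output using the game's motion vectors and blend it with the current frame, so detail accumulates across frames.

Code:
// Illustrative only - buffer formats, history clamping and disocclusion handling
// are all omitted, and real TAA/TSR implementations live or die on those details.
__global__ void temporal_accumulate(const float3* current, const float3* history,
                                    const float2* motion,  float3* out,
                                    int width, int height, float alpha /* ~0.1 */)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;
    int idx = y * width + x;

    // Where this pixel was last frame, according to the game's motion vectors.
    int px = min(max(int(x - motion[idx].x + 0.5f), 0), width  - 1);
    int py = min(max(int(y - motion[idx].y + 0.5f), 0), height - 1);

    float3 cur  = current[idx];
    float3 prev = history[py * width + px];

    // Exponential blend: most of the output comes from accumulated history,
    // which is what recovers detail beyond what a single frame sampled.
    out[idx] = make_float3(alpha * cur.x + (1.0f - alpha) * prev.x,
                           alpha * cur.y + (1.0f - alpha) * prev.y,
                           alpha * cur.z + (1.0f - alpha) * prev.z);
}

When the history sample is wrong (motion vectors miss, or something was occluded last frame) you get exactly the ghosting complained about above, ML or no ML, which is why the hard part is rejecting bad history rather than the blend itself.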
 
Associate
Joined
20 Nov 2020
Posts
1,120
ehh, tensor cores provide extremely efficient matrix operations as well as hardware support for FP16 ops etc.

It is not a choice between tensor cores or compute units, you can have both. The tensor cores take up about 4% of the die area, which is tiny. Simply adding compute units in place of the tensor cores doesn't automatically lead to any increased performance if those CU are not properly utilized, which always tends to be a problem.


I never said there was a connection between tensor cores and temporal data. I merely speculated that AMD might not be able to have a non-linear image reconstruction technique that can fully exploit the additional data and model complexity of added temporal data without significant computational costs that defeat the purpose.


And yes, you can do a lot with temporal data and no ML, as UE5 shows. That is why it is disappointing to see that FSR currently doesn't appear to leverage temporal data.

Matrix operations are matrix operations. A compute unit will do them just as fast, as long as it supports the instructions. 4-5% of die area is huge, and you could probably add more than 5% extra compute units with it.
The cost of upscaling, even with ML, is much lower than the cost of rendering at native. So if you use a part of your compute units for the upscaling, your card will behave like a card with fewer CUs at a lower res, which is still better than adding separate hardware that is only used from time to time. It is like you lose 10% from having fewer CUs and you gain 35% from upscaling. And you get to use the CUs all the time.
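
Rough numbers to show why that trade can work (purely illustrative, not measurements of FSR or DLSS): render at 1440p instead of 4K and you only shade about 44% of the pixels, so even if a shader-based upscale pass ate an assumed 10% of a native 4K frame's time, you would still come out well ahead.

Code:
#include <cstdio>

int main()
{
    // Purely illustrative numbers, not benchmarks.
    const double native_4k   = 1.00;                                  // native 4K frame time (normalised)
    const double render_1440 = (2560.0 * 1440.0) / (3840.0 * 2160.0); // ~0.44 of the pixels
    const double upscale     = 0.10;                                  // assumed cost of the upscale pass
    const double total       = render_1440 + upscale;                 // ~0.54

    std::printf("upscaled frame = %.2f of native -> ~%.1fx faster\n",
                total, native_4k / total);                            // ~1.8x
    return 0;
}

Obviously the real win depends on how heavy the upscale pass is and how well the rest of the frame scales with resolution.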
On Nvidia Pro cards the tensor cores do much more than just upscaling, and it was probably a good decision not to make a separate chip design for the gaming cards. But that does not mean you absolutely need tensor cores. They were just there, so Nvidia used them.
I don't understand why AMD is not using temporal data, but let's wait until the 22nd and we'll find out.
 