
Fidelity Super Resolution in 2021

Just watched a video from JayZ where he basically spitballs about FSR. The only confirmed/alleged facts we can take from his video are as follows.

It will work for a larger userbase (confirmed fact).
It is easier to implement than DLSS (see the video for why).
10+ developers have already signed up for it (not confirmed, and no idea who).
AMD have claimed it will give similar clarity to DLSS and is easier to implement (not confirmed, and I consider this BS until proven).
FYI
It was over 10 game studios and engines, and this was stated by AMD themselves:
https://youtu.be/eHPmkJzwOFc?t=214
 
Regarding support on Nvidia GPUs: doesn't CAS currently work on Nvidia GPUs (CP2077)? I'm certain neither AMD nor Nvidia are the ones actively making that happen. Whether or not FSR will work in the same way remains to be seen.
 

I never tried to solve it as I simply used DLSS instead, but when I tried CAS in CP2077 with both my 3070 and 1070 it made the whole scene flicker. I only really looked at it out of curiosity, to see how the game would perform on the 1070 vs my 3070, since the 1070 doesn't have DLSS.
 
I can bet that we'll only see Godfall supported on the 22nd. Or they will announce that Godfall is the first game to support FSR, starting from Sept 30. :D
 
The magic of open source.

If it gets picked up by the game makers then Nvidia will optimise their cards for it.

Exactly, they will support it just like they had to with FreeSync, and why proprietary PhysX became a footnote in history: proprietary tech tends to lose out in the end. I cannot imagine Nvidia ignoring a new AAA game that has FSR and no DLSS, with the benchmarks showing them 20% slower and looking worse. They will support it, but I predict they will do so for RTX GPUs only, at least at first.
 
Regarding support on Nvidia GPUs: doesn't CAS currently work on Nvidia GPUs (CP2077)? I'm certain neither AMD nor Nvidia are the ones actively making that happen. Whether or not FSR will work in the same way remains to be seen.


CAS is a post-processing technique that is applied to the final image output from the rendering engine. Therefore, there isn't really much to integrate, and it is strongly hardware agnostic.

From what we know about FSR, it has to integrate more closely with the engine. What is likely the case is that AMD optimize all the code to run well on AMD GPUs, but Nvidia will have to put in a lot of driver support and/or modify the source code (the advantage of open source).
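To make that contrast concrete, here is a toy numpy sketch of a purely post-process, contrast-adaptive-style sharpen (my own simplification, not AMD's actual CAS shader). It needs nothing from the engine except the finished frame, which is exactly why this kind of pass can run on any vendor's GPU.

import numpy as np

def sharpen_frame(frame: np.ndarray, strength: float = 0.5) -> np.ndarray:
    """Toy contrast-adaptive sharpen over a finished HxWx3 frame in [0, 1].

    Not AMD's CAS shader -- just a sketch of a pure post-process filter
    that only touches the final image, with no engine integration.
    """
    # Cross-shaped 3x3 neighbourhood via padded shifts.
    padded = np.pad(frame, ((1, 1), (1, 1), (0, 0)), mode="edge")
    up, down = padded[:-2, 1:-1], padded[2:, 1:-1]
    left, right = padded[1:-1, :-2], padded[1:-1, 2:]
    centre = frame

    # Local contrast: spread of the neighbourhood around each pixel.
    nmin = np.minimum.reduce([up, down, left, right, centre])
    nmax = np.maximum.reduce([up, down, left, right, centre])
    contrast = nmax - nmin

    # Sharpen less where contrast is already high (the "adaptive" part).
    weight = strength * np.clip(1.0 - contrast, 0.0, 1.0)
    blur = (up + down + left + right + centre) / 5.0
    return np.clip(centre + weight * (centre - blur), 0.0, 1.0)

# Usage: out = sharpen_frame(np.random.rand(1080, 1920, 3))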
 
You said AMD will not support Nvidia. What else should they do after they proved it works on Pascal? Write the drivers for Nvidia cards?
 
Fewer AMD users than Nvidia, so technically less circle jerking going on. :D:p

PS: you use deep learning for architecture, if I recall correctly? Or am I remembering wrong (often I do)?


Not architecture. I have used deep learning in lots of fields over the last 8 years: lots of work on areas related to autonomous cars and driver assistance systems, some generic image recognition tasks, predicting time series. Currently it's more related to network infrastructure. Not only deep learning but machine learning in general, though I am always keen to put the latest DL algorithms to the test. For various practical reasons the implementation that goes to production is often based on older, simpler ML techniques, but the potential of DL to dominate in all of them is clear. My professional work sits between that of a researcher and a production manager: I like playing with data, reading the latest research and coming up with use cases, but also managing other engineers to bring the research into production.
 
You said AMD will not support Nvidia. What else should they do after they proved it works on Pascal? Write the drivers for Nvidia cards?


Well, since it is open source, if they really wanted FSR to displace DLSS, then they could also optimize it for Nvidia cards. No need to touch the drivers, since they are writing the source code.
Without knowing the details of FSR it is hard to know why AMD said it is up to Nvidia to optimize. But let's imagine that FSR uses deep learning, as that is the state of the art in image reconstruction and AMD have patents on DL-based image reconstruction. RDNA2 has some additional instructions that can speed up some of the linear algebra required for neural network inference, and AMD also have optimized libraries for earlier GPUs. If AMD really wanted FSR to do the DL as fast as possible on Nvidia GPUs, they would write a code path in CUDA that can run on the tensor cores.
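As a rough illustration of what a "code path for the tensor cores" means in practice, here is a hypothetical PyTorch sketch (my own example, not anything AMD or Nvidia ship). The same stand-in reconstruction network can run through a plain FP32 path that works on any CUDA GPU, or through mixed precision, which lets the backend pick FP16 tensor-core kernels where the hardware has them.

import torch
import torch.nn as nn

# Hypothetical tiny 2x upscaling network -- a stand-in for an imagined
# DL-based FSR/DLSS-style model, not AMD's or Nvidia's actual code.
net = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3 * 4, 3, padding=1),  # 4x the pixels for a 2x upscale
    nn.PixelShuffle(2),                  # rearrange channels into the 2x image
)

frame = torch.rand(1, 3, 540, 960)       # low-resolution input frame

if torch.cuda.is_available():
    net, frame = net.cuda(), frame.cuda()

    # Generic path: plain FP32 kernels, runs on any CUDA GPU.
    with torch.no_grad():
        out = net(frame)

    # Tensor-core-friendly path: autocast lets cuDNN/cuBLAS dispatch the
    # convolutions to FP16 tensor-core kernels on Volta/Turing/Ampere.
    with torch.no_grad(), torch.cuda.amp.autocast():
        out = net(frame)
else:
    with torch.no_grad():
        out = net(frame)  # CPU fallback so the sketch still runs

print(out.shape)  # torch.Size([1, 3, 1080, 1920])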


Now I 100% understand why AMD don't want to do that, and I find that a shame. But equally, why on earth would Nvidia put resources into trying to undermine their own USP (DLSS)?
 
I don't have high expectations. I think it will not look nice except on the highest setting, and even if by a miracle it looks fine, it will be trashed in many reviews because there is too much money involved in promoting DLSS.
On the other hand, I don't think AMD had any other option; those who talk about AMD making their own ML upscaler have no idea what they are talking about. OK, they can make one, spend a fortune, and then what? You still need to fight Nvidia to get your tech implemented in every game. And the ML would only work on RDNA2.
Look at what happens in the laptop market and the dirty tricks Nvidia and Intel are using there. And there we are talking about loads of money; it is much easier for Nvidia to control the PC gaming market.

Really, the only solution for AMD was to make an open-source feature that can also work on older generations. This can put a lot more pressure on the game devs than an RDNA2-exclusive feature.
I don't think it is a gift; it is a battle in a big war, and I don't think AMD are happy promoting such a feature. The same goes for Nvidia: I don't think they are happy with AMD's "gift" to players, and they are not too happy with their DLSS either, but it has helped them sell two generations already. Instead of investing harder in creating better cards they paid far less money to the reviewers, and now there are a lot of gamers who think fake resolution is the future of gaming. :)
DLSS is good when you sell a new gen, but it will not be as good when you need to convince the owners of DLSS-capable cards that they should buy your new product. That's why I say Nvidia is not too happy with DLSS either.

I understand your argument to some extent. But the algorithms behind DLSS are not unique or proprietary. You can download the source code for such algorithms (which mostly runs on CUDA, because Nvidia dominate the DL market, but the code is easy to translate). They would then need to optimize for AMD hardware, which shouldn't be too hard because AMD have their own linear algebra libraries. RDNA2 does offer some additional instructions that help with matrix math, but it is not the fundamental difference that Tensor Cores provide. You can run a DL-based super resolution technique on older AMD hardware, but the model complexity will be more limited and the convolutions will be slow, limiting the performance uptick (part of the problem with DLSS 1 was that the model was just not that fast, taking 3 ms or so).
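To put the "model complexity will be more limited" point into numbers, here is a back-of-the-envelope sketch with made-up layer sizes and throughput figures (not measurements of DLSS or FSR). Even one dense 3x3 convolution evaluated at 4K costs tens of GMACs, i.e. several milliseconds of pure math on hardware without fast matrix units.

# Rough cost of a single 3x3 convolution layer at a given resolution.
# Hypothetical sizes for illustration only -- not DLSS or FSR internals.

def conv_macs(height: int, width: int, in_ch: int, out_ch: int, k: int = 3) -> int:
    """Multiply-accumulate count for one dense KxK convolution layer."""
    return height * width * in_ch * out_ch * k * k

# A modest 32-in/32-out channel layer evaluated at 4K output resolution:
macs = conv_macs(2160, 3840, 32, 32)
print(f"{macs / 1e9:.1f} GMACs per frame for one layer")  # ~76 GMACs

# At an assumed 20 TFLOPS of effective throughput (2 FLOPs per MAC),
# that single layer is already ~7.6 ms of pure compute, before memory traffic:
tflops = 20e12
print(f"~{2 * macs / tflops * 1e3:.1f} ms of compute")

Real networks cut this down by running at lower internal resolutions and with fewer channels, which is exactly the quality-versus-speed trade-off being described.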

I think from a consumer perspective it would be great for AMD to fight Nvidia over DL-based super resolution. Hopefully at some point MS will put an API in place and make this more agnostic for developers.

Similarly, the temporal TSR method in UE5 looks fantastic and uses no DL at all. TSR has more complex engine integration requirements, but I think this would have been a great solution.
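For context on why the integration is more complex: temporal methods need inputs only the engine can provide, namely per-pixel motion vectors and a persistent history buffer. A deliberately simplified numpy sketch of temporal accumulation (my own toy version, not Epic's TSR):

import numpy as np

def temporal_accumulate(current: np.ndarray,
                        history: np.ndarray,
                        motion: np.ndarray,
                        alpha: float = 0.1) -> np.ndarray:
    """Blend the current frame with a motion-reprojected history frame.

    current, history: HxWx3 frames in [0, 1]
    motion:           HxWx2 per-pixel screen-space offsets (in pixels)
                      pointing back to where each pixel was last frame
    alpha:            how much of the new frame to trust each step
    """
    h, w, _ = current.shape
    ys, xs = np.mgrid[0:h, 0:w]

    # Reproject: fetch each pixel's previous position from the history buffer.
    prev_y = np.clip((ys + motion[..., 1]).round().astype(int), 0, h - 1)
    prev_x = np.clip((xs + motion[..., 0]).round().astype(int), 0, w - 1)
    reprojected = history[prev_y, prev_x]

    # Exponential blend: most of the signal comes from accumulated history,
    # which is what lets temporal methods resolve more detail than one frame holds.
    return alpha * current + (1.0 - alpha) * reprojected

# The engine supplies `motion` every frame, and the returned image becomes
# next frame's `history` -- that per-frame plumbing is the integration cost.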



TBH, I am really doubtful of the rumours/vague statements that FSR uses neither deep learning nor temporal accumulation. There simply is no other known technique that hasn't already been applied in existing scaling. AMD have not 100% stated there is no DL, and AMD do have patents similar to DLSS. So my money is on FSR being like DLSS 1 but without the major drawbacks that made DLSS 1 useless, such as having to send Nvidia an executable. A DLSS-style approach on a purely spatial image, done well, will be decent enough. Deep learning blows linear techniques away in terms of quality.
 
Well, since it is open source, if they really wanted FSR to displace DLSS, then they could also optimize it for Nvidia cards. No need to touch the drivers, since they are writing the source code.
Without knowing the details of FSR it is hard to know why AMD said it is up to Nvidia to optimize. But let's imagine that FSR uses deep learning, as that is the state of the art in image reconstruction and AMD have patents on DL-based image reconstruction. RDNA2 has some additional instructions that can speed up some of the linear algebra required for neural network inference, and AMD also have optimized libraries for earlier GPUs. If AMD really wanted FSR to do the DL as fast as possible on Nvidia GPUs, they would write a code path in CUDA that can run on the tensor cores.


Now I 100% understand why AMD don't want to do that, and I find that a shame. But equally, why on earth would Nvidia put resources into trying to undermine their own USP (DLSS)?
You mean that not only should they have done a DLSS, but one that works great on Nvidia's tensor cores. Sounds legit.
Look, we made a program. We get 40 FPS, Nvidia gets 45, but it is ours and open source. :D

From what I understand it runs on Nvidia hardware anyway. Maybe not as efficiently as it will run on AMD hardware, but it is up to Nvidia to optimize, at least for the older cards, if there is any demand to do it. That's what that guy from AMD said.
 
Not architecture. I have used deep learning in lots of fields over the last 8 years: lots of work on areas related to autonomous cars and driver assistance systems, some generic image recognition tasks, predicting time series. Currently it's more related to network infrastructure. Not only deep learning but machine learning in general, though I am always keen to put the latest DL algorithms to the test. For various practical reasons the implementation that goes to production is often based on older, simpler ML techniques, but the potential of DL to dominate in all of them is clear. My professional work sits between that of a researcher and a production manager: I like playing with data, reading the latest research and coming up with use cases, but also managing other engineers to bring the research into production.
Ah, I'm getting confused with someone else using ray tracing for architecture, my memory sucks...
 
The future is ML. AMD chose not to do it, but Intel will for their new GPUs (it's a huge focus for the company in general), and likely their version will also be open source, because they need to combat Nvidia and they can't do proprietary lock-in starting from 0% market share. Plus, there are always new entrants, like Facebook, who are also working on such tech for their own needs and don't mind sharing (again, for similar reasons). Check this out, it's a good write-up. Plus the results (click for video):

 


I don't think we know everything that AMD are doing re: FSR, yet.
 