FidelityFX Super Resolution in 2021

That's beyond disappointing if that's all they've got to show as support. Once again reinforcing the reality that AMD is just throwing this out there in order to have the marketing badge.
 
I'm curious about the launch-date silence on Navi 23, and wondering if it might make sense to launch this low-profile card off the back of FSR...?

"Check out the 1440p* performance in Godfall" would be great marketing buzz for what is understood to be a 1080p card.
 
Poor show so far, more so in the sense of: do those games even need FSR/DLSS-like tech in the first place? :confused: I suppose it will be great news for people on weaker, older GPUs and at 1440p/4K though.

Sorry, forgot..... But Godfaaaaaaaaaaaaaaaaaaaaaaaaaaaaall!!!!!!!!!!!!!!
 
I would assume Battlefield 2042 will be the flagship title later in the year, but I could be wrong.

Looks like that is an Nvidia-sponsored title, but I'd imagine FSR will be added anyway, since FidelityFX CAS etc. were added to the likes of Cyberpunk.
 
Not architecture. I have used deep learning in lots of fields over the last 8 years: a lot of work on areas related to autonomous cars and driver-assistance systems, some generic image-recognition tasks, predicting time series. Currently it's more related to network infrastructure. Not only deep learning but machine learning in general, though I am always keen to put the latest DL algorithms to the test. For various practical reasons the implementation that goes to production is often based on older, simpler ML techniques, but the potential of DL to dominate across the board is clear. My professional work sits between that of a researcher and a production manager: I like playing with data, reading the latest research and coming up with use cases, but I also manage other engineers to bring the research into production.

It has been a long time since I used ML in anger (not since my thesis in 2007), but I was curious whether RNNs have been applied to video upscaling? I gather most upscaling is done using CNNs, but obviously there's no view of the past in a CNN, so I was wondering if RNNs can inherently handle temporal information without requiring access to multiple chronological frames?

Worth saying I know just enough to be dangerous with current ML techniques, and no more, so it's probably a stupid question...
 
For me the list looks much better than expected; the question is whether FSR is any good. The tech can gain or (most likely :D) lose popularity based on how good it is. It doesn't matter that it isn't starting with AAA games, if we don't count Godfall of course. :)

The problem with the list, and with FSR, is that you get the most gain from DLSS/FSR/TSR tech in games that use RT, or where there is a big difference between the FPS you get at 1080p or 1440p and the FPS you get at 4K. And there are not too many RT games on that list.
RT is where it makes the biggest difference, especially on Radeon cards; a game like CP2077 can become playable with RT at "4K".
 

Tbh, ignoring the games on offer from the leak, it's better than nothing. Keep in mind DLSS only had a single game at launch, and that was Battlefield; correct me if I'm wrong.
 

It is possible, but RNNs are mostly suited to time-series prediction, while image upscaling is convolutional in nature. The same underlying dynamics that make CNNs the state of the art in image recognition make them the state of the art in image reconstruction: essentially, the universe we live in is highly structured, and NNs can learn the statistical relationships in this structure.
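
To make the "convolutional in nature" point concrete, here's a minimal SRCNN-style sketch in PyTorch. To be clear, the network, layer sizes and names are my own illustration of the general technique, not anything from DLSS or FSR:

[code]
# Minimal SRCNN-style super-resolution CNN in PyTorch, purely to show
# why upscaling is "convolutional in nature". Layer sizes and names are
# made up for illustration; this is not DLSS's or FSR's network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySRCNN(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        self.feat  = nn.Conv2d(3, 64, 9, padding=4)   # patch feature extraction
        self.remap = nn.Conv2d(64, 32, 5, padding=2)  # non-linear mapping
        self.recon = nn.Conv2d(32, 3, 5, padding=2)   # reconstruction

    def forward(self, x):
        # Upsample cheaply first; the convolutions then restore the
        # high-frequency detail the naive upscale cannot invent.
        x = F.interpolate(x, scale_factor=self.scale,
                          mode="bicubic", align_corners=False)
        x = F.relu(self.feat(x))
        x = F.relu(self.remap(x))
        return self.recon(x)

frame = torch.rand(1, 3, 270, 480)      # a low-resolution rendered frame
print(TinySRCNN(scale=2)(frame).shape)  # torch.Size([1, 3, 540, 960])
[/code]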

The temporal accumulation is there to increase the amount of information available in the spatial domain before spatial upscaling via CNNs. One simplistic way to see what happens: if on every even frame you render the even lines, and on every odd frame the odd lines, then once you accumulate 2 frames you have complete information, under the caveat that nothing moves. When there is motion, you have to use motion vectors to predict the displacement. This doesn't require ML, just straight algebra. Do this well and you get good results with only a few motion problems, as seen in UE5's TSR.
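
As a toy illustration of that even/odd-line example, here's a pure-NumPy sketch (all names are mine, nothing from any real engine): render the even lines this frame, take the odd lines from last frame, reproject the old lines with per-pixel motion vectors, and merge. No ML anywhere, just indexing:

[code]
# Toy version of the even/odd-line example above: pure NumPy, no ML.
# Frame t renders only the even rows, frame t-1 rendered only the odd
# rows; per-pixel motion vectors reproject the old rows into the
# current frame before merging. All names here are illustrative.
import numpy as np

def reproject(prev_frame, motion_vectors):
    """Warp the previous frame into the current one (nearest-neighbour)."""
    h, w = prev_frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Each current-frame pixel fetches the spot it came from last frame.
    src_y = np.clip(ys - motion_vectors[..., 1], 0, h - 1).astype(int)
    src_x = np.clip(xs - motion_vectors[..., 0], 0, w - 1).astype(int)
    return prev_frame[src_y, src_x]

def accumulate(curr_half, prev_half, motion_vectors):
    """Merge two half-information renders into one full frame."""
    out = reproject(prev_half, motion_vectors)
    out[0::2] = curr_half[0::2]  # freshly rendered even rows win
    return out                   # odd rows come from warped history

# Static scene: two half-renders give back complete information.
h, w = 8, 8
truth = np.random.rand(h, w, 3)
curr = np.zeros_like(truth); curr[0::2] = truth[0::2]  # even rows only
prev = np.zeros_like(truth); prev[1::2] = truth[1::2]  # odd rows only
assert np.allclose(accumulate(curr, prev, np.zeros((h, w, 2))), truth)
[/code]

With zero motion vectors (a static scene) the two half-renders recombine into the exact full-information frame; with motion, the quality of the motion vectors and the history-rejection heuristics is where the artifacts come from.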

What DLSS does is combine the temporally accumulated and projected information with additional spatial data such as the depth buffer, as well as the original motion vectors, and feed it all into a large CNN. This CNN not only applies state-of-the-art spatial upscaling, but does so on an image that is already far more detailed than the raw rendered frame. More than that, the CNN is trained not only to upscale a perfect image, but also to correct for some of the motion artifacts that temporal accumulation & projection introduce. The model learns motion-related distortion from the accumulated spatial image and the associated motion vectors, effectively learning a distortion function. The depth buffer and other spatial data are used by the model to better enhance edges, as a form of morphological AA but without an explicit AA algorithm. The actual implementation may well consist of several DL models; it could easily be one that corrects for motion distortion, with the output fed into a more standard image-reconstruction network.
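
Since DLSS itself is unpublished, the best I can offer is a hedged sketch of the kind of input stack described above: accumulated colour, depth and motion vectors concatenated channel-wise into a CNN that outputs a corrected image. Everything here (names, sizes, the residual design) is my own assumption for illustration:

[code]
# Hedged sketch of the input stack described in the post: accumulated
# colour, depth and motion vectors concatenated channel-wise into a CNN
# that outputs a corrected image. This mirrors the description above,
# not NVIDIA's actual (unpublished) network; every name and size is an
# assumption made up for illustration.
import torch
import torch.nn as nn

class ReconstructionNet(nn.Module):
    def __init__(self):
        super().__init__()
        # 3 colour + 1 depth + 2 motion-vector channels = 6 inputs.
        self.body = nn.Sequential(
            nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, accumulated_rgb, depth, motion_vectors):
        x = torch.cat([accumulated_rgb, depth, motion_vectors], dim=1)
        # Predict a residual, so the network spends its capacity on
        # corrections (motion distortion, edges) rather than redrawing
        # the already-detailed accumulated image from scratch.
        return accumulated_rgb + self.body(x)

net = ReconstructionNet()
rgb   = torch.rand(1, 3, 270, 480)  # temporally accumulated colour
depth = torch.rand(1, 1, 270, 480)  # depth buffer
mv    = torch.rand(1, 2, 270, 480)  # motion vectors
print(net(rgb, depth, mv).shape)    # torch.Size([1, 3, 270, 480])
[/code]

The residual formulation is just one plausible design choice: it lets the network learn corrections on top of the accumulated image rather than re-synthesising the whole frame.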

In general, the use of varied ML models is quite promising; DLSS is just one application. We can imagine DL models used to do things like improve light simulation and ray tracing, improve character model motion and dynamics, power advanced post-processing effects that make e.g. water look more realistic, improve facial animation, and synthesise voices (including your own).
Procedural texture and polygon-model generation can move to new levels.
 

Indeed, if this technology is actually half decent, some more game devs and engines will support it.
 

The mighty neural network was unable to solve the ghosting in games, which should be easy: if any AI was involved, the AI "knows" the shape of the car and would only display its edges from the last frame.
And I suspect they've fixed most of it now by dumping most of the temporal data and focusing more on the spatial reconstruction, because people are seeing big improvements even in Death Stranding, and we were told the motion vectors in that game were bad, which is why it had a lot of ghosting. Yet the mighty neural network only managed to get rid of most of the ghosting after TSR was released and just before FSR is released.
Plus, what you describe here looks a lot like per-game training, but we are told DLSS 2.0 doesn't work that way anymore: you feed images from CP2077 into the neural network and that makes Metro Exodus look better. :)
 