
Fidelity Super Resolution in 2021

In the article it states "Finally, AMD says that the deep learning approach does not take into account the aspects of the original image, which may lead to lost color or details in the final image."
 

Alrighty then.

Guess we all just need to see how well it does.

I am hoping it's decent. I'll soon have a 4K monitor, so I'll need this to push games with high settings.
 
If it is any good, then the question is how AMD and Nvidia will fool players into buying each of their next generations of cards. If you already have a card capable of doing "4K" 60, that is enough.
DLSS is a good feature for sales, but it works against the planned obsolescence these companies rely on. And maybe you can manipulate things when it's only DLSS, since it is proprietary tech, but if you have an open-source version that does the same thing...
 
Add a smear percentage. As the card gets older you crank up the smear percentage in drivers :D
 
In the article it states "Finally, AMD says that the deep learning approach does not take into account the aspects of the original image, which may lead to lost color or details in the final image."

Then it says:

AMD came up with a solution that uses linear and non-linear upscaling technology that will preserve and improve the fidelity of the image. It is also said that the technology will “create high-quality image approximations and achieve high framerates”.
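Just to make the "linear and non-linear upscaling" wording concrete, here is a rough sketch of what combining the two could look like. This is purely an illustration of the general idea, not AMD's actual algorithm: a plain bilinear (linear) upscale followed by an unsharp-mask edge-contrast pass whose final clip makes it non-linear. All function names and parameters here are invented for the example.

```python
import numpy as np

def bilinear_upscale(img, scale=2):
    """Linear part: plain bilinear interpolation with edge clamping."""
    h, w = img.shape
    ys = (np.arange(h * scale) + 0.5) / scale - 0.5
    xs = (np.arange(w * scale) + 0.5) / scale - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    fy = np.clip(ys - y0, 0.0, 1.0)[:, None]  # clamp at the borders
    fx = np.clip(xs - x0, 0.0, 1.0)[None, :]
    tl = img[y0][:, x0]      # four neighbouring low-res samples
    tr = img[y0][:, x0 + 1]
    bl = img[y0 + 1][:, x0]
    br = img[y0 + 1][:, x0 + 1]
    top = tl * (1 - fx) + tr * fx
    bot = bl * (1 - fx) + br * fx
    return top * (1 - fy) + bot * fy

def sharpen(img, amount=0.5):
    """Non-linear-ish part: unsharp mask; the clip makes it non-linear."""
    blur = img.copy()
    blur[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                        img[1:-1, :-2] + img[1:-1, 2:] +
                        4 * img[1:-1, 1:-1]) / 8.0
    return np.clip(img + amount * (img - blur), 0.0, 1.0)

low = np.random.rand(4, 4)          # stand-in for a low-res frame
high = sharpen(bilinear_upscale(low, 2))
print(high.shape)  # (8, 8)
```

Real spatial upscalers do something far more sophisticated at the edge-detection step, but the two-stage "interpolate, then non-linearly restore edge contrast" structure matches the description quoted above.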
 
F. Azor said in that interview that everybody compares DLSS screenshots when they check IQ, but no one looks at IQ while in motion. I noticed that too; in motion, DLSS looks like the Radeon Boost feature. :)
 
When DLSS was getting hammered, I was saying: but you don't stop playing in the middle of a game (or pause it), then shove your face right up to the screen and go, oh yeah, that tree (or whatever) does look a bit blurry, doesn't it :p
 
Depends on the game; in Battlefield V it was very noticeable during gameplay. When I was using a tank I’d see people running along as infantry with a weird sort of blurred afterimage behind them.
 
You drive a car in the game, pass near a tree and say: what was that? Then you stop the car and take a look at the tree and it looks fine; it only looked blurry while the car was moving. :D
Anyway, if the "news" is true, then FSR is also based on ML and neural networks.
 
F. Azor said in that interview that everybody compares DLSS screenshots when they check IQ, but no one looks at IQ while in motion. I noticed that too; in motion, DLSS looks like the Radeon Boost feature. :)

Here @ 56 min:
Static images vs in motion images:
And he is right.

There is a problem with motion, but it is certainly nowhere near as bad as Radeon Boost... It also comes down largely to the game in my experience; perhaps the fps has an effect on it all too, not to mention the panel tech being used, e.g.

- I notice the motion issues more on my OLED than my 144Hz IPS, because OLED pixel response is something like 0.000001ms whereas 144Hz IPS is around 4ms and already has ghosting issues inherent to the panel tech.

- Cyberpunk's motion issues are pretty noticeable, whereas in Control and Metro I find it very hard to see them unless I'm deliberately looking for them and pixel peeping.

Personally I use DLSS on Quality even when fps are already high, because it is by far the best form of AA, as shown in screenshots and videos.


I'm still going with the below as the outcome:

- dlss offering a bigger jump in perf. whilst offering better IQ overall
- FSR not having any motion issues but offering less of a performance increase
 
My only bet is that unless FSR is flawless (which I doubt), it will be trashed in many reviews like it is the worst thing on Earth (after Rocket Lake :D).
I think FSR is a good thing for consoles, especially the PS5. Being able to play every future PS5 exclusive at "4K" 60 FPS will be a good thing. Many PS5 games are in a different league, and the devs will take advantage of every feature that can help them improve performance.
On PCs, budget gamers will love it and the well-off will still buy every new top-of-the-line card from AMD/Nvidia. But we will see a lot of fights: DLSS is better - no, FSR is better; or DLSS is at 2.1 while FSR is only 1.0, so it is worse; or DLSS uses tensor cores while FSR uses no dedicated hardware, so it is worse. :D
 
FSR leaked demo from my sources:
better than native!
 

One thing I don't like about DLSS: if you are driving along and intermittent vertical objects such as fence posts pass through the foreground, some background details will be inconsistent between when you lose sight of them and when they next reappear, i.e. a bush might be in the same place in the background but some subtle details will be different (which personally I'm very sensitive to).

I still think the AMD solution will be a variant of temporal upscaling with some basic "AI", similar to how high-end projectors do it, rather than the AI approach of DLSS.
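For what the temporal-upscaling guess would mean in practice, here is a toy sketch: accumulate sub-pixel-jittered low-res frames into a high-res history buffer with an exponential blend. This is purely illustrative and not any vendor's actual method; the function names, blend factor, and jitter scheme are all invented for the example, and real implementations add motion-vector reprojection and history rejection (which is exactly where the ghosting complaints above come from).

```python
import numpy as np

def temporal_accumulate(history, lowres_frame, jitter, scale=2, alpha=0.1):
    """One toy temporal-upscaling step: scatter the jittered low-res
    samples onto the high-res grid and blend them into the history."""
    H, W = history.shape
    h, w = lowres_frame.shape
    out = history.copy()
    jy, jx = jitter  # sub-pixel camera offset in low-res units, in [0, 1)
    # Each low-res sample covers one high-res pixel this frame; the
    # jitter decides which pixels get fresh information.
    ys = np.clip(((np.arange(h) + jy) * scale).astype(int), 0, H - 1)
    xs = np.clip(((np.arange(w) + jx) * scale).astype(int), 0, W - 1)
    idx = np.ix_(ys, xs)
    out[idx] = (1 - alpha) * history[idx] + alpha * lowres_frame
    return out

# Over many jittered frames the history converges toward the scene:
rng = np.random.default_rng(0)
hist = np.zeros((8, 8))
for _ in range(100):
    frame = np.full((4, 4), 0.75)  # constant "scene" for the demo
    hist = temporal_accumulate(hist, frame, jitter=rng.random(2))
print(hist.shape)
```

The appeal of this family of techniques is that it needs no trained network or dedicated hardware; the cost is that anything that invalidates the history (fast motion, disocclusion) shows up as smearing or flicker.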
 