FidelityFX Super Resolution in 2021

Just need to see developers support it.

So true. A lot has been made of its lightweight implementation and of AMD partnering with studios that have big games coming out this year (thinking EA, BF2042 + potentially more). Combined with the GPU shortage, we'll hopefully see people with cards that need an upgrade actually using the tech for themselves before the year is out.

Can easily see FSR taking the same route as FreeSync, with wide adoption in the longer term.
 
Proven that this isn't just a linear scaler like bicubic with a sharpening filter, like D.P keeps posting around.

There is actual work being done within the rendering pipeline to fine-tune the pixels and bring the quality back up from a lower res to something close to actual render res.

Still just a spatial scaler with a few adaptive processes added on - kind of like a reverse DSR. It is basically the kind of upscaling found in higher-end projectors, which is nice and all, but at the end of the day, compared to the better temporal methods (I don't count the Unreal one as it shimmers like crazy - others do far better on that front), it gives a far lower performance uplift for the same image quality. I hope this is just the first step.
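To put "spatial but adaptive" into something concrete, here is a toy sketch of the general idea (my own illustration, nothing to do with AMD's actual shader code): instead of blindly averaging neighbours the way bilinear/bicubic does, an edge-adaptive scaler picks the interpolation direction per pixel so hard edges don't get smeared across.

import numpy as np

def adaptive_upscale_2x(img):
    """Toy edge-adaptive 2x upscale of a grayscale float image.
    Plain averaging everywhere, except that diagonal 'hole' pixels are
    interpolated along whichever diagonal varies least, so hard edges
    stay crisp instead of being blurred across."""
    h, w = img.shape
    out = np.zeros((2 * h, 2 * w), dtype=float)
    out[::2, ::2] = img                             # keep the original samples

    # diagonal positions: interpolate along the flatter of the two diagonals
    for y in range(h - 1):
        for x in range(w - 1):
            a, b = img[y, x], img[y + 1, x + 1]     # one diagonal pair
            c, d = img[y, x + 1], img[y + 1, x]     # the other diagonal pair
            if abs(a - b) < abs(c - d):
                out[2 * y + 1, 2 * x + 1] = (a + b) / 2
            else:
                out[2 * y + 1, 2 * x + 1] = (c + d) / 2

    # horizontal / vertical positions: simple averages of the known neighbours
    out[::2, 1:-1:2] = (img[:, :-1] + img[:, 1:]) / 2
    out[1:-1:2, ::2] = (img[:-1, :] + img[1:, :]) / 2
    out[:, -1] = out[:, -2]                         # repeat the last column
    out[-1, :] = out[-2, :]                         # repeat the last row
    return out

A real implementation obviously does far more (Lanczos-style kernels, gradient analysis, the sharpening pass), but the point stands that it only ever looks at the current frame, which is why the temporal approaches can pull ahead.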
 
So, uh, are we going to stop hearing the usual attempts to talk it down now that it's out for use and clearly better than the scaremongering was claiming?

No, look above you: people will just make pseudo-expert statements that don't agree with the actual results and call them fact - alternative facts, as the case may be. Basically, "don't look at it, don't observe it, just listen to me".

Like propagandists coping and raging against observable facts.
 
So, uh, are we going to stop hearing the usual attempts to talk it down now that it's out for use and clearly better than the scaremongering was claiming?

Don't get why people are getting excited about it personally - I guess due to ignorance. At the same performance uplift as the Ultra Quality setting, a decent temporal implementation will give pretty much identical image quality to native, with none of the softening seen with FSR. With the same image quality decrease as the Ultra Quality setting, a good temporal implementation will see around double the performance uplift compared to native.
 
Hang on, where's everyone who was crying that it wouldn't work on Nvidia cards because they believed Nvidia wouldn't put any work towards it?

I've just seen it running fine on a 2070 super and IIRC Nvidia still hasn't put any work in.
 
Just added ReShade CAS to Riftbreaker and it works with FSR to make it much better imo. AMD set the sharpening too low, which is why everyone sees a softer image than native. What were they thinking? :confused:

I set CAS to 0.6 on both sliders in ReShade for the FSR Ultra Quality screenshot. No CAS applied to the native one.

[Three comparison screenshots: FSR Ultra Quality with CAS 0.6 vs native, omitted]
 
AMD set the sharpening too low, which is why everyone sees a softer image than native. What were they thinking?

Probably because, across a wider range of scenarios, you get black haloing in places if you turn the sharpening up too much, even with an adaptive system.
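For anyone wondering what "adaptive" buys you here, a rough sketch of the idea (my own approximation in Python, not AMD's actual CAS maths): the per-pixel sharpening weight shrinks where local contrast is already high, and the result is clamped to the local min/max, but push the strength far enough and dark pixels still get crushed towards the local minimum, which is exactly the black haloing.

import numpy as np

def cas_like_sharpen(img, strength=0.6):
    """Rough contrast-adaptive sharpening sketch for a grayscale image in [0, 1].
    Illustrative only - not the real CAS shader."""
    pad = np.pad(img, 1, mode="edge")
    up, down = pad[:-2, 1:-1], pad[2:, 1:-1]
    left, right = pad[1:-1, :-2], pad[1:-1, 2:]
    center = img

    local_min = np.minimum.reduce([up, down, left, right, center])
    local_max = np.maximum.reduce([up, down, left, right, center])

    # adapt: back off the sharpening where the neighbourhood already spans
    # most of the available range, since that is where halos appear first
    headroom = np.minimum(local_min, 1.0 - local_max)
    amount = strength * np.clip(headroom / np.maximum(local_max, 1e-5), 0.0, 1.0)

    # negative-lobe sharpen: push the centre away from the neighbour average
    sharpened = center + amount * (4 * center - (up + down + left + right))

    # clamping limits overshoot, but a high strength still crushes darker
    # pixels toward local_min along edges - i.e. black haloing
    return np.clip(sharpened, local_min, local_max)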
 
FSR is like a Chinese solution tbh. Cheap and without great potential, but it does an OK-ish job and will probably rule the world in the next few years. :D
 
FSR is like a Chinese solution tbh. Cheap and without great potential, but it does an OK-ish job and will probably rule the world in the next few years. :D

It's that way because it's what developers asked for. They didn't want a solution like DLSS 2.0, which is smarter but harder to implement: it's more embedded in the rendering pipeline, limited to a small subset of cards from one manufacturer, and requires submitting high-res images into the Nvidia black box for pre-processing.

Devs asked for a quicker, simpler, easier-to-implement solution that works on far more systems with much less effort. For a first attempt, FSR works well within the limits of the requirements, and is likely to get better as it gets refined.
 
Don't get why people are getting excited about it personally - I guess due to ignorance.
I don't know too many things about neural networks and upscaling, but can you please tell me what the neural network learns from all those high-quality pics it is trained with? Exactly what part does the AI play in the reconstruction? Because I was expecting something else tbh; when I look at a DLSS image I can see it has more detail, but I don't know if that is something that could not be done through temporal data without any AI involved.
For me, AI means inference - being able to "guess" what is missing from a picture, like reconstructing letters and getting rid of ghosting because it understands the shape of a car or a motorcycle.
 
I don't know too many things about neural networks and upscaling, but can you please tell me what the neural network learns from all those high-quality pics it is trained with? Exactly what part does the AI play in the reconstruction? Because I was expecting something else tbh; when I look at a DLSS image I can see it has more detail, but I don't know if that is something that could not be done through temporal data without any AI involved.
For me, AI means inference - being able to "guess" what is missing from a picture, like reconstructing letters and getting rid of ghosting because it understands the shape of a car or a motorcycle.

TBH calling it AI is a bit deceptive - it is basically just training the system over and over with more and more data until you arrive at a model which produces a close enough outcome from lower-resolution input. It doesn't work quite like this, but the easiest way to explain it is that it looks for patterns in the low-resolution input, then uses the closest reference matches to build intermediate detail which mimics their style - it is generally just nonsense output, but a close enough mimic of the original that no one really notices.
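If it helps, here's a toy sketch of what "training the system over and over" looks like in practice (purely illustrative - nothing like the real DLSS network or its training data, and it assumes PyTorch is installed): take high-res frames, downscale them, and keep nudging a small network until its output from the low-res version is close enough to the original.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySRNet(nn.Module):
    """Hypothetical toy super-resolution network."""
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, low_res):
        # naive upscale first, then let the conv layers predict the missing detail
        up = F.interpolate(low_res, scale_factor=self.scale,
                           mode="bilinear", align_corners=False)
        return up + self.body(up)

model = TinySRNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(1000):                       # "over and over with more data"
    high_res = torch.rand(4, 3, 64, 64)        # stand-in for real ground-truth frames
    low_res = F.interpolate(high_res, scale_factor=0.5,
                            mode="bilinear", align_corners=False)
    loss = loss_fn(model(low_res), high_res)   # "close enough outcome" = low loss
    opt.zero_grad()
    loss.backward()
    opt.step()

At inference time the model just pattern-matches the low-res input against whatever it absorbed during training and fills in plausible detail - which is the "nonsense output that mimics the original" point above.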
 
I didn't notice DF saying anything worse than any other review. Indeed, one review mentioned that in Anno 1800:
General loss of tree and foliage quality.
Overall loss of sharpness in the image. It almost looks as if there is a little bit of Vaseline poured over it.
You can see some shimmering around the edge of the ships.
People look like blobs.

Good time for the conspiracy theorists with UFOs hitting the headlines again :p
 
TBH calling it AI is a bit deceptive - it is basically just training the system over and over with more and more data until you arrive at a model which produces a close enough outcome from lower-resolution input. It doesn't work quite like this, but the easiest way to explain it is that it looks for patterns in the low-resolution input, then uses the closest reference matches to build intermediate detail which mimics their style - it is generally just nonsense output, but a close enough mimic of the original that no one really notices.


Everything gets called AI today, when in fact AI does not exist and never has.
 
Eesh, OK, so my thoughts on this after watching the Digital Foundry comparison video by Alex, who right now I consider one of the best people for an eye for detail and technical breakdown of visuals: https://www.youtube.com/watch?v=xkct2HBpgNY

Right off the bat, I think they do a better job than I was expecting, given the constraints of what they're dealing with: a single frame, essentially post-processing upscaling, without access to temporal frames or much other data. It's certainly a win over general-purpose upscaling techniques in terms of quality, so that's very positive. It does seem like the focus is more on edges than on something like texture detail, and this makes sense, because edges of straight lines have patterns you can easily detect/predict, whereas internal texture detail is complex and can vary so much that enhancing it with post-processing alone is going to be impossible without some form of additional data (from DL or subsamples or whatever). So it really feels like more of an aid to anti-aliasing than anything else.

I do think that vs DLSS 2.x Nvidia has the edge, most noticeably in the more aggressive modes; this is where the deep learning really helps clean up textures and not just edges. Whereas the higher-quality 1440p to 4K upscaling looks somewhat comparable between the two and seems to be the core strength of FSR right now.

This leads me into use cases, which is tricky... to me it was always obvious that DLSS was a sister technology to ray tracing. RT was Nvidia's core goal, and they knew that rendering it at more than 1080p was an impossibility. Adoption of RT was only ever going to happen if gamers could maintain their lovely 1440p or 4K resolutions, so DLSS was really invented to ease RT adoption. That comes across in DLSS use cases: typically you're taking games with RT at an internal res of 1080p and getting them up to 1440p or 4K. With DLSS 2.x at least, the upscaling from 1080p is good enough to do this. It seems to be a sensible trade-off for most people, I think, because of the improvement RT brings. However, I'm not convinced it'll be commonly used anywhere else outside of getting RT playable; I certainly don't use it otherwise.

This is where I think this'll be a problem for AMD and FSR. They did not push RT hard in the current gen, instead going for rasterization wins; they wanted reviewers to basically avoid it and treat the cards like more traditional rasterization cards, because the performance just wasn't there. But I do expect them to move towards more RT cores in the next generation to catch up with Nvidia, and that'll mean a push to more widespread RT adoption and tackling the same problem as Nvidia. But the weakness of FSR is that it's pretty bad at 1080p -> 4K in terms of final quality, barely above regular upscaling it seems. So what I believe will become the most common use case and primary reason for FSR in the next gen will struggle to compete. I'd be really interested to see usage data for both DLSS and FSR (as it becomes adopted), and how many people are playing without RT enabled but using upscaling of some kind - what % of people and what use cases are most common. My gut feeling is that it's close to zero, but it would be cool to see some real data on this rather than just speculating.

I could have sworn that FSR was aimed at general support across all games? Maybe I'm not remembering that correctly? That would have been a much bigger win for AMD over DLSS. But that opens up the question of integration into games: as Alex showed in his video, engine-specific upscaling can be significantly better for the same performance cost. So, for example, if you're making an Unreal Engine game, why would you bother integrating FSR when the engine's native upscaling is better?

The hope, of course, is that they do what Nvidia did, which is continue to increase the quality over time - we want competition in this space. However, I have a sneaking feeling this won't happen... Nvidia got wins because they could keep training the ML model over time, but with this kind of more basic post-processing it seems harder to get decent wins, the same way we've not really seen something like FXAA or SMAA improve over the years.
 
TBH calling it AI is a bit deceptive - it is basically just training the system over and over with more and more data until you arrive at a model which produces a close enough outcome from lower-resolution input. It doesn't work quite like this, but the easiest way to explain it is that it looks for patterns in the low-resolution input, then uses the closest reference matches to build intermediate detail which mimics their style - it is generally just nonsense output, but a close enough mimic of the original that no one really notices.

For example, look at this image: the top is 1440p and the middle is 1440p DLSS Quality. And yes, it has more detail in the middle, but it is still useless - it is something you would expect from temporal upscaling alone; you don't need AI for that.
Meanwhile, the rest of the image is arguably worse than native.

https://twitter.com/CapFrameX/status/1407053897924067338/photo/1

The bottom is native something, it doesn't matter.


Everything gets called AI today, when in fact AI does not exist and never has.
HAL 9000 is real. :p
 