
Fidelity Super Resolution in 2021

I see the graphics card sub-forum is living up to its reputation as the most wretched hive of scum and villainy on the OcUK forums. I'd be cautious if I were you. :D

In other news, free new technology being applied to existing graphics cards is now bad in some way, a bit like that guy on the PC Games sub-forum complaining that the free Epic games weren't a good deal. This place is hilarious. :p
 

lol, that sick burn. Raja is trying to be nice, so it needs translating:

We will take a look at FSR since it's open source, but at the end of the day our GPUs are designed for AI deep learning, so our preference is for deep learning methods like DirectML, as these improve performance and provide better visual quality than non-deep-learning methods such as FSR.
 


That's not what Steve at GN said at all. Why did you make that up?
 
What do Steve or GN have to do with anything? We're talking about the Raja/Intel tweets.

Your fake translation - you made it up, which also proves that you didn't watch Steve at GN.

Intel are supporting AMD and FSR; GN have an entire segment on it.
 
lol, that sick burn. Raja is trying to be nice, so it needs translating:

We will take a look at FSR since it's open source, but at the end of the day our GPUs are designed for AI deep learning, so our preference is for deep learning methods like DirectML, as these improve performance and provide better visual quality than non-deep-learning methods such as FSR.

If the OcUK Dons want to find the root cause of the trouble within AMD threads on this forum, look no further.

It's getting boring now, guys; the lies and pitchforks have been out since the announcement of FSR.

It's posts like this that send threads downhill and get users banned.
 
Your fake translation - you made it up, which also proves that you didn't watch Steve at GN.

Intel are supporting AMD and FSR; GN have an entire segment on it.


Where is your evidence that Intel is supporting FSR?

All we have is a tweet saying Intel will look at it, but that Intel's hardware is well suited to a deep-learning-based image reconstruction technique. That is clear evidence Intel are working on their own DLSS equivalent. They might support FSR as well, or they might release their own DLSS alternative as open source running on DirectML.
 
If the OcUK Dons want to find the root cause of the trouble within AMD threads on this forum, look no further.

It's getting boring now, guys; the lies and pitchforks have been out since the announcement of FSR.

It's posts like this that send threads downhill and get users banned.


Can you point out one thing that is wrong in the post?

Raja's post is public and written in plain English. It clearly states Intel hardware is good for deep-learning-based super resolution.

Either Raja believes FSR uses deep learning, and therefore FSR will work well on Intel, or Raja is stating their in-house solution uses deep learning (so that would be a preference over FSR).

AMD have told several people that FSR doesn't use deep learning. This could be another marketing blunder. Personally, I find it hard to believe AMD would even bother releasing an image scaler that doesn't use either deep learning or temporal accumulation. Information theory puts a very hard limit on what is possible.
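
To put a rough number on that limit (illustrative arithmetic only): going from 1440p to 4K, a spatial-only upscaler has to infer more than half of the output pixels, because they were never rendered in the first place.

Code:
# Illustrative arithmetic: share of a 4K frame that a 1440p -> 4K
# upscaler must infer rather than sample.
src = 2560 * 1440   # pixels rendered at 1440p
dst = 3840 * 2160   # pixels displayed at 4K

print(f"rendered:  {src:,}")              # 3,686,400
print(f"displayed: {dst:,}")              # 8,294,400
print(f"inferred:  {1 - src / dst:.0%}")  # ~56% of output pixels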
 

Listen, D.P., I'm not getting into this trash talk about this and that, is it this or is it that, etc.

Why not just wait till it's released on June 22nd?
Let it be reviewed, and then you will have more information about what this is and isn't, and what it can do to improve going forward.

I already have my thoughts:
1. It's going to be decent when running the Ultra Quality setting going from 1440p to 4K
2. The settings below that will look a bit worse
3. It's not going to be DLSS 2.0 quality, but it won't be far off using Ultra Quality
 
To be honest, if I can use it on the best quality setting at 1440p and it gains 10-30 fps with no side effects, it's a winner.


Every little helps
 
Listen, D.P., I'm not getting into this trash talk about this and that, is it this or is it that, etc.

Why not just wait till it's released on June 22nd?
Let it be reviewed, and then you will have more information about what this is and isn't, and what it can do to improve going forward.

I already have my thoughts:
1. It's going to be decent when running the Ultra Quality setting going from 1440p to 4K
2. The settings below that will look a bit worse
3. It's not going to be DLSS 2.0 quality, but it won't be far off using Ultra Quality


Why enter the thread and have a hissy fit at a perfectly legitimate post? There is some ambiguity about exactly what Koduri meant, but it is undeniable that he talked about DL-based techniques and, conversely, provided no evidence Intel would support FSR, only that they would look at it.



Why wait until June the 22nd when there are known facts and released information now? To me, it is more valuable to discuss the facts, as I summarized earlier, than to make pointlessly wild predictions about FSR being close to DLSS 2.0 without any supporting evidence. That is just a guessing game. Of course we would all like FSR to give DLSS a run for its money, but why not wait until the 22nd before making such claims?

The currently released Ultra Quality screenshots look very poor. That is a fact. So we have to hope that AMD made some kind of mistake in distributing these images.
 

Because all he has done is post lies, right from his first post after the announcement.
So enough is enough; it's clear what he is doing, and that is dragging this thread down with lies and trash talk.

Again, known facts? What facts? Some footage was shown with a release date; how this actually works is still unknown. If it were such a simple method it would be done at the driver level; the fact that it needs dev support tells you there is a lot more to this.

Ultra Quality looks poor? You got all that information from a short little clip?

It looked OK to me; I will need to see more before I would call it very poor.
 



I don't want to get dragged into a debate about certain posters.

However, there are a lot of known facts surrounding FSR, DLSS, TSR and the state of the art of image scaling.
For example:
  • Deep learning is the state of the art in static image upscaling; no other algorithm or technique comes close
  • Temporal accumulation provably works very well because it combines more information than a single low-resolution image contains. TSR shows this well
  • AMD told AnandTech that FSR does not use temporal accumulation
  • If FSR does not use temporal accumulation then it is not a replacement for TAA, and therefore the in-game TAA implementation will still be needed
  • Temporal accumulation requires a much tighter engine integration to get motion vectors, which is likely why AMD has not gone down that route
  • AMD provided a press kit with uncompressed images, and they look very poor.

The only thing that is somewhat speculative is whether FSR uses deep learning. If FSR doesn't use some kind of non-linear inference, then it is stuck with the existing linear interpolation techniques that have been used for the last four decades (a minimal sketch of that class of pipeline is at the end of this post).

The above points frame FSR's capabilities and potential quite tightly.
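
To make that concrete, here is a minimal sketch of a purely spatial pipeline: a bicubic resample followed by a sharpening pass, written with Pillow. To be clear, this only illustrates the class of technique; it is not AMD's actual algorithm (which has not been published), and the file names are made up.

Code:
# Minimal sketch of a spatial-only upscale: bicubic resample plus an
# unsharp-mask pass. Illustrates the technique class only -- NOT FSR.
from PIL import Image, ImageFilter

def spatial_upscale(path_in: str, path_out: str, scale: float = 1.5) -> None:
    img = Image.open(path_in)
    # Bicubic interpolation: the classic linear-filtering baseline.
    up = img.resize((int(img.width * scale), int(img.height * scale)),
                    resample=Image.BICUBIC)
    # Unsharp mask to restore some apparent edge contrast.
    up = up.filter(ImageFilter.UnsharpMask(radius=2, percent=120, threshold=2))
    up.save(path_out)

# Hypothetical file names, purely for illustration.
spatial_upscale("frame_1440p.png", "frame_4k.png")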
 


Definitely looking at it"
Looking at it, but no confirmation of support. TBH, way too early for Intel to make such a decision.

"...the DL capabilities of Xe HPG architecture do lend to approaches that achieve better quality and performance."
This is ambiguous. Raja could mean that since FSR uses deep learning, and Intel Xe is good at DL, Xe should perform well with FSR. But there is zero evidence that FSR uses deep learning. One can just as easily read this statement as meaning Intel have a deep learning alternative to FSR that would achieve better quality than a solution using only spatial filtering.

"We will definitely try to align with open approaches to make ISVs' job easier..."
With this statement, we can see that Intel will release software as open source as well. So game developers will likely have a choice of upscaling technologies.



My interpretation of this is that Intel will support FSR if FSR uses deep learning; otherwise Intel have their own DL-based solution which they will release as open source, probably something based on DirectML.
 
To recap why deep learning is extremely relevant to super resolution, it is worth looking at the results that can be achieved with a static image.

https://medium.com/analytics-vidhya/a-review-on-super-resolution-2c78cd77885a



Bicubic is essentially the best that standard spatial linear filtering can achieve; it is what TVs and GPUs use for scaling. You can improve the image a little with a sharpening filter. The middle two examples in that article use deep learning, and they aren't even the state of the art.

And the reason temporal accumulation is powerful is that it samples additional information. Imagine a game engine that rendered alternate scan lines: on frame 1 it renders the even rows, and on frame 2 the odd rows. After frame 2 you have the complete information to render the full scene with nothing missing - under the important caveat that the scene and camera are static. Temporal accumulation methods don't use scan lines but jittered sub-pixel sampling patterns, much like MSAA's sample offsets. The trick with temporal methods is that they have to account for motion, which is achieved using motion vectors and prediction of pixel movement.
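
A toy version of that scan-line thought experiment, assuming a perfectly static scene so no motion vectors are needed:

Code:
# Toy temporal accumulation: frame 1 renders the even rows, frame 2 the
# odd rows; interleaving them reconstructs the full image exactly --
# but only because nothing moved between the two frames.
import numpy as np

rng = np.random.default_rng(0)
full = rng.random((8, 8))     # the "true" full-resolution frame

frame1 = full[0::2, :]        # even rows, rendered on frame 1
frame2 = full[1::2, :]        # odd rows, rendered on frame 2

recon = np.empty_like(full)
recon[0::2, :] = frame1
recon[1::2, :] = frame2

assert np.array_equal(recon, full)   # lossless reconstruction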

The reason DLSS 2.0 works so well is that it is a combination of both temporal accumulation of image data across multiple frames AND a state-of-the-art DL-based super resolution technique. It is rumoured that even the temporal component uses a separate deep learning model to better predict pixel movement from the motion vectors.
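
Put together, one frame of a DLSS-2.0-style loop looks conceptually like the sketch below. This is only a paraphrase of publicly described behaviour: every function here is a made-up stand-in (a trivial blend takes the place of the trained network), not NVIDIA's code or API.

Code:
# Conceptual sketch of a temporal + DL upscaling loop. The "network" is
# replaced by a trivial blend; all names are illustrative stand-ins.
import numpy as np

def warp(history, motion):
    # Reproject last frame's accumulated image using motion vectors.
    # Stand-in: shift the whole frame by the rounded mean motion.
    dy, dx = np.round(motion.mean(axis=(0, 1))).astype(int)
    return np.roll(history, shift=(dy, dx), axis=(0, 1))

def upscale_and_merge(low_res, warped_history, alpha=0.1):
    # Stand-in for the trained model: nearest-neighbour 2x upscale
    # blended into the history (exponential accumulation).
    up = low_res.repeat(2, axis=0).repeat(2, axis=1)
    return alpha * up + (1.0 - alpha) * warped_history

history = np.zeros((8, 8))                          # accumulated high-res output
low_res = np.random.default_rng(1).random((4, 4))   # this frame, at half res
motion = np.zeros((4, 4, 2))                        # per-pixel motion vectors (static scene)

history = upscale_and_merge(low_res, warp(history, motion))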
 