The reason these "buzzwords" get used is down to funding and marketing. It's not just "AI" but words such as "quantum", "nano", "smart" and so on, which get put onto mundane stuff.
Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
For example look at this image:
And this one, which is native 720p vs 1440p DLSS performance (because he thought it was fair to compare the two):
And again, more info on that thing in the middle with DLSS on, but it's still useless. Meanwhile, the quality of the RT reflection in the water is atrocious with DLSS on.
https://www.capframex.com/assets/static/imageslider.html?url1=https://cxblobs.blob.core.windows.net/images/FSR_article/dlss/Metro_Exodus_EE/scene_B/DLSS%202.2/B_Metro_Exodus_DLSS_performance.png&url2=https://cxblobs.blob.core.windows.net/images/FSR_article/dlss/Metro_Exodus_EE/scene_B/DLSS%202.2/B_Metro_Exodus_720p.png&title1=DLSS performance&title2=720p
Is there any reason why the RT reflections can't be upscaled like the rest of the image? I understand they start from a much lower number of rays, and with a feature like FSR they will remain lower res no matter what. But I thought the temporal data could also help the RT reflections to be reconstructed. Is it because the denoiser is also using temporal data, or what is the cause? Is it the same in Quake 2 with that TAA thing you keep talking about?
I posted this some pages ago:
Would need more examples for the quality of RT reflections - I've not noticed anything atrocious.
With Quake 2, reflections are the same quality as the rest of the world - though with either TAA or TU, for some reason, reflections of alpha/transparent surfaces shimmer like crazy (temporal hall-of-mirrors kind of stuff). I think that's because the developer hasn't really bothered to implement a solution there, as the stock maps don't really have surfaces which show it, but if you create a custom map with highly reflective/mirror surfaces it is quite bad.
Main use of FSR as I see it is people on lower end hardware who'll just be happy to get a balance where they get reasonable frame rates and are resigned to the fact they are going to be compromising somewhere.
Eesh, OK, so my thoughts on this after watching the Digital Foundry comparison video by Alex, who right now I consider to be one of the best people with an eye for detail and technical breakdown of visuals. https://www.youtube.com/watch?v=xkct2HBpgNY
Right off the bat, I think they do a better job than I was expecting, given the constraints of what they're dealing with: a single frame, using essentially post-processing upscaling, without access to temporal frames or much other data. It's certainly a win over general-purpose upscaling techniques in terms of quality, so that's very positive. It does seem like the focus is more on edges than on something like texture detail, and this makes sense, because the edges of straight lines have patterns you can easily detect/predict, whereas internal texture detail is complex and can vary so much that enhancing it with post-processing alone is going to be impossible without some form of additional data (from DL or subsamples or whatever). So it really feels like more of an aid to anti-aliasing than anything else.
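To make the edges-vs-texture point concrete, here's a toy sketch in Python (everything in it - the function name, the gradient threshold, the sharpen weight - is my own invented illustration, not FSR's actual algorithm): enlarge the image, then sharpen only where a simple gradient test says there's an edge. Interior texture gets no new information, which is roughly why a single-frame upscaler helps edges far more than detail.

```python
def upscale_edge_aware(img, scale=2):
    """Toy spatial upscaler: nearest-neighbour enlarge, then apply an
    unsharp mask only near edges (where the gradient is large).
    Flat/texture interiors are left untouched, so they stay soft."""
    h, w = len(img), len(img[0])
    # 1) nearest-neighbour enlarge
    big = [[img[y // scale][x // scale] for x in range(w * scale)]
           for y in range(h * scale)]
    H, W = h * scale, w * scale
    out = [row[:] for row in big]
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            gx = big[y][x + 1] - big[y][x - 1]   # horizontal gradient
            gy = big[y + 1][x] - big[y - 1][x]   # vertical gradient
            if gx * gx + gy * gy > 0.1:          # edge detected
                # unsharp mask: push the pixel away from the local mean
                mean = (big[y][x - 1] + big[y][x + 1] +
                        big[y - 1][x] + big[y + 1][x]) / 4.0
                out[y][x] = big[y][x] + 0.5 * (big[y][x] - mean)
    return out

# A hard vertical edge, upscaled 2x: edge pixels over/undershoot
# (they get "enhanced"), flat areas come through unchanged.
img = [[0.0, 0.0, 1.0, 1.0]] * 4
out = upscale_edge_aware(img)
```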
I do think that vs DLSS 2.x, Nvidia has the edge, most noticeably in the more aggressive modes; this is where the deep learning really helps clean up textures and not just edges. Whereas the higher-quality 1440p-to-4K upscaling looks somewhat comparable between the two and seems to be the core strength of FSR right now.
This leads me into use cases, which is tricky... to me it was always obvious that DLSS was a sister technology to ray tracing. RT was the core goal for Nvidia, and they knew that rendering it at more than 1080p was an impossibility. Adoption of RT was only ever going to happen if gamers could maintain their lovely 1440p or 4K resolutions, so DLSS was really invented to ease RT adoption. That comes across in DLSS use cases: typically you're taking games with RT at an internal res of 1080p and getting them up to 1440p or 4K. DLSS 2.x at least has good enough upscaling from 1080p to do this. It seems a sensible trade-off for most people because of the improvement RT brings. However, I'm not convinced it'll be commonly used anywhere else outside of getting RT playable; I certainly don't use it otherwise.
This is where I think this'll be a problem for AMD and FSR. They did not push RT hard in the current gen, instead going for rasterization wins; they wanted reviewers to basically avoid it and treat the cards like more traditional rasterization cards, because the performance just wasn't there. But I do expect them to move towards more RT cores in the next generation to catch up with Nvidia, and that'll mean a push to more widespread RT adoption and tackling the same problem as Nvidia. The weakness of FSR, though, is that it's pretty bad at 1080p -> 4K in terms of final quality, barely above regular upscaling it seems. What I believe will become the most common use case and primary reason for FSR in the next gen will struggle to compete. I'd be really interested to see usage data for both DLSS and FSR (as it becomes adopted) and how many people are playing without RT enabled but using upscaling of some kind - what % of people, and which use cases are most common. My gut feeling is that it's close to zero, but it would be cool to see some real data on this rather than just speculating.
I could have sworn that FSR was aimed at general support across all games? Maybe I'm not remembering that correctly? That would have been a much bigger win for AMD over DLSS. But that opens up the question of integration into games: as Alex showed in his video, game-engine-specific upscaling can be significantly better for the same performance cost. So, for example, if you're making an Unreal Engine game, why would you bother integrating FSR when the engine's native upscaling is better?
The hope, of course, is that they do what Nvidia did, which is continue to increase the quality over time; we want competition in this space. However, I have a sneaky feeling this won't happen... Nvidia got wins because they could keep training the ML model over time, but with this kind of more basic post-processing it seems harder to get decent wins - the same way we've not really seen something like FXAA or SMAA improve over the years.
Be sure to check back once RDR2 gets DLSS.
Although I'm only targeting 1440p/60Hz, I will still use DLSS where possible as it keeps the temps down. I was surprised to see the GPU fans at idle when playing Control Ultimate Edition maxed out, although it does take a cool night and the window open.
My old laptop has a 1070 with a 1080p/144Hz Gsync panel which I cap to 60. I plan on using FSR on this just to keep the temps down as long as the quality doesn't degrade too much.
DF are on Nvidia's payroll.
Tim from HUB also has a great eye for detail. He commented on the shimmering: it's not something FSR adds, it's something that's more clearly visible if it's already there, especially if you use the lower-quality FSR settings.
It's easy to exaggerate it and point at it if you want to make something of it. Nothing is ever perfect, and with that attitude you can always find something if you want it to be there. They don't make half as much fuss about the trails and ghosting in DLSS, if they ever bring it up at all.
A few other reviewers did, but didn't bang on about it like a drum, because the ghosting is an artefact of DLSS that comes with the nature of it and it's not that big of a deal. That's just rational; it's not fulfilling the editorial will of a paymaster.
Stop it, I'm running out of signature space for your nail-on-the-head quotes.
Yep, Digital Foundry are going out of their way to ensure FSR has every opportunity to fail; it's almost like they have some agenda. If only we had some kind of proof of their close ties and funding from Nvidia... Like their Nvidia-sponsored "preview" of the RTX 3080 being "80% - 100% faster than an RTX 2080", when subsequent unbiased reviews showed the real difference to be 50%. I simply cannot fathom how people would take the word of proven liars and paid-for shills.
I mean, you have to be a special kind of idiot to think DF are even remotely objective. What is even more idiotic is basing your opinion on only this one outlier review, ignoring the other positive reviews, and even refusing to test FSR for yourself in a free demo.
Regarding DF and nvidia - people are hung up about the DF 3080 review in which Richard quite clearly stated "do not trust these figures". He stated many times that nvidia provided the cards and settings/games to test and that people should wait for the full analysis.
I'd be interested to know to what degree DF are on Nvidia's payroll, and the evidence for this; it's not something I've heard before. Either way, the depth of analysis that Alex provides is impressive. Most recently he went back to his WDL deep dive to address the claims of another forum user on ray tracing settings, which was invaluable. It's just a depth of analysis I can't find anywhere else. And his eye for things like pixel counting to determine the real resolutions of, say, console games using dynamic resolution is extremely high quality and welcome.
I'm still watching other videos on FSR, and the HUB one now; they seem to concur that high-res upscaling is good but that lower resolutions end up blurry and not very good. They also confirm my suspicion that it's more of an aid to AA and edge detection than a balanced upscaler. This explains why it does an OK job at near-native resolutions, while large leaps from low to high resolutions take a huge hit. The edges remain quite good throughout, but the texture detail inside poly faces suffers, and that's the harder part of upscaling - probably why Nvidia opted for an ML approach rather than a post-processing one.
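On the pixel-counting technique mentioned above: when a game renders internally at a lower resolution and scales up, hard edges turn into "stair steps" whose run length reveals the scale factor. A minimal sketch of the idea (my own toy illustration, assuming a simple nearest-neighbour upscale; in practice analysts do this by eye on carefully chosen edges):

```python
def estimate_scale(row):
    """Infer the upscale factor from one row of a nearest-neighbour
    upscaled image by measuring run lengths of identical values
    (the 'stair steps' that pixel counting relies on)."""
    runs, count = [], 1
    for a, b in zip(row, row[1:]):
        if a == b:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    # the most common run length ~= output_res / internal_res
    return max(set(runs), key=runs.count)

# A fine alternating pattern rendered at 960 px, displayed at 1920 px:
internal = [i % 2 for i in range(960)]
output = [v for v in internal for _ in range(2)]  # 2x nearest-neighbour
print(estimate_scale(output))  # -> 2
```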
Can you evidence these mistakes? I owe DF nothing, but I’ve always been impressed.
The issue is that he is basically arguing on multiple forums (ResetEra and Reddit) and in his video that UE TAAU is better than FSR. But he didn't bother testing UE TAAU when DLSS 1.0 was out (it was there from 2018), then didn't bother testing Ultra Quality FSR, and seemingly did not see any added shimmering with UE TAAU. The whole video was rather weird with regard to UE TAAU.
The issue is that KitGuru did test it in Godfall, and saw UE TAAU had a bit more sharpening but looked worse overall due to shimmering, and performed a bit worse. Then he dismissed it out of hand on another forum.
This is despite people pointing out that others had also done the comparisons. Then he called out HUB for some weird reason, and on Reddit basically called out the entire press for getting things wrong. Yet it was a strawman, because watching many of the videos, nobody was calling DLSS and FSR the same. Considering you have people with engineering backgrounds and people with computing PhDs, he is by far not the most qualified to make these kinds of hints.
DF have historically made mistakes too, which have been pointed out... yet they are also poor at recognising their own limitations.
Just because he can spend time analysing stuff, or talking a lot of technobabble, does not mean anything - because in science and engineering that does not mean you will get a high-impact paper.
If that were the case, people such as Steve Burke at GamersNexus, Dr Ian Cutress at AnandTech, etc. would never be wrong. If anything, Dr Ian Cutress is quite a humble guy (and won't cover up any mistakes he makes).
WOW! Let's play a game of "is it DLSS or Xbox"? Lmao! That's brutal.
Example, exactly from Alex:
Native 4k:
XboxX:
DLSS perf:
But in the video you will only see the XboxX compared with the native PC, never with the DLSS.
Someone should do the maths so we can see how many trees died for DLSS.
All that supercomputer time spent fettling DLSS, hardly worth it.
Can you evidence these mistakes? I owe DF nothing, but I’ve always been impressed.
The issue is simply this: all these technologies have strengths and weaknesses. You can focus more on the pros than the cons if you want, and that will showcase the tech as greater than it is, like they do with DLSS, or you can do the opposite, like they do with FSR. So, for example, you see how they focus on that one scene in an alpha game for the TAAU vs FSR comparison, but they didn't do Ultra Quality, they didn't test other scenes, and they didn't test other games (though they were available), and the people who did do that came away with a more positive impression of the technology. That's why we say they're very biased against AMD: we can see what they're doing. And it's not even just GPUs; they do the same for CPUs. In fact, they used a specific cutscene in Metro Exodus which was bugged and where AMD faltered a little, and then used that as evidence that "Intel had more stable frametimes", when if you look at the rest of the game and at other cutscenes which aren't bugged, the opposite is true! So it's a systematic thing for them; don't ask me why, but it's very clearly there.
TPU said:
From a quality standpoint, I have to say I'm very positively surprised by the FSR "Ultra Quality" results. The graphics look almost as good as native. In some cases they even look better than native rendering. What makes the difference is that FSR adds a sharpening pass that helps with texture detail in some games. Unlike Fidelity FX CAS, which is quite aggressive and oversharpens fairly often, the sharpening of FSR is very subtle and almost perfect—and I'm not a fan of post-processing effects. I couldn't spot any ringing artifacts or similar problems.
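The "subtle" sharpening TPU describes fits the contrast-adaptive idea behind CAS: sharpen less where local contrast is already high. A toy sketch of that idea in Python (my own simplified illustration, not AMD's actual shader; assumes grayscale values in 0..1, and the name and strength constant are invented):

```python
def adaptive_sharpen(img, strength=0.25):
    """Contrast-adaptive sharpening sketch: an unsharp mask whose weight
    shrinks as local contrast grows, so hard edges (already high contrast)
    get almost no extra sharpening and ringing is avoided."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            n = [img[y - 1][x], img[y + 1][x], img[y][x - 1], img[y][x + 1]]
            contrast = max(n + [img[y][x]]) - min(n + [img[y][x]])
            weight = strength * (1.0 - contrast)   # the adaptive part
            mean = sum(n) / 4.0
            out[y][x] = img[y][x] + weight * (img[y][x] - mean)
    return out
```

A flat region passes through untouched, mild texture detail gets a small boost, and a maximum-contrast edge gets zero extra sharpening.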
Conflict of interest is a thing, you know. That's why reputable outlets don't get into exclusive deals with specific vendors. Credibility is hard to create and easy to lose; DF is a great example of this. Really not sure why people don't see it.
Don't get why people are getting excited about it, personally - I guess due to ignorance. At the same performance uplift as the Ultra Quality setting, a decent temporal implementation will give pretty much identical image quality to native, with none of the softening seen with FSR. With the same image-quality decrease as the Ultra Quality setting, a good temporal implementation will see around double the performance uplift compared to native.
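To illustrate why a temporal approach can close in on native quality: each frame samples the scene at slightly jittered positions, and blending those samples into a history buffer recovers information no single low-res frame contains. A minimal one-pixel sketch (my own toy model of TAA/TAAU-style accumulation, ignoring reprojection and disocclusion; the function name and alpha value are invented):

```python
def temporal_accumulate(samples, alpha=0.1):
    """Exponential-moving-average history buffer, the core of temporal
    accumulation: each new jittered sample nudges the stored value, so
    over many frames the history converges on the true signal."""
    history = samples[0]
    for s in samples[1:]:
        history = history + alpha * (s - history)
    return history

# One pixel whose true value is 0.5, seen through alternating jitter:
# no single frame ever measures 0.5, but the history converges on it.
jittered = [0.4, 0.6] * 32
print(round(temporal_accumulate(jittered), 2))
```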