
FidelityFX Super Resolution in 2021

5 years sounds like a long time until you realise there have only been 2 GPU generations since then, and the most recent generation hasn't even finished launching its low-end cards.

Most people play games that don't need DLSS and most certainly don't have RT

Or will ever need RT

To be honest my old R9 390 would be a perfectly viable card for most games

:cry:

Yeah, and it's obvious they did so to show how good the 20 series performs. It made the benchmarks look good.

Well I played part of Metro Exodus with RT enabled on my 1080Ti FTW3. I also tried Quake 2 RTX and Minecraft RTX. The 20 series was the first to offer hardware RT support, but like many I chose to skip it due to its poor performance. I don't remember any site benchmarking Pascal with RT enabled.

The fact remains. Nvidia added software support for RT with Pascal, which debunks the idea that Nvidia never adds support for new features to older cards.
 
Holy **** - FSR isn't even out yet and there's no publicly available white paper, but we have EXPERTS who know all about it already. /facepalm


Except we have public statements by AMD, full resolution uncompressed images released by AMD, and facts about DLSS and the state of the art in image reconstruction and temporal super resolution.

No one has commented about anything we don't know, at least not without big caveats.

Anything you think is being discussed without evidence or facts?
 
Can someone please give a definitive answer on whether DLSS is better than native in some circumstances or not? Is it better than native when using TAA or just better overall? Can a clear sharp native image without AA being applied be worse than DLSS? Some evidence would be appreciated.


The problem is you need AA or you get jaggies and pixel flickering. For most modern game engines, the only way to do AA is TAA (not by choice, but this is literally all you can do).

So your question doesn't make sense.

And this is the real point. You will have to have TAA in any spatial-only super resolution technique. This is why DLSS 2 worked so well, because it is better than TAA.


But as a quick rebuttal: DLSS can run at native resolution and upscale from there, before a final downsample for display. This only makes sense on old games where you are not looking for performance but extra visual fidelity. But no one bothers backporting DLSS to old games.
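To make the TAA point above concrete, here is a minimal, purely illustrative numpy sketch of exponential-history temporal accumulation, which is the basic idea behind TAA. The scene and function names are made up for the example; a real implementation also needs motion-vector reprojection and history clamping. Each frame is rendered with a small sub-pixel jitter and blended into a history buffer, which is what smooths jaggies and pixel flicker over time.

```python
# Toy TAA-style accumulation (illustrative sketch only, not engine code).
import numpy as np

def render_jittered(frame_idx, h=90, w=160):
    # Stand-in "renderer": a hard diagonal edge, sampled with a per-frame
    # sub-pixel jitter. Hypothetical scene, not output from any real game.
    jitter = ((frame_idx * 0.618) % 1.0) - 0.5
    ys, xs = np.mgrid[0:h, 0:w]
    edge = (xs + jitter) - 0.55 * (ys + jitter) - 20.0
    return (edge > 0).astype(np.float32)          # aliased 0/1 edge

def taa_accumulate(num_frames=16, alpha=0.1):
    # Blend each new jittered frame into a running history buffer.
    # The varying jitter makes the history converge towards an anti-aliased,
    # temporally stable edge instead of a flickering hard one.
    history = render_jittered(0)
    for i in range(1, num_frames):
        history = (1.0 - alpha) * history + alpha * render_jittered(i)
    return history

single = render_jittered(0)
resolved = taa_accumulate()
print("grey levels in one raw frame:", len(np.unique(single)))     # 2
print("grey levels after TAA blend :", len(np.unique(resolved)))   # many
```

The same history buffer is also why TAA can smear: stale history leaks into the current frame, which is the blur and ghosting people complain about.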
 
Well let's hope it does a better job than TAA and is widely adopted, because Days Gone has TAA since it is a console port. TAA is used as a crutch, and I cringe when people say Days Gone is well optimised, because TAA is the spawn of the devil and needs to die :)

The problem is, if FSR is spatial-only then it uses the output of the underlying TAA and magnifies the TAA image along with its flaws. So FSR will require a really good TAA implementation to work well. DLSS replaces TAA earlier in the rendering pipeline.
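Purely as a sketch of that ordering argument (and assuming FSR really is spatial-only, which AMD had not confirmed in detail at the time of writing): the stage names and bodies below are trivial placeholders, only the order of the stages is the point. A spatial-only upscaler can only see whatever the engine's TAA already produced, flaws included, whereas a DLSS-2-style temporal stage sits where TAA used to be and works from the raw low-resolution frames plus its own high-resolution history.

```python
# Pipeline-ordering sketch in numpy; all function bodies are placeholders.
import numpy as np

def taa_resolve(frame, history):
    # The engine's own TAA: blends the current jittered frame with history.
    return 0.9 * history + 0.1 * frame

def spatial_upscale(image, scale=2):
    # Stand-in for a spatial-only upscaler (nearest-neighbour here; a real
    # one would do edge-adaptive filtering plus sharpening).
    return np.kron(image, np.ones((scale, scale), dtype=image.dtype))

def temporal_super_resolution(frame, high_res_history):
    # Stand-in for a DLSS-2-style stage: consumes the raw low-res frame plus
    # a high-res history and outputs high-res directly, replacing TAA.
    return 0.9 * high_res_history + 0.1 * spatial_upscale(frame)

low_res = np.random.rand(90, 160).astype(np.float32)

# "Spatial-only FSR" ordering: whatever TAA got wrong is baked in before
# the upscaler ever sees the image.
taa_out       = taa_resolve(low_res, history=low_res)
spatial_chain = spatial_upscale(taa_out)

# "DLSS 2" ordering: the temporal stage replaces TAA at the end of the
# low-res pipeline and produces the high-res frame itself.
temporal_chain = temporal_super_resolution(
    low_res, high_res_history=np.zeros((180, 320), np.float32))

print(spatial_chain.shape, temporal_chain.shape)   # both (180, 320)
```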
 
It was due to your suggestion that 5 years wasn't as long as 5 years.



And I was accused of assigning a different meaning to what people write.
 
Except we have public statements by AMD, full resolution uncompressed images released by AMD, and facts about DLSS and the state of the art in image reconstruction and temporal super resolution.

No one has commented about anything we don't know, at least not without big caveats.

Anything you think is being discussed without evidence or facts?

And Twitter comments and Facebook posts and YouTube videos. Intel releases comments about its hardware, as does Nvidia - which *stunningly* turn out not to be overly true in methodology either. Actual documentation about the process rather than simply guessing.
 
Erm, I don't think you know what a scientific theory is... And no, the theories of relativity (both of them) haven't been "broken" - what's more, they are the most tested theories in science and have worked every single time. Sans exceptions in places we don't really understand yet, like the centres of black holes (we're missing a good theory of gravity), or quantum physics (which we're still far away from really understanding). Even just your GPS relies on them working perfectly, or it would not be able to show your location at all after just a few seconds.

I suspect what you meant to say was that a hypothesis must be proven before it can become a scientific theory, and hypotheses are being "broken" all the time without ever becoming said theory.
Maybe you should also explain the difference between a theory and a law.
Off topic, but the new dark matter maps are suggesting again that Einstein might be wrong.

In any case theory is the wrong word, it's hypothesis that he meant :) and yes you are right about the hypothesis.. I'm just a bit lazy
 

Allow me to break it down for you, since you are clearly struggling. Nowhere in my post did I claim that "5 years wasn't as long as 5 years". That's just some nonsense you made up in your head.

What I did do was bring context to that 5 year time frame in terms of GPU generations. I pointed out that there have only been 2 GPU generations released in that 5 year time frame.

But I think you do know what I was trying to say; you just didn't expect to get challenged on your response and made up some nonsense to cover your arse.

Don't roll your eyes too hard they might pop out.
 
Off topic, but the new dark matter maps are suggesting again that Einstein might be wrong.

In any case theory is the wrong word, it's hypothesis that he meant :) and yes you are right about the hypothesis.. I'm just a bit lazy

Was Newton wrong? Or was his theory of gravity just incomplete and relatively basic, as we simply gained much more understanding over time? We already know there's much more to discover and learn about gravity and quantum physics, but if our current equations work (and they do) they aren't wrong now and won't be wrong in the future (as they'll work just as well). However, they might be at worst incomplete and describe only part of something bigger. It's a big difference.

In general (I assume you know, but many people do not) science isn't about ditching or deleting past knowledge, as that would be very silly. Science discards what doesn't work and keeps only what works (scientific theory). And then, over time, science builds upon it, adding more and more understanding, refining the maths, expanding theories, joining smaller ones into bigger ones, etc. In other words, it builds on top of older knowledge instead of replacing it fully, unlike what the common person imagines. Hence, Newton was right in what he knew, and his equations still work just fine on Earth, as they always have. And the same will always be true of Einstein's equations.
 
Allow me to break it down for you, since you are clearly struggling. Nowhere in my post did I claim that "5 years wasn't as long as 5 years". That's just some nonsense you made up in your head.

What I did do was bring context to that 5 year time frame in terms of GPU generations. I pointed out that there have only been 2 GPU generations released in that 5 year time frame.

But I think you do know what I was trying to say; you just didn't expect to get challenged on your response and made up some nonsense to cover your arse.

Don't roll your eyes too hard they might pop out.

Ah, so 5 years sounds like a long time until it doesn't :confused: Well we had Pascal, Turing and now Ampere. Sounds like a long time, 5 years :rolleyes:
 
Ah, so 5 years sounds like a long time until it doesn't. Well we had Pascal, Turing and now Ampere. Sounds like a long time, 5 years :rolleyes:

All of that comes down to one point - time means NOTHING if there are no meaningful changes that come with it. Why would people with a relatively new Intel CPU (a few generations old) need to upgrade at all? They would get pretty much no significant FPS increase in games if they just upgraded a few-generations-old Core i5 to the newest one, or a Core i7 to the newest one. Big cost to do it, for minimal gains. Same with Pascal->Turing. Ampere is better, but it's also unobtainium for most. Hence, my question is - what's your point again?
 
All of that comes down to one point - time means NOTHING if there are no meaningful changes that come with it. Why would people with a relatively new Intel CPU (a few generations old) need to upgrade at all? They would get pretty much no significant FPS increase in games if they just upgraded a few-generations-old Core i5 to the newest one, or a Core i7 to the newest one. Big cost to do it, for minimal gains. Same with Pascal->Turing. Ampere is better, but it's also unobtainium for most. Hence, my question is - what's your point again?

The 1060 launched 5 years ago. How long do you want to keep a GPU when tech such as DLSS and raytracing is now almost 3 years old? Do note that Nvidia did add software support for RT on Pascal.
 
Of course it can come across as a "promotional" piece if the likes of DF are pointing out all the strengths of one brand and the weaknesses of other brands/methods, but that's not really their fault if said brand/feature is actually, factually better.

Either way GN and HW have also stated the same as DF, and it was GN who said DLSS was "better than native" in their Cyberpunk review, so does that mean their piece was one-sided promotional material for Nvidia?

But that's the point: they are not pointing out all the strengths and weaknesses of a brand in proportion, or in other words, they are not taking the whole product into account. If they were just doing a review of ray tracing as a technology then that would be one thing, but they exclusively focus on one point to the detriment of others, and that does make them a biased outlet. Add to that their exclusive promotion of Nvidia and you know what you are getting into. If GN and HW were promoting AMD TressFX or CAS without any context around it they would rightly be called biased.

Also, I did say that DF's analysis of things in isolation is fine, so your point about GN saying DLSS is better and therefore also being biased is a red herring, or perhaps you failed to understand my point. You just have to look at the thumbnails of the 3080 Ti reviews from these three outlets to understand the difference between them. Remember that these days people cannot fudge data and stay hidden for long. So even a biased outlet has to adhere to some standards (which DF does) and has to work around them to promote something. It's a shame, they were one of the best and most trusted GPU technology review outlets at one time.
 

Right, so you do not have a point. Unless you think everyone with Pascal should just go and buy a new GPU? So then, tell me - what GPU, and where? :)
 
Indeed, the only people dissing DLSS and saying it can't be better than native are those who haven't actually used it.

In Death Stranding, for example, I posted multiple comparisons where even in still images it looked better (due to resolving wires, fences and edges better) while still looking as sharp as native. In motion it was even better, while also offering better performance.

No one said it's ALWAYS better than native; there are tradeoffs, sometimes it's better and sometimes not, while always offering better performance, so it's a no-brainer.

This, on the other hand, we'll see. I've got my doubts, but it's funny to see the same 'skeptical' people praising FSR even before any actual tests have been conducted, while in the past they did the opposite with DLSS - and worse, they still do it nowadays, even after the tech has proved itself several times over.

I think you are defending things that people are not attacking. DLSS is good for what it is. For mid-range and entry-level cards it's great (although 1080p DLSS is not that great according to many). But to claim DLSS is the Holy Grail of GPU technology in terms of performance and quality is wrong - not that you are doing it, but some are.

Your example of Death Stranding is perhaps closer to what Minecraft RTX is, i.e. too little scene complexity to show the difference between DLSS and native. Checkerboard rendering is also a tech that has proven itself for years on consoles, yet it is still seen by most of the enthusiast PC community as a low-quality performance-boost trick. Also, the consensus in this thread seems to be that DLSS can be better than native with AA; without AA, native is better.
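On the checkerboard-rendering point, here is a minimal toy sketch of the idea, since it comes up a lot in these comparisons: shade only half the pixels each frame in an alternating checker pattern and fill the holes from the previous frame. Everything here (scene, function names) is made up for illustration; real console implementations also reproject the old pixels with motion vectors and use ID buffers to reject stale data.

```python
# Toy checkerboard-rendering sketch (no motion reprojection, static scene).
import numpy as np

H, W = 8, 8

def render_full(t):
    # Stand-in full-resolution render of frame t: a simple moving gradient.
    ys, xs = np.mgrid[0:H, 0:W]
    return (xs + ys + t).astype(np.float32)

def checkerboard_mask(t):
    # Alternate which half of the pixels gets shaded on each frame.
    ys, xs = np.mgrid[0:H, 0:W]
    return (xs + ys + t) % 2 == 0

def checkerboard_frame(t, previous):
    # Shade only the masked half this frame; fill the other half from the
    # previously reconstructed frame.
    mask = checkerboard_mask(t)
    out = previous.copy()
    out[mask] = render_full(t)[mask]
    return out

recon = np.zeros((H, W), np.float32)
for t in range(2):
    recon = checkerboard_frame(t, recon)

shaded_per_frame = int(checkerboard_mask(0).sum())
print(f"pixels shaded per frame: {shaded_per_frame} of {H * W}")   # half
# After two frames every pixel holds real shading, but half of it is one
# frame old - which is where the reconstruction artefacts come from.
print("max staleness error vs full render:",
      float(np.abs(recon - render_full(1)).max()))
```

That halving of per-frame shading cost is the whole appeal; the final quality then depends entirely on how well the stale half is reprojected and rejected.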
 