Nvidia DLSS 5 months on - a win or fail?

Man of Honour
Joined
13 Oct 2006
Posts
91,128
I've not yet seen anything that has used ray tracing in a useful way with Turing GPUs - Battlefield 4 and The Division have artistically better use of GI even though it's fake, and The Division does reflections well enough that you'd have to stop and study them to notice the difference.

Ray tracing on Turing hardware, even in hybrid implementations, can do so much better.
 
Soldato
Joined
19 Oct 2008
Posts
5,951
Pointless trying to discuss anything with AthlonXP1800 that doesn't praise Nvidia.

Nvidia can do no wrong!

They are like the PR department for Nvidia around here.
In fairness, the anti-20-series crowd are just as bad :). There's so much misinformation here, and if some people bothered to read up more they'd see they're wrong.
A few oddities I noticed that made me chuckle this morning:
1. FFXV is just a benchmark.
2. DLSS HAS to be used with RT.
I've given up myself, as I'm sure some folks are bots, proven by them ignoring any evidence contrary to their own views :p. You get a slight pause followed by the same narrative repeated.
I just sit back and enjoy gaming with a 20-series card, waiting for further improvements and next gen too :)
 
Caporegime
Joined
18 Oct 2002
Posts
32,618
In fairness, the anti-20-series crowd are just as bad :). There's so much misinformation here, and if some people bothered to read up more they'd see they're wrong.
A few oddities I noticed that made me chuckle this morning:
1. FFXV is just a benchmark.
2. DLSS HAS to be used with RT.
I've given up myself, as I'm sure some folks are bots, proven by them ignoring any evidence contrary to their own views :p. You get a slight pause followed by the same narrative repeated


This forum never changes. The same anti-Nvidia crow will continue to post the same nonsense time and time again, no matter how absurd or obviously false. You point it out with facts and they still just keep repeating the same fake BS.


Which is all ridiculous, because as with any large company Nvidia has heaps of flaws and Turing is definitely not a great release. There are loads of legitimate complaints you can make, so when these anti-Nvidia fans just make up garbage arguments it only weakens their position and makes them look like pathetic trolls.


As an example, given the thread topic it is plainly obvious that DLSS has massive issues in some situations, producing dreadful image softness, which is completely inexcusable when Turing costs significantly more. From that point alone we can conclusively state that DLSS is currently a failure. Why then the need to make up all kinds of ridiculous claims, lies and blatant nonsense? Why is it not possible to have an intelligent discussion about the current state of DLSS, the science and technology behind it, and the potential for Nvidia to improve upon the current results?


From my perspective, DLSS is a fascinating technology with a huge amount of potential and a very solid theoretical background. It is poorly executed by Nvidia currently, but I am more interested in where the technology can take us.
It is a basic fact that deep-learning-based upscaling can demonstrably provide better results than traditional resampling and more advanced algorithmic solutions like temporal checkerboarding.
A 4K image has a massive number of pixels, but the total information content is far smaller due to the statistical distribution of those pixels. One can see this when compressing an image (lossless or lossy): e.g. a high-quality JPEG compression of a photo still looks photorealistic, without perceptible image problems, while being 10-15x smaller than the RAW file. That same concept of information entropy can be used in reverse to generate a high-resolution image from lower-resolution samples and a model that has learned most of the statistical pixel patterns.
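To put a rough number on that, here is a minimal sketch of the compression comparison (assuming Python with Pillow installed and any photographic image saved as 'photo.png' - both purely illustrative assumptions, not anything from this thread):

```python
# Rough illustration of the entropy point above: a photo's raw pixel count
# vastly overstates how much information it actually contains.
# Assumes Pillow is installed and 'photo.png' is any photographic image.
import io
from PIL import Image

img = Image.open("photo.png").convert("RGB")
raw_bytes = img.width * img.height * 3           # uncompressed 8-bit RGB size

buf = io.BytesIO()
img.save(buf, format="JPEG", quality=90)         # high-quality lossy encode
jpeg_bytes = buf.getbuffer().nbytes

print(f"raw : {raw_bytes / 1e6:.1f} MB")
print(f"jpeg: {jpeg_bytes / 1e6:.1f} MB ({raw_bytes / jpeg_bytes:.1f}x smaller)")
```

For most photos that ratio lands roughly in the 10-15x range mentioned above, and it is that statistical redundancy a learned super-resolution model exploits when reconstructing a high-resolution image from fewer samples.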

The state of the art in DL super-resolution absolutely does not have these quality problems. It will be interesting to see what Nvidia can do in the coming months.
 
Caporegime
Joined
18 Oct 2002
Posts
29,851
Ray tracing on Turing hardware, even in hybrid implementations, can do so much better.

Yeah, the issue is that traditional rasterisation methods have become SO good at 'faking it' over time that it's just going to be extra difficult for an average RT implementation to show much of a difference. Like anything, I expect it'll take some time to improve (and ultimately, will it actually be worth the hit?).
 
Caporegime
Joined
23 Apr 2014
Posts
29,437
Location
Dominating rooms with symmetry
It's going to take years before RT does it better without a massive performance hit, and by that time we'll probably be at £5,000 top-end gaming GPUs.

I can't see DLSS taking off in its current form either; nVidia will likely release a new version of it. SDDLSS (Super Duper Deep Learning Super Sampling).
 
Caporegime
Joined
24 Sep 2008
Posts
38,322
Location
Essex innit!
It's a trap!

Nvidia want this because they're heavily invested in the console industry, supplying the cores and GPUs and APUs and BBQs and everything that goes into them!




Oh wait...
Being Cereal for a minute, at least with consoles you generally just plug and play with no faffing about. I will be watching the PS5 launch with interest.
 
Soldato
Joined
14 Nov 2007
Posts
16,149
Location
In the Land of Grey and Pink
Being Cereal for a minute, at least with consoles you generally just plug and play with no faffing about. I will be watching the PS5 launch with interest.

Well, I still have my PS4; I use it occasionally for a couple of JRPGs I have.

However, when the next generation of consoles is released, and if GPU prices haven't dropped back to some kind of normality (unlikely), then yeh, I might pick one up again.

Certainly won't ever be paying a grand plus for a GPU.

Nvidia and AMD can both go and do one as far as I'm concerned.
 
Man of Honour
Joined
13 Oct 2006
Posts
91,128
Yeah, the issue is that traditional rasterisation methods have become SO good at 'faking it' over time that it's just going to be extra difficult for an average RT implementation to show much of a difference. Like anything, I expect it'll take some time to improve (and ultimately, will it actually be worth the hit?).

It's going to take years before RT does it better without a massive performance hit, and by that time we'll probably be at £5,000 top-end gaming GPUs.

I can't see DLSS taking off in its current form either; nVidia will likely release a new version of it. SDDLSS (Super Duper Deep Learning Super Sampling).

Thing is, it is perfectly possible to use that amount of hardware to replicate Quake 2's static light mapping with a real-time, higher-resolution variant, with things like volumetric lighting and caustics, and get viable performance. I think if people saw that they'd start to appreciate the potential better. Unfortunately I've not done any serious 3D graphics programming since DX7, so between getting up to speed on that and doing an actual implementation it would probably take me around two years to accomplish :s
 
Caporegime
Joined
18 Oct 2002
Posts
39,309
Location
Ireland
This forum never changes. The same anti-Nvidia crow will continue to post the same nonsense time and time again,

So you're trying to tell me there's a bird that registered on this forum to spew anti-Nvidia propaganda?

Think you might want to get your water supply checked, as clearly it's getting mixed with some type of hallucinogen. :p
 