FidelityFX Super Resolution in 2021

Associate
Joined
1 Oct 2009
Posts
1,033
Location
Norwich, UK
Then you won't need any game-specific driver either. :)

Technically you don't, as I understand it, although you get game-specific drivers for most mainstream games anyway because they bring a plethora of other game-specific improvements. I'm fairly hopeful that DLSS will see fairly industry-wide adoption once enough of the major game engines integrate it. This has already happened with Unreal Engine and Unity, two very popular engines, which means anyone making games or mods with them can now enjoy DLSS 2.0 integration without needing to go to Nvidia and train their game; it'll just work(tm).
 
Associate
Joined
20 Nov 2020
Posts
1,120
I think you're believing too much BS from Nvidia's marketing team. If it really were trained independently of specific games, it wouldn't need a per-game implementation or driver; it would just be a simple setting in the Nvidia driver: enable/disable DLSS.
It will work fine on UE/Unity engines if the devs are using stock UE/Unity assets, but only a few small devs will do that; most game developers will use their own assets, and that will require some level of AI training.
Nvidia's hope is that they won't have to pay devs anymore to add DLSS to their games; that's why you have the UE and Unity plugins. You sponsor some games to prove the tech is good and create some hype, then hope the rest will add your tech for free, and you make their job easier by creating plugins for the major game engines. But that has very little to do with how much AI training their games will still need for DLSS to look fine. If they're using their own artwork, you'll still need to do a lot of training.
 
Associate
Joined
1 Oct 2009
Posts
1,033
Location
Norwich, UK
Let's be more precise because I think we're conflating a few things.

Game/engine-specific implementation is necessary for this type of AA because it needs a certain level of integration with the rendering engine to access the information it needs from the rendering pipeline. It's not like, for example, FXAA, which is a post-render, shader-based AA method that can simply take the raw output of a finished frame and apply FXAA to it as if it were just a flat image. With that kind of post-processing AA you can write truly generic implementations, because all you need access to is the final rendered image of the pipeline, which the driver already has, so it can be enforced by third-party tools outside the game. The level of integration required for DLSS, and for any other non-post-processing type of AA, is at the engine level. Engines like Unreal Engine now have this integration. It's not asset specific, which means anyone licensing Unreal Engine can enable DLSS 2.0 in their game and no game-specific driver is required: the drivers can detect the engine in use and do everything driver-side that DLSS requires to function.
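To make that concrete, here's a minimal sketch of the difference in inputs (hypothetical names only, nothing to do with the actual NGX or Unreal plugin API): a post-process AA only ever sees the flat, finished frame, while a DLSS-style temporal upscaler needs motion vectors, depth and the camera jitter, which only the engine can hand over.

```python
# Hypothetical sketch of the *inputs* each approach needs, not any real API.
from dataclasses import dataclass
import numpy as np

@dataclass
class UpscalerInputs:
    color: np.ndarray           # low-res colour buffer, rendered with sub-pixel jitter
    motion_vectors: np.ndarray  # H x W x 2: where each pixel was in the previous frame
    depth: np.ndarray           # H x W scene depth
    jitter: tuple               # (x, y) camera jitter applied this frame

def post_process_aa(frame: np.ndarray) -> np.ndarray:
    """FXAA-style: only the finished image is needed, so a driver or third-party
    tool can apply it without any engine cooperation."""
    return frame  # real FXAA would blur along detected edges here

def temporal_upscale(inputs: UpscalerInputs, history: np.ndarray) -> np.ndarray:
    """DLSS-style: needs data that never leaves the rendering pipeline, which is
    why the integration has to live in the engine (or an engine plugin)."""
    # toy placeholder: nearest-neighbour upscale blended with the history buffer;
    # a real upscaler would reproject history using the motion vectors and run a network
    return 0.9 * history + 0.1 * np.kron(inputs.color, np.ones((2, 2, 1)))
```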

On the actual machine learning front, which is the training of the algorithm on Nvidia's supercomputer, this is no longer per game. It's a generic DLSS algorithm that applies to all games, so training isn't required on a per-game or per-asset basis; it now generalizes well enough to give good results on games it has never seen before. That said, the more games the algorithm is trained on, the better it will generalize, so I expect new games to be added into the mix to improve future iterations of the ML algorithm. This is expected of any evolving ML algorithm; it stays in a constant state of improvement the more source data you can give it.

Nvidia already have a roadmap for DLSS that evolves past DLSS 2.0, whose aim is to have DLSS basically running on any game that currently supports TAA. But as with any emerging technology it requires time to grow and mature, so quality improves over time and the technical requirements drop.

Incidentally, this is partly why I think image quality will suffer in a globally applicable type of upscaler: such an upscaler can only really get access to the finished frame output by the engine and work on that; it cannot look at more detailed information from the pipeline such as pixel motion vectors, temporal information, or subpixel information. This is why all the original AA types that came before the abysmal post-render shader AA methods actually looked a lot better. Thinking about it from an information-theory perspective, there's simply more raw, original information going into something like MSAA with its extra subpixel samples. And at least with DLSS you have additional information coming from motion vectors, temporal data, and the trained models. If you remove those, you're left with an FXAA-like implementation of upscaling, that is to say, one working with just the output image and no additional information, in a process whose whole point is to add information. This is why I tend to think of it, in simple terms, as being like a Photoshop filter.
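As a rough illustration of the "Photoshop filter" point (my own toy example, not anyone's actual upscaler): the sketch below is essentially all a finished-frame-only upscaler has to work with. It can interpolate between the pixels it already has, but there is no motion-vector, history or subpixel data from which to recover real detail.

```python
# Toy finished-frame-only upscale: pure interpolation of the flat output image.
import numpy as np

def bilinear_upscale(img: np.ndarray, scale: int = 2) -> np.ndarray:
    """img is H x W x C; returns (H*scale) x (W*scale) x C by interpolation only."""
    h, w, _ = img.shape
    ys = np.linspace(0, h - 1, h * scale)
    xs = np.linspace(0, w - 1, w * scale)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    fy = (ys - y0)[:, None, None]
    fx = (xs - x0)[None, :, None]
    top = img[y0][:, x0] * (1 - fx) + img[y0][:, x1] * fx
    bot = img[y1][:, x0] * (1 - fx) + img[y1][:, x1] * fx
    return top * (1 - fy) + bot * fy  # no new information, just smoothing between existing samples
```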

Of course, what would be ideal is if AMD nailed that process and did something truly unique and innovative that benefited everyone, created more competition in the space, and made games look better while running faster. That would be amazing and I'd happily eat my shoe if they did, but there are some underlying principles here that I don't think can really be violated, and they make me skeptical about how this will turn out.
 
Associate
Joined
6 Dec 2013
Posts
1,878
Location
Nottingham
The leaked AMD patent has nothing to do with the normal Super Resolution that AMD has previously spoken about.

AMD specifically said in February that its super resolution would not be an AI-based implementation; it's open source software that runs on any graphics card regardless of its architecture.

But the patent is completely different to this; the patent is very similar to how DLSS works.

This is what I believe, because it's a freaking no-brainer:

* AMD, in its rush, came out and said we've got a competitor to DLSS and it runs on everything, it's super easy to implement at the driver level, and developers don't need to do anything.

* Gamers frothed over themselves and laughed at DLSS.

* Earlier this year, AMD's testing found that FSR looks like absolute rubbish next to DLSS (this has been confirmed by various leaks).

* AMD immediately started working on a replacement that uses AI like DLSS, and thus the patent was created. This replacement most likely will not run on RDNA2 and will require an RDNA3 GPU.

The amount of rubbish you post is about as grim as your name. I think you actually believe it too :D
It doesn't have to be an us-and-them thing.
 
Associate
Joined
6 Dec 2013
Posts
1,878
Location
Nottingham
Let's be more precise because I think we're conflating a few things.
People said ray tracing couldn't be done on non-Nvidia hardware, and that wasn't true. I'm not saying better or worse, but apparently you NEED RT cores for it. :rolleyes:
 
Soldato
Joined
12 May 2014
Posts
5,238
People said ray tracing couldn't be done on non-Nvidia hardware, and that wasn't true. I'm not saying better or worse, but apparently you NEED RT cores for it. :rolleyes:
With the way people talk, you'd believe that if you don't have dedicated accelerator units for a function, then it simply can't run.

It's funny how people ignore that there are both advantages and disadvantages to hardware acceleration units, and act as if general-purpose compute units don't exist.
 
Associate
Joined
24 Mar 2011
Posts
638
Location
Cambridgeshire
With the way people talk, you'd believe that if you don't have dedicated accelerator units for a function, then it simply can't run.

It's funny how people ignore that there are both advantages and disadvantages to hardware acceleration units, and act as if general-purpose compute units don't exist.

It depends how you play that game. A GPU is itself already a 'hardware acceleration unit'. Having one isn't strictly required for games to work, since you could software-render them on a more general-purpose processor instead (your CPU). It's how we used to play games if you rewind a couple of decades. No one really considers that viable today, however (and rightly so).

Does anyone actually consider ray tracing a game without any dedicated hardware RT units to be viable in real-time graphics? Take a game like Minecraft RTX, for example: just how would that run with no dedicated RT units? I can't imagine for a moment it would be a good experience.
 
Soldato
Joined
12 May 2014
Posts
5,238
It depends how you play that game. A GPU is itself already a 'hardware acceleration unit'. Having one isn't strictly required for games to work, since you could software-render them on a more general-purpose processor instead (your CPU). It's how we used to play games if you rewind a couple of decades. No one really considers that viable today, however (and rightly so).

Does anyone actually consider ray tracing a game without any dedicated hardware RT units to be viable in real-time graphics? Take a game like Minecraft RTX, for example: just how would that run with no dedicated RT units? I can't imagine for a moment it would be a good experience.
I have nothing against either dedicated or general-purpose units; each has its place.

Regarding Minecraft, it is built around having dedicated hardware, and not just any dedicated hardware, but Nvidia's particular solution to the RT problem. Something built not to rely on dedicated RT hardware would function very differently.

At the end of the day you tweak and scale your software to match the hardware available.
 
Associate
Joined
20 Nov 2020
Posts
1,120
It depends how you play that game. A GPU is itself already a 'hardware acceleration unit'. Having one isn't strictly required for games to work, since you could software-render them on a more general-purpose processor instead (your CPU). It's how we used to play games if you rewind a couple of decades. No one really considers that viable today, however (and rightly so).

Does anyone actually consider ray tracing a game without any dedicated hardware RT units to be viable in real-time graphics? Take a game like Minecraft RTX, for example: just how would that run with no dedicated RT units? I can't imagine for a moment it would be a good experience.
That doesn't mean every operation a GPU can do needs its own "dedicated hardware". Some tasks use more resources than others. In the case of upscaling, it's a win from the start: rendering an image at a lower resolution uses far fewer resources than native rendering. It's like the cost of denoising versus having to trace thousands of rays for each pixel.
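For a sense of the headroom involved (just pixel-count arithmetic, nothing vendor-specific): rendering internally at 1440p and upscaling to 4K means shading about 2.25x fewer pixels than native 4K, which is the budget the upscaling pass gets to spend.

```python
# Pixel-count arithmetic behind "upscaling is a win from the start".
native_4k = 3840 * 2160        # 8,294,400 pixels shaded per frame at native 4K
internal_1440p = 2560 * 1440   # 3,686,400 pixels if you render at 1440p and upscale
print(native_4k / internal_1440p)  # 2.25x fewer pixels to shade; the upscale pass spends some of that back
```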
 
Associate
Joined
1 Oct 2009
Posts
1,033
Location
Norwich, UK
People said ray tracing couldn't be done on non-Nvidia hardware, and that wasn't true. I'm not saying better or worse, but apparently you NEED RT cores for it. :rolleyes:

I personally find that, most of the time, statements like these are errors in communication rather than understanding.

In principle you can calculate ray tracing on any general-purpose computational device; in practice, the performance demands of modern games are too high and require fixed-function hardware to accelerate it. Whenever I see ambiguity like this I tend to just ask for clarification; in my experience people do tend to understand, they were just assuming you shared the same presuppositions they did.
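To illustrate the "in principle" part (a textbook routine, not any particular vendor's implementation): the core of ray tracing is just arithmetic like the ray/triangle intersection below, which any general-purpose unit can execute. RT cores exist to do this sort of intersection and traversal work billions of times per frame fast enough for games, not to make it possible at all.

```python
# Moller-Trumbore ray/triangle intersection: plain arithmetic, no special hardware needed.
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-8):
    """Return the hit distance t along the ray, or None if it misses the triangle."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                   # ray is parallel to the triangle's plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det           # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det   # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det          # distance along the ray to the hit point
    return t if t > eps else None
```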
 
Associate
Joined
20 Nov 2020
Posts
1,120
Let me say this: an Ampere card without tensor cores but with 10% more CUs, able to do the same operations on the CUs that the tensor cores currently handle (I don't know if this is the case; I do know the Big Navi CUs can do them), would be much faster in native rendering and almost as fast as an Ampere card with tensor cores when using DLSS. The tensor cores were not added for DLSS; they are there because the Nvidia pro line uses them for far more things than DLSS.
 
Soldato
OP
Joined
6 Feb 2019
Posts
17,639
People said ray tracing couldn't be done on non-Nvidia hardware, and that wasn't true. I'm not saying better or worse, but apparently you NEED RT cores for it. :rolleyes:


Seriously, who said that? I bet you won't be able to produce a single source who said that. You just made that up; everyone knows that ray tracing can be done on any graphics card ever built.
 
Caporegime
Joined
18 Oct 2002
Posts
32,618
People said ray tracing couldn't be done on non-Nvidia hardware, and that wasn't true. I'm not saying better or worse, but apparently you NEED RT cores for it. :rolleyes:


Those people are idiots.
Nvidia certainly never said that; they did the complete opposite and had products doing ray tracing long before the RTX cards were released.
 
Associate
Joined
17 Aug 2009
Posts
1,684
At the end of the day, DLSS is a little meh on first-person shooters, causing latency.

Let's just hope the AMD version doesn't do the same.
 
Soldato
Joined
4 Feb 2006
Posts
3,217
The leaked AMD patent has nothing to do with the normal Super Resolution that AMD has previously spoken about.

Your assumption is easily debunked by the date of the patent itself. AMD filed the patent application nearly two years ago, so your claim that AMD found FSR looked worse than DLSS and therefore had to move to AI is false. AMD has never claimed that their solution is going to be AI-based, but they have stated that they are looking at a variety of different solutions. Whatever they come up with, FSR will have to work on standard compute units, so it will probably work with older-gen cards too.

[Image: AMD GSR/FSR patent diagram (AMD-GSR-FSR-Patent-1200x571.png)]
 