
DLSS: differences from other proprietary software

Soldato
Joined
21 Oct 2002
Posts
7,424
Location
Bexhill on sea
As title, can someone here explain to me the differences between DLSS and AMD's version please? I realise that Nvidia's is supposed to be the mutt's nuts and every man and his dog (see what I did there) is going on about it, along with R/T, but I'd like to know, at a BASIC level 'cos I'm old and thick, what the differences are.
Cheers in advance.
 
Soldato
Joined
24 Aug 2013
Posts
4,549
Location
Lincolnshire
AMD doesn't have an answer to DLSS yet, and probably won't until next gen.

Their current chips have ray accelerators but nothing like Nvidia's Tensor cores to provide a similar DLSS experience.
 
Associate
Joined
19 Jun 2017
Posts
1,029
DLSS is just the beginning of a speculative, out-of-order rendering scheme. When you render something, the pixel that's painted on the screen has to go through a fixed set of stages in the same order (at least the core steps). DLSS attempts to solve this by letting a pixel skip a few steps when it can predict with enough confidence that those steps aren't required, producing a breakthrough increase in performance. At least, that looks like the ultimate goal.

Right now its scope in software is limited to image rescaling, using a mathematical model that's trained to associate a high-resolution image with much lower-resolution input data, working in an autoregressive sense. The model ends up predicting new pixels without them flowing through the entire pipeline. Nvidia uses its drivers and hardware architecture to implement this model in games.
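To make the "rescaling" baseline concrete, here's a toy sketch (plain Python/NumPy, nothing to do with Nvidia's actual code) of the naive bilinear upscaling that a learned model like DLSS is trying to beat — every output pixel is just a weighted average of its four nearest inputs, with no model predicting detail:

```python
import numpy as np

def bilinear_upscale(img, scale):
    """Naive bilinear upscale of an HxW image array: the 'dumb'
    baseline that a learned upscaler tries to improve on."""
    h, w = img.shape
    out_h, out_w = int(h * scale), int(w * scale)
    # Map each output coordinate back into the input grid.
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    # Blend the four surrounding input pixels.
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

lowres = np.arange(16, dtype=float).reshape(4, 4)  # stand-in render target
hires = bilinear_upscale(lowres, 2.0)
print(hires.shape)  # (8, 8)
```

The point of DLSS-style upscaling is that a trained model can invent plausible detail here, where this averaging can only blur.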

However, Nvidia has shared this model with Microsoft, and it is set to become a DirectX standard that can be implemented in different ways by different vendors through a set of APIs Microsoft calls DirectML. AMD will use whatever implementation works best with its ecosystem, while Nvidia's implementation should be directly portable (since Nvidia shared the model).

The end result will depend on how fast each vendor can compute the model's outputs. Nvidia seems to have an advantage here due to its ability to do one matrix multiply-accumulate operation per clock, at least that's the general consensus. But AMD could work around that with lower-IQ hacks to match the performance envelope.

The model needs periodic recalibration with new sample data. Right now that's Nvidia's responsibility, but once it becomes a DX standard this could be pushed onto game devs, with both vendors just providing a dev kit for the model implementation.

Hope that helps.
 
Soldato
OP
Joined
21 Oct 2002
Posts
7,424
Location
Bexhill on sea
So it's a sophisticated upscaler?
DLSS is just the beginning of a speculative, out-of-order rendering scheme. When you render something, the pixel that's painted on the screen has to go through a fixed set of stages in the same order (at least the core steps). DLSS attempts to solve this by letting a pixel skip a few steps when it can predict with enough confidence that those steps aren't required, producing a breakthrough increase in performance. At least, that looks like the ultimate goal.

Right now its scope in software is limited to image rescaling, using a mathematical model that's trained to associate a high-resolution image with much lower-resolution input data, working in an autoregressive sense. The model ends up predicting new pixels without them flowing through the entire pipeline. Nvidia uses its drivers and hardware architecture to implement this model in games.

However, Nvidia has shared this model with Microsoft, and it is set to become a DirectX standard that can be implemented in different ways by different vendors through a set of APIs Microsoft calls DirectML. AMD will use whatever implementation works best with its ecosystem, while Nvidia's implementation should be directly portable (since Nvidia shared the model).

The end result will depend on how fast each vendor can compute the model's outputs. Nvidia seems to have an advantage here due to its ability to do one matrix multiply-accumulate operation per clock, at least that's the general consensus. But AMD could work around that with lower-IQ hacks to match the performance envelope.

The model needs periodic recalibration with new sample data. Right now that's Nvidia's responsibility, but once it becomes a DX standard this could be pushed onto game devs, with both vendors just providing a dev kit for the model implementation.

Hope that helps.

And that's basic? :p
 
Soldato
OP
Joined
21 Oct 2002
Posts
7,424
Location
Bexhill on sea
The reason I ask is that when playing through Crysis Remastered, I was getting 55-60fps on "high" settings @ 4K, but I thought I'd give Sapphire TriXX a go and use the resolution reducer (if that's a thing) with the render scale at 85%. Now I can run at anything up to 85fps with everything on "very high" @ 4K, with no discernible decrease in image quality. This is what made me ask the question.
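For anyone curious what an 85% render scale actually saves, the arithmetic is simple (a rough sketch only; real frame times don't scale perfectly linearly with pixel count):

```python
# Pixel-count arithmetic for an 85% render scale at 4K.
native_w, native_h = 3840, 2160
scale = 0.85

render_w, render_h = int(native_w * scale), int(native_h * scale)
native_px = native_w * native_h
scaled_px = render_w * render_h

print(render_w, render_h)              # 3264 1836
print(f"{scaled_px / native_px:.0%}")  # 72% of the native pixel load
```

Shading roughly 28% fewer pixels is where most of that 55-60fps to 85fps jump comes from, which is why the trick works in any game.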
 
Associate
Joined
19 Jun 2017
Posts
1,029
So it's a sophisticated upscaler?

Nope, the rescaler is a naive interpretation of it. Nvidia used that framing for purely marketing reasons (just to avoid unnecessary headaches). There are possibly some pixels that are partially processed and then update the model in some way that's not immediately obvious.

Eventually this is a new, breakthrough method of rendering. You can call it rescaling, but mathematically it's closer to dimensionality reduction, and it needs a fair amount of intuition to appreciate its future impact.
 
Soldato
Joined
21 Jul 2005
Posts
20,018
Location
Officially least sunny location -Ronskistats
The reason I ask is that when playing through Crysis Remastered, I was getting 55-60fps on "high" settings @ 4K, but I thought I'd give Sapphire TriXX a go and use the resolution reducer (if that's a thing) with the render scale at 85%. Now I can run at anything up to 85fps with everything on "very high" @ 4K, with no discernible decrease in image quality. This is what made me ask the question.

You used the RIS feature. Read what Sapphire said about their TriXX Boost here.

We've banged on about how good it is on these forums before, but unless you have an AMD card and have used it properly, you'd think DLSS was the only way to enjoy gaming.
 
Soldato
Joined
18 Feb 2015
Posts
6,484
There is no alternative to DLSS from AMD, but there are plenty of game-specific alternatives already in existence. To understand why, you first have to understand what DLSS is, in particular 2.0, which is the version actually used nowadays.

Essentially there is no difference between DLSS and temporal reconstruction except for the clamping step in the TAA chain. ONLY THERE does Nvidia leverage AI for anything. The fancy-schmancy "we're gonna use AI to reconstruct detail where there was none" is hocus pocus and was abandoned with DLSS 1.0 because it looked like smeared **** on a wall. It's the same issue you see when AI models reconstruct images, but of course games are infinitely harder to get right.

I mean, look at this:
[Screenshot: Bildschirmfoto-2020-05-07-um-20.35.10.png]
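For the curious, the clamping step mentioned above can be sketched in a few lines. This is a toy NumPy illustration of generic TAA neighbourhood clamping (not Nvidia's actual implementation): the accumulated history colour is clamped to the min/max of the current frame's 3x3 neighbourhood so stale pixels can't ghost.

```python
import numpy as np

def clamp_history(history, current):
    """Clamp each history pixel to the min/max of the current
    frame's 3x3 neighbourhood -- the classic TAA rejection step.
    DLSS 2.0 reportedly replaces this heuristic with a learned
    blend, which is the only 'AI' part of the chain."""
    h, w = current.shape
    padded = np.pad(current, 1, mode='edge')
    # Stack the 9 shifted views, take per-pixel min/max.
    shifts = [padded[dy:dy + h, dx:dx + w]
              for dy in range(3) for dx in range(3)]
    lo = np.min(shifts, axis=0)
    hi = np.max(shifts, axis=0)
    return np.clip(history, lo, hi)

current = np.array([[0.2, 0.3], [0.4, 0.5]])
history = np.array([[0.9, 0.3], [0.1, 0.5]])   # 0.9 / 0.1 are "stale" pixels
clamped = clamp_history(history, current)
print(clamped)  # stale values pulled into [0.2, 0.5]
```

The heuristic is cheap but crude: it rejects real detail along with ghosting, which is exactly the trade-off a learned blend is supposed to handle better.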


Info on DLSS:
https://www.youtube.com/watch?v=d5knHzv0IQE

https://twitter.com/axelgneiting/status/1335353968281653248
 
Soldato
Joined
4 Feb 2006
Posts
3,202
You used the RIS feature. Read what Sapphire said about their TriXX Boost here.

We've banged on about how good it is on these forums before, but unless you have an AMD card and have used it properly, you'd think DLSS was the only way to enjoy gaming.

I'm gonna have to give this a try. I've been playing Monster Hunter World, which has the upscaling RIS feature built in, and it's pretty good. I get around 20fps extra when it's enabled, and the visuals aren't that different from the normal 1440p mode. If Sapphire can add it in TriXX, then why can't AMD make it a standard feature in the Radeon drivers??
 
Soldato
Joined
8 Jun 2018
Posts
2,827
The reason I ask is that when playing through Crysis Remastered, I was getting 55-60fps on "high" settings @ 4K, but I thought I'd give Sapphire TriXX a go and use the resolution reducer (if that's a thing) with the render scale at 85%. Now I can run at anything up to 85fps with everything on "very high" @ 4K, with no discernible decrease in image quality. This is what made me ask the question.

You beat Nvidia at their own marketing. Even though you can reduce the IQ in-game, it doesn't always use the same resolution as TriXX Boost, which looks better to me. This makes it easy to get a performance boost without having to wait for AMD or the game dev to add downsampling/upscaling support.

Just set RIS to around 20-30% or so and you're good to go. I can't tell the difference between TriXX Boost and native.
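Roughly what a sharpening pass at that strength is doing: pushing each pixel away from its local average to restore edge contrast lost in the upscale. The toy unsharp-mask sketch below is only illustrative; the real RIS (FidelityFX CAS) adapts its strength per pixel.

```python
import numpy as np

def sharpen(img, strength=0.3):
    """Fixed-strength unsharp mask: a crude stand-in for what a
    sharpener like RIS does. `strength` plays the role of the
    20-30% slider discussed above."""
    h, w = img.shape
    padded = np.pad(img, 1, mode='edge')
    # 3x3 box blur = local average of each pixel.
    blur = sum(padded[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0
    # Push pixels away from the average, keep values in [0, 1].
    return np.clip(img + strength * (img - blur), 0.0, 1.0)

flat = np.full((3, 3), 0.5)
edge = np.array([[0.2, 0.2, 0.8, 0.8]])
print(sharpen(flat))   # flat areas untouched
print(sharpen(edge))   # the 0.2/0.8 step gets steeper
```

Flat regions pass through unchanged while edges get exaggerated, which is why a modest strength recovers apparent detail after rendering below native resolution.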
 
Soldato
Joined
21 Jul 2005
Posts
20,018
Location
Officially least sunny location -Ronskistats
I'm gonna have to give this a try. I've been playing Monster Hunter World, which has the upscaling RIS feature built in, and it's pretty good. I get around 20fps extra when it's enabled, and the visuals aren't that different from the normal 1440p mode. If Sapphire can add it in TriXX, then why can't AMD make it a standard feature in the Radeon drivers??

FFS, it is! Visit the link in my post earlier lol!! The game sometimes has to allow for it to kick in:
"it currently only supports DirectX 12, Vulkan and DirectX 9 games..."
"You cannot set RIS to be enabled in certain games only using the game profile section of Radeon Settings."
...but it is there to use within the driver!
 
Soldato
Joined
4 Feb 2006
Posts
3,202
I tried TriXX Boost in Titanfall 2 and got a 20fps boost at the default 85% scale. It's a nice feature for games that don't support resolution scaling, but many new games have such an option built in anyway.

I captured a screenshot at 1440p and also at TriXX 85%, and when I pasted both screenshots into MS Paint this is what I got.

[Screenshot comparison: rJy5CQL.jpg]

The TriXX 85% shot is actually captured at the lower resolution. I thought TriXX upscaled to 1440p, but that's not what it looks like. It looks like the lower-resolution image is just stretched to fill the screen. Or am I totally misunderstanding how this works?
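That would make sense if the capture happens before the display scaler stretches the frame. A quick Pillow sketch of what a fair side-by-side would involve: resize the smaller capture back up to 1440p first, since that's roughly what the scaler does on screen (the sizes and the gray stand-in image below are made up for illustration):

```python
from PIL import Image

# 85% of 2560x1440 is what the TriXX capture reportedly comes out at.
native_size = (2560, 1440)
boost_size = (int(2560 * 0.85), int(1440 * 0.85))  # 2176 x 1224

# Stand-in for the lower-resolution screenshot.
small = Image.new("RGB", boost_size, "gray")

# Stretch it back to native size before comparing pixel-for-pixel.
stretched = small.resize(native_size, Image.BILINEAR)
print(stretched.size)  # (2560, 1440)
```

Comparing the raw captures at different sizes (as MS Paint does) exaggerates the difference; after this resize, the comparison matches what the screen actually shows.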
 
Soldato
Joined
8 Jun 2018
Posts
2,827
@fs123 @pieman109
Well, it's more than just that: RIS is enabled and defaults to 30%, which helps keep the image sharp.
Looking at the lettering, etc., I'm not seeing any difference, yet you're getting a 23fps boost.

Under the Graphics tab there's an option called GPU Scaling, which you can enable to upscale a lower resolution to fit the display.
OR
Set Scaling Mode to Full Panel to stretch the image to fit the display.

Whichever one you find works best, or none at all.

Recap:
Enable TriXX Boost with RIS enabled
Set RIS to 30% (raise or lower as needed)
Go to the Display tab and enable either GPU Scaling or Scaling Mode to adjust the lower resolution as needed
 