AMD has an answer to DLSS: DirectML Super Resolution

It should be noted that DLSS 2.0 is also capable of producing an image that is sharper than the original, with more detail.

You might still get some people arguing the "more detail" aspect, but it is technically correct where applicable. It may be upsampling from a lower resolution than the output resolution, but it's doing so using a model trained on the game running at 16K back at Nvidia's HQ - so in theory it makes sense that DLSS could add detail to the image that isn't there at native 4K or native 1440p, because the DLSS profile was taken from the game running at 16K. I wonder if this effect will become more pronounced in the years to come as games begin to use higher resolution texture assets.
 
When DLSS was announced I thought it was a much more relevant tech than RT at this point in time. Then when we saw the results it was a proper let down due to the blur effect. DLSS 2.0, though, is where it should have been in the first place.

Problem is DLSS is another locked-in Nvidia tech, so it will fade out just like the others apart from the odd Nvidia title. Glad to see AMD and Microsoft are bringing in an open version which we will all benefit from. Here's hoping it's more DLSS 2.0 and not DLSS 1.0.



Nvidia was a big proponent of DirectML and have led much of the development with MS. Nvidia have had fully compatible DirectML drivers leveraging their Tensor Cores for a long time.

DirectML is simply an API, just like DirectX; the real magic is how you accelerate that API, and that is where Nvidia have a clear advantage.
 
Yep - as much as people want to think that DirectX 12's ray tracing and DirectML won't work on Nvidia or make use of Nvidia's architecture, it's not true. Today's new DirectX 12 ray tracing benchmark shows that, and DirectML is already supported on Turing and Ampere.
 
DirectML-super-sampling.jpg


Old wine new bottle :)
 
Some of those hardcore fanboys just really can't accept that machine learning upsampling technologies are going to be a big deal in the future, even if they aren't necessarily right this second, and AMD needs to be on that train...
I only really noticed recently, but anything that isn't perfectly implemented the moment it's launched is called a gimmick by most.
Raytracing? Gimmick, nobody cares about it. DLSS? Gimmick, never gonna be the norm, only a handful of games use it. 4K? Gimmick, 1440p is the max necessary resolution. All of these are claims I've seen in the last few weeks, just because the tech isn't literally perfect.
 
Old wine new bottle :)


Exactly - DirectML is just a different API wrapper around the exact same technology Nvidia deployed with DLSS.

DirectML isn't an algorithm or a model, just a unified way of accessing hardware that can accelerate the tensor operations key to ML. It is basically like Google's TensorFlow, in an environment and API better suited to realtime applications.


The ability of any AMD or Nvidia card to work well with a DirectML-supported game is entirely down to the underlying hardware acceleration, and Nvidia is now on its third generation of Tensor Cores. It remains to be seen what AMD can support in hardware.
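
To make the "it's just an API" point concrete, here's a rough analogy in Python/PyTorch (DirectML itself is a C++/COM API, so treat this purely as an illustration; the matrix sizes are arbitrary). The call you make is identical everywhere - how fast it runs depends entirely on what hardware the backend can dispatch it to.

import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

# Identical API call in both cases...
c_cpu = a @ b                    # ...executed on the CPU

if torch.cuda.is_available():
    # ...executed on whatever the GPU offers: plain shader ALUs, or
    # Tensor Cores if the backend/driver routes it there.
    c_gpu = a.cuda() @ b.cuda()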
 
Some details here:
https://stackoverflow.com/questions...-to-benefit-from-matrix-multiplication-on-gpu


The RX 5700 XT is 8-9x faster at matrix operations than software.
 
That has nothing to do with hardware acceleration of tensor operations. That is just the age-old GPU acceleration using standard compute units.
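
Rough sketch of the distinction in Python/PyTorch (assumes a CUDA-capable card; sizes and iteration counts are arbitrary): plain fp32 matrix multiplies run on the ordinary compute units, while fp16 multiplies on Turing/Ampere are eligible to be routed through the Tensor Cores.

import time
import torch

def bench(dtype, n=8192, iters=10):
    # Time a batch of large matrix multiplies at the given precision.
    a = torch.randn(n, n, device="cuda", dtype=dtype)
    b = torch.randn(n, n, device="cuda", dtype=dtype)
    torch.cuda.synchronize()
    t0 = time.time()
    for _ in range(iters):
        a @ b
    torch.cuda.synchronize()
    return time.time() - t0

if torch.cuda.is_available():
    print("fp32:", bench(torch.float32))  # general-purpose ALUs
    print("fp16:", bench(torch.float16))  # Tensor Core eligible on RTX cards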


As far as I know, the 6000 series doesn't have any dedicated tensor cores, so everything will be going through the CUs. Supposedly AMD might have added a few new ops to help, but it remains to be seen how effective this is, and the crux of the matter is that if the CUs are doing the tensor operations then they aren't doing the rasterization, making it a strict tradeoff in performance. The advantage of dedicated hardware is that not only is it faster and more efficient, but it can be used in parallel with the CUs doing rasterization. The same applies to ray tracing.

My only point is that we know Ampere and Turing fully support DirectML with dedicated hardware acceleration and proven performance, with Nvidia at the forefront of this technology and working closely with MS on the API. AMD on the other hand is an unknown: little prior experience, and seemingly no/minimal hardware support. That doesn't mean AMD's solution will be clearly worse, just unknown, with different tradeoffs.
 
^ And ultimately AMD's lack of any comparison of hardware-based acceleration for tensor-core-style features can only mean it's too early for AMD to yield any meaningful numbers against Nvidia's much more mature hardware for these operations.

I think people need to keep in mind RT isn't just for reflections either. Games like Shadow of the Tomb Raider utilise RT for more natural looking shadows (among other things) too. I've seen a lot of comments saying stuff like RT isn't a big deal currently when in fact it is, and has been for a couple of years.
 
I've seen a lot of comments saying stuff like RT isn't a big deal currently when in fact it is, and has been for a couple of years.
Real time ray tracing, which has been the holy grail for longer than most of the people who say this have been capable of abstract thought, is suddenly just a gimmick
 
I think people need to keep in mind RT isn't just for reflections either. Games like Shadow of the Tomb Raider utilise RT for more natural looking shadows (among other things) too. I've seen a lot of comments saying stuff like RT isn't a big deal currently when in fact it is, and has been for a couple of years.

NOT A BIG THING.

It is a thing when it's mainstream, which means when 80% of customers buy cards at below $250.
It's why I find these comments so funny, with people having no idea what they're talking about.
Whenever real-time ray tracing can be done on PC with cards below $250, then it's a thing.

Right now it's a joke told by Jensen and then fudge spread by the less informed.
 
Super TLDR is that you use machine learning to provide a better upscaled image than you could otherwise achieve from low resolution source material. In very simple terms the algorithm learns what stuff should look like, and can use that knowledge to reconstruct high resolution images from lower resolution ones... imagine you were good at art (maybe you are lol) and you were shown a low res picture of a person's face... using your learned knowledge of what people look like, you could then draw a much higher fidelity picture from the original lower fidelity image by filling in the missing information yourself and it would probably look pretty accurate.
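
If anyone wants to see roughly what that looks like in code, here's a toy sketch in Python/PyTorch (the network size, layer count and names are all made up for illustration; real DLSS is vastly more sophisticated): a small network trained on low-res/high-res pairs learns to add plausible detail back on top of a naive upscale.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyUpscaler(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, lr):
        # Start from a naive bicubic upscale, then let the network add the
        # detail it has learned that images like this "should" contain.
        up = F.interpolate(lr, scale_factor=self.scale, mode="bicubic",
                           align_corners=False)
        return up + self.body(up)

# Training would minimise e.g. L1 loss between the model output and the real
# high-res frame: loss = F.l1_loss(model(lr_batch), hr_batch)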

The general idea is that you can, for example, play at something that looks an awful lot like native 4K while only actually rendering at 1440p, which obviously makes high frame rates much easier to achieve - especially once you start to take the relatively poor ray tracing performance of current GPUs into account.

Less TLDR is that DLSS is Nvidia's own proprietary version of AI-based upscaling. Originally, DLSS in particular required training the algorithm using very high resolution (16K I believe) source material from the game in question, and the AI would then learn to use more sparsely rendered data from lower resolution render targets and boost the resolution by filling in the blanks. More recently, DLSS 2.0 significantly altered how this works: it is instead trained as a generic neural net, which ditches the per-game training requirement and works with TAA and motion vectors instead. It still requires per-game implementation at this time, but is nowhere near as resource intensive.
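
Nobody outside Nvidia knows the exact internals, but the general shape of TAA-style temporal upsampling with motion vectors looks something like this rough Python/PyTorch sketch (all names, tensor shapes and the fixed blend factor are my own illustrative assumptions; a real implementation replaces the blend with a learned network that also rejects stale or occluded history):

import torch
import torch.nn.functional as F

def reproject(prev_output, motion_vectors):
    # Warp last frame's high-res output to the current frame using per-pixel
    # motion vectors (assumed here to be in normalised [-1, 1] screen space).
    n, _, h, w = prev_output.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=prev_output.device),
        torch.linspace(-1, 1, w, device=prev_output.device),
        indexing="ij")
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    grid = base - motion_vectors.permute(0, 2, 3, 1)
    return F.grid_sample(prev_output, grid, align_corners=False)

def temporal_upsample(lr_frame, prev_output, motion_vectors, blend=0.9):
    # Blend the warped history with a naive upscale of the current low-res frame.
    current = F.interpolate(lr_frame, size=prev_output.shape[-2:],
                            mode="bilinear", align_corners=False)
    history = reproject(prev_output, motion_vectors)
    return blend * history + (1.0 - blend) * current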

As for DirectML super resolution, I stand to be corrected, but I don't believe we know much yet about the nuts and bolts of how it works - though one could take a stab at it being some sort of generic neural net, TAA-based solution too. DirectML itself is much broader than just super resolution/DLSS, though, and aims to incorporate a larger range of machine learning based systems for gaming, which is pretty cool.

There are also others - FRL is also developing machine learning upsampling techniques, and there are open-source variants using GAN-style training.

Or, for a real TLDR, using "AI" to do the CSI "Enhance" meme on lower resolution content. :)
 
As far as I know, the 6000 series doesn't have any dedicated tensor cores, so everything will be going through the CUs.

I skimmed through the Navi white paper a while ago... it seems the CU is a SIMD unit that can do a lot more than fp32 MAD. If commands are being dispatched optimally it would still count as pseudo-acceleration.

Real time ray tracing, which has been the holy grail for longer than most of the people who say this have been capable of abstract thought

The problem scales exponentially; you don't want to be solving such problems... I would prefer a waiting game on that.
 
It should be noted that DLSS 2.0 is also capable of producing an image that is sharper than the original, with more detail.

Naaaaaaaaaaaaaaaaaaaaaaaah. Just because they add more sharpening to it in Control doesn't mean anything, I can add CAS at any time in any game.
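
For anyone wondering what CAS actually does, here's a simplified, unofficial sketch of the idea in Python/NumPy (not AMD's FidelityFX code; the neighbourhood, weighting and strength parameter are my own simplifications): sharpen strongly in flat areas and back off where local contrast is already high, so edges don't get blown out.

import numpy as np

def cas_like_sharpen(img, strength=0.5):
    # img: float array in [0, 1], shape (H, W, C).
    pad = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    # Cross-shaped neighbourhood: up, down, left, right of each pixel.
    up    = pad[:-2, 1:-1]
    down  = pad[2:,  1:-1]
    left  = pad[1:-1, :-2]
    right = pad[1:-1, 2:]
    neighbours = np.stack([up, down, left, right, img])
    local_min = neighbours.min(axis=0)
    local_max = neighbours.max(axis=0)
    # Adaptive weight: strong sharpening where contrast is low, weak on hard edges.
    contrast = local_max - local_min
    amount = strength * (1.0 - contrast)
    # Unsharp-mask style sharpening with the adaptive amount.
    blur = (up + down + left + right) / 4.0
    return np.clip(img + amount * (img - blur), 0.0, 1.0)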

 
Naaaaaaaaaaaaaaaaaaaaaaaah. Just because they add more sharpening to it in Control doesn't mean anything, I can add CAS at any time in any game

That's how hardware will scale efficiently in the future. Right now we are just getting to sample it, but it needs to begin somewhere.

Intel seems to have patented a highly speculative method of graphics rendering further building on this initial use case. The next few years should be exciting.
 