AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

It's quite simple - ... a weaker NVIDIA card may represent a better value proposition if you can gain superior FPS on a DLSS title.

PMSL, I missed this one. I was taking it seriously until I actually read this part. Jensen does not do weak and better value, EVER.
 
Conversely, you can't treat DLSS as "value add" if you've actually paid for the transistors, they're not a freebie.

What if adding those gives you disproportionate return on investment though? What if, say (and this seems to be borne out by example) you're paying less than a quarter of the price of the card for tensor cores and you get double or triple the frame rate?
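To put some rough numbers on that "disproportionate return" idea, here's a back-of-the-envelope sketch in Python. All of the figures are hypothetical, just illustrating the ratio being argued (tensor cores at under a quarter of the card price, double-to-triple frame rate), not real prices or benchmarks.

```python
# Purely illustrative numbers for the "disproportionate return" argument above.
# None of these are real prices or benchmark results.
card_price = 500.0           # hypothetical total cost of a card
tensor_share = 0.20          # assume tensor cores are under a quarter of that cost
native_fps = 40.0            # hypothetical native frame rate
dlss_fps = 2.5 * native_fps  # the claimed "double or triple" uplift, split the difference

cost_per_fps_native = card_price / native_fps
cost_per_fps_dlss = card_price / dlss_fps

print(f"Native: {cost_per_fps_native:.2f} per frame/s")
print(f"DLSS  : {cost_per_fps_dlss:.2f} per frame/s")
print(f"Tensor cores cost ~{card_price * tensor_share:.0f} in this example "
      f"and buy ~{dlss_fps - native_fps:.0f} extra fps")
```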

For a bazillion years people would just turn down graphical settings to get the performance they want, but suddenly they're willing to spend a premium to have their image quality reduced because of fancy transistors?

It's not supposed to be a quality compromise, that's the point. Like I said whether you believe the hype or not is up to you, but the point of it is to get better frame rates at high res and with high quality.

And heaven forfend your choice of resolution and frame rate is rendered natively without any need to upscale.

Sure, that's what we all want. But if DLSS can more-or-less get us there, what's not to like? It doesn't look like either AMD or Nvidia are getting us to 4K@120 with RT this generation without these sorts of techniques.

I'm not trying to say here that it's a good or bad thing when it actually hits the metal, I'm not boosting team green, and I'm not making any claims one way or another about the quality of the delivered result of DLSS. I don't know, I haven't done the tests, it might be *****. But you asked what the point of it is, and that's what it is.
 
To clarify, I know what DLSS is, but the question is legitimate.


And this sums up precisely why I just don't get the hype around DLSS. It's something that needs to be explicitly implemented by developers in order to use a bunch of transistors I've paid for, otherwise they're going unused. One could say I've actually wasted a chunk of my money if I don't play any of those 14 games listed above. I am literally seeing a drop in the performance level of my shiny Nvidia purchase if devs do not implement it. Conversely, you can't treat DLSS as "value add" if you've actually paid for the transistors, they're not a freebie.

And this is also why I refute the claim that "AMD needs a counter", because they already have one. Two, in fact. FidelityFX requires dev implementation, but there is also the driver-level Radeon Image Sharpening, which works on every game. And neither requires dedicated hardware to do so. And heaven forfend your choice of resolution and frame rate is rendered natively without any need to upscale. I don't want "near native quality" for the money I spend (especially if I'm also at the mercy of devs implementing software to render "near" native), I want "actually native quality".

For a bazillion years people would just turn down graphical settings to get the performance they want, but suddenly they're willing to spend a premium to have their image quality reduced because of fancy transistors?


Nvidia already have their own sharpening, which applies to more games than AMD's.

If AMD have more rasterisation performance, awesome... but honestly, what are the odds of them beating the 3080 in this...
 
This is another thing that irks me: the logic is sound but the implementation is backwards.

DLSS can boost FPS significantly
DLSS requires tensor cores

And yet

The cards that would benefit most from DLSS performance uplift have fewer tensor cores than cards that don't need them (as much).

Really :confused:

And if DLSS can really push the performance of a lower card up a few tiers, where is the incentive for consumers to actually buy a more expensive one? Oh yeah, gimp the transistors that actually allow such uplift. Because why would Nvidia - or any other company - literally give you the means to not spend more money now or in the future?


True. I do feel DLSS is almost too good, so Nvidia will not want it to go mainstream as it stops users from upgrading.
 
It's not supposed to be a quality compromise, that's the point. Like I said whether you believe the hype or not is up to you, but the point of it is to get better frame rates at high res and with high quality.
But it IS a quality compromise, and a paid premium for it too. THAT is the point. If you, as a user, are prepared to take a fidelity hit to boost performance then it's surely a more logical choice to turn down a few settings. It makes absolutely no sense to pay a premium for silicon that will make that choice for you, but only if a developer implements it. It's just a totally bizarre thought process to me.

I want 1440p60, I buy card X that will do it natively. I want 1440p60, but can only afford card Y that will do 1080p60 so I turn down settings. If I could afford the premium for Tensor cores that are barely even utilised then I'd just get card X to begin with.

I'm not trying to say here that it's a good or bad thing, I'm not boosting team green, and I'm not making any claims one way or another about what the actual delivered result of DLSS is. But you asked what the point of it is, and that's what it is.
And I thank you.

The point of all this is my counter to the argument "AMD must match DLSS". I say they don't need to because there is nothing to counter. Between FidelityFX and Radeon Image Sharpening, AMD already have two features for upscaling. The adoption of DirectML is a third. If DLSS is only for upscaling then AMD have had a counter to it for over a year already.

The question though is denoising of RT, but from what I recall the tensor cores don't even denoise real-time RT.
 
THAT is the point. If you, as a user, are prepared to take a fidelity hit to boost performance then it's surely a more logical choice to turn down a few settings.

I eagerly await your series of analysis postings showing that 'turning down a few settings' can equal DLSS framerates and quality.

Until then I'm going to keep an open mind.
 
I do find it ironic: PC gamers on many forums slagged consoles off for "cheating" with upscaling last generation, but now PC has it, it's the "in" thing! Welcome to our new upscaling overlords!

:p
 
A sharpening filter isn't intelligent. It simply applies a filter against what's already there. It doesn't know or care what the image is. It sees a data set and applies its pre-programmed algorithm. Sharpening also has side effects ranging from shimmering and crawling to ringing.
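To make the "not intelligent" point concrete, here's a minimal sketch of a content-unaware sharpening pass, assuming numpy and scipy are available. It's just a fixed 3x3 convolution applied to every pixel, which is also where the ringing/halo artefacts mentioned above come from. This is an illustration of a generic sharpen filter, not any vendor's actual implementation.

```python
import numpy as np
from scipy.ndimage import convolve

# A classic 3x3 sharpening kernel: boost the centre pixel, subtract the
# neighbours. It is applied identically everywhere - the filter has no idea
# whether it is sharpening an edge, a texture or noise.
SHARPEN = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=np.float32)

def sharpen(gray_image: np.ndarray) -> np.ndarray:
    """Content-unaware sharpening of a single-channel image in [0, 1]."""
    out = convolve(gray_image.astype(np.float32), SHARPEN, mode="nearest")
    # The overshoot around strong edges is clipped here, but it is exactly
    # this overshoot that shows up as ringing/halos in the final picture.
    return np.clip(out, 0.0, 1.0)

# Example call, just to show the shape of the API on a dummy "image".
img = np.random.rand(64, 64).astype(np.float32)
sharpened = sharpen(img)
```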

MLAA is aware of what the actual image is, down-reses it (the performance aspect) and upscales it, sometimes even adding in more information if the native asset has room for improvement. It's also adaptive, meaning it has the ability to learn and optimize on a per-scene basis.

DLSS 1.0 was useless and I ragged on it in the DLSS thread, but 2.0+ is absolutely fantastic. Nvidia however needs to do a better job of ensuring that the majority of upcoming titles have it from Day 1. Down the road, hopefully it becomes a driver-level feature.

The saving grace for AMD, if they don't have a direct competitor, is that it's currently in a small list of games. However, if that list grows notably then it puts more pressure on AMD.
 
AI upscaling, which in effect allows for decent framerates at high resolutions with fewer shaders by having the shaders and RTX render at a lower res and offloading some of the detail work to the tensor cores.

Whether you think that is good, or whether you think it's "cheating" or a scam as some here seem to, that's the point of it. I thought people were saying AMD had a similar feature?
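For what it's worth, the "render low, reconstruct high" idea boils down to something like the sketch below. It's a hypothetical pipeline only - the two functions are stand-ins, not Nvidia's actual API, and the upscale step here is a dumb nearest-neighbour repeat purely to show where the work moves.

```python
import numpy as np

TARGET = (2160, 3840)    # what the user asked for (4K output, rows x cols)
INTERNAL = (1440, 2560)  # what the shaders/RT actually render

def render_frame(resolution):
    """Stand-in for the expensive shader/RT work: returns a fake frame."""
    return np.random.rand(*resolution).astype(np.float32)

def ml_upscale(frame, target):
    """Stand-in for the tensor-core reconstruction step. Here it is just
    nearest-neighbour repetition - the point is only that the heavy
    rendering happened at the lower internal resolution."""
    ry = target[0] // frame.shape[0] + 1
    rx = target[1] // frame.shape[1] + 1
    big = np.repeat(np.repeat(frame, ry, axis=0), rx, axis=1)
    return big[:target[0], :target[1]]

low = render_frame(INTERNAL)      # cheaper on shaders/RT
final = ml_upscale(low, TARGET)   # detail work offloaded to the "ML" step
assert final.shape == TARGET      # presented at the chosen output resolution
```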

So, effectively what you're saying is, Nvidia can give you a less powerful GPU and make up the difference in software but charge you more for the privilege :P
 
A sharpening filter isn't intelligent. It simply applies a filter against what's already there. It doesn't know or care what the image is. It sees a data set and applies its pre-programmed algorithm. Sharpening also has side effects ranging from shimmering and crawling to ringing.

MLAA is aware of what the actual image is, down-reses it (the performance aspect) and upscales it, sometimes even adding in more information if the native asset has room for improvement. It's also adaptive, meaning it has the ability to learn and optimize on a per-scene basis.

DLSS 1.0 was useless and I ragged on it in the DLSS thread, but 2.0+ is absolutely fantastic. Nvidia however needs to do a better job of ensuring that the majority of upcoming titles have it from Day 1. Down the road, hopefully it becomes a driver-level feature.

The saving grace for AMD, if they don't have a direct competitor, is that it's currently in a small list of games. However, if that list grows notably then it puts more pressure on AMD.

I'm not a fan of DLSS but the results are an ocean away from older techniques based around simple extrapolation type upscaling even with a semi-content aware filter + sharpening.

Screenshots don't really do it justice, both negative and positive - DLSS doesn't seem to have entirely nailed down motion in scenes, and with newer incarnations I sometimes find the result has a ringing, sharpening-type impact on the final image, which I'm not a fan of even though it technically looks as good or better than native. I will only accept it personally in situations where it allows for higher quality implementations of ray tracing techniques used throughout the scene rather than just on reflections or certain shadows, etc.
 
So, effectively what you're saying is, Nvidia can give you a less powerful GPU and make up the difference in software but charge you more for the privilege :p

If it works well, why not?

Seriously, if I can't get 4K/120 at high quality on *any* hardware, Nvidia or AMD, without techniques like this, but I can get pretty damn close with them, what's not to like?

Really not sure what the objections are here.
 
I'm not a fan of DLSS but the results are an ocean away from older techniques based around simple extrapolation type upscaling even with a semi-content aware filter + sharpening.

Screenshots don't really do it justice, both negative and positive - DLSS doesn't seem to have entirely nailed down motion in scenes, and with newer incarnations I sometimes find the result has a ringing, sharpening-type impact on the final image, which I'm not a fan of even though it technically looks as good or better than native. I will only accept it personally in situations where it allows for higher quality implementations of ray tracing techniques used throughout the scene rather than just on reflections or certain shadows, etc.

One of the main reasons they need to get it into more games is so they have a larger sampling database for the ML algorithms to cycle against and self-tune. The larger and more varied the dataset, the faster and better it'll perform. As someone who's had calibrated displays since the Kuro days, I very much prefer pristine PQ. I'd personally prefer if we had the ability to disable certain 'tools' DLSS is allowed to use on an image, like sharpening, but that's probably a long time away.
 
Can you show me the artifacts here? 1440p DLSS 2.0 vs 4K native.

https://www.eurogamer.net/articles/digitalfoundry-2020-control-dlss-2-dot-zero-analysis

I have a strong feeling the amount of tangible improvements will drown out any artifacts you find.

This was linked from the AT forums, but you can see it up close in the DF video:
https://forums.anandtech.com/attachments/dlss-jpg.28992/

It also blatantly uses sharpening - it looks sharper than stuff with FXAA/TAA because it uses local contrast enhancement, i.e. sharpening. You can see the higher contrast on detected edges.

I knew people who worked on machine learning based techniques for image reconstruction, and image sharpening was part of the pipeline. There are also temporal artefacts with certain particle effects, which DF said they saw, as the machine learning aspects find it hard to predict pseudo-random effects.

In fact, if you have any interest in image editing, upscaling has existed in various forms - native images from ILCs are generally quite soft. Yet many smartphones actually do some degree of interpolation and sharpening, which makes them look better than ILCs to the average person, whilst papering over the lack of detail. In fact, prior to Nvidia etc., many smartphone SoCs, such as the Apple ones, have had "machine learning logic" integrated onto the SoC. Image upscaling and processing is actually one of the main uses of this logic.

Another use of the logic is for machine learning assisted computer vision, i.e. pattern recognition.

One of the main reasons they need to get it into more games is so they have a larger sampling database for the ML algorithms to cycle against and self-tune. The larger and more varied the dataset, the faster and better it'll perform. As someone who's had calibrated displays since the Kuro days, I very much prefer pristine PQ. I'd personally prefer if we had the ability to disable certain 'tools' DLSS is allowed to use on an image, like sharpening, but that's probably a long time away.

That had an impact on DLSS 1.0, which was game specific - DLSS 2.0 isn't the same. It's probably being trained on specific scene types, not games per se, and working off the image reconstruction stuff they have been working on for a few years.

Needing to train on specific games is not really a good way forward if you want it to be a thing which needs no dev input. Having it recognise various scenes, and then applying relevant algorithms to that scene, would make more sense.
 
This was linked from the AT forums, but you can see it up close in the DF video:
https://forums.anandtech.com/attachments/dlss-jpg.28992/

There are also temporal artefacts with certain particle effects, which DF said they saw, as the machine learning aspects find it hard to predict pseudo-random effects.

It also blatantly uses sharpening - it looks sharper than stuff with FXAA/TAA because it uses local contrast enhancement, i.e. sharpening. I knew people who worked on machine learning based techniques for image reconstruction, and image sharpening was part of the pipeline.

In fact, if you have any interest in image editing, upscaling has existed in various forms - native images from ILCs are generally quite soft. Yet many smartphones actually do some degree of interpolation and sharpening, which makes them look better than ILCs to the average person, whilst papering over the lack of detail. In fact, prior to Nvidia etc., many smartphone SoCs, such as the Apple ones, have had "machine learning logic" integrated onto the SoC. Image upscaling and processing is actually one of the main uses of this logic.

Another use of the logic is for machine learning assisted computer vision, i.e. pattern recognition.



That had an impact on DLSS 1.0, which was game specific - DLSS 2.0 isn't the same. It's probably being trained on specific scene types, not games per se, and working off the image reconstruction stuff they have been working on for a few years.

2.0 is still the same on the back end in that more data means better execution going forward. It's no longer game specific but still needs new data points to build its library. They then send the updated instructions locally via drivers and updates for the tensor cores to deploy. If that wasn't the case, you'd just see it at the driver level. https://www.nvidia.com/en-gb/geforce/news/nvidia-dlss-2-0-a-big-leap-in-ai-rendering/ They explain it well here: "The NVIDIA DLSS 2.0 Architecture".

An interesting test down the road would be to compare an old driver against one a year later to really see how it's evolved against the same title. If Nvidia is truthful, the old title on the new driver should have a better implementation. If not, perhaps it's not as ubiquitous as they claim.
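If anyone actually runs that test, here's a rough sketch of how you might score matched captures of the same scene against a native-res reference, assuming numpy, Pillow and a recent scikit-image. The file names are hypothetical placeholders and the captures would need to be at the same output resolution.

```python
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity as ssim

def load(path):
    """Load a screenshot as a float RGB array in [0, 1]."""
    return np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0

# Hypothetical file names: matched captures of the same scene.
native   = load("control_native_4k.png")
old_dlss = load("control_dlss_old_driver.png")
new_dlss = load("control_dlss_new_driver.png")

for label, img in [("old driver", old_dlss), ("new driver", new_dlss)]:
    mse = np.mean((img - native) ** 2)
    psnr = 10 * np.log10(1.0 / mse)                        # higher = closer to native
    score = ssim(native, img, channel_axis=2, data_range=1.0)
    print(f"{label}: PSNR {psnr:.2f} dB, SSIM {score:.4f}")
```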

As I mentioned, being able to disable edge/contrast enhancement as an end-user tool down the road would be ideal. However, DLSS is still able to add detail to a picture via upscaling, as texture quality in games can be hit and miss.
 
Also, the GTX 1060 share covers both the 3GB/6GB models and the mobile versions, which all have the same name. If you add the RX 480/RX 580/RX 590/RX 470/RX 570 numbers together it comes to nearly 5% - still not as good as Nvidia, but still a reasonable share, especially as AMD laptop share is non-existent. The problem is RDNA didn't really replace Polaris that well.

If you look at sales figures from Mindfactory, the 5700 XT keeps up with 2070 Super sales and sold more than the 2070 did back in July 2019. The 2080 Ti has sold roughly 1,200-1,600 units per 10,000 5700 XTs. So how is it then possible for the 2080 Ti to have a higher share on the Steam survey, unless the survey isn't run equally on all machines, resulting in a garbage and unusable result?
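As a quick sanity check on that ratio, here's a back-of-the-envelope sketch using only the Mindfactory figures quoted above. The Steam survey shares are deliberately left as inputs to the helper function rather than filled in here, since those would need to come from the actual survey.

```python
# Back-of-the-envelope check of the Mindfactory ratio quoted above.
ti_per_10k_xt = (1200, 1600)   # 2080 Ti units sold per 10,000 5700 XTs (from the post)

for ti in ti_per_10k_xt:
    print(f"Mindfactory: ~{10000 / ti:.1f} 5700 XTs sold per 2080 Ti")

def implied_survey_ratio(xt_share_pct: float, ti_share_pct: float) -> float:
    """Ratio of 5700 XT to 2080 Ti machines implied by Steam survey shares.
    Pass in the real survey percentages; none are assumed here."""
    return xt_share_pct / ti_share_pct
```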

 