Anybody else resenting AMD because of DLSS?

Soldato
Joined
17 Aug 2009
Posts
10,719
I don't think it has a long future.

The goal is to make a larger picture out of a smaller one, and there are diminishing returns.

If AMD pushes out an open, good-enough upscaling tech that everyone can get stuck into, then the value of running a proprietary upscaling tech alongside it goes right down the toilet.
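To illustrate the "larger picture out of a smaller one" point, here's a rough, purely illustrative Python sketch (my own toy code, not anyone's actual implementation) of plain bilinear upscaling, the baseline any ML reconstruction has to beat. An interpolator like this can only blend the pixels it already has; the detail it can't recover is exactly what DLSS-style techniques try to infer, and that's where the diminishing gains live.

import numpy as np

def bilinear_upscale(img: np.ndarray, scale: int) -> np.ndarray:
    """Upscale an H x W single-channel image by an integer factor.

    A plain interpolator only blends existing pixels; it cannot invent the
    fine detail a true higher-resolution render would contain.
    """
    h, w = img.shape
    new_h, new_w = h * scale, w * scale
    # Map each output pixel back to fractional coordinates in the source image.
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    # Blend the four nearest source pixels for every output pixel.
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

low_res = np.random.rand(540, 960)        # stand-in for a 960x540 frame
high_res = bilinear_upscale(low_res, 2)   # blown up to 1920x1080
print(high_res.shape)                     # (1080, 1920)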
 
Caporegime
Joined
4 Jun 2009
Posts
31,046
Most of us probably have screens with technology in them developed by AMD; I do. We all have tessellation- and Vulkan-capable GPUs, thanks to AMD. And we are all quite happy.

DLSS will not be missed.

Monitors have tech "developed" by AMD? What's this? If you are referring to "FreeSync", AMD just enabled the "adaptive sync" feature via their drivers, the same way Nvidia did with "adaptive sync"/FreeSync monitors once their GPUs had the required hardware for it.

Didn't realise AMD developed tessellation either? Got a link? Pretty funny if so, given that one of their generations of cards was beyond terrible compared to Nvidia...

Will agree on the Vulkan/Mantle part though, AMD was a massive player there.

I am not saying that DLSS will die, unlike what others think. It may be the case that Nvidia will make it DirectML-compatible, because it is not that hard, and it could become the new standard of upscaling.
The thing with the UE4 plugin is not such great news as many think. It shows that Nvidia is not willing to spend more money and human resources for devs to enable DLSS in their games. Or at least they will try to get the feature in for free before paying.
And some devs may do that, but if AMD brings their own upscaling and it is any good and open source, then the devs will put pressure on Nvidia, because they won't work for free to maintain two code paths inside their games. This is how DLSS will die (if it dies at all and doesn't transform into the next standard).
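To picture the "two code paths" problem, here's a purely hypothetical Python sketch (none of these class or function names come from any real SDK). If both techs sat behind one engine-side interface like this, devs would happily ship both; if each needs its own hooks threaded through the renderer, one of them gets cut.

from abc import ABC, abstractmethod

class Upscaler(ABC):
    """Engine-facing interface: the game only ever talks to this."""
    @abstractmethod
    def upscale(self, frame, motion_vectors, target_resolution):
        ...

class ProprietaryUpscaler(Upscaler):   # stand-in for a vendor-locked path (e.g. DLSS)
    def upscale(self, frame, motion_vectors, target_resolution):
        raise NotImplementedError("would call the vendor SDK here")

class OpenUpscaler(Upscaler):          # stand-in for an open/DirectML-style path
    def upscale(self, frame, motion_vectors, target_resolution):
        raise NotImplementedError("would call the open implementation here")

def pick_upscaler(gpu_vendor: str) -> Upscaler:
    # One branch at startup instead of two code paths threaded through the renderer.
    return ProprietaryUpscaler() if gpu_vendor == "nvidia" else OpenUpscaler()

print(type(pick_upscaler("nvidia")).__name__)   # ProprietaryUpscaler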


The thing is, we don't have too many newly released games where neither AMD nor Nvidia was involved. One can think that the gap shown in AC Valhalla is the real gap between the two cards, while someone else can think that the gap shown in The Medium is the real gap. We will wait and see; I am sure Nvidia will be involved in a lot of major releases on PC, so their cards should be fine most of the time. But let's see the games where they don't get involved and which are also not sponsored by AMD.

Here's the problem though: no one knows until AMD's/Microsoft's alternative is actually out there for PC games to use... Until then, DLSS is all "PC gamers" have.

Also, "most" developers are lazy (or rather are constantly pushed to deliver a MVP build in an unrealistic timeframe hence why so much of the game industry revolves around "crunch time") and it is all about trying to take the quickest/easiest path so as someone said, a lot of developers will want to use things like dlss/directml if it means saving time and money therefore if dlss is a case of ticking a box (as apparently shown in a video for unreal engine?), that's going to be there go to, not a solution where they will have to spend time coding/implementing said feature, testing etc. especially on the PC version where the user base will be lesser compared to console.

Also, another factor people are forgetting: the PC version of a game often has different developers, and maybe an entirely different studio, working on it compared to the console version, as the skillset required for each platform is different.

All in all, it is a bit pointless guessing how this will all end, as ultimately it comes down to contracts, time and money, which are dictated by the stakeholders and not the consumers, i.e. not what might be "technically" better for us, the "PC gamers".



There's plenty of games where neither AMD nor Nvidia have been involved since the PS4/Xbox One launched.
 
Soldato
Joined
12 May 2014
Posts
5,236
I don't resent AMD for constantly peeing on Nvidia's camp fires.

I do resent Nvidia constantly trying to lock me into their hardware; I'm grateful AMD are there to frustrate Nvidia in that, even if it is entirely for their own reasons.

DLSS will be dead soon enough, good riddance.
Nvidia will just rename DirectML to "DLSS compatible", like they did with the G-Sync Compatible branding :p
 
Soldato
Joined
11 Sep 2007
Posts
5,740
Location
from the internet
Monitors have tech "developed" by AMD? What's this? If you are referring to "FreeSync", AMD just enabled the "adaptive sync" feature via their drivers, the same way Nvidia did with "adaptive sync"/FreeSync monitors once their GPUs had the required hardware for it.

Didn't realise AMD developed tessellation either? Got a link? Pretty funny if so, given that one of their generations of cards was beyond terrible compared to Nvidia...

Will agree on the Vulkan/Mantle part though, AMD was a massive player there.

I'm pretty sure this is actually true (edit: the bit about tessellation, I mean) - or at least a sort of partnership between AMD and Microsoft brought it about, because its first application was in the Xbox 360. The HD2900 XT had tessellation hardware as well, but very little used it because developers had to use non-standard extensions in DX10 to get it to work. Even on the 360 it wasn't really taken advantage of that well; in fact I think the highest-profile implementation was the hyper-realistic er... Viva Pinata. PDF link: https://developer.amd.com/wordpress/media/2012/10/Boulton-PinataTessellation(Siggraph07).pdf
 
Soldato
Joined
28 May 2007
Posts
10,071
Didn't realise AMD developed tessellation either? Got a link? Pretty funny if so, given that one of their generations of cards was beyond terrible compared to Nvidia...

ATI TruForm was a brand by ATI (now AMD) for a SIP block capable of doing a graphics procedure called tessellation in computer hardware. ATI TruForm was included into Radeon 8500 (available from August 2001 on) and newer products.[1]

The successor of the SIP block branded "ATI TruForm" was included into Radeon HD 2000 series (available from June 2007 on) and newer products: hardware tessellation with TeraScale.

Support for hardware tessellation only became mandatory in Direct3D 11 and OpenGL 4. Tessellation as defined in those APIs is only supported by newer TeraScale 2 (VLIW5) products introduced in September 2009 and GCN-based products (available from January 2012 on). The GCN SIP block carrying out the tessellation is the "Geometric processor".

https://en.wikipedia.org/wiki/ATI_TruForm
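For anyone unfamiliar, tessellation just means subdividing geometry into finer triangles on the GPU, which is what that SIP block does in hardware. A toy Python illustration of one subdivision step (my own example, nothing to do with ATI/AMD's actual implementation):

def midpoint(a, b):
    # Midpoint of two (x, y, z) vertices.
    return tuple((ai + bi) / 2 for ai, bi in zip(a, b))

def tessellate(triangles):
    """Split each triangle into four smaller ones by inserting edge midpoints.

    Real hardware (e.g. TruForm's N-patches) then displaces the new vertices,
    typically along surface normals, to add curvature and detail.
    """
    out = []
    for v0, v1, v2 in triangles:
        m01, m12, m20 = midpoint(v0, v1), midpoint(v1, v2), midpoint(v2, v0)
        out += [(v0, m01, m20), (v1, m12, m01), (v2, m20, m12), (m01, m12, m20)]
    return out

tri = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))]
print(len(tessellate(tri)))              # 4 triangles after one step
print(len(tessellate(tessellate(tri))))  # 16 after two - detail grows fast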
 
Caporegime
Joined
4 Jun 2009
Posts
31,046
I'm pretty sure this is actually true (edit: the bit about tessellation, I mean) - or at least a sort of partnership between AMD and Microsoft brought it about, because its first application was in the Xbox 360. The HD2900 XT had tessellation hardware as well, but very little used it because developers had to use non-standard extensions in DX10 to get it to work. Even on the 360 it wasn't really taken advantage of that well; in fact I think the highest-profile implementation was the hyper-realistic er... Viva Pinata. PDF link: https://developer.amd.com/wordpress/media/2012/10/Boulton-PinataTessellation(Siggraph07).pdf


Interesting.

Definitely laughable that Nvidia bested AMD later on down the line at their own tech! :p i.e. in Crysis 2 and The Witcher 3, although Nvidia most certainly applied pressure here to "over-apply" the tessellation effects to really show AMD's weakness.
 
Associate
Joined
8 Oct 2020
Posts
2,333
Hopefully because there will be some vendor-independent alternative. But I don't think DLSS is going anywhere; I think it'll coexist with whatever the cross-platform alternative is, as a premium version with probably slightly better quality and performance. More of a value-add than a need-to-have feature. Seeing as Nvidia can farm it out to the tensor cores, which is hardware AMD just doesn't have, it seems like there's going to be room to make DLSS a more competitive option for a while yet.

DLSS definitely won't die, and I honestly can't see AMD doing better on their first attempt both on the hardware and software side. It'll be interesting to see whether Nvidia can migrate their model to run on the generic platform without a hit to performance, in which case it would be easier for developers to implement for both sets of hardware.
 
Soldato
Joined
28 May 2007
Posts
10,071
DLSS definitely won't die, and I honestly can't see AMD doing better on their first attempt both on the hardware and software side. It'll be interesting to see whether Nvidia can migrate their model to run on the generic platform without a hit to performance, in which case it would be easier for developers to implement for both sets of hardware.

Why can't AMD do better? They are clearly not rushing this, unlike DLSS 1.0, so I expect it to be way better than that. AMD know full well they can't release this until it's good. Nvidia, on the other hand, get away with murder, as DLSS 1.0 proved. They got heat but it never stopped their cards selling. AMD can ill afford any bad press in comparison. I reckon it will be up to DLSS 2.0 standard at least, as they know DLSS 3.0 is most likely the real competitor.
 
Associate
Joined
8 Oct 2020
Posts
2,333
Why can't AMD do better? They are clearly not rushing this, unlike DLSS 1.0, so I expect it to be way better than that. AMD know full well they can't release this until it's good. Nvidia, on the other hand, get away with murder, as DLSS 1.0 proved. They got heat but it never stopped their cards selling. AMD can ill afford any bad press in comparison. I reckon it will be up to DLSS 2.0 standard at least, as they know DLSS 3.0 is most likely the real competitor.

Because 2.0 is really good.

I'm not saying they won't, but Nvidia are bigger and have a lot more experience in ML, and this isn't an easy thing to do right. I'd imagine that their first attempt will be relatively safe i.e. it'll be like a DLSS 1.5.
 
Soldato
Joined
11 Sep 2007
Posts
5,740
Location
from the internet
Why can't AMD do better? They are clearly not rushing this, unlike DLSS 1.0, so I expect it to be way better than that. AMD know full well they can't release this until it's good. Nvidia, on the other hand, get away with murder, as DLSS 1.0 proved. They got heat but it never stopped their cards selling. AMD can ill afford any bad press in comparison. I reckon it will be up to DLSS 2.0 standard at least, as they know DLSS 3.0 is most likely the real competitor.

I think it's likely that Nvidia already set the benchmark for DLSS on regular shader cores with DLSS '1.9' in Control, and that had some noticeable visual artifacting as well as having to be trained on a per-game basis. AMD might do better than that but probably not by much just because of hardware limits.
 
Caporegime
Joined
17 Mar 2012
Posts
47,661
Location
ARC-L1, Stanton System
Because 2.0 is really good.

I'm not saying they won't, but Nvidia are bigger and have a lot more experience in ML, and this isn't an easy thing to do right. I'd imagine that their first attempt will be relatively safe i.e. it'll be like a DLSS 1.5.

While covering your arse with the first sentence, you're certainly implying DirectML can't be as good because Nvidia are bigger than AMD.

You would think people would learn by now that just because a company is bigger doesn't mean they are automatically better. Intel are bigger than Nvidia and AMD put together, and Intel's CPUs are a joke, a meme compared with AMD's, and they have had more than enough time to get over the shock of Ryzen.
 
Associate
Joined
8 Oct 2020
Posts
2,333
While covering your arse with the first sentence, you're certainly implying DirectML can't be as good because Nvidia are bigger than AMD.

You would think people would learn by now that just because a company is bigger doesn't mean they are automatically better. Intel are bigger than Nvidia and AMD put together, and Intel's CPUs are a joke, a meme compared with AMD's

Bigger and a major player/more experienced in the ML space; this is relatively new ground for AMD. There's a big difference when comparing Intel vs AMD to Nvidia vs AMD - Nvidia realised they messed up with Turing and changed the architecture for Ampere, which showed a positive way forward.

Like I said, AMD have shown that they can do things right, so there's no reason to assume it'll be trash, but they have far fewer resources to play with and can only put so much effort into each feature.
 
Caporegime
Joined
17 Mar 2012
Posts
47,661
Location
ARC-L1, Stanton System
Bigger and a major player/more experienced in the ML space; this is relatively new ground for AMD. There's a big difference when comparing Intel vs AMD to Nvidia vs AMD - Nvidia realised they messed up with Turing and changed the architecture for Ampere, which showed a positive way forward.

Like I said, AMD have shown that they can do things right, so there's no reason to assume it'll be trash, but they have far fewer resources to play with and can only put so much effort into each feature.

I'll agree talent accounts for nothing if you don't have any money for R&D, but AMD are no longer in that situation. They have gone from Vega to RDNA1 to something no one thought they could do: matching Nvidia's best rasterisation performance with a much smaller die and two thirds the power consumption. In terms of that, right now AMD make Nvidia look... well, not bad, but certainly worse.

Yes, the RT performance is way down, but these are games with RTX designed to run on Nvidia GPUs; only time will tell what AMD's true ray tracing throughput actually is. IMO it isn't as good as Nvidia's second generation, but it's also nowhere near as bad as it appears in these RTX games. For a start, RDNA2 and Ampere use completely different async engines; what is designed for one will run like crap on the other.

I'm pretty confident AMD will figure super-sampling upscaling out; they are good at what they do when they have some money for R&D.
 
Associate
Joined
8 Oct 2020
Posts
2,333
I'll agree talent accounts for nothing if you don't have any money for R&D, but AMD are no longer in that situation. They have gone from Vega to RDNA1 to something no one thought they could do: matching Nvidia's best rasterisation performance with a much smaller die and two thirds the power consumption. In terms of that, right now AMD make Nvidia look... well, not bad, but certainly worse.

Yes, the RT performance is way down, but these are games with RTX designed to run on Nvidia GPUs; only time will tell what AMD's true ray tracing throughput actually is. IMO it isn't as good as Nvidia's second generation, but it's also nowhere near as bad as it appears in these RTX games. For a start, RDNA2 and Ampere use completely different async engines; what is designed for one will run like crap on the other.

I'm pretty confident AMD will figure super-sampling upscaling out; they are good at what they do when they have some money for R&D.

Agreed, I think it's realistic to expect that they won't be able to fully catch up in just a year, but what they've already achieved is extremely impressive.

I’m expecting big things from both RDNA3 and Lovelace.
 
Associate
Joined
13 Jul 2020
Posts
500
Why can't AMD do better? They are clearly not rushing this, unlike DLSS 1.0, so I expect it to be way better than that. AMD know full well they can't release this until it's good. Nvidia, on the other hand, get away with murder, as DLSS 1.0 proved. They got heat but it never stopped their cards selling. AMD can ill afford any bad press in comparison. I reckon it will be up to DLSS 2.0 standard at least, as they know DLSS 3.0 is most likely the real competitor.

Actually, they could release it in a worse state than DLSS and all the AMD fanboys/apologists would eat it up like it's better than 2x native.

Proof? The same people were going on about how FidelityFX upscaling and a sharpening filter are better than DLSS.

The ignorance and personal bias run too deep to care about facts.

so ...
 
Soldato
Joined
6 Feb 2019
Posts
17,595
Nvidia will just rename DirectML to "DLSS compatible", like they did with the G-Sync Compatible branding :p

Why fix what's not broken, aye - everyone now associates image reconstruction with DLSS, so it makes sense to rename DirectML as DLSS. Consumers benefit.
 