
Anybody else resenting AMD because of DLSS?

Give over, you have no idea how or what AMD is doing with theirs, or how much real, tangible difference in any direction there is going to be.

This is all because... it hasn't seen the light of day... so making claims is a joke.

We all know dedicated hardware is better than software doing the same process on top of existing hardware, since that causes overhead and leaves less power for the rest of the job.

Isn't DLSS-style cheating just temporary anyway, until the hardware gets more powerful?

Well it's not "cheating": it gives you almost the same image quality by using a lower internal resolution, so you're getting 4K quality through creative methods that improve performance.

It'll never go away, because of 8K, 16K and ever more demanding game titles.
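To put rough numbers on that, here's a quick sketch; the scale factors are just the ones commonly quoted for the quality modes, not official figures:

Code:
# Rough illustration of the lower-internal-resolution point. The scale factors
# below are the commonly quoted ones for DLSS-style quality modes; treat them
# as assumptions, not official figures.

target_w, target_h = 3840, 2160          # 4K output
native_pixels = target_w * target_h

modes = {"Quality": 0.67, "Balanced": 0.58, "Performance": 0.50}

for name, scale in modes.items():
    w, h = int(target_w * scale), int(target_h * scale)
    share = (w * h) / native_pixels
    print(f"{name:12s} renders {w}x{h} internally -> {share:.0%} of the 4K pixel count")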
 
We all know dedicated hardware is better than software doing the same process on top of existing hardware, since that causes overhead and leaves less power for the rest of the job.



Well it's not "cheating": it gives you almost the same image quality by using a lower internal resolution, so you're getting 4K quality through creative methods that improve performance.

It'll never go away, because of 8K, 16K and ever more demanding game titles.
It will not be hardware vs software but extra hardware vs hardware. The tensor cores are a cut-down version of the normal cores AMD has, but they are extra hardware, so when using the same tech Nvidia will have an advantage over AMD, for example if both were using DLSS or SR.

But since these will be two different solutions, how can you compare them? Not to mention saying that Nvidia is already a lot better? :D
How can you compare SR vs DLSS? Based on FPS gains? Based on IQ? Who's going to be the judge? Unless there is a standard, everything is an opinion: some will say DLSS is better no matter what, while others will say SR performance is better than DLSS quality. :)
 
We all know dedicated hardware is better than software doing the same process on top of existing hardware, since that causes overhead and leaves less power for the rest of the job.

We all know that making claims about results reached through what are guaranteed to be different means is hot air when one of those results doesn't exist yet.
 
It will not be hardware vs software but extra hardware vs hardware. The tensor cores are a cut-down version of the normal cores AMD has, but they are extra hardware, so when using the same tech Nvidia will have an advantage over AMD, for example if both were using DLSS or SR.

But since these will be two different solutions, how can you compare them? Not to mention saying that Nvidia is already a lot better? :D
How can you compare SR vs DLSS? Based on FPS gains? Based on IQ? Who's going to be the judge? Unless there is a standard, everything is an opinion: some will say DLSS is better no matter what, while others will say SR performance is better than DLSS quality. :)

It's pretty much hardware vs software though, isn't it, since DLSS is utilising a portion of the chip specifically set aside for workloads such as DLSS, whereas AMD hasn't specifically set aside a portion of the chip for such workloads, which is where the performance cost comes in.

If you want to be specific, with AMD it's software running on already-utilised hardware, whilst with Nvidia it's software running on dedicated hardware.

The only way I would view AMD's alternative as the better option is if it has no more of a performance impact than DLSS and, at the same time, the IQ ends up better than native resolution.

I think that's going to be a hard ask for AMD when they've not even got a dedicated piece of hardware for the job.

We all know that making claims about results reached through what are guaranteed to be different means is hot air when one of those results doesn't exist yet.

Do I have to keep repeating myself in response to you?

As I said, and I'm not going to reiterate it in any other way again:

We all know dedicated hardware vs existing hardware will allow higher performance, as the workload isn't going to reduce the amount of time allocated to the tasks already at hand. The less time the cores have to do their existing job, the slower that job will get done. Unless you bring in more cores, which is kind of hard to do without building a new chip.
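If it helps, here's a crude way to picture what I mean; every number in it is invented purely to illustrate the point, not a measurement of anything:

Code:
# Toy frame-time model for the shared-cores vs dedicated-hardware point.
# Every millisecond figure here is made up for illustration; nothing is a benchmark.

render_ms  = 10.0   # assumed cost of rendering the frame at the lower internal resolution
upscale_ms = 1.5    # assumed cost of the upscaling pass itself

# If the upscale runs on the same shader cores, it simply adds to the frame time.
shared_total = render_ms + upscale_ms

# If it runs on separate units, assume most of it overlaps with other work.
overlap = 0.8
dedicated_total = render_ms + upscale_ms * (1 - overlap)

for label, ms in (("same cores", shared_total), ("dedicated units", dedicated_total)):
    print(f"{label:15s} {ms:.1f} ms/frame -> {1000 / ms:.0f} fps")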
 
So unless you hear DF saying that SR is better than native, you will think DLSS is better, because they said it is better than native. Even though most people can tell you right away when DLSS is used, because it blurs a lot of things in the image. :)

What you say about dedicated hardware vs running on the same cores is fine if we're talking about the same solution. As I told you, if both were using DLSS then Nvidia would have an advantage you can see: more FPS at the same IQ. But since these are different techs, why do you think some people won't tell you that AMD's performance mode has better IQ than DLSS quality mode, and you get more FPS? :)
 
So how do you think this new tech will work? A magic cloud of computation?
Upscaling is a much easier workload compared with native rendering, so gaining some FPS should not be a problem. How good it is... that is up to each of us to see, listen to the tech influencers, and believe. I don't like DLSS and I won't like AMD SR either, especially if I have to run it on a very expensive card to get some decent FPS.
Give me native res or give me death! :D
 
Do I have to keep repeating myself in response to you?

As I said, and I'm not going to reiterate it in any other way again:

We all know dedicated hardware vs existing hardware will allow higher performance, as the workload isn't going to reduce the amount of time allocated to the tasks already at hand. The less time the cores have to do their existing job, the slower that job will get done. Unless you bring in more cores, which is kind of hard to do without building a new chip.

All I see is you doubling down on a hot air claim that DLSS looks better than AMD's unreleased answer.

DLSS 1.0 used dedicated hardware and looked so bad that they started again. It looked especially bad because AMD was offering simpler tools with a better result.

So if you think that pitching the "DEDICATED HARDWARE" theory is some kind of unassailable argument, well it isn't and there's historical evidence on this exact matter.
 
All I see is you doubling down on a hot air claim that DLSS looks better than AMD's unreleased answer.

DLSS 1.0 used dedicated hardware and looked so bad that they started again. It looked especially bad because AMD was offering simpler tools with a better result.

So if you think that pitching the "DEDICATED HARDWARE" theory is some kind of unassailable argument, well it isn't and there's historical evidence on this exact matter.

You obviously don't understand the concept of dedicated hardware, so I'll let you work that out yourself.
 
To the person posting Control screenshots with the LOD issues: those are not related to DLSS. There are fixes for this through mods/config files.

And to the screenshot comparisons, the real kicker is the FPS gain. Not sure that was posted?

I've been really impressed using it with Control. Even if you think there is a slight visual hit or some artefacts, the FPS boost is the winner.
 
All I see is you doubling down on a hot air claim that DLSS looks better than AMD's unreleased answer.

DLSS 1.0 used dedicated hardware and looked so bad that they started again. It looked especially bad because AMD was offering simpler tools with a better result.

So if you think that pitching the "DEDICATED HARDWARE" theory is some kind of unassailable argument, well it isn't and there's historical evidence on this exact matter.

DLSS doesn't require dedicated hardware; it's a software solution.

I assume AMD cards will handle AI upscaling worse than Nvidia cards as they'll need to use their general processing cores, which will take performance away from other tasks.
 
You obviously don't understand the concept of dedicated hardware, so I'll let you work that out yourself.

Real cute, deny and deflect.

You want to set up fantasy goalposts about what happens if the same work is done, etc. etc., but back in reality that demonstrably didn't translate to a good result for DLSS 1.0.

Implementation matters and just like everyone else you haven't seen the implementation you're saying DLSS is better than. Hence. Hot air.
 
Real cute, deny and deflect.

You want to set up fantasy goalposts about what happens if the same work is done, etc. etc., but back in reality that demonstrably didn't translate to a good result for DLSS 1.0.

You want to set up fantasy goalposts.

Tell you what, we'll just ditch the GPU and render the game on the CPU; dedicated hardware doesn't do anything.

DLSS doesn't require dedicated hardware; it's a software solution.

I assume AMD cards will handle AI upscaling worse than Nvidia cards as they'll need to use their general processing cores, which will take performance away from other tasks.

It won't because of the magic core tree that keeps dropping new cores on people's GPUs.
 
DLSS doesn't require dedicated hardware; it's a software solution.
Willhub is talking about the tensor cores.

Real cute, deny and deflect.

You want to set up fantasy goalposts about what happens if the same work is done, etc. etc., but back in reality that demonstrably didn't translate to a good result for DLSS 1.0.

Implementation matters and just like everyone else you haven't seen the implementation you're saying DLSS is better than. Hence. Hot air.
You want to set up fantasy goalposts.

Tell you what, we'll just ditch the GPU and render the game on the CPU; dedicated hardware doesn't do anything.

Funny how your quote cuts out the bit about implementation mattering, when it's the answer to your quadrupling-down. I put it back in, though.
 
Willhub is talking about the tensor cores.

Yeah, those are dedicated to AI operations; I'm just saying that has no bearing on the quality of DLSS.

If DLSS 1.0 ran on the normal processing cores, it still would have looked bad; not that it matters anymore as DLSS was overhauled for 2.0.
 
Yeah, those are dedicated to AI operations; I'm just saying that has no bearing on the quality of DLSS.

If DLSS 1.0 ran on the normal processing cores, it still would have looked bad; not that it matters anymore as DLSS was overhauled for 2.0.

Right, but this argy-bargy is about whether having those tensor cores is or isn't a solid argument that the end result will be better than what AMD hasn't released yet.
 
Messy argument.

Tensor cores aren't just 'AI' cores; they are designed to accelerate deep learning, thanks to being very good at processing matrices. This also means they happen to be ideal for... Deep Learning Super Sampling. Yeah, you could do it without tensor cores, but not without a MASSIVE hit in performance, and probably IQ too, if you wanted any chance of making it work of course.

AMD does things differently, of course. The technical details are out there for people to read, but as far as the end result goes, DLSS 3.0 doesn't exist outside of any testing environment (and if it does, there's zero news on it) and AMD super res hasn't been seen by any member of the public yet. So... I dunno why people are even bothering with that argument. I will say though, expect the 'up to 100% faster' rumours swirling around AMD's solution to be complete balls, as I have every expectation that that 100% faster will come with one slight caveat... it won't look as good.
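To make the matrix point concrete, here's a tiny sketch; the sizes are arbitrary and it's nowhere near a real model, it just shows the shape of the work:

Code:
# Minimal sketch of why matrix hardware suits a neural upscaler: one network
# layer is essentially a big matrix multiply over per-pixel features.
# Sizes and values are arbitrary; this is the shape of the work, not a real model.
import numpy as np

pixels   = 1920 * 1080   # one feature vector per low-res pixel
in_feat  = 8             # assumed inputs per pixel (colour, motion, depth, ...)
out_feat = 16            # assumed outputs of the layer

features = np.random.rand(pixels, in_feat).astype(np.float32)
weights  = np.random.rand(in_feat, out_feat).astype(np.float32)

# One layer = one large multiply-accumulate job, which is exactly what
# tensor-core-style units are built to chew through.
layer_out = features @ weights

flops = 2 * pixels * in_feat * out_feat
print(layer_out.shape, f"~{flops / 1e9:.1f} GFLOP for this single layer")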
 
May as well say that all post-processing options, blurring/softening images for potato resolutions, are cheating too.
Cheating and post-processing are also what we get playing games on our monitors, until we have retina implants that let us play at the eye's native resolution of 576 megapixels. :rolleyes:
 