AMD FSR 3.0 has exposed the ugly truth about most PC gamers

Native is meaningless. I agree a particular resolution is a particular resolution, and image quality is a particular image quality.

We both know that when people say DLSS brings out detail you wouldn't see in a 4k render, it is because of super sampling.

In fact SSAA 2x and 4x are exactly that, without even needing deep learning. Just give up, you failed to understand what DLSS is.
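For anyone unclear on what that means in practice, here's a minimal sketch of the SSAA idea in Python (my own toy illustration, not any vendor's implementation; `render` below is a hypothetical stand-in for the game's renderer):

```python
import numpy as np

def ssaa_downsample(hi_res_frame: np.ndarray, factor: int) -> np.ndarray:
    """Box-filter a frame rendered at `factor`x the target resolution.

    hi_res_frame: (H*factor, W*factor, 3) float array, the supersampled render.
    Returns an (H, W, 3) array where each output pixel is the average of a
    factor x factor block of real rendered samples, which is why SSAA can
    resolve detail that a single native-resolution sample per pixel would miss.
    """
    h, w, c = hi_res_frame.shape
    out_h, out_w = h // factor, w // factor
    blocks = hi_res_frame.reshape(out_h, factor, out_w, factor, c)
    return blocks.mean(axis=(1, 3))

# SSAA 2x at a 4K target: render at twice the width and height, then average
# each 2x2 block down to one output pixel. `render` is hypothetical.
# frame = ssaa_downsample(render(3840 * 2, 2160 * 2), factor=2)
```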

I think you've failed to understand what temporal upscaling is. As I posted before, DLSS, FSR2 and XeSS all use data from multiple frames to create a frame. By their very nature they will show an amalgamation of a few frames, which is why they tend to show an apparent increase in detail. The technique is not specific to DLSS, as many on here are trying to make it look.
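As a rough illustration of that accumulation (a toy sketch of the general temporal-upscaling idea, not DLSS/FSR2/XeSS code; reprojection of the history buffer via motion vectors is assumed to have happened upstream):

```python
import numpy as np

def temporal_accumulate(history: np.ndarray,
                        current: np.ndarray,
                        blend: float = 0.1) -> np.ndarray:
    """One step of a toy temporal accumulator.

    history: previous accumulated output-resolution frame, already reprojected
             to the current camera pose using motion vectors.
    current: this frame's low-resolution render, jittered by a sub-pixel
             offset and naively resampled to output resolution.
    blend:   how much of the new frame to mix in each step.

    Because every input frame is rendered with a different sub-pixel jitter,
    the running blend ends up containing samples from several frames per
    output pixel: the apparent 'extra detail' described above.
    """
    return (1.0 - blend) * history + blend * current
```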
 
It's because 99% of the time DLSS does it best; that isn't an opinion, it is a fact. Only RoboCop is the exception, an Intel-sponsored game that has better super resolution scaling than FSR or DLSS or TAA/TAU. The same now applies to Frame Gen, as AMD still has the frametime issues with FSR3.

The funniest part is that not even AMD-sponsored games with FSR can beat DLSS, as evidenced by, oh... every AMD-sponsored game that someone releases a DLSS injection mod for.
 
The funniest part is the majority of NV users are running native

Maybe some don't have access to DLSS, maybe they personally prefer native because they tried both and didn't enjoy the DLSS experience for reasons, or they don't have access to DLSS_Goggles.png
 
Based on more input experience than the few.
Sure... Meanwhile in the actual real world:


The way people think about video games is changing, too. Today, 83% of GeForce RTX 40 Series desktop gamers with RTX-capable games enable ray tracing, and 79% turn on DLSS, showcasing the widespread adoption of these revolutionary technologies.

They’re also widely adopted among prior RTX 30 Series and 20 Series owners; 56% and 43% turn on ray tracing, while 71% and 68% turn on DLSS, respectively.

And as also reported by Ars back in April:

As for DLSS, Nvidia's AI-accelerated upscaling and frame-generation tool for games that support it, Nvidia writes that 79 percent of 40 series, 71 percent of 30 series, and 68 percent of 20 series owners turned the feature on.
 
Based on what evidence? A poll, or actual evidence showing one literally is better than the other? :cry:
No one (afaik) questions that DLSS is a superior upscaling technique; people (I am one of them) are saying it's not better than native because of basic maths.

The native image is 100% accurate: every pixel is exactly where it's meant to be. If you upscale a lower-resolution image and it does not replicate the native image 100% accurately, if even one pixel differs, then it has failed to recreate the native image. Basic maths tells us it's not better.
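Just to make that '100% accurate' criterion concrete (a sketch of the argument exactly as stated, not a claim about how upscalers should actually be benchmarked):

```python
import numpy as np

def recreates_native_exactly(native: np.ndarray, upscaled: np.ndarray) -> bool:
    """True only if every pixel of the upscaled image matches the native render.

    Under this deliberately strict definition, a single differing pixel means
    the upscaler has failed to recreate the native image.
    """
    return native.shape == upscaled.shape and bool(np.array_equal(native, upscaled))
```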
Sure... Meanwhile in the actual real world:
Your evidence is Nvidia themselves; what next? Are you going to start saying GPU manufacturers' claims of performance increases are credible?

As much as you and others want to make this about Nvidia vs AMD, it's not. Need I remind you of the OP's claim that 'everyone' was moaning about upscaling but now 'everyone' is on board with it because of AMD, and how plenty of people have pointed out the flaw in that logic: that people have not changed their minds simply because of FSR. The majority of people who said upscaling is not as good as native are saying the exact same thing regardless of what's doing the upscaling.
 
He stated the majority of Nvidia users, which assumes an Nvidia user would be a fan of Nvidia features, and Nvidia themselves did the legwork in finding out how many of their users are using said features. There isn't anything that highlights an issue with the findings either; it's from April last year, and if there was an issue, someone online would have pointed it out long ago, which they have not.

It's pretty clear that upscaling as a whole was crap for a long time; things have massively changed and it is no longer crap and hasn't been for a long time. But as is typically the case with online views, the initial bad taste never leaves the mouths of those who like to be most vocal (always focusing on negativity). It's the same with games that released in a poor state but some amount of time later completed their redemption and ended up being something really good; meanwhile the haters still hate, because the original experience is all they want to keep in mind.

It's really as simple as that.
 
Nope...
You know, I wasn’t planning to publish an article like this. However, I can’t believe how people have immediately switched sides the moment AMD released FSR 3.0 Frame Generation. From comments like “I don’t want fake frames“, we went, in the blink of an eye, to “My RTX2060 lives again, FSR 3.0 doubled my performance“. And this right here exposed the ugly truth about most PC gamers.
Most PC gamers are whiny little kids with zero knowledge who pretend to know things. Now I’m not saying that all PC gamers are clueless or whiny. However, a big part of the PC community acts like that, especially today with the rapid rise of social media. Angry reactions can attract more views on both X/Twitter and YouTube, and some are trying to increase their audience this way. And, like it or not, it works.
When NVIDIA released DLSS 3, everyone was calling Frame Generation a total flop. Everyone was against those “fake frames“. Hell, that line became an industry joke. Not only that but everyone and their mother had an RTX40 series GPU. Everyone tested DLSS 3. And everyone disliked it, claiming that they were feeling the extra input latency. Because yes, on the Internet, everyone has the latest and greatest GPU. Trust me, bro.
He is quite clearly saying "Most PC gamers", "all PC gamers", "the PC community", "everyone was calling Frame Generation a total flop", "Everyone was against those “fake frames“".

things have massively changed and it is no longer crap and hasn't been for a long time.
It seems some people believe that it's better than an exact copy, that it's better than a 100% accurate recreation.
 
I know a lot will disagree with me but I just can't help but see upscaling and frame generation technologies as an excuse for both sides to get lazy on actual advancement. Why keep pushing the hardware when you can create software to "improve performance" instead?

Part of the reason they have had to resort to this is because "Moore's law is dead" is actually somewhat true; without these software "tricks", you wouldn't be getting much, if any, more performance than if they were to just focus on hardware. That, and as stated, one of the biggest reasons to look to software methods now is that it's a far more efficient process in a number of ways rather than always just trying to brute force max settings @ true 4k RT/PT etc. Without such software tricks, real time ray/path tracing would still be a decade off, probably, so it's making the impossible possible today.

And again, I'm not sure why people think making software is cheap/free and also an easy thing to do. It really isn't, as shown by the heap of **** optimised games throughout 2023 that magically became better after patches...

It's not like Nvidia and AMD are just putting in less effort/money; they both increased their R&D budgets massively for 2023:

[Attached charts: Nvidia and AMD R&D spending]
 
Part of the reason they have had to resort to this is because "Moore's law is dead" is actually somewhat true; without these software "tricks", you wouldn't be getting much, if any, more performance than if they were to just focus on hardware.

A video I watched about 3-4 years ago included a talk with a seasoned computer engineer, who said hardware would need to become obscenely powerful gen on gen to keep making impressive jumps, otherwise it would just be iterative improvements that have to rely on software tricks... and well, here we are: DLSS, Frame Generation, FSR, XeSS etc.
 
A video I watched about 3-4 years ago included a talk with a seasoned computer engineer, who said hardware would need to become obscenely powerful gen on gen to keep making impressive jumps, otherwise it would just be iterative improvements that have to rely on software tricks... and well, here we are: DLSS, Frame Generation, FSR, XeSS etc.

Yup, it's probably why their R&D has jumped considerably: they're having to find alternatives to keep the performance improvements going, otherwise no one is going to buy/upgrade to anything new.

Like I've always said before, it's the experience and end result I care about: if xyz can achieve what I deem to be good then it's good with me; if it isn't, well, I simply won't use/buy said product/feature. Most people are paying for streaming services, which are arguably software services, and buying into smartphones for their "software", so why not do the same for other products you use?
 
Yup, it's probably why their R&D has jumped considerably: they're having to find alternatives to keep the performance improvements going, otherwise no one is going to buy/upgrade to anything new.

Like I've always said before, it's the experience and end result I care about: if xyz can achieve what I deem to be good then it's good with me; if it isn't, well, I simply won't use/buy said product/feature. Most people are paying for streaming services, which are arguably software services, and buying into smartphones for their "software", so why not do the same for other products you use?

One of the things this hardware engineer said was that we need to go beyond the simple transistor and start looking towards quantum computing. For us ordinary folk that is a long way away, but that is the next step that needs to happen for BIG improvements that don't constantly need software tricks.

Fundamentally we aren't much more advanced than we were 50 years ago. Yes, we can now process 1s and 0s faster, but it's still all transistor- and binary-based, just shrunk down.
 
I know a lot will disagree with me but I just can't help but see upscaling and frame generation technologies as an excuse for both sides to get lazy on actual advancement. Why keep pushing the hardware when you can create software to "improve performance" instead?
I don't like seeing the term 'lazy' thrown around when it comes to game development because the devs themselves are always under constant pressure from their publishers in the form of budgets and deadlines - it *is* however becoming clear that upscaling is increasingly being used as a crutch to push the performance of console games into 'playable' (i.e. 30fps) territory.

Recent releases on the Series S have been as low as 540p internal resolution (not far shy of what the PS2 was putting out) and there's several examples of PS5 and Series X games that run as low as 720p (before upscaling) in their performance modes in an attempt to hit 60fps. The end result is a smeary image that lacks the clarity of consoles just a generation or two earlier.
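To put those internal resolutions into perspective, here's some rough pixel-count arithmetic (my own figures, assuming 960x540 and 1280x720 internal resolutions and a 3840x2160 output target):

```python
# Rough pixel-count arithmetic; resolutions are assumptions, not measured data.
resolutions = {
    "540p internal": (960, 540),
    "720p internal": (1280, 720),
    "4K output":     (3840, 2160),
}
pixels = {name: w * h for name, (w, h) in resolutions.items()}

for name, count in pixels.items():
    print(f"{name:14s} {count:>10,} pixels")

# Share of a 4K frame that is actually rendered: the upscaler has to
# reconstruct the rest from the current frame plus temporal history.
print(pixels["540p internal"] / pixels["4K output"])  # 0.0625  (~6%)
print(pixels["720p internal"] / pixels["4K output"])  # ~0.111  (~11%)
```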

Sadly, there really is no solution to this, other than to not buy these games because if you do, you're telling the publishers that image quality doesn't matter to you.

I suspect it won't be long before we see our first console game using FSR frame generation for its 'performance mode' whilst still actually running at 20-30fps - and if the game sells well enough, other devs will soon follow suit.
 
I don't like seeing the term 'lazy' thrown around when it comes to game development because the devs themselves are always under constant pressure from their publishers in the form of budgets and deadlines - it *is* however becoming clear that upscaling is increasingly being used as a crutch to push the performance of console games into 'playable' (i.e. 30fps) territory.

Recent releases on the Series S have been as low as 540p internal resolution (not far shy of what the PS2 was putting out) and there's several examples of PS5 and Series X games that run as low as 720p (before upscaling) in their performance modes in an attempt to hit 60fps. The end result is a smeary image that lacks the clarity of consoles just a generation or two earlier.

Sadly, there really is no solution to this, other than to not buy these games because if you do, you're telling the publishers that image quality doesn't matter to you.

I suspect it won't be long before we see our first console game using FSR frame generation for its 'performance mode' whilst still actually running at 20-30fps - and if the game sells well enough, other devs will soon follow suit.

Yup agree on the whole except for image quality bit ;) :p

Development in general across all industries is pretty **** now tbh, especially with AI in the picture. It's always about getting to market faster each time, with no thought/care for building a well QC'd product; the stakeholders just want to get it over the line and out there with no care for what they are delivering (obviously there are a few exceptions, but it's becoming rare in the day of "on demand"), and it's very much a case of dealing with the consequences after. There are practices which seek to improve overall quality and speed up testing and delivery, generally what DevOps engineers aim to build out, but the problem is this is often poorly implemented in most companies. And if the software engineers complain that such "processes/systems" are slowing them down in delivering xyz feature (because it's not passing tests, which means they have to go and spend more time resolving the issues before they can deliver xyz feature), stakeholders will want the processes that were put in place removed in order to meet the deadline, without any care for why such processes were put there in the first place.

Optimisation/quality is always the last thing to get looked at; I can't see this ever changing until there is a shift in how companies operate and in what they value.
 
I know a lot will disagree with me but I just can't help but see upscaling and frame generation technologies as an excuse for both sides to get lazy on actual advancement. Why keep pushing the hardware when you can create software to "improve performance" instead?

I don't see that at all. What I see is that keeping the huge leaps of performance going every gen is getting harder and more expensive.

If there is a way to achieve improvements via software also, why leave that on the table?

If AMD came up with a software solution that gave a 200% improvement to cards, and you needed LtMatt and mrk to use their image comparison tools and do pixel peeping to see the difference, would you just ignore it and say AMD made it as an excuse and they are getting lazy?

I would be congratulating them.
 
Like I've always said before, it's the experience and end result I care about: if xyz can achieve what I deem to be good then it's good with me; if it isn't, well, I simply won't use/buy said product/feature.

Exactly. Who cares how. End result is what matters. If I personally like the end result I don't care how it is done.

That said, I really do not like the idea of having to have a live Internet connection for ai to do some of the work elsewhere. Can see Nvidia releasing something like this next. I want everything done locally on my pc, otherwise might as well get a virtual gpu like you did with a 4080 Nexus :p
 
Exactly. Who cares how. End result is what matters. If I personally like the end result I don't care how it is done.

That said, I really do not like the idea of having to have a live Internet connection for ai to do some of the work elsewhere. Can see Nvidia releasing something like this next. I want everything done locally on my pc, otherwise might as well get a virtual gpu like you did with a 4080 Nexus :p
Don't worry, it will still be done locally, as long as you have a nvidia gpu :p

I do recall seeing something about Nvidia GPUs and hardware acceleration being used for Windows Copilot though, so it will probably become a thing in games too for any AI-specific workloads.
 