
AMD's FSR3 possibly next month?

Suspended
Joined
17 Mar 2012
Posts
48,333
Location
ARC-L1, Stanton System
Another Marketing Disaster.

I think AMD's involvement with Bethesda came late and was more Bethesda or Microsoft initiated. The one thing AMD are really good at in software is game engine optimisation, and Bethesda needed that, but they wouldn't have realised it until they started proof testing the game, which happens late in development.

FSR 2 (note it's not even FSR 2.2) looks like a late, poorly executed, last-minute stitch-in. When it's done well, and done by AMD, it's just as good as DLSS, but FSR 2 in Starfield is bad; it's like someone installed ReShade on top of it.
 
Soldato
Joined
22 May 2010
Posts
12,229
Location
Minibotpc
Starfield could have been the poster child for FSR 3, given how demanding it is. AMD really should have made sure it was ready for release at the same time as Starfield; I can't help but feel AMD missed an open goal here.

That ship has sailed now unfortunately. Modders are the way forward until Beth fixes it.
 
Associate
Joined
3 May 2021
Posts
1,232
Location
Italy
You can't compare audio streams to visual streams. You can't reconstruct audio layers the same way you can with a predictive visual model, like an image which looks identical to or better than the native version. Plus, with compressed audio at a certain bitrate, the human ear is incapable of telling the difference anyway, whereas image reconstruction is only limited by the visual acuity of each individual person.
I hope you aren't arguing that audio reconstruction is computationally more complex than video; if you do, I'll drop the discussion, because it would be pointless to continue.
Also, you make it sound as though everyone hears the same way (ha!) while having different visual sensitivity (with which, of course, I agree), so that's another problematic point.

Predictive models are not magic; they are basically applied statistics, and the more computationally complex the problem, the harder it is to reconstruct.
What upscalers achieve is basically a more visually pleasant distortion than poorly implemented anti-aliasing; however, this says more about the poor original implementation than about real upscaler effectiveness. The only thing mathematically better than native is supersampling (i.e. rendering at a higher resolution and downscaling to your native one); everything else is basically the video equivalent of Bose speakers, which distort audio into something many people find "concert-like" but which isn't the real thing.
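
(For what it's worth, a minimal sketch of what supersampling means in practice, using Pillow; the filenames and the 2x factor are illustrative assumptions, not any engine's actual pipeline:)

```python
# Minimal SSAA sketch: take a frame rendered at 2x the target
# resolution and average it down to native. The box filter is the
# textbook "average the sub-pixels" downsample.
from PIL import Image

NATIVE = (1920, 1080)   # target "native" resolution (assumption)
SCALE = 2               # supersampling factor (assumption)

hi_res = Image.open("frame_2x.png")   # placeholder for a 2x render
assert hi_res.size == (NATIVE[0] * SCALE, NATIVE[1] * SCALE)

native = hi_res.resize(NATIVE, Image.Resampling.BOX)
native.save("frame_ssaa.png")
```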

My opinion is that the industry made a collective disaster of things by choosing to push higher resolution instead of HDR; upscalers are basically GPU manufacturers admitting their inability to make decently priced 4K GPUs (especially with RT involved!) while giving developers yet another excuse to skimp on optimization.

Remember that most users have performance between a 1060 and a 3060 and play at 1080p; anything else is a small minority, with 4K being tiny.
 

mrk

Man of Honour
Joined
18 Oct 2002
Posts
101,012
Location
South Coast
Another Marketing Disaster.



Well, you can compare it to visual streams. I am interested in photography. I could go and take a picture of a complex scene with a DSLR or mirrorless camera at 24MP. This is an image formed by light reflecting off the subject onto the sensor. You could take a picture with film and get the same image.

Then I could take the same scene at 6MP and use some machine learning algorithm to reconstruct it at 24MP. Are you going to get better than a native version of a complex scene? No, you will be getting a "version" of the image which is an approximation of it. You can always do a simple subtractive comparison to see if they are exactly the same.
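
(A minimal sketch of that subtractive comparison, assuming numpy and Pillow; the filenames are placeholders:)

```python
# Subtract the reconstructed image from the native one and inspect
# the residual: it is zero everywhere only if the two are identical.
import numpy as np
from PIL import Image

native = np.asarray(Image.open("native_24mp.png"), dtype=np.int16)
upscaled = np.asarray(Image.open("upscaled_24mp.png"), dtype=np.int16)

diff = np.abs(native - upscaled)      # per-pixel absolute difference
print("identical:", not diff.any())   # True only if every pixel matches
print("mean abs error:", diff.mean())

# Visualise where the reconstruction deviates from the native capture.
Image.fromarray(diff.astype(np.uint8)).save("difference.png")
```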

Even with modern digital sensors, the colour is predictive (colour filter arrays). In the scientific imaging I worked in years ago, we would rather use full colour filters, because the prediction causes issues (even most space probes use colour filters).

The same goes if you took 4K video, which has random movements of people, etc. in it. You could do reconstruction there, and even frame insertion, but machine algorithms will still have to predict what is happening, and they are not so great at random movement. I remember friends working on machine learning years ago; there are still fundamental limitations to what you can do.

It's getting better, but again, it's always just a good estimate.
You've just highlighted my point.

Reconstructing a 24MP image from a 6MP source won't result in something better than a native 24MP image; at best it will be very close, but not quite up to scrutiny.

That's the equivalent of enabling DLSS/FSR/XeSS at 1080p output, which works from a much lower internal resolution to produce a 1080p reconstructed image. There isn't enough pixel data for the AI to reconstruct a meaningfully detailed image that is as good as or better than native.

With a game, though, there is plenty of data for upscaling to work extremely well if the internal resolution is sufficiently high; 1440p output is the practical minimum for this generation of upscalers, and at 4K it's a no-brainer because there's vastly more pixel data to reconstruct from. Again, side-by-side examples prove this in plenty of games.
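
(To put rough numbers on "much lower internal resolution", a quick sketch using the per-axis scale factors commonly published for DLSS-style quality modes; the exact ratios vary by game and upscaler, so treat them as illustrative:)

```python
# Internal render resolution per quality mode at common output
# resolutions, using typical published per-axis scale factors.
MODES = {"Quality": 0.667, "Balanced": 0.58,
         "Performance": 0.50, "Ultra Performance": 0.333}

for out_w, out_h in [(1920, 1080), (2560, 1440), (3840, 2160)]:
    for mode, s in MODES.items():
        print(f"{out_w}x{out_h} {mode:17s} -> "
              f"{round(out_w * s)}x{round(out_h * s)} internal")

# 1080p Performance renders internally at just 960x540, while 4K
# Quality still starts from ~2561x1440 - hence the difference in how
# much data the reconstruction has to work with.
```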

And for all of that to be clearly evident, the developer has to put time into the upscaler implementation as well, so that there are no shimmering reflections or whatever. Ratchet & Clank is a prime example of upscaling done right: the image output is sharp and full of amazing detail using DLSS even in Balanced mode, just as good as native resolution at 1440p or above. Plenty of us have posted screenshots in the relevant threads, so no need to repeat them here; it's a back-and-forth that has been done to death and settled in favour of upscaling.

Edit*
Also, at 1080p the game is much more CPU intensive, so more people are inclined to enable upscaling at this resolution, resulting in a catch-22: accept worse image quality/stability to gain fps, or keep native 1080p and get worse fps anyway because of the CPU-bound nature of this resolution. Most gamers out there are still at 1080p, and as such more than likely aren't on higher-end CPUs either (the Steam survey shows this). I agree the industry hasn't gone about this the right way to cater to the masses. You either have high-end hardware and benefit from a higher baseline fps, which is then even higher with upscaling, or you have lower-end hardware and enable upscaling to offload CPU time and gain fps, but at the expense of image stability because the baseline resolution is 1080p. Nobody really wins below the high-end segment, which as noted is tiny.
 
Last edited:
Associate
Joined
5 Aug 2023
Posts
728
Location
Earth
It's baked in now for both; you get fewer shaders and more image downgrading to make up for it.

Because, like sheeple, we affirm everything Nvidia do in downgrading the quality of our GPUs, and then criticise AMD for not being as good as Nvidia at it.

We should have said no to DLSS, loudly and clearly, the moment it appeared; instead we criticised AMD for not having it. I knew at the time this would lead to slower, more expensive GPUs, and I warned of it.

Now here we are: I have a first-gen DLSS card, and for the same $500 it cost me, I get 20% more 'native' performance 4 years later.

Yep, it's a scam. You could see it coming a mile off, but you've got people evangelising for it, against their own interests...

There was a time when people naturally distrusted big business and saw it as their enemy, because it's obvious that, in the end, your best interests conflict with what's best for their balance sheet. But now people act like these tech corps are their fwend.
 
Last edited:
Soldato
Joined
9 Nov 2009
Posts
24,918
Location
Planet Earth
lol, I wouldn't go that far but it's certainly a missed opportunity.
I think AMD's involvement with Bethesda came late and was more Bethesda or Microsoft initiated. The one thing AMD are really good at in software is game engine optimisation, and Bethesda needed that, but they wouldn't have realised it until they started proof testing the game, which happens late in development.

FSR 2 (note it's not even FSR 2.2) looks like a late, poorly executed, last-minute stitch-in. When it's done well, and done by AMD, it's just as good as DLSS, but FSR 2 in Starfield is bad; it's like someone installed ReShade on top of it.

It is, because it allowed Nvidia marketing to get out and make Starfield all about them, like they did with HL2.

I simply don't understand how, after 20 years of Nvidia doing this, AMD marketing hasn't a clue how they work.

I hope you aren't arguing that audio reconstruction is computationally more complex than video; if you do, I'll drop the discussion, because it would be pointless to continue.
Also, you make it sound as though everyone hears the same way (ha!) while having different visual sensitivity (with which, of course, I agree), so that's another problematic point.

Predictive models are not magic; they are basically applied statistics, and the more computationally complex the problem, the harder it is to reconstruct.
What upscalers achieve is basically a more visually pleasant distortion than poorly implemented anti-aliasing; however, this says more about the poor original implementation than about real upscaler effectiveness. The only thing mathematically better than native is supersampling (i.e. rendering at a higher resolution and downscaling to your native one); everything else is basically the video equivalent of Bose speakers, which distort audio into something many people find "concert-like" but which isn't the real thing.

My opinion is that the industry made a collective disaster of things by choosing to push higher resolution instead of HDR; upscalers are basically GPU manufacturers admitting their inability to make decently priced 4K GPUs (especially with RT involved!) while giving developers yet another excuse to skimp on optimization.

Remember that most users have performance between a 1060 and a 3060 and play at 1080p; anything else is a small minority, with 4K being tiny.

We have poor AA because of poor generational improvements in mainstream dGPUs. Many of us who are more mainstream gamers have noticed this.

So PC games brought over console-orientated AA methods, etc.

The whole "better than native" narrative was spun from DLSS 1, FFS, so the Turing sidegrade generation could be sold as a performance upgrade.

They are trying to normalise upscaling as the same as native rendering, and frame generation as the same as natively rendered frames, so they can sell you less hardware for more money.

Is it any wonder, when we have trash like the RTX4060TI/RTX4060/RX7600? Three years after the consoles came out, huge swathes of the Steam top 10 still aren't beating economy console hardware.
You've just highlighted my point.

Reconstructing a 24MP image from a 6MP source won't result in something better than a native 24MP image; at best it will be very close, but not quite up to scrutiny.

That's the equivalent of enabling DLSS/FSR/XeSS at 1080p output, which works from a much lower internal resolution to produce a 1080p reconstructed image. There isn't enough pixel data for the AI to reconstruct a meaningfully detailed image that is as good as or better than native.

With a game, though, there is plenty of data for upscaling to work extremely well if the internal resolution is sufficiently high; 1440p output is the practical minimum for this generation of upscalers, and at 4K it's a no-brainer because there's vastly more pixel data to reconstruct from. Again, side-by-side examples prove this in plenty of games.

And for all of that to be clearly evident, the developer has to put time into the upscaler implementation as well, so that there are no shimmering reflections or whatever. Ratchet & Clank is a prime example of upscaling done right: the image output is sharp and full of amazing detail using DLSS even in Balanced mode, just as good as native resolution at 1440p or above. Plenty of us have posted screenshots in the relevant threads, so no need to repeat them here; it's a back-and-forth that has been done to death and settled in favour of upscaling.

It won't matter even if you have a 12MP image and upscale to 24MP. The reality is that it would not be the same as native, because you can't recreate more detail than the sensor and lens could capture. The only way to get more detail would be to use a better sensor and lens combination and take the picture again. Can it make a good estimate? Sure, but remember marketing will always fish out the best output and present it at lowish resolution so you can't see the slight issues. Plus they always use hand-tailored examples on one image, not the general-purpose upscalers we get (which won't be as good).

I worked in technical imaging, and with people in machine learning, many years ago. Even with training, the algorithm's output is still an estimation of what the "native" output would be. In this case the native output is what the game engine produces. Do a subtractive per-image operation and you will see it is different.

Plus, talking about sharpness: it isn't really a technical term, although most of us use it (me too); sharpening is an effect. It is a visual trick exploiting oddities of human vision, whereby we perceive steep contrast drop-offs as detail. It uses edge detection to find edges and applies a steep contrast gradient across them.
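
(A minimal sketch of that trick, via the classic unsharp mask in Pillow; the filename and the radius/percent/threshold values are illustrative:)

```python
# Unsharp mask: blur a copy to isolate edges, then boost the contrast
# across them. No new detail is created; existing pixel values are
# redistributed so edges get a steeper contrast gradient, which the
# eye reads as "sharper".
from PIL import Image, ImageFilter

img = Image.open("photo.png")  # placeholder filename

sharpened = img.filter(ImageFilter.UnsharpMask(
    radius=2,       # size of the blur used to find edges
    percent=150,    # how strongly edge contrast is boosted
    threshold=3))   # minimum brightness step that counts as an edge
sharpened.save("photo_sharpened.png")
```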

This is why smartphone images can look "better" than those from a medium format digital camera: they apply a lot of contrast-based effects like sharpening to appear pleasing, but close-up detail is lacking. The medium format image is the more accurate and detailed one, and for commercial photography you want that.

The big issue is not that upscaling exists - who wouldn't want more options? It's the marketing which annoys me.

The marketing can then spin the RTX4060TI as a next-generation improvement, and if I want something better than what I have, I have to spend even more.
 
Last edited:
Suspended
Joined
17 Mar 2012
Posts
48,333
Location
ARC-L1, Stanton System
Yep, it's a scam. You could see it coming a mile off, but you've got people evangelising for it, against their own interests...

There was a time when people naturally distrusted big business and saw it as their natural enemy, because it's obvious that, in the end, your best interests conflict with what's best for their balance sheet. But now people act like these tech corps are their fwend.

That's so frustrating; at this point I'm so fed up with it that I'm on the accelerationist side of it. Make the 5080 $1500 for a 20% gain... at what point are people going to wake up and realise "I've been such an idiot"?
 
Soldato
Joined
25 Sep 2009
Posts
9,723
Location
Billericay, UK
I think AMD's involvement with Bethesda came late and was more Bethesda or Microsoft initiated. The one thing AMD are really good at in software is game engine optimisation, and Bethesda needed that, but they wouldn't have realised it until they started proof testing the game, which happens late in development.

FSR 2 (note it's not even FSR 2.2) looks like a late, poorly executed, last-minute stitch-in. When it's done well, and done by AMD, it's just as good as DLSS, but FSR 2 in Starfield is bad; it's like someone installed ReShade on top of it.
I agree; looking at the screenshots, the FSR implementation looks pretty bad, especially at 1080p. The kicker, though, is that you don't even get that much of a performance uplift (see HUB's recent benchmarks).
 
Suspended
Joined
17 Mar 2012
Posts
48,333
Location
ARC-L1, Stanton System
It is, because it allowed Nvidia marketing to get out and make Starfield all about them, like they did with HL2.

I simply don't understand how, after 20 years of Nvidia doing this, AMD marketing hasn't a clue how they work.

I just want them to give me something compelling, and then I'll give them my money in exchange.

I don't want them to be skilled at convincing me of it. I'm a free thinker, capable of constructing my own reasoning; I don't need them to do that for me. DLSS is dog ####!

AMD know exactly what Nvidia are doing, but what can they actually do about it? Nvidia seed the tech press with BS about AMD doing bad things, and the press run with it like the useful idiots they are.

All AMD can do is not give them any oxygen and ignore them, which they did.
The problem is the tech press being useful idiots.
 
Soldato
Joined
6 Aug 2009
Posts
7,091
I agree; looking at the screenshots, the FSR implementation looks pretty bad, especially at 1080p. The kicker, though, is that you don't even get that much of a performance uplift (see HUB's recent benchmarks).
It seems the real issue is that Starfield's engine and optimisation are rubbish.
 
Soldato
Joined
9 Nov 2009
Posts
24,918
Location
Planet Earth
Honestly, they can sell it as better than reality for all I care, when I come to think about it. The reason it annoys me so much is that they use it to upsell rubbish. Utter trash like the RTX4060TI/RTX4060/RX7600 is being sold for much more than it deserves (especially the Nvidia cards), because they are using upscaling to stand in for actual generational progress. The RTX4070 should be the RTX4060TI, the RX7800XT should be the RX7700XT, and so on.

Because of this, any of us who want a noticeable upgrade are spending more and more and more.

I just want them to give me something compelling, and then I'll give them my money in exchange.

I don't want them to be skilled at convincing me of it. I'm a free thinker, capable of constructing my own reasoning; I don't need them to do that for me. DLSS is dog ####!

AMD know exactly what Nvidia are doing, but what can they actually do about it? Nvidia seed the tech press with BS about AMD doing bad things, and the press run with it like the useful idiots they are.

All AMD can do is not give them any oxygen and ignore them, which they did.
The problem is the tech press being useful idiots.

AMD can do plenty to get ahead of the narratives and be proactive. 20 years of PR have shown Nvidia marketing to be not only excellent but also very proactive in what it does. Nvidia understood the power of social media/internet narratives 20 years ago. Lots of the negative memes about AMD/ATI came out of social media narratives which might not have been entirely true, but they stuck. Even the tech press is the way it is because Nvidia made sure it set the rules, and they will use every trick to win.

There is zero excuse for AMD as a company not to be on the ball here. This is their job. If people on a forum can see these things coming a mile away, how can they not be aware? It's costing them money.
 
Soldato
Joined
25 Sep 2009
Posts
9,723
Location
Billericay, UK
It seems the real issue is that Starfield's engine and optimisation are rubbish.
I would love to know what's going on with the engine. There's no ray tracing or silly amounts of tessellation going on, and the textures have clearly been configured to play nicely with 8GB cards, yet the game is more demanding than the recent Harry Potter game, Cyberpunk 2077 and The Last of Us. I wonder if the engine is using any sort of primitive shaders/mesh shaders (so the card doesn't render anything the player can't see) or asynchronous compute? I know the shadows are broken and can really hurt performance, but that can't be the only reason.
 

mrk

Man of Honour
Joined
18 Oct 2002
Posts
101,012
Location
South Coast
It is, because it allowed Nvidia marketing to get out and make Starfield all about them, like they did with HL2.

I simply don't understand how, after 20 years of Nvidia doing this, AMD marketing hasn't a clue how they work.



We have poor AA because of poor generational improvements in mainstream dGPUs. Many of us who are more mainstream gamers have noticed this.

So PC games brought over console-orientated AA methods, etc.

The whole "better than native" narrative was spun from DLSS 1, FFS, so the Turing sidegrade generation could be sold as a performance upgrade.

They are trying to normalise upscaling as the same as native rendering, and frame generation as the same as natively rendered frames, so they can sell you less hardware for more money.

Is it any wonder, when we have trash like the RTX4060TI/RTX4060/RX7600? Three years after the consoles came out, huge swathes of the Steam top 10 still aren't beating economy console hardware.


It won't matter even if you have a 12MP image and upscale to 24MP. The reality is that it would not be the same as native, because you can't recreate more detail than the sensor and lens could capture. The only way to get more detail would be to use a better sensor and lens combination and take the picture again. Can it make a good estimate? Sure, but remember marketing will always fish out the best output and present it at lowish resolution so you can't see the slight issues. Plus they always use hand-tailored examples on one image, not the general-purpose upscalers we get (which won't be as good).

I worked in technical imaging, and with people in machine learning, many years ago. Even with training, the algorithm's output is still an estimation of what the "native" output would be. In this case the native output is what the game engine produces. Do a subtractive per-image operation and you will see it is different.

Plus, talking about sharpness: it isn't really a technical term, although most of us use it (me too); sharpening is an effect. It is a visual trick exploiting oddities of human vision, whereby we perceive steep contrast drop-offs as detail. It uses edge detection to find edges and applies a steep contrast gradient across them.

This is why smartphone images can look "better" than those from a medium format digital camera: they apply a lot of contrast-based effects like sharpening to appear pleasing, but close-up detail is lacking. The medium format image is the more accurate and detailed one, and for commercial photography you want that.

The big issue is not that upscaling exists - who wouldn't want more options? It's the marketing which annoys me.

The marketing can then spin the RTX4060TI as a next-generation improvement, and if I want something better than what I have, I have to spend even more.
I don't have any 12MP images to hand, but I do have 24MP or above. Here's an example of AI upscaling from Lightroom using the GPU-accelerated Super Resolution feature. This particular photo was taken with one of the sharpest standard wide prime lenses Canon have ever produced, with high resolving power and edge-to-edge sharpness, on an equally high quality 30MP sensor (5D4):

26MP source (shot in RAW and output to 100% JPEG): https://robbiekhan.co.uk/root/temp/2023.08.15_1428-36_00241.jpg (13MB)
104MP upscaled Super Resolution: https://robbiekhan.co.uk/root/temp/2023.08.15_1428-36_00241-Enhanced-SR.jpg (43MB)

Even side by side, both share the same technical flaws observable at 200% zoom, so the reconstruction has retained some of those; but, using neighbouring pixels, it has enhanced the 104MP output to be just as detailed, if not more so, in the main areas of focus, like the blazer jacket, where individual threads can be seen in better detail, etc.

The bottom line is that AI-enhanced upscaling works. It never used to, but it does now, and things are all the better for it, whether in a game, a photograph or whatever else.
 
Last edited:
Soldato
Joined
6 Aug 2009
Posts
7,091
I would love to know what's going on with the engine. There's no ray tracing or silly amounts of tessellation going on, and the textures have clearly been configured to play nicely with 8GB cards, yet the game is more demanding than the recent Harry Potter game, Cyberpunk 2077 and The Last of Us. I wonder if the engine is using any sort of primitive shaders/mesh shaders (so the card doesn't render anything the player can't see) or asynchronous compute? I know the shadows are broken and can really hurt performance, but that can't be the only reason.
I saw a Buildzoid video where he speculated that it was very sensitive to memory bandwidth. Why that would be, I don't know.
 

mrk

Man of Honour
Joined
18 Oct 2002
Posts
101,012
Location
South Coast
If it's memory intensive, then surely you'd expect memory to actually be used. The game uses less than 6GB of VRAM and 7GB of system RAM, which would point to anything but a memory-intensive game? Unless it's specifically targeting a set amount of VRAM at all times and then streaming textures etc. through as fast as possible to maintain that level of VRAM use, which would make the bottleneck a software one that Bethesda could solve in a patch or something.
 
Last edited:
Soldato
Joined
9 Nov 2009
Posts
24,918
Location
Planet Earth
I don't have any 12MP images to hand, but I do have 24MP or above. Here's an example of AI upscaling from Lightroom using the Super Resolution feature. This particular photo was taken with one of the sharpest standard wide prime lenses Canon have ever produced, with high resolving power and edge-to-edge sharpness, on an equally high quality 30MP sensor (5D4):

26MP source CR2: https://robbiekhan.co.uk/root/temp/2023.08.15_1428-36_00241.jpg (13MB)
104MP upscaled Super Resolution: https://robbiekhan.co.uk/root/temp/2023.08.15_1428-36_00241-Enhanced-SR.jpg (43MB)

Even side by side, both share the same technical flaws observable at 200% zoom, so the reconstruction has retained some of those; but, using neighbouring pixels, it has enhanced the 104MP output to be just as detailed, if not more so, in the main areas of focus, like the blazer jacket, where individual threads can be seen in better detail, etc.

The bottom line is that AI-enhanced upscaling works. It never used to, but it does now, and things are all the better for it, whether in a game, a photograph or whatever else.

The issue is that it can't be better than native in the photo example I gave. All you are seeing are effects such as sharpening, which appear to make it "look better", but it's predicted detail. It always works best with regular patterns, which are easy to predict. There is no "extra detail" - it's just magnifying the detail the original image already had. This is what upscaling has done for decades.

When you use machine learning training, all you are doing is refining the upscaling algorithm with more data. It isn't magic. I knew people who worked in machine learning years ago, but marketing has made it seem like magic. It isn't.

Just look at colour artefacts on Bayer sensors: the reason you have blurring filters on top is that the colour prediction (demosaicing) algorithms can sometimes get confused. That is why you get moiré. In certain critical applications you generally won't use Bayer array sensors; you use pure black-and-white sensors with colour filters, because for such critical applications you would rather have false colour than fake colour. It's the same with AI fake bokeh: it looks nice to people, but it is technically incorrect, especially with wide lenses, which lead to another set of issues (distortion). The real effect is the result of the optical properties of wide-aperture telephoto lenses with certain internal optical arrangements, not a piddly 12mm lens on a phone.
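
(To illustrate why Bayer colour is "predictive": each photosite records only one of R/G/B, and the other two channels are interpolated from neighbours. A toy bilinear demosaic, assuming numpy and scipy; production pipelines are far more sophisticated:)

```python
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(raw):
    """raw: 2D float array in an RGGB Bayer pattern -> HxWx3 RGB estimate."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    known = np.zeros((h, w, 3))
    rgb[0::2, 0::2, 0] = raw[0::2, 0::2]; known[0::2, 0::2, 0] = 1  # red sites
    rgb[0::2, 1::2, 1] = raw[0::2, 1::2]; known[0::2, 1::2, 1] = 1  # green
    rgb[1::2, 0::2, 1] = raw[1::2, 0::2]; known[1::2, 0::2, 1] = 1  # green
    rgb[1::2, 1::2, 2] = raw[1::2, 1::2]; known[1::2, 1::2, 2] = 1  # blue sites
    kernel = np.ones((3, 3))
    for c in range(3):
        total = convolve2d(rgb[..., c], kernel, mode="same")
        count = convolve2d(known[..., c], kernel, mode="same")
        est = total / np.maximum(count, 1)            # neighbour average
        # Keep measured samples; predict only the missing two thirds.
        rgb[..., c] = np.where(known[..., c] > 0, rgb[..., c], est)
    return rgb
# Two thirds of the colour values in the output are guesses, and
# regular high-frequency patterns can fool those guesses - hence moiré.
```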

But it can't recreate more detail than the original device recorded. It's an estimation of what a higher-end device could do. And when you take a picture it's not just the sensor that matters; it's the resolving capability of the lens, on top of other factors too.

If the lens can't resolve finer detail, the upscaled image won't gain that extra detail. If you don't believe me, use the same 26MP source with a mediocre lens, then use a 102MP medium format digital camera with the highest-resolving lens on that system, and compare the upscaled 26MP image to the 102MP one.

The same goes for audio: it can't produce higher quality audio than if you re-recorded the original with better equipment, but it might make a good estimate of what that would sound like.

It's like copying someone's painting: maybe your version will look more pleasant than the original, but it's not the original.

Maybe we need to agree to disagree, but I feel we are talking about different things here.
 
Last edited:
Suspended
Joined
17 Mar 2012
Posts
48,333
Location
ARC-L1, Stanton System
Honestly, they can sell it as better than reality for all I care, when I come to think about it. The reason it annoys me so much is that they use it to upsell rubbish. Utter trash like the RTX4060TI/RTX4060/RX7600 is being sold for much more than it deserves (especially the Nvidia cards), because they are using upscaling to stand in for actual generational progress. The RTX4070 should be the RTX4060TI, the RX7800XT should be the RX7700XT, and so on.

Because of this, any of us who want a noticeable upgrade are spending more and more and more.

AMD can do plenty to get ahead of the narratives and be proactive. 20 years of PR have shown Nvidia marketing to be not only excellent but also very proactive in what it does. Nvidia understood the power of social media/internet narratives 20 years ago. Lots of the negative memes about AMD/ATI came out of social media narratives which might not have been entirely true, but they stuck. Even the tech press is the way it is because Nvidia made sure it set the rules, and they will use every trick to win.

There is zero excuse for AMD as a company not to be on the ball here. This is their job. If people on a forum can see these things coming a mile away, how can they not be aware? It's costing them money.

You're right, but then should AMD do that to Nvidia? These useful idiots have gone along with all of Nvidia's shenanigans for two decades; if AMD started behaving the same way it becomes a war, and these people already don't believe AMD have any credibility whatsoever, yet they blindly follow Nvidia.

We have to say to the tech press, "YOU don't have any credibility." HUB are already acting like the 'AMD blocking DLSS' story, which they blew up out of all proportion, was never a thing, and we will just allow them to brush it under the carpet and do it again.
 
Last edited:

mrk

Man of Honour
Joined
18 Oct 2002
Posts
101,012
Location
South Coast
All you are seeing are effects such as sharpening, which appear to make it "look better", but it's predicted detail. It always works best with regular patterns, which are easy to predict. There is no "extra detail" - it's just magnifying the detail the original image already had. This is what upscaling has done for decades.
This is not entirely accurate. The way Super Resolution works in Lightroom is similar to the approach of Topaz Labs (probably the industry leader in AI image enhancement) to reconstructing detail using AI, and if you have a RAW source image to work with, the results are at their best, as RAW retains a lot of information deep within the capture that isn't always visible at the surface.

Photoshop's Enhance Details feature is AI-powered too, but simply sharpens the image; Super Resolution in Lightroom uses AI to quadruple the pixel count (doubling each axis) and rebuild detail that would otherwise only be there if the original source were that same 104MP image taken with a higher-MP sensor. Take a look at the fine detail in the bride's dress lower in the image where the sunlight hits it: the mesh detail isn't clear in the 26MP version, but it is clear in the 104MP upscaled image, as it's been reconstructed accordingly and accurately. There is no sharpening applied here to give the illusion of better detail.

My point is this: there are many variations of these technologies, and the way they are implemented matters most. You can't take a low quality source image (DLSS at 1080p) and expect a 1080p output reconstruction of the same quality as a native 1080p image, because the internal render is far too low-res. But give the AI a suitably detailed source image and it can do much, much more, and actually generate a near-perfect image, as demonstrated in all the DLSS comparison videos posted online.

In an ideal world everything would be native and run great in games, but we don't have an ideal world. Even then we'd still need a superior AA solution at native to combat jaggies, and no in-game AA solution is really that good or efficient for today's games: always either too soft, or too wacky for temporal stability. DLSS appears to resolve the shortfalls of inefficient AA, so for that alone it's well worth implementing properly in a game; plus we all get free fps gains in the process, so what's not to like?
 
Last edited:
Soldato
Joined
6 Aug 2009
Posts
7,091
If it's memory intensive, then surely you'd expect memory to actually be used. The game uses less than 6GB of VRAM and 7GB of system RAM, which would point to anything but a memory-intensive game? Unless it's specifically targeting a set amount of VRAM at all times and then streaming textures etc. through as fast as possible to maintain that level of VRAM use, which would make the bottleneck a software one that Bethesda could solve in a patch or something.
I think that was the point: not so much the total amount of memory needed, but the bandwidth, the time taken to transfer the data presumably being less with greater bandwidth.
 