Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
Another Marketing Disaster.
Starfield could have been the poster child for FSR 3 given how demanding it is. AMD really should have made sure it was ready for release at the same time as Starfield; can't help but feel AMD missed an open goal here.
You can't compare audio streams to visual streams. You can't reconstruct audio layers the same way you can reconstruct a predictive visual model like an image that looks identical to or better than the native version. Plus, with compressed audio at a certain bitrate the human ear is incapable of telling the difference anyway, whereas image reconstruction is only limited by the visual acuity of each individual person.
Well, you can compare it to visual streams. I am interested in photography. I could go and take a picture of a complex scene with a DSLR or mirrorless camera at 24MP. This is an image formed by light reflecting off the object onto the sensor. You could take a picture with film and get the same image.
Then I could take the same scene at 6MP and use some of the machine learning algorithms to reconstruct it to 24MP. Are you going to get better than a native version of a complex scene? No, you will be getting a "version" of the image which is an approximation of it. You can always do a simple subtractive comparison to see if they are exactly the same.
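For what it's worth, that subtractive comparison is trivial to do. A minimal sketch in Python, assuming Pillow and NumPy are installed and using two hypothetical, same-sized files "native_24mp.png" and "upscaled_24mp.png":

import numpy as np
from PIL import Image

# Load the true 24MP capture and the ML-upscaled 6MP->24MP version as greyscale arrays.
native = np.asarray(Image.open("native_24mp.png").convert("L"), dtype=np.int16)
upscaled = np.asarray(Image.open("upscaled_24mp.png").convert("L"), dtype=np.int16)

diff = np.abs(native - upscaled)             # per-pixel absolute difference
print("pixel-identical:", not diff.any())    # True only if every single pixel matches
print("mean absolute error:", diff.mean())   # how far the reconstructed "version" drifts on average

# Save an exaggerated difference map so the deviations are visible by eye.
Image.fromarray(np.clip(diff * 4, 0, 255).astype(np.uint8)).save("difference_map.png")

If the upscale really were identical to native, the difference map would be pure black; in practice it never is.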
Even with modern digital sensors the colour is predictive (colour filter arrays), but in the scientific imaging I worked in years ago we would rather use full colour filters because the prediction causes issues (even most space probes use colour filters).
The same goes if you took 4K video with random movements of people, etc. in it. You could do reconstruction there, and even frame insertion, but machine learning algorithms still have to predict what is happening, and they are not so great at random movement. I remember friends working on machine learning years ago; there are still fundamental limitations to what you can do.
It's getting better, but again it's only ever an estimate.
It's baked in now for both: you get fewer shaders and more image downgrading to make up for it.
Because, like sheeple, we affirm everything Nvidia do in downgrading the quality of our GPUs, and then criticise AMD for not being as good as Nvidia at it.
We should have said no to DLSS very loudly and clearly the moment it appeared; instead we criticised AMD for not having it. I knew at the time this would lead to slower, more expensive GPUs, and I warned of it.
Now here we are: I have a first-gen DLSS card, and for the same $500 it cost me I get 20% more 'native' performance, 4 years later.
lol, I wouldn't go that far but it's certainly a missed opportunity.
I think AMD's involvement with Bethesda was late and more Bethesda or Microsoft initiated. The one thing AMD are really good at in software is game engine optimisation, and Bethesda needed that, but they wouldn't have realised it until they started proof testing the game, which happens late in development.
FSR 2 (note it's not even FSR 2.2) looks like a late, last-minute and poorly executed stitch-in. When it's done well, and done by AMD, it's just as good as DLSS; FSR 2 in Starfield is bad, it's like someone installed ReShade along with it.
I hope you aren't arguing that audio reconstruction is computationally more complex than video; if you think that, I'll drop the discussion because it would be pointless to continue.
Also, you make it sound as if everyone hears the same way (ha!) while having different visual sensitivity (to which of course I agree), so that's another problematic point.
Predictive models are not magic; they are basically applied statistics, and the more computationally complex the problem, the harder it is to reconstruct.
What upscalers achieve is basically a more visually pleasant distortion than poorly implemented anti-aliasing; however, this is more about the poor original implementation than real upscaler effectiveness. The only thing mathematically better than native is supersampling (i.e. rendering at a higher resolution and downscaling to your native one); everything else is basically the video equivalent of Bose speakers, which distort audio into something many people find "concert-like", but it's not the real thing.
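To illustrate the supersampling point, a minimal sketch in Python assuming Pillow; "frame_4k.png" is a hypothetical 3840x2160 render being displayed on a native 1920x1080 screen:

from PIL import Image

# Supersampling: render at a higher resolution, then filter down to native.
hi_res = Image.open("frame_4k.png")                      # 2x native in each axis, so ~4 rendered samples per output pixel
native_size = (1920, 1080)
downscaled = hi_res.resize(native_size, Image.LANCZOS)   # each native pixel is an average of several rendered samples
downscaled.save("frame_1080p_ssaa.png")

Every output pixel is built from real rendered samples rather than predicted ones, which is why it's the only approach that is genuinely better than plain native rendering.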
My opinion is that the industry made a collective blunder by choosing to push higher resolution instead of HDR; upscalers are basically GPU manufacturers admitting their inability to make decently priced 4K GPUs (especially with RT involved!) and giving developers yet another excuse to skimp on optimization.
Remember that most users have performance between a 1060 and a 3060 and play at 1080p; anything else is a small minority, with 4K being tiny.
You've just highlighted my point.
Reconstructing a 24MP image from a 6MP source won't result in something better than a native 24MP image; at best it will be very close, but not quite up to scrutiny.
That's the equivalent of enabling DLSS/FSR/XeSS at a 1080p output resolution, which works from a much lower internal resolution to get the 1080p reconstructed output. There isn't enough pixel data for the AI to work with to reconstruct a meaningfully detailed image that is as good as or better than native.
With a game, though, there is plenty of data for upscaling to work extremely well if the internal resolution is sufficiently high; a 1440p output is the minimum for this generation of upscalers, and at 4K it's a no-brainer because there's vastly more pixel data to reconstruct from. Again, side-by-side examples prove this in plenty of games.
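As a rough illustration of how much pixel data each mode actually starts from, here is a quick calculation; the per-axis scale factors are the commonly documented DLSS/FSR 2 ratios and may vary slightly by title:

# Approximate per-axis render scale for the usual upscaler quality modes.
modes = {"Quality": 0.667, "Balanced": 0.58, "Performance": 0.50, "Ultra Performance": 0.333}

for out_w, out_h in [(1920, 1080), (2560, 1440), (3840, 2160)]:
    for name, s in modes.items():
        w, h = round(out_w * s), round(out_h * s)
        print(f"{out_w}x{out_h} {name:<17} internal {w}x{h} ({w*h/1e6:.1f} MP)")

For example, 4K Quality works from roughly a 2560x1440 internal frame (~3.7MP), while 1080p Quality works from roughly 1280x720 (~0.9MP), which is why the reconstruction holds up so much better at higher output resolutions.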
And for all of that to be clearly evident, the developer has to put time into the upscaler implementation as well, so that there are no shimmering reflections or whatever. Ratchet & Clank is a prime example of upscaling done right: the image output is sharp and full of amazing detail using DLSS even in Balanced mode, just as good as native resolution at 1440p or above. Plenty of us have posted the screenshots in the relevant threads, so no need to repeat them here; it's a back and forth that has been done to death and proven to be sound in favour of upscaling.
Yep, it's a scam. You could see it coming a mile off, but you've got people evangelizing for it, and against their own interests...
There was a time when people naturally distrusted big business and saw it as their natural enemy, because it's obvious that, in the end, your best interests conflict with what's best for their balance sheet. But now people act like these tech corps are their fwend.
I agree, looking at the screenshots the FSR implementation looks pretty bad, especially at 1080p; the kicker though is you don't even get that much of a performance uplift (see HUB's recent benchmarks).
It is, because it allowed Nvidia marketing to get out and make Starfield all about them, like they did with HL2.
I simply don't understand how, after 20 years of Nvidia doing this, AMD marketing still hasn't a clue how they work.
It seems the real issue is Starfield's engine, and the optimisation is rubbish.
I just want them to give me something compelling, and then I'll give them my money in exchange.
I don't want them to be skilled in convincing me of it; I'm a free thinker, I'm capable of constructing my own reasoning, I don't need them to do that for me. DLSS is dog ####!
AMD know exactly what Nvidia are doing but what can they actually do about it? Nvidia seed the tech press with BS about AMD doing bad things and they run with it like the useful idiots that they are.
All AMD can do is not give them any oxygen, ignore them, which they did.
The problem is the tech press being useful idiots.
I would love to know what's going on with the engine. There's no ray tracing or silly amounts of tessellation going on, and the textures have clearly been configured so they play nicely with 8GB cards, yet the game is more demanding than the recent Harry Potter game, Cyberpunk 2077 and The Last of Us. I wonder if the engine is using any sort of Primitive Shaders/Mesh Shaders (so the card doesn't render anything the player can't see) or asynchronous compute? I know the shadows are broken and can really hurt performance, but that can't be the only reason.
We have poor AA because of poor generational improvements in mainstream dGPUs. Many of us who are more mainstream gamers have noticed this.
So PC games brought over console-orientated AA methods, etc.
The whole "better than native" narrative was spun from DLSS1 FFS,so the Turing sidegrade generation could be sold as a performance upgrade.
They are trying to normalise upscaling as the same as native rendering,frame generation the same as native generation so they can sell you less hardware for more money.
Is it no wonder when we have trash like the RTX4060TI/RTX4060/RX7600. Three years after consoles have come out and huge amounts of the Steam top 10 are not even beating economy hardware consoles.
It won't matter even if you have a 12MP image and upscale it to 24MP. The reality is that it would not be the same as native, because you can't recreate more detail than the sensor and lens can capture. The only way to recreate more detail would be to get a better sensor and lens combination and take the picture again. Can it do a good estimation? Sure - but remember the marketing will always try to fish out the best output and present it at a lowish resolution so you can't see all the slight issues. Plus, they always use hand-tailored examples on one image - not the general-purpose upscalers we get (which won't be as good).
I worked in technical imaging, and with people in machine learning, many years ago. Even with training, the algorithms still produce an estimation of what the "native" output will be. In this case the native output will be what the game engine produces. Do a subtractive per-image operation and you will see it will be different.
Plus, talking about sharpness - it isn't really a technical term, although most of us use it (me too); it's an effect. What it is, is a visual trick exploiting oddities of human vision, whereby we perceive steep high-contrast drop-offs as detail. It uses edge detection to find edges, and applies a steep contrast gradient to them.
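A minimal sketch of that kind of contrast trick, using a classic unsharp mask; assumes Pillow and NumPy, with a hypothetical "photo.jpg" as input:

import numpy as np
from PIL import Image, ImageFilter

img = Image.open("photo.jpg").convert("L")
blurred = img.filter(ImageFilter.GaussianBlur(radius=2))    # low-frequency version of the image

base = np.asarray(img, dtype=np.float32)
low = np.asarray(blurred, dtype=np.float32)
edges = base - low                                           # the difference is largest at high-contrast edges
sharpened = np.clip(base + 1.5 * edges, 0, 255)              # steepen the contrast gradient at those edges
Image.fromarray(sharpened.astype(np.uint8)).save("photo_sharpened.jpg")

Note that no new detail is created anywhere in this process; the existing edges simply get a steeper contrast step, which the eye reads as "sharper".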
This is why smartphone images can look "better" than those from a medium-format digital camera. They apply a lot of contrast-based effects like sharpening to appear pleasing, but close-up detail is lacking. The medium-format digital image is the more accurate and detailed one, and for commercial photography you want that.
The big issue is not that upscaling is there - who wouldn't want more options? It's the marketing which annoys me.
Then the marketing can spin the RTX 4060 Ti as a next-generation improvement, and if I want something better than what I have, I have to spend even more.
I saw a Buildzoid video where he speculated that it was very memory-bandwidth sensitive. Why that would be, I wouldn't know.
I don't have any 12MP images to hand but I do have 24MP or above, so here's an example of AI upscaling from Lightroom using the GPU-accelerated Super Resolution feature. This particular photo was taken with one of the sharpest standard wide prime lenses Canon have ever produced, with high resolving power and edge-to-edge sharpness, using an equally high-quality 30MP sensor (5D4):
26MP source CR2: https://robbiekhan.co.uk/root/temp/2023.08.15_1428-36_00241.jpg (13MB)
104MP upscaled Super Resolution: https://robbiekhan.co.uk/root/temp/2023.08.15_1428-36_00241-Enhanced-SR.jpg (43MB)
Even side by side, both share the same technical flaws that can be observed at a 200% zoom level, so the reconstruction has retained some of those; but using neighbouring pixels has enhanced the 104MP output to be just as detailed, if not more so, in the main areas of focus, like on his blazer jacket where individual threads can be seen in better detail, etc.
Bottom line is that AI enhanced upscaling works. It never used to work, but it does now, and things are all the better for it, whether it's in a game, a photograph or whatever else.
Honestly, they can sell it as better than reality for all I care, when I come to think about it. Why it annoys me so much is because they use it to upsell rubbish. Utter trash like the RTX 4060 Ti/RTX 4060/RX 7600 is being sold for much more than it deserves (especially the Nvidia cards) because they are using upscaling to compensate for a lack of actual generational progress. The RTX 4070 should be the RTX 4060 Ti, the RX 7800 XT should be the RX 7700 XT, and so on.
Because of this, any of us who want a noticeable upgrade are spending more and more and more.
AMD can do plenty to get ahead of the narratives and be proactive. 20 years of PR have shown Nvidia marketing to be not only excellent, but also very proactive in what it does. Nvidia understood the power of social media/internet narratives 20 years ago. Lots of the negative memes about AMD/ATI came out of social media narratives which might not have been entirely true but stuck. Even the tech press is the way it is because Nvidia made sure it set the rules, and they will use every trick to win.
There is zero excuse for AMD as a company not to be on the ball like this. This is their job. If people on a forum can see these things coming a mile away, how can they not be aware? It's costing them money.
This is not entirely accurate. The way Super Resolution works in Lightroom is similar to Topaz Labs' (probably the industry leader in AI image enhancement) approach of reconstructing detail using AI, and if you have a RAW source image to work with then the results are at their best, as RAW retains a lot of information deep within the capture that isn't always visible at the surface.
All you are seeing is effects such as sharpening which appear to make it "look better", but it's predicted detail. It always works best with regular patterns which are easy to predict. There is no "extra detail" - it is just magnifying the detail which the original image had. This is what upscaling has done for decades.
I think that was the point: not so much the total amount of memory needed but the bandwidth, the time taken to transfer the data presumably being less with greater bandwidth.
If it's memory intensive then surely you'd expect memory to actually be used, though. The game uses less than 6GB of VRAM and 7GB of system RAM, so that would point to anything but a memory-intensive nature for the game? Unless it's specifically targeting keeping a certain amount in VRAM at all times and then smashing textures etc. through as fast as possible to maintain that set level of VRAM use, which would mean the bottleneck is a software one that Bethesda could solve in a patch or something.