The solution is to just get a 4090 and laugh at all the trivial stuff.
That's not necessary. Mine isn't a £2000 PC (omitting the GPU) and isn't bottlenecked beyond the reach of 100fps+. I'm hoping I'll never need any of the image downgrade software.
I've been waiting for FSR 3 for sooo loooong, it's unreal.
When are we going to get it? And will it be retroactive on all past AAA games?
It's baked in now for both; you get fewer shaders and more image downgrading to make up for it.
You're assuming game studios wouldn't still be releasing unoptimised beta products that need upscaling tools. Stronger GPUs rendering poorly made games at native resolution is just kicking the can down the road. Either way, companies are going to look to extract every penny they can from the customer.
Because, like sheeple, we affirm everything Nvidia does to downgrade the quality of our GPUs, and then criticise AMD for not being as good as Nvidia at it.
We should have said, very loudly and clearly, no to DLSS the moment it appeared; instead we criticised AMD for not having it. I knew at the time this would lead to slower, more expensive GPUs, and I warned of it.
Now here we are: I have a first-gen DLSS card, and for the same $500 it cost me, I get 20% more 'native' performance, 4 years later.
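For scale, a quick back-of-envelope sketch of that claim (Python; it uses only the figures quoted in the post above, same $500 price and +20% native performance over 4 years, not any measured data):

```python
# Annualized perf-per-dollar gain implied by the post above.
# Assumes the quoted figures: same $500 price, +20% native performance, 4 years.
total_gain = 1.20
years = 4
annualized = total_gain ** (1 / years) - 1
print(f"annualized perf/$ gain: {annualized:.1%}")  # ~4.7% per year
```

Under 5% a year compounded, which is the point being made about stagnating price/performance.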
Yes, I'm very surprised at anyone thinking it was a good idea. If your GPU can't run a game at native resolution, then the hardware is the issue, not the software. I can see a use case for lower-end, older GPUs where it helps them out, but not new flagship ones. My plan is to just stay behind the curve: stick with 1440p and older, reduced-price GPUs to run it, as the gaming gods intended.
"Of course this all depends on the developer implementing an upscaler properly"
There's the elephant in the room. KISS is what I say.
The consoles can only upscale from an already lower resolution, though; that's the issue. It results in a maximum possible quality level where, looked at closely, you can scrutinise image artefacts and the like (again, as shown by the pro reviewers). You can't render at native 1440p and use FSR Quality on consoles to get 60fps, for example. You have to factor in the variances at play rather than paint every criticism with the same brush.
What do you mean by downgrade, though? DLSS Quality, XeSS Quality, FSR Ultra Quality, etc. can and do yield better-than-native sharpness and detail, although this does depend on your output resolution: 1080p is not a resolution for getting good upscaler results, for example; 1440p at the minimum.
Using upscalers in performance modes is only viable at 4K, where the internal render resolution is higher, so the AI system has a higher level of detail to work with and can produce a better-than-native (or equal) reconstruction.
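To put numbers on that, here's a minimal sketch (Python; the per-axis scale factors are the commonly documented ones for DLSS/FSR 2-style upscalers and should be treated as approximations, not vendor-exact values):

```python
# Internal render resolution per upscaler mode. Per-axis scale factors are
# the commonly documented values for DLSS / FSR 2 (approximate assumptions).
MODES = {"Quality": 0.667, "Balanced": 0.58, "Performance": 0.5}
OUTPUTS = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}

for out_name, (w, h) in OUTPUTS.items():
    for mode, scale in MODES.items():
        iw, ih = round(w * scale), round(h * scale)
        print(f"{out_name} {mode:<12}: {iw}x{ih} internal ({iw * ih / 1e6:.2f} MP)")
```

Performance mode at 4K reconstructs from a 1920x1080 internal image (about 2.1 MP), while Quality mode at 1080p only has roughly 1281x720 (about 0.9 MP) to work with, which is why the same mode looks far better at higher output resolutions.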
This comparison has been done to death by the community and pros alike; DF have gone into great depth, showing side-by-side comparisons, and there is no real argument (other than from some people on forums) that upscaling can and does produce an image as good as, if not better than, native resolution. Upscaling also offers a superior form of anti-aliasing without the big performance impact of, say, MSAA, or the woeful TAA implementations built into most engines, which result in loss of sharpness and other stability issues.
Of course, this all depends on the developer implementing an upscaler properly, which not even vendor-sponsored games seem to manage a lot of the time. The latest is Starfield, where there is still some aliasing and image artefacting on water reflections, on the ground, etc. when moving the camera fast, and it is amplified if upscaling is enabled.
Bottom line is that upscaling works; it's free fps gains at no cost to image quality /if/ implemented right. The 'if' is the big part, because it requires developers to actually do some work, instead of just making the feature available on a toggle.
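As a rough illustration of where the 'free fps' comes from, a back-of-envelope sketch; it assumes frame time is dominated by resolution-dependent shading plus a fixed per-frame upscaler cost, which is an oversimplification (real frames have resolution-independent work too), and the numbers are illustrative rather than measured:

```python
# Crude fps model: shading cost scales with pixel count, the upscaler adds a
# fixed per-frame cost. Illustrative assumptions only, not measured data.
def upscaled_fps(native_fps: float, scale: float, upscaler_ms: float = 0.6) -> float:
    native_ms = 1000.0 / native_fps          # native frame time
    internal_ms = native_ms * scale * scale  # scale is per axis, cost per pixel
    return 1000.0 / (internal_ms + upscaler_ms)

# E.g. a game at 40 fps native 4K with a Quality-mode scale of ~0.667x per axis:
print(f"{upscaled_fps(40, 0.667):.0f} fps")  # ~85 fps under these assumptions
```

The gain is real, but as above, it only counts as 'free' if the reconstruction holds up visually.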
To see the absurdity of people arguing upscaling is better than native, one should simply try something similar in a simpler environment:
Would you think that MP3 + some kind of reconstruction filter would sound better than uncompressed audio?
Go ahead and try to win that battle in any kind of audiophile community, then come back and tell us the results. Mind you, this is a far simpler computational problem than AA is, so according to the "better than native" community it should be easy peasy to argue.
Starfield could have been the poster child for FSR 3, given how demanding it is. AMD really should have made sure it was ready for release at the same time as Starfield; I can't help but feel AMD missed an open goal here.
You can't compare audio streams to visual streams. You can't reconstruct audio layers the same way you can with a predictive visual model like an image, which can look identical to or better than the native version. Plus, with compressed audio at a certain bitrate, the human ear is incapable of telling the difference anyway, whereas image reconstruction is limited only by the vision capability of each individual person.