I don't mean to sound like a dick, but that's simply not true.
In fact, given that we can actually see the image quality in screenshots, and have several reviews that are in direct contradiction to your claim, I find your assertion to be quite bizarre.
While I'm not a games developer, I've spent nearly a decade as a professional app developer, several of those years focused almost exclusively on 3D graphics, and I can tell you categorically that taking a native 1440p image and reconstructing the extra detail needed to produce a result comparable to a native 4K image is a fantastic achievement.
The end result is only distinguishable from native 4K in stills, and even then you can't tell the difference in many of them.
This is something that could only be achieved, at least with the quality that we're seeing, by using a trained AI. Old upscaling algorithms are simply dreadful in comparison.
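To make the "dreadful in comparison" point concrete, here's a toy sketch of what classic, non-AI upscaling actually does. It's plain bilinear interpolation in NumPy, nothing to do with DLSS itself, and the dummy frame at the end is only there so it runs: every output pixel is a weighted blend of the four nearest source pixels, so detail that wasn't in the 1440p frame can never appear.

```python
import numpy as np

def bilinear_upscale(img, new_h, new_w):
    """Classic (non-AI) upscaling: each output pixel is a weighted average
    of the four nearest source pixels. It can only blend existing detail,
    never predict the high-frequency detail a trained network can infer."""
    h, w = img.shape[:2]
    ys = np.linspace(0, h - 1, new_h)   # fractional source row for each output row
    xs = np.linspace(0, w - 1, new_w)   # fractional source col for each output col
    y0 = np.floor(ys).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None, None]       # vertical blend weights
    wx = (xs - x0)[None, :, None]       # horizontal blend weights
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

# e.g. a dummy 1440p frame (values in [0, 1]) blown up to 4K
frame_1440p = np.random.rand(1440, 2560, 3)
frame_4k = bilinear_upscale(frame_1440p, 2160, 3840)
print(frame_4k.shape)  # (2160, 3840, 3)
```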
Here's what we know from the reviewers who've seen DLSS first-hand.
- At 4K, DLSS X1 (1440p internally) can increase performance over native 4K by anything up to 50% (see the rough pixel maths after this list).
- The detail loss between native 4K and 4K DLSS X1 is minimal and barely noticeable, but it does exist.
- In some aspects, particularly transparency, DLSS X1 4K actually looks better than native 4K with TAA.
- DLSS X2 runs internally at full 4K and has all of the AA improvements over TAA, but should still give a small performance boost because it removes the need for algorithmic, software-based AA approaches.
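For anyone wondering where "up to 50%" can come from, the back-of-the-envelope pixel maths looks like this. It's a rough sketch that assumes shading cost scales with pixel count, which it only roughly does, and the exact gain is obviously game-dependent:

```python
# Rough arithmetic behind the DLSS X1 (1440p internal) performance claim.
native_4k_pixels = 3840 * 2160        # 8,294,400 pixels shaded per frame
internal_1440p_pixels = 2560 * 1440   # 3,686,400 pixels shaded per frame

print(native_4k_pixels / internal_1440p_pixels)  # 2.25 -> 2.25x fewer pixels to shade

# The measured gain ("up to 50%") is well below 2.25x because the tensor-core
# upscale pass isn't free, and geometry, post-processing and CPU work cost
# roughly the same at either internal resolution.
```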
If you didn't use DLSS and merely dropped the resolution to 1440p, sure, you'd get the same sort of performance boost, but the picture would be hugely inferior, especially on a 4K monitor whose built-in scaler does a poor job of upscaling the image back to the panel's native resolution.
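On the monitor-scaling point: a 1440p signal on a 4K panel has to be stretched by a non-integer factor, and a dumb scaler has no good options. Here's a tiny illustration, using nearest-neighbour mapping purely as a toy (real scalers usually use bilinear or similar, which trades the wobble described below for blur):

```python
from collections import Counter

src_w, panel_w = 2560, 3840      # 1440p width vs 4K panel width
print(panel_w / src_w)           # 1.5 -> non-integer scale factor

# Nearest-neighbour mapping: which source column does each panel pixel copy?
hits = Counter(int(x * src_w / panel_w) for x in range(panel_w))
print(sorted(set(hits.values())))  # [1, 2] -> source columns alternate between
                                   # being 1px and 2px wide on the panel
```

That uneven 1px/2px pattern is why fine lines and text shimmer or look soft when a 4K screen has to display 1440p, and why rendering at 1440p but reconstructing true 4K output pixels is a different proposition entirely.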
Nothing is as simple as "turning down the detail".