But you can do that. You won't get it perfect, since ultimately you are guessing, but across millions of pixels the guesses should improve the image on average.
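To make the "guessing" concrete, here's a minimal sketch of how inference-based upscaling works in principle. This is ESPCN-style sub-pixel upscaling in PyTorch, not NVIDIA's actual DLSS network (whose architecture isn't public); `TinyUpscaler` and the frame dimensions are illustrative. The point is that every output pixel is a learned prediction rather than an interpolation, and a trained network's per-pixel guesses beat naive interpolation on average:

```python
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    """ESPCN-style upscaler: predicts scale*scale output pixels per input pixel."""
    def __init__(self, scale=2, channels=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, 5, padding=2), nn.ReLU(),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            # Final layer emits scale^2 guessed values per channel per input pixel
            nn.Conv2d(32, channels * scale**2, 3, padding=1),
        )
        # PixelShuffle rearranges those channels into a higher-resolution image
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x):
        return self.shuffle(self.body(x))

# Render at 540p, guess your way to 1080p: ~2 million predicted pixels per frame
frame = torch.rand(1, 3, 540, 960)
print(TinyUpscaler()(frame).shape)  # torch.Size([1, 3, 1080, 1920])
```

Each individual pixel prediction can be wrong, but at 2M pixels per frame the errors wash out statistically, which is the whole bet behind this approach.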
The key point that annoys me is that you do all of this through lots of tensor cores. You might as well have replaced those tensor cores with rasterisation cores and drawn the image correctly with those additional resources.
DLSS is an afterthought imo. They built Turing with ray tracing in mind, and are now coming up with other use cases for the tensor cores, forgetting that ray tracing is a gimmick to begin with.