Not sure if anyone else has seen this Q&A session with Andrew Edelsten, technical director of deep learning at NVIDIA: https://www.nvidia.com/en-us/geforce/news/nvidia-dlss-your-questions-answered/
It clears up some of the questions around how DLSS works, and there's one particularly interesting point:
Q: How does DLSS work?
A: The DLSS team first extracts many aliased frames from the target game, and then for each one we generate a matching “perfect frame” using either super-sampling or accumulation rendering. These paired frames are fed to NVIDIA’s supercomputer. The supercomputer trains the DLSS model to recognize aliased inputs and generate high quality anti-aliased images that match the “perfect frame” as closely as possible. We then repeat the process, but this time we train the model to generate additional pixels rather than applying AA. This has the effect of increasing the resolution of the input. Combining both techniques enables the GPU to render the full monitor resolution at higher frame rates.
Based on that, it sounds like NVIDIA has to run each game through their 'supercomputer' to build the training dataset and produce the trained network that the tensor cores then run.
If that's the case, there's likely going to be some kind of submission process for developers to have their game 'trained'. If that proves to be true, will it have an impact on the uptake of DLSS in future titles?
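If I'm reading the answer right, the 'training' step is basically standard supervised learning on pairs of frames: an aliased low-resolution frame as input, and a super-sampled "perfect frame" as the target. Here's a rough sketch (in PyTorch) of what I imagine that loop looks like; the toy network, the random stand-in frames and the 2x scale factor are all my own guesses, not anything NVIDIA has published:

```python
# Rough sketch of per-game DLSS-style training, based purely on the Q&A description.
# Everything here (the toy network, the fake "frames", the scale factor) is made up
# for illustration -- it is not NVIDIA's actual pipeline.

import torch
import torch.nn as nn

class ToyUpscaler(nn.Module):
    """Tiny stand-in for the DLSS network: takes an aliased low-res frame and
    produces a higher-res frame, trained to match the 'perfect frame'."""
    def __init__(self, scale=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Sub-pixel upsampling: predict scale^2 * 3 channels, then rearrange them
        # into a frame that is 'scale' times larger in each dimension.
        self.to_highres = nn.Conv2d(32, 3 * scale * scale, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, lowres_frame):
        return self.shuffle(self.to_highres(self.features(lowres_frame)))

def train_for_game(frame_pairs, epochs=10):
    """frame_pairs: iterable of (aliased_lowres, perfect_highres) tensor pairs,
    i.e. the per-game dataset the 'supercomputer' would be fed."""
    model = ToyUpscaler(scale=2)
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()  # "match the perfect frame as closely as possible"

    for _ in range(epochs):
        for aliased, perfect in frame_pairs:
            optimiser.zero_grad()
            predicted = model(aliased)
            loss = loss_fn(predicted, perfect)
            loss.backward()
            optimiser.step()
    # The trained per-game weights are what would ship to the GPU / tensor cores.
    return model

# Fake "extracted frames": 8 random crops paired with 2x-size ground truth.
pairs = [(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 128, 128)) for _ in range(8)]
trained = train_for_game(pairs, epochs=1)
```

The key thing (if this guess is anywhere near right) is that the output is a set of per-game network weights, which is exactly why some kind of per-title submission process seems unavoidable.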
There's a lot of work involved in this. I didn't realise that it's all fed into their supercomputer and then pushed out to the tensor cores to tell them what to do.
Does that mean it will have to be tweaked for every game to get it right? And what about all the different configurations of people's setups?