This was linked from AT forums, but you can see it up close in the DF video:
https://forums.anandtech.com/attachments/dlss-jpg.28992/
There are also temporal artefacts with certain particle effects, which DF said they saw, as the machine learning side finds it hard to predict pseudo-random effects.
It also blatantly uses sharpening - it looks sharper than output with FXAA/TAA because it uses local contrast enhancement, i.e., sharpening. I knew people who worked on machine-learning-based techniques for image reconstruction, and image sharpening was part of the pipeline.
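For anyone curious, the "local contrast enhancement" I mean is basically an unsharp mask: subtract a blurred copy from the image and add the scaled difference back. A minimal NumPy sketch (the box blur and the `amount` parameter are my own simplifications for illustration, not what DLSS actually does internally):

```python
import numpy as np

def box_blur(img, k=3):
    # Simple k x k box blur with edge padding (a stand-in for a proper Gaussian).
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def unsharp_mask(img, amount=0.8):
    # Local contrast enhancement: boost the difference between the image
    # and a blurred copy. Flat regions are untouched; edges get more contrast.
    blurred = box_blur(img)
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)
```

Run it on any float image in [0, 1] and you will see exactly the effect people notice with DLSS: edges "pop" without any new detail actually being added.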
In fact, if you have any interest in image editing, upscaling has existed in various forms - native images from ILCs (interchangeable-lens cameras) are generally quite soft. Yet many smartphones do some degree of interpolation and sharpening, which makes their output look better than an ILC's to the average person whilst papering over the lack of detail. In fact, prior to Nvidia etc., many smartphone SoCs, such as Apple's, have had "machine learning logic" integrated on the SoC. Image upscaling and processing is actually one of the main uses of this logic.
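By "interpolation" I mean the resampling step that runs before the sharpening: the image is resized up and the new pixels are estimated from their neighbours. A toy 2x bilinear upscale in NumPy (real phone ISPs use far more sophisticated, often learned, filters - this is just to show the idea):

```python
import numpy as np

def upscale2x_bilinear(img):
    # Naive 2x bilinear upscale of a 2D greyscale image: insert a new
    # row/column between each pair and fill it with the average of its
    # neighbours. Output is (2h-1) x (2w-1).
    h, w = img.shape
    rows = np.zeros((2 * h - 1, w))
    rows[0::2] = img                              # keep original rows
    rows[1::2] = 0.5 * (img[:-1] + img[1:])       # interpolate new rows
    out = np.zeros((2 * h - 1, 2 * w - 1))
    out[:, 0::2] = rows                           # keep original columns
    out[:, 1::2] = 0.5 * (rows[:, :-1] + rows[:, 1:])  # interpolate new columns
    return out
```

The interpolated pixels are by construction smooth averages, which is exactly why a sharpening pass usually follows - it hides the softness the interpolation introduces.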
Another use of the logic is machine-learning-assisted computer vision, i.e., pattern recognition.
That had an impact on DLSS 1.0, which was game specific - DLSS 2.0 isn't the same. It's probably being trained on specific scene types rather than on individual games per se, building on the image-reconstruction work they have been doing for a few years.