This might be a bit pedantic of me, but calling DLSS an algorithm and comparing it to traditional AA techniques (which are all purely algorithmic) isn't quite right; I know what you mean, but it's not really the correct terminology.
DLSS actually runs a trained AI over images; in this case, each image is a single frame of a given game.
1. A game is rendered at ultra-high resolution on NVIDIA's supercomputers, and is also rendered at a traditional resolution.
2. The AI then scans the pixels of each frame/image, comparing the high- and low-resolution versions.
3. At the end of each scan of each frame, the AI receives feedback on the difference between the pixels it has generated and the pixels in the ultra-high-resolution frame.
4. These operations are run over and over, with the AI adjusting itself on every pass, until its output eventually gets close to the high-res image.
5. The longer the AI is trained, the better the resulting output.
6. The trained model is then saved and shipped to users, who end up with an AI that is fully trained to make each game look as close to the ultra-high-resolution version as possible.
7. When users run the game with DLSS on, the AI is engaged and scans the pixels of every frame, using all of its precomputed training knowledge to modify the resulting output.
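To make the training part of those steps concrete, here is a deliberately tiny toy sketch, not NVIDIA's actual DLSS pipeline: the "AI" is just one correction value per pixel of a 1D "frame", and the feedback loop nudges those values toward the high-res reference render. All names, values, and the gradient-descent update rule are my own illustrative assumptions.

```python
# Toy version of the training loop: compare generated pixels against an
# ultra-high-resolution reference, measure the error, adjust, repeat.
# The data and model here are stand-ins, not anything NVIDIA ships.

def mse(a, b):
    """Mean squared error between two equal-length pixel lists."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# Pretend pixel intensities: a blurry low-res upscale vs. the
# ultra-high-resolution reference render of the same frame.
low_res_upscale = [0.2, 0.5, 0.4, 0.7]
high_res_target = [0.3, 0.6, 0.5, 0.9]

# The "AI" is just one learnable correction value per pixel.
correction = [0.0] * len(low_res_upscale)
lr = 0.5  # learning rate: how strongly each feedback pass adjusts

losses = []
for step in range(20):  # repeated passes over the same frame
    output = [p + c for p, c in zip(low_res_upscale, correction)]
    losses.append(mse(output, high_res_target))  # feedback signal
    # Gradient of MSE w.r.t. each correction value; adjust each pass.
    for i in range(len(correction)):
        grad = 2 * (output[i] - high_res_target[i]) / len(correction)
        correction[i] -= lr * grad

# The longer we train, the closer the output gets to the reference.
print(losses[0], losses[-1])
```

The printed loss shrinks pass after pass, which is the "longer training, better output" point in miniature; the real thing is a deep neural network trained over huge numbers of frames, but the feedback loop has the same shape.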
I expect that the end result will be combined in a pixel shader, which would have been operating over each pixel in the frame anyway, so in terms of computation required on the user's machine, this should be very nearly free.
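The runtime side of that claim can be sketched too: once training is done the weights are fixed, so producing each output pixel is just a small, constant amount of arithmetic over its neighborhood, exactly the kind of work a pixel shader already does. The 3-tap filter below is an illustrative stand-in for a trained network, not DLSS's real model.

```python
# Hedged sketch of the inference pass: frozen "trained" weights applied
# per pixel, like a pixel shader would. 1D frame for simplicity.

trained_weights = [0.25, 0.5, 0.25]  # pretend these came from training

def enhance_pixel(frame, i, weights):
    """Cheap per-pixel pass: fixed weighted blend of a pixel's neighbors.
    Edge pixels clamp to the frame border."""
    left = frame[max(i - 1, 0)]
    right = frame[min(i + 1, len(frame) - 1)]
    return weights[0] * left + weights[1] * frame[i] + weights[2] * right

frame = [0.0, 1.0, 0.0, 1.0]  # a tiny 1D "frame" of pixel intensities
enhanced = [enhance_pixel(frame, i, trained_weights)
            for i in range(len(frame))]
print(enhanced)
```

Note there is no training anywhere in this pass: the per-frame cost is one fixed blend per pixel, which is why shipping a pretrained model makes the runtime overhead so small.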
It really is a paradigm shift in terms of real-time graphics, and going forward, as these AIs continue to grow more powerful, I expect we will see this kind of technique improving the end result all over the place.