If you're filling in the blanks effectively using AI and sharpening lines, curves, etc., it can look "better" than native* - but I'm still not a huge fan - I'd rather have the native image personally.
* Kind of like how Optoma "embeds stereoscopic depth information from a human-vision-based model" to make upscaled footage look "HD".
The reason DLSS 2.0 in some instances appears better than native is because it's trained against a much higher resolution.
DLSS 2.0 is trained by running games at 16K resolution on a supercomputer, then packaging that data into a profile that gets delivered to your graphics card via a driver update.
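To make the idea concrete, here's a toy sketch of that kind of training setup - not NVIDIA's actual pipeline, just the general concept: render pairs of low-res and high-res frames offline, then fit a model that learns to map low-res input toward the high-res "ground truth". The fake renderer and single-weight linear "upscaler" below are purely illustrative stand-ins; the real thing is a deep neural network trained on 16K reference frames plus motion vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

def render_pair(n=64):
    """Fake 'renderer': a high-res signal plus its 2x-downsampled version."""
    hi = rng.standard_normal(n)
    lo = hi.reshape(-1, 2).mean(axis=1)  # naive 2x downsample
    return lo, hi

# Toy model: each low-res sample is expanded into 2 high-res samples
# by a learned (2, 1) weight matrix. Train with plain gradient descent
# on mean squared error against the high-res ground truth.
W = rng.standard_normal((2, 1)) * 0.1
lr = 0.05

for step in range(500):
    lo, hi = render_pair()
    pred = (lo[:, None] @ W.T).reshape(-1)          # (32,) -> (64,) upscale
    err = pred - hi
    grad = 2 * (err.reshape(-1, 2).T @ lo[:, None]) / lo.size
    W -= lr * grad

# For this toy data the best a linear 1->2 upscaler can do is roughly
# duplicate each sample, so W converges near [[1], [1]].
lo, hi = render_pair()
pred = (lo[:, None] @ W.T).reshape(-1)
print("test MSE:", float(np.mean((pred - hi) ** 2)))
```

The "profile shipped via driver" part corresponds to the trained weights `W` here: the expensive training happens offline, and your GPU only runs the cheap inference step (the `lo @ W.T` line) at play time.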