This is not entirely accurate. The way Super Resolution works in Lightroom is similar to Topaz Labs' approach of reconstructing detail using AI, and if you have a RAW source image to work with, the results are at their best, as RAW retains a lot of information deep within the capture that isn't always visible at the surface.
Photoshop's Enhance Details feature is AI-powered too, but it simply sharpens the image using AI. Super Resolution in Lightroom uses AI to quadruple the resolution and rebuild detail that would have been there if the original source had been that same 104MP image taken with a higher-MP sensor. Take a look at the fine detail in the bride's dress lower in the image where the sunlight hits it: the mesh detail isn't clear on the 26MP version, but it is clear on the 104MP upscaled image because it has been reconstructed accordingly and accurately. There is no sharpening applied here to simply enhance details and give the illusion of better detail.
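Just to put numbers on the "quadruple the resolution" bit: Super Resolution doubles each linear dimension, so the pixel count goes up 4x. A quick sanity check in Python (the 6240x4160 dimensions are an assumed example for a ~26MP sensor, not taken from the actual files):

```python
# Doubling width and height quadruples the megapixel count.
width, height = 6240, 4160                      # assumed dimensions for a ~26 MP sensor
original_mp = width * height / 1e6              # ~26 MP
upscaled_mp = (width * 2) * (height * 2) / 1e6  # ~104 MP
print(f"{original_mp:.0f} MP -> {upscaled_mp:.0f} MP")
```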
Because, again, it's not recreating extra detail beyond what's in the original image, which in this case is the RAW file. You could always get a better upscaled image using old-fashioned upscaling from RAWs than by starting with a JPEG file, but even in your case that is more to do with the JPEG engine in the camera being limited.
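By "old-fashioned upscaling" I mean plain resampling with no AI involved. A minimal sketch with Pillow (the file names and the LANCZOS filter are just my assumptions for illustration):

```python
# Minimal non-AI 2x upscale with Pillow: classic resampling only, no detail is invented.
from PIL import Image

src = Image.open("from_raw_export.tif")  # hypothetical TIFF exported from the RAW converter
w, h = src.size
upscaled = src.resize((w * 2, h * 2), Image.Resampling.LANCZOS)
upscaled.save("upscaled_2x_lanczos.tif")
```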
All the machine learning is doing is refining the algorithm. Just look at DxO DeepPRIME, which uses machine learning to clean up noise very well. It all has its uses, but what I have an issue with is people acting like it can create NEW detail.
You can't make new detail out of the photo file you are basing the image on. All you can do is extract 100% of the detail that's there, and machine learning can help with that. Beyond that, it's just an estimation of what the scene might look like if you used a better imaging device. The only way to resolve that scene with more detail is to use better equipment.
CSI has a lot to answer for!
Or do you think all the professional photographers will just stick with a 24MP D7200 and never upgrade to a higher-resolution system? They could pass everything through the upscale filter and never upgrade again. You know very well that won't happen!
Not even smartphone photographers would do that. More MP, more lenses, more everything!
Marketing bandies these terms around, and it actually annoyed people I knew who were computing guys. They always called it machine learning, so when marketing kept calling it AI, they got vexed repeatedly!
This is why I think we are probably arguing two different things, so best to leave it at that!
My point is this: there are many variations of these technologies, and the way they are implemented matters most. You can't take a low-quality source image (DLSS with a 1080p output target) and expect the 1080p reconstruction to match a native 1080p image, because the internal render it works from is far too low-res. But give the AI a suitably detailed source image and it can do much, much more, and actually produce a near-perfect result, as demonstrated in all the DLSS comparison videos being posted online.
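For anyone wondering what "far too low-res" means in practice, here's a rough sketch of the internal render resolution per DLSS mode at a 1080p output, using the commonly cited per-axis scale factors (the factors and the 1080p target are my assumptions here, and may vary by DLSS version and game):

```python
# Rough sketch: internal render resolution per DLSS mode for a 1080p output,
# using commonly cited per-axis scale factors (assumed, may vary by version/game).
scale_factors = {
    "Quality": 2 / 3,
    "Balanced": 0.58,
    "Performance": 0.5,
    "Ultra Performance": 1 / 3,
}

out_w, out_h = 1920, 1080
for mode, s in scale_factors.items():
    print(f"{mode}: ~{round(out_w * s)}x{round(out_h * s)} internal render")
# Quality mode at 1080p works from roughly 1280x720, which is why a 1080p output
# target gives the reconstruction so little source detail compared with 1440p or 4K.
```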
In an ideal world everything would be native and run great in games, but we don't live in one, and even then we'd still need a superior AA solution at native resolution to combat jaggies. No in-game AA solution is really that good or efficient for today's games: they're either too soft or too unstable temporally. DLSS appears to resolve the shortfalls of inefficient AA, so for that alone it's well worth implementing properly in a game, plus we all get free fps gains in the process, so what's not to like?
My big issue is Nvidia using it to upsell crap like the RTX 4060 Ti. If the RTX 4070 had been the RTX 4060 Ti we all expected, at closer to £400, then DLSS, FG, etc. would be an additional selling point on top.
I dislike that it's becoming the main selling point.