RDR2, for instance, worked pretty well with a combination of high resolution and DLSS to solve the aliasing issue on vegetation, at least for me. I had good results in other games too, more so since I was playing mostly at either 3x1080p or just 1080p - that was before I got the 4080.
Another aspect of "better than native" comes from the DLSS use itself. In order to get the same performance as I would with DLSS, I had to scale details down to a really low level, where it looked worse than DLSS (but at least it was native). So it could, indeed, also solve the aliasing issue if I were otherwise forced to play at a lower-than-native resolution. 720p "native" (without upscalers) looked worse and had more aliasing than DLSS. DLAA at lower resolutions (around 1080p) also worked fine for me compared to other "native" solutions. Again, this was in my use case.
Well, your use case is supersampling, isn't it? Which is an ideal working environment for DLSS-like algorithms, but not how NVIDIA advertises its use nor how the majority of gamers use it. That anyone with a high-end GPU needs to go to 720p native to get proper FPS would be (or rather is) the sad state of affairs, though - lazy devs and badly made games.
Now, fun fact - the TI dude actually advocates a lot in his latest vid for DLDSR (or a similar solution) to be standard in games instead of badly implemented AA, DLAA and the like. It looks better, works well and produces a very clear image without artefacts, little to no ghosting, etc. He showed proper examples of the quality differences too. And he doesn't seem to like Transformer DLAA much (it still has issues, which he presented).
At the same time, stuff like some fog/smoke could be pixelated, as I guess DLSS didn't work that well on it, and you'd also lose a bit of sharpness, which you could get partially back with a sharpening filter. Some of this got solved with later iterations and, I think, RR.
Improved, yes. Solved - no. NVIDIA themselves said it: their AI models need work and further improvements and will get better over time, but they're not there yet - and that was about the Transformer model, which already does a much better job than the CNN one in most cases.
That mythical TAA could have worked decently at native, I suppose. What happens when you need to upscale from a lower resolution - does it hold up as well as DLSS? I don't think so. Also, since it's mythical, it matters less what it could be and more what the state of affairs actually is.
A TAA-based upscaler exists - it's the default one in UE5. It's not great, though. What would help (and already exists) is DLDSR+DLSS, as you know, but that's fiddly and most people just won't use it unless it's native in games. But game devs follow what they think gamers will like, so if enough pressure for it appeared, they'd do it. Yet NVIDIA doesn't advertise it, so they don't, as hardly anyone knows it exists.
Those were quick examples, but valid ones, since NVIDIA didn't hold a gun to their heads.
Figuratively they did - it's called sponsorship. They do as told, or they don't get monies/help.
But also take Bethesda, with their games - Starfield in particular, where it sucked even with AMD's support.
Why do you always bring up abominations?

It's also very consistent with Bethesda's track record, sadly.
The Far Cry series, where the 2nd instalment worked fine in Eyefinity/Surround, while from 3 onwards it was broken and never fixed, even though it says Surround/Eyefinity when selecting the required resolution - the FOV was probably around 40-50... vomit-inducing.
Yep, I couldn't play any of the later ones without modding for FOV. :/
And that was regardless of who sponsored the game - AMD or NVIDIA.
The thing about NVIDIA and their sponsorship is a bit different - their tactic for many GPU generations now has been to create tech that only works on their hardware, push it through sponsorship into games and try to persuade everyone that it's the only way forward. That way, they are the only ones with hardware support for it and instantly win the market. They failed many times until they realised they would never get there through raster tech, as it's supported by all players and well known, so they pushed toward AI (including DLSS) and RT. It fit their enterprise R&D perfectly too. Neither really works well without the other. Still, it didn't take proper hold until consoles supported it well enough, but on the PC at least it finally worked. And here we are - a monopoly.

FOV (and many other things) is not part of their tech, hence they don't care.
Another fun fact - the TI dude again mentioned he sees only NVIDIA as having any vision for the future of graphics evolution. That doesn't mean he agrees with their choices (the AI push), but it's the only vision that currently exists, as Intel and AMD only fight over which of the two will be the better alternative to NVIDIA, following the same steps. Neither of them actually has any vision about the future of graphics in games. Then again, AMD is a console and enterprise producer, with PC gaming being a super niche they do just as a by-product of their APUs (for consoles and not only).
All the games from Red Faction onwards that didn't use proper physics, at most just a "baked" one (Battlefield).
I still remember a bunch of devs blaming it on difficulties with lighting implementation in games. The sad bit - that was resolved years ago with new algorithms, hybrid RT or even full RT. And? Still no physics.

Lazy devs and bad excuses. At least the BF devs admitted at some point that they don't know how to make a well-balanced, fun multiplayer game with proper physics, completely ignoring that people LOVED Bad Company 2 and its full-map destruction.
Or AMD's Froblins demo, where you have thousands of AI agents running around doing their thing on basically ancient hardware, but it didn't get traction. Or Mantle, which, although it was a performance-boosting solution, didn't get the support it should have.
AMD made a bunch of really good-looking and ahead-of-their-time demos, like this one:
It still looks good today in 4K, even though it's full of shortcuts and optimisations. But they never had much traction with game devs, and NVIDIA often didn't support many of these technologies well enough to push for them to be used, so it took years before some appeared in games (like tessellation, etc.). NVIDIA's marketing has always been so much better, not just with gamers but also with game devs.
As for Indy, I don't see anything completely wrong with their faces.
Every person is different - most people might not like uncanny-valley faces, but not everyone reacts the same. Also, playing a lot of games and realising it's computer graphics and not reality helps a lot in suppressing such instincts. I see them as abominations, though.
