That would be great, and way more interesting than the usual non-dev people like DF and the like. It's fine to look at the final product and say "I like this or that better", compare screenshots etc., but I don't understand why some people consider them experts in the field, when in reality they seem to be, at best, on the level of average enthusiast gamers.
He does show the good AA in other videos, with live examples in motion. It's actually just tweaked TAA, which got its bad press mostly from really bad implementations in the UE4 engine, not from the tech itself. That said, DLSS and the like are really also TAA with "AI" slapped on top (more on that later). Also, he clearly put "AT LEAST 1080p, better quality AA etc." on the graph as the base resolution to aim at, instead of upscaling from ~864p like one often gets with DLSS these days - the point was not to limit visuals to 1080p. The point is also that you would get better upscaling from good 1080p than from crap 864p. As in, the opposite of what you thought he proposed.
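To make the "DLSS and the like are really also TAA" point concrete, here's a minimal toy sketch of the accumulation loop the whole TAA family (TAA, TAAU, DLSS, FSR) is built around - my own Python simplification, not any engine's actual code, with reprojection and history rejection left out:

```python
import numpy as np

def taa_accumulate(history, current, alpha=0.1):
    """Blend the current frame into the running history buffer.

    alpha = fraction of the new frame kept each step; a small alpha gives a
    smoother, more stable image, but also more ghosting and blur.
    """
    return (1.0 - alpha) * history + alpha * current

# Toy run: a 1-pixel "image" whose true value jumps from 0.0 to 1.0.
history = np.array([0.0])
for frame in range(10):
    history = taa_accumulate(history, np.array([1.0]))
    print(f"frame {frame}: accumulated value = {history[0]:.3f}")
# After 10 frames the history has only reached ~0.65 - that convergence lag
# is exactly where TAA's ghosting and softness come from.
```

Everything interesting in the "AI" variants lives in how cleverly they adjust that blend per pixel; the accumulation itself is the same old TAA.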
I am not sure where you got that idea from; that isn't what he said at all, and it wasn't even his point. He was just arguing that native 4K isn't necessary for playing on a TV, for various reasons (same as with 4K movies: most people can't even see the difference, because their TV isn't big enough and their sofa isn't close enough for the difference to be visible), and that you can get a very similar level of detail from 1080p with proper AA, upscaled to 4K as needed. Again, this is in comparison to upscaling from 864p, like a lot of games (especially on consoles) do these days.
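The raw pixel counts show how different those two starting points are (plain arithmetic, nothing engine-specific):

```python
# How much of a 4K frame each base resolution actually provides.
resolutions = {"4K": (3840, 2160), "1080p": (1920, 1080), "864p": (1536, 864)}
four_k_px = 3840 * 2160
for name, (w, h) in resolutions.items():
    px = w * h
    print(f"{name:>6}: {px:>9,} px ({px / four_k_px:.0%} of 4K)")
# 1080p is exactly 25% of 4K, i.e. a clean 2x-per-axis upscale;
# 864p is only 16%, so the upscaler has to invent far more of the image.
```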
Again, not what he was talking about, as I described above. When you have proper FPS at 1080p, with good AA and a good amount of detail, you can do whatever you want with the image - leave it as is, upscale it with various methods, etc. But the goal should be to make the source image as good and as fluid as possible first, and only then worry about the fluff.
Consoles and xx60-series cards are by far the majority of the market; 4070 Ti and up are a tiny fraction in comparison, simply because of the pricing. That said, they test everything on the xx60 series because that's roughly console speed, and if you can make a game work well for this largest chunk of the market, higher-end GPUs will just give you even more FPS and higher resolution. Sounds logical to me. You can then add more fluff on top for higher-end users, but that's an extra. However, when you aim at a 4090 as your main target and the game then runs really badly on xx60 cards, you have failed as a dev to target the majority of the market. And that seems to be the point here. Also, don't forget what happened to the 3080 10GB: it already suffers in games where a weaker GPU with more VRAM gets better results at the same settings. That wouldn't be a problem if games were better optimised.
I do play on a console at times, and my wife mostly plays on one. Neither of us would touch 30FPS with a ten-foot pole... It's just a horrible experience; it makes me physically sick. That publishers seem to be pushing 30FPS as the new meta (a total regression) doesn't mean it's good for players, nor that they like it. Games usually offer a choice between more bling at 30FPS or proper 60FPS with worse lighting - and all the stats I've seen show that the huge majority of players go for 60FPS and ignore the bling.
AMD came up with very good AO methods, along with a bunch of other things - all open tech (the FidelityFX suite), available for years now. There's a reason many indie devs go for AMD tech like SSR, AO, upscaling etc.: it's just easily available for them to tweak and implement as desired, instead of NVIDIA's black-box approach.
Again, performance matters - the hardware RT one on a 3060 would be completely unplayable, so it might as well not exist at all.
He's showing that TAA is everywhere in games, applied to all kinds of effects to mask artefacts. You can't disable it in the settings, and it makes the whole image very blurry - and that's before one even adds upscaling (DLSS and others) and/or AA on top. Before his vids I wasn't even aware TAA was so widespread in games, in places where we wouldn't expect it (i.e. not doing actual AA). It also explained to me why everything in games is so blurry these days - I thought it was DLAA or DLSS, but turning them off changed nothing; now I know why. And it's not just my old eyes.
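Here's a toy illustration of why you often can't turn TAA off even if you want to (my own sketch, not any engine's code): many modern effects - hair, foliage, soft shadows - are rendered dithered, with each frame drawing only a random subset of the pixels, and the TAA history blend is what averages that noise into a smooth result:

```python
import numpy as np

rng = np.random.default_rng(0)
target_coverage = 0.5            # the effect wants to look 50% transparent
history = np.zeros((4, 4))

for frame in range(60):
    # Each frame the effect is drawn stochastically: 1 where a pixel lands.
    dithered = (rng.random((4, 4)) < target_coverage).astype(float)
    history = 0.9 * history + 0.1 * dithered    # the usual TAA blend

print("one raw dithered frame (roughly what you'd see with TAA forced off):")
print(dithered)
print("temporally accumulated result (what TAA presents):")
print(history.round(2))    # converges toward ~0.5 everywhere, i.e. smooth
```

Disable the blend and you don't get a sharp image back, you get the raw dither pattern - which is why these games simply don't expose the toggle.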
Shadows would be rendered using a different method; this is just about GI. Its lagging isn't any different from (it's actually often better than) GI in CP2077 with PT - that one can lag like hell without using AI, and even with AI it's far from perfect. Very disturbing to look at, as my brain knows this is just wrong.
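The lag itself is easy to put numbers on (general back-of-the-envelope math, not CP2077's actual tuning): temporally accumulated GI with blend factor alpha needs many frames to catch up after the lighting changes:

```python
import math

def frames_to_converge(alpha, fraction=0.95):
    """Frames until the accumulator picks up `fraction` of a lighting change."""
    return math.ceil(math.log(1.0 - fraction) / math.log(1.0 - alpha))

for alpha in (0.20, 0.10, 0.05):
    n = frames_to_converge(alpha)
    print(f"alpha={alpha:.2f}: ~{n} frames = {n / 60:.2f}s at 60 FPS")
# With alpha=0.05 the light needs ~1 second to "arrive" after a change -
# that's the lag the brain flags as wrong, and denoisers only partly hide it.
```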
It's also a relatively old game (8 years and counting), but the whole point is that it was very cheap performance-wise and still looked great on the hardware of that time. That doesn't mean it can't be improved or tweaked further - hardware has gotten faster since then, for sure.
And it likely won't be in the AAA world, as that would require them to do something more than just enabling Lumen - that costs money, so it won't be done. Unless something finally gives in that world, which doesn't seem far off, considering their games are flopping left and right these days, financially.
I don't believe this was his point at all.
Also, this whole vid was more of a set of quick bullet points, without many details. The comments underneath seem to be full of devs, by the way, and quite a few said they just learned something new about methods they had never heard of before. The AAA dev world is just like any other IT world I work with daily: people simply don't research and don't realise there is more than one way of doing things. I see it daily, and it makes me sad at times how closed-minded many devs and other IT people are. It usually takes someone unusual, a genius, to push things forward - like John Carmack back in the day, building the Doom engine in very clever ways on top of old math (binary space partitioning) that most people didn't even know about.
This whole "AI upscaling" thing is very misleading though, isn't it? It's really just TAA with an AI-trained convolution used for better frame matching and for removing temporal artefacts (which it still fails to do now and then - ghosting and other artefacts are still visible in places, even in the newest version). People imagine the AI is filling in blanks, adding missing details etc., but that is not what it's doing there at all, as per NVIDIA's own papers. Which is why, as is, it barely uses the tensor cores even on a 3060: the network has very little to do in the whole process. Ergo, it's marketing more than anything of real value.
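For anyone curious where the extra detail actually comes from, here's a toy version of the published TAAU/DLSS idea (my simplification in plain Python, emphatically not NVIDIA's code): the camera is jittered by a sub-pixel amount each frame, so successive low-res frames sample different positions, and accumulating them fills in the high-res grid - the learned convolution mainly decides how much history to trust per pixel:

```python
import numpy as np

def scene(x, y):
    """A detailed 'ground truth' scene, as a function of screen position."""
    return 0.5 + 0.5 * np.sin(8 * x) * np.cos(8 * y)

hi, lo = 8, 4                        # toy high-res and low-res grid sizes
history = np.zeros((hi, hi))
rng = np.random.default_rng(1)

for frame in range(256):
    jx, jy = (rng.random(2) - 0.5) / lo      # per-frame sub-pixel jitter
    for i in range(lo):                      # "render" the low-res frame...
        for j in range(lo):
            x = (i + 0.5) / lo + jx
            y = (j + 0.5) / lo + jy
            hx = min(int(x * hi), hi - 1)    # ...scattering each sample
            hy = min(int(y * hi), hi - 1)    # into the high-res history
            history[hx, hy] = 0.9 * history[hx, hy] + 0.1 * scene(x, y)

# The high-res history now approximates the full-resolution scene purely from
# accumulated real samples - no detail was "imagined" by a network.
centers = (np.arange(hi) + 0.5) / hi
truth = scene(*np.meshgrid(centers, centers, indexing="ij"))
print("mean abs error vs ground truth:", float(np.abs(history - truth).mean()))
```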