I agree, but they're the ones calling that patch PT, not me; it's not my invention.
The current gen includes 4060-class cards, and xx60-class cards are the majority of the market. On top of that, we see more and more high refresh rate monitors out there, some now even above 400Hz; the fact that there's a market for such expensive monitors suggests high FPS in games is more desired now than in the past. Ergo, optimization is for all GPUs, not just so the top 1% can hit 60fps.
That's not Nanite by itself, that's also geometry instancing: the real calculation happens only once, for one statue. 3DMark has had a benchmark for that for a while now. Now put 500 different meshes in the scene, so instancing no longer works, and you'll see FPS fall on its face. Then check how many triangles are really needed for good enough detail on that statue, the point where more triangles still make a visible difference, and cut the mesh down to that. Suddenly you can run a lot of different meshes without killing FPS, with proper quality, and they use less memory and drive space too. It's basic optimisation really: don't waste space and performance for nothing. You don't have to turn off Nanite, you just have to use it with your head instead of flipping the switch on and forgetting about it.
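To make that concrete, here's a rough back-of-the-envelope sketch in Python. Every number in it is made up purely for illustration (the bytes-per-triangle figure and the triangle counts are assumptions, not measurements from any real engine), but the arithmetic shows the point: instanced copies share one mesh in memory, unique meshes don't, and cutting the triangle budget shrinks the cost again.

```python
# Rough illustration of why instancing and triangle budgets matter.
# All numbers are hypothetical, chosen only to show the arithmetic.

BYTES_PER_TRIANGLE = 36  # assumed: ~3 vertices * 12 bytes of position data

def mesh_memory_mb(triangle_count, mesh_count, instanced):
    """Approximate geometry memory in MB; instanced copies share one mesh."""
    unique_meshes = 1 if instanced else mesh_count
    return unique_meshes * triangle_count * BYTES_PER_TRIANGLE / 1e6

# 500 instanced copies of one 2M-triangle statue: geometry stored once.
print(mesh_memory_mb(2_000_000, 500, instanced=True))   # 72.0 MB

# 500 *different* 2M-triangle meshes: no sharing possible.
print(mesh_memory_mb(2_000_000, 500, instanced=False))  # 36000.0 MB

# Same 500 unique meshes cut down to 50k triangles each:
print(mesh_memory_mb(50_000, 500, instanced=False))     # 900.0 MB
```

The middle case is the "500 different meshes" scenario above: forty times the memory of the decimated version, for detail you mostly can't see anyway.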
That aside, a highly optimised demo showing one scene is nothing like an actual game. Might as well say 3DMark tech demos are a good indicator of how games behave, which was never true.
What devs need to do is follow the basic guidelines from the UE and ML devs; so far, as witnessed in many UE5 games and demos, they often don't even do that. Your argument makes very little sense when even those ML devs are asking developers in their posts to please finally follow the performance optimisation guidelines and not just be lazy about it. That also, by the way, increases image quality, not just performance: bad practices introduce noise and artifacts where there didn't have to be any.