Uh, speaking as a graphics researcher, graphics haven't really moved that much in the last 5 years. Shader technology itself may be interesting from an implementation point of view, but we are still looking at the same tired techniques of optimized meshes, multiple view-dependent levels of detail for texture mapping, etc. that we have had since the first Half-Life.
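Just to show how little magic is involved: here's a rough C++ sketch (names and constants are mine, not from any particular engine) of what "view-dependent level of detail for texture mapping" boils down to - picking a mip level from the viewing distance.

```cpp
#include <algorithm>
#include <cmath>

// Rough sketch of the distance-based mip-level pick engines have done for
// years. Function name and parameters are illustrative, not from any engine.
int pickMipLevel(float distance, float fullDetailDistance, int maxLevel) {
    // The texture detail needed roughly halves with each doubling of
    // distance, so the mip level grows with log2 of the relative distance.
    float level = std::log2(std::max(1.0f, distance / fullDetailDistance));
    return std::clamp(static_cast<int>(level), 0, maxLevel);
}
```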
The artists / level designers / architects design all the layouts, run them through a big Monte Carlo radiosity solver, dump all the data out into bitmaps, and the latest graphics cards chuck them onto the same scanline-rendered polygons. Just because the surface normals are specified with a script instead of a fixed texture / value doesn't mean that you aren't using the same overall algorithms.
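If you've never seen the bake step, here's roughly what it amounts to - a toy C++ sketch with a stand-in scene (one unoccluded overhead light instead of a real bounced-light solver), just to show that the expensive Monte Carlo work happens offline and the runtime only ever sees a bitmap:

```cpp
#include <algorithm>
#include <cstdint>
#include <random>
#include <vector>

struct Vec3 { float x, y, z; };

// Stand-in for the solver's radiance estimate along one random sample
// direction; a real radiosity solver would trace bounced light here.
float sampleRadiance(const Vec3& texelPos, std::mt19937& rng) {
    std::uniform_real_distribution<float> u(0.0f, 1.0f);
    float cosTheta = u(rng);     // tilt of a random hemisphere direction
    return cosTheta;             // Lambertian falloff under a unit sky light
}

std::vector<std::uint8_t> bakeLightmap(const std::vector<Vec3>& texels,
                                       int samplesPerTexel) {
    std::mt19937 rng(42);
    std::vector<std::uint8_t> bitmap;
    bitmap.reserve(texels.size());
    for (const Vec3& p : texels) {
        float sum = 0.0f;
        for (int s = 0; s < samplesPerTexel; ++s)
            sum += sampleRadiance(p, rng);                    // Monte Carlo gather
        float avg = sum / static_cast<float>(samplesPerTexel);
        bitmap.push_back(static_cast<std::uint8_t>(
            std::min(avg, 1.0f) * 255.0f));                   // quantise to 8 bits
    }
    return bitmap;   // at run time this is just another texture to map
}
```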
Only when they decide to start implementing in hardware the algorithms that allow ray tracing, and eventually radiosity, will we see the next "jump" in graphics. Until that time, what you see is the output of thousands of powerful computers, texture-mapped onto your own crappy low-polygon models. And adding polygons can only increase a model's photorealism so far before you realise that it is the same rendering engine as before, and that by developing a new one for each and every game, companies waste billions of pounds / dollars.
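For contrast, this is the per-pixel work ray tracing does that scanline rasterisation never does: one ray is cast through every pixel and intersected with the scene. A toy C++ sketch with a single hard-coded sphere, purely illustrative:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Solve |o + t*d - c|^2 = r^2 for the nearest positive t; report hit or miss.
bool hitSphere(Vec3 o, Vec3 d, Vec3 c, float r) {
    Vec3 oc = sub(o, c);
    float b = dot(oc, d);
    float disc = b * b - (dot(oc, oc) - r * r);   // assumes d is normalised
    return disc >= 0.0f && -b - std::sqrt(disc) > 0.0f;
}

int main() {
    const int w = 32, h = 16;
    Vec3 eye{0, 0, 0}, centre{0, 0, -3};
    for (int y = 0; y < h; ++y) {                 // one primary ray per pixel
        for (int x = 0; x < w; ++x) {
            float px = (x + 0.5f) / w * 2 - 1, py = 1 - (y + 0.5f) / h * 2;
            Vec3 d{px, py, -1};
            float len = std::sqrt(dot(d, d));
            d = {d.x / len, d.y / len, d.z / len};
            std::putchar(hitSphere(eye, d, centre, 1.0f) ? '#' : '.');
        }
        std::putchar('\n');
    }
}
```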
The scripts may run more quickly now, producing a higher-resolution image for dumping to the screen - but it's the same technology!
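To underline the point: whether the normal comes out of a stored texture or out of a little per-pixel script, the shading maths downstream is unchanged. A toy sketch, all names mine:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// "Scripted" normal: a procedural ripple perturbing a flat (y-up) surface.
Vec3 proceduralNormal(float u, float v) {
    float du = 0.2f * std::cos(20.0f * u);   // slope of the ripple along u
    float dv = 0.2f * std::cos(20.0f * v);   // slope of the ripple along v
    float len = std::sqrt(du * du + 1.0f + dv * dv);
    return {-du / len, 1.0f / len, -dv / len};
}

// The shading step is the same old algorithm either way: clamped Lambert.
float lambert(Vec3 n, Vec3 toLight) {
    float d = dot(n, toLight);
    return d > 0.0f ? d : 0.0f;
}

int main() {
    Vec3 light{0.0f, 1.0f, 0.0f};            // light straight overhead
    std::printf("%.3f\n", lambert(proceduralNormal(0.1f, 0.7f), light));
}
```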
But *wink* that could all change soon