Eh, the 8800GTX was no more significant a boost over the previous gen than any other new generation was. In terms of architecture it was a brute-force design, nothing particularly fancy. Saying it can still game at 1920x1200 is like saying a GF3 could still play at 1920x1200 three years after its release; of course it could, if you played a three-year-old game three years later it offered the same or even better performance. Try to play a "tough" brand-new game on a GF3 three years post-release, or on a G80 three years post-release, and neither is good at high res.
Honestly, the R600/2900XT was one of the most interesting GPUs in terms of tech features; it was just a tiny bit ahead of its time and didn't follow AMD's current approach of designing around the manufacturing quality actually available. If TSMC hadn't sucked, a 2900XT released on 65nm WOULD have been faster than an 8800GTX. The ring bus was a huge but ultimately too costly step forward that isn't quite required yet (though it will likely make a comeback when we get to much higher shader counts).
It was also AMD/ATI who started the push towards shaders rather than pipelines with the X1800/X1900 series, and they also had the first unified programmable shader core (in the Xbox 360).
Nvidia's latest core, the unreleased Fermi, has a few unexpected but very interesting features; however, we'll have to see whether they tell us exactly what those are around release. So far we know they've moved a lot of the rendering pipeline into the shader clusters and split it up, but for a lot of things, like tessellation, we don't know whether they've built dedicated hardware or not.
The GF3 was a very nice little card though, and the changes between their first cards and the GeForce 256 (IIRC) were massive and interesting.
The G80 went towards unified programmable shaders; the iterations since then have mostly been "more is better" rather than anything fancy. Even in Fermi the primary change is more shaders, with only small differences per shader; it's the rendering pipeline rather than the shaders that has changed most, and for those changes we're not sure what has actually changed other than location, as Nvidia haven't told us anything except what's in each unit, not how it's done.
The 2900XT, from a manufacturing standpoint, was maybe the most amazing card made in years. Fermi is struggling to be built on the very process it was intended for, a process that is out, has been in use for a year, and sucks, but is at least there. AMD took the 2900XT, designed exclusively for the 65nm process, a process that was delayed by almost a year and wasn't available at all due to big problems, moved the design UP a process to 80nm, and still managed to get it out; in games like Bioshock it even beat an 8800GTX in DX10 mode. That is an amazing feat. Imagine Fermi doing so badly on 40nm that Nvidia decided to push it back to 55nm, a 3-billion-transistor core; that's the equivalent. Remember, the 2900XT was a humongous core on 80nm, bigger than a G80 and far more complex; to move it up a process within six months is, as far as I know, unmatched as a manufacturing feat among CPUs and GPUs.
To really appreciate a core you have to understand the architecture, and realistically the biggest changes and advances came with the GeForce 256, 9700 Pro, X1900, and 2900XT.