I'm going to go out on a limb here and disagree with ernyzmuntz - not for the sake of it, but with a bit of hindsight.
What we have seen in the world of the CPU is MHz, MHz, MHz, culminating in Prescott, which was a joke as far as efficiency was concerned. AMD were doing fine with their CPUs, as their pipeline was not as long as the one that runs from Tunguska through to Europe. Mercifully, the massive power consumption and rubbish output were put to bed, and out of it resurfaced the P3-style chip, the Core 2 Duo. This just shows that chip manufacturers won't be chasing massively high MHz any more, and that parallelism is now king: dual/quad/etc. core to go with multithreaded applications. Future OS releases will run numerous threads, each needing its own slice of CPU time concurrently with the others.
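To make the multithreading point a bit more concrete, here's a minimal C++ sketch (using std::thread from the modern standard library; the summing workload and the even split are purely illustrative, nothing from a real OS or application) of a program dividing its work across however many cores the machine reports, rather than leaning on one very fast core:

```cpp
#include <cstddef>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Illustrative only: split a big workload (here, summing an array) evenly
// across the number of hardware threads the CPU exposes.
int main() {
    const std::size_t N = 1 << 24;
    std::vector<int> data(N, 1);

    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 2;  // fall back if the runtime can't tell us

    std::vector<long long> partial(cores, 0);
    std::vector<std::thread> workers;

    for (unsigned t = 0; t < cores; ++t) {
        workers.emplace_back([&, t] {
            // Each thread sums its own contiguous slice of the data.
            std::size_t begin = t * N / cores;
            std::size_t end   = (t + 1) * N / cores;
            partial[t] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0LL);
        });
    }
    for (auto& w : workers) w.join();

    long long total = std::accumulate(partial.begin(), partial.end(), 0LL);
    std::cout << "Sum over " << cores << " threads: " << total << "\n";
}
```

The point being: add more cores and this kind of code gets faster without any increase in clock speed, which is exactly the trade-off the chip makers are now banking on.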
The same will eventually happen in the graphics world, too. ATi's R300 was a very interesting chip, not just because of its longevity (9700 Pro ---> X850XT, albeit in very, very tweaked form) and the fact that it completely battered everything else, but because it introduced something unique at that point: the ability to communicate with another GPU. This resulted in the dual-GPU cards whose pictures occasionally found their way onto the internet to raucous cries of "Ph0t0shopz0r!!!" and whatnot. Curiously, however, such solutions never made their way into the general market.
However, this parallelism is rearing its head again in the form of SLI/Crossfire, but more importantly in the 7950GX2 and the 7900GT Masterworks (or whatever it's called): one of them uses two cards, the other two chips on one card.
I personally don't think it will be too long until Nvidia/ATi start packing two GPUs onto one chip/card and stirring things up that way. Nvidia have the reins at this point, because they've got the fastest DX9 solution and currently the fastest DX10-capable card. But, as the technology refines itself, it will shrink and require less power whilst churning out more textures. 'Tis the way of such things.