CPUs and GPUs are insanely fast these days. There shouldn't be much of a bottleneck (if any) if you have a Core i5-2500K or i7-2600K and an AMD HD 6990.
I think the biggest limitation we'll hit will be in the transport of data, i.e. how quickly the mobo and CPU can feed data to the graphics card. Current 'bus' technologies only have so much bandwidth.
In silicon, signals travel at a maximum of about 200 million metres per second, which is pretty fast, but still not as fast as light. If we could move data optically instead, transfers around your PC could be up to 1.5x quicker. And I'm no genius, but I'm betting that would be a lot more energy efficient too.
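Quick back-of-the-envelope to show where that 1.5x comes from (the 0.3 m path length is just an assumed example, not a measured trace):

```python
# Rough propagation-delay comparison over an assumed 0.3 m signal path
# (e.g. CPU to GPU across the motherboard). Figures are illustrative only.
SPEED_IN_SILICON = 2.0e8   # m/s, roughly 2/3 the speed of light
SPEED_OF_LIGHT   = 3.0e8   # m/s, light in a vacuum

path_length = 0.3  # metres, assumed example

delay_electrical = path_length / SPEED_IN_SILICON
delay_optical    = path_length / SPEED_OF_LIGHT

print(f"Electrical: {delay_electrical * 1e9:.2f} ns")          # ~1.50 ns
print(f"Optical:    {delay_optical * 1e9:.2f} ns")              # ~1.00 ns
print(f"Speed-up:   {delay_electrical / delay_optical:.1f}x")   # 1.5x
```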
The most mainstream connection at the moment is PCIe 2.0 x16, which is great. And I don't think we'll see any PCIe 3.0 graphics cards any time soon. The PCIe 3.0 standard has been around since late 2010; we already have PCIe 3.0 mobos, but the cards may not appear till 2012/2013.
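For a rough idea of the bandwidth jump between the two, here's the standard per-lane maths for a x16 slot (PCIe 2.0 uses 8b/10b line encoding, 3.0 uses 128b/130b):

```python
# Per-direction bandwidth of a x16 slot under PCIe 2.0 vs PCIe 3.0.
# PCIe 2.0: 5 GT/s per lane, 8b/10b encoding   -> 500 MB/s usable per lane
# PCIe 3.0: 8 GT/s per lane, 128b/130b encoding -> ~985 MB/s usable per lane
LANES = 16

pcie2_per_lane = 5.0e9 * (8 / 10) / 8      # bytes/s per lane
pcie3_per_lane = 8.0e9 * (128 / 130) / 8   # bytes/s per lane

print(f"PCIe 2.0 x16: {pcie2_per_lane * LANES / 1e9:.2f} GB/s")  # 8.00 GB/s
print(f"PCIe 3.0 x16: {pcie3_per_lane * LANES / 1e9:.2f} GB/s")  # ~15.75 GB/s
```

That's why PCIe 3.0 only needed to go from 5 GT/s to 8 GT/s rather than 10: the leaner 128b/130b encoding makes up the rest of the ~2x jump.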
Sooner rather than later, games will demand even more bandwidth and even more memory, and need it all shunted around even quicker.
The best solution is to create a massive GPGPU (general-purpose GPU), where everything is contained within one die. That's quite possibly the future, as we already see with Sandy Bridge's on-die graphics.
At the moment, with discrete graphics, the CPU has to work out where the data needs to go: it takes the binary and sends it to the graphics card for processing. I don't know how heavily that taxes a CPU, especially considering the speeds they run at today. And I don't develop games, so I'm not even sure whether engineers/programmers can write low-level code to specify which core(s) to dedicate to feeding the graphics card (in their game), leaving the other 1, 2, 3, 4, 5, 6 or even 7 cores free for physics and AI (and that's not even counting hypothetical 'virtual' cores).
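For what it's worth, operating systems do let you pin threads to specific cores, so something along those lines is possible today. A minimal sketch, assuming Linux (os.sched_setaffinity is Linux-only, and the worker bodies here are just placeholders, not real engine code):

```python
# Minimal sketch: pin threads to chosen cores so one core feeds the GPU
# while the rest stay free for physics and AI. Linux-only API.
import os
import threading

def pin_to_cores(cores):
    """Restrict the calling thread to the given set of CPU core numbers."""
    os.sched_setaffinity(0, cores)  # pid 0 = the calling thread

def render_feeder():
    pin_to_cores({0})        # dedicate core 0 to feeding the graphics card
    # ... push draw calls / data to the graphics driver here ...

def physics_and_ai():
    pin_to_cores({1, 2, 3})  # leave the remaining cores for game logic
    # ... run physics and AI here ...

threading.Thread(target=render_feeder).start()
threading.Thread(target=physics_and_ai).start()
```

In practice the graphics driver spawns threads of its own, so a game never gets total control over which core talks to the card, but the basic "reserve cores for this job" idea is real.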
/End of 2 pence.