Is there a limit to how many FPS a CPU can add?

I was just thinking about how you always see an increase in FPS with a better CPU. Say you have a high-end graphics card paired with a CPU, motherboard and RAM that were essentially unlimited in performance: if you kept increasing their performance above a high-end Intel, at what point would the FPS stop going up because of the graphics card?
 
Depends whether the game is CPU limited or graphics card limited. If it is graphics card limited, the FPS will not go up any higher no matter how much you overclock the CPU, because you have hit the limit of the graphics card; at that point you would be looking at overclocking the graphics card, adding a more powerful one, or going CrossFire or SLI to see the FPS go up again. Conversely, if overclocking the CPU higher keeps showing gains in FPS, the CPU is the bottleneck. I'll give you an example of a very CPU limited game that to this day still requires a better CPU to run really well at full settings: FSX (Microsoft Flight Simulator X). One of the only reasons I upgrade my CPU is this title, plus video encoding and rendering.
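A toy model of that, as a sketch (the millisecond figures below are made up, not benchmarks): each frame needs a CPU step (game logic, draw calls) and a GPU step (rendering), so whichever takes longer sets the frame rate.

```python
# Toy model: with the CPU and GPU pipelined on consecutive frames,
# frame time is bounded by the slower of the two stages.

def max_fps(cpu_ms_per_frame: float, gpu_ms_per_frame: float) -> float:
    """FPS cap set by whichever stage takes longer per frame."""
    return 1000.0 / max(cpu_ms_per_frame, gpu_ms_per_frame)

# GPU limited: halving CPU frame time changes nothing.
print(max_fps(cpu_ms_per_frame=5.0, gpu_ms_per_frame=10.0))   # 100.0
print(max_fps(cpu_ms_per_frame=2.5, gpu_ms_per_frame=10.0))   # still 100.0

# CPU limited (the FSX case): only a faster CPU helps.
print(max_fps(cpu_ms_per_frame=20.0, gpu_ms_per_frame=10.0))  # 50.0
```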
 
There will always be a bottleneck, and that bottleneck will not always be the CPU, so yes, there will come a point where increasing the CPU doesn't increase the FPS.

Even in your unlimited GPU, motherboard and RAM example you will find bottlenecks in the monitor and cables.

Assuming you have removed the game's FPS cap, the next bottleneck will be you: you may be getting 91827649862345 FPS, but the human eye has a limit as well. I'm not quite sure what that limit is, but I know it's at least 200 FPS.
 
I was really talking in realistic terms, using current software and games: if everything except the graphics card was unlimited, what's the most FPS you could get out of one?
 
The graphics card has to at the very least construct the image to be sent to the display, so even if the game were written to use only CPU power, there would come a point where the video card is unable to push the data into the framebuffer (or whatever it's called) for the monitor to display.
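A back-of-envelope sketch of that ceiling (the fill-rate figure below is illustrative, not a real card's spec): the card has to write every pixel of every frame, so its pixel fill rate caps the FPS no matter how fast the rest of the system is.

```python
# Hard ceiling from pixel fill rate: the card must write width*height
# pixels per frame, so FPS <= pixels per second / pixels per frame.

def fill_rate_fps_cap(pixels_per_second: float, width: int, height: int) -> float:
    return pixels_per_second / (width * height)

# e.g. a hypothetical card rated at 20 Gpixels/s driving a 1920x1080 framebuffer:
print(fill_rate_fps_cap(20e9, 1920, 1080))  # ~9645 FPS, the absolute ceiling
```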
 
I was really talking in realistic terms, using current software and games: if everything except the graphics card was unlimited, what's the most FPS you could get out of one?

Well, this isn't realistic terms, but you could get a CPU that is so unbelievably fast that it's actually better at doing the graphics card's job than the graphics card is.

However, if you want to know the maximum we can get today, look for some benchmarks of a quad-SLI system.
 
I guess all I was asking is: what's the maximum performance you can get out of a given graphics card, or is a CPU capable of adding more and more FPS as it improves?

Why aren't graphics cards given a maximum performance benchmark by the manufacturers?
 
CPUs and GPUs are insanely fast these days. There shouldn't be much of a bottleneck (if any) if you have a Core i5 2500K or i7 2600K and an AMD Radeon HD 6990.

I think the biggest limitation we will hit will be in the transportation of data, e.g. how quickly the motherboard and CPU can feed data to the graphics card. There's only so much speed the current 'bus' technologies can offer.

In silicon, signals travel at a maximum of around 200 million metres per second, which is pretty fast, but still not as fast as light (about 300 million metres per second in a vacuum). If we could use light it would make the transfer of data around your PC roughly 1.5x quicker. And I'm no genius, but I'm betting that's a lot more energy efficient too.
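The arithmetic behind that 1.5x, for anyone checking:

```python
# Rough signal speeds: ~2e8 m/s for electrical signalling,
# 3e8 m/s for light in a vacuum.
SIGNAL_SPEED = 2e8  # m/s
LIGHT_SPEED = 3e8   # m/s

print(LIGHT_SPEED / SIGNAL_SPEED)  # 1.5x speed-up, as claimed

# Over a 30 cm motherboard trace the difference is about half a nanosecond:
trace_length = 0.30  # metres
print(trace_length / SIGNAL_SPEED - trace_length / LIGHT_SPEED)  # ~5e-10 s
```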

The most mainstream connection at the moment is PCIe 2.0 x16, which is great, and I don't think we'll see any PCIe 3.0 graphics cards any time soon. The PCIe 3.0 standard has been around since late 2010; we already have PCIe 3.0 motherboards, but the cards may not appear till 2012/2013.
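For scale, the per-direction bandwidth of an x16 slot works out roughly like this (standard per-lane figures; the helper function is just a sketch):

```python
# PCIe 2.0 carries ~500 MB/s per lane after 8b/10b encoding overhead;
# PCIe 3.0 carries ~985 MB/s per lane thanks to 128b/130b encoding.

def x16_bandwidth_gb_s(mb_per_lane: float, lanes: int = 16) -> float:
    return mb_per_lane * lanes / 1000.0

print(x16_bandwidth_gb_s(500))  # PCIe 2.0 x16: ~8 GB/s each way
print(x16_bandwidth_gb_s(985))  # PCIe 3.0 x16: ~15.8 GB/s each way
```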

Sooner rather than later, games will require even more bandwidth and even more memory, and will need it all shunted around even quicker.

The best solution is to create a massive GPGPU (general-purpose GPU), where everything is contained within one die. That is quite possibly the future, as we already see with Sandy Bridge's on-die graphics.

At the moment, with discrete graphics, the CPU has to work out where the data needs to go: your CPU takes the game's data and sends it to the graphics card for processing. I don't know how heavily that taxes a CPU, especially considering the speeds they run at today; and I don't develop games, so I'm not even sure whether engineers/programmers can write low-level code that dedicates particular core(s) to feeding the graphics card, leaving the other 1 to 7 cores free for physics and AI (and that's not even counting hypothetical 'virtual' cores).
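For what it's worth, pinning threads to cores is possible today; here's a hypothetical, Linux-only sketch of the idea using Python's os.sched_setaffinity (illustrative only: a real engine would use platform APIs such as SetThreadAffinityMask on Windows, and Python's GIL means these threads won't actually run Python code in parallel):

```python
import os
import threading

def render_submission_loop():
    # Pin this thread to core 0 so it alone feeds the graphics card.
    # On Linux, pid 0 means "the calling thread".
    os.sched_setaffinity(0, {0})
    # ... submit draw calls to the GPU here ...

def physics_and_ai_loop(core: int):
    # Each worker claims one of the remaining cores for physics/AI.
    os.sched_setaffinity(0, {core})
    # ... run physics or AI simulation here ...

threading.Thread(target=render_submission_loop).start()
for core in range(1, os.cpu_count()):
    threading.Thread(target=physics_and_ai_loop, args=(core,)).start()
```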

/End of 2 pence.
 
If we could use light it would make the transfer of data around your PC roughly 1.5x quicker.

Ever seen that big yellow ball in the sky? :p
 