That's an interesting statement . . . what is that based on, please? . . . I'd like to know more . . .
Okay, it works like this:
Most multi-GPU setups use alternate frame rendering (AFR), where one GPU works on one frame while the other works on the next. Unfortunately, the two GPUs don't usually output their frames at regular intervals. You will often get one frame output, then a short gap before the next frame, then a longer gap before the one after that, and so on. The issue is that the eyes judge smoothness by the *longest* gap between frames, not by the raw number of frames output per second, which is what your FPS counter measures.
This phenomenon is known as "microstutter", which is a terrible name, because it all happens on a very short timescale and does NOT look like "stuttering" at all, unless you're running at framerates low enough to clearly see each individual frame (in which case you have crap performance anyway - microstutter or not). It is really just framerate irregularity.
So what's the problem? Well, if you look at a game scene running at (say) 60fps on a single-GPU setup, the frames are usually output at very close to regular intervals, and the scene looks like it is running at 60fps. BUT on a multi-GPU setup, because the frames are not output regularly, and because it's the longest gap between frames that governs how smoothly we see the game scene at that instant, it looks like it's running at a slower framerate. The value of this "apparent framerate" must lie somewhere between 30fps and 60fps: 60fps is the best case (perfectly regular frame output), and 30fps is the worst case, where two frames are output virtually simultaneously, followed by a double-length gap before the next pair.
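To put rough numbers on that, here's a small sketch. The frame times are made up for illustration, the alternating short/long pattern is a simplification of real AFR output, and the "apparent" framerate is simply taken as 1 / (longest gap between frames), following the reasoning above:

```python
# Made-up example: 60 frames arrive in one second, so an FPS counter reads
# 60fps, but the frames come in pairs - a short gap, then a long gap.
# "Apparent" framerate here = 1 / (longest gap between frames).

def apparent_fps(short_gap_ms, frames_per_second=60):
    """Apparent framerate when AFR output alternates short/long gaps."""
    pair_period_ms = 2 * 1000.0 / frames_per_second   # time taken by each pair of frames
    long_gap_ms = pair_period_ms - short_gap_ms
    return 1000.0 / long_gap_ms

even = 1000.0 / 60            # 16.7 ms short gap = perfectly regular output
print(apparent_fps(even))     # best case   -> 60 fps
print(apparent_fps(6.0))      # uneven      -> ~36.6 fps
print(apparent_fps(0.0))      # worst case  -> 30 fps (frames arrive back-to-back in pairs)
```

So the FPS counter says 60 in every case, but the more the gaps bunch up, the closer the apparent framerate gets to 30.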
It's possible to measure the amount of microstutter, and compute the "apparent" framerate (I wrote a program to do it - see the thread here). In most cases where microstutter appears (see below) I found an apparent framerate reduction of 10-30%. This means that in our 60fps example the game scene would appear to be running at somewhere between 42 and 54fps. It also means that, in the real world, you can't compare the framerate of a multi-GPU setup with that of a single-GPU setup. A multi-GPU setup will always need a higher framerate to demonstrate the same degree of smoothness.
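I can't reproduce the actual program from that thread here, but the basic idea can be sketched from a frame-time log. The timestamps below are invented, and rating each frame by the longer of its two neighbouring gaps is just one crude way to estimate the apparent framerate, not necessarily the method the original tool used:

```python
# Rough sketch, not the actual measurement program.
# Frame completion timestamps in ms (invented, showing a short/long AFR pattern).
timestamps_ms = [0.0, 12.5, 33.3, 45.8, 66.7, 79.2, 100.0, 112.5, 133.3]

gaps = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

# Average framerate, as an ordinary FPS counter would report it.
avg_fps = 1000.0 * len(gaps) / (timestamps_ms[-1] - timestamps_ms[0])

# "Apparent" framerate: for each pair of consecutive gaps, only the longer
# one counts towards perceived smoothness, so average 1 / (longer gap).
apparent_fps = sum(1000.0 / max(a, b) for a, b in zip(gaps, gaps[1:])) / (len(gaps) - 1)

reduction = 100.0 * (1.0 - apparent_fps / avg_fps)
print(f"average fps:  {avg_fps:.1f}")    # ~60 fps
print(f"apparent fps: {apparent_fps:.1f}")  # ~48 fps
print(f"reduction:    {reduction:.0f}%")    # ~20%, in the middle of the 10-30% range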
One final important point: Microstutter all but disappears in two main circumstances:
1. CPU limitation: If a game scene is CPU-limited, the GPUs finish their workload before the CPU and wait for new input. In these circumstances the GPUs' output syncs nicely to the regular output of the CPU, and frames are output regularly.
2. When using vsync: When using double-buffered vsync, the GPU is almost never working at 100% capacity. On a 60Hz screen, the only framerates that can be output are 60, 30, 20, 15, 12, etc. (integer divisions of 60). Unless the maximum capacity of the card is very close to one of these rates, there is some time left waiting idle for the framebuffer to clear. This waiting period allows the two GPUs to always start working at regular intervals, and maintains regular output (see the sketch after this list).
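Here's a quick sketch of that vsync quantisation. The render times are made up, and the model is deliberately simplified (one frame in flight, no triple buffering, no driver queueing): a frame that misses a refresh waits for the next one, so the steady output rate is always 60 divided by a whole number, and the GPU sits idle for the rest of each interval.

```python
import math

REFRESH_HZ = 60
REFRESH_MS = 1000.0 / REFRESH_HZ                    # ~16.7 ms between refreshes

def vsynced_output(render_ms):
    """Steady framerate and per-frame idle time under double-buffered vsync."""
    refreshes = math.ceil(render_ms / REFRESH_MS)   # whole refreshes spanned per frame
    fps = REFRESH_HZ / refreshes                    # always 60/n
    idle_ms = refreshes * REFRESH_MS - render_ms    # time spent waiting for the next refresh
    return fps, idle_ms

for render_ms in (10.0, 20.0, 40.0, 70.0):          # made-up per-frame render times
    fps, idle_ms = vsynced_output(render_ms)
    print(f"{render_ms:4.0f} ms/frame -> {fps:4.1f} fps, {idle_ms:4.1f} ms idle")
# 10 ms -> 60 fps, 20 ms -> 30 fps, 40 ms -> 20 fps, 70 ms -> 12 fps
```

That idle time at the end of each refresh interval is what lets both GPUs line up and start their next frames in step, which is why the output stays regular.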
The tl;dr version: Adding a second GPU will still generally improve your performance, although not by as much as benchmarks suggest. Unless you're planning to always run vsync, consider a faster single-GPU solution instead.
edit: Thanks ejizz - beat me to it
