Thanks for writing that reply, the basic technical SLI stuff I already knew but I haven't encountered an explanation of this "microstutter" thing before? . . . I just read the XS thread through from start to finish and I can't say there is any evidence presented in there that would stand up in a court of law!
All I'm doing is applying a simple and well-known mathematical algorithm to quantify the degree of framerate variance in the output. It exists, and there are numerous ways it can be seen. My interest is in assessing the situation under as wide a range of conditions as possible, so I can better understand it.
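For the curious, the calculation is nothing exotic - roughly along these lines (a simplified Python sketch with made-up numbers, not the exact code from my tool, but the same idea: how much do the frame intervals vary around their average):

import statistics

# frame times in milliseconds, i.e. the gaps between consecutive frames
frame_times = [8.9, 13.4, 9.1, 13.2, 8.8, 13.5, 9.0, 13.3]

mean = statistics.mean(frame_times)
spread = statistics.pstdev(frame_times)

# express the spread as a percentage of the average frame time
print("average frame time: %.2f ms" % mean)
print("variation: %.1f%% of the average" % (100.0 * spread / mean))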
I could just jump on the "microstutter" bandwagon with you guys and accept the explanation as given but allow me to take up the role of "Devil's Advocate" and say the problem doesn't exist: your program is badly coded and the few users who complained had a faulty overclocked system and mushed up drivers?
Nothing wrong with playing devil's advocate. But let me set your mind at ease: The phenomenon is repeatable, and to a certain extent predictable. We understand where it comes from, and the situations in which it occurs. Anyone with a multi-GPU setup can see it for themselves. Just run a FRAPS benchmark, and check the output. It's not just a product of mushed up drivers.
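If you want actual numbers rather than eyeballing the log, the frametimes file FRAPS writes during a benchmark is (as far as I remember) just a cumulative timestamp in milliseconds for every frame, so turning it into per-frame gaps is a few lines of Python (the filename and column layout here are only an example - check your own file):

import csv

# FRAPS frametimes log from a benchmark run (example filename)
with open("game frametimes.csv") as f:
    rows = csv.reader(f)
    next(rows)  # skip the header line
    stamps = [float(row[1]) for row in rows]  # assumes: frame number, cumulative ms

# per-frame gaps are the differences between consecutive timestamps
frame_times = [b - a for a, b in zip(stamps, stamps[1:])]
print("%d frames, average gap %.2f ms" % (len(frame_times), sum(frame_times) / len(frame_times)))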
If a genuine punter looked at some SLI benchmarks and saw 90FPS in a game he liked and then bought the exact hardware as used in the review and installed everything, are you suggesting the performance wouldn't be the same? . . . all things being equal he should see approx 90FPS in the counter and he should have a smooth and enjoyable end user experience? . . . are you suggesting otherwise? . . . and if so can you prove it factually beyond the "Hearsay" of 1000 people?
The hardware would still output 90fps. This is 100% true. The subtlety is that the 90fps would be output irregularly, and so would look more like a game scene running at, say, 75fps (for a 15% microstutter readout). The reason for this is that the human eye judges smoothness by the gaps between frames - it doesn't just count the raw number of frames like a computer. In your 90fps case the game would still seem very smooth, since 75fps from a regular output source would still look smooth...
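To put rough numbers on that (a back-of-the-envelope illustration, not a precise perceptual model):

# 90fps average means an 11.1ms average gap between frames
avg_gap = 1000.0 / 90
# with the gaps varying by ~15% around that average, the longer ones sit around
long_gap = avg_gap * 1.15
# and those longer gaps are what your eye keys on
print("average gap: %.1f ms (90 fps)" % avg_gap)
print("longer gaps: %.1f ms (~%.0f fps)" % (long_gap, 1000.0 / long_gap))
# about 12.8ms, i.e. somewhere in the high 70s fps - the same ballpark as the 75fps above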
Okay, take it to the extreme as an example, and consider a case where you have 60fps output, but the frames come out in pairs - each pair output simultaneously, or just a fraction of a millisecond apart. Your eye can't separate the two frames in each pair, so although a solid 60fps is being output, you're looking at a scene that seems as if it is running at 30fps. This would represent 100% microstutter in the program I wrote. Reality is a much less extreme version.
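You can see the same thing with the little calculation from earlier (again just a sketch with made-up numbers, not my tool's exact output):

import statistics

# 60fps average, but the frames arrive in pairs: no gap within a pair,
# then a double-length gap to the next pair
frame_times = [0.0, 33.3] * 30

mean = statistics.mean(frame_times)
spread = statistics.pstdev(frame_times)
print("average frame time: %.1f ms - a solid 60 on the FPS counter" % mean)
print("variation: %.0f%% of the average" % (100.0 * spread / mean))
print("longest gaps: %.1f ms (~%.0f fps to your eye)" % (max(frame_times), 1000.0 / max(frame_times)))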
As for proof, I can prove that it exists, and I can measure its effect. That's factual science that will stand up to scrutiny. As for anything else, it's all subjective. Adding a second card will still improve performance, but, microstutter or not, that doesn't guarantee a "smooth and pleasant gaming experience".
In that thread there were a lot of assumptions and conjecture made? . . . has anyone done some very, very controlled testing, noting exact hardware configs (stock/overclocked etc), driver versions, made videos etc? . . . is this something that can be observed with the human eye or? . . .
I would love to have a whole suite of hardware on which to test the effect scientifically. Of course, I only have one rig and a couple of hardware sets to work with properly. So, I wrote a tool that other people could use to generate data in scenarios I can't replicate. I'd love for a review site like anand or HardOCP to include a formal analysis in their reviews. But they are familiar with average FPS benchmarks, and further complicating things is not an attractive proposition.
What I learned from XS and from Hardforums is that it affects a broad range of configurations, and that when vsync is enabled there is little to no microstutter (I had found this in my own testing, but knowing it held for other setups was interesting). I also discovered that when the GPUs were not working at 100% capacity, microstutter was drastically reduced.
I read the bit about the "effect" being more noticeable when the GPUs were 95%-100% loaded and less in effect when CPU limited . . . due to the GPUs being able to sync up their output? . . . but how do you know this in fact is not actually something to do with an overclocked system? . . . i.e. when the system is overclocked something related to the graphics subsystem becomes a bit borked/out of range etc? . . .
As for the CPU limitation issue: don't think of it that way - the reduction in microstutter occurs whenever the GPUs are not working at full capacity. Whenever they have finished their workload and are waiting for data from another source, they become synchronised to that source. In the case of vsync, the output is synchronised to the time the framebuffer clears and the GPU can start working again, which is in turn tied to the monitor refresh rate. In the case of CPU restriction, the output is synchronised to the output from the CPU, which is needed for the frame rendering to begin or proceed. In each of these cases, the output is regular.
It's just the mechanics of how the hardware works in conjunction. It's interesting to see that it holds true in the physical scenarios I've been able to test, whether directly or indirectly.
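If it helps to see the mechanism in miniature, here's a toy model of the vsync case (purely illustrative - it ignores the back-pressure on the GPUs and plenty of other real driver behaviour, but it shows why tying the output to the refresh makes the gaps regular):

# two GPUs alternating frames (AFR) finish rendering at irregular times,
# but with vsync a finished frame only goes out on a 60Hz refresh tick,
# and only one frame can go out per tick
refresh = 1000.0 / 60

# finish times (ms) for a pair of GPUs rendering ~25ms per frame, 5ms out of step
finish = sorted([25.0 * n for n in range(1, 5)] + [25.0 * n + 5.0 for n in range(1, 5)])

displayed = []
next_tick = refresh
for t in finish:
    while next_tick < t:          # wait for the first refresh tick after the frame is ready
        next_tick += refresh
    displayed.append(next_tick)   # the frame goes out on that tick...
    next_tick += refresh          # ...and the next frame has to wait for a later tick

def gaps(times):
    return [round(b - a, 1) for a, b in zip(times, times[1:])]

print("gaps without vsync:", gaps(finish))     # alternating 5.0 / 20.0 ms
print("gaps with vsync:   ", gaps(displayed))  # a steady 16.7 ms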
If someone runs two GTX 460s on a stock Intel Core i5 system and loads up a game and plays at 1920x1200 with 4xAA (or 8xAA etc) are you guys suggesting the end user won't have a lovely smooth experience? . . . assuming say the benchmark for this config was approx 60FPS? . . . would everything look nice and smooth etc?
I didn't say anything of the sort, and I don't have any interest in the performance of the card in any individual game.
All I'm trying to do is point out that this phenomenon exists in multi-GPU systems, and until it is sorted, people looking to buy a multi-GPU system should be aware of it. Adding a second card will still improve real-world gaming performance over a single card of the same type. That's for sure. But the real-world improvement is not going to be as much as you might expect from average FPS benchmark scores.
It's no different to pointing out that minimum framerates might be worth considering as well as average scores. Or that GPU memory use at different resolutions would be worth reporting. It's just another facet of the complicated world of computer hardware.