
Microstutter - fact or fiction? Let's find out once and for all...

There is much debate about 'microstutter' (which should really be called 'uneven frame output' but anyway) when using multi-GPU setups. Many people report that they notice this when using their multi-GPU systems. A few have produced results 'proving' its existence, whereas others swear down that they have never experienced it, and/or question the validity or applicability of the very few numerical results actually available.

Anyway, to help answer the question, I have written a very simple little program which analyses FRAPS benchmark outputs to determine the degree of microstutter present. You can download the program here:


http://www.mediafire.com/download.php?2moyugibhez (thanks lightnix...)


Of course there will be some non-uniformity of frame output, even in single GPU setups. Also it is clear that the degree of non-uniformity changes from game to game. So, what would really be interesting is to find out by how much using a multi-GPU setup increases the level of microstutter, if at all.

Instructions for use are included in the readme.txt file (pasted below).

To get started, here are a couple of results I gathered to check that the thing was working. These were both gathered while standing still, looking out across a vista (there was some dynamic activity happening in the distance in each case, however):


System:
E6600 @ 3.4GHz
GTX280 @ 650/1107

HL2 ep2:
[image: microstutterhl2gj4.png]

Crysis (1920*1200 high):
[image: microstuttercrysisao1.png]



Anyway, single GPU results would be interesting, but the multi-GPU results should be the real eye-openers, one way or another. It would also be nice to see if there is a difference between SLI and xfire as far as microstutter goes. If so, that would be a real selling point in my eyes.





readme.txt included in file said:
This little program attempts to quantify the degree of "microstutter" in a particular FRAPS benchmark output. Microstutter has become the popular term for describing uneven frame output, often present in multi-GPU setups when running with an alternating frame rendering (AFR) configuration. Its effect, when present, is to make a given framerate appear 'less smooth' than an equivalent framerate with a single GPU setup.



*** How to use the program ***

1) Make a folder wherever you like, and copy across the microstutter_test.exe file

2) Download and install FRAPS (http://www.fraps.com/). In FRAPS, go to the 'FPS' tab, and select the 'Frametimes' tick-box (located at the bottom left)

3) Enter your game with FRAPS running, and at a suitable point, run a FRAPS benchmark (default key is F11, I think...). Run the benchmark for a suitable length of time (say 30 - 60 seconds) during regular gameplay (no loading screens)

4) Go to the FRAPS/benchmarks folder, and find the appropriate *.csv file for the benchmark you just ran.

5) Copy this *.csv file to the microstutter test folder and rename the *.csv file to: input.csv

6) Run the microstutter_test.exe program, and observe the microstutter index. The higher the index, the greater the effect of microstutter during the benchmark.
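
For reference, once renamed, input.csv should look something like this (this layout is from memory of FRAPS' frametimes output - a header row, then the cumulative time of each frame in milliseconds - so check your own file if in doubt):

Frame, Time (ms)
1, 0.000
2, 16.712
3, 33.401
...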




*** How NOT to use the program ***

1) Do not use with (double buffer) vsync enabled.
The microstutter index will always appear to be zero, since double buffer vsync switches discretely between constant framerates. However, if you have triple buffer vsync working properly, and your framerate stays below your monitor refresh rate (usually 60), then it might be interesting to see by how much the microstutter is reduced.

2) Do not use in games with a maximum framerate cap (unless your framerate stays below this)
As above, a fixed maximum framerate will lead to a zero microstutter index. Microstutter is not an issue anyway if you are operating at your maximum monitor refresh rate, or at a fixed framerate.

3) Do not run a benchmark while the game is loading, particularly if a separate loading screen is shown. In-game loading and paging should be accounted for to some extent (see below), but it is best to avoid this if possible. A good microstutter index can be obtained simply by staring into space and doing nothing.



*** How does the program work? What is the microstutter index, exactly? ***

In short, the degree of microstutter is calculated for every frame by measuring how far its frametime varies from the local average frametime. This variation is scaled by the local average frametime to obtain a non-dimensional index. The reported 'microstutter index' is the average of these values over the benchmark, multiplied by 100.

So, the smaller the microstutter index the closer each frame is to the 'smoothed' average framerate, and the less 'noisy' the data.

You can think of the microstutter index as being the (average) percentage variation of each frame away from the stabilised local framerate.


...Step by step:

The local average frametime is obtained by taking the average of the frametime for 4 frames in either direction (for a total of 9 contributions to the average). This is not calculated for the first 5 or last 4 frames, and so these frames are not included in the microstutter statistics.

The variation away from the local average frametime is calculated for each valid frame. To help avoid the final data being skewed by large 'spikes' due to paging and other game-related events, the largest 2% of local variations are culled, and no longer appear in the statistics.

The local variation in frametime is divided by the local average frametime to produce a small nondimensional number (local microstutter index).

The local microstutter index is then averaged over all valid, non-culled frames. This number is multiplied by 100 to produce a more 'user-friendly' range of numbers.
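
If anyone wants to check the numbers by hand, here is a minimal Python sketch of the procedure above (i.e. index = 100 x average of |frametime - local average| / local average). To be clear, this is not the actual program (that's a Fortran 90 executable) - just an illustration, and it assumes the input.csv layout shown earlier (header row, then cumulative frametimes in ms):

import csv

def microstutter_index(path="input.csv", window=4, cull_frac=0.02):
    # Read the cumulative frame times (ms) from the FRAPS frametimes file.
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    times = [float(r[1]) for r in rows[1:]]     # skip the header row

    # Per-frame frametimes are the differences between successive times.
    # The very first frame has no frametime of its own, which is why the
    # first 5 (rather than 4) frames drop out of the statistics.
    ft = [b - a for a, b in zip(times, times[1:])]

    # Local index for each frame with 4 neighbours on either side:
    # variation from the 9-frame local average, scaled by that average.
    local = []
    for i in range(window, len(ft) - window):
        avg = sum(ft[i - window:i + window + 1]) / (2 * window + 1)
        local.append(abs(ft[i] - avg) / avg)

    # Cull the largest 2% of variations (paging spikes and the like).
    local.sort()
    keep = local[:int(len(local) * (1 - cull_frac))]

    # Average the rest, and scale by 100 for a friendlier number.
    return 100 * sum(keep) / len(keep)

print("microstutter index: %.2f" % microstutter_index())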



*** Anything else? ***

Well, don't abuse me for the noobishness of the program. I'm not a computer scientist or software engineer or anything like that.

If you find any real bugs, let me know on the ocuk forums thread (see http://forums.overclockers.co.uk/ and look for Dr_Fartypants)

If you want to create a nice visual interface for the program, or otherwise improve it, then go right ahead! You can use the executable, or I can pass you the (horrible Fortran 90) source if you like. My only requirement is that you let me know what you're doing.

If you want to find me then I can usually be found at OcUK forums (Dr_fartypants), or hardforums (arseface).
 
The amount of stutter varies game-by-game anyway, so I wouldn't worry about that. The key thing will be to see if multi-GPU setups have a similar level of microstutter (<10%, say), or if they're much, much higher (i.e. 30%+).

But it's certainly odd that you get a larger value with vsync on... I'll have a look at how FRAPS handles it - whether it records the time at which the frame is displayed, or the time it is sent to the buffer. The other possibility is that the monitor is not updating at dead-on 1/60th of a second every time.



edit - yeah, that is strange, with vsync. I just tested HL2 with vsync enabled, and I got a microstutter index of 0.56. I guess it could be down to the monitor. Maybe a TV (even a good one like yours) isn't as precise in its update times, since it doesn't need to be?
 
http://www.rounddodger.plus.com/files/grid_vsync.gif

Here are the results for reference with in-game VSYNC enabled - not sure what type of vsync it is: double, triple, whatever!

Yep - very strange. Well, what can I say? It could be due to the TV you're using, but that's a guess. I just tested HL2 and Crysis with vsync, and both come out with values less than 1%.

Still, for a really noticeable microstutter I would be expecting a value of around 30%, or even higher.
 
Try running GRID and see what you get

I don't have grid, I'm afraid. But yeah, sounds like it's just an issue with GRID.

Surprising though, as I would have thought vsync would be so closely tied to the monitor refresh rate as to be virtually the same in all games.

edit - if you want, you can send me the file and I'll have a look at it more closely. arseface1 (at) hotmail (dot) com
 

Just had a look, and wow - it really is all over the place isn't it? Really not sure what's going on with GRID then. I suppose it could be an issue with the FRAPS readout or something.


Nice program. Will get round to a few tests of 8800M GTX SLI vs single on the laptop and 8800GTX SLI vs single on the desktop later on.

Should be interesting. :)

That would be great :) Single vs SLI (and x-fire) comparisons are what we really need to see to settle the debate :)
 
Well, some multi-GPU results at last :)

Thanks a lot for your contribution, Devious. Some interesting results. The Crysis single-card results are surprising - I've never seen such a high level of microstutter on my single card setup...

Anyway, it certainly seems that the level of MS is highly game dependent. When we have more results it would be interesting to see which SLI/x-fire rendering mode each of the games is running in, to see if there is a correlation.
 
Man, this is like your No.1 topic on this forum, Dr. F. Kinda bored of it already - can you keep it confined to one thread instead of splurging everywhere (making threads / taking threads off-topic)?

Sincerely,
Matthew


Well moderator Scougar, I'm not exactly sure where I have taken threads off-topic, but I can certainly see why my producing a tool to answer questions debated in another thread (here) would offend you.

You know, there is a simple solution to this. You can just put me on ignore. That way my endless 'splurging' will not interfere with your own efforts to help people make an informed decision on what hardware to buy.

Sincerely,
Flobalobalobaleth
 
I think what we need is a program which artificially delays the frame render time by slight variations; you steadily increase this frametime variation delta until it starts to become obvious to the user, at which point they click a button and it gives them a number.

People can then use the numbers determined with your program + FRAPS to compare with their own 'I noticed it at:' number, to determine whether a certain card setup is right for them (i.e. single card, CrossFire, SLI, etc.).

That would be a very nice utility. Unfortunately it's about 7.23 lightyears beyond my programming skills :(
 
After a quick scan of the thread I think I understand the VSYNC issue in relation to microstutter:

Say you have a display going at 60Hz; your graphics card will want to do 60fps. What if it can't do 60fps? Well, 55fps is no good because it will cause tearing, which is what VSYNC is trying to avoid. So it has to drop to the next framerate that it can handle while still being in sync, which I think is 30fps. So because you drop from 60 -> 30 and back up to 60, you've effectively "microstuttered".

So if you take a game like HL2, it's easier for your card to output 60fps all the time. I dunno what GRID is like, but I'd suspect it's harder to pull off the frames required to stay at 60fps.

Please don't kill me if I'm wrong, but I think that's how it works! :P

It's a good point, and yes, with vsync the framerate does drop to 30fps (or another integer divisor of 60) when the framerate can't keep up. However, that's not what's happening in the GRID benchmark, for two reasons:

1) I take into account the possibility of these kinds of effects (along with paging and other 'real' stutter effects) by 'culling' the worst 2% of frametime variations before taking the average to calculate the index. So, unless more than 2% of frames are next to 'jump' frames, the effect won't come out significantly in the result (see the quick check after this list).

2) The average framerate is pretty much bang on 60fps, implying that it has not dipped below 60fps during the benchmark (I use all frames to calculate the average fps).
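
To illustrate point 1, here's a quick synthetic check (all numbers made up): a steady 60fps trace where 1% of frames miss vsync and take two refresh intervals. Run through the same procedure as the program, the index comes out tiny compared with the ~30% of really noticeable microstutter:

import random

random.seed(0)
ft = [33.33 if random.random() < 0.01 else 16.67 for _ in range(3000)]

local = []
for i in range(4, len(ft) - 4):
    avg = sum(ft[i - 4:i + 5]) / 9          # 9-frame local average
    local.append(abs(ft[i] - avg) / avg)

local.sort()
keep = local[:int(len(local) * 0.98)]       # cull the worst 2%
print("index: %.2f" % (100 * sum(keep) / len(keep)))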

On top of this, rounddodger sent me the benchmark file, and it's really all over the place. Quite strange, actually... I imagine it's something GRID specific.


Anyway, I don't want you to think I'm 'jumping down your throat' with this - it was a very good idea. It's nice to see people thinking about the 'how and why' rather than just the 'what' :)
 
Just to be clear, are we still saying that microstutter is unique to multi-GPU setups, and that all the people on this thread claiming microstutter on single GPU setups are actually suffering from some other, unrelated performance hitch?

There will always be some degree of 'microstutter', even with single GPUs. It is actually this irregularity of frame output which makes it impossible to predict when to output frames in multi-GPU AFR mode, and leads to 'noticeable' microstutter! For the most part, though, it seems small with single GPU setups (less than 10%).

So far, with the few results we have with multi-GPU setups, it seems that on some games we see much larger microstutter values (up to 40%), whereas on others, particularly ones that don't make proper use of x-fire, we see similar values to the single GPU setup.

This being said, I don't think we have enough SLI/x-fire results yet to start drawing conclusions. When we have 5 or 6 more results across a few games I think we can start to look at patterns.
 
It SEEMS from your descriptions that you are trying to ignore this type of stutter but instead are looking for general variations from the total average frame rate. To me this means that you are looking at normal drops in frame rate from the average, which you would get with any GPU. If this is the case then I would be the first to say that the measurement isn't worthless, but it doesn't imo measure microstuttering?

'Microstutter' takes place at very short timescales.

To make it clear: to make the index, I don't measure variation away from the TOTAL average framerate (which would just be measuring framerate consistency, as you say...), I measure variation away from the LOCAL averaged framerate.

What I do, for each frame, is look at the frametime for the surrounding 9 frames (that frame, 4 before it, and 4 after it). From this I determine the LOCAL averaged framerate. This smooths out local solution noise, which is what we are trying to measure. I then compare the frametime of that particular frame against this 'averaged' frametime. This is a typical method for determining local data noise (e.g. turbulence intensity in fluid dynamics).

Note that this 'averaged' framerate will still vary (a lot) during the course of the benchmark. All framerate indicators that you see in games (like FRAPS or in-game indicators) use some kind of local time averaging, otherwise they would constantly be changing value every 1/50th of a second or so.
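
To put some (made-up) numbers on it: take one 40ms frame in a run of 20ms frames. The local 9-frame average is 22.2ms, so that one spike scores a local variation of about 0.80 - an 80% local microstutter index for that frame:

ft = [20, 20, 20, 20, 40, 20, 20, 20, 20]
avg = sum(ft) / 9                   # local average = 22.2ms
print(abs(ft[4] - avg) / avg)       # ~0.80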


As for culling the worst 2% of values - we're looking for purely local effects (microstutter) rather than global hitching (paging / regular stutter etc). By removing the worst 2% of values we eliminate most of the jumps caused by paging or other global phenomena.

For more information on the procedure I use, take a look at the readme :)


Microstuttering, from my own experience and understanding, happens when the second GPU fails to "hit" the next frame in time and so the output from the second GPU is ignored. This is effectively (imo) just like the problem with VSYNC: when the frame rate drops below the monitor sync rate, the frame is ignored and you get a stutter.

That's one aspect of it, but 'microstutter' in general is just 'framerate output irregularity'. Of course, if one frame output is totally missed, that will appear as a significant irregularity. Since this would happen frequently and on a short timescale, it would be picked up by my program, and would add to the microstutter index.
 
Fantastic :) Thanks for the results, lay-z-boy

It certainly looks like there is some form of microstutter going on there which is due to the SLI configuration. Although, 20% in HL2 isn't too bad, really. Certainly the increase in raw framerate will more than make up for any decrease in apparent smoothness due to the microstutter. So, in this case, SLI seems a worthwhile bet.
 
How does it actually play though? Does it seem to "stutter" in the referenced setup?

At 112fps average I would doubt it. Even if the microstutter were far worse (closer to 100%, i.e. frames effectively arriving in pairs), it would still 'appear' at the very worst to be half this (about 56fps average). This is still fairly smooth.

The strong scaling of performance with SLI should have overcome the effect of microstuttering. But of course it's subjective, so maybe lay-z-boy has a different opinion.

Remember, 'microstuttering' isn't "stutter" per se, like paging. It's just an apparent reduction in framerate due to uneven output of frames.
 
I'm going to do the benchmarks again because of this and update my original post accordingly.

Sounds good. Could you leave the originals there though, so we can see what (if any) difference the new drivers make?

It would be nice to see how much is down to the software interfacing and how much is down to the hardware itself.
 
Looks like drivers can have a big impact then. This suggests that the problem is at least 'solvable'. Which is good news... :)



On a separate note, I found this graph, posted by Sampsa over at XS, to be quite interesting also:




[image: grid_graph2.png]
 
I had microstutter on my 9800GX2, but it was purely to do with the silly 512MB of usable VRAM and my screen's 2560x1600 res. And I think this is the main problem for this issue: not enough memory. Which will affect non-SLI/crossfire setups exactly the same.

"microstutter" is a completely different phenomenon to the VRAM to system-RAM paging that you describe. The former is due to uneven frame output, and the latter due to not having enough video memory in which to fit all the textures.

Any time you get paging though it really does kill performance. I can well imagine that the GTX280 is a lot better than the GX2 in some games, on a 30" screen - particularly with AA enabled.
 