To SLI or not?

Rules for buying SLI:


-When 2x mid-range cards are both faster and substantially cheaper in combined cost than the current top-end single card (or when availability of the top-end card is poor).
-When you already have one of the cards and it's substantially cheaper to get a second one than buying the latest/fastest card.


At all other times I'd be looking at single GPUs.
Agree with these two points. I've weighed up the pros/cons of switching to a single current-gen card (GTX 470), and it's not really worth it - my current SLI setup doesn't struggle with anything I'm playing, and only cost £230.00 a few months ago. The only thing I'm missing out on is DX11.
 
That's an interesting statement? . . . what is that based on please? . . . I'd like to know more? . . .

Okay, it works like this:


Most multi-GPU setups use an alternate frame rendering technique (AFR), where one GPU works on one frame, while the other works on the next. Unfortunately, the two GPUs don't usually output the frames at regular intervals. You will often get one frame output, with a short gap before the next frame and then a longer gap before the next frame (etc). The issue is that the eyes notice the *longest* gap as a measure of smoothness, not the raw number of frames output as your FPS counter measures.

This phenomenon is known as "microstutter" which is a terrible name, because it all happens on a very short timescale and does NOT look like "stuttering" at all, unless you're running at framerates low enough to be able to clearly see each individual frame (in which case you have crap performance anyway - microstutter or not). It is really just framerate irregularity.

So what's the problem? Well, if you look at a game scene running at (say) 60fps, using a single GPU setup, then the frames are usually output at very close to regular intervals, and the scene looks like it is running at 60fps. BUT using a multi-GPU setup, because the frames are not output regularly, and because it's the longest gap between frames that governs how smoothly we see the game scene at that instant, it looks like it's running at a slower framerate. The value of this "apparent framerate" must lie somewhere between 30fps and 60fps. 60fps would be the best case (perfect regular frame output), and 30fps would be the worst case, where two frames are output simultaneously with a double gap until the next one.


It's possible to measure the amount of microstutter, and compute the "apparent" framerate (I wrote a program to do it - see the thread here). In most cases where microstutter appears (see below) I found an apparent framerate reduction of 10-30%. This means that in our 60fps example the game scene would appear to be running at somewhere between 42 and 54fps. It also means that, in the real world, you can't compare the framerate of a multi-GPU setup with that of a single-GPU setup. A multi-GPU setup will always need a higher framerate to demonstrate the same degree of smoothness.
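
To make that concrete, here's a minimal sketch of the idea in Python (not my actual program - the frame timestamps are made up purely for illustration). It assumes the eye perceives each pair of AFR frames at the rate implied by the longer of the two gaps:

Code:
# Estimate an "apparent" framerate from frame output times (in ms),
# e.g. a FRAPS frametimes log. Assumption: each pair of AFR frames is
# perceived at the rate implied by the LONGER of its two gaps.

def apparent_fps(timestamps_ms):
    gaps = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    raw = 1000.0 * len(gaps) / sum(gaps)           # what an FPS counter shows
    worst = [max(a, b) for a, b in zip(gaps[::2], gaps[1::2])]
    apparent = 1000.0 / (sum(worst) / len(worst))  # what the eye "sees"
    return raw, apparent

# Perfectly regular 60fps output: raw and apparent agree.
even = [i * (1000.0 / 60) for i in range(61)]

# Irregular AFR output: a 5ms gap then a 28.3ms gap, repeating.
g = [5.0, 28.33] * 30
uneven = [sum(g[:i]) for i in range(len(g) + 1)]

print(apparent_fps(even))    # -> (~60, ~60)
print(apparent_fps(uneven))  # -> (~60, ~35): counter says 60fps, looks like ~35fps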

One final important point: Microstutter all but disappears in two main circumstances:

1. CPU limitation: If a game scene is CPU limited, then the GPUs finish their workload before the CPU and wait for new input. In these circumstances the output of the GPU syncs nicely to the regular output of the CPU, and frames are output regularly.

2. When using vsync: With double-buffered vsync, the GPU is almost never working at 100% capacity. On a 60Hz screen, the only framerates that are output are 60, 30, 20, 15, 12 etc (60 divided by a whole number of refresh intervals). Unless the maximum capacity of the card is very close to one of these rates, there is some time spent waiting idle for the framebuffer to clear. This waiting period allows the two GPUs to always start working at regular intervals, and maintains regular output.
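
A quick sketch of that quantisation, with made-up render times just to illustrate the idea:

Code:
# Double-buffered vsync on a 60Hz screen: a frame that misses one
# refresh waits for the next, so output snaps to 60/1, 60/2, 60/3... fps
import math

REFRESH_MS = 1000.0 / 60  # one refresh interval, ~16.7ms

def vsynced_fps(render_ms):
    return 60.0 / math.ceil(render_ms / REFRESH_MS)

for ms in (10, 20, 40, 70):  # hypothetical GPU render times per frame
    print(ms, "ms ->", vsynced_fps(ms), "fps")
# 10ms -> 60fps, 20ms -> 30fps, 40ms -> 20fps, 70ms -> 12fps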



The tl;dr version: Adding a second GPU will still generally improve your performance, although not by as much as benchmarks suggest. Unless you're planning to always run vsync, consider a faster single-GPU solution instead.


edit: Thanks ejizz - beat me to it :p
 
Bear in mind that "microstutter" isn't exclusive to multi-GPU rendering - single-GPU systems can exhibit it as well for a number of reasons... unless your framerate is arbitrarily capped below the capabilities of your GPU, it's very rare even with a single GPU to have a completely consistent interval between frames... so the actual difference between single and multi GPU is less than the difference between multi GPU and the ideal.
 
Bear in mind that "microstutter" isn't exclusive to multi-GPU rendering - single-GPU systems can exhibit it as well for a number of reasons... unless your framerate is arbitrarily capped below the capabilities of your GPU, it's very rare even with a single GPU to have a completely consistent interval between frames... so the actual difference between single and multi GPU is less than the difference between multi GPU and the ideal.

This is true to a certain extent... But check out the thread I linked to earlier:


In most cases the single GPU setups show around 2% microstutter. The worst case I found was 5.9%, with the heaven benchmark.

With multi-GPU systems, unless the game is CPU-limited, the degree of microstutter is generally around 10-25%, with the worst case I found was 54% on an ATI 5850 tri-fire setup.


The magnitude of variation between the two setups is really quite large. It is far more significant on multi-GPU setups.
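
For anyone who wants to test their own system, here's a rough sketch of how a microstutter percentage along those lines could be computed - the general idea being the standard deviation of the frame intervals as a percentage of the mean (not necessarily the exact metric my program uses):

Code:
# Quantify frame-time irregularity as the standard deviation of the
# frame intervals, expressed as a percentage of the mean interval.
from statistics import mean, pstdev

def microstutter_pct(timestamps_ms):
    gaps = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    return 100.0 * pstdev(gaps) / mean(gaps)

# Perfectly regular output -> 0%; alternating 10ms/23.3ms gaps
# (still reads "60fps" on a counter) -> ~40%.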
 
Sheesh, this is turning into an informative thread :D Thanks for all the replies.

I've decided to go with a single GPU. I'm only placing the order on 1st Nov, so may get one of the 6000 series if they will be available by then or shortly after. The shop I am buying from has the following in stock:

Sapphire ATI Radeon HD5870 Vapor-X 1GB
HIS ATI HD5870 iCooler V 1GB
MSI ATI HD5870 Lightning 1GB

I'd appreciate some replies from people who own one of these and can recommend it - I'm thinking cool and quiet!
 
If you have to go SLI 460s, definitely get the 1GB model.
As much as I feel the 768MB card has its place, it is not in an SLI setup where both cards are bought at the same time and the alternative is a top-end single card.

I'd consider a GTX 470 for the great performance when OC'd and the relatively reasonable price. :)

Edit: That'll teach me to not refresh before replying. Good call on the single card choice.
Buy whatever 5870 is the best price with the longest warranty. I tend to try and stick to Sapphire now but... "Whatever's cheapest" can also tempt me. :D
 
Hello Duff-Man :)

check out the thread I linked to earlier

Thanks for writing that reply, the basic technical SLI stuff I already knew but I haven't encountered an explanation of this "microstutter" thing before? . . . I just read the XS thread through from start to finish and I can't say there is any evidence presented in there that would stand up in a court of law! :(

I could just jump on the "microstutter" bandwagon with you guys and accept the explanation as given, but allow me to take up the role of "Devil's Advocate" and say the problem doesn't exist, your program is badly coded and the few users who complained had a faulty overclocked system and mushed-up drivers? :confused:

If a genuine punter looked at some SLI benchmarks and saw 90FPS in a game he liked and then bought the exact hardware as used in the review and installed everything, are you suggesting the performance wouldn't be the same? . . . all things being equal he should see approx 90FPS in the counter and he should have a smooth and enjoyable end user experience? . . . are you suggesting otherwise? . . . and if so can you prove it factually beyond the "Hearsay" of 1000 people?

Obviously don't direct any ire towards me as I am just asking for "proof"! ;)

In that thread there were a lot of assumptions and conjecture made? . . . has anyone done some very, very controlled testing, noting exact hardware configs (stock/overclocked etc), driver versions, made videos etc? . . . is this something that can be observed with the human eye or? . . .

I read the bit about the "effect" being more noticeable when the GPUs were 95%-100% loaded and less in effect when CPU limited . . . due to the GPUs being able to sync up their output? . . . but how do you know this in fact is not actually something to do with an overclocked system? . . . i.e. when the system is overclocked something related to the graphics subsystem becomes a bit borked/out of range etc? . . .

If someone runs two GTX 460s on a stock Intel Core i5 system and loads up a game and plays at 1920x1200 with 4xAA (or 8xAA etc), are you guys suggesting the end user won't have a lovely smooth experience? . . . assuming say the benchmark for this config was approx 60FPS? . . . would everything look nice and smooth etc?

Thanks in advance for any answers! :cool:
 
Obviously I don't need to make this reply but will anyway! :p

If you have to go SLI 460s, definitely get the 1GB model
definitely? . . . haha I don't agree at all! :D

£100 extra up the swanny! . . .

As much as I feel the 768MB card has its place, it is not in an SLI setup where both cards are bought at the same time and the alternative is a top-end single card
That's exactly when the 768MB should be bought! . . . right now and at the same time? . . . the single 1GB card should be bought now if the user intends to add another 1GB card in say 12-18 months' time! . . . 'tis logical no? ;)

I'd consider a GTX 470 for the great performance when OC'd and the relatively reasonable price. :)

and the performance is much less than SLI'ed 768MB GTX 460s . . . I dunno, some people just love to burn money! :cool:
 
HIS ATI Radeon HD 5870 iCooler V 1024MB

£279.99 inc

That makes no sense at all? :confused:

spending £279.99 on a year-old GPU a month away from the next-gen Radeons makes little sense? . . . and it offers less performance than two SLI'ed GTX 460s? . . . look at the results in post #17 . . . why would somebody pay more money for a slower product? . . . why? :D
 
I've decided to go with a single GPU.
I don't think you decided . . I think you let a group of people decide for you . . or rather scared you into making that choice! :p

I'm amazed how like "putty" some end users are . . . safety in the herd indeed! ;)

I'm only placing the order on 1st Nov, so may get one of the 6000 series if they will be available by then or shortly after.
phew! :cool:
 
Hello Duff-Man :)

Thanks for writing that reply, the basic technical SLI stuff I already knew but I haven't encountered an explanation of this "microstutter" thing before? . . . I just read the XS thread through from start to finish and I can't say there is any evidence presented in there that would stand up in a court of law! :( [snip]

Such a tiresome post - it sounds to me personally that ANY amount of data could be presented to you and you would still refuse to believe it, unless Nvidia or AMD admitted it themselves.
Duff-Man's explanation was more than sufficient. Instead of asking for further proof on top of the already overwhelming evidence, how about YOU actually do some research and testing yourself, and report back if your findings are different from Duff-Man's...
 
I don't think you decided . . I think you let a group of people decide for you . . or rather scared you into making that choice! :p

I'm amazed how like "putty" some end users are . . . safety in the herd indeed! ;)


phew! :cool:

I can't believe just how insulting you're being to the OP.
Looks to me like you're just miffed that the OP hasn't gone for your SLY multi-GPU 'advice'...
 