4x 9800 GX2

LOL, that PC would probably not work that well for games. :(
Games would never scale that well across 8 GPUs, compared to the software they are using.

But how the hell have they fitted 4 of them on a motherboard? :confused:
 
Aye, but the cards are not talking to each other. I think in gaming terms it won't be good, as it doesn't have SLI.
 
LOL, that PC would probably not work that well for games. :(
Games would never scale that well across 8 GPUs, compared to the software they are using.

But how the hell have they fitted 4 of them on a motherboard? :confused:

Skulltrail, no doubt. Though how they managed to get the end two cards next to each other is anyone's guess...
 
Neat, I say that as a medical student gamer, lol, but how are they actually using 4 cards then?

They're using the 8 GPUs in parallel for scientific computations, using CUDA.

There are all sorts of machines (usually in rackmount form) that use 32, 64, 128 or even more GPUs for scientific calculation purposes. You can even buy pre-built NVIDIA "Tesla" boxes with multiple GPUs in them, designed for this purpose.
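As a rough illustration of how a box like this appears to software, here's a minimal sketch using the standard CUDA runtime enumeration calls (the exact output is just what you'd expect from four dual-GPU cards, not anything confirmed from the video):

```
#include <cstdio>
#include <cuda_runtime.h>

// Enumerate every CUDA-capable GPU the runtime can see.
// Four 9800 GX2s (two GPUs per card) should show up as 8 devices.
int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    printf("Found %d CUDA device(s)\n", deviceCount);

    for (int i = 0; i < deviceCount; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s (%d multiprocessors)\n",
               i, prop.name, prop.multiProcessorCount);
    }
    return 0;
}
```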
 
I'm confused, I thought there was no octo-SLI? And where are the SLI bridges?


It isn't using SLI.

It isn't a gaming rig; it's doing scientific computations.

SLI is a gaming-specific creation that allows two GPUs to contribute to rendering a single frame, or a sequence of frames. It has no purpose in scientific computation and is not supported by CUDA. You simply add another level of parallelism to the code you're writing and run it on multiple GPUs.
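For anyone curious what "another level of parallelism" looks like in practice, here's a minimal sketch. The kernel and buffer names are made up purely for illustration and error handling is omitted; the point is just that the host picks each GPU in turn with cudaSetDevice and hands it its own slice of the data - no SLI bridge, no driver pairing.

```
#include <cuda_runtime.h>

// Illustrative kernel: each GPU thread scales one element of its slice.
__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

// Split one big array across every visible GPU: each device gets its own
// slice of the data, its own device buffer and its own kernel launch.
void scale_on_all_gpus(float *host, int total, float factor) {
    int devices = 0;
    cudaGetDeviceCount(&devices);
    if (devices == 0) return;
    int slice = total / devices;   // assume it divides evenly for simplicity

    for (int d = 0; d < devices; ++d) {
        cudaSetDevice(d);          // subsequent CUDA calls target GPU d
        float *dev = 0;
        cudaMalloc(&dev, slice * sizeof(float));
        cudaMemcpy(dev, host + d * slice, slice * sizeof(float),
                   cudaMemcpyHostToDevice);
        scale<<<(slice + 255) / 256, 256>>>(dev, slice, factor);
        cudaMemcpy(host + d * slice, dev, slice * sizeof(float),
                   cudaMemcpyDeviceToHost);
        cudaFree(dev);
    }
}

int main() {
    const int N = 1 << 20;                     // a million floats
    float *host = new float[N];
    for (int i = 0; i < N; ++i) host[i] = 1.0f;
    scale_on_all_gpus(host, N, 2.0f);          // double every element
    delete[] host;
    return 0;
}
```

In real code you'd typically give each GPU its own host thread (or use asynchronous copies and streams) so the devices genuinely work concurrently rather than one after another, but the loop shows the basic device-selection idea.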
 
It would be rubbish; more cards don't mean more speed, as things start to get slower. SLI is bad enough, tri-SLI is a rip-off, and so on.
 
It isn't using SLI.

It isn't a gaming rig; it's doing scientific computations.

SLI is a gaming-specific creation that allows two GPUs to contribute to rendering a single frame, or a sequence of frames. It has no purpose in scientific computation and is not supported by CUDA. You simply add another level of parallelism to the code you're writing and run it on multiple GPUs.

Oh OK, thanks for explaining.
 
It would be rubbish; more cards don't mean more speed, as things start to get slower. SLI is bad enough, tri-SLI is a rip-off, and so on.

Have you actually read any of the posts in here? It isn't using SLI, so SLI performance is not an issue here.

And no, from the video it clearly isn't a Skulltrail mobo.

I wonder if four 3870 X2s would do the job just as well, if not better, for less money?
 
It would be rubbish; more cards don't mean more speed, as things start to get slower. SLI is bad enough, tri-SLI is a rip-off, and so on.

Well, for scientific stuff that uses CUDA it won't be rubbish. For SLI, then yes, because games don't scale nearly as well.

I've looked at nForce 790i boards and none of them have 4 PCI-E slots. :confused:

Edit: oops, 780 AMD chipset. Don't have a clue how they've managed to fit the last card on, as the last two PCI-E slots are right next to each other.
 
Edit: oops, 780 AMD chipset. Don't have a clue how they've managed to fit the last card on, as the last two PCI-E slots are right next to each other.

It's using an MSI K9A2 Platinum, which has double spacing between all the PCI-E slots - it says so on the website, under 'Specs & Benchmarks'. :p
 
And no, from the video it clearly isn't a Skulltrail mobo.

Didn't watch the vid (at work), just assumed they'd have gone Intel, but as the cards are doing all the work, an AMD mobo would have been the best bet.
 
I don't think people can get the fact that all they need is 4x PCI-E slots and it'll work into their thick skulls.

I don't think there is a need to insult anyone. Most people here are aware of SLI and its requirements, but not the requirements of teaming up GPUs for CUDA use.

Impressive, however those cards don't look like they get much breathing room!

Are we reaching the end of the CPU?

No. GPUs are only quick at massively parallel data crunching.
 