Will there be a comparable Nvidia multi GPU card to the 4870x2?

Looks like ATI might get the performance crown for a reasonable time with the 4870x2, which should make the prices interesting.

I'm most interested in when ATI (or Nvidia, for that matter) will manage profile-less CrossFireX/SLI. When that happens is when I'll go multi-GPU, and from what I've read it seems ATI will reach that goal first. Once that's reached, multi-GPU technology will be the future. Unfortunately it still remains to be seen whether it's even possible with DX10.
 
There are fundamental practical issues with multi-GPU setups - each GPU needing its own full set of memory, for example.

While a dual-GPU setup is a nice way to increase performance (ignoring the more subtle performance issues for now), greater parallelism is just not practical. Having an 8-GPU card with 8GB of memory, which can load only 1GB of textures, is not cost-effective.
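To put rough numbers on that (just my own back-of-the-envelope sketch, assuming the usual behaviour of replicating every texture into each GPU's local memory):

```python
# Rough illustration (my assumption): in a conventional multi-GPU card every
# GPU keeps its own full copy of the textures, so the usable texture budget
# is the total VRAM divided by the number of GPUs.

def usable_texture_memory(total_vram_gb: float, num_gpus: int) -> float:
    """Effective texture capacity when every GPU holds its own full copy."""
    return total_vram_gb / num_gpus

for gpus in (1, 2, 4, 8):
    capacity = usable_texture_memory(8, gpus)
    print(f"{gpus} GPU(s) sharing 8 GB -> {capacity:.1f} GB of unique textures")
# 8 GPUs sharing 8 GB leaves only 1 GB of unique texture data, as above.
```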

Large 'monolithic' GPU setups are here to stay - at least until inter-GPU bandwidth and latencies can be improved by several orders of magnitude, allowing all the parallel GPUs to read from the same memory stack - and that is way into the future.

Sorta follow your first point but lost on your second.

It's already been shown with the 4850 that two 4850s come close to a single-GPU GTX280, and two 4870s (and the 4870x2) will smash it. So are you saying that until dual GPUs outperform a single GPU by "several orders of magnitude", a single GPU is the way forward?

I thought as soon as dual GPUs beat a single GPU for the same or less money, it was game over for the single GPU?

Forgive me if I'm being thick and have misunderstood you.
 
Sorta follow your first point but lost on your second.

Yeah, on second read I didn't really put that very well.

In principle, if both GPUs could access a common data stack then they should act like a single unit, with the added bonus of not requiring a doubling-up of memory storage.

The problem with this type of implementation is that the two GPUs must communicate heavily in order to divvy up the workload and the texture loading efficiently. If we could have fast enough data transfer between the two GPUs then this could be done, and the idea of having lots of smaller GPUs powering the card could become realistic, because you wouldn't have to store 'N' copies of all your data when you have 'N' different GPUs.
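A little sketch of the difference that would make (purely hypothetical numbers of my own, assuming a fixed working set that is either replicated per GPU or held once in a shared pool):

```python
# Hypothetical comparison (my sketch, not anything confirmed about real hardware):
# total on-card memory needed to hold a given working set, depending on whether
# each GPU keeps its own copy or all GPUs read from a shared pool.

def vram_needed(working_set_gb: float, num_gpus: int, shared_pool: bool) -> float:
    """Total on-card memory required to hold one copy of the working set per design."""
    return working_set_gb if shared_pool else working_set_gb * num_gpus

for gpus in (2, 4, 8):
    replicated = vram_needed(1.0, gpus, shared_pool=False)
    shared = vram_needed(1.0, gpus, shared_pool=True)
    print(f"{gpus} GPUs, 1 GB working set: {replicated:.0f} GB replicated vs {shared:.0f} GB shared")
```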

There are some rumours that the r700 GPUs share a common data stack, which would be brilliant and a real step forwards IMO, since it should kill off most of the SLI/x-fire issues we see (although it probably won't offer as much of a performance bump as regular x-fire). Anyway, if this turns out to be true I would love to see the GPU schematic to find out how they did it, since to my thinking we need interconnects many times faster than those available today. I can still hope though!
 
I thought as soon as dual GPUs beat a single GPU for the same or less money, it was game over for the single GPU?

That's probably true, at the top end of the market. But without the advancement to allow the different GPUs to share a common memory stack, where does it end? 2 GPUs? 4? 8? Every time you double the number of processors you must double your onboard memory. Also, diminishing returns kicks in, as with everything in life, and you'll see less improvement every time you double up.
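Just to illustrate the diminishing-returns point with made-up numbers (an Amdahl's-law-style model of my own, not anything measured on real cards):

```python
# Illustration only (my assumption): a simple Amdahl's-law-style model of why
# doubling the GPU count gives smaller and smaller gains. 'parallel' is the
# fraction of frame time that actually scales across GPUs; the rest (driver
# overhead, inter-GPU sync, etc.) does not.

def speedup(num_gpus: int, parallel: float = 0.9) -> float:
    return 1.0 / ((1.0 - parallel) + parallel / num_gpus)

prev = speedup(1)
for gpus in (2, 4, 8):
    s = speedup(gpus)
    print(f"{gpus} GPUs: {s:.2f}x total, only {s / prev:.2f}x over the previous doubling")
    prev = s
```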

I'm sure that dual-GPU setups will start to become the norm for high-end setups, but in order to have the fastest dual-GPU solution you should also expect to have the fastest single-GPU solution (unless your single-GPU solution is so power-hungry as to preclude a dual-GPU implementation, as the 280 seems to be!).
 
Well, if the 65nm->55nm die shrink of the 9800GTX is anything to go by, the die area only shrank by 1mm². That's still not enough to make the GT200GX2 viable.
Eh!?

The same design, shrunk from 65nm to 55nm, will by definition occupy 72% of the area it did previously.
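The arithmetic behind that 72% figure, assuming a perfect linear shrink (which real processes rarely deliver):

```python
# A straight 65nm -> 55nm shrink scales both dimensions by 55/65,
# so the die area scales by (55/65)**2.
old_node, new_node = 65, 55
area_ratio = (new_node / old_node) ** 2
print(f"Area after shrink: {area_ratio:.1%} of the original")  # ~71.6%
```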
 
That's if they don't change anything, but going by the last couple of generations of die shrinks they're bound to tweak the architecture, just like with the 9800GTX+.
 
Interesting article at bit-tech about the comparisons

source

Makes you wonder what they added to keep it nearly as big as the old die on a shrunk process?
 