
GeForce GTX280 folds three times faster than Radeon HD 3870

Protein folding is about the best example of an easily parallelisable computing problem. This makes it ideal for running on a GPU (and also ideal for a distributed computing network, like the one you contribute to when you run fold@home).

I've said it before a million times, but a GPU is a massively parallel floating-point powerhouse. CPUs are designed for serial flexibility, but they have nowhere near the floating-point throughput of a specialised parallel floating-point processor like a GPU. That chart sums this up nicely.
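
To make that concrete, here's a minimal CUDA sketch (purely illustrative, nothing to do with the actual F@H cores; the kernel name and maths are made up) of the "one thread per particle" floating-point pattern that maps so well onto a GPU:

```
// Toy example: every thread evaluates the same independent floating-point
// expression for "its" particle - the embarrassingly parallel shape a GPU likes.
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

__global__ void pair_energy(const float3 *pos, float *energy, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per particle
    if (i >= n) return;
    float dx = pos[i].x, dy = pos[i].y, dz = pos[i].z;
    float r2 = dx * dx + dy * dy + dz * dz + 1e-6f;  // squared distance to the origin
    float inv6 = 1.0f / (r2 * r2 * r2);
    energy[i] = 4.0f * (inv6 * inv6 - inv6);         // Lennard-Jones-style term
}

int main(void)
{
    const int n = 1 << 20;                           // ~1M independent particles
    float3 *h_pos = (float3 *)malloc(n * sizeof(float3));
    for (int i = 0; i < n; ++i) {
        h_pos[i].x = 1.0f + i * 1e-6f;
        h_pos[i].y = 0.5f;
        h_pos[i].z = 0.25f;
    }

    float3 *d_pos;
    float *d_energy;
    cudaMalloc(&d_pos, n * sizeof(float3));
    cudaMalloc(&d_energy, n * sizeof(float));
    cudaMemcpy(d_pos, h_pos, n * sizeof(float3), cudaMemcpyHostToDevice);

    pair_energy<<<(n + 255) / 256, 256>>>(d_pos, d_energy, n);
    cudaDeviceSynchronize();

    float first;
    cudaMemcpy(&first, d_energy, sizeof(float), cudaMemcpyDeviceToHost);
    printf("energy[0] = %f\n", first);

    cudaFree(d_pos);
    cudaFree(d_energy);
    free(h_pos);
    return 0;
}
```

Every thread is completely independent, which is why thousands of GPU shader cores can all be kept busy at once.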

The results shown on the chart are actually really encouraging as far as the GTX280 goes. In gaming, the 3870 was significantly slower than the 8800GTX (mainly due to fill-rate limitations). However, for scientific computing purposes (pure floating-point number crunching) it was usually equal to or faster than the 8800GTX. The fact that the GTX280 is now beating the 3870 by a factor of 3 in a pure-FPU problem like folding suggests that nvidia's pipeline architecture really has benefited from an efficiency improvement over the G80 series.

All this being said, it's an nvidia benchmark, so it could well be optimised for their architecture. Take it with a pinch of salt.
 
It's about time nVidia got into the folding@home scene, although I can't help but think only a very small handful of people will lick their lips at the prospect of 3x folding speed. I've folded about 200,000 points myself, so I know it can be quite addictive.

I'm wondering, is it possible to fold on your CPU and GPU at the same time? Maybe using two clients?

Also, do you think nVidia will enable SLI folding? That along with a quad-core would really smack through some units! :eek:
 
The last thing I read is that ATI is going to be using shared memory; it seems such a waste duplicating everything like in current crossfire/SLI setups.

They pretty much have no choice. The communication latencies between the two separate GPUs require them to access separate memory stacks. This is the fundamental reason behind all the crossfire and SLI issues you see.

...the benefit, however, is effectively doubled memory bandwidth (in much the same way that a mirrored RAID array can effectively double HDD read speeds through redundancy of data storage).
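
For anyone curious, here's a rough CUDA sketch (the buffer name and size are invented) of the "duplicate everything" situation: each card can only see its own memory pool, so the same data gets copied into every GPU, halving the usable memory but letting each chip read its copy at full local bandwidth:

```
// Illustrative only: copy the same host buffer into every GPU's private memory,
// as current SLI/crossfire-style setups effectively have to do.
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const size_t scene_bytes = 64 * 1024 * 1024;     // pretend "scene" data
    float *h_scene = (float *)calloc(scene_bytes, 1);

    int count = 0;
    cudaGetDeviceCount(&count);                      // every CUDA-capable GPU in the box

    for (int dev = 0; dev < count; ++dev) {
        cudaSetDevice(dev);                          // talk to this GPU only
        float *d_scene;
        cudaMalloc(&d_scene, scene_bytes);
        cudaMemcpy(d_scene, h_scene, scene_bytes, cudaMemcpyHostToDevice);
        printf("GPU %d now holds its own full copy of the data\n", dev);
        // ... each GPU would render/compute from its private copy here ...
        cudaFree(d_scene);
    }

    free(h_scene);
    return 0;
}
```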
 
I'm wondering, is it possible to fold on your CPU and GPU at the same time? Maybe using two clients?

Also, do you think nVidia will enable SLI folding? That along with a quad-core would really smack through some units! :eek:


To the first question:
I don't see why not. For a pure-FPU application like F@H, the CPU overhead in running on the GPU would be relatively low. But once you have the GPU crunching away at ~200x the capacity of the CPU, why would you want to bother utilising the CPU at all?!

To the SLi question:
Nvidia's scientific computing API (called CUDA) supports multiple graphics cards, although not in "SLI" per se. SLI and crossfire are fundamentally gaming-based constructions. You would probably run two instances of the program, one on each of your graphics cards. As for whether this is supported from the outset by nvidia, well, it depends on their software implementation. I imagine it should be, as multi-GPU support is included right at the base level in CUDA. When nvidia sell their 'Tesla' GPU racks they usually sell 16 or 32 GPUs at a time, so multi-GPU support is a fairly fundamental feature.
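
As a rough sketch of the "two instances, one per card" idea (not how the real client works; the kernel is invented), a CUDA program can pin itself to a particular GPU with cudaSetDevice(), so you could just launch one copy per card, e.g. ./fold 0 and ./fold 1:

```
// Illustrative only: each instance of the program is told which GPU to use on
// the command line and pins itself to that device before doing any work.
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

__global__ void crunch(float *buf, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) buf[i] += 1.0f;                       // stand-in for real folding maths
}

int main(int argc, char **argv)
{
    int dev = (argc > 1) ? atoi(argv[1]) : 0;        // e.g. "./fold 0" or "./fold 1"

    int count = 0;
    cudaGetDeviceCount(&count);
    if (dev >= count) {
        fprintf(stderr, "No such CUDA device: %d\n", dev);
        return 1;
    }

    cudaSetDevice(dev);                              // this instance owns GPU 'dev'

    const int n = 1 << 20;
    float *d_buf;
    cudaMalloc(&d_buf, n * sizeof(float));
    cudaMemset(d_buf, 0, n * sizeof(float));
    crunch<<<(n + 255) / 256, 256>>>(d_buf, n);
    cudaDeviceSynchronize();
    cudaFree(d_buf);

    printf("Instance on GPU %d finished its work unit\n", dev);
    return 0;
}
```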
 
Next gen cards are mainly going to be 1GB (though it's unclear if this is 2x512MB).

An interesting snippet from Kyle, the [H] editor:

Kyle_Bennett said:
R700 will be a multiple GPU architecture. Will not be CrossFire on a card. The architecture is being designed from the ground up to be "multiple GPU." Engineers at ATI have also told me specifically that AA is "fixed."

Hopefully this will clear up some of the rumors. As you know, we bat about 1000 when it comes to this kind of information. If I tell you more now, I will have to kill you.
 
What does it mean, AA is gonna be "fixed"? Does it mean free AA?

I think they mean the AA-processing issues that were present in the r600-series (and caused big performance drops when AA was enabled) will no longer be present.

I.e. AA works as it is supposed to.
 
Oh right, the quotation marks threw me off there... Then again, recent benchmarks have shown the 3870 to be on par with the 8800 GT with AA on in certain (somewhat rare) situations (see THG's 9800 GTX review and check the Crysis scores). While their AA performance isn't perfect, it has certainly improved over time.
 
There are clients that allow folding to be run on the GPU (CPU will still be needed to feed it data).
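
Roughly speaking (an illustrative sketch only, not the real client), the division of labour looks like this: the CPU prepares each work unit and ships it over PCIe, and the GPU does the heavy floating-point crunching in between:

```
// Illustrative only: a host-side loop "feeds" the GPU one made-up work unit
// at a time - prepare on the CPU, copy over, crunch on the GPU, copy back.
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

__global__ void process_unit(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 1.001f;                    // stand-in for real folding maths
}

int main(void)
{
    const int n = 1 << 18;
    const size_t bytes = n * sizeof(float);
    float *h_unit = (float *)malloc(bytes);
    float *d_unit;
    cudaMalloc(&d_unit, bytes);

    for (int unit = 0; unit < 4; ++unit) {           // pretend we fetched 4 work units
        for (int i = 0; i < n; ++i)                  // CPU prepares the input
            h_unit[i] = (float)unit;
        cudaMemcpy(d_unit, h_unit, bytes, cudaMemcpyHostToDevice);
        process_unit<<<(n + 255) / 256, 256>>>(d_unit, n);   // GPU does the crunching
        cudaMemcpy(h_unit, d_unit, bytes, cudaMemcpyDeviceToHost);
        printf("Work unit %d done, first value %f\n", unit, h_unit[0]);
    }

    cudaFree(d_unit);
    free(h_unit);
    return 0;
}
```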

I found this really unimpressive: so their new-gen card is 3 times faster than ATI's last-gen card... yay? It leaves me thinking that if this card was 3 times faster in games than the 3870, they'd have told us that instead.

+1. Folding isn't really all that important in the grand scheme of things... ie, more £££ will be made by catering to gamers than folders
 
+1. Folding isn't really all that important in the grand scheme of things... ie, more £££ will be made by catering to gamers than folders

But folding IS a good benchmark for scientific computing applications.

The potential for selling GPGPUs for scientific computing far outstrips the potential for gaming GPUs. After all, when you buy a graphics card you pay in the region of £100-£400, whereas even a medium-spec supercomputer will set you back several million. The profit margins are also much higher in the GPGPU arena (the 'Tesla' equivalent of the 8800GTX still sells for ~£600).

Apart from all the above, the GPGPU arena is still expanding exponentially, whereas gaming GPU ownership has reached saturation point. Of course all technology comes with periodic upgrades, but any sensible company will focus most of their attention on the developing market.
 
Okay, but let's look at it from another perspective. What the hell do we get out of it? So nVidia sell a supercomputer on a stick; that's great and all, but are they gonna start treating the PC gamer as a secondary concern when designing their products, given that they can supposedly make so much more money with GPGPU products?
 
I really wanna see ATi's tessellation engine put to some good use; it looks really promising. That, plus the hope of DX10.1 being more widely used (yeah right).
 
Okay, but let's look at it from another perspective. What the hell do we get out of it? So nVidia sell a supercomputer on a stick; that's great and all, but are they gonna start treating the PC gamer as a secondary concern when designing their products, given that they can supposedly make so much more money with GPGPU products?

Personally, I think we'll see the product line split entirely from the next (real) generation onwards.

Don't get me wrong - gaming GPUs are still a huge business (after all, it's nvidia and ATI that make the console GPUs for the most part). I'm not saying either faction is going to forget about graphics. I'm just saying that the GPGPU market has the potential to eventually be bigger, and since it's the fastest growing sector, they're pushing harder in that area at this moment.

I don't think there's any reason to worry :p


As for what you get out of it - well, a card which can do more calculations can push more pixels. Even when the product lines split, the underlying technology will still be common, at least in part. Development in the GPGPU arena will benefit gaming and vice-versa.
 
I think they mean the AA-processing issues that were present in the r600-series (and caused big performance drops when AA was enabled) will no longer be present.

I.e. AA works as it is supposed to.

Once AA becomes "fixed" again, I shall return to ATi.
 
+1.

When gaming at 1920 x 1200, a little AA is nice to finish the picture off. The performance sacrifice from the current generation of ATI cards was unacceptable - here's hoping they improve this time around - their pricing certainly looks like it has.
 
+1.

When gaming at 1920 x 1200, a little AA is nice to finish the picture off. The performance sacrifice from the current generation of ATI cards was unacceptable - here's hoping they improve this time around - their pricing certainly looks like it has.

Funnily enough, the G92 cards ended up having a bigger hit on AA performance than the R600/RV670s. I think they've pretty much sorted the "AA issues" for the most part now. Look at some of TPU's recent graphics card reviews and you will see an 8800GT's frame rate dropping more between resolution and AA changes than a 3870's.

Edit: I've run both a 2900 and a 3870X2 and can say myself that the "AA issue" is pretty much minimised, if not mostly gone.
 