Nvidia/CUDA vs ATI/Stream GPUs for encryption?

OK, I was just about to go and get a GTX 470 when I saw the following table (page 2 of the link below):

http://www.blackhat.com/presentations/bh-usa-09/BEVAND/BHUSA09-Bevand-MD5-PAPER.pdf

To summarise: the ATI HD 4870 X2 manages 1200 billion instructions per second (Ginstr/s), whilst the Nvidia GTX 285 manages 354.

Are the ATI GPUs better for computing MD5 hashes en masse and that sort of thing? That's what I was going to use CUDA and a GTX 470 for.
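
For context, this is roughly the shape of the kernel I was planning to write (just a sketch of the one-candidate-per-thread idea, untested; md5_placeholder and hash_batch are names I've made up, and the placeholder is not real MD5):

Code:
// Rough sketch of the "one candidate per thread" structure.
// md5_placeholder() is NOT real MD5 - it's a made-up mixing loop so the
// sketch compiles; a proper kernel would run the actual RFC 1321 rounds.
#include <cstdint>

__device__ void md5_placeholder(const uint8_t *msg, int len, uint32_t digest[4])
{
    uint32_t h = 0x67452301u;                 // placeholder mixing only
    for (int i = 0; i < len; ++i)
        h = (h ^ msg[i]) * 16777619u;
    digest[0] = digest[1] = digest[2] = digest[3] = h;
}

__global__ void hash_batch(const uint8_t *candidates, int candidate_len,
                           int num_candidates, uint32_t *digests)
{
    // One GPU thread per candidate string - every candidate is independent,
    // so raw instruction throughput is pretty much the whole story.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < num_candidates)
        md5_placeholder(candidates + i * candidate_len, candidate_len,
                        digests + i * 4);
}

int main()
{
    const int n = 1 << 20, len = 8;           // 1M candidates of 8 bytes each
    uint8_t  *d_candidates = nullptr;
    uint32_t *d_digests    = nullptr;
    cudaMalloc(&d_candidates, (size_t)n * len);
    cudaMalloc(&d_digests, (size_t)n * 4 * sizeof(uint32_t));
    // ... copy the candidate passwords up to the card here ...

    hash_batch<<<(n + 255) / 256, 256>>>(d_candidates, len, n, d_digests);
    cudaDeviceSynchronize();

    cudaFree(d_candidates);
    cudaFree(d_digests);
    return 0;
}

Because the work is embarrassingly parallel like that, the Ginstr/s figures in the table seem like they'd translate fairly directly into hashes per second, hence the question.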
 
If you actually want something usable, go with CUDA... if you want to bang your head against a wall repeatedly, then go with Stream.

Performance-wise, the difference between the hardware changes dramatically from scenario to scenario depending on what kind of data you're processing; the ATI 4/5 series GPUs are very good at handling certain types of vector maths, which gives them a huge performance advantage in that specific field.
 
That paper would suggest that the ATI/AMD cards would be massively better at that type of work.

"By that number, it is obvious that ATI GPUs offer a significantly superior level of
performance/Watt, performance/USD, and absolute performance, than Nvidia GPUs.
These 3 metrics are the reason why the rest of the paper focuses on ATI GPUs, despite
the fact that the ATI GPGPU SDK (Stream SDK) is generally perceived as less mature
than the Nvidia GPGPU SDK (CUDA). The VLIW architecture chosen by ATI seems
to have paid off as it allows them to provide more computing power per square inch of
die area."

Table 1: Technical specifications of high-end video cards (current as of June 2009)

Video card        Performance (Ginstr/s)
ATI HD 4870 X2    1200
ATI HD 4850 X2    1000
ATI HD 4890        680
ATI HD 4870        600
Nvidia GTX 295     596
ATI HD 4850        500
ATI HD 4770        480
Nvidia GTX 285     354
Nvidia GTX 275     337
 

So the way each card is designed affects what it can do outside of drawing triangles? Is that why GeForce cards are much better for folding than Radeon-based solutions, or is that more down to drivers and support from AMD rather than hardware differences?
 
Drivers/API support is one of the reasons, but the shader architecture is the biggest one: ATI and nVidia have approached it in different ways, which gives them different strengths for different types of arithmetic operations. There's also a massive issue regarding storage; nVidia GPUs typically have localised cache/shared memory, which makes certain things much faster, an area which ATI/AMD's less compute-focused GPUs lack.
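
To give a flavour of what I mean by localised storage, here's a rough CUDA-flavoured sketch of my own (not taken from any real project, and the substitute kernel is just a made-up example): a thread block stages a lookup table in on-chip shared memory once, then hammers it without going back out to the card's main memory.

Code:
// Toy illustration of on-chip "shared" memory on nVidia GPUs.
#include <cstdint>

__global__ void substitute(const uint8_t *in, uint8_t *out, int n,
                           const uint8_t *table)
{
    // Per-block on-chip storage: the 256-entry table is staged here once.
    __shared__ uint8_t s_table[256];

    // Threads in the block cooperatively copy the table from device memory...
    for (int t = threadIdx.x; t < 256; t += blockDim.x)
        s_table[t] = table[t];
    __syncthreads();

    // ...after which every lookup is a fast on-chip access rather than a
    // trip out to the card's main memory.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = s_table[in[i]];
}

int main()
{
    const int n = 1 << 20;
    uint8_t *d_in = nullptr, *d_out = nullptr, *d_table = nullptr;
    cudaMalloc(&d_in, n);
    cudaMalloc(&d_out, n);
    cudaMalloc(&d_table, 256);
    // ... fill d_in and d_table from the host here ...

    substitute<<<(n + 255) / 256, 256>>>(d_in, d_out, n, d_table);
    cudaDeviceSynchronize();

    cudaFree(d_in);
    cudaFree(d_out);
    cudaFree(d_table);
    return 0;
}

Whether a given workload can make use of that kind of on-chip staging is one of the things that makes these comparisons so scenario-dependent.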

At the end of the day it's not my area of expertise though; I've played around a little with CUDA but that's it. I rely on info from people who actually use this stuff day to day (who I do work with from time to time), and the consensus seems to be that CUDA is much more mature and approachable, whereas Stream is a lot of hassle if you're trying to do anything outside the very narrow range of tried and tested applications.
 

It's a complex subject with conspiracy theories at every turn. Basically it's a time/resource/development issue. Once the OpenCL versions of the F@H clients are released, we'll reach a point where we can truly compare the performance of the cards on a like-for-like basis.

But it may still be that the way Stanford requires the software to do its job suits one type of GPU processing technique better than another. You can see that in other projects, like the one in the OP and things like Milkyway@Home, where performance on ATI GPUs leaves CUDA in the dust.

The flip side, of course, is that far more apps leverage CUDA. But hopefully the performance of open standards like OpenCL will be close enough to CUDA's "to the metal" approach that being cross-platform will be deemed more important.

Only time will tell. :) But I don't think it's ever going to be true to say that one is better at GPGPU than the other. They just do it differently.
 
In addition to that, once you start looking at newer GPUs the story is likely to change again... it might not be cost-effective to buy a 4870 (which might decimate a 200-series GPU at a specific task) at, say, £70 or whatever they are going for now, if the mid-range 400-series cards at a little more money are 4x quicker at it than even the 4870. (Not saying that's the case, but compute is far from a cut-and-dried story.)
 