BTI Dabbles in GPU Folding

On a whim I picked up a second hand X1950 Pro from a friend who upgraded to an 8800 GT. I took it home and threw it in my HTPC to see if it could do a better job at accelerating H.264 video and the like than the onboard nVidia 6150 graphics chipset. It then occurred to me that I could be using it for FAH. I threw Windows on it and had a go.

Test Setup:
AMD X2 3800+ "Energy Efficient" 35 W
Asus M2NPV-VM
1 x 1 GiB generic PC2-4200
250 W mATX PSU, 14 A on 12 V rail
HIS X1950 Pro 256 MiB GDDR3 PCI-e 16x
GPU client 6.00 beta 1/Linux SMP client 6.00 beta 1
Catalyst 7.11 drivers (latest off of ATi's site)

This rig usually runs Ubuntu Server 6.06.1 LTS for x86-64. It gets a hair less than 900 PPD so that's the number to beat.

My testing revealed that with a fresh install of XP Pro it gets about 550 PPD on 2D clocks. I forgot about the whole 2D/3D clocks thing until it was 50% done with the WU. ATiTool made it crash, as the release notes promised, so I installed the abortion that is CCC and raised the clocks to their 3D levels. I then got 630 PPD. That's not too shabby, but I'll be going back to Linux SMP since I get a third more PPD and it uses two thirds less power.

I would love to continue crunching with it but SMP gives me better points and I'm just a big old points whore. They really need to revamp the GPU work unit points. They say it does about 40x the amount of work as a single CPU client in the same time period. I'll pull a number out of my ass and say that the SMP client gives 4x the performance of a single CPU client. If they doubled the points for the GPU work units they'd be getting a net gain of 10 times the work done by me in the same time period. I'd get better PPD and they'd get more work done.
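The points argument above can be sketched out with a quick back-of-the-envelope calculation. All of the figures here are the poster's own estimates from this thread (the 40x claim attributed to Stanford, the 4x SMP guess, and the measured PPD numbers), not official values:

```python
# Sketch of the GPU-vs-SMP points argument. Figures are the poster's
# estimates from this thread, not official Stanford numbers.

single_cpu_work = 1                  # baseline: one CPU client
gpu_work = 40 * single_cpu_work      # claimed GPU throughput vs one CPU client
smp_work = 4 * single_cpu_work       # poster's guess for the SMP client

smp_ppd = 900                        # measured on this rig under Linux SMP
gpu_ppd = 630                        # measured on the X1950 Pro, 3D clocks

# Doubling GPU points would make the GPU the better choice for points...
doubled_gpu_ppd = 2 * gpu_ppd        # 1260 PPD, beating the SMP client's 900

# ...and the project would then get 40x instead of 4x a single CPU's
# work from this machine, i.e. ten times the science per day.
work_gain = gpu_work / smp_work      # 10.0

print(doubled_gpu_ppd, work_gain)
```

So under these assumptions both sides win: the folder gets more PPD, and the project gets an order of magnitude more work from the same box.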

But alas, they probably won't change anything until they rework the GPU FAH system. I suspect that they're working on a more generic GPU FAH solution and won't fiddle with the current iteration much any more. The new one will probably try to be more hardware/driver agnostic, since the current one only works with X1k cards, and even then it only works well with X19xx cards. It's already a generation behind the cutting edge.

Is there anything I could be doing to get better performance out of it? It's always run a little hot, even when my friend owned it, so I cranked the fan to 100% for the time being and pointed a Yate Loon D12SL-12 at it. It runs at 58 °C on 3D clocks, which doesn't seem too bad.
 
I had mine running at a near constant 80C when I did GPU crunching. Got a pretty good PPD out of it too, around 800 I think...

Still, definitely not worth the noise, heat and power consumption, which is why I didn't hesitate to install Ubuntu. If they adjusted the points they would get more people doing it, but at the end of the day a GPU really is not ideal to have running 24/7
 
Did you do anything special to get 800 PPD, Sirius? Was your card the same, an X1950 Pro 256 MiB PCI-e?


Slightly OT: When testing this card I'm even more impressed with the integrated 6150 graphics than usual. I tested it by playing an episode of Heroes encoded from an HD-DVD into a 720p .mkv with H.264 video and AAC sound with VLC on Windows XP. With the nVidia 6150 onboard graphics over DVI I saw CPU usage between 10 and 30%, balanced nicely across both proc cores. With the big fancy graphics card consuming ten times the power I saw CPU usage between 10 and 25% with the same balancing.

nVidia claims nothing about H.264 acceleration on its product web page and only mentions hardware MPEG 2 decoding, the reason why I bought it. ATi claims that the X1950 Pro performs hardware acceleration of the same. I suspect nVidia is being modest about its product.
 
Mine is an X1900XT, which I think is faster than the X1950s. For a start it's 512MB.

To get the higher PPD I overclocked the card using ATI Tray Tools [or ATITool... can't remember which :p].

Concorde was the GPU king though, he took his GPU overclock much further than I ever dared.
 
Ah, that'd explain it.. The number of shaders makes a big difference. The 19xx XT and XTX have 48 whilst the Pro has 36. The 1900s are indeed faster than the 1950s. The benefit of the 1950 is lower poser consumption. There's no way I could run an XT or XTX on my wee PSU. Heck, ATi recommend at least 525 W and 30 A on the 12 V rail for this card and I don't even have half of that.
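The shader counts mentioned above line up reasonably well with the PPD figures quoted in this thread. A crude check, assuming PPD scales linearly with pixel shader count (which is only a rough approximation; clocks, memory, and driver differences matter too):

```python
# Back-of-the-envelope check: does the shader count gap roughly
# explain the PPD gap? (Assumes linear scaling, a crude approximation.)

xt_shaders, pro_shaders = 48, 36          # X1900 XT/XTX vs X1950 Pro
predicted_ratio = xt_shaders / pro_shaders  # ~1.33

xt_ppd, pro_ppd = 800, 630                # figures quoted in this thread
observed_ratio = xt_ppd / pro_ppd         # ~1.27

print(round(predicted_ratio, 2), round(observed_ratio, 2))
```

The predicted ~33% advantage is close to the ~27% observed, so shader count alone accounts for most of the difference.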
 
I'm currently running the GPU client on a 1950 Pro 512MB and getting 660 PPD. As it's only in a P4 3.2 HT, that's about 4-5 times more points than two console clients.
 
Has anyone tested the 3800 and new 8800s for performance? The Stanford site seems to be a bit out of date for these cards, so it may be that the nVidias do work as well now :)
 
I'm currently running the GPU client on a 1950 Pro 512MB and getting 660 PPD. As it's only in a P4 3.2 HT, that's about 4-5 times more points than two console clients.
What cooler do you have on yours?
Has anyone tested the 3800 and new 8800s for performance? The Stanford site seems to be a bit out of date for these cards, so it may be that the nVidias do work as well now :)
It definitely won't work on those cards. Right now it only works on X1k cards. Of those it only works well enough to be worth it on the X19xx cards. Other GPU architectures don't have the hardware or driver support needed to work. Supposedly they're working on a more universal solution but they're usually very hush-hush about what they're working on and when their stuff will be released.
You wouldn't wanna run low on Poser power would ya? :p :cool:
I hate it when I run out of poser. I try to conserve mine as much as possible!
 
Right now it only works on X1k cards. Of those it only works well enough to be worth it on the X19xx cards. Other GPU architectures don't have the hardware or driver support needed to work. Supposedly they're working on a more universal solution but they're usually very hush-hush about what they're working on and when their stuff will be released.

I would hope they are working on these now, the potential processing power they are missing out on is a lot from what I've seen :)
 
The difference with working on GPUs is, first of all, that the architecture is very different to a CPU's, so they have to come up with a way to get it going. Secondly, they need the drivers to support the client, which means they need the help of ATi and nVidia.

ATi is easy since they are already helping, so updates for newer cards shouldn't be a huge issue. nVidia is an issue, because apparently the reason they originally chose to focus on ATi is that nVidia's architecture was an absolute royal pain in the arse.
 