Megarig!

Hey boys, long time no see! I've been very busy and have had nothing but crappy laptops and a couple of old servers so my FAH output has been very low. Shameful, really.

Anyway, the new job is definitely paying off and I'm itching for a new überbox. For the first time in ages I can afford one! :p

What's the preferred FAH setup these days? I'll run Linux, of course, but could dual boot with Windows (which version?) if necessary. I have about a $2000 budget which is plenty but it need not all be spent. I'd prefer single-socket motherboards because it seems when you move to duals or more they increase the prices of everything by nigh on an order of magnitude. Furthermore dual/quad socket mobos tend not to have a sufficient quantity of PCI-e slots. I'm looking to buy in the next month or so unless something good is right over the horizon.

I know next to nothing about the latest graphics cards and their relation to FAH. Last I knew it was ATi 1k series-only, required Windows, and sucked.

Do me proud :)
 
GPU folding is where the points really lie at the moment, particularly the nvidia cards - the GTX295 is good for ~14,000-16,000 ppd depending on project, so a couple of those at $500 each will get a lot of points (and drink leccy too!). However, any reasonable 8-series card or higher still gives great points compared to the ATI top end. The client does not use much CPU power, so purely for a folding rig it would be better to use a low-power CPU and lots of GPUs for max ppd and minimum power usage. Unfortunately there is not a native Linux gfx card client - there is a workaround using WINE, but having tried it, it is a little temperamental unless you plan on 24/7 usage (it doesn't really like being stopped that much). In this respect, Windows is a much better choice, plus there is better overclocking support, fan speed control etc. that is not really there yet in Linux.
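The low-power-CPU-plus-GPUs argument is easy to sanity-check with some back-of-envelope arithmetic. A minimal Python sketch using the ppd and price figures quoted above; the per-card power draw is my assumption, not a number from this thread:

```python
# Rough points-per-day economics for a GPU folding rig.
# ppd and price per GTX295 are the figures quoted above;
# the ~290 W per-card power draw is an assumed ballpark, not measured.
CARDS = 2
PPD_PER_CARD = 15_000       # midpoint of the quoted 14-16k ppd range
PRICE_PER_CARD = 500        # USD, as quoted above
WATTS_PER_CARD = 290        # assumption

total_ppd = CARDS * PPD_PER_CARD
ppd_per_dollar = total_ppd / (CARDS * PRICE_PER_CARD)
ppd_per_watt = total_ppd / (CARDS * WATTS_PER_CARD)

print(f"total ppd:      {total_ppd}")
print(f"ppd per dollar: {ppd_per_dollar:.1f}")
print(f"ppd per watt:   {ppd_per_watt:.1f}")
```

Swap in your own wattage figure once you have a meter on the box; the point is just that the GPUs dominate both the points and the electricity bill.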

As for CPU, I think if you're going for a single socket it has to be i7 all the way! :D 8 threads of foldy goodness! The Windows SMP client is still in beta, can be a bit flaky and does not get anywhere near as many ppd as the Linux client, however. A good compromise is to use VMware to run the Linux SMP client, which gets very close to the ppd of a native Linux client (but not quite). Someone on XtremeSystems ran 4 VMs on an i7 PC and was getting some great numbers (and possibly more than using a single, 8-threaded client). The VMs are a bit of a pain, so multiple Windows clients *could* be easier. If you just want to CPU fold, or use the WINE fudge for the gfx cards, then native Linux is definitely the easiest (and probably second nature to you!).

It's a bit of a minefield client-wise - there are loads of combinations to choose from, and each has a real drawback... They need to sort out a Linux gfx client (and some decent Linux software overclocking tools need to be developed!), which would make setting up a ppd-optimised folding machine a LOT easier, but I imagine this is fairly low on Stanford's to-do list. If you want to go dual socket, I think a Supermicro server board and dual 771 Xeons is the best bet at the moment (some people on XtremeSystems are having a lot of difficulties with the Asus Z7S 'Skulltrail'-esque mobo), but Gainestown (8-core, 16-thread Xeon) is round the corner, so a 32-threaded folding monster could be a possibility!

EDIT: I think pics of the resulting folding monster should be mandatory!
 
Thanks Chrissy! So for the GPU clients I'd have to use Windows, but for CPU clients Linux is the way to go?

Do the GPU clients still devour a whole CPU core to do their business?

So one i7, 12 GiB RAM (DDR3 mobos come with 6 slots, right?), and 3 GTX295 is tops for single-socket setups?

Has anybody tried a Mac Pro's 4 PCI-e slots? Do all the slots have to be 16x or will a lower number of lanes like 4x or 1x PCI-e do?

Pics will be requisite, of course!

Edit: What's the hot chipset these days? Is Intel's offering still the best, or has nVidia finally sorted their **** out?

Edit2: Not sure if anybody'd know, but is there a way to use the Linux CPU client on a Solaris rig? I know it takes a flag to use it with *BSD, but I haven't much experience with Solaris/Linux binary compatibility. I say this because I'm looking to do an OpenSolaris NAS so I can use ZFS. If it could also be a cruncher that would make it even better. Maybe Xen?
 
No, the GPU client hardly uses any CPU at all now, even under XP.
You don't need 12GB RAM for folding but it's fine if you just want to willy-wave. :D
I've had good results on a quad by running a couple of diskless FAH ( http://reilly.homeip.net/folding/ ) clients under VMWare alongside GPU folding. It's a dedicated stripped down Linux installation so it gets the nice Linux WUs. Use the CD generator and it's easy.
 
Well, if you are using native Linux, then you could be running just one SMP client (the newer Linux A2 core scales up to 8 threads), or two four-threaded clients. If you choose to go down the VMware route, you could run up to 4 SMP clients (therefore 4 GB of RAM would be ideal). You could of course run 8 normal clients... but the ppd would be useless compared to SMP. On Windows... I don't know really. Maybe up to four SMP clients to make use of the cores? The Windows client can only handle four threads, and is not efficient in its use of them, so many people get around this by using more than one. Of course, i7 gives you more to play with.

I personally would say Windows was the better platform for the GPUs - I'm sure others may say different, and it's great that the community got it to work under Linux. I know you are a bit of a Linux guy, so maybe give it a go and see for yourself? I did - it didn't take long to sort out following a guide, and it makes CPU folding a lot easier.

I think Solaris is a bit of a no-go. I have been reading your thread in the Linux section and had a look myself last night. I don't know about the CPU client, but there is no official CUDA support (and therefore I doubt the ATI equivalent), so GPU folding is a total no-go.

Before buying 3 GTX295s, I would definitely take a look on the official forums and make sure they are running smoothly - I know there were some teething problems with running multiples of these cards (although no problems with the previous 9800GX2...). I would definitely wait until a few days before purchase to make up my mind on the GPUs. ATI is catching up with nvidia (slowly!), so sooner or later they may come out on top.
 
Vista Home Premium 64-bit has proved very stable for me & allows up to 16 GB RAM if the mobo supports that much [I guess that means 12 GB max with i7 triple channel? Or 18 GB & lose 2 GB if they make 3 GB modules]
 
I currently use Vista Ultimate 32-bit; I only wish I had got the 64-bit version instead. If you go for Vista Ultimate 64-bit you can use up to 128 GB of RAM if you wanted to.
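The triple-channel capacity guessing above is just slots times module size. A throwaway Python sketch, assuming the usual six DIMM slots on an i7 (X58) board:

```python
# Capacity options for a six-slot triple-channel (i7/X58) board,
# across common DDR3 module sizes of the day.
SLOTS = 6  # assumption: typical X58 board layout
capacities = {size: SLOTS * size for size in (1, 2, 3, 4)}  # GB per module
for size, total in capacities.items():
    print(f"{SLOTS} x {size} GB modules -> {total} GB total")
```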
 
Overclocked Core i7 940 (or 965) with a couple of GTX295s and 6GB RAM (12GB is excessive for Folding, but go for it if you can afford it :p) Run Vista x64 with four GPU clients - these will use practically no CPU - and four Linux SMP VMs to max out the chip's eight threads.

That will be a beastly folder, and will use a beastly amount of leccy! At least you won't need to heat that room in the winter :p
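Putting rough numbers on that suggestion (the GPU figure is the ~14-16k ppd per GTX295 quoted earlier in the thread; the per-VM SMP figure is purely an assumed placeholder, since the thread doesn't quote one):

```python
# Ballpark total for the suggested build: two GTX295s driving four
# GPU clients, plus four Linux SMP VMs on an overclocked i7.
GPU_CARDS = 2
PPD_PER_GPU_CARD = 15_000   # midpoint of the 14-16k range quoted earlier
SMP_VMS = 4
PPD_PER_SMP_VM = 1_800      # assumption, not a figure from this thread

rig_total = GPU_CARDS * PPD_PER_GPU_CARD + SMP_VMS * PPD_PER_SMP_VM
print(f"estimated rig total: ~{rig_total} ppd")
```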
 
Has anybody tried Xen? It might be interesting to use Xen to run the Windows GPU clients, and then run the Linux client natively on dom0. All these fancy new procs support IVT.
 
Does it show the virtualised OS the real hardware, or just virtual hardware? Seems a bit vague from their site, but VMware etc. cannot show the VM the nvidia/ATI card. Not sure if that makes 100% sense, but if Xen can give the virtualised OS access to the graphics hardware (a non-virtual card), it may work. If it cannot detect that there is an nvidia 8800 etc. installed, the client won't run.
 
Does it show the virtualised OS the real hardware, or just virtual hardware? Seems a bit vague from their site, but VMware etc. cannot show the VM the nvidia/ATI card. Not sure if that makes 100% sense, but if Xen can give the virtualised OS access to the graphics hardware (a non-virtual card), it may work. If it cannot detect that there is an nvidia 8800 etc. installed, the client won't run.

I don't know. It is really just a hypervisor that schedules hardware time and it runs at an extremely low level. I shall endeavor to find out soon enough. ;)
 
It might be a few weeks before I can test. Since I'm on an extended out-of-town stay on business I've not got much hardware with me. I'm definitely going to look into it. :D
 