OcUK - Stomp Monster Project

Berserker said:
That's still £2.5k, and you've not even considered mobos/PSUs/heatsinks/etc yet. Let's say 50 people got involved (a not unreasonable number given the number involved in the original StompMonster), that's £50 each, and probably at least £75 by the time you've added all the other stuff. While some people might be willing to offer that sort of money, I very much doubt you'd get 50 of them.

I'm not trying to blow your idea out of the water here, but I do think you may be setting your aim too high.

I agree. My suggestion:
- ASRock board, onboard graphics and LAN, nF4: £36
- Sempron 2800+ overclocked to 3400+ speeds with stock HSF: £52
- Corsair Value 512MB RAM, CAS 2.5: £28
- Cheap 350W power supply: £15
Total: £131 per node

Less than the £217 for just an X2 3800+.
x16 = Grand Total: £2096
Obviously you would need one with an HDD connected to the internet.

This isn't a high-end enterprise server, it's a cheap, donor-funded folding farm.
 
Joe42 said:
I agree. My suggestion:
- ASRock board, onboard graphics and LAN, nF4: £36
- Sempron 2800+ overclocked to 3400+ speeds with stock HSF: £52
- Corsair Value 512MB RAM, CAS 2.5: £28
- Cheap 350W power supply: £15
Total: £131 per node

Less than the £217 for just an X2 3800+.
x16 = Grand Total: £2096
Obviously you would need one with an HDD connected to the internet.

This isn't a high-end enterprise server, it's a cheap, donor-funded folding farm.

Let me have a few days to put together some real numbers that cover the initial capital investment and the sustaining costs, and see if I can come up with a decent proposal that makes some sense.

I looked at the ASRocks for myself, but OcUK doesn't sell them, or at least I didn't see them <I believe ASRock is a subdivision of Asus>. The other side of this is that OcUK's contribution could be a small discount to us if all the material was purchased from them. I don't know if Spie is interested in that or not, but if we put a decent proposal in front of him <after all, he is all about business>, the advertising and constant growth of the member base within the group / project has to have some marketing value that would allow him to realise a real return on investment, which is what business is all about.
 
I think it has to be the P4s. I know they have obvious disadvantages, i.e. they're more expensive to buy than Semprons and they do put out a lot of heat. But when you consider that a half-decent P4 can put out 400+ PPD running QMDs, and that any AMD processor you buy is likely to spend most of its time at around 130 PPD on normal Gromacs WUs... it means you'd need three AMD systems to give the same output as one P4. That suddenly makes the P4s seem cost-effective.

It would be ideal if Stanford could sort out the AMD licensing issue re: QMDs, but I can't see it happening anytime soon.
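
To put rough numbers on that, here's a quick back-of-envelope sketch. The PPD figures are the ones quoted above; the node prices are placeholders (Joe42's £131 Sempron build, and a guessed £200 for a cheap P4 node), not real quotes:

```python
# Rough cost-per-point comparison of the two routes.
# PPD figures are the ones quoted above; node prices are assumptions.
nodes = {
    # name: (cost_per_node_gbp, ppd_per_node)
    "P4 on QMDs": (200, 400),         # assumed ~£200/node, 400+ PPD
    "Sempron on Gromacs": (131, 130)  # Joe42's £131/node, ~130 PPD
}

for name, (cost, ppd) in nodes.items():
    print(f"{name}: £{cost / ppd:.2f} per PPD")
```

At those placeholder prices the P4 works out around £0.50 per PPD against roughly £1 per PPD for the Sempron, which is the 3-to-1 argument in money terms.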
 
BillytheImpaler said:
Maybe cheat a bit by getting SSE3 supporting AMD CPUs then modding FAH504-Console.exe to think it's a P4. It'd probably crunch faster than a P4 of the same cost and put out less heat to boot.

I've tried it - it runs at about the same speed as a similarly-priced P4. I don't know what would happen if we were caught doing this on such a large scale though. If it seems like cheating is our 'official' policy... I don't want our entire team getting banned or something crazy like that.
 
Don't bother with P3s for this - £175 is hardly cheaper than a much, much more powerful setup :)

We had specific needs/reasons for buying P3s - namely a very cheap local seller :D (local to a US member unfortunately :p).

In a project like this, space is also an issue.
 
Joe42 said:
When it comes to PC hardware, yes. Rest of post removed.
Berserker
Apologies.

I didn't realise P3s couldn't do QMDs. So it has to be cheap P4s or Celerons?
What sort of impact does Hyper-Threading have on it?
 
I think one thing you need to take into account is insurance. Unless, as someone said, it lives in a datacenter, insurance is going to be a big issue for anyone with a business.

Also there is no way on earth you are going to get any datacenter to agree to house a full 48U rack of caseless motherboards.

So you are stuck between finding a private "tame" datacenter-type setup where the owner doesn't care about having a fire hazard or a lack of insurance, and hoping that the donors don't worry about whether their hard-earned cash is going into a bunch of hardware that isn't insured.

Or.

You spend more money on fewer layers and get the thing properly racked up, either using cheap small Shuttle-type machines or second-hand 1U servers.
 
BillytheImpaler said:
I'd definitely go for the P4 option so that she could be pulling QMDs. Need it be so serious with all the rack mounting and jazz? Stompy was just regular ATX mobos in a stack. I know magman's had a tough time maintaining the inner nodes because they're hard to access, but it seems easy to me to just lay the mobos on a shelf. Put rollers on the shelves so they behave like drawers and you're in business.

Actually, the StompMonster layers at Chez Magman have been rackmounted for a long time. I now have a 19" rack in my garage which houses Stompy's layers, my server, a UPS and a few of my own layers.

Here's a link to a page I just set up showing some of what Stompy now looks like.

Stompy Photos

I've been reading through the other comments above and have a few thoughts to throw into the discussions as well.

First, you will have to take into account the running costs for the new cruncher ("Stompy II", "Son of Stompy", or "Stompy! the Reincarnation" perhaps). With 16 layers each putting out over 100 Watts, you have something in the order of a 2kW heater running 24 hours a day. Apart from dissipating this heat, you will also have 40+ kWh of electricity per day to pay for somehow. This can be mitigated by not going for the cheapest PSU; try finding one with PFC and an efficiency greater than 80% if possible. A more efficient PSU will easily pay for itself in a very short timeframe.
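
For anyone who wants to play with those numbers, here's a minimal sketch of the calculation (the per-layer wattage and electricity tariff are assumptions, adjust to taste):

```python
# Back-of-envelope running costs for a 16-layer cruncher.
# watts_per_layer and pence_per_kwh are assumptions; adjust to taste.
layers = 16
watts_per_layer = 130      # assumed draw per layer at the wall
pence_per_kwh = 10         # assumed electricity tariff

total_kw = layers * watts_per_layer / 1000
kwh_per_day = total_kw * 24
cost_per_day = kwh_per_day * pence_per_kwh / 100  # pounds

print(f"Total draw:  {total_kw:.2f} kW")
print(f"Energy used: {kwh_per_day:.1f} kWh/day")
print(f"Cost:        £{cost_per_day:.2f}/day, £{cost_per_day * 365:.0f}/year")
```

At those assumptions it comes out at roughly 2kW, ~50 kWh and about £5 a day, which squares with the 40+ kWh figure above.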

One possible solution is to distribute Stompy, 4 hosts with 4 layers each for example (a distributed distributed cruncher seems very apt to me), though this would mean having 4 boot disks rather than one. This option would also considerably increase the resilience of Stompy II.

There has been some mention of different heatsinks. There is a very practical limitation on the size of the heatsinks due to the proposed layering of the crunchers: the taller the heatsink/fan combination, the fewer layers you can fit in a rack. For this reason, you can't use some of the better price/performance heatsinks such as the Arctic Cooling range, as they are normally quite tall.

One final point. Choosing a MATX (or even Mini-ITX now) board is a good choice from a space and accommodation point of view, but I haven't found many MATX boards that are good overclockers as yet. Most manufacturers pigeonhole MATX boards at the media centre or low-cost end of the market; it is normally enthusiast boards that have good overclocking options.

That's all for now, but I will no doubt comment more as the discussions progress.
 
magman said:
First, you will have to take into account the running costs for the new cruncher ("Stompy II", "Son of Stompy", or "Stompy! the Reincarnation" perhaps). With 16 layers each putting out over 100 Watts, you have something in the order of a 2kW heater running 24 hours a day. Apart from dissipating this heat, you will also have 40+ kWh of electricity per day to pay for somehow. This can be mitigated by not going for the cheapest PSU; try finding one with PFC and an efficiency greater than 80% if possible. A more efficient PSU will easily pay for itself in a very short timeframe.
Definitely something that needs to be looked at if Stompy II is to reside in a single residential location. Agree 100% that a more efficient, yet a bit more expensive, PSU will be the answer to better overall performance and cost efficiency.
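
As a rough sketch of that payback (the load, efficiencies, price premium and tariff are all assumptions, not measured figures):

```python
# Payback estimate for a more efficient PSU.
# Load, efficiencies, price difference and tariff are all assumptions.
dc_load_w = 200                    # assumed DC draw of one layer
cheap_eff, good_eff = 0.70, 0.85   # assumed PSU efficiencies
extra_cost_gbp = 25                # assumed premium for the better PSU
pence_per_kwh = 10                 # assumed electricity tariff

watts_saved = dc_load_w / cheap_eff - dc_load_w / good_eff
gbp_saved_per_year = watts_saved * 24 * 365 / 1000 * pence_per_kwh / 100

print(f"Saves ~{watts_saved:.0f}W at the wall = £{gbp_saved_per_year:.2f}/year")
print(f"Payback in ~{extra_cost_gbp / gbp_saved_per_year * 12:.1f} months")
```

With those numbers the dearer PSU earns its premium back in well under a year, per layer, running 24/7.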

magman said:
One possible solution is to distribute Stompy, 4 hosts with 4 layers each for example (a distributed distributed cruncher seems very apt to me), though this would mean having 4 boot disks rather than one. This option would also considerably increase the resilience of Stompy II.
This would also be a good option if the power / insurance situation that Biffa referred to becomes a roadblock. I've found some 16U portable racks < fully assembled, doors, panels, casters etc > that could allow us to split Stompy-II into 4 units < something like Stompy-II(A), Stompy-II(B), etc > that could easily hold 4 layers, with plenty of room to spare. Price ~ £300.00 delivered.

magman said:
There has been some mention of different heatsinks. There is a very practical limitation on the size of the heatsinks due to the proposed layering of the crunchers: the taller the heatsink/fan combination, the fewer layers you can fit in a rack. For this reason, you can't use some of the better price/performance heatsinks such as the Arctic Cooling range, as they are normally quite tall.
I thought about the low-profile XP-120... I know it's a large heatsink, but it would certainly dissipate the heat this thing will be generating.

magman said:
One final point. Choosing a MATX (or even Mini-ITX now) board is a good choice from a space and accommodation point of view, but I haven't found many MATX boards that are good overclockers as yet. Most manufacturers pigeonhole MATX boards at the media centre or low-cost end of the market; it is normally enthusiast boards that have good overclocking options.
I was going to make a suggestion to Spie on the ASRock boards. I will find out today, but I'm pretty sure ASRock is a subdivision of Asus, and from a few of the reports I've found floating about on the Web, it seems the boards OC rather well.
 
BillytheImpaler said:
HT is nice to have as it allows you to crunch 2 WUs simultaneously. Having it is not a necessity. I'd rather have a slower P4 than a faster Celeron because cache size is a big factor in how fast it'll crunch.

If we are going to be doing QMDs, HT won't make much difference at all. It'll just mean we need more RAM in every box. Agree about the Celerons - they're abysmal for most sorts of crunching.

On the CPU front, how about a P4 2.4C or 2.6C? I've seen them going for dirt cheap - £40 to £60 on you-know-where. They'll give a decent PPD on QMDs. They'll run cool, because they're Northwoods. And because they're bottom of the range, they'll be nice clockers - 3.0-3.2GHz, probably.
 
I did a few quick calculations using a portable 16U rack approach < = 4-layer Stompy-II module > vs. the 42U rack. This would accommodate at least (4) layers, or 8 CPUs. I made one change: removed the diskless boot and added an inexpensive 256MB USB key to each system for the O/S < Gentoo - Flash Linux >, which allows for simple installation on any DHCP network:

Base Rack System - 16U Portable:
Rack - CAB-F16U66 16U 600 x 600mm = £300.00
Cable Management - Panduit, for the head switch = £20.00
PDU - APC 8 Port Power Distribution Unit = £310.00
Switch - 8 Port 10/100 Unmanaged Switch, rack mounted = £50.00
Mounting Boards - 10mm MDF or equivalent material 4x = £20.00
Mounting Hardware - Misc mounting hardware for nodes 8x = £40.00
Total = £740.00 per module

Option (1) - P4:
PSU - Enermax Liberty 620W v2.2 Black PSU ELT620AWT 1x = £110.00
RAM - Corsair® Value Select 1GB DDR PC3200 Kit (2 x 512MB) (VS1GBKIT400C3) 2x = £116.00
MOB - Asrock 775DUAL-915GL 2x = £91.00
USB - Corsair Flash Voyager 256MB USB2.0 2x = £40.00
CPU - Intel Pentium 4 630 "LGA775 Prescott" 3.0GHz 2x = £246.00
CBL - PSU ATX Power Splitter Cable 1x = £16.00
NIC - CAT-5e Network Cable 2x = £6.00
FAN - Stock CPU Cooler 0x = £0.00
Total = £625.00 per Layer or £312.50 per Node

Option (2) - AMD X2 3800+:
PSU - Enermax Liberty 620W v2.2 Black PSU ELT620AWT 1x = £110.00
RAM - Corsair® Value Select 1GB DDR PC3200 Kit (2 x 512MB) (VS1GBKIT400C3) 2x = £116.00
MOB - Asrock 939 NF4G-SATA2 2x = £96.00
USB - Corsair Flash Voyager 256MB USB2.0 2x = £40.00
CPU - AMD Athlon 64 X2 Dual Core 3800+ 2x = £434.00
CBL - PSU ATX Power Splitter Cable 1x = £16.00
NIC - CAT-5e Network Cable 2x = £6.00
FAN - Stock CPU Cooler 0x = £0.00
Total = £818.00 per Layer or £409.00 per Node

Option (3) - Sempron 3400+:
PSU - Enermax Liberty 620W v2.2 Black PSU ELT620AWT 1x = £110.00
RAM - Corsair® Value Select 1GB DDR PC3200 Kit (2 x 512MB) (VS1GBKIT400C3) 2x = £116.00
MOB - Asrock K8NF4G-SATA2 2x = £76.00
USB - Corsair Flash Voyager 256MB USB2.0 2x = £40.00
CPU - AMD Sempron 64 3400+ 2.0GHz 2x = £186.00
CBL - PSU ATX Power Splitter Cable 1x = £16.00
NIC - CAT-5e Network Cable 2x = £6.00
FAN - Stock CPU Cooler 0x = £0.00
Total = £550.00 per Layer or £275.00 per Node

Still working up a few other options, specifically the Opteron 165s and the new dual-core P4s. Celerons are not a viable option for folding.

[EDIT:]
So an initial startup system would be / cost:

P4 Flavor
--> Rack System & One Layer < 2 Nodes >: £1365.00
--> Additional Layer < 2 More Nodes >: £625.00
Power Consumption: 2 Nodes ~ 436W
Need the processor PPD to calculate the PPD/W.

X2 Flavor
--> Rack System & One Layer < 2 Nodes >: £1558.00
--> Additional Layer < 2 More Nodes >: £818.00
Power Consumption: 2 Nodes ~ 346W
Need the processor PPD to calculate the PPD/W.

Sempron Flavor
--> Rack System & One Layer < 2 Nodes >: £1290.00
--> Additional Layer < 2 More Nodes >: £550.00
Power Consumption: 2 Nodes ~ 212W
Need the processor PPD to calculate the PPD/W.
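
Once we have real PPD figures, something like this quick sketch would rank the flavors. The layer costs and wattages are the figures above; the PPD values are placeholders until we have benchmarks:

```python
# PPD/W and PPD/£ ranking of the three flavors, once PPD numbers exist.
# Layer costs and wattages are the figures above; PPD values are
# placeholders (None) until we have real benchmarks.
flavors = {
    # name: (cost_per_layer_gbp, watts_per_layer, ppd_per_node)
    "P4":      (625, 436, None),
    "X2":      (818, 346, None),
    "Sempron": (550, 212, None),
}

for name, (cost, watts, ppd) in flavors.items():
    if ppd is None:
        print(f"{name}: still need a PPD figure")
        continue
    layer_ppd = 2 * ppd  # two nodes per layer
    print(f"{name}: {layer_ppd / watts:.2f} PPD/W, {layer_ppd / cost:.2f} PPD/£")
```

Drop the real PPD numbers in place of the Nones and it'll spit out the rankings.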

More to follow .......

[END EDIT]
 