OcUK - Stomp Monster Project

Is this project still active, meaning is it still being expanded upon, or are there upgrade plans etc.?

Additionally, I was wondering if there would be any interest in starting up another project similar to it for F@H?

I've been working on plans for building my own cluster for some time now, and would be interested in working with or leading a project similar in nature.

Just a thought.
.
 
Berserker said:
Organising a new project off our own backs would take some doing, but if it looks like it's getting serious, then I'll take it to the Don's Room for consideration.

Mr _Berserker_ - Appreciate the appeal to the Dons < provided there was / is interest >. Here's what I was thinking about from a design standpoint:

--> Master & slave node diskless system < Linux to save on licence costs & ease of remote administration >
--> Apache web server on the Master for system / node performance data < e.g. Ganglia or something similar >
--> Need an IP address in a DMZ somewhere for access to stats < heat, loading etc >
--> PSUs could be 1 PSU for every two systems < saves costs >, though it does add a bit of heat.
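
Just to illustrate what I mean by pulling node performance data off the master < assuming we went with Ganglia >, here's a rough sketch in Python. The master hostname is a placeholder, and it assumes gmond is answering on its default TCP port 8649:

# Rough sketch: pull per-node load figures from Ganglia on the master.
# Assumes gmond is listening on its default TCP port 8649; "stompy-master"
# is a placeholder hostname, not a real machine.
import socket
import xml.etree.ElementTree as ET

MASTER = "stompy-master"   # hypothetical master node
GMOND_PORT = 8649          # gmond's default XML port

def fetch_gmond_xml(host, port=GMOND_PORT):
    """gmond dumps its cluster state as XML to anyone who connects."""
    with socket.create_connection((host, port), timeout=5) as sock:
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

root = ET.fromstring(fetch_gmond_xml(MASTER))
for host in root.iter("HOST"):
    metrics = {m.get("NAME"): m.get("VAL") for m in host.iter("METRIC")}
    print(host.get("NAME"), "load_one =", metrics.get("load_one"),
          "cpu_user =", metrics.get("cpu_user"))

The same numbers would feed whatever the Apache front page ends up showing.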

Base Config Thoughts:
1x 48U Rack < Allows for extensive expansion >
1x 24 Port Switch < managed, for remote access, would be preferred >
1x APC MasterSwitch < also remotely manageable >, 1 per 8 nodes.

Processor Considerations: < power consumption vs. performance considered >
--> AMD X2s due to power and cost. But the new LGA775 "Presler" 2.8GHz has a lot to offer: an L2 of 2MB per core on a 65nm chip, and it also supports SSE3.

Motherboards:
Any mid-range Micro-ATX would suffice, as Micro-ATX will allow for 2x nodes per layer in the rack. It may be difficult to get a 955X Micro-ATX motherboard to support the Presler core chips. Would prefer them all to be the same, but it's not critical. Would definitely want a VGA output onboard.

RAM:
Well, that's not too difficult.

PSU:
A decent 450 to 550W modular PSU would do. The higher the wattage the better, as we'd be running two boards from one PSU, and they'd be more efficient.

CPU Coolers:
A Thermalright XP-120 or SI-120 would do nicely to keep things cool.

Thoughts, comments, ideas???
.
 
Billy -

There were a couple of other considerations I thought about. With the rack specifically, it makes for a tidy configuration / installation, it's easily moved to a more appropriate location < if desired >, etc. The cost of a rack is approximately the price of 1.5 nodes, but it provides the appropriate environment for managing the cluster < wires, switches, power etc etc >. It could also have the names of the contributors plastered on the front glass door :D

The RAID-1 setup is really a must. A couple of zippy SATA drives would do the job. Disk I/O is rather low once the system is up and running; it would depend on the checkpoint setting for each installation and the node monitor update frequency. That could be staggered if necessary. We run a 64-node cluster in a similar set-up, which has a very high disk I/O rate on some applications; it doesn't seem to hurt performance much, and the disks have survived a couple of years of pounding. What are you thinking with regard to 50MB per node? Is that due to the core space requirements?

Another consideration < which simplifies the cluster configuration > is to run a caching DNS server and add in a simple router to manage communications to and from Stanford or the outside world.

As for future proofing, well, that's tricky, but certainly something to keep in the back of your mind, specifically with the use of GPUs as additional crunchers. I've read a lot about the X1800 and it would be a nice upgrade to any system for sure. Have not checked out the price difference in motherboards though.
.
 
My thought on the processors was rather simple. With either an X2 or a P4, we'd get nearly 2x the output for the same power consumption < if not less, although I haven't done a formal study on power consumption vs. PPD on the WUs >.

I know the rack seems overkill, but if we're to convince somebody to fess up 10 to 15 amps of power 24x7, they will probably want it to look the part and fit in somewhere appropriate, but ya never know.

As for the modular PSU, it did cross my mind to run 4 systems < I've found some commercially made power splitters > from one 650W PSU, and we don't need all the wires and crap hanging around doing nothing but disrupting air flow. But point taken on the additional costs. In a case where you're hammering the PSUs, quality is important for long-term performance & stability.
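
To put some rough numbers behind the shared-PSU idea < the per-node draw here is a guess for illustration, not a measurement >:

# Back-of-the-envelope headroom check for running several boards off one PSU.
# The 140W per-node figure is an assumption, not a measured draw.
def psu_headroom(psu_rated_w, node_draw_w, nodes):
    load = node_draw_w * nodes
    return psu_rated_w - load, 100.0 * load / psu_rated_w

for nodes, psu_w in [(2, 550), (4, 650)]:
    spare, pct = psu_headroom(psu_w, node_draw_w=140, nodes=nodes)
    print(f"{nodes} nodes on a {psu_w}W PSU: {pct:.0f}% loaded, {spare}W spare")

Two nodes on a 550W unit leaves plenty of slack; four on a 650W unit is getting close to the limit, which is another reason PSU quality matters.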
.
 
Beansprout said:
Getting jealous of my foundation's uber-leet P3 rosetta@home cluster are we? :D

http://www.freeearthfoundation.com/cluster/2/

:D

One question I have....how are we going to afford 30-odd X2s - are they truly the best price/perf?
Definitely jealous!!!

The scope was more along the lines of 16 physical CPUs, but could be more. If the option is the X2, say the 3800+ flavour, or the new P4s, they're about what, £160-ish per processor?

The Opteron option, say a 165 or something, would be attractive as well, as it's on Socket 939 now.

They're all just ideas, but worth exploring mathematically for the most bang for the buck.
.
 
Berserker said:
That's still £2.5k, and you've not even considered mobos/PSUs/heatsinks/etc yet. Let's say 50 people got involved (a not unreasonable number given the number involved in the original stompmonster), that's £50 each, and probably at least £75 by the time you've added all the other stuff. While some people might be willing to offer that sort of money, I very much doubt you'd get 50 of them.

I'm not trying to blow your idea out of the water here, but I do think you may be setting your aim too high.
Definitely a valid point. The design / idea was / is for a system of 16 or so, but we can start with as little as one master and one slave and gradually add to the system. I personally wouldn't have a problem with coughing up for a couple of nodes, say, twice per year, but everyone's personal situation is different, so we'd have to allow for everyone's needs / abilities to participate.

[EDIT]: One thing is for sure, we're not going to solve all the questions and potential problems in one day, so maybe a list of pros / cons / challenges / opportunities to excel < management 101 classes -- see, I do remember some things :eek: > is in order to hash this out properly.
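
Just as a feel for how the "chip in a couple of times a year" idea scales < contributor count, contribution size and per-node cost here are all assumptions, not commitments >:

# Rough funding arithmetic -- every figure below is an assumption for illustration.
CONTRIBUTORS = 20                 # assumed number of regular contributors
GBP_PER_PERSON_PER_YEAR = 2 * 10  # e.g. £10 twice a year
COST_PER_NODE = 300               # assumed all-in cost per node, GBP

budget = CONTRIBUTORS * GBP_PER_PERSON_PER_YEAR
print(f"~£{budget} per year buys roughly {budget // COST_PER_NODE} extra node(s)")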
.
 
Joe42 said:
I agree. My suggestion:
-Asrock board, onboard graphics and LAN, nF4. £36
-Sempron 2800+ overclocked to 3400+ speeds with stock HSF £52
-Corsair Value 512MB RAM, CAS 2.5 £28
-Cheap 350W power supply £15
Total: £131 per node

Less than the £217 for just an X2 3800+.
x16 = Grand Total: £2096
Obviously you would need one with an HDD connected to the internet.

This isn't a high-end enterprise server, it's a cheap donor-funded folding farm.

Let me have a few days to put together some numbers, real numbers that provide for initial capital investment and sustaining costs, and see if I can come up with a decent proposal that makes some sense.

I looked at the Asrocks for myself, but OcUK doesn't sell them, or at least I didn't see them < I believe Asrock is a subdivision of Asus >. The other side of this is, the contribution from OcUK could be a small discount to us if all the material was / is purchased from them. Don't know if Spie is interested in that or not, but if we put a decent proposal in front of him < after all, he is all about business >, the advertising and constant growth of the member base within the group / project has to have some marketing value that would allow him to realise a real return on investment, which is what business is all about.
 
magman said:
First, you will have to take into account the running costs for the new cruncher ("Stompy II", "Son of Stompy", or "Stompy! the Reincarnation" perhaps). With 16 layers each putting out over 100 Watts of power, you have something in the order of a 2kW heater running 24 hours. Apart from dissipating this heat, you also will have 40+ kWh of electricity per day to pay for somehow. This can be mitigated by not going for the cheapest PSU; try finding one with PFC and an efficiency greater than 80% if possible. A more efficient PSU will easily pay for itself in a very short timeframe.
Definitely something that needs to be looked at if Stompy II is to reside in a single residential location. Agree 100% that a more efficient, though a bit more expensive, PSU will be the answer to better overall performance and cost efficiency.
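
To put that point in pounds and pence < treating magman's ~2kW figure as the DC-side load, and with the efficiency figures and unit price below being assumptions for illustration >:

# Rough yearly running-cost comparison for cheap vs. efficient PSUs.
# The 2kW DC-side load is taken from magman's estimate; the efficiencies
# and the price per kWh are assumptions.
LOAD_DC_WATTS = 2000
PRICE_PER_KWH = 0.10   # assumed GBP per kWh

def yearly_cost(dc_load_w, efficiency):
    wall_watts = dc_load_w / efficiency
    kwh_per_year = wall_watts / 1000.0 * 24 * 365
    return kwh_per_year * PRICE_PER_KWH

cheap = yearly_cost(LOAD_DC_WATTS, 0.70)   # assumed cheap-PSU efficiency
good = yearly_cost(LOAD_DC_WATTS, 0.82)    # assumed PFC / 80%+ PSU

print(f"cheap PSUs: ~£{cheap:,.0f} per year")
print(f"80%+ PSUs:  ~£{good:,.0f} per year")
print(f"saving:     ~£{cheap - good:,.0f} per year")

Even with those guesses, the yearly saving is in the same region as the price difference on the PSUs themselves, so the payback point stands.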

magman said:
One possible solution is to distribute Stompy, 4 hosts with 4 layers each for example (this seems very apt to me for a distributed distributed cruncher), though this would mean having 4 boot disks rather than one. This option would also considerably increase the resilience of Stompy II.
This would also be a good option if the power / insurance situation that Biffa referred to becomes a roadblock. I've found some 16U portable racks < fully assembled, doors, panels, casters etc > that could allow us to split Stompy-II into 4 units < something like Stompy-II(A), Stompy-II(B), etc etc > that could each easily hold 4 layers, with plenty of room to spare. Price ~ £300.00 delivered.

magman said:
There has been some mention of different heatsinks. There is a very practical limitation on the size of the heatsinks due to the proposed layering of the crunchers. The taller the heatsink/fan combination, the fewer layers you can fit in a rack. For this reason, you can't use some of the better price/performance heatsinks such as the Arctic Cooling range, as they are normally quite tall.
I thought about the low-profile XP-120. I know it's a large heatsink, but it would certainly dissipate the heat this thing will be generating.

magman said:
One final point. Choosing an MATX (or even Mini-ITX now) board is a good choice from a space and accommodation point of view, but I haven't found many MATX boards that are good overclockers as yet. Most manufacturers pigeonhole MATX boards at the media centre or low-cost part of the market; it is normally enthusiast boards that have good overclocking options.
I was going to make a suggestion to Spie on the Asrock boards. I will find out today, but I'm pretty sure Asrock is a subdivision of Asus, and from a few of the reports I've found floating about on the Web, it seems the boards OC rather well.
.
 
I did a few quick calculations using a portable 16U rack approach < = 4-layer Stompy-II module > vs. the 42U rack. This would accommodate at least (4) layers, or 8 CPUs. Made one change: removed the diskless boot and added an inexpensive 256MB USB key for each system for the O/S < Gentoo - Flash Linux >, which allows for simple installation on any DHCP network:

Base Rack system - 16U Portable:
Rack CAB-F16U66 16U 600 x 600mm = £300.00
Panduit Cable Management For the Head Switch = £20.00
APC PDU 8 Port APC Power Distribution Unit = £310.00
Switch 8 Port 10/100 Unmanaged Switch, Rack Mounted = £50.00
Mounting Boards 10mm MDF or equal material 4x = £20.00
Mounting Hardware Misc mounting hardware for nodes 8x = £40.00
Total = £740.00 per module

Option(1) P4 Option:
PSU - Enermax Liberty 620W v2.2 Black PSU ELT620AWT 1x = £110.00
RAM - Corsair® Value Select 1GB DDR PC3200 Kit (2 x 512MB) (VS1GBKIT400C3) 2x = £116.00
MOB - Asrock 775DUAL-915GL 2x = £91.00
USB - Corsair Flash Voyager 256MB USB2.0 2x = £40.00
CPU - Intel Pentium 4 630 "LGA775 Prescott" 3.0GHz 2x = £246.00
CBL - PSU ATX Power Splitter Cable 1x = £16.00
NIC - CAT-5e Network Cable 2x = £6.00
FAN - Stock CPU Cooler 0x = £0.00
Total = £625.00 per Layer or £312.00 per Node

Option(2) AMD X2 3800+:
PSU - Enermax Liberty 620W v2.2 Black PSU ELT620AWT 1x = £110.00
RAM - Corsair® Value Select 1GB DDR PC3200 Kit (2 x 512MB) (VS1GBKIT400C3) 2x = £116.00
MOB - Asrock 939 NF4G-SATA2 2x = £96.00
USB - Corsair Flash Voyager 256MB USB2.0 2x = £40.00
CPU - AMD Athlon 64 X2 Dual Core 3800+ 2x = £434.00
CBL - PSU ATX Power Splitter Cable 1x = £16.00
NIC - CAT-5e Network Cable 2x = £6.00
FAN - Stock CPU Cooler 0x = £0.00
Total = £818.00 per Layer or £409.00 per Node

Option(3) Sempron 3400+:
PSU - Enermax Liberty 620W v2.2 Black PSU ELT620AWT 1x = £110.00
RAM - Corsair® Value Select 1GB DDR PC3200 Kit (2 x 512MB) (VS1GBKIT400C3) 2x = £116.00
MOB - Asrock K8NF4G-SATA 2 2x = £76.00
USB - Corsair Flash Voyager 256MB USB2.0 2x = £40.00
CPU - AMD Sempron 64 3400+ 2.0GHz 2x = £186.00
CBL - PSU ATX Power Splitter Cable 1x = £16.00
NIC - CAT-5e Network Cable 2x = £6.00
FAN - Stock CPU Cooler 0x = £0.00
Total = £550.00 per Layer or £275.00 per Node

Still working up a few other options, specifically the Opteron 165s and the new dual-core P4s; Celerons are not a viable option for folding.

[EDIT:]
So an initial startup system would be / cost:

P4 Flavor
--> Rack System & One Layer < 2 Nodes >: £1365.00
--> Additional Layer < 2 More Nodes >: £625.00
Power Consumption: 2 Nodes ~ 436W
Need the Processor PPD to calculate the PPD/W

X2 Flavor
--> Rack System & One Layer < 2 Nodes >: £1558.00
--> Additional Layer < 2 More Nodes >: £818.00
Power Consumption: 2 Nodes ~ 346W
Need the Processor PPD to calculate the PPD/W

Sempron Flavor
--> Rack System & One Layer < 2 Nodes >: £1290.00
--> Additional Layer < 2 More Nodes >: £550.00
Power Consumption: 2 Nodes ~ 212W
Need the Processor PPD to calculate the PPD/W

More to follow .......

[END EDIT]
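
Once someone posts measured PPD figures, the PPD/W comparison is trivial to run < the wattages below are the two-node estimates above; the PPD slots are deliberately left empty because there are no real numbers yet >:

# PPD-per-watt sketch. Wattages are the two-node estimates above; fill in
# measured PPD figures when available -- none are real yet.
watts_per_layer = {"P4 630": 436, "X2 3800+": 346, "Sempron 3400+": 212}
ppd_per_layer   = {"P4 630": None, "X2 3800+": None, "Sempron 3400+": None}

for name, watts in watts_per_layer.items():
    ppd = ppd_per_layer[name]
    if ppd is None:
        print(f"{name}: waiting on a measured PPD figure")
    else:
        print(f"{name}: {ppd / watts:.2f} PPD per watt")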
.
 
BillytheImpaler said:
Shouldn't option(1), the P4 option, have an Intel CPU rather than an AMD X2 3800+? :confused:
:o LOL ... yes, and the Motherboard was wrong as well .. :D

All sorted out now. That's what I get for trying to do things the easy way < Ctrl-A, Ctrl-V, Ctrl-P > .. Oops.. lol :o
.
 
Joe42 said:
Found an interesting article: we should calculate £/MHz in order to find the best-value CPU, as in the first article.
I'm not so sure that's the way to approach it, especially with the new dual-core processors. I'd say that PPD < if we are talking strictly F@H > would be the approach to take.

Price per point may be the optimum formula to work from.
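
Something like this is what I have in mind < the node prices are the per-node figures from the costings above; the PPD argument has to come from real measurements before the comparison means anything >:

# "Price per point": capital cost divided by points per day -- lower is better.
# Node prices are from the costings above; PPD values must be measured.
def pounds_per_ppd(node_cost_gbp, node_ppd):
    return node_cost_gbp / node_ppd

node_cost = {"P4 630": 312, "X2 3800+": 409, "Sempron 3400+": 275}

# Once a measured figure exists, e.g.:
#   pounds_per_ppd(node_cost["X2 3800+"], measured_ppd)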
.
 
On the APCs I've monitored, dual-Xeon full servers, fully loaded CPUs, with disk drives etc., pulled about 13.0 amps at 110VAC and 7.5 - 8.0 amps at 240VAC for 8 servers.

Remember to WAV ( Say hello there :p ) Watts = Amps x Volts

So 8A * 240VAC = ~ 1920W per 8 servers

or

13A * 110VAC = ~ 1430W per 8 servers

No way would this be 15,000 watts of power, which stands to reason: if you had 8 550W PSUs, the maximum theoretical available power is 4400W.
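
The same sums in a couple of lines, for anyone who wants to plug in their own readings:

# Watts = Amps x Volts, using the readings quoted above for 8 servers.
def watts(amps, volts):
    return amps * volts

print(watts(8.0, 240), "W per 8 servers at 240VAC")    # ~1920 W
print(watts(13.0, 110), "W per 8 servers at 110VAC")   # ~1430 W
print("theoretical ceiling:", 8 * 550, "W from eight 550W PSUs")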
.
 