OcUK - Stomp Monster Project

Man of Honour
2 Aug 2005
Cleveland, Ohio, USA
Mr. Sprout, that sounds very good. It's exactly what we're looking for. I'd be a bit concerned that the board member is not a part of the OcUK DC community, but on your honor we could trust him. Would he be willing to put up with the electricity costs of such an abomination? Would he commit to a certain amount of time as caretaker?
18 Oct 2002
whilst this seems like a valuable effort, perhaps it would be quicker / easier / cheaper for someone to whip up to Stoke and pick up the rest of Stompy Classic?

if we could get the rest of the layers that are not at chez Magman running, it would be a good starter for ten. this would also give the custodian of the layers some experience of what multiple nekkid PCs running at 100% 24/7 means in terms of heat and noise output and power input.

just a thought

23 Feb 2006

Just thought I'd introduce myself here, even though I'm not in the UK ;) I'm Beansprout's associate from the US who currently hosts the Free Earth Foundation's cluster. I'll try to hit all of the things I've noticed that I felt deserved a response...

We (me / The Free Earth Foundation) would be happy to provide you with adequate space for a small end cluster, although as always there are certain terms and suggestions. All electronics equipment MUST be in enclosures. Computer/electronics equipment running "naked" (with no enclosure) is an incredible fire/shock hazard, and I haven't had time to install a pre-action smoke detection system (basically a high-end smoke detector that can detect smoke before visible flames are present).

We'd also want to "share" the processor time between our priority (processing aerial and satellite imagery) and BOINC, and we'd _like_ the ability to "take over" control of all CPU cycles in "emergent data" cases. This wouldn't happen a lot; a good example of what would be considered "emergent data" would be the imagery from the US hurricanes last year (Rita, Katrina, etc.). This imagery is extremely time-sensitive, and while F@H is important to us, saving lives is slightly more important ;) (Yes, it does save lives; the National Guard/NATO have actually been using our software/imagery in their infrastructure.)

Anyway, beyond that we already have the infrastructure running (our machines PXE boot from a machine here), and the cluster actually already runs BOINC, even though it isn't very well scheduled (someone has to start and stop BOINC by hand, instead of it automagically happening). Basically, all we have to do to install new nodes is dust them out, record the MAC addresses, and plug 'em in.
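The by-hand start/stop could be wired up to a cron job. A minimal sketch of the idea, assuming the stock `boinccmd` client-control tool (flag names vary across BOINC versions) and a hypothetical flag file that the imagery pipeline would touch when emergent data arrives — neither is part of the cluster's actual setup:

```python
# Hypothetical sketch: pause/resume BOINC based on a flag file that the
# imagery pipeline (or an operator) creates when "emergent data" arrives.
# Intended to run from cron every few minutes.
import os
import subprocess

EMERGENT_FLAG = "/var/run/emergent_data"  # hypothetical flag path

def desired_mode(emergent_data_pending: bool) -> str:
    """BOINC run mode: pause crunching while priority imagery work is queued."""
    return "never" if emergent_data_pending else "auto"

def apply_mode() -> None:
    mode = desired_mode(os.path.exists(EMERGENT_FLAG))
    # boinccmd ships with the BOINC client; --set_run_mode is its
    # standard way to pause/resume work.
    subprocess.run(["boinccmd", "--set_run_mode", mode], check=True)

if __name__ == "__main__":
    apply_mode()
```

The imagery job would only need to `touch` the flag file to claim the CPUs, and delete it when done.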

Our cluster, as I'm sure Bean has mentioned, currently consists of IBM NetVistas: Pentium III with 256 MB SDRAM and 20 GB HDs. We can get these machines for $70 each, or $65 if we order in lots of 20. I've noticed our supplier has started offering newer NetVistas: Pentium 4 1.8 GHz systems with 256 MB RAM and a 40 GB HD.

Basically, ours is another distributed computing project, but the problem is that we move such massive amounts of data that we cannot distribute a BOINC-type client, and must use machines on 100 Mbit Ethernet.

Current Infrastructure:
The cluster is currently interconnected by 24-port and 16-port 10/100 Mbit switches. We're hoping this will be replaced by a 96-port switch from Foundry Networks (foundry.com) sometime in the near future, which will allow for expansion (our initial buildout plan calls for 100 nodes) and gives us a 1 Gbit switch fabric from the core to the edge.

Our core network is comprised of three machines: storage1, my initial storage server, hosts a 1.75 TB RAID 5 array; storage2, a newish storage server, currently hosts three 250 GB Serial ATA drives, which will soon be upgraded to approximately five 400 GB Serial ATA drives and a PCI-E RAID card; and my workstation is also in there, used to visually verify and correct data errors. This "core" segment is interconnected with a 10/100/1000 Mbit switch, which will eventually have a 1 Gbit trunk back to the 96-port switch for the cluster.
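For anyone not familiar with RAID 5 sizing, the usable space behind the upgrade above works out like this (drive counts and sizes are the ones quoted in the post; the formula is the standard one-drive-of-parity rule):

```python
# RAID 5 arithmetic: usable capacity is (n - 1) drives' worth, since
# one drive's worth of space goes to distributed parity.
def raid5_usable_gb(drive_count: int, drive_size_gb: int) -> int:
    assert drive_count >= 3, "RAID 5 needs at least three drives"
    return (drive_count - 1) * drive_size_gb

print(raid5_usable_gb(5, 400))  # five 400 GB SATA drives -> 1600 GB usable
```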

There's also some random VoIP and WiFi equipment on that network, but it's not vital to the cluster. Just in case you're curious, our data gets shipped out to local datacenters on 250 GB USB drives, where it gets uploaded to our production webservers (currently we have 2 TB of production webspace, with another 800 GB coming online shortly).

Anyway, as with the Free Earth Foundation, I'd be happy to donate the cost of the power. Also, cooling is currently not a problem, as the P3s we are using have a very low thermal output which just dissipates throughout the room. The machines are also arranged in a "hot aisle, cold aisle" configuration (machines are oriented back to back, or in this case side to side, so that hot air vents into one "column" and cold air gets sucked in through a different column). Our power consumption at full load is 0.66 amps per machine (@ 110 volts). I would give you the current "total" consumption, but my new power outlets haven't been installed yet. I'm also getting four 20-amp circuits instead, with 8-16 new outlets, so we'll have plenty of room to grow electrically.
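A quick back-of-envelope on those electrical figures. The amps, volts, and circuit rating are the ones quoted above; the 80% continuous-load derating is my assumption (standard US electrical-code practice for continuous loads), not something the post states:

```python
# Per-machine wattage and machines-per-circuit from the quoted figures:
# 0.66 A per machine at 110 V, on 20 A circuits.
AMPS_PER_MACHINE = 0.66
VOLTS = 110
CIRCUIT_AMPS = 20
DERATING = 0.80  # assumed continuous-load limit, not in the post

watts_per_machine = AMPS_PER_MACHINE * VOLTS  # ~72.6 W per node at full load
machines_per_circuit = int(CIRCUIT_AMPS * DERATING / AMPS_PER_MACHINE)  # ~24
```

So four such circuits would comfortably carry the 100-node buildout mentioned earlier.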

Feel free to let me know if you have any questions. I'll try to keep an eye on this thread, but if you want to email me, it's [email protected] (MSN is the same).
15 May 2004
I see a few issues. Back in t' day, it was SETI all the way. Now Folding seems to be the project of choice on the forums, but there are a fair number of people running other things. If it's to be a Folding-only cluster, then that might alienate some folk.

Also, I know the layers of Stompy that aren't online aren't going to cut it for the big-scoring Folding WUs now, but they would cut it on other projects. I suggest either getting them online or perhaps selling them off to the community, with the proceeds going to the fund for this.
9 Jan 2003
look at this forum

look at the posts in the last 24 hours
the last week
the last month

we're not the superpower that we once were, and with the cost of buying/running such a project, we've not got the willpower to do so.
my little home farm of 6 PCs, with 7-8 more on their way, costs a small fortune to run/maintain, and is only viable for me as I don't have to pay the eleccy bill!

also, we're a split team: before, we had 99% of people on SETI; now we're split over 10 or more projects!

every so often I look at what gives the best bang per buck for Folding/SETI; it's been the AMD 2500s up until recently

just worked this out

Intel Pentium 4 2.8GHz (Prescott) Socket 478 £79.09
MSI 661FM2-LSR SKT478 mATX DDR400 AGP onboard graphics £29.78
Hitachi 7K80 Deskstar 40GB 7200RPM ATA/100 2MB Cache - OEM £26.62
1GB DDR PC3200 400MHz 184pin Ram £40.84

Cart Total: £176.33

Shipping Band: £4.49
Approx Cart Weight: 2.32Kg

SubTotal: £180.82
VAT: £31.67
Total: £212.49

that's a QMD workhorse for £212.49, and cheaper if you buy 10+ at a time.
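Re-running that cart arithmetic as a sanity check (item prices are the ones quoted; 17.5% was the UK VAT rate at the time). Note that a flat 17.5% on the subtotal gives £31.64 of VAT, a few pence under the quoted £31.67, so the supplier presumably rounds VAT per line:

```python
# Rebuild the quoted cart: four components plus shipping, then UK VAT.
items = {
    "P4 2.8GHz (Prescott) S478": 79.09,
    "MSI 661FM2-LSR mATX board": 29.78,
    "Hitachi 7K80 40GB HDD":     26.62,
    "1GB DDR PC3200 RAM":        40.84,
}
shipping = 4.49
VAT_RATE = 0.175  # UK VAT rate at the time

subtotal = round(sum(items.values()) + shipping, 2)  # 180.82, matches the quote
vat = round(subtotal * VAT_RATE, 2)                  # 31.64 (quote says 31.67)
total = round(subtotal + vat, 2)                     # 212.46
```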

(note: not from Overclockers, but a trade supplier I have)