OcUK - Stomp Monster Project

BillytheImpaler said:
Shouldn't option(1), the P4 option, have an Intel CPU rather than an AMD X2 3800+? :confused:
:o LOL ... yes, and the Motherboard was wrong as well .. :D

All sorted out now. That's what I get for trying to do things the easy way <Ctrl-A, Ctrl-V, Ctrl-P> ... Oops ... lol :o
 
Joe42 said:
Found an interesting article: we should calculate £/MHz in order to find the best-value CPU, as in the first article.
I'm not so sure that is the way to approach it, especially with the new dual-core processors. I'd say that PPD <if we are talking strictly F@H> would be the approach to take.

Price per point may be the optimum formula to work from.
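For illustration, here's a rough Python sketch of that price-per-point idea. The prices and PPD figures below are purely hypothetical placeholders, not benchmark results:

# Price-per-point comparison - all prices and PPD figures are made-up placeholders.
candidates = {
    "Hypothetical P4 (3.0 GHz)": {"price_gbp": 140, "ppd": 110},
    "Hypothetical X2 3800+":     {"price_gbp": 250, "ppd": 190},
}

for name, c in candidates.items():
    print(f"{name}: £{c['price_gbp'] / c['ppd']:.2f} per point-per-day of output")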
 
QMDs will be around for some time. They're not going to switch compilers/math libraries any time soon either. I think that P4s would be a good bet.

Mr. Mattus, are you on the beta team? If so, you'll see the new QMDs before the rest of us do. You'll have to give us a full report when it comes time.
 
KE1HA said:
My thought on the processors was rather simple. With either an X2 or a P4, we'd get nearly 2x the output for the same power consumption <if not less, although I haven't done a formal study on power consumption vs. PPD on the WUs>.

I know the rack seems overkill, but if we're to convince somebody to give up 10 to 15 amps of power 24x7, they will probably want it to look the part and fit in somewhere appropriate, but ya never know.

As for the modular PSU, it did cross my mind to run 4 systems <I've found some commercially made power splitters> from one 650W PSU, and then we don't need all the wires and crap hanging around doing nothing but disrupting airflow. But point taken on the additional cost. In a case where you're hammering the PSUs, quality is important for long-term performance & stability.

An example of power/heat...

A 42U rack + 10 DL385s with two Opteron 275s each sucks ~15 kW and kicks out over 26K BTU/hr. So that's a good eight 600 CFM fans in the back of the rack... not to mention the additional A/C requirements.

Check the number of PDUs and the phase usage carefully from your 16 amp lines.
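As a rough way to sanity-check figures like these, here's a small Python sketch using rule-of-thumb constants (1 W is about 3.412 BTU/hr, and roughly 1.76 CFM per watt per degree C of allowed air temperature rise); the 5 kW example load is just a placeholder:

# Rule-of-thumb heat and airflow conversions - the example load is a placeholder.

def watts_to_btu_hr(watts):
    # 1 W of IT load ends up as roughly 3.412 BTU/hr of heat to remove.
    return watts * 3.412

def cfm_needed(watts, delta_t_c=10.0):
    # Approximate airflow (CFM) to carry away `watts` of heat with a
    # `delta_t_c` degC rise in the cooling air (from air's heat capacity).
    return 1.76 * watts / delta_t_c

load_w = 5000  # placeholder rack load
print(f"{load_w} W ~= {watts_to_btu_hr(load_w):,.0f} BTU/hr, "
      f"needs ~{cfm_needed(load_w):,.0f} CFM at a 10 degC rise")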
 
BillytheImpaler said:
Mr. Mattus, are you on the beta team? If so, you'll see the new QMDs before the rest of us do. You'll have to give us a full report when it comes time.

Yup, I joined a couple of weeks back. Fairly sure Rich is as well.

No sign of them yet :p
 
Biffa said:
Exactly, £/MHz is a useless measure nowadays, with lower-MHz CPUs beating higher-MHz ones in most cases.
Not if we're only interested in P4s.
Add up the clock speeds for dual cores to get a total, and that ought to act as a fairly good measure of folding performance, as long as it's all the same core/architecture.
Then we can use that to calculate a £/MHz number for each processor to find the best one.
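For what it's worth, a quick Python sketch of that summed-clock £/MHz comparison; the prices are invented placeholders and it only makes sense within one core family:

# Summed-clock price-per-MHz - prices are made up; only comparable within one architecture.
p4_family = [
    # (name, hypothetical price in £, total MHz: clock x number of cores)
    ("P4 630, 3.0 GHz single core",  140, 3000),
    ("Pentium D 820, 2.8 GHz dual",  170, 2 * 2800),
]

for name, price_gbp, total_mhz in p4_family:
    print(f"{name}: £{price_gbp / total_mhz * 1000:.1f} per 1000 MHz")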

An example of power/heat...

A 42U rack + 10 DL385s with two Opteron 275s each sucks ~15 kW and kicks out over 26K BTU/hr. So that's a good eight 600 CFM fans in the back of the rack... not to mention the additional A/C requirements.
Wouldn't it be easier to keep it cool without the rackmount? That way you would have more space and could have each PC sitting on a shelf in the open air. I know it's a bit untidy and such, but it ought to be cooler and cheaper...
 
Hi guys, sorry to butt in, but have you thought about power consumption over the term of, say, 1 year?
I have both A64s (939) and P4s and can tell you the AMD systems use at least a third less power, even when overclocked.
So whilst an AMD system may be more expensive to start with, the cost would even out over the first year and after that you'd be quids-in.
Just a thought; I don't know how you intended to pay for the electricity.
Tom :)
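To put a rough number on Tom's point, a quick Python sketch; the wattages and the 10p/kWh tariff are assumptions, not measurements:

# Annual running-cost difference - wattages and tariff below are assumed figures.
HOURS_PER_YEAR = 24 * 365

def annual_cost_gbp(system_watts, pence_per_kwh=10):
    return system_watts / 1000 * HOURS_PER_YEAR * pence_per_kwh / 100

p4_watts, a64_watts = 230, 150   # assumed wall-socket draw under full load
saving = annual_cost_gbp(p4_watts) - annual_cost_gbp(a64_watts)
print(f"~£{saving:.0f} per node per year in the A64's favour")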
 
Joe42 said:
Not if we're only interested in P4s.
Add up the clock speeds for dual cores to get a total, and that ought to act as a fairly good measure of folding performance, as long as it's all the same core/architecture.
Then we can use that to calculate a £/MHz number for each processor to find the best one.

Wouldn't it be easier to keep it cool without the rackmount? That way you would have more space and could have each PC sitting on a shelf in the open air. I know it's a bit untidy and such, but it ought to be cooler and cheaper...

1. Calculating £/MHz is pointless; it would be better to get an idea of how quickly it does a unit for your chosen project. In essence a two-step process: calculate the time per WU, then the WU time per £.
That is a static capital cost for each node you add, but it allows the best price per WU.
The reason I say this is that new hardware/nodes will need to have a comparable performance rating.

Next, calculate the power consumption cost for each hour of running for a single node - that is, both the power (kW) and the heat output (BTU) per hour. Calculate the cost of electricity per hour and the cost of cooling the heat output per hour.
Then take the time for the average WU in hours (or part of an hour) and multiply by the hourly cost of electricity and cooling to give the cost of a WU in terms of electricity and cooling.

This is the ongoing cost per WU that you will incur for processing it. If you have more than one node, multiply that figure by the number of nodes. (There's a rough sketch of this costing at the end of this post.)

2. Cooling without a rack is OK as long as it's designed right; however, you will still have the same heat output in BTU/hr being soaked into the room, which will need extraction.

In addition you'll need to factor in the cost of replacement parts (hard discs are the norm) and spread that over each WU's processing cost, or set aside a pool of money to replace dead bits.

You could make a webpage for public display of progress and a PayPal donation system for paying for the hosting/electricity.
You'd need to start with a single node and gain financial backing through the PayPal donations.
At the end of each month you'd need to make a judgement call on whether the project's continuation is a go/no-go, or you (the individual running it) will incur the costs and possibly debt (even after recovering money from the sale of the nodes).
Then come all the complications with that individual's tax... but that would need exploration.
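Here's the rough sketch mentioned above - a minimal Python version of the per-WU costing, where every input figure in the example call is a made-up placeholder:

# Per-WU costing sketch - all figures in the example call are placeholders.

def wu_costs(node_price_gbp, hours_per_wu, node_kw,
             elec_gbp_per_kwh, cooling_gbp_per_hr, spares_gbp_per_year):
    # Capital cost spread over a year's worth of WUs from one node.
    wus_per_year = (24 * 365) / hours_per_wu
    capital_per_wu = node_price_gbp / wus_per_year
    # Ongoing cost: electricity + cooling for the hours the WU takes,
    # plus a share of the replacement-parts pool.
    running_per_hr = node_kw * elec_gbp_per_kwh + cooling_gbp_per_hr
    ongoing_per_wu = running_per_hr * hours_per_wu + spares_gbp_per_year / wus_per_year
    return capital_per_wu, ongoing_per_wu

# Example: £400 node, 30 h per WU, 0.25 kW draw, 10p/kWh, 1p/hr cooling, £50/yr spares.
capital, ongoing = wu_costs(400, 30, 0.25, 0.10, 0.01, 50)
print(f"~£{capital:.2f} capital and ~£{ongoing:.2f} running cost per WU, per node")

Multiply the ongoing figure by the number of nodes for the farm as a whole, as per point 1.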
 
NickK said:
An example of power/heat...

A 42U rack + 10 DL385s with two Opteron 275s each sucks ~15 kW and kicks out over 26K BTU/hr. So that's a good eight 600 CFM fans in the back of the rack... not to mention the additional A/C requirements.

What sort of dual-CPU rig uses 1,500 W? You got 50 HDDs in them or something?
 
On the APCs I've monitored, dual Xeon full servers, fully loaded CPUs, with disk drives etc., pulled about 13.0 amps at 110 VAC and 7.5 - 8.0 amps at 240 VAC for 8 servers.

Remember to WAV (say hello there :p): Watts = Amps x Volts

So 8 A * 240 VAC = ~1,920 W per 8 servers

or

13 A * 110 VAC = ~1,430 W per 8 servers

No way would this be 15,000 watts of power, which stands to reason: if you had 8 550 W PSUs, the maximum theoretical available power is 4,400 W.
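For anyone who wants to re-run those sums, a short Python check of the Watts = Amps x Volts figures above:

# Recomputing the measured draws above: W = A x V, then per-server share of 8 servers.
for amps, volts in [(8.0, 240), (13.0, 110)]:
    total_w = amps * volts
    print(f"{amps} A @ {volts} VAC = {total_w:.0f} W total, ~{total_w / 8:.0f} W per server")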
 
Well, here's the official figure for a DL385:

244 W with 2x 512 MB DIMMs and a single Opteron 265, just to put it in perspective.


Anyway, electricity supply and cooling would be a major factor in the long-term success of this kind of project. It's not a show-stopper, but it does need to be taken very seriously.
 
PhilthyPhil said:
Well, he said 10 dual rigs use 15 kW, so that's 1,500 W per rig... which is still a hell of a lot.

Good to see someone's awake. ;)

5,750 W max if you were to put 10 in without hot-swap PSUs, a phase power switch, a couple of big Cisco 3750 switches, more ickle Cisco switches, additional PDUs, and not add any slack... oh, and remove the 8 fans cooling it :D
 
I'd possibly try to lay down some LV Xeons; they've got an awesome production rate with hyperthreading considering the CPU cost. They've also got pretty good performance per watt, considering that at stock 1.3 Vcore I've got two LV rigs running at dual 2.7 GHz and dual 2.8 GHz, up from the stock 1.6 GHz.

Whereabouts would all this be hosted, by the way?

There's a datacenter nearby where I host a cluster of 5 machines crunching away for basically the price of the electricity they use, as it's effectively for charity. They're expanding to another datacenter in the near future as well; I can get in contact with the owner if that's of interest?
 
I'm on the board of a non-profit org, and a fellow board member is running 25 PCs in his basement as our imagery-processing cluster. There's room for lots more kit, but it would need extra cooling & electricity capacity (though we're installing those anyway), and there's no physical access for any of us.

But it's an option :) (Well, I haven't asked if we could use the space or his time yet, so this is more of a hypothetical 'what is the consensus' post :) )
 