Break the supercomputer processing record

I was contacted by another forum trying to get everyone organised into assembling enough processing power to outdo the world's largest supercomputers.


Since this is an overclocking forum, I figure there must be enough WMD lying unused here to make it worth mentioning.

Here's the link:

http://www.xtremesystems.org/forums/showthread.php?t=249459

And here are the five-minute instructions to install the utility. You don't have to change your screensaver, just have it ready for May:

http://www.xtremesystems.org/forums/showthread.php?t=230613


If you can install it now, ready for the start of May, I think people globally can combine to form enough power to outdo a supercomputer, no problem.


This is NASA's main Columbia computer:

[Image: NASA's Columbia supercomputer]
 
I get images of someone controlling the world's biggest botnet when this thing goes off, lol.

KaHn
 
Isn't this exactly the same as SETI and that cancer research program? Just wondering what the point of installing this software is.
 
That's not one computer. Why is it such a tiny datacentre?


No, it's all about parallel processing. The days of one giant chip doing everything are long gone, afaik.

So whether the power is in one room like NASA's, or spread across the world like this guy is trying to organise, doesn't matter.



Isn't this exactly the same as SETI and that cancer research program? Just wondering what the point of installing this software is.


The point is to get the highest amount of power ever recorded. So to me that means including ordinary guys with plain old quad cores (I have a dual myself) who don't normally run SETI.

I usually have my computer doing something else, but for one week I'm prepared to see if we can break this record and get some guy at NASA whistling at how we just made his million-dollar computer look slow.
 
More powerful? Depends on the measurement scale, surely.

If we are talking purely FLOPS, then I guess you can argue that the processing potential of a distributed system is the sum of its individual CPUs. In which case, simply networking lots of home processors together via a WAN would indeed produce a "system" with higher processing potential than the majority of the supercomputing clusters out there.
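
Purely as a back-of-envelope sketch of that "sum of its CPUs" argument, with every figure below an assumption rather than a benchmark (the per-PC GFLOPS, the volunteer count and the Columbia peak are all placeholders):

```python
# Back-of-envelope only: every figure here is an assumption, not a benchmark.
GFLOPS_PER_HOME_PC = 40.0      # assumed peak for one overclocked home quad-core
VOLUNTEER_PCS      = 20_000    # assumed number of machines people sign up

summed_peak_tflops = GFLOPS_PER_HOME_PC * VOLUNTEER_PCS / 1000.0

# Placeholder for Columbia's peak; public figures vary with its upgrades,
# so treat this as order-of-magnitude only.
COLUMBIA_PEAK_TFLOPS = 60.0

print(f"Summed volunteer peak : {summed_peak_tflops:,.0f} TFLOPS")
print(f"Columbia (placeholder): {COLUMBIA_PEAK_TFLOPS:,.0f} TFLOPS")
print(f"On-paper ratio        : {summed_peak_tflops / COLUMBIA_PEAK_TFLOPS:.1f}x")
```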

However, clearly that isn't the whole story. Something has to connect those processors together: locally (eight or so cores) it will be the standard shared-memory architecture we all know and love; across a cluster it will be something like low-latency InfiniBand or high-throughput 20-40Gb Ethernet. Either way, the time it takes nodes in the system to distribute data to other nodes (you can't process what you don't have) will be far, FAR smaller than in any system linked through the internet.

As has been said, these SETI-style systems based around BOINC are great for very specific processing tasks, i.e. tiny data packets requiring massive amounts of simple yet iterative processing and resulting in equally tiny data packets. However, take a more typical HPC scientific compute problem, probably parallelised using MPI or a similar library, and throw it out over a BOINC-style system, and even if you can work around the problems of a heterogeneous compute cluster, you will soon be crying over the latency and the resultant overhead of your data passing.
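
To put rough numbers on that, here is a toy model of the compute-versus-communication trade-off. Everything in it is an illustrative assumption (the latencies, bandwidths, work-unit sizes and the compute_fraction helper are made up for the sketch), but it shows why a BOINC-style work unit barely notices internet latency while an exchange-every-step MPI job drowns in it:

```python
# Toy model only: all latencies, bandwidths and work-unit sizes below are
# illustrative assumptions, not measurements of any real system.

def compute_fraction(compute_s, bytes_moved, latency_s, bandwidth_Bps):
    """Fraction of wall-clock time spent computing rather than communicating."""
    comm_s = latency_s + bytes_moved / bandwidth_Bps
    return compute_s / (compute_s + comm_s)

links = {
    "InfiniBand (cluster)": (2e-6, 2e9),   # ~2 us latency, ~2 GB/s (assumed)
    "Home internet":        (40e-3, 1e5),  # ~40 ms latency, ~100 KB/s up (assumed)
}

workloads = {
    "BOINC-style work unit (hours of crunching, 1 MB result)": (4 * 3600, 1e6),
    "MPI-style solver step (1 s of compute, 5 MB exchanged)":  (1.0, 5e6),
}

for link, (lat, bw) in links.items():
    for work, (comp, data) in workloads.items():
        frac = compute_fraction(comp, data, lat, bw)
        print(f"{link:22s} | {work:56s} | {frac:6.1%} busy")
```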

Nice idea, but their explanation and ultimately their claims aren't defined well enough!

Supercomputers don't cost a lot of money just because of the hardware they use; many use entirely off-the-shelf, consumer-grade hardware these days. They cost so much because of the interconnects and, perhaps more importantly in some respects, the SHAPE of the system the interconnects produce!
 
Pretty much, and mainly for the XS forum group, which might put many of the other WCG teams, like the OcUK SETI group, off.

KaHn



It's just a one-off attempt for a week. The normal inter-forum wars can continue right after. Anyway, I think this is more about how many normal users' computers can contribute rather than just the big guns.
 
No, it's all about parallel processing. The days of one giant chip doing everything are long gone, afaik.

So whether the power is in one room like NASA's, or spread across the world like this guy is trying to organise, doesn't matter.
By that definition it's an automatic fail. One team could never compete with any of the existing distributed computing projects.

The challenge would be in finding an actual supercomputer, such as IBM's "Watson", and seeing how many PCs it would take to match it.


Unless I missed something and you're talking about a competition just between the smaller teams during that week. Yeah, that must be it. I must have overlooked that detail.

edit: it looks like they're using BOINC for it, so we'd just need to add this project and dedicate the cruncher to it for a week. I'm sure the guys in DC would do that so Team OcUK can crunch XS WCG. :)
 
By that definition it's an automatic fail. One team could never compete with any of the existing distributed computing projects.

The challenge would be in finding an actual supercomputer, such as IBM's "Watson", and seeing how many PCs it would take to match it.


Unless I missed something and you're talking about a competition just between the smaller teams during that week. Yeah, that must be it. I must have overlooked that detail.

That challenge is effectively impossible, though. Unless we're going to replace all of our WiFi, 100Mbit Ethernet and ADSL/cable connections with countrywide InfiniBand or 20-40Gb Ethernet, and place all of the systems in a torus or cube configuration, a BOINC-based distributed processing cluster will *never* beat a large HPC for all but the most ideally suited processing tasks.

What I would suggest is that if enough people get on board, this will be a very large waste of electricity with little to no actual outcome (at least SETI etc. actually do some worthwhile processing!). This is just an attempt to say ner-ner to NASA! I can guarantee you right now, whoever specifies the HPC clusters in use at NASA or other large HPC consumers (universities etc.) knows full well the possibilities of low-cost alternatives: Beowulf clusters, BOINC-based clusters and so on. It's just that they simply don't meet the requirements of most of their work (mainly CFD and FEM codes at NASA, I would hazard a guess).
 
I agree having it all available in real time is probably what NASA needs to have on hand, but I think this project could beat them on processing power.



The challenge would be in finding an actual supercomputer, such as IBM's "Watson", and seeing how many PCs it would take to match it.


This was part of the email I got. Seems a reasonable aim. As well as that, I think they could set a new record if various forums were to join in just for this one week, but that will be harder to do.

For one week (Saturday, May 1 to Saturday, May 8), try and bring everyone in the forum onto our WCG team and see what we can show for computational power. We're aiming big here: match the world's top supercomputers.
 
I agree having it all available in real time is probably what NASA needs to have on hand, but I think this project could beat them on processing power.

This was part of the email I got. Seems a reasonable aim. As well as that, I think they could set a new record if various forums were to join in just for this one week, but that will be harder to do.

It would be fairly easy to beat NASA on potential raw compute power. But what I'm saying is: why?

It's simple maths to work out how many FLOPS NASA has at their fingertips and how many home PCs would be required to reach that. I'll bet it's not that many, probably a few hundred thousand processors (I don't know where NASA ranks in the top 50 at the moment, so I don't know the goal). But in the real world, and this is a VERY important thing to remember with parallel computing, you can have all the horsepower in the world available to you; if you can't get data to it in time, the ability to execute instructions is ultimately useless.
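
A hedged sketch of that "simple maths", with every figure assumed rather than taken from any real TOP500 entry (the target TFLOPS, per-PC GFLOPS, link speed and FLOPs-per-byte ratios are all placeholders):

```python
# Illustrative arithmetic only (all figures are assumptions): the raw FLOPS you
# can sum on paper versus what a home internet link can actually keep fed.

TARGET_TFLOPS      = 600.0    # assumed supercomputer target to "beat"
GFLOPS_PER_HOME_PC = 40.0     # assumed peak per volunteer machine
DOWNLOAD_BYTES_S   = 1e6      # assumed ~1 MB/s home download speed

pcs_needed = TARGET_TFLOPS * 1000 / GFLOPS_PER_HOME_PC
print(f"Home PCs to match {TARGET_TFLOPS:.0f} TFLOPS on paper: {pcs_needed:,.0f}")

# A CPU can only execute on data it has already received, so the sustainable
# rate per machine is capped by (FLOPs of work per byte of input) x (link speed).
for label, flops_per_byte in [("BOINC-style, compute-heavy", 1e7),
                              ("typical HPC, data-heavy", 1e2)]:
    fed_gflops = min(flops_per_byte * DOWNLOAD_BYTES_S / 1e9, GFLOPS_PER_HOME_PC)
    print(f"{label:28s}: ~{fed_gflops:.2f} GFLOPS sustained per PC")
```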

For the cost of an average HPC cluster, you could probably buy twice as many (if not more) off-the-shelf processors and shared-memory hardware as the average HPC cluster has in it, but what's the point? No real-world parallel problem will be able to make use of it to the extent where you have unlocked all that horsepower. That's why something like 50% (off the top of my head) of the overall cost of an HPC cluster goes into the interconnects, the custom design of the cluster (these aren't off-the-shelf products, they just incorporate them), and of course the storage nodes.

It's the comparison that bothers me. If nothing were to be lost by people doing this, other than them falsely claiming victory when they (almost inevitably) put together a system with more processing potential than NASA's, then fine, who cares? But it will result in a colossal waste of electricity as hundreds of thousands of processors are pushed to near their maximum, and therefore their maximum energy intake, for no good discernible reason!
 