Supercomputers

Soldato
Joined
18 Oct 2002
Posts
10,962
Location
Bristol
So the University of Southampton has a new supercomputer. The BBC made a short video here:

http://www.bbc.co.uk/news/technology-10859175

According to the university’s page it consists of:

  • 1008 Intel Nehalem compute nodes with two 4-core processors;
  • 8064 processor-cores providing over 72 TFlops;
  • Standard compute nodes have 22 GB of RAM per node;
  • 32 high-memory nodes with 45 GB of RAM per node;
  • All nodes are connected to a high speed disk system with 110 TB of storage;

In the video Dr Oz Parchment suggests that this new system would place around 83rd in the world supercomputer rankings; interestingly, he also notes that 5-6 years ago it could have been number 1. That's the pace of computer improvement. Let's compare it with the basic office PC I'm writing this on, which cost around £600. It's based on Intel's Core i5-750 CPU, running at 2.66 GHz. Intel's specification sheet gives this CPU a floating-point performance of 42.56 GFlops (billion floating-point operations per second). This sounds reasonable when we consider that the supercomputer, with its 2016 CPUs, is reported to deliver 72 TFlops, suggesting about 36 GFlops per chip. After all, supercomputers are just large numbers of regular processors (and memory) connected together with a fast interconnect.
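A quick sanity check on those numbers, as a minimal sketch using only the figures quoted above:

```python
# Check the Iridis figures quoted above: 1008 nodes, two 4-core CPUs each,
# 72 TFlops aggregate (all numbers from the university's page).
nodes = 1008
cpus = nodes * 2                    # 2016 processors in total
cores = cpus * 4                    # should match the quoted 8064 cores
gflops_per_chip = 72 * 1000 / cpus  # aggregate TFlops -> GFlops per CPU

print(cores)                        # 8064
print(round(gflops_per_chip, 1))    # 35.7, the same ballpark as the i5-750's 42.56
```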

We can run Parchment’s rough calculation for my computer. How far back in time do we have to go for my standard desktop PC to be considered a supercomputer?

Since 1993 a list of the world's fastest supercomputers has been maintained: the Top 500. Going back to the beginning, we see that in 1993 a CM-5/1024, built by Thinking Machines Corporation and owned by Los Alamos National Laboratory in the US, held the top spot. This was also the computer shown in the control room in the film Jurassic Park. Here's what just a few nodes looked like; the Los Alamos system was far larger:

cm-5.jpg

Thinking Machines' CM-5 Supercomputer, 1993

Being the fastest computer of its day, it would have cost millions, been staffed by a team of engineers and scientists, and been employed on the most computationally taxing investigations being carried out anywhere in the world. I expect it spent most of its time working on nuclear weapons. According to this, the CM-5 cost $46k per node in 1993, which would price the Los Alamos National Laboratory system at $47 million, or around $70 million in today's money. Its performance? A theoretical peak of 131 GFlops, with a benchmarked achieved performance of 59.7 GFlops. The same ballpark as my run-of-the-mill office computer today. It was also twice as fast as the number two system and ten times the power of the 20th-ranked system.
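The pricing above works out as follows (the ~1.5x inflation factor for 1993 to today is my own rough assumption; the per-node cost is as quoted):

```python
# Back-of-envelope pricing for the Los Alamos CM-5/1024, per the figures above.
cost_per_node = 46_000              # USD, 1993, as quoted
nodes = 1024
total_1993 = cost_per_node * nodes
print(total_1993 / 1e6)             # ~47.1 million USD

# Very rough 1993 -> today inflation factor of ~1.5x (an assumption on my part)
print(round(total_1993 * 1.5 / 1e6))  # ~71 million USD, i.e. "around $70 million"
```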

What this means is that the computational resources available at the cutting edge just 17 years ago now sit on everyone's desk running Office 2010.

In 1997 I was lucky enough to visit the European Centre for Medium-Range Weather Forecasts (ECMWF). They had recently taken delivery of a new Fujitsu VPP700/116 and had claimed the 8th spot in the Top 500 ranking with a theoretical peak of 255.2 GFlops. The system was used for 10-day weather forecasts. This image shows a 56-node VPP700 system; the ECMWF system was ~twice the size:

vpp_neu.jpg

Fujitsu VPP700 Supercomputer, 1997

Using off-the-shelf components, a similarly powerful desktop computer could be built today for a few thousand pounds using four Intel Xeon processors.
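As a rough check on that claim (the ~64 GFlops per quad-core Xeon is an assumed figure on my part for a current-generation chip, not from the post):

```python
import math

# Sketch: how many quad-core Xeons would match the ECMWF VPP700's 255.2 GFlops
# theoretical peak, assuming ~64 GFlops per chip (an assumption, not a quoted spec).
vpp700_gflops = 255.2
xeon_gflops = 64.0
print(math.ceil(vpp700_gflops / xeon_gflops))  # 4, matching "four Intel Xeon processors"
```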

State-of-the-art computer performance from a little over a decade ago is now available to everyone able to afford a modern PC. We're all using supercomputers. Could we be doing more with our computers than playing games and running Microsoft Office 2010?
 
I knew computers had advanced a lot over the years, but today's sub-£1k PCs being as powerful as ones that cost almost 50,000 times more, less than 20 years ago, is just amazing! :eek::eek:
 
wonder what they do with the old ones? am betting they don't skip them for anyone to take home :(

But can it run Crysis on full?

lol, more pertinent question would be: could it run Windows? don't think Crysis supports any other operating systems
also, hmmm, folding, bet it'd be nippy at crunching those big WUs lol
 
Wonder what startups would be like with those speeds :p

But what spec desktops does he actually mean? 340 in that rack? Wow :)

++ I might try that WC system :p
 
http://en.wikipedia.org/wiki/Jaguar_(computer)

1.75 petaflops, 360+TB RAM, 10+PB storage at 260GB/s.

Om.

Nom.

Nom.


Unfortunately my i7 just falls short of that beast.

Give it another ~15 years...

My main point is that a supercomputer 15 years ago was an amazing machine, doing amazing science... We now own that kind of power, shouldn't we be doing some amazing stuff with it?

Anyone know any recent costs for modern top-end supercomputers? The CM-5/1024 was ~$70m in today's money. Are top-end systems more or less these days? I'd guess Jaguar with its 224,256 Opteron cores might be approximately $100 million, based on a very rough ~$500 per core average total cost.
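For what it's worth, the guess works out like this (taking the 224,256 Opteron count and the very rough ~$500 average from the post as given):

```python
# Rough Jaguar cost estimate using the post's own figures.
opterons = 224_256
cost_each = 500                       # USD, the post's rough average
print(opterons * cost_each / 1e6)     # ~112 million USD, i.e. "approximately $100 million"
```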
 
and you could also get the same number of TFlops with 18 5870s, which would be considerably cheaper than 2016 Xeons
 
I'm a dinosaur and use AS/400s, which were re-branded as iSeries. Been going since the late 80s. A lot of banks still use the legacy software, some of which dates back to the mid 80s. Very fast, powerful machines dedicated to batch processing.
 

Ah yes, the AS/400, still runs the backend of Waterford Wedgwood, or at least it did when I left the company.
 
To demonstrate this point... my dad recently rewrote in C++ the software he designed for his thesis (something to do with non-convergent routes). When he first wrote it, it took two solid days to crunch on the university mainframe. It took less than a minute on the rig in my sig.
 