
4x 9800 GX2

Indeed, why spend £1,000 when you can spend £1,000,000? More meals and days out from the people selling the NHS the kit ;)
 
Also, with 4x 9800GX2 sitting next to each other, I wouldn't like to run that 24/7.

The NHS probably don't run it because no company provides a package like that.
 
Are we reaching the end of the CPU?

Not at all.

A CPU is an incredibly flexible tool, whereas a GPU (when used for computations) is extremely specialised for parallel work. Not many applications can be written to run effectively on a GPU, but those that can see massive benefits.

Think of a CPU as being like a scalpel, and a GPU like a sledgehammer. ;)
It would take a long time to take down a wall with a scalpel, but it is possible. On the other hand, you'll never do brain surgery with a sledgehammer no matter how long you stick at it :)
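
To make the analogy a bit more concrete, here's a minimal CUDA sketch (names and numbers purely illustrative, not from any real code): every GPU thread does the same trivial operation on its own element of an array, and no thread ever needs another thread's result. That's exactly the kind of job a GPU is specialised for.

Code:
// Minimal CUDA sketch: one thread per array element.
// Each thread does the same simple operation on its own data.
__global__ void add_arrays(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];   // no communication between threads at all
}

// Host side: launch enough 256-thread blocks to cover n elements, e.g.
// add_arrays<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);

The flip side is that anything which doesn't break down into thousands of independent little jobs like this gains very little from the GPU.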
 
Why don't hospitals use this kind of technology for rendering their scans? It's cheap and fast.

It's also new, experimental, and fairly unstable. Most of the codes that use GPUs are currently academic research codes. Large non-specialist organisations like the NHS will only adopt technology like this when it's mature, can be supported on a large scale, and can be used by non-specialists. After all, CUDA was only released in Feb 2007, and it takes a long time to write complex, user-friendly software.

Trust me though, there is massive development going on in all aspects of highly parallelisable computing. For example, my field of numerical simulations (computational fluid dynamics etc) is already seeing incredible improvements from GPUs - a factor of over 100x in many cases. A lot of people will make a lot of money from this technology over the next 5 years or so, and nvidia/ATI will be at the bottom of the pile, collecting the cash as it trickles down the pyramid ;)
 
but surely if you started from scratch, all computing could be done over lots of small calculations on multiple threads
surely the two will merge? a gpu with a greater instruction set?

cpu is gaining cores, gpu is gaining power.
 

Not everything can be written efficiently on multiple threads.

It's all about communication: the size and frequency of the data transfers required between threads. If each thread is always waiting for data from all the others, your efficiency drops like a stone - potentially ending up worse than just using a single thread. Whether a massively parallel approach is realistic is always going to be application-specific.

Besides, writing software with even just a few threads can be very difficult. Writing even the simplest codes for an arbitrary number of threads can be a nightmare if the algorithm doesn't lend itself naturally to this approach.
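
To put the communication point in concrete terms, here's a rough CUDA sketch of a block-level sum reduction (illustrative only, assuming 256 threads per block): unlike the element-wise case above, every step needs partial results from other threads, so the whole block repeatedly stops at __syncthreads() and waits. The more of that an algorithm needs, the faster the parallel efficiency drains away.

Code:
// Illustrative CUDA sketch: summing an array within one thread block.
// Every step depends on other threads' results, so the block has to
// synchronise repeatedly.
__global__ void block_sum(const float *in, float *out, int n)
{
    __shared__ float s[256];              // assumes blockDim.x == 256
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    s[threadIdx.x] = (i < n) ? in[i] : 0.0f;
    __syncthreads();                      // everyone waits for everyone

    // Pairwise tree reduction: half the threads go idle at each step.
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2)
    {
        if (threadIdx.x < stride)
            s[threadIdx.x] += s[threadIdx.x + stride];
        __syncthreads();                  // ...and waits again
    }

    if (threadIdx.x == 0)
        out[blockIdx.x] = s[0];           // one partial sum per block
}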
 
It's faster than a 512 node Opteron supercomputer for what they are using it for :eek:

http://fastra.ua.ac.be/images/image007.png
 

Nice improvements there, but again - it's highly application dependent. For example, we did some tests with a simple matrix-matrix multiplication problem, comparing an 8800GTX against a 2.4GHz Core 2 Duo CPU (using one thread). The GPU is always significantly faster, but the improvements get better as the problem size increases (which minimises the relative overhead of setup and other internal latencies, etc.).

matmulexamplerh7.png - chart of matrix-matrix multiplication speedup, GPU vs CPU, against problem size

We're just now adding CUDA support to our full research code for the sparse-matrix solutions. From initial results we're seeing around a 50-100x speedup over the same CPU for large problems. Nothing to sneeze at.
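
For anyone wondering what the GPU side of that matrix-multiply comparison roughly looks like, this is the textbook naive CUDA kernel (just a sketch, not our actual benchmark code): one thread per element of C, so the bigger the matrices, the more threads there are to hide the setup and memory latencies mentioned above.

Code:
// Naive CUDA matrix-matrix multiply sketch: C = A * B, all N x N,
// row-major, one thread per element of C. Real codes would tile the
// matrices into shared memory, but this shows the basic idea.
__global__ void matmul_naive(const float *A, const float *B, float *C, int N)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;

    if (row < N && col < N)
    {
        float sum = 0.0f;
        for (int k = 0; k < N; ++k)
            sum += A[row * N + k] * B[k * N + col];
        C[row * N + col] = sum;
    }
}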
 