GTX 380 & GTX 360 Pictured + Specs

That's why I used 99%.

The top 0.1% of companies probably can, at huge cost. But what about everyone that would need to, say, move to a new building? Or write off a whole up-and-running system that's had years of development before they could even consider an upgrade using parts that big?

For most it will not be a viable upgrade.

It is a viable option; look at Clearspeed, whose entire business is devoted to it:
http://www.clearspeed.com/

IIRC they (NV, that is) even do rack-mount setups now with 3x Teslas in them, so it's totally viable. There are some massive improvements in Fermi that make it outstanding for CUDA work (error-correcting memory, more flexibility in what the SPs do, and the fact we can now write code in C++ :D). There will be a big move towards GPUs to accelerate scientific apps (and I mean even bigger than what is happening now) thanks to Fermi. I wouldn't be in the least bit surprised if, in two years' time, most of the top 100 supercomputers are using it.
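To give a feel for the kind of workload these cards chew through, here's a minimal sketch in plain NumPy (not actual CUDA; the array sizes and values are made up for illustration) of a SAXPY operation. Every output element is independent, so on a GPU each one would map to its own thread:

```python
import numpy as np

# Illustrative sizes/values only.
n = 1_000_000
a = np.float32(2.0)
x = np.arange(n, dtype=np.float32)
y = np.ones(n, dtype=np.float32)

# SAXPY: y = a*x + y. Every element is computed independently of
# the others -- this embarrassingly parallel shape is exactly what
# CUDA cards accelerate well.
y = a * x + y
```

On a GPU the same one-liner becomes a kernel where thread `i` computes element `i`, which is why speed-ups scale with core count on workloads like this.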

Then we get to business users. Though it's more of a stream than a torrent at the moment, big financial companies are seeing massive improvements in modelling techniques (MCMC, for example) using CUDA-enabled cards, so as soon as they're back up and running properly I'd expect NV to get a nice big revenue increase.
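For anyone wondering what MCMC actually involves, here's a toy Metropolis sampler in pure Python (the function name and parameters are my own invention, not from any bank's codebase; it targets a standard normal just to keep it short). The relevant point is that production runs launch thousands of independent chains like this, one per GPU thread, which is why CUDA helps so much:

```python
import math
import random

def metropolis_normal(n_samples, step=1.0, seed=0):
    """Minimal Metropolis sampler targeting a standard normal N(0,1).

    Toy sketch only: financial MCMC workloads run many independent
    chains like this in parallel, which maps naturally onto GPU threads.
    """
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n_samples):
        prop = x + rng.uniform(-step, step)
        # Accept with probability min(1, p(prop)/p(x)); for p ~ N(0,1)
        # the log-ratio is (x^2 - prop^2)/2.
        if math.log(rng.random()) < 0.5 * (x * x - prop * prop):
            x = prop
        out.append(x)
    return out

samples = metropolis_normal(20000)
mean = sum(samples) / len(samples)  # should land near 0
```

Each chain is sequential internally, but chains don't talk to each other, so the speed-up from running one chain per GPU thread is close to linear in thread count.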
 

I doubt Nvidia will take over from IBM in the financial sector for the serious stuff; they just don't have what it takes to live up to the rigorous standards required.
 
When new systems go online there's a good chance they'd be looking at these kinds of solutions...

Though, seeing as the backend for the system we're using at work is in parts still running an old version of Unix with a system built in COBOL-85, it could take a while with some companies...
 
I have a full server room of 100 systems with 30% cooling redundancy and 25% power redundancy.

Half are 1U dual Intel Xeon 5060s on Intel S5000PALR (i5000P) boards.

The others are 2U quad Opteron HE 2214s on Tyan Thunder n3600QE boards.

Sell this company (nu-vidia?) to me.
 
The main problem is moving to a new system software-wise: migrating the backend, etc.

Hardware-wise it's not really an issue for a modern server facility... power and cooling are generally provisioned in a manner that makes them upgradeable on demand, and racking can be re-purposed quickly and efficiently.
 

Slow down, Rroff, let's get into the software side of things later.

My server room was built and designed by a "proper company" and I have upgraded.

My power systems have 25% left in them and the cooling system has 30%.

How can this new company help?
 
You won't see GPGPU setups replacing small-fry server hosting... it will be bespoke computing labs for specific applications or purpose-built supercomputer arrays.
 


Small fry? This is a £5-million-a-year company doing serious research in a lab, with proper programmers and everything.

How can they help me ?
 
100 servers is small fry...

I don't get the question... they can boost your number crunching on appropriate applications by between 4x and 100x, or even more depending on the application... isn't that help enough?
 
The way I look at it is: buy whatever does what you want, at the time you want to buy.

Thanks, great advice. But... if I wait until the new GeForce 300 series cards are out, I could potentially install all of my new PC hardware into my current mid-tower case. The downside would be that my current tower has nowhere near the airflow of the HAF 932, so my PC would take a performance hit as a result. In the end, I think I will buy a HAF 932 or a Sniper case. Not because I "need to," but as an upgrade.

http://digitaldaily.allthingsd.com/20091207/intel-shelves-larabee/
Damn, I wanted a 3-way Video Card war.
 
Is it really so small? There are more than 600 active, publicised CUDA projects, many of them running on setups of between 4 and 100+ Nvidia GPUs, and there are probably lots more we don't know about.

Indeed, our lab is now putting serious research time into this. We may be hiring a new post-doc full time to research CUDA-based evolutionary algorithms, particle swarm optimisation and ant colony optimisation, and we have one PhD student dedicating about half his time to CUDA-based genetic programming and reverse engineering of genetic regulatory networks (as well as masters and semester student projects). Our current preliminary estimate is that we will get between a 50-80x speed-up over an Intel i7, and therefore can replace our 400-core Beowulf cluster with a few dozen CUDA boxes and achieve much higher performance. There are several labs in the university getting very excited about CUDA that I know of: the signal processing department has a research project set up, the motion capture department has replaced $300,000 of state-of-the-art VICON 3D tracking hardware with $8,000 worth of computer and cameras plus some clever CUDA-based vision algorithms, and the parallel computing department is set to buy hundreds of Nvidia GPUs, to say nothing of the particle physics lab. And that's just what I know of within one university.
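As a rough illustration of why those population-based methods fit GPUs so well, here's a toy particle swarm optimiser in plain Python (the function name, parameters and the simple sphere objective are all my own invention, not our lab's code). Every particle's fitness evaluation is independent of the others, which is exactly the parallelism a CUDA implementation exploits:

```python
import random

def pso_sphere(dim=3, particles=20, iters=200, seed=1):
    """Toy particle swarm optimisation minimising f(x) = sum(x_i^2).

    Illustrative sketch only: each particle's position update and
    fitness evaluation is independent, so on a GPU every particle
    gets its own thread.
    """
    rng = random.Random(seed)
    f = lambda p: sum(v * v for v in p)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    best = [p[:] for p in pos]          # each particle's personal best
    gbest = min(best, key=f)[:]         # swarm-wide best
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                # Standard inertia + cognitive + social velocity update.
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.4 * rng.random() * (best[i][d] - pos[i][d])
                             + 1.4 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(best[i]):
                best[i] = pos[i][:]
                if f(best[i]) < f(gbest):
                    gbest = best[i][:]
    return f(gbest)
```

The inner loop over particles is where a CUDA version wins: with thousands of particles (or whole independent swarms) the per-particle work runs concurrently, which is where estimates like the 50-80x figure come from.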

Last month I attended a seminar on reinforcement learning, and the motion capture of a table tennis player was done using software running on CUDA (similarly replacing a $200,000 VICON 3D tracker). So it is not just our university that seems to be on the ball.

A spin-off start-up company from our lab is investigating using CUDA to do real-time localisation, navigation and planning for flying robots. In the long term they would like a small GPU to be mountable on the robots.
 
Yeah, but it's not like the Nvidia drivers for the new cards (once they're out) are going to stand still.

No, I am sure they won't. The problem is that it's four months before the card's out, and then say another two months before decent drivers with performance boosts come out, so ATI have a massive head start.

Say Fermi was only going to be 10 to 20% quicker than a 5870 a month or so ago. If by the time the card is released the 5870 has already closed that gap and they are matching, it's then pretty hard to sell the Nvidia card, especially if it is more expensive, on the promise that future driver releases will make it faster than the competition again.

So either Nvidia will have to undercut ATI, which will no doubt mean ATI drop their prices (good news for the consumer), or, if Nvidia don't, they will not sell very many of them until the drivers make them faster again.

ATI have already sold 300,000 5-series cards. Maybe by the time the GTX 3xx cards come out, that will be in excess of a million. That's got to hurt Nvidia, losing out on all those sales.
 