
GeForce GTX 590 Key Features Revealed

At this point, I don't think either card is worth buying.

The 6990 would be worth considering when:
The crappy reference cooler gets replaced with manufacturers' own quiet custom coolers.

The GTX 590 would be worth considering when:
The voltage/circuitry issue (hopefully) gets sorted out (a custom PCB, maybe? Asus/Gigabyte/MSI?).

But for anyone with a motherboard that's capable of CrossFire/SLI, getting a pair of 6950s/6970s or GTX 570s would be a better option.
 
It doesn't surprise me to hear that the cards are blowing up. Given how much headroom the chips usually have, the very low clocks suggest that Nvidia are really pushing the boundaries with this card. If there were any safe headroom left, Nvidia would have delayed a week, pushed the clocks up 5-6% (just 30 MHz) and had the fastest card.
 
CUDA won't pick up in the long term? The CUDA architecture is being used all over the world: in teaching, universities, movies, science and much, much more. It's out there, it's becoming huge in certain industries, and it will only continue to grow.

Agreed

But it's a closed platform. I reckon OpenCL will take over in the long term, but hey, that's just IMO :)

Three of the five fastest supercomputers on the planet (including THE fastest) are powered using CUDA on Nvidia Tesla GPUs.

http://thetechjournal.com/electroni...-worlds-fastest-supercomputer-tianhe-1a.xhtml

Surely multi-billion-dollar companies and governments wouldn't be using CUDA (and investing in it) if OpenCL were the better option.

Audi are using it to power their latest car dash/nav/in-car computer system too: http://www.pocket-lint.com/news/30781/audi-a8-2011-nvidia-tegra

CUDA is an open format: anyone can download the SDK for free and write whatever they like for it (it's C++ based).

Link Here:

http://developer.nvidia.com/object/cuda_3_2_downloads.html

;)
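For anyone curious what that SDK code looks like in spirit: CUDA kernels are data-parallel, meaning you write the operation for a single element and the hardware runs it across thousands of threads at once. Here's a minimal sketch of that model in plain Python/NumPy (no GPU needed; the array names and sizes here are just for illustration):

```python
import numpy as np

# CUDA's model: launch one lightweight thread per element, all running
# the same kernel. The CUDA C++ kernel body would be roughly
#   c[i] = a[i] + b[i];  with  i = blockIdx.x * blockDim.x + threadIdx.x
# NumPy's vectorised ops express the same element-wise idea on the CPU.
n = 1_000_000
a = np.arange(n, dtype=np.float32)
b = np.arange(n, dtype=np.float32)

c = a + b  # every element computed independently, so it parallelises trivially

print(c[:3])  # [0. 2. 4.]
```

The key property is that no element of `c` depends on any other, which is exactly the shape of problem a GPU eats for breakfast.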
 
^^ There seems to be quite a lot of ignorance as to just how widespread and embedded CUDA is industry-wide; it makes me laugh when I see some of the comments made about OpenCL.
 
^^^

I think Intel has the advantage here, tbh: they have the capital and infrastructure already in place, the obvious willpower to make it happen for them, not to mention the obvious process-technology advantage.
Intel's solution can run standard C++, so there's a much bigger pool of talent/knowledge you can call upon when writing apps. Choosing CUDA wouldn't make sense if there's an actual C++-based alternative, so I don't personally see CUDA lasting very long in a useful sense, but Nvidia may keep it twitching so it has another label to put on the box.

A lot of CUDA is overhyped, like CUDA video transcoding: it's only faster than a CPU because it does much less work, and the output looks awful.
 
^^^

I think Intel has the advantage here, tbh: they have the capital and infrastructure already in place, the obvious willpower to make it happen for them, not to mention the obvious process-technology advantage.
Intel's solution can run standard C++, so there's a much bigger pool of talent/knowledge you can call upon when writing apps. Choosing CUDA wouldn't make sense if there's an actual C++-based alternative, so I don't personally see CUDA lasting very long in a useful sense, but Nvidia may keep it twitching so it has another label to put on the box.

From my techreport link above.....

"Tianhe-1A, a new supercomputer revealed today at HPC 2010 China, has set a new performance record of 2.507 petaflops, as measured by the LINPACK benchmark, making it the fastest system in China and in the world today. Tianhe-1A epitomizes modern heterogeneous computing by coupling massively parallel GPUs with multi-core CPUs, enabling significant achievements in performance, size and power. The system uses 7,168 NVIDIA® Tesla™ M2050 GPUs and 14,336 CPUs; it would require more than 50,000 CPUs and twice as much floor space to deliver the same performance using CPUs alone. More importantly, a 2.507 petaflop system built entirely with CPUs would consume more than 12 megawatts. Thanks to the use of GPUs in a heterogeneous computing environment, Tianhe-1A consumes only 4.04 megawatts, making it 3 times more power efficient — the difference in power consumption is enough to provide electricity to over 5000 homes for a year."

Details: http://thetechjournal.com/electroni...t-supercomputer-tianhe-1a.xhtml#ixzz1I7gh1At3
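The power figures in that press release hang together as a quick back-of-envelope calculation (numbers taken straight from the quote above):

```python
# Quick sanity check of the quoted Tianhe-1A power figures.
cpu_only_mw = 12.0   # quoted power draw for an all-CPU 2.507-petaflop system
actual_mw = 4.04     # Tianhe-1A's quoted consumption with GPUs

ratio = cpu_only_mw / actual_mw
saved_mw = cpu_only_mw - actual_mw

print(f"efficiency advantage: ~{ratio:.1f}x")  # ~3.0x, matching the quote
print(f"power saved: ~{saved_mw:.2f} MW")      # ~7.96 MW
```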

So, no... Intel can't do what you suggest without the Nvidia hardware, mate, regardless of whether they run CUDA on their CPUs or not. GPU architecture and design is much more massively parallel than any CPU and is just MUCH faster at mathematical/raw computational tasks. ;)
 
^^^

I think Intel has the advantage here, tbh: they have the capital and infrastructure already in place, the obvious willpower to make it happen for them, not to mention the obvious process-technology advantage.
Intel's solution can run standard C++, so there's a much bigger pool of talent/knowledge you can call upon when writing apps. Choosing CUDA wouldn't make sense if there's an actual C++-based alternative, so I don't personally see CUDA lasting very long in a useful sense, but Nvidia may keep it twitching so it has another label to put on the box.

A lot of CUDA is overhyped, like CUDA video transcoding: it's only faster than a CPU because it does much less work, and the output looks awful.

Why does Intel have the advantage? CUDA is written in C++, so surely Nvidia also has access to that same massive pool of talent and knowledge you refer to?
 
Allot of CUDA is over hyped, like CUDA video transcoding, it's only faster than a CPU because it does much less work and looks awful.

Have you actually used a CUDA-coded video editing package, mate? Badaboom or Super LoiLoScope?

It's faster because a GPU is MASSIVELY more parallel than any CPU, which means it can get MUCH more work done per clock cycle. Think about it... a GTX 480 runs at around 700-odd MHz, yet a Core i7-980X Extreme with 12 threads (HT on) at 4.8 GHz will STILL be slower.

In summary, you're talking absolute codswallop, mate. ;)
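To put some rough numbers on that clock-speed point: peak throughput is cores × clock × operations per cycle, not clock alone. Using commonly quoted headline figures as assumptions (GTX 480: 480 shader cores at a ~1.4 GHz shader clock, 2 single-precision FLOPs per cycle; an overclocked i7-980X: 6 cores at 4.8 GHz, ~8 SSE FLOPs per cycle per core), a back-of-envelope comparison looks like this:

```python
# Back-of-envelope theoretical peak single-precision throughput.
# These are rough headline figures, not benchmark results.
gpu_gflops = 480 * 1.4 * 2   # 480 cores x ~1.4 GHz shader clock x 2 FLOPs/cycle
cpu_gflops = 6 * 4.8 * 8     # 6 cores x 4.8 GHz x ~8 SSE FLOPs/cycle/core

print(f"GPU ~{gpu_gflops:.0f} GFLOPS, CPU ~{cpu_gflops:.0f} GFLOPS")
print(f"GPU advantage ~{gpu_gflops / cpu_gflops:.1f}x on raw peak")
```

Real-world gains depend entirely on how parallel the workload is, but it shows why the MHz comparison on its own is misleading.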

Massively parallel GPU computing via CUDA is being used as we speak by:

NASA
Wall Street
Harvard University
Stanford University
The banking industry
Major Hollywood film companies
Special effects houses
The BBC
Sky
Countless disease research projects

If they see fit to back it with all their wealth, wisdom, expertise and professional knowledge... then your opinion is, at best, unfounded and based on a very limited understanding of CUDA within the IT industry.
 
Have you actually used a CUDA-coded video editing package, mate? Badaboom or Super LoiLoScope?

It's faster because a GPU is MASSIVELY more parallel than any CPU, which means it can get MUCH more work done per clock cycle. Think about it... a GTX 480 runs at around 700-odd MHz, yet a Core i7-980X Extreme with 12 threads (HT on) at 4.8 GHz will STILL be slower.

In summary, you're talking absolute codswallop, mate. ;)

I think he is referring to this, where the quality of the CUDA-based encoding is significantly worse than all the other encoding methods tested.
 
^^ There seems to be quite a lot of ignorance as to just how widespread and embedded CUDA is industry-wide; it makes me laugh when I see some of the comments made about OpenCL.

Yep. I tried to correct a few misconceptions about it further back in this thread, but that ended up going nowhere good, with people blindly insisting that open = dominance.

If that were true, MathWorks' MATLAB would be nowhere, instead of being the absolute standard for scientific computing from academia to industry and beyond.
 
What I meant was that that's just one software (application) implementation - not necessarily an indication of how well or otherwise CUDA handles transcoding.
 
Well, I tried lots of different software; none was anywhere near as good as CPU transcoding, and it was just a waste of my time and money.

It's clear that Intel's transcoding tech is far superior, and it will only dominate even more once you can use all the new SB features with the CPU overclocked and a discrete GPU in use; this will effectively kill consumer CUDA-based apps.
 
What I meant was that that's just one software (application) implementation - not necessarily an indication of how well or otherwise CUDA handles transcoding.

Indeed, but the point is consumer-level applications.

Just like PhysX, tessellation, etc.: all great tech, but all badly implemented in consumer-level software.
 
Well, I tried lots of different software; none was anywhere near as good as CPU transcoding, and it was just a waste of my time and money.

It's clear that Intel's transcoding tech is far superior, and it will only dominate even more once you can use all the new SB features with the CPU overclocked and a discrete GPU in use; this will effectively kill consumer CUDA-based apps.

I couldn't agree more. I do loads of transcoding, and for certain the implementation of GPU-accelerated transcoding is pants. The best apps, codecs and containers are still CPU-only.
 
Have you actually used a CUDA-coded video editing package, mate? Badaboom or Super LoiLoScope?

It's faster because a GPU is MASSIVELY more parallel than any CPU, which means it can get MUCH more work done per clock cycle. Think about it... a GTX 480 runs at around 700-odd MHz, yet a Core i7-980X Extreme with 12 threads (HT on) at 4.8 GHz will STILL be slower.

In summary, you're talking absolute codswallop, mate. ;)

Massively parallel GPU computing via CUDA is being used as we speak by:

NASA
Wall Street
Harvard University
Stanford University
The banking industry
Major Hollywood film companies
Special effects houses
The BBC
Sky
Countless disease research projects

If they see fit to back it with all their wealth, wisdom, expertise and professional knowledge... then your opinion is, at best, unfounded and based on a very limited understanding of CUDA within the IT industry.

It's used by those because it's pretty much the only option available right now. If OpenCL managed to produce industry-strength programs on either NV Tesla or AMD's firething (can't remember the name), I'm sure we would see further advances in research etc. Seeing as NV want to monopolise their CUDA platform and keep it closed, it's a shame really, but any company would do that if they managed to write a terrific C++ program.
 