GeForce GTX 590 Key Features Revealed

[Image: 6990vs590.jpg]


Real or fake? Looks like a GTX 295 to me.
 
Being a closed platform restricted to nVidia hardware means it'll die a painful death, especially since AMD has gained a lot of market share. All those companies using CUDA will be investing in OpenCL research as we speak; it's in their best interests when it comes to software sales, and that's the bottom line of it.

Actually this is a common misconception. CUDA is not a closed platform. It is open and ATI is free to adopt it. But so far ATI hasn't adopted CUDA.

CUDA is actually in much wider circulation and use, is a more mature platform with more available tools. It would really help the industry if ATI stopped being petty and adopted CUDA. (My memory fails me but I vaguely remember some third-party implementation of a CUDA driver (proof-of-concept) for an ATI card that actually worked. Worth googling if anyone's interested.) AMD gaining marketshare in the GPU business also doesn't help AMD's position in research-based supercomputing, where people buy specific hardware, learn the toolset and use it to do their research. Only if CUDA apps became mainstream would this have any significance, and that's still a long way off.

Another misconception is the notion that an OpenCL program will run on all hardware. It is important to remember that:
1) Learning CUDA makes learning OpenCL easy, and vice versa.
2) Only if you stick closely to the standard will any significant OpenCL app run on both systems without change. Most of the time you'll be doing so many hardware-based optimizations that one program will not run natively on other hardware, i.e. there is such a thing as NVIDIA-OpenCL and ATI-OpenCL in any real-world application (outside the pedantry of compiling and running a hello-world program that executes on both platforms). See the sketch just below.
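
To make both points concrete, here's a minimal sketch (a toy example written for this post, not taken from any real codebase): a SAXPY kernel in CUDA, with the near-identical OpenCL C version in the comments, and a note on where the per-vendor tuning starts to creep in.

[CODE]
// saxpy.cu -- toy sketch illustrating points 1 and 2 above.
//
// Point 1: the OpenCL C version of the kernel is almost a find-and-replace:
//   __kernel void saxpy(float a, __global const float *x, __global float *y) {
//       int i = get_global_id(0);
//       y[i] = a * x[i] + y[i];
//   }
// so knowing one API gets you most of the way to the other.

#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

__global__ void saxpy(float a, const float *x, float *y, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // get_global_id(0) in OpenCL
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Point 2: here is where "NVIDIA-OpenCL vs ATI-OpenCL" creeps in. The launch
    // geometry below is chosen with NVIDIA's 32-wide warps in mind; on an AMD
    // part you'd normally size work-groups around 64-wide wavefronts (and retune
    // local-memory usage), so "portable" code ends up tuned per vendor anyway.
    int block = 256;
    int grid  = (n + block - 1) / block;
    saxpy<<<grid, block>>>(2.0f, dx, dy, n);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);  // expect 4.0

    cudaFree(dx); cudaFree(dy); free(hx); free(hy);
    return 0;
}
[/CODE]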

Both these together defeat the argument that OpenCL has a significant advantage in this department.

At the end of the day, people who use GPU computing are looking for the most finely tuned optimizations for their software because they're running complex algorithms. They're going to use hardware-based optimizations. This is the reason industry and academia have adopted CUDA: it's been around longer, it's more mature and it has had better support from NVIDIA from the start, and ultimately you're going to be writing code targeted at specific hardware anyway.
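
And just to show the sort of hardware-based optimization I mean (again a toy sketch written for this post, nothing from a real codebase): a block-level sum reduction that leans on shared memory, with the block size and further tuning choices tied to the specific GPU.

[CODE]
// reduce.cu -- toy block-sum reduction, the kind of kernel where
// hardware-specific tuning quickly takes over.
#include <cuda_runtime.h>
#include <cstdio>

#define BLOCK 256  // chosen with NVIDIA's warp size (32) and occupancy in mind;
                   // on other hardware you'd pick this differently.

__global__ void block_sum(const float *in, float *out, int n)
{
    __shared__ float s[BLOCK];           // fast on-chip memory, one tile per block
    int tid = threadIdx.x;
    int i   = blockIdx.x * BLOCK + tid;
    s[tid] = (i < n) ? in[i] : 0.0f;
    __syncthreads();

    // Tree reduction in shared memory. A seriously tuned version goes further:
    // unroll the last warp, use grid-stride loads, pick BLOCK by occupancy --
    // all of it tied to the particular GPU you're targeting.
    for (int stride = BLOCK / 2; stride > 0; stride >>= 1) {
        if (tid < stride)
            s[tid] += s[tid + stride];
        __syncthreads();
    }
    if (tid == 0)
        out[blockIdx.x] = s[0];          // one partial sum per block
}

int main()
{
    const int n = 1 << 20;
    const int blocks = (n + BLOCK - 1) / BLOCK;
    float *d_in, *d_out;
    cudaMalloc(&d_in,  n * sizeof(float));
    cudaMalloc(&d_out, blocks * sizeof(float));
    cudaMemset(d_in, 0, n * sizeof(float));  // dummy input, all zeros

    block_sum<<<blocks, BLOCK>>>(d_in, d_out, n);
    cudaDeviceSynchronize();

    printf("launched %d blocks of %d threads\n", blocks, BLOCK);
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
[/CODE]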

For OpenCL to overtake it now it will have to do significantly better in the future.
 
Actually this is a common misconception. CUDA is not a closed platform. It is open and ATI is free to adopt it. But so far ATI hasn't adopted CUDA.
It's not a misconception at all. You may believe that AMD is "free to adopt" it, but in reality, nVidia isn't particularly trustworthy in such a situation. I can see it now, nVidia resorting to dirty tactics about how much better CUDA runs on nVidia.

CUDA is actually in much wider circulation and use, is a more mature platform with more available tools. It would really help the industry if ATI stopped being petty and adopted CUDA.
I never suggested that OpenCL was used more or in higher circulation; however, it's not petty for AMD not to want to use CUDA. It is better for the industry if we use a standard that isn't owned and controlled by one of the hardware manufacturers.

(My memory fails me but I vaguely remember some third-party implementation of a CUDA driver (proof-of-concept) for an ATI card that actually worked. Worth googling if anyone's interested.)
I think that was PhysX, and what have nVidia done since? They've put locks in their drivers that disable PhysX if an AMD card is detected in the system. Let's not pretend nVidia really want anyone to be able to use it.

Another misconception is the notion that an OpenCL program will run on all hardware. It is important to remember that:
1) Learning CUDA makes learning OpenCL easy, and vice versa.
2) Only if you stick closely to the standard will any significant OpenCL app run on both systems without change. Most of the time you'll be doing so many hardware-based optimizations that one program will not run natively on other hardware, i.e. there is such a thing as NVIDIA-OpenCL and ATI-OpenCL in any real-world application (outside the pedantry of compiling and running a hello-world program that executes on both platforms).
I'm well aware of what it is, and I know OpenCL is essentially just a "layer" that runs on top of each manufacturer's Stream Computing API; however, it'd be significantly easier if the standard were OpenCL rather than just CUDA. No matter how you look at it, CUDA really is a closed standard that nVidia want to keep to themselves.

Both these together defeat the argument that OpenCL has a significant advantage in this department.
The significant advantage OpenCL has is that it's not owned by nVidia or AMD, and therefore won't be horribly biased towards either one.

At the end of the day, people who use GPU computing are looking for the most finely tuned optimizations for their software because they're running complex algorithms. They're going to use hardware-based optimizations. This is the reason industry and academia have adopted CUDA: it's been around longer, it's more mature and it has had better support from NVIDIA from the start, and ultimately you're going to be writing code targeted at specific hardware anyway.
The only reason CUDA has been used as the standard is that it was the only option for a long time. It's taken AMD a long time to get serious about GPU computing, and they're still not quite there, but my point still stands that OpenCL is more important long term than CUDA.

For OpenCL to overtake it now it will have to do significantly better in the future.

That's the idea. Why do you think PhysX is barely used? I know it's slightly different, but it's still an example of a closed standard that isn't guaranteed to be there in years to come. OpenCL development is slow, but it's getting there, and the industry really is pushing for an open standard. Look at V-Ray: they've started OpenCL development, which will be, in my opinion, a big leap when it comes to its usage. That's my main interest when it comes to GPU computing: using it to accelerate rendering.
 
That's the idea. Why do you think PhysX is barely used? I know it's slightly different, but it's still an example of a closed standard that isn't guaranteed to be there in years to come. OpenCL development is slow, but it's getting there, and the industry really is pushing for an open standard. Look at V-Ray: they've started OpenCL development, which will be, in my opinion, a big leap when it comes to its usage. That's my main interest when it comes to GPU computing: using it to accelerate rendering.

Dirty business tactics are certainly another thing altogether. There may be a tinge of paranoia going on with this as well.

CUDA's acceptance isn't necessarily because it entered the market first. It's important to understand that openness isn't as big an advantage in the research market as it is in the mainstream market. The reason is that, to researchers, all these things are just tools to get their work done. Their research is not about the technology; it's about entirely unrelated things altogether.

So in such a niche application it doesn't make a big difference that CUDA is not more widely used. PhysX, on the other hand, is something consumers use; it is mainstream. When things hit the mainstream, broader industry acceptance will be important. Until then, in research, CUDA will remain popular because of its acceptance, and because open standards like OpenCL don't give significant enough advantages, and in fact come with some disadvantages compared to CUDA (at least they did circa 2009/2010).

Case in point: lots of proprietary technology is the cornerstone of academic and industrial research. For example, everyone prefers MATLAB to the open-source alternatives, and silicon designers tend to use specific proprietary tools like Mentor Graphics' ModelSim and SystemVision. The "open software" argument just doesn't factor as significantly in research environments, though it might seem important to people in IT; researchers usually don't care. Often these commercial proprietary technologies (which they use as tools in their research) are superior to any available open technologies. I can tell you that in all my years in engineering, before switching over to research in mathematics, I have not once encountered open technologies being touted as anything awesome. In fact I can't for the life of me remember a time open technologies were presented as preferable to something proprietary. Whatever gets the work done is basically what people tend to go with, and as it stands, more proprietary stuff gets the work done than open standards do.

It may be a whole different situation in IT. But people in IT don't have much use for running massively parallel algorithms.
 
If by open you mean it's not maintained by NVIDIA, then yeah. But it's open in the sense that anyone who wants to do an implementation of it is free to do one. You should check your facts.


It's only open in the sense that you are leaving yourself open to be shafted.

The discussion has been had many times before. CUDA under Nvidia is going nowhere; it's doomed.
 
It's only open in the sense that you are leaving yourself open to be shafted.

The discussion has been had many times before. CUDA under Nvidia is going nowhere; it's doomed.

Actually, right now it's a thriving "industry". Lots of research areas have been investigated, hundreds to thousands of papers have been published, there are several conferences all over the world, scores of companies have sprung up providing analysis through it, PhDs have been minted, PhD studentships offered, and fellowships and other research positions created, all around CUDA. It's looking great, if anything.

If you mean it's doomed for mainstream apps, then I don't know. Perhaps. It's certainly not doomed for what NVIDIA is pushing its use for.

On the other hand, it feels fairly difficult to imagine CUDA being used for the mainstream. The average person isn't designing aerofoils or predicting the weather, so I can't really see people using this tech much for mainstream applications. Then again, I will agree that I could be hopelessly wrong about that. After all, "640K ought to be enough for anybody," right?
 
Actually, right now it's a thriving "industry". Lots of research areas have been investigated, hundreds to thousands of papers have been published, there are several conferences all over the world, scores of companies have sprung up providing analysis through it, PhDs have been minted, PhD studentships offered, and fellowships and other research positions created, all around CUDA. It's looking great, if anything.

It's "thriving" because there's been no other alternative to it, simple as really. When it comes to those making and selling the software, they want as wide a target audience as possible, this is why CUDA is doomed to fail, it doesn't matter that it's thriving now, saying it's doomed doesn't mean it's not thriving, just that its lifespan is very limited.


On the other hand, it feels fairly difficult to imagine CUDA being used for the mainstream. The average person isn't designing aerofoils or predicting the weather, so I can't really see people using this tech much for mainstream applications. Then again, I will agree that I could be hopelessly wrong about that. After all, "640K ought to be enough for anybody," right?
You seem to think it's only used for research-type things, but GPU acceleration is very important to "mainstream" heavy computer users. It's very important for computer animation, which is arguably a much larger market than aerofoils and weather prediction, in the sense that there'll be a lot more people ready to buy computer modelling and animation software that uses GPU acceleration than weather-prediction software.

Computer animation being accelerated by GPUs is stagnating because of CUDA; with an open standard it'll explode, because near enough anyone will be able to use it, and the advantages it'll bring will be ridiculously big. But the reason I say it's currently stagnating is that most software devs simply don't want to commit to CUDA when it's only usable by a portion of their target audience.
 
No, it's thriving because it's exactly what research needs. I used it myself recently in my work on fractal geometry and in solving nonlinear equations, and my supervisor and his team use it for image processing. I had the option of using OpenCL, but it was a lot better to cut out the middleman and just go with CUDA.
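
For anyone curious, this is roughly the shape of that kind of code (a generic escape-time sketch written for this post, not my actual research code); every pixel is independent, which is exactly why the GPU eats it for breakfast:

[CODE]
// escape_time.cu -- generic escape-time (Mandelbrot) kernel; each pixel's
// iteration count is computed by its own thread, completely independently.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void mandelbrot(int *iters, int width, int height, int max_iter)
{
    int px = blockIdx.x * blockDim.x + threadIdx.x;
    int py = blockIdx.y * blockDim.y + threadIdx.y;
    if (px >= width || py >= height) return;

    // Map the pixel into the complex plane, roughly [-2.5, 1] x [-1.25, 1.25].
    float cx = -2.5f + 3.5f * px / width;
    float cy = -1.25f + 2.5f * py / height;

    float zx = 0.0f, zy = 0.0f;
    int i = 0;
    while (zx * zx + zy * zy < 4.0f && i < max_iter) {
        float t = zx * zx - zy * zy + cx;
        zy = 2.0f * zx * zy + cy;
        zx = t;
        ++i;
    }
    iters[py * width + px] = i;  // escape iteration count for this pixel
}

int main()
{
    const int w = 1024, h = 768, max_iter = 256;
    int *d_iters;
    cudaMalloc(&d_iters, w * h * sizeof(int));

    dim3 block(16, 16);
    dim3 grid((w + block.x - 1) / block.x, (h + block.y - 1) / block.y);
    mandelbrot<<<grid, block>>>(d_iters, w, h, max_iter);
    cudaDeviceSynchronize();

    printf("computed a %dx%d escape-time image on the GPU\n", w, h);
    cudaFree(d_iters);
    return 0;
}
[/CODE]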

oh and...
http://www.maximumpc.com/article/news/nvidia_provides_support_get_cuda_running_a_radeon


As for computer animation, that's what the GPU does natively. What CUDA is useful for, on the other hand, is clunky to manage in something like HLSL, which you can use very well for animations.
The industry has long been looking for a killer app that would make GPU algorithms viable for mainstream desktop users, but no such app has been found yet. The problem lies in the fact that everyday apps are just not easy to convert into something like a massively parallel algorithm. Some problems are just better suited to parallelization, such as the discrete Fourier transform, which is useful for solving partial differential equations.
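
To make the contrast concrete, here's a rough sketch (a toy example of my own, assuming cuFFT is available): a batch of independent 1-D FFTs handed to the GPU in one call. Every transform in the batch can run in parallel, whereas a typical desktop app is one long chain of dependent steps with nothing comparable to farm out.

[CODE]
// fft_batch.cu -- toy example: a batch of 1-D FFTs via cuFFT (link with -lcufft),
// the kind of workload (many independent transforms) that maps naturally to a GPU.
#include <cuda_runtime.h>
#include <cufft.h>
#include <cstdio>

int main()
{
    const int n = 1024;     // points per transform
    const int batch = 512;  // independent transforms -- all processed in parallel

    cufftComplex *data;
    cudaMalloc(&data, sizeof(cufftComplex) * n * batch);
    cudaMemset(data, 0, sizeof(cufftComplex) * n * batch);  // dummy input

    cufftHandle plan;
    cufftPlan1d(&plan, n, CUFFT_C2C, batch);        // plan a batched 1-D C2C FFT
    cufftExecC2C(plan, data, data, CUFFT_FORWARD);  // in-place forward transforms
    cudaDeviceSynchronize();

    printf("ran %d FFTs of length %d on the GPU\n", batch, n);
    cufftDestroy(plan);
    cudaFree(data);
    return 0;
}
[/CODE]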
 