Perhaps. Perhaps CUDA was adopted because it was the only game in town. I just don't think that's why it was adopted though. My experience in academia/research, I feel, supports that.
Come on, it's common sense. CUDA offered the power they needed and there was nothing else to use, so CUDA got adopted. There was nothing else to choose from; they either had to use CUDA or go without the performance. It's like saying wheeled motor vehicles are only popular because of how good they are, when in reality they're the only viable means of personal high-speed transport: you either use them or go without, since there's no alternative.
Perhaps if OpenCL becomes awesome and widely accepted it may become more popular than CUDA. Perhaps. I wouldn't be too quick to jump on that bandwagon in agreement.
Really, come on, I'm not even talking about which one is "best"; I'm talking about simple user base. Far more people have OpenCL-capable hardware than CUDA-only hardware. On top of that, it's in everyone's best interests for it to be an open, non-proprietary standard.
These things are usually far more complicated and the results can be surprising, à la Betamax vs. VHS. (This was intensely studied at the Santa Fe Institute as an example of deterministic chaos in a nonlinear system. You can read a highly accessible account of it in Waldrop's book "Complexity".)
That is what I'd call a completely different and unrelated situation. It's not two different technologies fighting it out for what's best as such: nVidia hardware can run CUDA as well as OpenCL applications.
The superior product doesn't necessarily always win, and there are complex reasons for that. Because nonlinear dynamics happens to be my main area, I tend to be conservative when making handwaving predictions about how a nonlinear system (like this whole CUDA/OpenCL business) will evolve, because it is easy to be terribly, hopelessly, incredibly wrong: such systems are inherently unpredictable. But that's me. I have my reservations.
I don't think it's unpredictable when you consider that, for the most part, they're largely the same thing, except that OpenCL can run on all hardware brands and CUDA can't. The end result will be no different; it's just a question of which one is currently being used, which is CUDA, so for now there's less you can do with OpenCL.
If you're convinced CUDA would not have succeeded had it competed against an equally fleshed-out OpenCL, or that it will eventually lose out to OpenCL, then go right ahead. I'm going to hold on to my reservations about that, though. I'll believe it when I see it.
Tell me *why* exactly CUDA would have become popular if OpenCL had been in the same position. Same results, but one works on all hardware while the other is restricted to nVidia. No one would choose the closed standard if the open one were just as good.
Well, this is fundamentally dealing with graphics: 3D graphics, real-time or not. So I'd expect solving it with something like a shader language to be less clunky than, say, trying to disguise the three-body problem in physics in such a way as to let the GPU solve it. I could be wrong; I don't know much about animation. Perhaps these guys at Pixar and the like use, say, ray tracing, which isn't easy to implement in HLSL as it's not a native feature of rasterised graphics.
When you talk about CUDA-based applications, you're talking about GPGPU computing for the most part; it has nothing to do with running games in the traditional sense. Ray tracing itself is a very parallel calculation, which is why it's perfectly suited to running on a GPU. As I've said, there are already examples of ray tracing running significantly faster on GPUs than CPUs, like vray, for example, which is a ray tracing application.
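To make the "very parallel" point concrete, here is a minimal CPU-side sketch (plain Python, not actual CUDA or OpenCL code; the scene and names are just illustrative assumptions): every pixel's ray is computed independently of every other pixel's, which is exactly the structure a GPU exploits by assigning roughly one thread per pixel.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    # Ray-sphere intersection: solve |origin + t*direction - center|^2 = radius^2
    # for t, assuming direction is a unit vector. Returns True if the ray hits
    # the sphere in front of the origin.
    oc = tuple(o - c for o, c in zip(origin, center))
    b = sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    return disc >= 0 and (-b - math.sqrt(disc)) > 0

def render(width, height):
    # The key property: the body of this double loop reads no data written by
    # any other pixel, so all width*height iterations could run at once.
    # On a GPU each iteration would simply become one thread.
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            # Map the pixel to a point on an image plane at z = -1,
            # then fire a ray from the camera at the origin through it.
            u = (x + 0.5) / width * 2 - 1
            v = (y + 0.5) / height * 2 - 1
            d = (u, v, -1.0)
            n = math.sqrt(sum(k * k for k in d))
            d = tuple(k / n for k in d)
            hit = ray_sphere_hit((0.0, 0.0, 0.0), d, (0.0, 0.0, -3.0), 1.0)
            row.append(1 if hit else 0)
        image.append(row)
    return image
```

A real renderer adds shading, bounces, and acceleration structures, but none of that changes the per-pixel independence that makes the workload map so naturally onto thousands of GPU threads.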