This probably won't come off quite the way I mean it, but (not trying to be high and mighty or anything) there's some really, really poor understanding of compute in this thread. Aside from spoffle (who mostly seems to know what he's talking about on the subject), most people seem to have almost no understanding of it, and even spoffle was a little inaccurate on a couple of things. Being a little pedantic: AMD had/has its own CUDA equivalent, last named Stream (can't remember what it was called before that), but lack of support has relegated it to an almost unused media accelerator. Also, DirectCompute in a generalised sense is the same as OpenCL and CUDA, though they all differ a little in what kind of processing they're designed around handling and how they go about it.
What I meant when I said that AMD doesn't have something like CUDA wasn't about the API itself. If you go back over the part where I said it:
AMD haven't got something like CUDA, ie, a compute API that brings in revenue the way nVidia has with CUDA, so it makes sense why they'd want to do that regardless of how people feel about it
I was getting at the same thing you said: it doesn't get used, so I wasn't counting it as a relevant API. To be honest, I can't remember anything even using it aside from a few tools from AMD.
I am well aware that AMD does have its own proprietary compute API, but they may as well not, because it goes unused.
I also know that DirectCompute is the same as CUDA and OpenCL in the sense that they are all compute APIs, but when I say they aren't the same, I mean it the way you'd object if someone said that OpenGL and DirectX are the same.
I was affirming that whilst they do the same job, they are different APIs (the sketch below shows what I mean).
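Just to make the "same job, different API" point concrete, here's a rough sketch I knocked together (not from anyone's post here, just a toy example with made-up names): a trivial "add two arrays" kernel written for CUDA, with the equivalent OpenCL kernel shown in a comment. The kernel bodies are near-identical; it's everything around them (how it's compiled, how memory is managed, how it's launched) that differs.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// CUDA version of a trivial "add two arrays" kernel, compiled offline by nvcc.
__global__ void vec_add(const float* a, const float* b, float* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n)
        c[i] = a[i] + b[i];
}

// The OpenCL version of the same kernel would be handed to the driver as a
// source string and built at runtime with clBuildProgram, then launched with
// clEnqueueNDRangeKernel:
//
//   __kernel void vec_add(__global const float* a, __global const float* b,
//                         __global float* c, int n)
//   {
//       int i = get_global_id(0);
//       if (i < n) c[i] = a[i] + b[i];
//   }

int main()
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float* h_a = (float*)malloc(bytes);
    float* h_b = (float*)malloc(bytes);
    float* h_c = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // CUDA's triple-angle-bracket launch syntax; OpenCL has nothing like it.
    vec_add<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f (expect 3.0)\n", h_c[0]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}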
All the latest nVidia and AMD graphics cards support "compute" functionality. Double precision is severely gimped on the GeForce 600 series, but it's not used that much in gaming currently, and the single precision performance is sufficient for gaming, though somewhat underwhelming on the 670/680 for high-end GPUs.
This is what leads me to believe that nVidia had no intention of pushing out GK104 as mid-range, because even last gen's mid-range nVidia GPU had significantly more double precision performance than GK104.
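For anyone wondering what "gimped double precision" actually looks like, here's a toy micro-benchmark sketch (my own rough example, nothing rigorous, and the numbers will vary by card and driver): it times the same FMA-heavy loop in float and then in double. On a GK104 card like the 670/680 the double run should come out many times slower, while the gap is much smaller on the big compute-oriented chips.

#include <cstdio>
#include <cuda_runtime.h>

// Toy throughput test: the same dependent FMA chain in float and in double.
// The ratio between the two timings roughly reflects how far FP64 has been
// cut back relative to FP32 on whatever card you run it on.

template <typename T>
__global__ void fma_loop(T* out, int iters)
{
    T x = (T)threadIdx.x;
    T a = (T)1.0000001;
    T b = (T)0.0000001;
    for (int i = 0; i < iters; ++i)
        x = x * a + b;                                   // dependent FMA chain
    out[blockIdx.x * blockDim.x + threadIdx.x] = x;      // keep the result live
}

template <typename T>
float time_kernel(int iters)
{
    const int blocks = 256, threads = 256;
    T* d_out;
    cudaMalloc(&d_out, blocks * threads * sizeof(T));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    fma_loop<T><<<blocks, threads>>>(d_out, iters);      // warm-up launch
    cudaEventRecord(start);
    fma_loop<T><<<blocks, threads>>>(d_out, iters);      // timed launch
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_out);
    return ms;
}

int main()
{
    const int iters = 1 << 20;
    float f32_ms = time_kernel<float>(iters);
    float f64_ms = time_kernel<double>(iters);
    printf("float:  %.2f ms\n", f32_ms);
    printf("double: %.2f ms\n", f64_ms);
    printf("double/float ratio: %.1fx\n", f64_ms / f32_ms);
    return 0;
}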
That cut in double precision shows a conscious effort was put into reducing die sizes, due to the unsustainable nature of 500mm²+ GPUs, as well as the fact that GTX 5xx series cards were definitely, to some degree, eating into sales of Tesla cards.
I've read on more than a few occasions (well, quite a lot) on various 3D modelling/design/CAD forums that a fair number of people held the opinion that Quadro and Tesla cards were poor value for money, because you got pretty much the same end product by buying a GeForce card instead.
The main caveat was giving up the support from nVidia that comes with buying a Tesla or Quadro card, and occasionally the smaller amount of RAM. But for the most part, those who simply needed a card for viewport performance would either go AMD (for the larger amount of RAM, which is helpful in viewport rendering) or go with a GTX 580 if they needed something for CUDA performance.
I think it was this, coupled with wanting to produce smaller-die GPUs, that finally pushed them over the edge into actually getting down to it and making their gaming chips smaller.
EDIT: Point of my post being: it might be a good idea for pretty much anyone other than spoffle who has posted on the subject in this thread to open up Google and do a bit of research before posting about compute in future.
I strongly agree with this.