Real or fake? looks like a GTX295 to me.
Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
Nvidia strikes back with reasonable length PCBs!!
I still cannot understand why AMD has to make their higher end cards so flipping long??
And why are HIS cards so fugly... that blue and red does nothing for them. I wouldn't buy HIS just for that reason.
Yeah, because you see the blue design when it's upside down in the case, don't you?
Being a closed platform restricted to nVidia hardware means it'll die a painful death, especially since AMD's gained a lot of market share. All those companies using CUDA will be investing in OpenCL research as we speak; it's in their best interests when it comes to software sales, and that's the bottom line of it.
Actually this is a common misconception. CUDA is not a closed platform. It is open and ATI is free to adopt it. But so far ATI hasn't adopted CUDA.
That's complete rubbish. CUDA is not an open platform at all.
It's not a misconception at all. You may believe that AMD is "free to adopt" it, but in reality, nVidia isn't particularly trustworthy in such a situation. I can see it now, nVidia resorting to dirty tactics about how much better CUDA runs on nVidia.
CUDA is actually in much wider circulation and use, is a more mature platform with more available tools. It would really help the industry if ATI stopped being petty and adopted CUDA.

I never suggested that OpenCL was used more or in higher circulation; however, it's not petty for AMD not to want to use CUDA. It is better for the industry if we use a standard not owned and controlled by one of the hardware manufacturers.
(My memory fails me, but I vaguely remember some third-party implementation of a CUDA driver (proof-of-concept) for an ATI card that actually worked. Worth googling if anyone's interested.)

I think that was PhysX, and what have nVidia done since? They've put locks in their drivers that disable PhysX if an AMD card is detected in the system. Let's not pretend nVidia really want anyone to be able to use it.
Both these together defeat the argument that OpenCL has a significant advantage in this department.

The significant advantage OpenCL has is that it's not owned by nVidia or AMD, and therefore won't be horribly biased towards either one.
At the end of the day, people who use GPU computing are looking for the most finely tuned optimizations for their software because they're running complex algorithms. They're going to use hardware-based optimizations. This is the reason the industry and academia have adopted CUDA. It's been around longer, it's more mature and has had better support from NVIDIA from the start, and ultimately you're going to be writing code targeted at specific hardware anyway.

The only reason CUDA has been used as the standard is that it was the only option for a long time. It's taken AMD a long time to get serious about GPU computing, and they're still not quite there, but my point still stands that OpenCL is more important long term than CUDA.
For OpenCL to overtake it now it will have to do significantly better in the future.
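To make the "code targeted at specific hardware" point above a bit more concrete, here's a rough, purely illustrative sketch of a trivial CUDA kernel (a simple SAXPY, not taken from any of the software mentioned in this thread). Even at this size you're already choosing launch parameters such as the 256-thread block size around NVIDIA specifics like the 32-wide warp:

[code]
// Illustrative sketch only: a minimal CUDA SAXPY (y = a*x + y).
// Build with: nvcc saxpy.cu -o saxpy
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Each thread handles one element; the global index is derived from the
// block/thread layout chosen at launch time below.
__global__ void saxpy(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

    // Allocate device buffers and copy the inputs over.
    float *dx = nullptr, *dy = nullptr;
    cudaMalloc((void**)&dx, n * sizeof(float));
    cudaMalloc((void**)&dy, n * sizeof(float));
    cudaMemcpy(dx, hx.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // 256 threads per block: a multiple of NVIDIA's 32-thread warp, which is
    // exactly the kind of hardware-specific choice being argued about.
    int block = 256;
    int grid = (n + block - 1) / block;
    saxpy<<<grid, block>>>(n, 2.0f, dx, dy);
    cudaDeviceSynchronize();

    // Copy the result back and spot-check one element (2*1 + 2 = 4).
    cudaMemcpy(hy.data(), dy, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("y[0] = %f (expected 4.0)\n", hy[0]);

    cudaFree(dx);
    cudaFree(dy);
    return 0;
}
[/code]

Real codes go a lot further than this (shared-memory tiling, occupancy tuning and so on), which is where the "targeted at specific hardware" argument really comes in.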
That's the idea, why do you think PhysX is barely used? I know it's slightly different, but it's still an example of a closed standard that isn't guaranteed to be there in years to come. OpenCL development is slow, but it's getting there, and the industry really is pushing for an open standard. Look at V-Ray, they've started OpenCL development, which will be, in my opinion, a big leap when it comes to its usage. That's my main interest when it comes to GPU computing, using it to accelerate rendering.
If by open you mean it's not maintained by NVIDIA, then yeah. But it is open in the sense that anyone who wants to write an implementation of it can do so. You should check your facts.
It's only open in the sense that you are leaving yourself open to be shafted.
The discussion has been had many times before. CUDA under Nvidia is going nowhere; it's doomed.
Actually, right now it's a thriving "industry". Lots of research areas have been investigated, hundreds to thousands of papers have been published, several conferences have been held all over the world, and scores of companies have sprung up providing analysis through it. PhDs have been minted, PhD studentships offered, and fellowships and other research positions created, all around CUDA. It's looking great, if anything.
On the other hand, it feels fairly difficult to imagine CUDA being used by the mainstream. The average person isn't designing aerofoils or predicting the weather, so I can't really see people using this tech much. Then again, I will agree that I could be hopelessly wrong about that. After all, "640K ought to be enough for anybody," right?

You seem to think it's only used for research-type things, but GPU acceleration is very important to "mainstream" heavy computer users. It's very important for computer animation, which is arguably a much larger market than aerofoils and weather prediction, in the sense that a lot more people are ready to buy computer modelling and animation software that uses GPU acceleration than weather-prediction software.