Discussion On The ATI Radeon 5*** Series Before They Have Been Released Thread

5870 1GB will do me VERY nicely.

Now if only they could make their own CUDA.

They don't need to.

nVidia have CUDA, and while they'll ship a driver for Microsoft's API and OpenCL, they'll push devs to use CUDA.
Microsoft will likely create their own GPGPU technology.
AMD, Apple and others are using OpenCL; Apple already have this out.
I've not seen anything from Intel, but I'd assume they'd work with Microsoft's API and OpenCL.

Now there are two big camps here and I don't expect CUDA to hold out in the long run.

It's interesting that they've moved to SIMD rather than continuing with SPMD. Either way it doesn't matter as long as the thing works fast! (See the kernel sketch after this post.)

Hmm - Mac Pro twin quads, 16GB, 5870 2GB. Things could get interesting next year!
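
For anyone who hasn't looked at it yet, OpenCL's kernel language is a C dialect much like CUDA's, and the SPMD flavour survives at the source level even when the hardware underneath is SIMD. A minimal sketch of a vector-add kernel (the vadd name and float element type are just illustrative):

    __kernel void vadd(__global const float *a,
                       __global const float *b,
                       __global float *c)
    {
        /* Written SPMD-style: each work-item handles one element,
           and the driver maps work-items onto the chip's SIMD lanes. */
        int i = get_global_id(0);
        c[i] = a[i] + b[i];
    }

The same source compiles and runs on any vendor's OpenCL driver, which is exactly why the nVidia-only nature of CUDA looks like a liability in the long run.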
 

Just to add to the rundown above: AMD does have its equivalent to CUDA in the Stream SDK, which provides a couple of ways to program its graphics hardware, most notably Brook+ and CAL. I don't expect either Brook+ or CAL to be particularly popular, with the exception that AMD's OpenCL implementation is supposed to be built on CAL (CAL being a very close-to-the-hardware language). Intel are supposedly making their own SDK for Larrabee; programmability is one of Larrabee's much-trumpeted features, and obviously it's Intel that has to accommodate it. Finally, Microsoft's GPGPU language, the Compute Shader, is being introduced with DX11.
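
Since AMD's OpenCL implementation is supposed to sit on top of CAL, it's worth a look at what the vendor-neutral host side above it looks like. Here's a minimal sketch in C against the OpenCL 1.0 API, reusing the vadd kernel from earlier in the thread; error checking is trimmed for brevity and the array size is arbitrary:

    #include <CL/cl.h>
    #include <stdio.h>

    /* OpenCL kernel source, passed to the driver as a string and
       compiled at run time for whatever GPU is present. */
    static const char *src =
        "__kernel void vadd(__global const float *a,\n"
        "                   __global const float *b,\n"
        "                   __global float *c) {\n"
        "    int i = get_global_id(0);\n"
        "    c[i] = a[i] + b[i];\n"
        "}\n";

    int main(void)
    {
        enum { N = 1024 };
        float a[N], b[N], c[N];
        for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

        cl_platform_id plat;
        cl_device_id dev;
        cl_int err;
        clGetPlatformIDs(1, &plat, NULL);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);

        /* Build the kernel from source for this device. */
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "vadd", &err);

        /* Copy inputs to the device, allocate the output buffer. */
        cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   sizeof a, a, &err);
        cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   sizeof b, b, &err);
        cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, &err);

        clSetKernelArg(k, 0, sizeof da, &da);
        clSetKernelArg(k, 1, sizeof db, &db);
        clSetKernelArg(k, 2, sizeof dc, &dc);

        /* One work-item per element; the runtime picks the group size. */
        size_t global = N;
        clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);

        printf("c[3] = %.1f\n", c[3]); /* expect 9.0 */
        return 0;
    }

Whichever stack is underneath (CAL on AMD, the CUDA driver on nVidia), these host calls are identical, which is the whole selling point over Brook+ or any other vendor SDK.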
 
CUDA doesn't really have a big impact on gaming... but CUDA isn't going anywhere until there's a very stable, tested and feature-rich alternative, as it's used extensively in education and medical establishments, amongst other uses.

This is one of the reasons I dislike all the dogging of PhysX and CUDA: they've been out for ages, they're well-rounded, mature products, and no one else can be bothered to create their own alternative or actively work towards a common standard. Eventually we'll have OpenCL and physics implementations built on it, but it will be ages before they're at the same calibre as CUDA and PhysX, if they ever get there, which unnecessarily holds up progress for months or even years.
 
I tried to find a thread from a couple of weeks ago in which those of us recommending just getting a £120 HD4890 now, rather than waiting (just a week) for the new ATI cards, were ridiculed by Kylew et al. I'm just wondering if they still think a £120 HD4890 is a poor purchase, given (a) the lack of concrete details and (b) the ever more likely £220-plus price tag.

I'm not trying to be argumentative with this one, just wondering whether the extreme optimism about a 2x-performance, £150 graphics card has subsided, or whether there's still a feeling that it could materialise.
 
Idle Temps
[idle temperature screenshots not recovered]

Load Temps
[load temperature screenshots not recovered]
 