1st Fermi review

It's up to me if I want to spend that much on something. Nobody else. Who said I was a fanboy? haha

Wow, a bit defensive there!

Relax, you can spend whatever you want on a graphics card, I can't and won't stop you :)

It's just the way you said it; maybe I misinterpreted it, but you sounded almost over-enthusiastic to buy it even if it turns out to be very poor value for money (not saying it will be).
 
Larrabee will support ANY API through a wrapper; that's the whole design of it (and it will add support for any future revisions too, without a hardware upgrade). But that's primarily for graphics APIs. Why on earth would you want to run OpenCL on an x86 architecture that natively supports C, C++ etc.? You would lose performance.
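
Rough sketch of what I mean (a hypothetical example I've just made up, plain C++): on an x86 part the "native" route is just ordinary compiled code, with no kernel language or OpenCL runtime sitting in between to lose performance to.

#include <cstdio>
#include <vector>

// Plain C++ SAXPY: on an x86 design like Larrabee this compiles and
// runs natively, with no driver or compute-API layer in the way.
void saxpy(float a, const std::vector<float>& x, std::vector<float>& y) {
    for (std::size_t i = 0; i < x.size(); ++i)
        y[i] = a * x[i] + y[i];
}

int main() {
    std::vector<float> x(1024, 1.0f), y(1024, 2.0f);
    saxpy(2.0f, x, y);
    std::printf("y[0] = %f\n", y[0]); // prints 4.000000
    return 0;
}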

I forgot to mention that Fermi actually supports native C++, btw, so they are already making moves towards a Larrabee-like solution.
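
To illustrate what "native C++" means in practice, here's a minimal sketch (my own made-up example, not from any review or NVIDIA docs) of the sort of thing CUDA accepts in device code: templates and small classes with operator overloading rather than plain C kernels.

#include <cstdio>

// A small device-side class with an overloaded operator+.
struct Vec2 {
    float x, y;
    __device__ Vec2 operator+(const Vec2& o) const {
        Vec2 r; r.x = x + o.x; r.y = y + o.y; return r;
    }
};

// A templated kernel: one source instantiates for float, Vec2, etc.
template <typename T>
__global__ void add(const T* a, const T* b, T* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = a[i] + b[i];
}

int main() {
    const int n = 256;
    Vec2 *a, *b, *out;
    cudaMalloc(&a, n * sizeof(Vec2));   // buffers left uninitialised;
    cudaMalloc(&b, n * sizeof(Vec2));   // this is only a compile-and-run sketch
    cudaMalloc(&out, n * sizeof(Vec2));
    add<Vec2><<<(n + 127) / 128, 128>>>(a, b, out, n);
    cudaDeviceSynchronize();
    cudaFree(a); cudaFree(b); cudaFree(out);
    std::printf("kernel launched OK\n");
    return 0;
}

Compiles with nvcc; Fermi pushes this further with things like virtual functions and new/delete in device code.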

edit: also, PCs outnumber Macs by 10:1 or so; Macs are not irrelevant, but they are certainly not a driving force in API adoption.

While that's true, there seem to be a lot more "professionals" who will use a Mac and OpenCL than just "normal" users.

Look at how many people use Macs for Photoshop work, for example; Adobe would be shooting themselves in the foot not to work on an OpenCL implementation.

In addition to that, companies making unbiased render software are adopting OpenCL alongside their CUDA usage.

It really makes the most sense, as software developers only stand to gain from making their software compatible with as many users as possible.

It only really takes one killer application of OpenCL for it to be established.
 
I did, but I'm just disagreeing. It will either be two 5850s or a 480.

I think the point really is that you'd have to be mad to buy a GTX 480 for gaming when two 5850s will smash it performance-wise for the same or possibly less monies.
 
I did, but I'm just disagreeing. It will either be two 5850s or a 480.

This makes some sense, assuming you don't care much about performance gains above a single GTX 480. Two 5850s in Crossfire will do a better job in games, but you may want a single GPU, which is understandable. What isn't understandable is why you don't consider an overclocked 5870 as an option: it WILL be cheaper than a GTX 480 and should offer roughly the same or even better performance.
 
I think the point really is that you'd have to be mad to buy a GTX 480 for gaming when two 5850s will smash it performance-wise for the same or possibly less monies.
Will the power consumption of two 5850s be much more than a GTX 480's? Won't two 5850s lose out to the 480 in games that use PhysX to get high framerates, though?
 
The results in the OP have been determined to be fake elsewhere, as they list the GTX 295, a DX10 card, running in DX11 mode alongside the GTX 480 and 5870.

A competitor site broke the NDA and posted some real results early, but those were then pulled down.

Raven posted the link above.

The power consumption is just lol:

[attached image: power consumption graph]

No idea why the colour messed up on that :x
 
Will the power consumption of two 5850s be much more than a GTX 480's? Won't two 5850s lose out to the 480 in games that use PhysX to get high framerates, though?

Power consumption of two 5850s looks to be lower than one GTX 480's.

As for PhysX, lawl! PhysX is just a joke at the moment; it's been out for years and hasn't really taken off.

As for framerates, there are, what, three games that do hardware PhysX?

Most "PhysX" games listed on the nVidia website don't use hardware physics, and a good chunk of them don't even use any PhysX at all but rather Havok. (yeah I know, I don't know why they're listed as PhysX in the first place).
 
This is the power consumption of the whole system. It's still high enough to be a con, as the 5970 is less power-hungry even though it's a dual-GPU card.
 
While that's true, there seem to be a lot more "professionals" who will use a Mac and OpenCL than just "normal" users.

Look at how many people use Macs for Photoshop work, for example; Adobe would be shooting themselves in the foot not to work on an OpenCL implementation.

In addition to that, companies making unbiased render software are adopting OpenCL alongside their CUDA usage.

It really makes the most sense, as software developers only stand to gain from making their software compatible with as many users as possible.

It only really takes one killer application of OpenCL for it to be established.

I think I'm not getting my point across here: CUDA = successful today, OpenCL/DirectCompute = not yet well established.

It's no good taking anything as a given since we are in a twilight period. CUDA is definitely a massive selling point in the professional field today, and it won't lose its relevance in six months or a year, or most likely even two years.

The other important thing to remember is that CUDA can only be an advantage for Nvidia, seeing as they support OpenCL and DirectCompute as well. So they have the lead in the immediate future and full support for whatever API wins out in the longer term. ATI has to wait for the full-scale adoption of those, since Stream has essentially been a failure.
 
This is the power consumption of the whole system. It's still high enough to be a con, as the 5970 is less power-hungry even though it's a dual-GPU card.

Whole system, of course; it just shows that the GTX 480 is using more power than two 5850s.
 
No one seems too bothered about a Core i7 920's ridiculous power consumption at 4GHz; not sure why they're bothered about the GTX 480's load power, especially on an enthusiast forum. At idle it seems fine.
 
I was thinking that as well. Is this not an enthusiast site? You shouldn't worry about power consumption, and you shouldn't worry about heat that much; as long as it's below 100°C it's safe, right? Performance is everything, remember :)
 
No one seems too bothered about a Core i7 920's ridiculous power consumption at 4GHz; not sure why they're bothered about the GTX 480's load power, especially on an enthusiast forum. At idle it seems fine.

Probably because, firstly, it's overclocked by 50-odd percent, and secondly, even overclocked, they don't use that much power.
 
I think I'm not getting my point across here: CUDA = successful today, OpenCL/DirectCompute = not yet well established.

It's no good taking anything as a given since we are in a twilight period. CUDA is definitely a massive selling point in the professional field today, and it won't lose its relevance in six months or a year, or most likely even two years.

The other important thing to remember is that CUDA can only be an advantage for Nvidia, seeing as they support OpenCL and DirectCompute as well. So they have the lead in the immediate future and full support for whatever API wins out in the longer term. ATI has to wait for the full-scale adoption of those, since Stream has essentially been a failure.

Sure thing: whoever needs CUDA now will buy a Fermi graphics card. Nonetheless, it's not a gaming GPU.
 
I was thinking that as well. Is this not an enthusiast site? You shouldn't worry about power consumption, and you shouldn't worry about heat that much; as long as it's below 100°C it's safe, right? Performance is everything, remember :)

A hot card heats up the rest of your PC, which can have an adverse effect on the rest of the PC's performance and overclockability.
 
Sure thing: whoever needs CUDA now will buy a Fermi graphics card. Nonetheless, it's not a gaming GPU.

They can claim it's not a gaming GPU all they want; the simple fact that they've put a Fermi GPU in something they're marketing and selling as a gaming graphics card says otherwise.
 