
***Nvidia GTX480 & 470 reviews & discussion***

Hi

I was going to go for 2 of the 480s for my new complete rig, but I'm not sure now. So do you guys think I should, if money is no problem?

Looking at ultimate performance, GTX480 SLI has absolutely no chance of matching overclocked HD5870s.

GTX480s have very little overclocking headroom, whereas 5870s have loads, up to 20%.

So even if money is no issue, overclocked crossfire 5870s are going to give you much better performance than GTX480 SLI for 2/3 the cost. You'd have to be mad to choose the 480s for games.
 
Fud has reported that EVGA will be launching a superclocked version with the stock cooler!!!

Also, just saw over on SemiAccurate that in order to prevent screen flicker when running dual screens in SLI mode, they had to up the power consumption. SLI idles at 95 degrees!!!
http://www.semiaccurate.com/forums/showthread.php?t=2011

LOL, I wonder what the NV botnet are thinking after years of stereotyping ATi with hot GPUs.
 
Very disappointing reading the CPC review this morning. I was hoping for a marginal 10% increase over the 5870, but it appears this is not to be, and with higher power requirements plus a whopping 30% price premium over its competitor, I feel that the FX 5800 days have come back to haunt them.

However, with all this said, I'm still wondering whether current games just aren't making the best use of the new architecture, such as tessellation, and whether this may still be the trick up Nvidia's sleeve. Maybe better drivers and optimised software will see this exceed its rival, but I'm not convinced, especially with their pricing policy. What do they think they are doing?

If this is the new FX5 series, then I look forward to their next round. I think my 6800GT was one of the best buys.
 
But wouldn't Nvidia have to encourage developers to start using tessellation?

As it stands now there are very few games which need a current-gen card, let alone a GTX480. And most games coming out today are console ports. I don't think we will see true next-generation graphics in games until the next wave of consoles hits sometime in 2012.
 
Now that we have seen comparisons between the 5870 and GTX 480 all over the net, why can't someone do a comparison of crossfired 5770s against a GTX 480 to really show how bad the thing is?

And why can't the comparisons be done with the HD 5000s at 1000/1300MHz? The difference between Fermi and the 5870 is even smaller when you overclock them that much.

Of course, they can overclock the Fermis too, but remember that they already reach 92-94 degrees at stock frequencies.
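
For reference, assuming the usual 850MHz core / 1200MHz memory stock clocks on the 5870 and 5770, that 1000/1300 works out as:

1000 / 850 ≈ 1.18, roughly an 18% core overclock
1300 / 1200 ≈ 1.08, roughly an 8% memory overclock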
 
Dunno if anyone has seen this, but Maingear (the case guys Nvidia used to pimp their Fermis at various shows) benched tri-SLI 480s against 5970 quadfire :)

Makes some interesting reading - as suspected, stuttering and low framerates in quadfire, and Fermi scaling better at very high res with all details turned up. Having said that, they used Nvidia reliables like Batman and the Unigine demo. But Crysis did seem to suck on 2x 5970s.

http://www.maingearforums.com/entry.php?24-So-You-Want-To-Buy-A-GeForce-Part-2
http://www.maingearforums.com/entry.php?23-So-You-Want-To-Buy-A-GeForce
 
See, look at how it performs against the 4870 X2:

[attached chart: perfoc.gif]


And 5770 crossfire at 1000/1300 is like 5000 Vantage points faster than the 4870 crossfire, for £240.

5770 crossfire is so underrated because no one does this comparison with them.
 

It was the same with 4850 crossfire - for £200 you had GTX 280-destroying performance at the time.
 
If this is the new FX5 series, then I look forward to their next round. I think my 6800GT was one of the best buys.

Indeed :)

I'm still convinced that Fermi has a lot more to offer. Hardware and drivers aside, if the games aren't using Fermi's new programmable architecture, then the results are really inconclusive and therefore not comparable. The hardware isn't as fixed as it once was, giving more freedom to the developer. It may be that Nvidia still have cards up their sleeve, but from what I've read, it really is down to the developers to utilise and embrace the new technology to get the most out of it. Nvidia haven't invested in their new architecture for nothing. However, I fear we are a long way from feeling the true effects of what Fermi could give us, but I may be wrong.

All I do know is this isn't the card for me, not at those prices.
 

By the time all these new features are able to be exploited, we'll be looking at refreshes/new lines I reckon.
 
I dunno, an older NV card is enough for CUDA :D
Depends what you're doing. Quite a few customers do advanced data analysis and have very, very nice code written for CUDA. They're running quad SLI in more than 10 machines.
The development platform plus tools are far in advance of OpenCL; the development/conversion time would be prohibitive in the extreme.
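
For anyone who's never looked at it, CUDA is basically C/C++ with a few extensions, and a toy kernel gives the flavour. This is just a made-up example of the sort of thing, nothing to do with those customers' actual code:

Code:
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Toy kernel: square every element of an array, one thread per element.
__global__ void squareAll(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] * in[i];
}

int main()
{
    const int n = 1 << 20;                         // ~1 million floats
    std::vector<float> host(n, 3.0f), result(n);

    float *dIn = 0, *dOut = 0;
    cudaMalloc(&dIn,  n * sizeof(float));
    cudaMalloc(&dOut, n * sizeof(float));
    cudaMemcpy(dIn, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks  = (n + threads - 1) / threads;     // enough blocks to cover n
    squareAll<<<blocks, threads>>>(dIn, dOut, n);

    cudaMemcpy(result.data(), dOut, n * sizeof(float), cudaMemcpyDeviceToHost);
    std::printf("result[0] = %f\n", result[0]);    // expect 9.0

    cudaFree(dIn);
    cudaFree(dOut);
    return 0;
}

The <<<blocks, threads>>> launch syntax and the runtime API are the bits OpenCL doesn't give you - over there you're setting up contexts, command queues and kernel arguments by hand, which is a big part of why porting a pile of working CUDA code across is so painful.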
 

Those people don't care about anything other than CUDA though, so to them it doesn't matter if the card gives 1fps in games.
Besides, shouldn't they be buying Teslas then?
 
If you already have capable graphics at this point and want a new card, wait for the ATI refresh or their next HD 6000 series due out later this year.

If buying a new card just for DX11, it isn't really worth it: the HD 5000 series has fixed-function tessellator performance that isn't great (as demonstrated in the Heaven benchmark with Extreme tessellation, where a 5970 still has the same minimum frames as crossfired 5770s), and Fermi is far too expensive and hot. By the time you actually need DX11 features, much better cards will be available and both Fermi and the 5800s will have been replaced.

The problem with tessellation is that a lot of current games, and even the Heaven benchmark, waste a lot of GPU power for no increase in model quality.

Take this example: No Tessellation vs Tessellation.

The alien's tail has a ton more polygons in the tessellation image, and yet it doesn't actually alter how the tail looks at all; most of those extra polygons do nothing to increase the quality of the model but will be reducing performance for little gain.

I'm still convinced that Fermi has a lot more to offer... it really is down to the developers to utilise and embrace the new technology to get the most out of it.

The problem is that games aren't supposed to be programmed for a specific architecture; they are written to an API. Whether Fermi is more programmable or not shouldn't make a difference, because ideally developers shouldn't be targeting specific cards but just writing code compliant with the DirectX API.
 
The alien's tail has a ton more polygons in the tessellation image, and yet it doesn't actually alter how the tail looks at all.

Indeed, the aim is to use as few polygons as possible and yet show all the intricate details; many games seem to have just gone for the "omg let's tessellate lots" approach rather than do it properly.
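
To make the "do it properly" bit concrete, the usual trick is to scale the tessellation factor with distance (or screen-space size) so the extra triangles only get spent where you can actually see them. Here's a rough sketch of the idea with made-up constants - in a real DX11 title this logic would sit in the hull shader rather than in CPU-side code:

Code:
#include <cstdio>
#include <algorithm>

// Distance-adaptive tessellation factor: lots of triangles up close,
// hardly any in the distance where the extra geometry is invisible anyway.
// Constants are arbitrary, purely for illustration.
float tessFactorForEdge(float distanceToCamera)
{
    const float nearDist  = 5.0f;    // full detail inside this range
    const float farDist   = 100.0f;  // minimum detail beyond this range
    const float maxFactor = 16.0f;   // plenty for silhouettes and curvature
    const float minFactor = 1.0f;    // plain untessellated triangle

    // 0 when close, 1 when far, clamped in between.
    float t = (distanceToCamera - nearDist) / (farDist - nearDist);
    t = std::min(std::max(t, 0.0f), 1.0f);

    // Blend from full detail down to none as the mesh recedes.
    return maxFactor + t * (minFactor - maxFactor);
}

int main()
{
    const float distances[] = { 2.0f, 20.0f, 60.0f, 150.0f };
    for (float d : distances)
        std::printf("distance %6.1f -> tess factor %5.2f\n", d, tessFactorForEdge(d));
    return 0;
}

Done like that, a flat wall or a tail that ends up two pixels wide stays at factor 1 instead of getting subdivided into thousands of triangles nobody can see.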
 