GTX580 specs and benchmarks

http://vr-zone.com/articles/report-...includes-128-tmu-benchmarks-leaked/10202.html

Following up on previously leaked specifications, Chinese website eNet has filled in some missing information - notably TDP and TMU count. The TDP of GeForce GTX 580 is at 244W, slightly lower than the GeForce GTX 480. The texture fillrate had been viewed by many as one of the bottlenecks for GF100, and eNet reports that GF110 effectively doubles the TMU count to 128 TMUs. Apart from this substantial improvement in TMU, the GF110 is a "full revision" and fixed version of GF100.

The GTX 580 is set to release in one week's time, on November 8th/9th (depending on where you live), but the jury is still out on the extent of availability on release. While strong rumours suggest virtually no availability on launch, eNet insists that AIC partners will have GTX 580s on sale on November 9th. However, the quantity of cards for sale on November 9th is not mentioned.

That TDP is looking unlikely IMHO

[attached benchmark graph: 5715343_1288611776397_1024x1024.jpg]
 
So if the numbers are to be believed, it's a mere 20% quicker than the 480. Poor availability and no doubt a high cost; it will be interesting to see just where the 6970 ends up in relation to it.
 
A poor result in Crysis if true, and without knowing the resolution it's pretty meaningless. I don't see anything to make me want to trade my 5870 in just yet. The 2560 resolution results will be interesting.

If it's power efficient and scales well with resolution over my 5870 I will consider buying one. Knowing Nvidia though .....
 
Well, 244W versus the 250W the 480gtx was rated at will likely mean a REAL TDP of around 290W. That's not altogether terrible when you consider the slight clock/shader increase over the original GF100 would add another 20-30W on top of that.
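To make that post's arithmetic explicit (a quick sketch: the rated TDPs are the rumoured/official numbers from this thread, while the real-world GTX 480 draw is a hypothetical fill-in chosen for illustration):

```python
# Back-of-envelope version of the TDP reasoning above.
# Rated TDPs are the numbers from this thread; the real-world
# GTX 480 draw below is a hypothetical assumption.
rated_gtx480 = 250.0  # W, GTX 480 official TDP
rated_gtx580 = 244.0  # W, leaked GTX 580 TDP
real_gtx480 = 297.0   # W, assumed real GTX 480 draw (hypothetical)

# If real draw scales with rated TDP, the GTX 580 would land around:
real_gtx580 = rated_gtx580 / rated_gtx480 * real_gtx480
print(f"estimated real GTX 580 draw: {real_gtx580:.0f} W")  # ~290 W
```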

Ignoring the last four results (two actual tech demos/benchmarks and the two "Nvidia" games so heavily backed by Nvidia they aren't worth talking about), it seems odd that it gets a 40% boost in 3DMark while in almost every other situation (all the other games) it's getting 15-20% extra performance.

But it's exactly what you'd expect: give a GF100 32 extra shaders and 50-75MHz and you'd have identical performance. It's truly, ridiculously awful that they are calling it a 580GTX.

I've said it before: if AMD had launched Cypress last year but only delivered a 5870 with 1440 shaders, then a full year later relaunched a 1600-shader Cypress and called it a 6870, it would be laughable. Of all Nvidia's rebrands, this is the single worst they've done.

If the 580GTX had significantly fewer shaders but the same performance, i.e. increased performance per shader, or significantly more shaders as a new generation, fine.

Even if they'd launched Fermi last November as a 480SP architecture, they'd at least have a leg to stand on. But they've already physically "launched" an all but identical architecture WITH 512SPs and the performance the "580GTX" will provide, and that card should have been the 380GTX, let alone a 480gtx.
 
So if the numbers are to be believed, it's a mere 20% quicker than the 480. Poor availability and no doubt a high cost; it will be interesting to see just where the 6970 ends up in relation to it.

Actually, looking at that graph, if true, the gtx580 varies from only 5% quicker than a gtx480 in RE5 to 37% in Vantage. Stone Giant is 22%.

In fact most games are in the 5% to 20% range, so perhaps a 15% gain on average if I am being generous.

It certainly won't be worthwhile for anybody with a gtx480 to upgrade, especially if their gtx480 is already overclocked.
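The rough averaging in that post can be sketched numerically (the band endpoints and outliers are the figures quoted above; the midpoint is just a ballpark, not a measured average):

```python
# Midpoint of the reported 5-20% per-game gain band (rumoured figures).
low, high = 5, 20
midpoint = (low + high) / 2
print(f"ballpark average gain: {midpoint}%")  # 12.5% -> "perhaps 15%"

# The outliers quoted above, for reference:
outliers = {"Resident Evil 5": 5, "Stone Giant": 22, "3DMark Vantage": 37}
```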
 
When did the 5870 beat the GTX480 in Crysis?

This is one of the things that bothers me no end. At the 5870's release, the AMD cards were generally beating their Nvidia counterparts in Crysis Warhead, but behind in Crysis. In the 480gtx release reviews, it was behind the 5870 in Warhead.

Since then, and this is the part that irks me, many sites have shown differing results: in one 460gtx review Nvidia is ahead of AMD across the board in Warhead, while in another review on another site AMD is still ahead in Warhead.

IF those are Nvidia's own numbers, they seem to concede being behind the 5870 in Warhead. Even stranger, they included a game no one plays in LP2, so why didn't they go with the original Crysis rather than Crysis Warhead, as those results would probably show Nvidia in a better light?

I'm just starting to trust individual reviews less and less. Nvidia will happily use whatever settings in whatever game they want to maximise the difference. Even at generally high settings with DOF/tessellation on, I don't think the 480gtx is almost 60% ahead of AMD, and with DOF/tessellation off the gap is closer; yet in all the settings they could change for Warhead, they couldn't find one where Nvidia pulled ahead of AMD.

So how do other reviews show a change, with AMD now behind in Warhead? Bad results, cheating, I really don't know. Benchmarking has become a bit of a joke; review sites constantly do dodgy things, like rehashing old results against newer drivers on other cards. MANY reviews of 460gtx's are done against old 5870/5850/480/470gtx results, so the 460gtx has 10-15% faster drivers, more in some specific games.
 
Actually, looking at that graph, if true, the gtx580 varies from only 5% quicker than a gtx480 in RE5 to 37% in Vantage. Stone Giant is 22%.

In fact most games are in the 5% to 20% range, so perhaps a 15% gain on average if I am being generous.

It certainly won't be worthwhile for anybody with a gtx480 to upgrade especially if their gtx480 is overclocked already.

This is one of the main problems I see. If the 480gtx can hit, say, 900MHz (I'll be honest, I don't have a clue what they average in overclocks), and 580gtx's can also only hit 900MHz, then the MAJORITY of the new card's extra performance is from clock speed. If they both hit the same overclocks, that performance advantage more than halves and you're left with basically 6.5% extra shaders; in games where clock speed matters more than shaders, there might be next to no performance gap at all.
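That post's shader arithmetic works out as follows (a simplification assuming performance at equal clocks scales only with enabled shader count):

```python
# GF100 (GTX 480) vs full GF110 (GTX 580) shader counts from this thread.
sp_gtx480 = 480
sp_gtx580 = 512

extra_pct = (sp_gtx580 - sp_gtx480) / sp_gtx480 * 100
print(f"extra shaders: {extra_pct:.1f}%")  # ~6.7%, the "basically 6.5%" above
```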

Honestly, especially at the price of a 480/580gtx, you'd be mad not to get every last % of performance out of it, so the 580GTX needs to overclock 5-10% further or it will be an absolute waste of a card. If they've done a 4890 and added something that helps it overclock, it might be worthwhile; if they've increased how far it can overclock, rather than merely maintained the same overclock, the performance difference at max overclocks might extend to maybe 30%.

The 4890 was semi worth it as the 4870 didn't overclock well, the difference between an overclocked 4890 and a 4870 was fairly decent.
 
If those numbers are to be believed, the 580 GTX looks bad...
I hope it can overclock well, because it doesn't even come close to the 1-year-old 5970 (dual GPU). Fail.
 
Won't stop people buying it though, because it's important to have the single fastest GPU, unless of course that single fastest GPU isn't the brand they currently favour :)
 
If those numbers are to be believed, the 580 GTX looks bad...
I hope it can overclock well, because it doesn't even come close to the 1-year-old 5970 (dual GPU). Fail.

Is there any information showing that ATI's next single GPU card will beat the 5970?

Because you are making rather large assumptions, when in fact you may end up eating your own words.

On a separate note, Nvidia should call this the 485, as it is not a new generation as such. Still, I suppose they are all at it.
 
Is there any information showing that ATI's next single GPU card will beat the 5970?

Because you are making rather large assumptions, when in fact you may end up eating your own words.

On a separate note, Nvidia should call this the 485, as it is not a new generation as such. Still, I suppose they are all at it.

lol, this is the same guy that thinks AMD's MLAA looks better than 32xAA on Nvidia cards.

Even AMD don't expect the 6970 to be faster than the 5970.

[attached slide from AMD's HD 6800 presentation: amdhd6800presentation07.jpg]
 