
This is how good Fermi could have been (theoretical analysis)

Now it's fair to say the GTX480 & GTX470 have come in for a hailstorm of criticism on these boards concerning their performance and power usage, and with that in mind I wanted to find out just how good Fermi could have been IF it were as efficient as the HD5970, HD5870 & HD5850 in terms of power usage.

This first chart shows the average frame rate of the cards in the games below, and also the load power draw in Furmark. I got these numbers from Anandtech, who are seemingly reputable.

[Image: FermiOrigdata1.jpg (average FPS per game and Furmark load power for each card)]


Here you can see the cards listed in performance order: HD5970, GTX480, HD5870, GTX470, HD5850, with their power draw at the bottom and the % increase in performance they have over each other. Note that despite the huge power draw of the two Fermi cards, their lead over the respective ATI cards can be measured in single digits.

Now put that to one side and let's examine just how good Fermi could have been if the chip was as efficient as ATI's. I've increased the FPS by the % difference in power load between the GTX480 and HD5870, and between the GTX470 and HD5850.
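To make the method concrete, here's a minimal sketch of the calculation. The power figures are placeholders (250W/188W are the GTX480/HD5870 TDPs quoted further down the thread) and the FPS value is made up; the chart itself uses Anandtech's actual numbers.

```python
# Minimal sketch of the scaling: boost each Fermi card's FPS by the %
# difference in load power versus the competing ATI card.
# NOTE: 250 and 188 are the GTX480/HD5870 TDPs quoted later in the thread;
# 60 FPS is a made-up example, not a number from the chart.

def theoretical_fps(actual_fps, fermi_watts, ati_watts):
    """FPS the Fermi card 'would' get if it matched ATI's power draw."""
    return actual_fps * (fermi_watts / ati_watts)

print(theoretical_fps(60.0, 250, 188))  # ~79.8 FPS, a ~33% uplift
```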

[Image: Fermitherodata1.jpg (theoretical FPS after scaling for power efficiency)]


I can't help but feel Nvidia has missed the boat here. The GTX480 could have been as fast (by a cat's whisker) as the HD5970 and a lot cheaper to boot, and the GTX470 would have forced ATI to make serious price cuts on its two other cards.
 
I think the top G100 was supposed to be 512 CUDA cores / 750 core / 1500 hot clock, which is an extra (512/480) x (750/700) = 14% of performance. Bandwidth could have been increased by 14% quite easily too. So say a 12% overall boost, and we get:
Compared to the 5870 - 1.12 x 1.085 = 22% faster, nowhere near the 38% that has been calculated.

However, on price, 480 vs 5870 = $100/$399 ≈ 25% more expensive;
~50% larger die area (500mm² vs 330mm²); and
(assuming TDPs of 250W vs 188W) 33% more power.

Still very similar in most metrics vs the 5870. If anything it would have forced the 5970 down to USD499, since it would be a more compelling product.

PS: Didn't do an analysis against an "upgraded" 470 because I simply don't know what Nvidia had in mind for that product.
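For anyone who wants to play with the numbers, a quick sketch of the core/clock scaling above (pure arithmetic, no new data; the 448-core variant matches the reworked figures in a later post):

```python
# Performance assumed to scale with (shader ratio) x (clock ratio),
# exactly as in the post: 512 cores @ 750MHz vs the shipping parts.

def uplift(cores, clock, base_cores, base_clock=700):
    return (cores / base_cores) * (clock / base_clock)

print(uplift(512, 750, base_cores=480))  # ~1.14 -> the +14% vs a GTX480
# Knock it back to ~12% for imperfect scaling, then stack it on the
# GTX480's existing ~8.5% lead over the HD5870:
print(1.12 * 1.085)                      # ~1.22 -> ~22% faster than an HD5870
print(uplift(512, 750, base_cores=448))  # ~1.22 -> the +23% quoted vs 448 cores
```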
 
That's an interesting take... but to get a better picture IMO you need to lose the 2560x and 1680x results. Some of the cards, the 5970 especially, don't come off at their best at 1680x, and the 2560x results are randomly skewed by VRAM, different AA settings and implementations, etc.
 
Remember ATI sacrificed one generation of cards to effectively test the waters at 40nm (can't remember all the details about that...). It's a real shame Nvidia didn't do this too, as Fermi seems fantastic. Hopefully they'll sort it out for 32nm and we'll have some amazing cards!
 
Remember ATI sacrificed one generation of cards to effectively test the waters at 40nm (can't remember all the details about that...). It's a real shame Nvidia didn't do this too, as Fermi seems fantastic. Hopefully they'll sort it out for 32nm and we'll have some amazing cards!

Not a generation; it was the 4770, the first 40nm GPU. Quite a nice card that suffered from the problems that plagued the process.
 
I think the top G100 was supposed to be 512 CUDA cores / 750 core / 1500 hot clock, which is an extra (512/448) x (750/700) = 23% of performance. Bandwidth could have been increased by 23% quite easily too. So say a 20% overall boost, and we get:
Compared to the 5870 - 1.2 x 1.085 = 30% faster.

....

A highly theoretical analysis for a "GTX 448", which is theoretically possible! What's the same calculation for the GTX 480?
 
Remember ATI sacrificed one generation of cards to effectively test the waters at 40nm (can't remember all the details about that...). It's a real shame Nvidia didn't do this too, as Fermi seems fantastic. Hopefully they'll sort it out for 32nm and we'll have some amazing cards!
Nvidia DID sacrifice a generation of cards; the 380GTX was binned and Fermi accelerated to take its place.
 
That's an interesting take... but to get a better picture IMO you need to lose the 2560x and 1680x results. Some of the cards, the 5970 especially, don't come off at their best at 1680x, and the 2560x results are randomly skewed by VRAM, different AA settings and implementations, etc.

They'll revoke your Nvidia membership badge if you keep suggesting more fair and rounded testing :p

I agree with you (it feels so strange inside to do that :( ) that a better picture would be had without the randomly odd VRAM-limited tests. I think throwing out PhysX tests would also help, but that would obviously take back Nvidia's advantage somewhat.


But the fundamental problem with the power issue is that the 480GTX has almost 50% more transistors and is almost 60% bigger. It would never in a million years use the same power as the 5870.

The issue is that power usage is, roughly speaking, not bad on the 480GTX: it's using about 60% more power on a 60% larger chip, so it IS power efficient. The reason it has to sit at 1.4GHz, hot as hell and loud as hell, is that clocking it higher, as they wanted to, brings an exponential increase in heat and power.

The question is NOT its power usage, but the performance it gives for that power, and that's where it's a completely inefficient architecture. 60% more transistors will never really use less than 60% more power; to suggest otherwise is to ignore the laws of physics.

The ONLY way the 480GTX would use 50% less power would be to cut 35-45% of the core.

You really need to get the idea out of your head that it's inefficient in power per transistor. It's not; it's using EXACTLY what you'd expect 60% more transistors to use.

I'll give you a hint: if you added 60% more transistors to the 5870, it would have the same power load.

The architecture is inefficient; there is no improving or getting around that. Heck, AMD's is massively efficient WHEN it can use every shader. Even the 5870 isn't that efficient in general use, as it's next to impossible to program a game to use all 1600 shaders on every clock; it's lucky to hit 60% average, and yet it's hugely more efficient already. If programming ever got to the point where it was easy to get 90% of the performance out of a 5870, well, it would blow the 480GTX out of the water for little to no extra power usage or cost to the consumer. Nvidia don't have any massive performance gains to be made from better programming; programming-wise, it's a very efficient architecture to program for.

Your two options are thus: cut 40% of the transistors and probably 30% of the shaders on the 480GTX to get similar power levels, in which case it would be trading blows with a 285GTX; or increase the transistor count of the 5870 by 45% to get similar power usage.

Either way, plainly the 5870 wins, badly.
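To put the "performance for that power" point in numbers, here's a rough sketch with assumed figures: the 250W/188W TDPs quoted earlier in the thread, and invented FPS values chosen only so the GTX480 leads by ~8.5%, roughly the gap in the OP's chart.

```python
# Perf-per-watt comparison. FPS values are placeholders chosen so the
# GTX480 leads by ~8.5%, roughly the gap in the OP's chart; 250W and 188W
# are the TDPs quoted earlier in the thread.

cards = {
    "GTX480": {"fps": 65.1, "watts": 250},
    "HD5870": {"fps": 60.0, "watts": 188},
}

for name, c in cards.items():
    print(f"{name}: {c['fps'] / c['watts']:.3f} FPS per watt")

# GTX480 ~0.260 vs HD5870 ~0.319: on these assumed numbers the 5870
# delivers ~23% more FPS per watt, which is the efficiency gap being
# argued about, separate from the power-per-transistor question.
```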
 
The only fair way to do PhysX tests is with a dedicated GPU in each system, i.e. a GTS250 sitting alongside the rendering card.
 
The only fair way to do PhysX tests is with a dedicated GPU in each system, i.e. a GTS250 sitting alongside the rendering card.

Not a huge amount of chance of that happening. Didn't the 4890 with a dedicated PhysX card beat the 285GTX (also with its own dedicated PhysX card) in Batman?

I'm still a touch undecided on the reviews in terms of fairness. Some sites did show it mostly in its best light, with things like Metro 2033, and no one really mentioned the quality difference in DX11 with the two main options on or off.

Unfortunately we're still rather unaware of basic things like how available it will be when it's "available", whether they will stick with very few batches, and how many of the cheap pre-orders never appear in stock, with customers weeks later offered other cards when certain models are discontinued.

Will the Gainwards at £320 around the place all end up £350 to buy? A 75% price increase over a 5850 is just not going to cut it. The 480GTX is better value than the 470GTX, but only because the 5870 is already so much worse value than the 5850.

This gen, the 5850 is the card to get no matter what performance level you want: 1, 2, 3 or 4 cards, get 5850s. OK, if you have 2k to spend on watercooling and 3 screens and 3D glasses, get 2x 5970 4GBs or 3x 480GTXs, but there really isn't a situation where the 5850, in any number, is not better value with miles more performance for the price.

Nvidia should just pay AMD for a licence to produce them, and then we can get both companies putting the majority of their 40nm allocation into 5850s and we'd all be happy :p
 
The 470 seems to be more on par with 5870 performance though; better in some cases and worse in others.

I reckon if we see a more mature driver release, the 470 will be able to best the 5870.
 
The 470 seems to be more on par with 5870 performance though; better in some cases and worse in others.

I reckon if we see a more mature driver release, the 470 will be able to best the 5870.


HardOCP:

GeForce GTX 470

Starting from the bottom up, we would say the least relevant video card is the GeForce GTX 470. In all our gameplay testing today, not once did the GeForce GTX 470 provide a superior gameplay experience compared to the Radeon HD 5850, even in Metro 2033. In fact, performance was very close between both video cards, and in some cases the Radeon HD 5850 proved to provide faster framerates. Looking strictly at performance, these video cards are equal.



Of course, we have to look beyond just performance, as metrics such as cost and power consumption come into play. The GeForce GTX 470 is more expensive than the Radeon HD 5850. We are seeing some great deals on Radeon HD 5850 cards starting to crop up, and the fact is that the Radeon 5850 can be purchased with a lot less of your cold hard cash. Looking at power, the GeForce GTX 470 consumes a good bit more power than the Radeon HD 5850, while also producing higher thermals.

The HD 5850 is the clear value winner when compared to the GTX 470. We have been telling you that the Radeon 5850 is the best value in enthusiast video cards since last year and the GTX 470 does nothing to change that.

http://www.hardocp.com/article/2010/03/26/nvidia_fermi_gtx_470_480_sli_review/8
 
The 470 seems to be more on par with 5870 performance though; better in some cases and worse in others.

I reckon if we see a more mature driver release, the 470 will be able to best the 5870.

No it's not. Have you not read the chart (based on Anandtech benchmarks)? The GTX470 is over 10% slower than the HD5870 and less than 5% faster than the HD5850.

Driver improvements will increase the card's performance, but they won't close the gap on the HD5870, which will also be getting better drivers. Don't forget this card was meant to be out in September last year, so Nvidia have had all this time to get the drivers primed for existing games like Crysis and Far Cry 2.
 
I wouldn't rule out nVidia driver improvements... I mean the 8800GT even got a boost in the latest set - granted it was 5-7.5% in a handful of games - but that puts it over 30% faster in some stuff than when it was released.
 
nVidia originally planned a 40nm 200-series core with DX10.1 (GT212, GT214, GT216, etc.), which was mostly canned, but a few versions of it now exist under the 300-series name.
 
Yeah, as far as I can remember the GT220 core was going to be the 380GTX, but it was quietly shelved, presumably as a result of ATI going with DX11.
 