
Fermi NDA ends today

Not such a shame IMO - though I have nothing against them - but I don't know anyone who's bought a BFG card in the last 5 years that hasn't had to RMA it.

My 8800GTS still works, and my GTX280 is (surprisingly) still functional. Both are BFG overclocked versions.
 
My BFG 260 died around a year after I got it - RMA'd it - the replacement is working fine for now. Had another BFG card a while back that died around the one-year mark too - didn't bother to RMA it as it was a cheap card. A friend of mine built a 9800GTX+ SLI setup with 2x BFG watercooled cards and both died within a short period of each other. I know people who've had to RMA an 8800GTX and a 280 as well... which is also the sum of everyone I know in person that's bought BFG.

I've got a ton of Gainward, Gigabyte, Asus and Sparkle GPUs here and none of them has failed, except one of the Asus cards, which turned up DOA.
 
Haven't ever purchased BFG, but I've never had a graphics card fail on me without some stupid action causing it - like my 4870X2, when a cousin decided to see how it looked with the cooler off and borked the PCB dismantling it. That's the only card I've ever had fail, and I couldn't blame the card itself.
 
In my case I got a like-for-like replacement - sent out a 65nm Maxcore, got a 65nm Maxcore back (a different one - I checked the serials, and it didn't have the same marks on the cooler as mine). My friend got like for like on his watercooled cards too.
 

Of course, I guess that doesn't really apply if the card that has died is still in production or they have adequate stock of it. I was referring to their surprising generosity with replacements when that particular generation of cards has been out of production - as opposed to them acting like some kind of saintly upgrade fairy.
 
Got to agree with the others, something's not right here. If it were, they'd be shouting from the rooftops about it - Fermi this, Fermi that, Fermi, Fermi, Fermi!!! But they're doing zilch: no specs, nowt. All they're doing is blabbing on about how fast it can render a Bugatti Veyron.

Where's the incentive to make you wait and not go ATI, Nvidia?
 

Wait, have you actually read anything on Fermi? Apart from the clock speeds, the specifications are very clear now for the potential highest-end core:

512 shader cores
64 texture fetch/256 texture filter
48 ROPs
16 'Polymorph' geometry units (each of which includes a tessellation unit)*
384-bit memory bus.

See here:
http://anandtech.com/video/showdoc.aspx?i=3721&p=2

*Given the number of units and the claimed peak tessellation increase over Cypress (approximately 6x), you can probably guess what inspired them to architect this kind of setup. It will probably work in Nvidia's favour, so good on them; hopefully AMD will think of something to counter it in their next generation.
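
A rough back-of-envelope on that footnote: the 16-unit count and the ~6x figure are from the post above, and the per-unit arithmetic is just my illustration of why a distributed design still wins.

```python
# Back-of-envelope on the footnote above. The 16-unit count and the
# ~6x peak tessellation gain over Cypress are the figures quoted in
# the post; everything derived from them is illustrative only.

polymorph_units = 16     # GF100 'Polymorph' engines, one tessellator each
claimed_peak_gain = 6.0  # ~6x over Cypress's single fixed-function tessellator

# If the gain scaled purely with unit count, 16 units would give ~16x.
# A ~6x result implies each unit is individually slower than Cypress's
# tessellator, with the overall win coming from parallelism:
per_unit_rate_vs_cypress = claimed_peak_gain / polymorph_units
print(f"Implied per-unit rate vs Cypress: {per_unit_rate_vs_cypress:.2f}x")  # ~0.38x
```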
 
I'm no good with graphics card details - do any of those give a good basis for judging how much more powerful it is than a 5870/5970? If so, can you give me a rough estimate? I'm only interested in gaming, so if these specs don't tell me how good it is at gaming, they're not very useful to me.
 
Wait, have you actually read anything on Fermi? Apart from the clock speeds, the specifications are very clear now for the potential highest-end core:

512 shader cores
64 texture fetch/256 texture filter
48 ROPs
16 'Polymorph' geometry units (each of which includes a tessellation unit)*
384-bit memory bus.

You're missing the point slightly.

1. Nobody knows - except for one FC2 benchmark and one section of Heaven - how those shader cores perform, especially if lots of tessellation is being used. Where are the Crysis, DiRT 2 or even Batman benchmarks?

2. Nobody knows what speed the shaders will run at on the retail version. The benchmarks could be cherry-picked ones at 1600MHz, and there are rumours the retail ones will only be 1200MHz - so 25% slower.

3. Nobody knows how many 512SP cores there will be. Most likely the 448SP version will be the one available in numbers, with the 512SP version as an extreme/rare/expensive card - and that will be the one they send to reviewers. Yet again, that's a 12.5% drop in performance for the 448SP version, so all the extra gains over a 5870 may well be wiped out by these "compromises".

And Fermi doesn't have dedicated tessellation units AFAIK. It makes clever use of algorithms and the Polymorph features to run tessellation on the shaders. However, if the shaders are processing tessellation, they can't be processing the normal graphics stuff, so there's a trade-off.

And lastly, no. 4: price. That is going to be the biggie vs performance.
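
Worth spelling out how points 2 and 3 compound. A minimal sketch, using the rumoured figures above and assuming performance scales linearly with shader clock and shader count (real scaling is usually somewhat sub-linear):

```python
# Worst-case compounding of points 2 and 3 above. Linear scaling with
# clock and shader count is an assumption, not a measurement.

bench_clock_mhz, retail_clock_mhz = 1600, 1200  # rumoured cherry-picked vs retail
full_sps, volume_sps = 512, 448                 # full core vs likely volume part

clock_factor = retail_clock_mhz / bench_clock_mhz  # 0.75  -> 25% slower
sp_factor = volume_sps / full_sps                  # 0.875 -> 12.5% fewer shaders

combined = clock_factor * sp_factor
print(f"Retail 448SP card vs benchmarked config: {combined:.1%} "
      f"of the performance (a {1 - combined:.1%} drop)")  # ~65.6%, ~34.4% drop
```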
 
I'm no good with graphics card details - do any of those give a good basis for judging how much more powerful it is than a 5870/5970? If so, can you give me a rough estimate? I'm only interested in gaming, so if these specs don't tell me how good it is at gaming, they're not very useful to me.

It's hard to tell and will be very case-by-case dependent. But from what I can gather, I'd say expect 20-30% faster than the 5870 in contemporary DX9/DX10 games, and potentially 60-70% faster in DirectX 11 games once they've had some time to mature - but variation will be wild, because the architectures are hugely different and have strengths in completely different areas.
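
To put those guesses in FPS terms, a toy projection - the 60FPS 5870 baseline is purely hypothetical, and the multipliers are just the ranges above:

```python
# Toy projection of the guessed uplifts onto a hypothetical 5870
# baseline framerate. Both the baseline and the ranges are assumptions.

hd5870_fps = 60.0  # hypothetical 5870 result in some game

for label, low, high in [("DX9/DX10 (guess)", 1.20, 1.30),
                         ("mature DX11 (guess)", 1.60, 1.70)]:
    print(f"{label}: {hd5870_fps * low:.0f}-{hd5870_fps * high:.0f} FPS")
```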
 
After reading the white paper listed on the new Nvidia page... and yes, it doesn't really tell us anything we didn't already know... anyway, here are a few charts from said white paper for you all to look at.

[Five charts from the Fermi white paper: chart4.jpg, chart2b.jpg, chart5.jpg, chart3.jpg, chart1w.jpg]
 
Sorry, I'm going to answer this post in the laziest way possible: as a direct response to each of your points.

You're missing the point slightly.

1. Nobody knows - except for one FC2 benchmark and one section of Heaven - how those shader cores perform, especially if lots of tessellation is being used. Where are the Crysis, DiRT 2 or even Batman benchmarks?

True, but when was the last time that a graphics vendor even hinted at the real-world performance of a major new architecture potentially months before release?

2. Nobody knows what speed the shaders will run at on the retail version. The benchmarks could be cherry-picked ones at 1600MHz, and there are rumours the retail ones will only be 1200MHz - so 25% slower.

Agreed - I'd never argue otherwise. In fact, there's a good chance the benchmark numbers weren't even from a run we'd consider comparable; it could've been a mess of polygons for all we know, and it hasn't been independently verified.

3. Nobody knows how many 512SP cores there will be. Most likely the 448SP version will be the one available in numbers, with the 512SP version as an extreme/rare/expensive card - and that will be the one they send to reviewers. Yet again, that's a 12.5% drop in performance for the 448SP version, so all the extra gains over a 5870 may well be wiped out by these "compromises".

Again, agreed, but we know what's in the core and what's likely to be in reviews - chances are that's all we'll know for a while after they've come out. I doubt they're going to give away their entire immediate graphics lineup to the general public; I'd like to see the last time that happened.

And Fermi doesn't have dedicated tessellation units AFAIK. It makes clever use of algorithms and the Polymorph features to run tessellation on the shaders. However, if the shaders are processing tessellation, they can't be processing the normal graphics stuff, so there's a trade-off.

Polymorph is just the name of the geometry processing unit; as far as I've been able to discern, there is in fact a dedicated tessellation unit in each of those units. Many places have been saying Fermi no longer has a fixed-function pipeline, and this is true. However, don't confuse that with the GPU having no fixed-function hardware whatsoever (that would be utterly ridiculous) - it's just been modularised.*

And lastly, no. 4: price. That is going to be the biggie vs performance.

I'll give 'em a fiver for one.

I'd say it's also worth noting that Nvidia has practically said themselves that GT300 is great with antialiasing, mostly because of what they've done to enhance GT300's raster abilities. Moreover, in HAWX it's 1.6x faster than the GTX 285 when using 4x antialiasing, which is actually just barely faster than the 5870 judging by the results here:

http://www.guru3d.com/article/radeon-hd-5970-review-test/15

(est. GF100: 88FPS, 5870: 77FPS)

Also, in HAWX, it seems as though the 5870 is almost 200% the speed of the GTX 285 itself at 8xAA, depending on resolution, going by this:

http://www.tomshardware.com/reviews/radeon-hd-5850,2433-10.html
(I typically wouldn't use THG for this sort of thing, but they seem to be the only people with numbers for this particular test)
It seems Nvidia may have cherry-picked a scenario where antialiasing becomes a bottleneck on GT200 and used it to make GF100 look better.

For your convenience:

GTX 285 vs 5870 in HAWX @ 8xAA

2560x1600: (22FPS vs 43FPS) 95% increase
1920x1200: (35FPS vs 60FPS) 71% increase
1680x1050: (44FPS vs 67FPS) 52% increase

Whereas GF100 offers a 133% increase at undisclosed settings. I think it's fairly safe to assume they would have picked the best-looking results for this test.
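
For anyone who wants to check the arithmetic, a quick sketch that reproduces the percentages above (the 8xAA FPS pairs are the Tom's Hardware numbers cited; the 88FPS GF100 figure is my estimate from earlier, not a measurement):

```python
# Sanity-check of the HAWX numbers quoted above. The 8xAA FPS pairs
# are from the Tom's Hardware results cited in the post; the GF100
# figure is the post's own estimate (1.6x a GTX 285 4xAA baseline).

pairs_8xaa = {                    # resolution: (GTX 285 FPS, HD 5870 FPS)
    "2560x1600": (22, 43),
    "1920x1200": (35, 60),
    "1680x1050": (44, 67),
}
for res, (gtx285, hd5870) in pairs_8xaa.items():
    print(f"{res}: {hd5870 / gtx285 - 1:.0%} increase")  # 95%, 71%, 52%

est_gf100, hd5870_4xaa = 88, 77   # estimated GF100 vs 5870 at 4xAA
print(f"GF100 vs 5870 at 4xAA: {est_gf100 / hd5870_4xaa - 1:.0%} ahead")  # ~14%
```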
 