
nVidia GT300's Fermi architecture unveiled

My point was more about the performance than the price, obviously. NV cards are always stupid money, but you get what you pay for, right?

The point is that everybody knew (expected) Nvidia's GPU to be faster and more expensive than a 5870. Nobody knows by how much on either count yet.

I can't see why anybody who spent £300 on a 5870 would be any unhappier than they were last week.

The only thing that 5870 owners could possibly be unhappy about is that they could have saved £100 by getting a 5850 and had almost the same performance as a 5870.
 
They may be unhappy that 2x 5870s for £600 (currently) won't be faster than the GT300, but like you said, we're only guessing off a transistor count and the stupid amount of 6GB of RAM.
 
Fact is, we still have no idea how it'll perform in games. Don't get me wrong - I'm a "green" man to the bone, but I doubt it's going to utterly destroy the 5870 in gaming... However, there's a bit more to this than just gaming now, as the PC as a gaming platform seems to have pretty much stagnated.
 
Thought this was quite interesting.

Link
A Few Thoughts on Nvidia’s Fermi


Today was the start of Nvidia’s GPU Technology Conference. It’s really still just the NVISION conference, because it’s not much of an “industry-wide” conference if ATI and Intel aren’t there. The biggest announcement of the show is undoubtedly the unveiling of Nvidia’s next-generation GPU, code-named Fermi. I’m not sure why they named the chip after Enrico Fermi, who is best known for his work with radioactive substances and controlled nuclear reactions and stuff. But as code-names go, physicists are cool, so I’ll let it slide.

I won’t bother to summarize all the individual features that were revealed today. Tech Report has an excellent article on it, as does AnandTech. I’m just going to editorialize a bit with some of my thoughts based on what we know (and don’t know) so far.


First, boards based on Fermi are going to cost considerably more than the Radeon HD 5870 and 5850, which are ATI’s competing DX11 cards that just launched. The RV870 GPU powering ATI’s cards is 334 mm² and has a 256-bit memory interface. Nvidia didn’t talk about GPU size, but it did say that Fermi is 3.0 billion transistors – 40% bigger than RV870’s 2.15 billion. So, figure a chip somewhere around the 460-480 mm² mark. That’s huge.

The chip being 40% bigger doesn’t mean 40% more expensive to produce, though. Imagine chips A and B. Both are 40nm chips made at TSMC. Chip A can fit 100 chips on a wafer, and Chip B can fit 60 chips on a wafer, because it’s 40% bigger. But as chip size grows, it’s harder for the whole chip to come out without flaws, so the yields are worse. Chip A has a yield of 75% – three-fourths of all the chips on the wafer function properly within the intended specs. Chip B has a yield of 60%, because it’s so much larger. That means you’ll get 75 good chips on a wafer for Chip A, but 36 good chips for Chip B. That’s less than half.

In other words, depending on how the yield situation works out, Fermi could be twice as expensive to produce as RV870. Hell, it could be worse. We really have no way of knowing, except to say that a 40% larger chip is usually well more than 40% more expensive to make.
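To put rough numbers on that, here's a quick Python sketch of the same arithmetic. The dies-per-wafer and yield figures are the illustrative ones from the example above; the wafer cost is just an assumed round number, and the final ratio doesn't depend on it anyway.

[code]
# Per-good-die cost sketch using the illustrative numbers above.
# WAFER_COST is an assumed round figure, not real TSMC pricing; the
# ratio at the end is independent of it.

WAFER_COST = 5000  # assumed cost per 40nm wafer, in dollars

def cost_per_good_die(dies_per_wafer, yield_rate, wafer_cost=WAFER_COST):
    """Cost of each working chip once the defective dies are discarded."""
    good_dies = dies_per_wafer * yield_rate
    return wafer_cost / good_dies

chip_a = cost_per_good_die(dies_per_wafer=100, yield_rate=0.75)  # the "RV870-like" chip
chip_b = cost_per_good_die(dies_per_wafer=60, yield_rate=0.60)   # the "Fermi-like" chip

print(f"Chip A: ~${chip_a:.0f} per good die")      # ~$67
print(f"Chip B: ~${chip_b:.0f} per good die")      # ~$139
print(f"Chip B / Chip A: {chip_b / chip_a:.2f}x")  # ~2.08x, roughly twice as expensive
[/code]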

It’s not just the chip, either. A 384-bit memory interface means Fermi-based cards will likely have either 768 MB (not likely) or 1.5 GB of RAM, so that’s higher RAM costs. It also means more PCB layers on the board itself. So aside from higher chip costs, the board costs of Fermi-based products will be higher than Radeon 5800 products.
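As a quick sanity check on those capacities, assuming the usual arrangement of one 32-bit GDDR channel per memory chip and the common 512 Mbit / 1 Gbit chip densities of the time:

[code]
# Why a 384-bit bus implies 768 MB or 1.5 GB of RAM. Assumes one 32-bit
# GDDR channel per chip and 64 MB (512 Mbit) or 128 MB (1 Gbit) parts.

BUS_WIDTH_BITS = 384
BITS_PER_CHIP = 32            # one 32-bit channel per GDDR device
CHIP_SIZES_MB = [64, 128]     # 512 Mbit and 1 Gbit densities

chips_needed = BUS_WIDTH_BITS // BITS_PER_CHIP   # 12 memory chips on the PCB
for size_mb in CHIP_SIZES_MB:
    print(f"{chips_needed} x {size_mb} MB = {chips_needed * size_mb} MB")
# -> 768 MB or 1536 MB, versus 8 chips (1 GB) on a 256-bit Radeon HD 5870
[/code]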

So if Fermi-based products are going to be considerably more expensive than Radeon 5800 products, what about performance? Well, all Nvidia has talked about so far are the chip design elements that impact GPU compute, rather than traditional graphics. There’s quite a lot there. Nvidia has clearly spent a fair chunk of the transistor budget doing things like dramatically improving double-precision floating-point performance, increasing cache sizes, adding ECC memory support, and so on. These things typically do nothing at all for typical graphics performance (games and stuff). So the chip has 40% more transistors, but that won’t necessarily translate into 40% higher frame rates.

Nvidia seems to be gearing the world up for this. The mantra they keep chanting is that “graphics performance isn’t enough anymore.” Compute really matters a whole heckuva lot, they tell us. This sounds like PR code for “the card is going to be 50% more expensive than the competition and not 50% faster in games, so please place as much importance on GPU compute apps as possible so we look like a better value.”

You know how drill sergeants tell recruits to begin and end everything they say with “sir?” Sir, yes sir! Sir, I didn’t mean to shoot the sergeant’s toe off, sir! That’s what it’s like listening to Nvidia these days, only with “CUDA” instead of “Sir.” For over a year, Nvidia has told everyone who will listen that GPU compute is super duper important, and has very aggressively flogged PhysX and CUDA. And you know what? Consumers just don’t care all that much. Maybe one day, when there are robust standards and quite a few GPU-accelerated applications that normal people use all the time, the average consumer will want a graphics card to make their non-gaming apps go faster just as much as they want it to make their games go faster and look better. But we’re not there yet, and we’re not going to be there in the next six months, as much as Nvidia would like us to be.

So Nvidia’s facing a tough sell in Q1 2010 (or maybe late 2009) when the first Fermi-based cards go on sale. They’ll almost certainly cost $399 or more, judging by what we know so far. ATI has a chip and board design that will let them push Radeon HD 5850 cards below $200 and 5870 cards below $250 within the next six months, if they want to. Is a modest increase in frame rate and much higher performance in GPU compute apps going to be worth such a broad difference in price? Will it be a moot point, because the cards will be out of the cost and power budget for most consumers (and OEMs)?
 
Are the rumors floating around these forums true about nVidia maybe moving away from the gaming performance market? Because if it's indeed true then it'll be bad for us.

No more competition for ATI means they can jack up the prices to whatever they want, and we'll get less performance from their future graphics cards as well.
 
Thought this was quite interesting.

Link

That's exactly what I have been posting on these forums over the last few days (although worded much better).

After reading all the stuff which came out of Nvidia's little press launch, I came to the same conclusion: a lot of the improvements and transistors in the new GPU have gone into improving computational power rather than gaming performance. Nvidia have broken the mould and are taking a different direction from normal gaming performance. I think this will become important in the future, but the tendency is that early adopters/pioneers tend not to have their products received very well.

The fanboys reading the transistor count and specs and expecting a card which is 50%+ faster in games are going to be very disappointed IMO.
 
I feel the same. Unless you're using CUDA a lot, I'm not sure it will be worth it, because it will cost a lot. We will see; Nvidia might surprise us on price, but I don't think so, because the HPC market has a bigger markup than games and it seems that's the street they're going down.
 
This isn't even a retail press launch, so I think Nvidia is focussing on bringing together other segments of their target market, because they already have a flourishing gaming market at the moment.

That's a Tesla part btw, so it will probably fetch them £500+; it's not worth debating pricing with the stats we've received so far.
 
Just to make a correction: in the Jason Cross article above, he mentions ECC support as being something which does not benefit performance.

Actually, ECC will make RAM overclocking more stable, increasing a card's speed. If you push it too far, the ECC will kick in to keep the memory stable, resulting in a performance drop. As such, there will now be a 'peak' performance point for the RAM, rather than a hard cutoff where it either works or crashes.
 
Personally, I see that as a downside, not a plus point. Up until now I have always overclocked my cards by having FurMark running to heat them up and testing with ATITool until it artifacts, then backing off a bit.

Followed up by a test run of 3DMark06/Vantage with FurMark still running, and then I was 100% sure I had a stable maximum overclock for my graphics card.

Now I will have to up the memory speed, run 3DMark06/Vantage, and repeat until the test results start decreasing. It's going to take a lot longer to find the max overclock now. :(
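In rough terms, the new procedure boils down to something like the loop below. set_memory_clock() and run_benchmark() are just stand-ins for whatever overclocking tool and benchmark you actually use, not real APIs.

[code]
# Sketch of the "raise the clock until scores drop" search described above.
# set_memory_clock() and run_benchmark() are placeholders, not real APIs.

def set_memory_clock(mhz):
    raise NotImplementedError("hook up your overclocking tool here")

def run_benchmark():
    raise NotImplementedError("run 3DMark06/Vantage (or similar) and return the score")

def find_peak_memory_clock(start_mhz, step_mhz, max_mhz, runs_per_step=3):
    """Step the memory clock up until the average score falls, which (with
    ECC silently correcting errors) marks the effective peak."""
    best_clock, best_score = None, float("-inf")
    clock = start_mhz
    while clock <= max_mhz:
        set_memory_clock(clock)
        score = sum(run_benchmark() for _ in range(runs_per_step)) / runs_per_step
        if score < best_score:
            break                      # scores started dropping, so back off
        best_clock, best_score = clock, score
        clock += step_mhz
    return best_clock
[/code]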
 
lol fun article and seems to have a point. do hope though that nvidia have a decent card to properly compete on tech and price. lack of competition is bad
 