
Nvidia's 40nm GT212 to have 384 SPs, 96 TMUs, and use Hynix 7Gbps GDDR5

According to a recent report from Hardware-Infos, the specifications of Nvidia's upcoming flagship 40nm GT212 architecture have been revealed, as usual by unnamed sources close to Nvidia. To start matters off, GT212 is the successor to the 55nm GT200b, which is currently rolling into production. The upcoming architecture essentially follows the same two footsteps as the 65nm G92 did last March: a die shrink to a smaller fabrication process and a decrease in memory interface width.

The memory interface of GT212 will decrease from 512-bit on GT200 to 256-bit, which is quite similar to what occurred with AMD's RV770. To compensate, however, Nvidia will follow suit by incorporating Hynix 7Gbps GDDR5 into its GT212 as well as into the rest of its 40nm lineup. In addition, the stream processor count of GT212 will increase from 240 in GT200 to 384, and the number of texture mapping units will increase from 80 to 96.

http://www.fudzilla.com/index.php?option=com_content&task=view&id=11237&Itemid=1

Sounds like a beast. :D
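
Rough numbers on that bandwidth claim, for what it's worth. The 7Gbps figure is the rumoured Hynix GDDR5 rating from the article, while the GTX 280 memory figures are just the reference-card numbers as I remember them, so treat this as a back-of-the-envelope sketch rather than anything official:

```python
# Back-of-the-envelope check of the "GDDR5 makes up for the narrower bus" claim.
# 7 Gbps is the rumoured Hynix GDDR5 rating; the GTX 280 figures (512-bit bus,
# ~2.214 Gbps effective GDDR3) are reference-card numbers quoted from memory.

def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bytes times per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

gtx280 = bandwidth_gb_s(512, 2.214)  # ~141.7 GB/s
gt212  = bandwidth_gb_s(256, 7.0)    # ~224 GB/s

print(f"GTX 280 (512-bit GDDR3): {gtx280:.1f} GB/s")
print(f"GT212   (256-bit GDDR5): {gt212:.1f} GB/s")
```

So even on the narrower bus it would come out well ahead of GT200 on paper.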
 
God only knows the price.

It'll probably work out about the same. 60% more SPs, but the die shrink means they are only using at most about 16% more wafer (probably significantly less), which is the expensive bit; rough sums sketched below.

Then consider that they are moving from a 512-bit back to a 256-bit memory interface, which saves more money.

As long as ATi have something competitive I think prices will stabilise at around £300 after the initial couple of weeks of fanboy gouging.
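
The 16% figure roughly follows from simple scaling assumptions. Assuming (my assumption, not from the thread) that shader count scales die area linearly and that a full 55nm-to-40nm shrink scales area by the square of the feature-size ratio, the sums look like this; real dies shrink less cleanly because I/O, analogue and memory-controller blocks barely scale:

```python
# Rough die-size sums for "60% more SPs but at most ~16% more wafer".
# Assumptions: shader count scales area linearly; a full 55nm -> 40nm shrink
# scales area by (40/55)^2. Real dies scale less cleanly than this.

sp_ratio      = 384 / 240           # 1.6x the stream processors
linear_shrink = 40 / 55             # ~0.73x in each dimension
area_shrink   = linear_shrink ** 2  # ~0.53x area for the same logic

print(f"SPs:                       {sp_ratio:.2f}x")
print(f"Pessimistic (linear only): {sp_ratio * linear_shrink:.2f}x die area")  # ~1.16x, the '16% more'
print(f"Optimistic (full shrink):  {sp_ratio * area_shrink:.2f}x die area")    # ~0.85x, 'probably significantly less'
```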
 
Sounds beastly, although the source is Fudzilla so you can expect it to be totally wrong. I do wish people would stop reading that joke of a website.

Still waiting to hear what DAAMIT have planned for their next gen.
 
Going to a 256-bit GDDR5 design should mean it is quite a bit cheaper to produce the PCB.

Cheaper PCB for sure: thinner, less copper, fewer signalling issues, cheaper power circuitry and a far smaller pin-out on the die as well. Not sure how much die space will be saved by the drop in memory bus; it's more the pin-out, on-die routing and extra traces that are the problem rather than actual die space for the controller. Sounds like it will still be a VERY big core, hugely bigger than AMD's, which is still going to cost them big.

It's still not going to be hugely different in price to put two AMD cores on a card vs Nvidia's biggest core. Personally I don't care which; as we move further into a parallel-programming industry, CrossFire/SLI will get better and better and eventually be the only option for top performance.

The problem will lie in the fact that AMD pumped a lot of money, man-hours and support into the memory makers to push GDDR5 out before they had planned it, so AMD could well own some of the patents/licensing on GDDR5 manufacturing and could well be making a profit, as well as possibly charging Nvidia a small fortune to use it. I can see GDDR5 being expensive for Nvidia because of that, though they should save more on cheaper PCBs and cores than they spend on memory.

Not particularly surprising specs tbh, maybe bigger than people thought. The old rumours of only 1000 shaders, up from 800, on ATi's next part don't sound too promising, except that if that were true the cores would be quite a bit smaller than these, so even cheaper; or they'll chuck in a lot more shaders than was originally thought, which is more likely IMHO.
 
The problem will lie in the fact that AMD pumped a lot of money, man-hours and support into the memory makers to push GDDR5 out before they had planned it, so AMD could well own some of the patents/licensing on GDDR5 manufacturing and could well be making a profit, as well as possibly charging Nvidia a small fortune to use it.

That is what I thought as AMD were the ones who invested heavily in GDDR5 in order to get the 4870 ready in time for launch. Do they have any actual licensing on GDDR5 though so they could charge Nvidia?
 
That is what I thought as AMD were the ones who invested heavily in GDDR5 in order to get the 4870 ready in time for launch. Do they have any actual licensing on GDDR5 though so they could charge Nvidia?

You would think after chucking lots of money in that they'd have their hands on some part of it. I'm not sure of course, but it's fairly usual for partners who collaborate to get credit somewhere along the line, whether by being named on patents, partial licensing rights or whatever else. Not least because AMD would obviously know GDDR5 beats GDDR3/4 so badly, and that GDDR3/4 are more or less an evolution of DDR2 rather than a big jump; they'd obviously know it would save Nvidia money via cheaper-to-make chips with a narrower bus, and knew they could make money off that fact, so you'd think any agreement made would take that into consideration. Frankly I'm sure Nvidia make use of ideas AMD have licensed and vice versa, same for Intel/AMD; both use and pay for licensing of things the other made.
 
Is GDDR5 really that good anyway? I thought the 4870 cards with GDDR5 were only about 10-15% faster than the 4850s when both GPUs are clocked at the same speeds.

Perhaps when even faster memory modules become available ATI and Nvidia will save costs by cutting back to 128-bit or 64-bit interfaces...

256-bit is so 2005 :).
 
Is GDDR5 really that good anyway? I thought the 4870 cards with GDDR5 were only about 10-15% faster than the 4850s when both GPUs are clocked at the same speeds.

Perhaps when even faster memory modules become available ATI and Nvidia will save costs by cutting back to 128-bit or 64-bit interfaces...

256-bit is so 2005 :).

The 4870 is not making full use of what GDDR5 has to offer, I assume.
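
For some rough context (reference-card clocks quoted from memory, so treat the exact figures as assumptions), the 4870's GDDR5 gives it around 80% more bandwidth than the 4850's GDDR3; if that only buys 10-15% at the same core clocks, it suggests the core isn't bandwidth-limited most of the time:

```python
# Rough reference-card bandwidth comparison; clocks quoted from memory,
# so the exact figures are assumptions rather than official specs.

def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    # Peak bandwidth = bus width in bytes * per-pin data rate.
    return bus_width_bits / 8 * data_rate_gbps

hd4850 = bandwidth_gb_s(256, 1.986)  # GDDR3 at ~993 MHz (1.986 Gbps effective)
hd4870 = bandwidth_gb_s(256, 3.6)    # GDDR5 at 900 MHz (3.6 Gbps effective)

print(f"HD 4850: {hd4850:.1f} GB/s")
print(f"HD 4870: {hd4870:.1f} GB/s (~{(hd4870 / hd4850 - 1) * 100:.0f}% more)")
```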
 