Nvidia GT 220 Reviews

Aren't EVGA heading towards trouble in the near future? Last I heard they only sell Nvidia cards.

As for the card, I'm guessing it's roughly the same performance as an X1900XT, which in all honesty is still adequate for the majority of games at medium settings.
 
Fermi is estimated to be 50% larger in area on an equal fabrication process.
Roughly: area = GPU processing power * floating-point bit width.

The old games-only GPU required just 16-bit floating point and didn't need additional logic for IEEE compliance, overflow handling, etc. GPGPU, on the other hand, starts at 32-bit IEEE single precision, and if nVidia want to move into supercomputing they will need at least 64-bit (double) or 128-bit (extended precision).

Although it's possible to use error functions to make better use of 32-bit, in the end you still lose information at high precision, or you halve the processing speed again. Remember that all this talk about FLOPS is usually quoted at "single" precision for graphics cards rather than, say, 64/128-bit.
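
(Rough illustration of the precision gap, for anyone wondering: 32-bit single precision carries a 24-bit significand, about 7 decimal digits, so in single precision 16,777,216 + 1 still comes out as 16,777,216, whereas 64-bit double carries a 53-bit significand, about 15-16 digits. Emulating the missing precision out of pairs of 32-bit values is exactly the sort of trick that halves the throughput.)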

So without 64/128-bit they lose their top-end supercomputer market, whilst the cost of fabricating Fermi makes it a high-cost commodity sale (even in volume), so they suffer a much reduced market or slow acceptance (unless they release a mid-range Fermi at the same time, but the fab probably won't be viable initially for low-cost parts).

After saying all that, what exactly is your point? Single/double precision has nothing to do with the size of the core logic. Essentially, every shader on an Nvidia core is linked to memory, with various bits connected to every single shader; by putting 5 shaders in a cluster you can cut down the number of connections required to each shader, which massively cuts down core size.

Fact is the 5870 does double precision as well, nothing fancy about that. Both Nvidia's and ATi's last gen did it, just not as efficiently; last gen Nvidia were still 50% larger in die size (well, maybe a little more IIRC). The 5870 improved its double precision efficiency (in comparison to its overall power vs last gen) just as Nvidia have done this time around.

The reasons Nvidia's core is large are basic design decisions, NOTHING to do with double precision in any way shape or form.


Likewise, the funny news is the first GT220 listed in the States is $120, for the DDR2 version, i.e. the cheapest version is priced higher than a 4850, and priced higher than a 5750, with, what, 1/4 of the power? Gah, looks like plenty around $80-90 in the States, though a lot with DDR2/3 rather than GDDR3; still a stupid price for a crap card. Needs to be something like $15-25 cheaper to offer a reason against going ATi.
 
Not really; as has been consistently pointed out, the 4670 is cheaper and faster! Why would an OEM bother?

Remember the 4670 is cheaper, faster, and when you use them you don't have to threaten to sue for a $200 million payoff every couple of years to cover all the faulty parts you were sold. Can't think of any reason an OEM might prefer a 4670 :p
 
The reasons Nvidia's core is large are basic design decisions, NOTHING to do with double precision in any way shape or form.

The increase in the number of double-precision-capable units directly increases complexity, and indirectly increases pressure to grow the registers (in both size and number) and the memory caches.


The problem with paper FLOPS counts is that they're based on perfect utilisation of GPU resources rather than real-life computation. So I can see this becoming a war over shader/GPGPU compiler technology.
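
To put rough numbers on "paper FLOPS": the headline figure is just shaders x ops per clock x clock speed, so for a 5870 (numbers from memory) that's 1600 x 2 x 850MHz, roughly 2.7 TFLOPS single precision. Hitting that assumes every shader issues a useful multiply-add on every single cycle, which real code never manages, hence the compiler/scheduler war.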
 
The increase in the number of double-precision-capable units directly increases complexity, and indirectly increases pressure to grow the registers (in both size and number) and the memory caches.


The problem with paper FLOPS counts is that they're based on perfect utilisation of GPU resources rather than real-life computation. So I can see this becoming a war over shader/GPGPU compiler technology.

The point being, AMD also increased their double precision power significantly, yet they don't have a 3.15 billion transistor part. It's NOT the reason, and I really don't know how to make that more clear. Nvidia haven't massively or hugely increased the ability to do double precision; it's not some new thing only Fermi can do. AMD roughly doubled the size of their core in terms of transistor count from last generation, with double the shaders, double everything else, DX11 improvements and better double precision power.

Nvidia have only roughly doubled their transistor count.

Your idea, seemingly, is that Nvidia have massively increased the double precision power of the card, that it's somehow exponentially faster than ever before, and that's why it's double the size. But that's all it is, double the size, going from a 240sp card to a 512sp card... hmmm, wait, it looks to me like exactly the size you'd expect from simply doubling the counts on the last generation card.
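
Quick sanity check on that: 512sp over 240sp is about 2.13x the shaders, and roughly 3 billion over 1.4 billion is about 2.14x the transistors. The two ratios line up almost exactly, which is what you'd expect if the extra size is simply "more of the same" rather than some big double precision tax.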

Where exactly is this massive increase in transistors just to do double precision?

It was 50% bigger before, and it's still 50% bigger; it's got entirely nothing to do with the double precision count. If that added sooo much to the size, it would have been far bigger than it is.


Bigger cores are harder to make: worse yields, more money wasted and slower production. It's just bad business every which way you look at it. Nvidia have been selling GTX 260s at cost for almost a year, with their exclusive partners leaving left, right and centre. Most crucially, double precision is entirely and completely useless for gaming, which is still the massive, massive, massive majority of where their sales go.

In terms of gaming, they may as well advertise that the PCB will be so large you can use it as a paddle for your boat. Sure, the 5870 can't do that, but it's just as irrelevant to gaming as double precision power is.
 
You can spin it either way really (especially as Fermi does its DP calculations on the same units as its SP operations, similarly to the Radeon chips, unlike GT200 which had separate units for it), but looking at the transistor count to units ratio shows more or less perfectly linear scaling since G92.

G92: 754M transistors, 128 MAD/clock
GT200: 1400M transistors, 240 MAD/clock
GT300: 3000M transistors, 512 FMA/clock
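
Dividing those out: 754/128 is about 5.9M, 1400/240 about 5.8M and 3000/512 about 5.9M transistors per unit, i.e. an essentially flat transistors-per-shader cost across all three generations.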
 
OEMs have bothered, as they've had these cards since July; they are only releasing them at retail now.

Going along with its plans to transition to 40nm, Nvidia has another desktop graphics card for us today, dubbed the GT 220. Our readers have been well aware of the name GT 200 for some time now, and while Nvidia launched the GT 220 OEM version back in July, it was time for the retail version as well. The aforementioned OEM card had a quiet introduction, and since you couldn't purchase it without buying an entire rig, we guess some of you didn't even know it existed.
 
You can spin it either way really (especially as Fermi does its DP calculations on the same units as its SP operations, similarly to the Radeon chips, unlike GT200 which had separate units for it), but looking at the transistor count to units ratio shows more or less perfectly linear scaling since G92.

G92: 754M transistors, 128 MAD/clock
GT200: 1400M transistors, 240 MAD/clock
GT300: 3000M transistors, 512 FMA/clock

It's just the exact size expected for the same basic architecture. Likewise, the 4870 was, I can't actually remember exactly, but somewhere around the G92 in terms of transistors, just on a much smaller process, and that has since roughly doubled to make the 5870.

The Nvidia core is big because it's big, nowt to do with SP/DP capability.

But it's not competitive at the high end in cost/performance in a business sense; sure, you can sell any card at any price. But AMD are happy and make a profit on a £140 4890, while Nvidia make a loss on a 275/285 priced pretty much the same.

The difference is the smaller nature of the design means AMD's cores scale well, keep power in check and get tiny for the mid/low end. Hence a 55nm 4670 being cheaper to make and still turning a profit compared to a 40nm part that's slower and more expensive; Nvidia's basic, fundamental design is too big to be competitive.

Fermi may or may not be fast, but it sure as heck won't be competitive on price, and any low/mid-end derivatives of it can't match ATi's versions in terms of profit in the segment/price range they need to compete in.

It actually shocks me that a company can be so dense as to not foresee TSMC being utterly crap again; making a huge core was just waiting for a 2900XT "disaster" to happen again.
 