
The first "proper" Kepler news Fri 17th Feb?

It looks like the GK104 has a similar die size to Tahiti, though maybe smaller; however, the Tahiti die burns up die space for the 384-bit bus. Not pushing the memory controller so hard means Tahiti can use lower-spec RAM modules, reducing costs. A wider bus also reduces the chances of running into irreconcilable bandwidth issues limiting GPU performance.
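Back-of-the-envelope: memory bandwidth scales with bus width times effective data rate, so a wider bus can hit comparable bandwidth with slower (cheaper) modules. A rough sketch; the data rates below are illustrative assumptions, not confirmed specs for either card:

```python
# Rough memory-bandwidth arithmetic:
# bandwidth (GB/s) = bus width (bits) / 8 * effective data rate (Gbps)
def bandwidth_gb_s(bus_bits, data_rate_gbps):
    return bus_bits / 8 * data_rate_gbps

# Hypothetical figures: a 384-bit Tahiti-style bus on slower GDDR5
# vs a 256-bit bus pushed harder.
tahiti_style = bandwidth_gb_s(384, 5.5)   # 264.0 GB/s
gk104_style  = bandwidth_gb_s(256, 6.0)   # 192.0 GB/s

print(tahiti_style, gk104_style)
```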
You may be correct that the die sizes differ mostly due to the 256/384-bit memory interfaces. However, the most expensive graphics card component to manufacture is the GPU, followed by the PCB, then the VRAM. GPU costs rise significantly with transistor count, and PCB costs increase significantly with the number of layers/lanes. VRAM costs I would guess should be quite low given the generally low price of memory modules (you can buy low-end cards with 1GB to 2GB of GDDR5 very cheaply).

Tahiti will certainly cost AMD significantly more to manufacture than GK104. Bigger (more expensive) GPU + more complicated (more expensive) PCB + 50% more VRAM (also more expensive) = MORE EXPENSIVE. If NVidia can manufacture a cheap card and make it as fast as a relatively expensive-to-produce AMD card, AMD should be very worried.
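The cost argument above can be sketched as a toy bill-of-materials comparison. Every figure below is invented purely to illustrate the structure of the argument (bigger die, more PCB layers, 50% more VRAM), not a real costing:

```python
# Toy bill-of-materials model; all figures are made-up placeholders.
def card_cost(gpu, pcb, vram_gb, cost_per_gb):
    return gpu + pcb + vram_gb * cost_per_gb

# Hypothetical: larger die + more PCB layers + 3GB vs 2GB of GDDR5.
tahiti = card_cost(gpu=120, pcb=40, vram_gb=3, cost_per_gb=8)  # 184
gk104  = card_cost(gpu=100, pcb=30, vram_gb=2, cost_per_gb=8)  # 146

print(tahiti - gk104)  # BOM gap under these assumptions
```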

Think how cheap AMD's 5800/6900 series cards were to produce compared to the 384-bit Fermis. Now imagine if AMD had managed to match Fermi's performance with their cheaper cards (which they did not). Despite being a little late, and despite possibly poor yields, NVidia may have AMD bent over a barrel if GK104 really is a close match for the 7970. Yields will increase and costs will fall for both parties, but ultimately GK104 looks set to offer a much better performance vs production cost compromise.

In the above case, the only thing that will keep AMD in the game is if NVidia chooses not to start a price war. Ultimately, NVidia could sell its cards cheaper and still make a profit, whilst forcing AMD to sell at a loss or zero profit. It depends whether NVidia prefers short-term profit (high-priced greed with GK104 @ ~£400) or to cause long-term damage to its main competitor by selling at bargain-basement prices. Tahiti vs Kepler could be another Bulldozer vs Sandy Bridge for AMD, whereby AMD's top parts can only really compete against the opposition's middle order.

GK104 @ <£300 would blow AMD out of the water. GK104 @ £400 would give AMD a reprieve and feed their R&D budget. This is all assuming that GK104 is anywhere near as good as current "leaks" suggest.

AMD may be winning 1-0 at the moment, but the game is only 10 minutes in and NVidia may be about to bring on Messi, Ronaldo and Van Persie:).
 
[Attached chart: 148a.png]

The penalty for designing charts like this should be electrodes under the toenails. :mad:
 
I've done a few charts in my time. I did this one when the 7970 launched.

Please note. This picture is a little naughty and definitely not safe for children.

Edit. Actually no, don't want to get in trouble :D
 

Hahaha, I saw that link :D Total genius; this would sell it to the enthusiast market easily :p
 

Haha it got me banned from another forum :D

There was a thread set up for 7970 owners to wave their willies, and if you dared post absolutely anything but bumlicks for the 7970, they banned you.

So I made them a little chart. :D
 
Fair point... That would certainly allow for more flexibility in tuning the core and shader clocks to their individual limits.

Now that I think about it - wasn't this also the case for the 8800GTX (and related architectures), and also GT200?


edit: Yes it was - see the table towards the bottom of this page: http://www.anandtech.com/show/2549

I wonder if the switch to "shader clock = 2 x core clock" was a compromise to help improve inter-chip communication in Fermi? If the core and shader clocks communicate every clock cycle, internal latencies could be reduced...

Yeah, it used to be the case :) ... I think what NVIDIA may be going for is a higher-clocked geometry domain (higher than 0.5x the shader clock) so they can get the same performance out of smaller circuitry, with the saved die space being used for additional shaders. Either that, or they felt a design with cores in the region of 1536 benefited from clocking the geometry domain higher rather than increasing circuit complexity.

And by decoupling they do all this without the design disadvantages of shader clocks in the region of 2000+ MHz.
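The trade-off being discussed here (many shaders at roughly core clock vs fewer shaders on a 2x hot clock) can be sketched as a simple peak-throughput comparison. The shader counts and clocks below are illustrative assumptions in the spirit of the rumours, not confirmed specs:

```python
# Peak FP32 throughput estimate: GFLOPS = shaders * clock_MHz * 2 (FMA) / 1000
def gflops(shaders, clock_mhz):
    return shaders * clock_mhz * 2 / 1000

# Hypothetical Fermi-style hot-clock design vs a Kepler-style wide design.
fermi_style  = gflops(512, 1544)    # fewer shaders on a ~2x hot clock
kepler_style = gflops(1536, 1006)   # 3x the shaders at roughly core clock

print(fermi_style, kepler_style)
```

Under these made-up numbers the wide, lower-clocked design comes out well ahead on paper, which is the argument for spending the saved hot-clock circuitry on extra shaders.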
 