The first "proper" Kepler news Fri 17th Feb?

You may be correct about the die sizes differing mostly due to the 256-bit/384-bit memory interfaces. However, the most expensive graphics card component to manufacture is the GPU, followed by the PCB, then the VRAM. GPU costs rise significantly with transistor count, and PCB costs increase significantly with the number of layers/lanes. VRAM costs, I would guess, should be quite low given the generally low price of memory modules (you can buy low-end cards with 1GB to 2GB of GDDR5 very cheaply).

Look at the price difference between the HD6950 1GB and 2GB for an idea of the cost of adding an extra 1GB of RAM. They use the same spec modules too; 5GHz effective.

Tahiti will certainly cost AMD significantly more to manufacture than GK104. Bigger (more expensive) GPU + more complicated (more expensive) PCB + 50% more VRAM (also more expensive) = MORE EXPENSIVE. If NVidia can manufacture a cheap card and make it as fast as a relatively expensive-to-produce AMD card, AMD should be very worried.

AMD, I believe, have a different pricing model to nVIDIA when it comes to GPUs, so at the moment, even with a larger GPU, they may pay less per functional die than nVIDIA does. This is pure speculation on my part though regarding the cost per die.

Think how cheap AMD's 5800/6900 series cards were to produce compared to the 384-bit Fermis. Now imagine if AMD had managed to match Fermi's performance with their cheaper cards (which they did not). Despite being a little late, and despite possibly poor yields, NVidia may have AMD bent over a barrel if GK104 really is a close match for the 7970. Yields will increase and costs will fall for both parties, but ultimately GK104 looks set to offer a much better performance vs production cost compromise.

I think that die size and cooling requirements also have to be factored into that comparison...

In the above case, the only thing that will keep AMD in the game is if NVidia does not start a price war. Ultimately, NVidia could sell its cards cheaper and still make a profit, while forcing AMD to sell at a loss or zero profit. It depends whether NVidia prefers short-term profit (high-priced greed with GK104 @ ~£400) or to cause long-term damage to its main competitor by selling at bargain-basement prices. Tahiti vs Kepler could be another Bulldozer vs Sandy Bridge for AMD, whereby AMD's top parts can only really compete against the opposition's middle order.

GK104 @ <£300 would blow AMD out of the water. GK104 @ £400 will provide AMD a reprieve, and will feed their R&D budget. This is assuming that GK104 is anywhere near as good as current "leaks" suggest.

AMD may be winning 1-0 at the moment, but the game is only 10 minutes in and NVidia may be about to bring on Messi, Ronaldo and Van Persie:).

And that may turn out to be the case, but as I have said, AMD would likely have the opportunity to respond by releasing a faster card and maintaining a positive price differential if that's how nVIDIA wanted to play. Bear in mind that AMD has priced these chips according to how they perform against the current generation, not according to how much they cost to produce.
Therefore, AMD have likely been making a pretty penny on each card they have sold so far.

At this point, though, we're getting into rampant speculation, because I'm guessing that neither of us has the BoM for the HD5800/6900/7900 and GTX480/580/680 series of cards.

If nVIDIA don't reach reasonable parity or a lead, even with heavily clocking the card, then they are bringing on Rob Green, James Milner and Peter Crouch and hoping they are good enough that their fans will still buy into them ;)
 
The GT640M has 384 Kepler cores running at 625MHz, with the shaders at 1250MHz, a 128-bit memory controller and 900MHz DDR3. The GT555M used in the review has 144 Fermi cores running at 590MHz with a 192-bit memory controller, also using DDR3. Despite lower memory bandwidth, the GT640M was running BF3 around 15% faster than the GT555M! Just Cause 2 was around the same speed on the GT640M too.
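Those bus widths translate into a sizeable bandwidth gap in the GT555M's favour. A quick back-of-envelope sketch (the GT555M's DDR3 clock isn't quoted in the post, so 900MHz is assumed for both cards):

```python
# Rough peak-bandwidth comparison of the two mobile GPUs discussed above.
# DDR3 transfers data twice per clock, hence pumps=2; the GT555M's memory
# clock is an assumption (the post only gives 900MHz for the GT640M).

def bandwidth_gbps(bus_bits, mem_mhz, pumps=2):
    """Peak memory bandwidth in GB/s for a DDR-style interface."""
    return bus_bits / 8 * mem_mhz * pumps / 1000

gt640m = bandwidth_gbps(128, 900)  # 128-bit bus -> 28.8 GB/s
gt555m = bandwidth_gbps(192, 900)  # 192-bit bus -> 43.2 GB/s

print(f"GT640M: {gt640m:.1f} GB/s, GT555M: {gt555m:.1f} GB/s")
```

So the GT640M is working with roughly two-thirds of the GT555M's peak bandwidth, which makes the BF3 result more notable.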

Considering the GT555M was running in a full-sized Alienware M14X laptop and the Acer was using a low-voltage CPU, it looks like a decent result.

It seems Kepler still has hotclocks too.

Edit!!

It seems that, due to the Turbo Boost technology, the GPU was running quite hot (it seems Acer set the boost margins relatively high).
 
A 144-core Fermi is 15% slower than a 384-core Kepler and that's a "good" result for Kepler now? If you assume one Fermi core is essentially worth two Kepler cores, you're comparing effectively a 288-core card vs a 384-core card.

It's also a little rich to say it's 15% faster in one game and around the same speed in another, which comes across as "it's 15% faster in both games".

In 1 of 3 games the 640M is faster; in the other 2 it's slower. The review also says the laptop hits 128 degrees near the GPU and 105 degrees on the laptop surface and is unacceptably warm. It's using dynamic clocking, so it is likely actually overclocked in BF3, where it was running stupidly hot, and it's still not much faster. Even with effectively ~33% more shaders, it's slower in 2 out of 3 games listed.
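The "288 vs 384" figure above is just the hotclock adjustment spelled out. A minimal sketch of that arithmetic (note it works out to roughly 33% extra shaders for the Kepler part):

```python
# Fermi shaders run at double the base clock (the "hotclock"), so the
# post treats one Fermi core as worth two Kepler cores at the same
# base clock, then compares effective core counts.

fermi_cores = 144    # GT555M
kepler_cores = 384   # GT640M

fermi_equivalent = fermi_cores * 2            # 288 "Kepler-style" cores
extra = kepler_cores / fermi_equivalent - 1   # Kepler's shader advantage

print(f"Kepler advantage: {extra:.0%}")       # ~33%
```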
 
?? I only mentioned the core counts since it is the first comparison of a Kepler and a Fermi GPU by any well-known website. The GT640M is a very large improvement in performance over the previous GT540M.

The M14X is a standard 14" performance laptop whereas the Acer tested is at the higher end of the UltraBook scale in size.

Dawn of War 2 is quite CPU-limited, and the Acer uses a 1.6GHz dual-core Core i5 while PC Perspective used a 2.3GHz quad-core Core i7 in their M14X review.

In two of the DX11 benchmarks, i.e. 3DMark11 and BF3, the GT640M scored higher than a GT555M paired with a much faster CPU, and in Just Cause 2, which is also DX11, the scores are very close together (the GT640M is only around 6% slower), and this is with much lower memory bandwidth too.

I also already mentioned that the Turbo Boost (or whatever Nvidia calls it) margins have been set high; it's not some sort of conspiracy.
 
This is boring me to hell now.

If I don't hear anything solid in the next week then two 7970s are bought.

I've had a month of waiting since I sold my cards and that is enough.

This.

The worst thing is it keeps appearing in this thread. Every time proper 'kerplunk' news pops up it gets me excited. I'm fed up of it now; it's become a battleground filled with ATI vs Nvidia arguments which are pointless in this thread at the moment. If you have to **** Nvidia off for past efforts, please do it elsewhere, as some people want to find out if it's any good when it's released, not just compare 4** or 5** to 6*** or 7*** before it's even released!
 
Regardless of the performance of Kepler when it's released, I'm sure AMD have made plenty of profit on the 7900 cards they have sold over the last 2+ months. It was their time to make hay while the sun shone, and thanks to Nvidia the sun has been shining a long time.
 
But GK104 will have 1536 cores vs 384 for the GTX 560 Ti or 512 for the GTX 580. Assuming 1:2 core performance, the GK104 GPU would still be twice as powerful as a 560 Ti, or 50% more powerful than a GTX 580. Memory bandwidth may limit performance in some areas, but a ~50% faster GPU with better clock headroom should be a decent step forward.
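The 1:2 scaling assumption above can be spelled out in a couple of lines (the core counts are the ones quoted in the post):

```python
# Under the 1:2 assumption, two Kepler cores match one hotclocked
# Fermi core, so GK104's 1536 cores behave like 768 Fermi cores.

gk104_cores = 1536
fermi_equiv = gk104_cores / 2   # 768 Fermi-equivalent cores

gtx560ti = 384                  # Fermi core counts
gtx580 = 512

print(f"vs GTX 560 Ti: {fermi_equiv / gtx560ti:.1f}x")  # 2.0x
print(f"vs GTX 580:    {fermi_equiv / gtx580:.2f}x")    # 1.50x
```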
 

But what? Where did I even mention the 560 Ti, 580, 680, anything?

As for the rest: yes, as expected, as everyone should always have expected, GK104 is the upper-midrange part that effectively has twice the shaders of a 560 Ti. Perfect scaling would mean 50% faster (over a GTX 580), which rarely happens. It SHOULD be 30-40% faster, and with good efficiency and good scaling it should be closer to 40% than 30%, maybe even up to 45%. Since when is that news?

From a lot of the rumours, this card, at the TDP it was probably aimed at (same as the 560 Ti), was going to run at circa 750-800MHz and lose to Tahiti. They have instead seemingly made it a higher-TDP card and raised the clock speeds considerably, so it's 10-20% faster than Tahiti at stock.

Tahiti can still overclock a long way with a higher TDP, and could very easily be released as a 7980 with higher clocks and a higher TDP.

Last gen, the GTX 580 was 15% faster than a 6970 and 15% faster than a 560 Ti. This gen, a "full clocks" Tahiti would be 15% faster than GK104, or 15% slower in its current underclocked state.

I wouldn't be remotely surprised if a GK104 with around 1GHz clocks, 10-15% faster than the current 7970, has 20-30% higher average gaming power use... and an overclocked 7970 with similar average gaming power beats GK104.
 
As someone who isn't a fanboy of either AMD or Nvidia, I find it amusing watching the same people squabbling constantly and ignoring the other side's arguments.

As someone impartial, I want Nvidia to release something better than the 7970 purely to change pricing. A competitive market can only be a good thing for the consumer; I will buy the product which offers the best value for my money.
 