The first "proper" Kepler news Fri 17th Feb?

Seems pretty obvious to me. GK104 with 512 shaders is ~10-15% faster than the 7970, so that will now be the 680, and the 448-shader version is roughly on par with the 7970, so that will be the 670.

GK106 will be 660 and 650.

GK110 will now be held back and released later as the 7xx series... and everyone will be really grumpy about it.
 
Yeah, I agree with this; can't believe Nvidia won't release anything to rival the 7970 until the end of 2012.
Looks like I will be keeping my trusty 580 until 2013.
We can't make that conclusion yet. Even if Kepler is a mid-range part it could still rival the 7970, as the 580 isn't that far behind anyway. Potentially it could offer better performance (though AMD is in a good position to respond with a refresh card). If they price Kepler correctly - which nVidia doesn't exactly have a good record for - it could be competitive. Certainly it's disappointing that their high-end cards don't look due until year's end, especially when AMD has opted for a particularly high price point for theirs.
 
I don't like the way comparable cards (GTX 570/6950, GTX 460/6850, etc.) are so close in performance.

It seems to me that the two companies are conspiring to hold back performance. I remember the good old days when one product would blatantly be a lot better than the other (ATI 9700 Pro/4600 Ti, 8800 GTX/2900 XT); none of this "10% better in one game at a certain resolution" business, it used to be a clean sweep across the board.

Where have these architectural performance milestones gone?
 
So it's faster than a 7970 in BF3. That doesn't surprise me at all tbh.

Still, the turbo feature is a nice idea :) might help to smooth out framerates.
 
So let me get this straight: the GK104 will be the GTX 680 and compete with the 7970, but then there will eventually be a GK110 which will be the flagship card?

If this is the case, why not bring out the GK110 and blow the 7970 out of the water? Or is it because they don't really have anything to blow the 7970 away with yet?
 
It looks like your HD7970 will be holding its ground reasonably well then.

Yeah. TBH nothing ever indicated it would be in trouble.

I sort of see what Nvidia meant when they said they were underwhelmed by Tahiti, but at the end of the day this card that will make mine look underwhelming exists only in folklore and isn't out for many a month.

And they haven't seen Tahiti yet, just a cut-down version, with the real big-toothed version waiting around the corner.

AMD have played this very well, the cunning gits, lol. They're going to have the lead for quite a while, all told.
 
Turbo boost sounds like something that could be incorporated into any card in drivers, doesn't it?
 
I like the sound of that. Maybe somebody who is clued up will tell me the cons to this, but on a quick read the speed boost looks good and makes good sense.

If it works "as advertised" then there really shouldn't be any downside to it. Maximum framerates will suffer (and perhaps also averages as a consequence), but performance should be more consistent, and more significantly, power consumption will be reduced.

The downside will come if it's poorly implemented - i.e. if it downclocks when framerates are still fairly low, or if it fails to spin back up for whatever reason. There are plenty of potential issues with the implementation, so I guess we just need to wait and see how it performs in the real world.

In theory it should act like a "reverse PowerTune" - reducing power draw when the extra GPU horsepower is not needed, rather than capping it at a fixed maximum. It'll be interesting to see how well it works... With power draw increasingly becoming the limiting factor with every process shrink, I think we'll continue to see more advanced power containment systems from both Nvidia and AMD.
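
To illustrate the idea, here's a rough sketch in C of that kind of power-target loop. The clock steps, power budget and power model below are invented purely for the example - they're not taken from Nvidia's or AMD's actual implementation, which nobody has seen yet.

```c
/* Rough sketch of a power-target boost loop: raise the core clock while
 * measured board power sits under a target, back off when it goes over.
 * All numbers and the power model are made up for illustration. */
#include <stdio.h>

#define POWER_TARGET_W  195.0   /* assumed board power budget */
#define BASE_CLOCK_MHZ  1006
#define MAX_BOOST_MHZ   1110
#define STEP_MHZ        13      /* one "boost bin" per control tick */

/* Crude stand-in for a power sensor: pretend board power scales
 * roughly linearly with core clock and current GPU load. */
static double read_board_power_w(int clock_mhz, double load)
{
    return load * (120.0 + 0.08 * clock_mhz);
}

/* One control tick: boost if there's power headroom, back off if not. */
static void boost_tick(int *clock_mhz, double load)
{
    double power = read_board_power_w(*clock_mhz, load);

    if (power < POWER_TARGET_W && *clock_mhz < MAX_BOOST_MHZ)
        *clock_mhz += STEP_MHZ;        /* headroom left: clock up */
    else if (power > POWER_TARGET_W && *clock_mhz > BASE_CLOCK_MHZ)
        *clock_mhz -= STEP_MHZ;        /* over budget: clock down */
}

int main(void)
{
    int clock = BASE_CLOCK_MHZ;
    double loads[] = { 0.6, 0.6, 0.9, 1.0, 1.0, 0.7 };  /* varying GPU load */

    for (int i = 0; i < 6; i++) {
        boost_tick(&clock, loads[i]);
        printf("tick %d: load %.1f -> clock %d MHz\n", i, loads[i], clock);
    }
    return 0;
}
```

The point is just that the card chases a power budget rather than a fixed clock: light loads leave headroom and get clocked up, heavy loads push past the budget and get pulled back.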
 
Turbo boost sounds like something that could be incorporated into any card in drivers, doesn't it?

Yep - this should be mostly a software level thing. Some feedback from the hardware will be required to quantify the current power draw, and the rate each of the GPU components is working at, but beyond that the control should be handled by the drivers. Hopefully this will give the user options - to disable it entirely (overclocking manually for maximum sustained performance), or to increase / reduce the severity of the downclocking.
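
As a sketch of what those user-facing knobs might look like (a purely hypothetical settings struct, not Nvidia's actual driver interface), the same control step could just take an on/off switch, a power target and a step size:

```c
/* Hypothetical driver-side boost settings: an on/off switch, a power
 * target and a step size. Not Nvidia's real interface - just a sketch
 * of how user options could feed the same control step. */
#include <stdio.h>

typedef struct {
    int    boost_enabled;   /* 0 = fixed clocks, for manual overclocking */
    double power_target_w;  /* budget the loop steers toward */
    int    step_mhz;        /* bigger step = more aggressive reaction */
} boost_settings;

/* Pick the next core clock given the current clock and measured power. */
static int next_clock(const boost_settings *s, int clock_mhz, double power_w)
{
    if (!s->boost_enabled)
        return clock_mhz;                    /* user opted out: leave clocks alone */
    if (power_w < s->power_target_w)
        return clock_mhz + s->step_mhz;      /* under budget: clock up */
    return clock_mhz - s->step_mhz;          /* over budget: clock down */
}

int main(void)
{
    boost_settings mild  = { 1, 195.0, 13 }; /* default-ish behaviour */
    boost_settings eager = { 1, 170.0, 26 }; /* lower target, bigger steps */
    boost_settings off   = { 0, 195.0, 13 }; /* boost disabled entirely */

    /* Same starting point (1006 MHz, 180 W) under each setting. */
    printf("mild : %d MHz\n", next_clock(&mild,  1006, 180.0));
    printf("eager: %d MHz\n", next_clock(&eager, 1006, 180.0));
    printf("off  : %d MHz\n", next_clock(&off,   1006, 180.0));
    return 0;
}
```

Disabling boost keeps the clock fixed for manual overclocking, while a lower power target or bigger step size makes the downclocking more aggressive.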
 
If it works "as advertised" then there really shouldn't be any downside to it. Maximum framerates will suffer (and perhaps also averages as a consequence), but performance should be more consistent, and more significantly, power consumption will be reduced.

The downside will come if it's poorly implemented - i.e. if it downclocks when framerates are still fairly low, or if it fails to spin back up for whatever reason. There are plenty of potential issues with the implementation, so I guess we just need to wait and see how it performs in the real world.

In theory it should act like a "reverse powertune" - reducing power draw when the extra GPU horsepower is not needed, rather than capping it at a fixed maximum. It'll be interesting to see how well it works... With power draw increasingly becoming the limiting factor with every process shrink, I think we'll continue to see more advanced power containment systems from both Nvidia and AMD.

Great explanation thanks Duff-Man.
 