
Any GTX 600 Series info?

The only place I can find any info on them so far is Wikipedia, and although the information there is usually kept pretty well, in this case it seems unrealistically high and doesn't quote any sources. Does anyone know of any other sources that might be able to clarify the info on Wikipedia?

When I say unrealistic, I am talking about the huge increase in GFlops/watt, which shows even the cheapest 600-series card (GeForce GTX 650) as having 3072 GFlops, thrashing the previous generation's flagship dual-GPU card, the GTX 590... and doing so with far fewer transistors and far less power. Something isn't adding up here.

I'd guess it's just vandalism of Wikipedia; surely XDR2 memory can't make all that difference, can it?
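A quick back-of-the-envelope check shows why that figure looks off. Peak single-precision throughput is roughly shaders x shader clock x 2 FLOPs (one fused multiply-add per shader per cycle). A minimal sketch, using the GTX 590's published specs and taking the disputed Wikipedia GTX 650 figure at face value:

```python
# Sanity check: peak SP throughput ~= shaders * shader_clock * 2
# (one FMA = 2 FLOPs per shader per cycle).

def peak_gflops(shaders, shader_clock_mhz):
    """Peak single-precision GFLOPS, assuming 2 FLOPs/shader/cycle."""
    return shaders * shader_clock_mhz * 2 / 1000

gtx590 = peak_gflops(shaders=1024, shader_clock_mhz=1215)  # published specs
print(f"GTX 590 peak: {gtx590:.0f} GFLOPS")                # ~2488 GFLOPS
print(f"Claimed GTX 650: 3072 GFLOPS = {3072 / gtx590:.0%} of a GTX 590")
```

A bottom-of-the-range card beating the previous dual-GPU flagship by ~23% on paper is exactly the kind of number that suggests vandalism.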
 
General consensus seems to be that they're going to show their hand (even if only a little, via behind-closed-doors media events etc.) at CES on the 10th of this month. Basically they know AMD are first to market with 28nm, and will be for a few months, so they'll be wanting to take the sting out of that somehow. It's going to be an interesting month to see how they respond.
 

Best to just wait; I always find that even with all the nice specs and promises, how it translates to real-world scenarios is always different, especially given the quality of drivers! I'm interested in the specs too, though. I think I'll have a look at buying one when they're available. The price is lower than AMD's offering (a typo, perhaps :P) and the specs sure do look meaty.
 
There seems to be a persistent rumour that the high end will be 7xx rather than 6xx. And yeah, the Wikipedia page should be taken with a pinch of salt at the moment.
 
As above, nothing firm yet. The Wikipedia page is... very optimistic to say the least - I'd ignore it entirely. We should find out more this month, but as far as rumours go:

- Most suggest that they will be named the 7-series, to bring them in line with AMD.

- Several rumours suggest the following:

a) A GK104 "mid-high" end card, featuring 640-768 CUDA cores [GTX580 has 512], and a 384-bit memory interface (implying 1.5GB or 3GB cards). To appear March / April time - dual-GPU version to follow.

b) A high end part (named either GK100 / GK110 / GK112), featuring 1024 CUDA cores and a 512-bit memory interface. To appear sometime in the second half of 2012.

- One rumour has appeared suggesting that Nvidia have done away with their "hot-clocked" shaders [see here]. If this is true, then Kepler may not perform quite the way we are all expecting. EDIT: Here is the full article on this, from 3dcenter. They're suggesting 1024 base-clocked CUDA cores in GK104, and 1532 in GK110. I suppose this is feasible, if the size of the shaders can be reduced due to their lower operating speed.


As far as performance goes, it will all depend on the spec. If the hot-clocks remain, then a 768-core GK104 part on 28nm should certainly outperform the 7970. If not, then it's anyone's guess...
 
However, according to rumours, the core clock of NVIDIA GPUs will now be well over 1GHz. So that might even things out, perhaps.
 
It could... But consider: the GTX580 has 512 shader units, operating at 1544MHz stock. A 768-shader Kepler running at 1000MHz, then, would have ~3% less raw shader power than a GTX580.

Unless Nvidia have found a significant increase in per-shader efficiency (highly doubtful...), they will likely require either: a) more than 768 shaders, or b) clock speeds of ~1200MHz, to better the 7970.
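To make the arithmetic explicit, here's a minimal sketch. It treats raw shader throughput as simply shaders x clock, ignoring any per-shader architectural differences between Fermi and Kepler (a big assumption):

```python
# Raw shader throughput taken as shaders * clock, ignoring per-shader
# architectural differences between Fermi and Kepler (an assumption).

def vs_gtx580(shaders, clock_mhz):
    """Shader throughput relative to the GTX 580 (512 shaders @ 1544MHz)."""
    return (shaders * clock_mhz) / (512 * 1544)

print(f"768 @ 1000MHz: {vs_gtx580(768, 1000):.0%} of a GTX 580")  # ~97%
print(f"768 @ 1200MHz: {vs_gtx580(768, 1200):.0%} of a GTX 580")  # ~117%
```

The ~1200MHz figure is roughly what a 768-shader part needs to clear the GTX580 by the sort of margin the 7970 manages.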

I guess we just need to wait and see...
 

It's the core clock which is rumoured to be 1GHz+ so presumably the shaders would be 2GHz+ if true.
 


"well over 1Ghz" should mean at least 1200Mhz methinks :) But yeah I don't suppose it's as high as 1544Mhz. Saw a few other recent slides suggesting ridiculous shader numbers for these parts.
[image: leaked Kepler spec slide]

Perhaps they are fake. But at least we know for certain they are releasing a GK104 x2-based GPU. This could possibly have a high number of shaders to account for any reduction in clock.



It's the core clock which is rumoured to be 1GHz+ so presumably the shaders would be 2GHz+ if true.

What he's referring to is the few reports that NVIDIA will do away with the X geometry / 2X shader clock arrangement and just have a single clock, and to compensate they will increase the clocks to "well above 1GHz". So no 2GHz+ shader clock. Just one clock signal to rule them all.
 
772MHz to 1GHz+, is that the largest jump in core clock between generations we could ever see?

No, because the clocks aren't comparable: that 772MHz core clock is paired with ~1544MHz shader clocks, whereas the new cards are rumoured to have 1000MHz+ SHADER clocks along with everything else, i.e. no hot clock. So part of the core will speed up, but the majority of the core will actually run significantly slower.
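Putting rough numbers on that (a sketch, using the rumoured and unconfirmed ~1GHz unified clock against the GTX580's published clocks):

```python
# With no hot clock, the two Fermi clock domains move in opposite
# directions (GTX 580: 772MHz core domain, 1544MHz shader domain).
unified_mhz = 1000  # rumoured figure, not confirmed

print(f"core domain:   {unified_mhz / 772 - 1:+.0%}")   # ~ +30%
print(f"shader domain: {unified_mhz / 1544 - 1:+.0%}")  # ~ -35%
```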

The rumours are all over the place, from your general doubling of Fermi, which seems/seemed most likely - i.e. 512 shaders x2, with an increase in bandwidth and TMUs/ROPs to go with it, and a likely minor clock speed bump of 10-15% (if the process was great; as little as 5% if it wasn't working well for them).

The flip side is people saying they are going more AMD-style: no hot clocks, a higher default clock speed but a much lower shader clock speed, and many more shaders.

I think in general people are vastly overestimating how many extra transistors it takes to double-pump the clock speed of the shaders, and dropping the doubled clock speed won't magically allow them to fit 30-50% more shaders in (compared to a 1024-shader "double Fermi style" Kepler, not the 512-shader actual current Fermi).

Will we hear anything from CES? Who knows. Usually it's some idiot website who confuses "tells 12 people behind a closed door with a strict NDA, and only at a trade show because that is when those 12 people will be together anyway, so it's just easier" with "talks about Kepler at trade show: launch, a billion cards available, reviews and detailed specs, woooo".

Though normally it's purposeful misreading to make AMD seem like they've missed a launch :p

I think one of the biggest problems with Kepler is that the few minor bits of info that seem almost confirmed suggest a radically different launch schedule to normal, but this makes more sense as problems making your fairly standard Nvidia 500mm^2+ core than as a massive change in architecture and strategy from Nvidia.
 
^^^ I agree that the rumours are all over the shop this time around. We usually start to get some convergence in the last few weeks / months before release, but this time we're seeing the opposite. About the only thing that's consistent between the rumours is the name "GK104".

Saw a few other recent slides suggesting ridiculous shader numbers for these parts.
[image: leaked Kepler spec slide]

Interesting - I always like these tables, even if they are 99% fabricated!

I find it hard to believe that GK104 will have 1536 shaders though... Even without the hot-clocks, and assuming an optimistic reduction in per-shader transistor count, that would make for a very large GPU. Certainly it would swamp the bandwidth of a 256-bit bus... Three times the number of shaders and more than twice the floating point power of the GTX580, but it has even less memory bandwidth? Nah...
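To put rough numbers on the bandwidth point (a sketch; the GDDR5 speed for the hypothetical 256-bit card is an assumption, since nothing is confirmed):

```python
# Peak memory bandwidth = (bus width in bytes) * effective data rate.
# GTX 580 figures are published; the 5 Gbps GDDR5 rate for a
# hypothetical 256-bit GK104 is purely an assumption.

def bandwidth_gbs(bus_bits, effective_gbps):
    """Peak memory bandwidth in GB/s."""
    return (bus_bits / 8) * effective_gbps

print(f"GTX 580 (384-bit @ 4.008 Gbps): {bandwidth_gbs(384, 4.008):.0f} GB/s")  # ~192
print(f"256-bit @ 5 Gbps (assumed):     {bandwidth_gbs(256, 5.0):.0f} GB/s")    # 160
```

Even with fast GDDR5, a 256-bit bus falls short of the GTX580's ~192GB/s, which is why tripling the shader count on top of it looks implausible.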

... and the GK100 at 2304 shaders? I don't buy it. I can't see there being more than 2048 shaders at the very most (and I'd expect more like 1536). Plus, the TMU / ROP count seems unbalanced across the board. GK106 has *half* the shader count, but the same number of TMUs as the GK104? Doubtful...

To be honest, I'm not buying what that chart is selling.
 
Yeah, it does seem a bit odd. As I mentioned previously, the most reliable information I've seen so far indicated a 60% increase in shaders and a 20% increase in clock speed over the GTX480, and that it would be the top-end part for that range. No indication whether it was GK100, GK106 or what, but it seems to correspond somewhat to the information for the GK106, with a 768 SP part. So far I've heard nothing to indicate that they'd do away with the hot-clock domain on GeForce parts, but it wouldn't surprise me to see compute-orientated parts having a different core/shader config.
 
Yeah, some of the specs look out of whack. There was a slightly more realistic-looking one floating around, but I can't seem to find it. The 2304 is someone taking 768 and multiplying it by 3. The x2 makes sense, as GK110 = GK104x2 according to an earlier slide, but the x3 makes no sense. A 2048-shader version for the 512-bit part (GK112, I think?) would've been more believable.
 
Okay, here are the super-accurate indisputable specs for Kepler (okay, so I just pulled them from my rectum, but whatever):

I've done two tables - my prediction for the resulting specs, in the case of a "traditional" hot-clocked shaders part, and another for the case of base-clocked shaders.

For the hot-clocked version I've assumed the same 32/4/3 grouping of shaders/TMUs/ROPs as found in Fermi. For the base-clock version, I've assumed a 64/4/3 grouping - this results in fewer ROPs and TMUs overall, which is to be expected since these parts of the GPU are running faster than their equivalents in the hot-clocked version.

I've assumed in each case that the die-size of the GK100 is around 500-550mm^2 or so, the GK104 around 400mm^2, and the GK106 around 250mm^2. I have assumed that all parts are "full" designs, though there is always the possibility that some modules will be disabled to improve yields.
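The grouping assumptions are easy to sanity-check. A minimal sketch (the shader counts used below are illustrative assumptions, not leaked figures):

```python
# Derive TMU/ROP counts from a shaders/TMUs/ROPs grouping.
# 32/4/3 means each group of 32 shaders brings 4 TMUs and 3 ROPs.

def derive_config(shaders, grouping=(32, 4, 3)):
    """Return (TMUs, ROPs) implied by a grouping, for a given shader count."""
    group_shaders, group_tmus, group_rops = grouping
    groups = shaders // group_shaders
    return groups * group_tmus, groups * group_rops

# Sanity check: Fermi's 32/4/3 grouping reproduces the GTX 580's
# published 64 TMUs / 48 ROPs from its 512 shaders.
print(derive_config(512))                        # (64, 48)

# A hypothetical 1024-shader base-clocked part with the 64/4/3 grouping
# ends up with the same TMU/ROP counts, just running at a higher clock.
print(derive_config(1024, grouping=(64, 4, 3)))  # (64, 48)
```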

Hot-Clocked Shaders

[table image not preserved]

Base-Clocked Shaders

[table image not preserved]
Note: 512-bit will imply 2GB or 4GB memory, 384-bit will imply 1.5GB or 3GB, and 256-bit implies 1GB or 2GB.
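The bus-width-to-capacity mapping follows directly from GDDR5 chip organisation; a quick sketch (assuming standard 32-bit-interface GDDR5 chips in the 1Gbit and 2Gbit densities common at the time):

```python
# Each GDDR5 chip has a 32-bit interface, so the bus width fixes the
# chip count; common chip densities (1Gbit / 2Gbit) then fix capacity.

def capacities_gb(bus_bits, chip_gbits=(1, 2)):
    """Plausible total memory capacities (GB) for a given bus width."""
    chips = bus_bits // 32
    return [chips * density / 8 for density in chip_gbits]

for bus in (512, 384, 256):
    print(f"{bus}-bit -> {capacities_gb(bus)} GB")
# 512-bit -> [2.0, 4.0], 384-bit -> [1.5, 3.0], 256-bit -> [1.0, 2.0]
```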


Anyway, that's my guess. As always, feel free to hold my feet to the fire if / when they turn out to be completely incorrect!
 
Duff-Man, are you a hardware engineer? You seem to have quite a depth of insight into this stuff, at quite a fundamental level.

Heh :D Thanks for the vote of confidence, but no - I'm just a mechanical engineer / mathematician who has an interest in this kind of thing :)

Having a general background in engineering / physical sciences certainly does help in understanding the fundamental concepts of GPU design though.
 


yerrss... And everyone knows mechanical engineering is like ... the lamest engineering (after civil) :D viva la electrical.
 