
Nvidia Kepler vs 500 series specs released - **HOLY CRAP**

YES!


GTX570 for me.

All I want is the ability to get over 60fps at above 1920x1200 as that's my next monitor res :p

A nice bump from my GTX460 :D
 
GTX 760 SPECIFICATIONS (allegedly)
[image: exclusive.png (alleged GK104 spec table)]

http://www.obr-hardware.com/2012/01/exclusive-28nm-geforce-specs.html

560 Ti replacement/GK104 specifications allegedly released: 3.2 billion transistors, 576 cores, fewer geometry units but a higher core clock to compensate, 4800 MHz memory, and higher theoretical GFLOPS. I assume the higher shader count is there to compensate for the shaders no longer being hot-clocked.

The link says it might be a little faster than the 580, but I imagine it will be roughly the same speed or ever so slightly slower, which would put it at precisely the speed the GTX 760 shows in the benchmarks we've seen in this thread.

If this is true, it should overclock really well, and that might make it a better buy than the 580.
 
If this spec is genuine, then Nvidia must have made significant changes to the CUDA cores alongside removing the "hot clocks". Specifically, each CUDA core must do four floating-point operations per clock, rather than two per clock as has been the case since GT200. This would make the "new" CUDA cores directly comparable with the "old" design in terms of FP performance:

"old" CUDA core: Performs two floating-point calculations at each shader clock, so four per base clock.
"New" CUDA core: Performs four floating-point calculations at each base clock.

The 2.0 TFLOPS arithmetic performance stated in that table is ONLY obtainable from the specs above if four FP operations are carried out every base clock. So, in short, the old CUDA core design performs "two lots of two" computations each base clock, whereas the new one performs "four lots of one".
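As a quick sanity check (a sketch only, taking the 905 MHz base clock mentioned below):

```python
# Theoretical throughput of the leaked GK104 spec, assuming 4 FP ops
# per CUDA core per base clock (no hot clock).
cores = 576
ops_per_clock = 4
base_clock_hz = 905e6

tflops = cores * ops_per_clock * base_clock_hz / 1e12
print(f"{tflops:.2f} TFLOPS")  # ~2.09, in line with the ~2.0 TFLOPS in the table
```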

These specs aren't too far away from what I was predicting for the GK104 (see below), except that I assumed each CUDA core would still perform two operations per clock, so all my CUDA-core counts below are out by a factor of two:

Okay, here are the super-accurate indisputable specs for Kepler (okay, so I just pulled them from my rectum, but whatever):

[quoted prediction table not preserved beyond "Base-clocked shaders"]

It looks like my estimate greatly over-predicted the clock speed (1100 vs 905 MHz) and slightly under-predicted the shader count (512-equivalent vs 576). Anyway, the GK104 should perform similarly to the 7970 at stock, perhaps coming in a little behind it.
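Putting both estimates on the same four-ops-per-clock footing shows how close they land (a rough sketch; the "predicted" line uses my 512-equivalent count at 1100 MHz):

```python
# Rough comparison of my prediction vs the leaked spec, both expressed
# as 4-op-per-base-clock CUDA cores.
predicted = 512 * 4 * 1100e6 / 1e12  # my guess: 512-equivalent cores @ 1100 MHz
leaked = 576 * 4 * 905e6 / 1e12      # leak: 576 cores @ 905 MHz

print(f"predicted: {predicted:.2f} TFLOPS")  # ~2.25
print(f"leaked:    {leaked:.2f} TFLOPS")     # ~2.09
```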

What concerns me most is the TDP... Given Nvidia's focus on power-efficiency, I was expecting a chip of this size to come with a ~200W TDP. Of course the quoted TDP values are only approximations, but it certainly doesn't appear that any massive leaps forward have been achieved on this front. I imagine that, with this card, performance-per-Watt should also be somewhere around that of the 7970.
 
I hope the cards are not going to be extortionately priced. I fancy one of the 780s, but I know it'll be over £500 :( I don't think I could justify paying that for a single component (no matter how good it might be :p)
 
I vowed to never spend more than £300 on a GFX card and intend to stick to that rule!
 
This is their mid-range, priced at $399. 580 performance at that price sounds like a good deal to me. And hopefully prices drop to current GTX 560 Ti levels after a few months.
 
Except that your memory bandwidth figure didn't make sense until I realised you may have mistakenly used a small b instead of a big one (Gb/s vs GB/s).

Ah. Not sure the convention... It's supposed to be bytes not bits anyway :)
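It's an easy slip, but it's a factor of eight; a quick illustration (the 256-bit bus here is just an assumed example, not from the leak):

```python
# Gb/s (gigabits) vs GB/s (gigabytes): a factor-of-eight difference.
# Hypothetical: 4800 MHz effective memory on an assumed 256-bit bus.
effective_rate_hz = 4800e6
bus_width_bits = 256  # assumption for illustration only

gbit_per_s = effective_rate_hz * bus_width_bits / 1e9  # 1228.8 Gb/s
gbyte_per_s = gbit_per_s / 8                           # 153.6 GB/s
print(f"{gbit_per_s:.1f} Gb/s = {gbyte_per_s:.1f} GB/s")
```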


I have to say, if the shaders really are operating at base-frequency and performing double the number of FP calculations per-clock, then the transistor count is lower than I would have expected. If it really IS in the region of 3.2Bn, then that should put the die size at around 300mm^2.
 
Yeah, if the scaling is precisely the same as the GTX 580 then potentially it could even be a little smaller than 300 mm^2, i.e. with the same transistor density as a GTX 580 (scaled for the process node) you'd get:
520 mm^2 * (3.2bn / 3.0bn) * (28/40)^2 ≈ 272 mm^2

A 272 mm^2 GTX 580 replacement would be exciting indeed, if for nothing else than how well it would overclock, plus the low prices (at least eventually, after the NKOTB dust settles).
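That back-of-the-envelope scaling generalises to a small helper (a sketch only; it assumes density scales with the inverse square of the feature size, which real processes only approximate):

```python
def scaled_die_area(ref_area_mm2, ref_transistors, ref_node_nm,
                    new_transistors, new_node_nm):
    """Estimate die area, assuming density scales as 1 / node^2."""
    density = ref_transistors / ref_area_mm2  # transistors per mm^2
    new_density = density * (ref_node_nm / new_node_nm) ** 2
    return new_transistors / new_density

# GTX 580 reference (520 mm^2, 3.0bn, 40 nm) -> alleged GK104 (3.2bn, 28 nm)
print(f"{scaled_die_area(520, 3.0e9, 40, 3.2e9, 28):.0f} mm^2")  # ~272
```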
 