AMD might do a small refinement to these cards, but typically they don't. 30% on the clocks is possible, but maybe not realistic.
Why not realistic? Take a look at the figures for power consumption on the significantly overclocked 7970s.
http://hardocp.com/article/2012/01/25/asus_radeon_hd_7970_video_card_review/8
Then think about the kind of cooler a card like that will need.
Cards sold at retail will be put into all sorts of awful systems, and they need to be quite robust as a result.
I don't see why Nvidia won't bring the CUDA core count up to 768, then use the spare die area either to shrink the die, increasing profit, or to enhance the acceleration features mentioned on other websites.
It's rather sweet that you think a big Kepler will use the same or less power than a GTX 580; it won't. A bog-standard card that is 45% faster than a 7970 at current stock clocks will NOT use under 300W, and likely much more.
Also, a few key things to point out. First, top-end cards sell almost smeg all through Dell and the like; Dell barely gives a damn about high-end cards. Their entire business is based around making $50 on a $400 machine, tens of millions of times a year; the couple of thousand $3,000 machines with a $500 profit make almost nothing comparatively.
Second, big Kepler is going to use silly power. Third, FurMark is ENTIRELY irrelevant to everything, in every single way possible.
Average power draw of the 7970 in a demanding game (I forget if it was Metro or Crysis 2) is around 165W.
Further, the Asus card's cooler dealt with that, so no, it wouldn't actually need another cooler.
FurMark peak power usage means smeg all; average power usage in games is the only metric that matters. Peaking at 200W in games doesn't matter: it happens for a second, and the heatsink doesn't suddenly overload in half a second. That's the whole point of a heatsink: for all intents and purposes it averages out the heat load. 165W average power usage in games is nothing, and it's not hugely more even when heavily overclocked.
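To put a rough number on that averaging point, here's a toy first-order thermal model. The thermal resistance, thermal mass and ambient figures are assumptions I've made up for illustration, not anything measured from the card:

```python
# Toy first-order thermal model of a GPU cooler. All numbers here are
# assumptions for illustration, not measured values for any real card.

R_TH = 0.25   # assumed cooler thermal resistance, degrees C per watt
C_TH = 250.0  # assumed heatsink thermal capacitance, joules per degree C
T_AMB = 40.0  # assumed case ambient temperature, degrees C
DT = 0.01     # simulation time step, seconds

def peak_temp(power_at, duration):
    """Integrate dT/dt = (P - (T - T_amb)/R_th) / C_th and return the peak."""
    temp = T_AMB + 165 * R_TH       # start at the 165 W steady state
    peak = temp
    t = 0.0
    while t < duration:
        p = power_at(t)
        temp += (p - (temp - T_AMB) / R_TH) / C_TH * DT
        peak = max(peak, temp)
        t += DT
    return peak

steady = peak_temp(lambda t: 165.0, 60.0)
spiked = peak_temp(lambda t: 200.0 if 10.0 <= t < 10.5 else 165.0, 60.0)
print(f"steady 165 W, peak temp:     {steady:.2f} C")
print(f"with a 0.5 s spike to 200 W: {spiked:.2f} C")
```

With those made-up numbers the half-second spike moves the heatsink by well under a tenth of a degree, which is the whole argument: only the average heat load matters.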
Big Kepler will come out with power usage that makes the 7970 look poor in performance but great in performance/watt. Then, once Nvidia has broken the 300W single-card barrier in a big way, AMD will either release an 8970 with 1200MHz clocks and close the gap significantly, almost to nothing, or release a 7980 with the same clocks.
I care less about a card using 300W average in games than about the 80W idle the 4870 used, which was horrible. 80W all day every day, or 15W all day every day plus 300W for a few hours here and there? Idle is basically "fixed" on both AMD and Nvidia these days, though for dual screens it's not where it should be (which is nuts, for both companies).
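Quick back-of-envelope arithmetic on that, assuming the machine is on all day and gamed on for about three hours (my numbers, purely illustrative):

```python
# Rough daily energy comparison; the usage pattern (3 hours of gaming a day)
# is an assumption for illustration.
hours_gaming = 3
old_card = 80 * 24                                        # 80 W idle, all day
new_card = 15 * (24 - hours_gaming) + 300 * hours_gaming  # 15 W idle + 300 W gaming
print(f"80 W idle all day:        {old_card / 1000:.2f} kWh/day")
print(f"15 W idle, 300 W for 3 h: {new_card / 1000:.2f} kWh/day")
```

Even at 300W under load, the low-idle card comes out well ahead over a day (roughly 1.2 kWh vs 1.9 kWh with these assumptions).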
Honestly, I don't know what you're talking about with the last bit, though. There's nothing wrong with 768 shaders; the issue is that manufacturing is imperfect. Off a wafer you will get some percentage of dies that don't work at all, some that work but not fully, and some that work fully. You rarely if ever disable more than 20% of a chip to "bring yields up", which means that with a native 768-shader die, which is what GK104 will likely be, there WILL, without question, always be a bunch of chips that won't work with all 768 shaders enabled but will work with 10-15% fewer. Every generation of GPUs and CPUs has salvage parts.
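For the salvage-part point, here's a toy defect model; the defect count, die count and cluster layout are pure guesses for illustration, not real 28nm figures:

```python
# Toy wafer yield / salvage model. The defect count, die count and cluster
# layout are guesses for illustration, not real 28 nm numbers.
import random

random.seed(1)

DIES_PER_WAFER = 200
DEFECTS_PER_WAFER = 120   # assumed total random defects landing on the wafer
CLUSTERS_PER_DIE = 8      # e.g. 8 clusters x 96 shaders = 768 shaders
MAX_DISABLED = 2          # sell as a cut-down part if this many or fewer clusters die

# Scatter each defect onto a random (die, cluster) pair.
dead = [set() for _ in range(DIES_PER_WAFER)]
for _ in range(DEFECTS_PER_WAFER):
    die = random.randrange(DIES_PER_WAFER)
    dead[die].add(random.randrange(CLUSTERS_PER_DIE))

full    = sum(1 for d in dead if not d)
salvage = sum(1 for d in dead if 1 <= len(d) <= MAX_DISABLED)
scrap   = DIES_PER_WAFER - full - salvage
print(f"fully working (768-shader) dies: {full}")
print(f"salvage dies (1-2 clusters off): {salvage}")
print(f"scrap dies:                      {scrap}")
```

The exact split depends entirely on the made-up defect rate, but the shape of the result is always the same: a sizeable pile of dies that only work with a cluster or two fused off, which is exactly where the cut-down parts come from.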