
The first "proper" Kepler news Fri 17th Feb?

What matters is not when Kepler is released, but rather how well it performs and at what price once it arrives.

7x00's are not selling well because of AMD's fantasy pricing, so NVidia have probably not lost too many sales so far. What is most likely is that people are sitting on the fence, waiting for the 7900's to drop or for whatever Kepler brings. The 7900's are good cards but cost 50% above last gen's launch prices. The 7700's are too expensive and frankly offer disappointing performance compared to previous-gen equivalents. The 7800's offer very decent performance versus the 6800's, but they are also overpriced. The best-value 7x00 card so far is the 7850, which is not even on the shelves yet.

If the launch of Kepler GK104 provides GTX580-to-7970 performance at a much lower price point, NVidia will be on to a winner. The rumoured price of $299 (~£250) will be much more tempting to most people than a 7900 @ £350 to £450 or old GTX580's @ £340.

If, however, NVidia do an AMD and release GTX580/7900 performance at GTX580/7900 prices, then it will be another own goal. Most people will not spend >£350 on a graphics card that will not even be the top Kepler part. My guess is that the launch price will be pretty close to the rumours @ £250-£275 for better-than-GTX580 performance. Maybe it will even put up a fight against the 7970 for significantly less money.

The only high-end cards I would consider buying before Kepler arrives are 6950's or the GTX480 specials for £185. I cannot see these dropping much in value, but the 7900's and GTX500's will almost surely plummet.

This is quite possibly the most sense talked on this forum in the last month.
 
It was either the 660Ti or the 670, not the 670Ti, AFAIK.

I heard a while back that the 660Ti/670 (i.e. the top of the mid-range) would have 60% more shaders (or shader performance, not clear on that) than the GTX480 and 20% higher clock speed (no mention of dropping hot clocks), which would put it approximately between the 7950 and 7970 on performance, at about ~£299 in the UK. The "680" part (I've not actually heard it called the 680) was supposed to be coming out 4-5 months later (i.e. around July), but there are multiple relatively credible sources online indicating it will be sooner than that.
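As a quick sanity check on that rumour, +60% shaders and +20% clock over the GTX480 compound to roughly 1.9x the raw shader throughput; this is back-of-envelope arithmetic only and says nothing about real-game scaling:

```python
# Compounding the two rumoured uplifts over the GTX480.
# Pure arithmetic on the rumour, not a performance prediction.
shader_uplift = 1.60  # "60% more shaders"
clock_uplift = 1.20   # "20% higher clock speed"

raw_throughput = shader_uplift * clock_uplift
print(f"~{raw_throughput:.2f}x GTX480 raw shader throughput")  # ~1.92x
```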

My source for this information does work in the industry but does NOT work for nVidia, so may or may not be correct, but has been right more often than wrong in the past.
 
It looks like mostly bad news if you really think about it.

Shader count on its own doesn't mean a huge amount (if the info is correct; it's VR-Zone, which is mostly news by Theo the dope, who posts almost entirely useless info, the same Nvidiot who posts on, I forget if it's BSNews or Fud, I think the former): 768 "fat" shaders or 1536 "thin" shaders. People are confusing doubled shaders for hot clocks being the trade-off; it isn't. It's essentially either more instructions per clock, or fewer plus hot clocks; these are two separate things.

So this card is just 1536 shaders instead of 768; that isn't a huge deal, the only issue being that more shaders mean more difficult scheduling and almost always a drop in efficiency (not necessarily huge). 768 or 1536, the two designs would end up not far off each other either way; you're talking about a card with 50% more shaders (effectively, either way) than a GTX580. That isn't bad at all, it's where you would expect, as the 560Ti had roughly 50% more shaders than the 285GTX.
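The "fat vs thin" equivalence above reduces to simple arithmetic. A minimal sketch, assuming Fermi's 2x hot-clock ratio carries over:

```python
# 768 hot-clocked shaders at 2x the core clock do the same work per
# second as 1536 shaders at 1x. The 2x ratio is Fermi's; whether
# Kepler keeps it is exactly the open question in the thread.

def effective_shaders(count, hot_clock=False):
    """Shader throughput normalised to the core clock."""
    return count * (2 if hot_clock else 1)

print(effective_shaders(768, hot_clock=True))  # 1536
print(effective_shaders(1536))                 # 1536

# Either design versus a GTX580 (512 hot-clocked shaders):
print(effective_shaders(1536) / effective_shaders(512, hot_clock=True))  # 1.5
```

Either way you slice it, the rumoured part lands at ~1.5x the GTX580's effective shader count, matching the "50% more shaders" figure above.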

However, if it still has hot clocks and stock clocks have gone down dramatically... despite a much better process, that IS bad news, really bad news. There would be two possibilities: either Nvidia screwed up Kepler completely and yields/power are horrible and that is all they could release at, or, simply, stupidly high memory speeds at stock because they have woefully inadequate bandwidth. If they are doing 6GHz stock, it probably means Nvidia is using a lot of power getting every last MB of bandwidth out of the controller... because it's desperately bandwidth limited.

So either Kepler's clocks suck, or they've simply dropped them to where they save power but don't lose performance because of bandwidth limits. I.e. with 6GHz memory, an 800MHz core clock would be pointless, as due to lack of bandwidth you'd get effectively no more performance than at 700MHz, but voltage/power-wise their only choice is to get the card as low-power as possible.

That could mean way more performance is in there, but with memory unlikely to go much further it might be pointless. Maybe they can hit 900-1000MHz overclocked, but no bandwidth means ridiculously awful scaling.
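The bandwidth-wall argument above can be sketched as a toy min(compute, bandwidth) model; the ceiling value is invented purely to mirror the 700MHz example:

```python
# Toy roofline-style model: performance is capped by whichever of the
# core and the memory subsystem saturates first. The ceiling of 700
# "units" is made up for illustration only.

MEM_CEILING = 700  # what the memory subsystem can feed, arbitrary units

def perf(core_mhz):
    """Compute throughput scales with clock until it hits the memory wall."""
    return min(core_mhz, MEM_CEILING)

print(perf(700))   # 700 -> already at the wall
print(perf(800))   # 700 -> +100MHz core, zero gain
print(perf(1000))  # 700 -> a 900-1000MHz overclock scales awfully
```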


Or again, the news is just wrong, but ridiculously low clocks look like bad news just about whichever way you cut it.

What you say makes sense, but there are other alternatives.

The GTX580 had ample memory bandwidth, evidenced by how little difference overclocking the VRAM made to performance, and by how close an overclocked GTX560Ti (256-bit memory version) could get to the GTX580. It may well be that the 384-bit bus was over-specced relative to Fermi GPU performance. Given that the theoretical bandwidth for both the GTX580 and the GTX 670/680 is the same, there may well be sufficient headroom for significant GPU-driven performance boosts. Other changes may also have been made to maximise bandwidth efficiency.
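For what it's worth, the equal-bandwidth claim checks out arithmetically if the Kepler part pairs a 256-bit bus with the 6GHz effective memory mentioned earlier; the 256-bit width is an assumption here, not something stated in the thread:

```python
# Theoretical peak bandwidth = bus width in bytes * effective data rate.
# GTX580 figures are the reference-card specs; the Kepler line assumes
# a 256-bit bus @ 6GHz effective (an assumption for illustration).

def bandwidth_gb_s(bus_width_bits, effective_mhz):
    """Peak theoretical memory bandwidth in GB/s."""
    return (bus_width_bits / 8) * effective_mhz * 1e6 / 1e9

print(bandwidth_gb_s(384, 4008))  # GTX580: ~192.4 GB/s
print(bandwidth_gb_s(256, 6000))  # assumed Kepler: 192.0 GB/s
```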

What is probable is that the most common bottleneck will indeed move from the GPU towards memory bandwidth. This will be largely dependent upon applications (some games eat shader power, others eat bandwidth, some are balanced).

Assuming that GK104 will have approximately 50% more shader/GPU power and equal memory bandwidth, we should see performance increases in the range of 0 to 50%. For argument's sake, GTX580 + 25% = 7950/7970 or thereabouts.
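That 0-50% range follows from a simple blend: purely bandwidth-bound work gains nothing, purely shader-bound work gains the full uplift, and a game sitting halfway lands on the +25% figure. A minimal sketch, with illustrative fractions:

```python
# Two-bucket model: shader-bound work gets the full ~50% uplift,
# bandwidth-bound work gets none. Fractions are illustrative.

def uplift(shader_bound_fraction, shader_uplift=0.50):
    """Expected gain over the GTX580 under this blend."""
    return shader_bound_fraction * shader_uplift

for frac in (0.0, 0.5, 1.0):
    print(f"{frac:.0%} shader-bound -> +{uplift(frac):.0%}")
# 0% shader-bound -> +0%
# 50% shader-bound -> +25%
# 100% shader-bound -> +50%
```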

Low core clocks could indicate process issues, bowing to power-saving requirements, or possibly simply not needing more MHz to compete with the 7900's. Perhaps NVidia are leaving overclocking headroom open, in much the same way as the 7900's have very high ceilings. Perhaps NVidia simply do not want this card to perform too close to the full-fat Kepler when that eventually arrives. However, 700-800MHz does seem very low for 28nm, but until it is confirmed this is speculation.

Don't forget, this will not be NVidia's top Kepler part and it is not expected to trounce the 7900's. If it merely competes with them whilst using 30% fewer shaders and a much cheaper production process, it will be a damn good card.

GTX580 to 7970 performance from a £250 mid-high end card. Yes please:).
 
^^ I can up the VRAM frequency on my GTX470 (even when clocked past stock GTX480 core performance) past 2000MHz (stock 1674MHz) and in most cases see no change in performance, and in a very limited number of games a ~4% fps increase - I can also back the VRAM down to below 1200MHz before seeing any performance drop-off in most cases.
 
It does seem a bit on the low side, given that games like BF3 suck up VRAM at higher settings. But gaming above 1200p is hit-or-miss at the moment, and when I buy a top-end graphics card(s) I like to max out all my games and hit a minimum of 60fps. The reviews I saw for 2x 7970 in Crossfire show that BF3 only managed an average of around 45fps at 1600p, and I imagine it will be similarly disappointing for 3-monitor setups. And better core performance can help make up for it.

Is all multi-GPU rendering based upon alternate frames, or can they split the image in half and render the halves separately? If it's the latter then VRAM isn't as important; if it's the former then large VRAM is critical. I haven't really kept up, to be honest. It seems that all games are optimised for 1080/1200p and everything above that is a minefield, with some performing flawlessly and others falling apart.

I imagine that Kepler will have a decent performance lead. The question is whether nVidia has to do it in a dirty way (poor power efficiency, low overclocking headroom) and how much of a premium they charge. But I am certainly interested to see whether 2GB of RAM limits the card at higher resolutions.

Having 2 7970's, I can tell you that at 1440p it serves well over 60fps on full graphics. I have AA turned off and am running at about 100-120fps, which is silky smooth.

Here it is running 3x HD at 64fps: http://www.hardwareheaven.com/revie...ossfire-performance-review-battlefield-3.html
 
Let's hope it isn't an announcement about an announcement :D

Inb4 anticipation of the wood launch.

^^ I can up the VRAM frequency on my GTX470 (even when clocked past stock GTX480 core performance) past 2000MHz (stock 1674MHz) and in most cases see no change in performance, and in a very limited number of games a ~4% fps increase - I can also back the VRAM down to below 1200MHz before seeing any performance drop-off in most cases.

Doesn't memory error correction kick in at some point with your card? It should on all GDDR5 cards.
 
I've seen no effect of it if it does - memory appears to be stable up to a little over 2000MHz with no drop-off in performance at any stage until the point at which it goes totally unstable.
 
I've seen no effect of it if it does - memory appears to be stable up to a little over 2000MHz with no drop-off in performance at any stage until the point at which it goes totally unstable.

But if there was no performance improvement, memory error correction had to be kicking in at some point.
 
But if there was no performance improvement, memory error correction had to be kicking in at some point.

My point was that memory bandwidth probably isn't the issue - I can also underclock it by quite a margin before I see any performance drop-off - which would suggest that error correction isn't kicking in and that the extra bandwidth isn't helpful in most cases.
 