
The first "proper" Kepler news Fri 17th Feb?

It's just so hard to know tbh.

When Fermi launched they were apparently on death's door. I even read articles titled "The end of Nvidia?" which explained all of the problems Nvidia supposedly had.

From whom? Charlie Demerjian? :p

Some people will always, given the opportunity, write articles questioning the future viability of a company. A dramatic proposition in the headline draws in readers - it doesn't mean the speculation within the article is realistic.




... While we're on the subject of Mr Demerjian, let's take a look at some of his predictions for Kepler. Certain people on here seem to think he's some kind of all-seeing sage, so take a look at his predictions prior to release, and see what information turns out to be accurate.

Let's start in December, where we have this article:

Charlie D said:
Nvidia has recently warned AIBs that the Kepler launch will be delayed “past March”. I wonder who has options expiring soon?

Charlie D said:
GK104 comes later, and that is the mainstream part sporting 384-bit memory and PCIe3. Given the memory bandwidth, it should have a healthy performance boost over the current GF104. If the power envelope allows, something that 4Gamer hints may be a problem.

So: Kepler delayed past March, GK104 even later than that, with a 384-bit bus.


Fast forwarding to late January (well after the 7970 launch, so plenty of time for Nvidia to adjust their pricing targets), we have this article, which no doubt drew in a lot of viewers.

Charlie D said:
We hear that Nvidia (NASDAQ:NVDA) has sent out Kepler pricing to AIBs in the far east, or will once the New Year party dies down. A few green-tinged moles, we think it’s the New Year’s celebratory hair dye, tell SemiAccurate that the initial Kepler/GK104 cards will be priced around the $299 mark.

So, $299 release pricing. Sounds very attractive, right?


Also this famous article, which no doubt also drew in a lot of readers:

Charlie D said:
A lot of people have been asking about Kepler/GK104, and we finally have some hard information ... The short story is that Nvidia (NASDAQ:NVDA) will win this round on just about every metric, some more than others. ... Nvidia wins, handily.

So, we have Nvidia winning "handily", and beating the 7970 on 'just about every metric'. Even better.



Moving on into February, we have a backtrack / clarification article, which makes a completely different set of claims:

Charlie D said:
Sources tell SemiAccurate that the ‘big secret’ lurking in the Kepler chips are optimisations for physics calculations. Some are calling this PhysX block a dedicated chunk of hardware, but more sources have been saying that it is simply shaders, optimisations, and likely a dedicated few new ops. ... That said, SemiAccurate is told Kepler/GK104 will be marketed as having a dedicated block, and this will undoubtedly be repeated everywhere, truth not withstanding

...So Kepler has a dedicated "physics block", either physical or driver-based (?), which will massively improve performance in PhysX.


Charlie D said:
GK104 is the mid-range GPU in Nvidia’s Kepler family, has a very small die, and the power consumption is far lower than the reported 225W. ... The architecture itself is very different from Fermi, SemiAccurate’s sources point to a near 3TF card with a 256-bit memory bus.

... With the loss of the so called “Hot Clocked” shaders, this leaves two main paths to go down, two CUs plus hardware PhysX unit or three.

... So now we're looking at a 256-bit memory bus, with almost 3 TFLOPS of theoretical compute performance (almost twice the 1581GFLOPS of the GTX580). Hot clocked shaders are now gone.
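As a sanity check on those numbers: theoretical FP32 throughput is just shaders × 2 FLOPs (one fused multiply-add per cycle) × shader clock. A quick sketch, where the GTX580 figures are the real specs and the Kepler figures are only the rumoured ones:

```python
# Theoretical FP32 throughput: shaders * 2 FLOPs (one FMA per cycle) * clock.
def gflops(shaders: int, shader_clock_mhz: float) -> float:
    """Peak single-precision GFLOPS, assuming 2 FLOPs per shader per cycle."""
    return shaders * 2 * shader_clock_mhz / 1000

# GTX 580: 512 shaders hot-clocked at 1544 MHz -> the 1581 GFLOPS figure above.
gtx580 = gflops(512, 1544)

# A hypothetical "near 3TF" Kepler without hot clocks: e.g. 1536 shaders at ~1 GHz.
kepler_rumour = gflops(1536, 1000)

print(f"GTX 580:    {gtx580:.0f} GFLOPS")        # ~1581
print(f"3TF rumour: {kepler_rumour:.0f} GFLOPS")  # ~3072
```

So "almost 3TF" is plausible from the rumoured shader count alone, without needing hot clocks.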


Charlie D said:
Kepler’s 3TF won’t measure up close to AMD’s 3TF parts. Benchmarks for GK104 shown to SemiAccurate have the card running about 10-20% slower than Tahiti. On games that both heavily use physics related number crunching and have the code paths to do so on Kepler hardware, performance should seem to be well above what is expected from a generic 3TF card.

Charlie D said:
The problem for Nvidia is that once you venture outside of that narrow list of tailored programs, performance is likely to fall off a cliff, with peaky performance the likes of which haven’t been seen in a long time. On some games, GK104 will handily trounce a 7970, on others, it will probably lose to a Pitcairn.

Charlie D said:
When Kepler is released, you can reasonably expect extremely peaky performance. For some games, specifically those running Nvidia middleware, it should fly. For the rest, performance is likely to fall off the proverbial cliff. Hard. So hard that it will likely be hard pressed to beat AMD’s mid-range card.

So NOW it's running 10-20% slower than Tahiti, and can't match the performance of a 3TF AMD card (i.e. sub-7950 performance). ...Except for where you have PhysX. Without PhysX you're looking at 7870-level performance at best.





I'll just leave those quotes up there without too much further comment, but those who claim Charlie Demerjian is "almost always right" should come back and take a look after release. It might be a sobering read. To my eyes, Semi|Accurate is more about pulling in interested readers who are hungry for information on a highly secretive upcoming product, than it is about reporting any kind of truth. Sure they get some things right, but if you make enough contradictory predictions, then at least some of them will come true!
 
[attached chart: 16210598.jpg]

Whoever wrote that has failed to grasp the object–subject–verb format.

But yeah, that chart sure looks fake.
 
So NOW it's running 10-20% slower than Tahiti, and can't match the compute performance. Except where you have PhysX. Without PhysX you're looking at 7870-level performance at best.


I'll just leave those quotes up there without too much further comment, but those who claim Charlie Demerjian is "almost always right" should come back and take a look after release. It might be a sobering read. To my eyes, Semi|Accurate is more about pulling in interested readers who are hungry for information on a highly secretive upcoming product, than it is about reporting any kind of truth. Sure they get some things right, but if you make enough contradictory predictions, then at least some of them will come true!


Unfortunately, as per usual, you generally seem incapable of reading.

Firstly, the final GK104 price of $299 being sent to AIBs isn't instantly untrue just because it launches at $500. What do you do when you want people to think $299 is the price and get them to wait for it? You tell everyone that is how much it will cost.

You're assuming the story was wrong because the price is now different. That is incorrect: you do not know whether that was their intended pricing, or whether they simply lied and told their AIBs it would be $299. The card launching at any price now doesn't change THAT story and THAT information.

As for performance, in one article he said it would beat Tahiti in some benchmarks, and in the follow-up article he said it would beat Tahiti in some benchmarks.

The massive majority of Charlie's articles are NOT BASED on consumer choices, but on the success (or lack thereof) of an architecture for Nvidia, not the end buyer. Fermi was fine for end buyers, with good performance, but for Nvidia it cost massively more to make than AMD's parts: it WAS a failure and a massive mistake for Nvidia, even though it was not so for the consumer. People can claim it was and wasn't a mistake from different angles and still be correct.

If Nvidia wins a bunch of benchmarks and claims a win, and if, as he claims, Charlie sees a card early and it's 8 games where Nvidia wins by 10-20%, and he reports that, that doesn't make him wrong. If 2 weeks, 2 days, or 2 years later he hears more general results from third-party sources (from what I recall, the numbers he claimed were said to come FROM Nvidia) and he changes his estimation of how fast it is, that doesn't make the initial story incorrect. It can be correct BASED ON THE INFORMATION GIVEN, and new information changes the story.

As for winning on ALMOST every metric: again, this is based on the info he had at the time, can still be true, and AFAIK was talking about the architecture, not a specific card.

Either way, he is still mostly right. Name another source that is right even 10% as often.

Who has claimed he is always right? The people who insist he is wrong generally use arbitrary arguments and draw incorrect conclusions from illogical reasoning.

You managed to miss the part where Nvidia were saying Kepler in 2011, and Charlie said delayed past March... was that accurate? Who else posted that info?

What about the stories over the past 4-5 months about Apple, supply troubles and Kepler parts not being ready for it? What about his Tegra 3 and Tegra 4 stories, where he broke various correct Tegra news?

Information can change, and render old information currently inaccurate; that doesn't mean the information wasn't accurate at the time of publishing. Those are VASTLY different situations.

Let's also mention where you casually throw in (without quoting it) the claim that Charlie said GK104 will perform at 7870 level AT BEST without PhysX. This is something he did not claim. He said that in some games it will be ahead of Tahiti, either with PhysX or in games heavily optimised for Nvidia; in some games it will be on par or behind; and worst case it will be competing with Pitcairn.
 
I'll just leave those quotes up there without too much further comment, but those who claim Charlie Demerjian is "almost always right" should come back and take a look after release. It might be a sobering read. To my eyes, Semi|Accurate is more about pulling in interested readers who are hungry for information on a highly secretive upcoming product, than it is about reporting any kind of truth. Sure they get some things right, but if you make enough contradictory predictions, then at least some of them will come true!

No doubt. It's sensationalist journalese. He may have obtained facts from some sources, and those sources may be reliable. This may render some of those facts true. But that's beside the point. His posts are opinionated rants (which is blogging 101 - they are advised to do this to generate traffic), and I'm not interested in that and rather contemptuous of anything that comes out of it. Anandtech, by contrast, much more reliably reports the facts, even at an intimately technical level, without trying to spearhead some propagandist angle.

It's almost like he went and drowned his sorrows and wrote a few positive things about NVIDIA after the AMD lay-offs, which seem to have hit some of his friends/"moles". He then went back to retract what he wrote in his fit of drunken rage. That's my opinion, but it sure seemed like it when you also read some of the forum posts that accompanied those articles.
 
Unfortunately, as per usual, you generally seem incapable of reading.

Yes, yes, yes. I clearly have reading comprehension difficulties. I admit it. I simply can't read. Neither can I write. I struggled with that throughout my PhD and so forth :rolleyes:

Try mixing up your insults a bit - or better yet - try responding to the content of people's posts instead of instantly becoming aggressive and insulting.

I'll tell you what, instead of ranting on about "what could have been accurate" and "posting the best information he has at the time", why don't you be constructive and quote the sections of his reporting that you believe have been / will be accurate. Post release we can all have a look and decide where the balance of truth lies. Until then, continue with all your insults. They do amuse me these days...


To answer the one and only point you actually make in there:

Let's also mention where you casually throw in(without quoting it) where apparently Charlie claimed that GK104 will perform at 7870 performance AT BEST without Physx. This is something he did not claim, he said in some games it will be ahead of Tahiti both with physx or in games heavily optimised for Nvidia, in some games it will be on par or behind and worst case it will be competing with pitcarn.

The quote is right above it:

The problem for Nvidia is that once you venture outside of that narrow list of tailored programs, performance is likely to fall off a cliff, with peaky performance the likes of which haven’t been seen in a long time. On some games, GK104 will handily trounce a 7970, on others, it will probably lose to a Pitcairn.
...
When Kepler is released, you can reasonably expect extremely peaky performance. For some games, specifically those running Nvidia middleware, it should fly. For the rest, performance is likely to fall off the proverbial cliff. Hard. So hard that it will likely be hard pressed to beat AMD’s mid-range card.

... The 7870 being the top Pitcairn card, he suggests that it will struggle to beat the 7870, unless PhysX or other Nvidia middleware is present. Fairly clear, even to someone "who can't read". Unless you think that "AMD's mid-range card" refers to something else?
 
Heh...

Not much to explain really. Nvidia are handling the shaders differently, so we won't know the real performance figures until it's released. 1536 shaders doesn't mean so much until we know what each one is capable of. 6Gbps for the memory bandwidth makes no sense whatsoever, given that the GTX580 was 192GB/s. Now, 6.0GHz effective would be not too far off a realistic memory speed, but this seems like a pretty stupid mistake for Nvidia to make, so I'm going to say this is yet another fake.

Beyond that, the base clock speed of 1006MHz is very high (to me it would indicate the absence of 'hot-clocked' shaders if it were true), and the bump up to the overclocked 'turbo' frequency is very small (~5%). A TDP of 195W would probably put its power consumption around that of the 7970. But like I said, I doubt this is genuine - I can't see such a schoolboy error making its way through in an official document (even an internal one).
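The bandwidth arithmetic behind that suspicion, as a quick sketch: peak bandwidth is (bus width ÷ 8) × per-pin data rate, so "6Gbps" only makes sense as a per-pin rate, not a total:

```python
# Peak memory bandwidth = (bus width in bits / 8 bits per byte) * per-pin data rate.
def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return bus_bits / 8 * gbps_per_pin

# GTX 580: 384-bit bus with 4.008 Gbps GDDR5 -> the ~192 GB/s figure above.
gtx580 = bandwidth_gbs(384, 4.008)

# Reading the leak's "6Gbps" as a per-pin rate on the rumoured 256-bit bus:
kepler_rumour = bandwidth_gbs(256, 6.0)

print(f"GTX 580:          {gtx580:.1f} GB/s")        # ~192.4
print(f"256-bit @ 6 Gbps: {kepler_rumour:.0f} GB/s")  # 192
```

So a 256-bit bus at 6 Gbps per pin would roughly match the GTX580's total bandwidth, which is why quoting "6Gbps" as the bandwidth itself looks like a slide-maker's error.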


Thanks Duff. You talk sense and I appreciate that.
 
Heh...

Not much to explain really. Nvidia are handling the shaders differently, so we won't know the real performance figures until it's released. 1536 shaders doesn't mean so much until we know what each one is capable of. 6Gbps for the memory bandwidth makes no sense whatsoever, given that the GTX580 was 192GB/s. Now, 6.0GHz effective would be not too far off a realistic memory speed, but this seems like a pretty stupid mistake for Nvidia to make, so I'm going to say this is yet another fake.

Beyond that, the base clock speed of 1006MHz is very high (to me it would indicate the absence of 'hot-clocked' shaders if it were true), and the bump up to the overclocked 'turbo' frequency is very small (~5%). A TDP of 195W would probably put its power consumption around that of the 7970. But like I said, I doubt this is genuine - I can't see such a schoolboy error making its way through in an official document (even an internal one).

What they've done is decoupled the geometry clock from the shader clock. So while base clock may be 1006 the shader clock doesn't have to be 2012...

And honestly, this (i.e. decoupled clocks) is what I remember the earliest report on that saying. Just too many sites picked up on it and reported it as there being no hot clocks.
 
What they've done is decoupled the geometry clock from the shader clock. So while base clock may be 1006 the shader clock doesn't have to be 2012...

Fair point... That would certainly allow for more flexibility in tuning the core and shader clocks to their individual limits.

Now that I think about it - wasn't this also the case for the 8800GTX (and related architectures), and also GT200?


edit: Yes it was - see the table towards the bottom of this page: http://www.anandtech.com/show/2549

I wonder if the switch to "shader clock = 2 x core clock" was a compromise to help improve inter-domain communication in Fermi? If the core and shader clock domains are synchronised on every cycle, internal latencies could be reduced...
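The trade-off being discussed can be sketched in numbers. With hot clocks the shader rate is pinned to 2x the core; dropping them lets a design trade shader frequency for shader count. The GTX 580 figures below are real specs; the GK104 numbers are only the rumoured ones from this thread (1536 shaders at a 1006MHz base clock):

```python
# FP32 throughput: shaders * 2 FLOPs (one FMA per cycle) * shader clock.
def gflops(shaders: int, shader_clock_mhz: float) -> float:
    """Peak single-precision GFLOPS, assuming 2 FLOPs per shader per cycle."""
    return shaders * 2 * shader_clock_mhz / 1000

# Fermi (GTX 580): hot-clocked, shaders locked at 2x the 772 MHz core clock.
fermi = gflops(512, 2 * 772)

# Rumoured GK104: decoupled, shaders at the 1006 MHz base clock, but 3x as many.
kepler_rumour = gflops(1536, 1006)

print(f"GTX 580:       {fermi:.0f} GFLOPS")          # ~1581
print(f"Rumoured GK104: {kepler_rumour:.0f} GFLOPS")  # ~3090
```

Fewer MHz per shader, but far more shaders: the rumoured part ends up near the "3TF" figure from the earlier quotes.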
 
Whilst I'm still waiting on all the official benches to come out it's still fun to watch this thread progress.
Also I'll drop my current evaluations in here:

It looks like the GK104 has a similar die size to Tahiti, though maybe smaller; however, the Tahiti die burns up die space for the 384-bit bus. Not pushing the memory controller so hard means that Tahiti can use lower-spec RAM modules, reducing costs. Also, a wider bus reduces the chances of running into irreconcilable bandwidth issues limiting GPU performance.

Depending on the efficiency of each architecture, these two dies should perform pretty closely due to them having a similar number of transistors/die size.

If nVIDIA push hard on the clocks to get similar/better performance to the HD7970, expect the GTX670 to be made up of dies that don't reach those clocks, dies that can't meet the clocks within the TDP (requiring higher core voltage), and salvage parts making up the rest. Salvage parts mean that some functional units will be fused off, and the cards won't reach the same clocks when overclocking at stock voltage.

AMD may respond with an 'official' part that reclaims the single-GPU crown, but they don't really need to, as the AIB partners are already producing overclocked HD7970 parts that will probably exceed the performance of the GTX680. An example being the Sapphire 6GB part.

There may be another reason for AMD to release a higher-end part, and that is to justify maintaining higher ASPs on the HD7900 series. Everyone knows that the fastest cards justify their end prices by there being nothing faster that you can buy.
 
Looks like we have some people here who can talk sense, not like some of the Charlie's Angels here (DM and a few other reds).
Thanks xsistor & Duff-man !!! +10000
As for insults from DM - LOL, just look at the Failldozzer thread - it's all there, he failed epically with his predictions - typical
 