I dunno, but let's be honest I need me some 1.5.
It's just so hard to know tbh.
When Fermi launched they were apparently on death's door. I even read articles titled "The end of Nvidia?" which explained all of the problems Nvidia supposedly had.
Charlie D said: Nvidia has recently warned AIBs that the Kepler launch will be delayed “past March”. I wonder who has options expiring soon?
Charlie D said: GK104 comes later, and that is the mainstream part sporting 384-bit memory and PCIe3. Given the memory bandwidth, it should have a healthy performance boost over the current GF104. If the power envelope allows, something that 4Gamer hints may be a problem.
Charlie D said: We hear that Nvidia (NASDAQ:NVDA) has sent out Kepler pricing to AIBs in the far east, or will once the New Year party dies down. A few green-tinged moles, we think it’s the New Year’s celebratory hair dye, tell SemiAccurate that the initial Kepler/GK104 cards will be priced around the $299 mark.
Charlie D said: A lot of people have been asking about Kepler/GK104, and we finally have some hard information ... The short story is that Nvidia (NASDAQ:NVDA) will win this round on just about every metric, some more than others. ... Nvidia wins, handily.
Charlie D said: Sources tell SemiAccurate that the ‘big secret’ lurking in the Kepler chips are optimisations for physics calculations. Some are calling this PhysX block a dedicated chunk of hardware, but more sources have been saying that it is simply shaders, optimisations, and likely a dedicated few new ops. ... That said, SemiAccurate is told Kepler/GK104 will be marketed as having a dedicated block, and this will undoubtedly be repeated everywhere, truth not withstanding
GK104 is the mid-range GPU in Nvidia’s Kepler family, has a very small die, and the power consumption is far lower than the reported 225W. ... The architecture itself is very different from Fermi, SemiAccurate’s sources point to a near 3TF card with a 256-bit memory bus.
... With the loss of the so called “Hot Clocked” shaders, this leaves two main paths to go down, two CUs plus hardware PhysX unit or three.
Charlie D said: Kepler’s 3TF won’t measure up close to AMD’s 3TF parts. Benchmarks for GK104 shown to SemiAccurate have the card running about 10-20% slower than Tahiti. On games that both heavily use physics related number crunching and have the code paths to do so on Kepler hardware, performance should seem to be well above what is expected from a generic 3TF card.
Charlie D said: The problem for Nvidia is that once you venture outside of that narrow list of tailored programs, performance is likely to fall off a cliff, with peaky performance the likes of which haven’t been seen in a long time. On some games, GK104 will handily trounce a 7970, on others, it will probably lose to a Pitcairn.
Charlie D said: When Kepler is released, you can reasonably expect extremely peaky performance. For some games, specifically those running Nvidia middleware, it should fly. For the rest, performance is likely to fall off the proverbial cliff. Hard. So hard that it will likely be hard pressed to beat AMD’s mid-range card.
If the same results are seen in the reviews it looks pretty decent. Perhaps the magic bios ups the clocks significantly.
What would the scale on the left be?
So NOW it's running 10-20% slower than Tahiti, and can't match the compute performance. Except where you have PhysX. Without PhysX you're looking at 7870-level performance at best.
I'll just leave those quotes up there without too much further comment, but those who claim Charlie Demerjian is "almost always right" should come back and take a look after release. It might be a sobering read. To my eyes, Semi|Accurate is more about pulling in interested readers who are hungry for information on a highly secretive upcoming product, than it is about reporting any kind of truth. Sure they get some things right, but if you make enough contradictory predictions, then at least some of them will come true!
Unfortunately, as per usual, you seem incapable of reading.
Let's also mention where you casually throw in (without quoting it) the claim that Charlie said GK104 will perform at 7870 level AT BEST without PhysX. That is not something he claimed: he said that in some games it will be ahead of Tahiti, either with PhysX or in games heavily optimised for Nvidia, that in some games it will be on par or behind, and that in the worst case it will be competing with Pitcairn.
The problem for Nvidia is that once you venture outside of that narrow list of tailored programs, performance is likely to fall off a cliff, with peaky performance the likes of which haven’t been seen in a long time. On some games, GK104 will handily trounce a 7970, on others, it will probably lose to a Pitcairn.
...
When Kepler is released, you can reasonably expect extremely peaky performance. For some games, specifically those running Nvidia middleware, it should fly. For the rest, performance is likely to fall off the proverbial cliff. Hard. So hard that it will likely be hard pressed to beat AMD’s mid-range card.
Heh...
Not much to explain really. Nvidia are handling the shaders differently, so we won't know the real performance figures until it's released. 1536 shaders doesn't mean so much until we know what each one is capable of. 6Gbps for the memory bandwidth makes no sense whatsoever, given that the GTX580 was 194GB/s. Now 6.0GHz would be not too far off a realistic memory speed, but this seems like a pretty stupid mistake for Nvidia to make, so I'm going to say this is yet another fake.
Beyond that, the base clock speed at 1006MHz is very high (to me it would indicate the absence of 'hot-clocked' shaders if it were true), and the bump in speed to the overclocked 'turbo' frequency is very small (~5%). A TDP of 195W would probably put its power consumption around that of the 7970. But like I said, I doubt this is genuine - I can't see such a schoolboy error making its way through in an official document (even an internal one).
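To put rough numbers on that bandwidth point: "6Gbps" only makes sense as a per-pin data rate, not as a total bandwidth figure. A quick sanity check in Python (the 256-bit bus comes from the Charlie quotes above; the per-pin rates here are assumptions for illustration):

```python
# GDDR5 peak bandwidth = (bus width in bits / 8 bits per byte) * per-pin data rate
def bandwidth_gb_per_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Return peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

# Rumoured GK104: 256-bit bus at 6 Gbps per pin
print(bandwidth_gb_per_s(256, 6.0))  # 192.0 GB/s

# For comparison, a 384-bit bus at ~4 Gbps per pin (GTX580-class)
print(bandwidth_gb_per_s(384, 4.0))  # 192.0 GB/s
```

So read as a per-pin rate, 6Gbps on a 256-bit bus lands right around GTX580-level total bandwidth, which is why 6.0GHz effective memory clock is the plausible interpretation.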
What they've done is decoupled the geometry clock from the shader clock. So while the base clock may be 1006MHz, the shader clock doesn't have to be 2012MHz...