I really hope Kepler destroys this £550 joke of AMD's
And then it will be more expensive, but it's Nvidia, so that's OK; they're allowed expensive cards.
Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
The idea that Nvidia will go from hugely less efficient per mm2 to much more so in one generation is silly.
From a technical point of view I have not been impressed by the 7970
Graphics card snob.
Let me get my time machine. Will Kepler do 3 monitors on the one card?
Far too expensive considering what I paid for a 6970 last year. Is it me, or has Nvidia switched focus away from gaming cards towards Tegra and GPU computing?
If that's true I can see us paying through the nose even more for cards. I so wish Intel had stuck with making new GPU cards.
Not really.
AMD have switched from a VLIW design to a general compute-based architecture. Very little has remained the same, and the two architectures are not really comparable. Yes, GCN is a lot more flexible, but a great number of transistors are dedicated to achieving that flexibility. Because of this, when compared to the Cayman design, AMD have lost a lot of per-transistor efficiency. Yes, this still translates to an increase in performance per unit die area, but only because 28nm allows double the transistor density (something else that AMD have not actually achieved this time around, but that's another discussion).
I fully expect that the Kepler mid-to-high-end range will be very close to, if not better than, the 7970 in terms of performance per unit die area. If this turns out to be correct, I'm going to be quoting your statement above suggesting that such a thing would be "silly".
EDIT: As an example: If you compare "performance per transistor" for the 7970 vs the GTX580, then the GTX580 is already ahead (7970 has 44% more transistors than the GTX580, but is only around 20% faster). So, if Nvidia can maintain the per-transistor performance of Fermi (a reasonable assumption if the architecture is similar) then they can come in at a 20% lower transistor density than AMD, and still beat the 7970 in terms of "performance-per-mm2".
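The arithmetic in that edit can be sketched out directly. A quick normalised calculation, using only the ratios quoted above (44% more transistors, ~20% faster; no absolute transistor counts are assumed):

```python
# Perf-per-transistor comparison using the ratios quoted above.
# Everything is normalised to the GTX 580 = 1.0.
gtx580_transistors = 1.0
hd7970_transistors = 1.44    # "44% more transistors"
gtx580_perf = 1.0
hd7970_perf = 1.20           # "only around 20% faster"

gtx580_ppt = gtx580_perf / gtx580_transistors
hd7970_ppt = hd7970_perf / hd7970_transistors

# ~0.83: the 7970 delivers roughly 83% of the GTX 580's
# performance per transistor, i.e. the GTX 580 is ahead.
print(f"7970 perf/transistor vs GTX 580: {hd7970_ppt / gtx580_ppt:.2f}")
```

On those numbers the 7970 would need roughly 1.2x the GTX 580's transistor budget at Fermi-level per-transistor performance, which is where the headroom claim comes from.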
I agree that I found the idea of Nvidia taking back the 'value' metrics (performance per Watt / per mm^2 / per transistor) this generation to be unlikely - but that was before the release of the 7970. From a technical point of view I have not been impressed by the 7970 (apart from its overclocking potential).
Waiting for Kepler, even then I might wait for Maxwell or w/e it's called. Can't see my 580 getting pushed any time soon.
Actually, in reality, on an early 28nm process with yield and other problems, the 7970 has bang on double the transistor density of their previous chip - which was itself on an early process with yield issues.
I assume you're talking about 5870 to 7970? In which case: No - transistor density increased by 'only' 83% [2.15Bn@334mm2 vs 4.31Bn@365mm2], against 105% expected.
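The 83% figure, and the ~105% you'd expect from ideal area scaling, both fall out of the numbers quoted:

```python
# Transistor density: HD 5870 (40nm) vs HD 7970 (28nm), figures as quoted above.
density_5870 = 2150 / 334    # million transistors per mm^2, ~6.4
density_7970 = 4310 / 365    # ~11.8

actual_gain = density_7970 / density_5870 - 1    # ~0.83, i.e. 83%
expected_gain = (40 / 28) ** 2 - 1               # ~1.04, i.e. ~105% from ideal 40nm->28nm area scaling

print(f"actual density gain:      {actual_gain:.0%}")
print(f"ideal 40nm->28nm scaling: {expected_gain:.0%}")
```

So the 7970 lands noticeably short of a full node's worth of density improvement, which is the point being made.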
... And constantly comparing overclocked 7970 performance to stock 6970s? Is this going to be your new "thing"?
I get what he's saying though. AMD haven't increased their power usage like with every other new gen, so the performance gap is lower than it should have been. It really is obvious AMD went very low on the stock clocks, and all it can be down to is having no real competition and wanting to get that power figure low.
Power restriction is certainly a major issue in GPU design, and as the process size shrinks (meaning more and faster switching transistors) this will only continue. With a ~300W practical limit on the heat output of a single chip, power-efficient processor designs will increasingly be the key to achieving performance.
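To illustrate why that ~300W ceiling bites: dynamic switching power grows roughly as C x V^2 x f, and pushing clocks usually means pushing voltage too, so power climbs much faster than performance. A toy model (every number below is invented for illustration, not a real GPU figure):

```python
# Toy model only: dynamic power P = C * V^2 * f.
# The constant and the clock/voltage figures are made up for illustration.
def power_w(freq_ghz, volts, c=95.0):
    """Dynamic switching power; c folds in switched capacitance and activity."""
    return c * volts ** 2 * freq_ghz

# Assume each +10% clock step needs ~+5% voltage to stay stable (assumed).
base_f, base_v = 0.925, 1.10
for step in range(5):
    f = base_f * 1.10 ** step
    v = base_v * 1.05 ** step
    print(f"{f:.2f} GHz @ {v:.2f} V -> {power_w(f, v):.0f} W")
```

Each 10% clock step costs roughly 21% more power in this model, so performance gained per extra Watt shrinks as you approach the cap - hence designs that do more per Watt win.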
Nvidia's research for Kepler and Maxwell seems to have been directed primarily towards power efficiency - at least according to their many claims about improving performance-per-Watt. If they've been successful, we could see Kepler GPUs that clock fairly high without drawing excessive power. If not, Nvidia will face exactly the same issue as AMD about where to draw the line between power draw and performance.
It looks like 28nm will be here for the next two years at least, so we'll see at least one new generation from both parties. Performance-per-Watt could well become the key metric for 28nm and beyond.