
So are you getting a 7970 or waiting for Kepler?

Waiting for Kepler, even then I might wait for Maxwell or w/e it's called. Can't see my 580 getting pushed any time soon.
 
And then it will be more expensive, but it's Nvidia, so that's OK - they are allowed expensive cards. :)

Easy's too used to having it easy from AMD regarding their pricing strategy, but is his memory failing him as to why AMD cards were so competitive and cheap when it came to the 4800 series?

The 2900 (R600) series came out and, due to Microsoft bending over for Nvidia regarding the DX10 specs, AMD got hammered with software-based AA vs hardware-based AA (IIRC). So six months later the 3800 series came out at half the price, half the power draw and half the heat - and I'm sure a die shrink too. This was to recover from the thumping AMD took due to Microsoft and their initial DX10 spec. AMD had the ability to do hardware-based AA again (4870**) (IIRC), but as AMD had spent time with the new architecture it was cheap to produce, with great results and without the AA hit. This got AMD back in the driving seat, and then the 5850s were a little bit more expensive at launch. The 6950 cards were the same price, then went up due to the financial woes of the world, but now they are charging just a little more than the slower GTX 580, with a lot of overclocking headroom from what we've seen.

I may have got a few things wrong here but I'm sure I'm not far off the mark. My head is buzzing today.

I can see why AMD are getting top dollar for their card; they've earned that right and solidified their foothold since the HD4800 series. Now they're putting the STOMP in it, but nobody is complaining about the 3GB GTX 580 prices, which is baffling :confused:.

** Edit, forgot to add in about the HD4870
 
The idea that Nvidia will go from hugely less efficient per mm2 to much more so in one generation is silly.

Not really.

AMD have switched from a VLIW design to a general compute-based architecture. Very little has remained the same, and the two architectures are not really comparable. Yes, GCN is a lot more flexible, but a great number of transistors are dedicated to achieving this flexibility. Because of this, when compared to the Cayman design, AMD have lost a lot of per-transistor efficiency. Yes, this still translates to an increase in the "performance per-unit-die-area", but only because 28nm allows double the transistor density (something else that AMD have not achieved this time around, but that's another discussion).

I fully expect that the Kepler mid-high end range will be very close to, if not better than, the 7970 in terms of performance per unit die-area. When this turns out to be correct, I'm going to be quoting your statement above suggesting that such a thing would be "silly" :p


EDIT: As an example: If you compare "performance per transistor" for the 7970 vs the GTX580, then the GTX580 is already ahead (7970 has 44% more transistors than the GTX580, but is only around 20% faster). So, if Nvidia can maintain the per-transistor performance of Fermi (a reasonable assumption if the architecture is similar) then they can come in at a 20% lower transistor density than AMD, and still beat the 7970 in terms of "performance-per-mm2".
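Quick back-of-envelope in Python for anyone who wants to check the arithmetic (transistor counts are the commonly quoted figures, and the 20% performance gap is a rough average, so treat the output as approximate):

    # Perf-per-transistor: HD7970 (Tahiti) vs GTX580 (GF110).
    gtx580_t = 3.00e9    # GF110 transistor count (commonly quoted)
    hd7970_t = 4.31e9    # Tahiti transistor count (commonly quoted)

    perf_580 = 1.00      # baseline
    perf_7970 = 1.20     # ~20% faster on average (assumed)

    ppt_580 = perf_580 / gtx580_t
    ppt_7970 = perf_7970 / hd7970_t

    print(ppt_580 / ppt_7970)        # ~1.20 - GTX580 ahead per transistor
    print(1 - ppt_7970 / ppt_580)    # ~0.17 - break-even density deficit

On these assumptions, roughly 17% lower transistor density than Tahiti is the break-even point for performance-per-mm2, so the figures above are in the right ballpark.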

I agree that I found the idea of Nvidia taking back the 'value' metrics (performance per Watt / per mm^2 / per transistor) this generation to be unlikely - but that was before the release of the 7970. From a technical point of view I have not been impressed by the 7970 (apart from its overclocking potential).
 
Far too expensive considering what I paid for a 6970 last year. Is it me, or has Nvidia switched focus away from gaming cards to focus on Tegra and GPU computing?

If that's true, I can see us paying through the nose for cards :( I so wish Intel had stuck with making the new GPU cards.
 
Far too expensive considering what I paid for a 6970 last year. Is it me, or has Nvidia switched focus away from gaming cards to focus on Tegra and GPU computing?

If that's true, I can see us paying through the nose for cards :( I so wish Intel had stuck with making the new GPU cards.

Yeh, and it's taking the mick. Who cares about power efficiency and all that green-earth-hippie crap? I want raw performance, and I don't care if I have to build a nuclear reactor in my backyard to get it :D
 
Not really.

AMD have switched from a VLIW design to a general compute-based architecture. Very little has remained the same, and the two architectures are not really comparable. Yes, GCN is a lot more flexible, but a great number of transistors are dedicated to achieving this flexibility. Because of this, when compared to the Cayman design, AMD have lost a lot of per-transistor efficiency. Yes, this still translates to an increase in the "performance per-unit-die-area", but only because 28nm allows double the transistor density (something else that AMD have not achieved this time around, but that's another discussion).

I fully expect that the Kepler mid-high end range will be very close to, if not better than, the 7970 in terms of performance per unit die-area. When this turns out to be correct, I'm going to be quoting your statement above suggesting that such a thing would be "silly" :p


EDIT: As an example: If you compare "performance per transistor" for the 7970 vs the GTX580, then the GTX580 is already ahead (7970 has 44% more transistors than the GTX580, but is only around 20% faster). So, if Nvidia can maintain the per-transistor performance of Fermi (a reasonable assumption if the architecture is similar) then they can come in at a 20% lower transistor density than AMD, and still beat the 7970 in terms of "performance-per-mm2".

I agree that I found the idea of Nvidia taking back the 'value' metrics (performance per Watt / per mm^2 / per transistor) this generation to be unlikely - but that was before the release of the 7970. From a technical point of view I have not been impressed by the 7970 (apart from its overclocking potential).

Actually, in reality, on an early 28nm process with yield issues and other process problems, the 7970 has bang on double the transistor density of AMD's previous chip, which was itself on an early process with yield issues.

When you look at transistor count vs die size increase on the 6970 you realise that transistor density went up noticeably more than die size: roughly a 15% larger die for 25% more transistors. And the 5870 had roughly 10% of its die dedicated to dealing with process problems... coincidence? No.

Likewise, the GTX 580 is not massively underclocked to fit within a TDP; the 7970 is. The 7970 at what you can call safe clocks is a good 40% faster than the GTX 580 already, and at a pretty safe overclock it's 80+% faster in Deus Ex.

More importantly, at the same clocks it's 70-100% faster than a 6970, despite not having double the transistor count over it. It has INCREASED in efficiency over last gen, not decreased.

Seriously, look at every gen and the TDP gain, then tell me the 7970 isn't massively throttled by TDP alone and nothing else.
 
Waiting for Kepler, even then I might wait for Maxwell or w/e it's called. Can't see my 580 getting pushed any time soon.

You could have a Maxwell now, although I don't see what drinking coffee has to do with it LOL! I can't wait for Kepler because my 580 is being pushed with 3D Vision; the 7970 can go and **** itself.
 
Actually, in reality, on an early 28nm process with yield issues and other process problems, the 7970 has bang on double the transistor density of AMD's previous chip, which was itself on an early process with yield issues.

I assume you're talking about 5870 to 7970? In which case: No - transistor density increased by 'only' 83% [2.15Bn@334mm2 vs 4.31Bn@365mm2], against 105% expected.
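For anyone checking the sums (die sizes and transistor counts as commonly quoted):

    # Density: HD5870 (Cypress, 40nm) vs HD7970 (Tahiti, 28nm).
    cypress_density = 2.15e9 / 334    # ~6.4M transistors per mm^2
    tahiti_density  = 4.31e9 / 365    # ~11.8M transistors per mm^2

    print(tahiti_density / cypress_density - 1)   # ~0.83 -> +83% actual

    # A 'perfect' 40nm -> 28nm shrink scales density by (40/28)^2:
    print((40 / 28) ** 2 - 1)                     # ~1.04 -> the ~105% expected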


... And constantly comparing overclocked 7970 performance to stock 6970s? Is this going to be your new "thing"?
 
I assume you're talking about 5870 to 7970? In which case: No - transistor density increased by 'only' 83% [2.15Bn@334mm2 vs 4.31Bn@365mm2], against 105% expected.


... And constantly comparing overclocked 7970 performance to stock 6970s? Is this going to be your new "thing"?

I get what he's saying though. AMD have not increased their power usage like with every other new gen, so the performance gap is lower than it should have been. It really is obvious AMD went very low on the stock clocks, and all it can be down to is having no real competition and wanting to get that power figure low. With OC cards on the table coming with core clocks over 1300MHz, if those Sapphire tables are correct, it's not hard to see where DM is coming from. I have never heard of an OC card coming with clocks so much higher than stock, especially at the high end.
 
The fastest graphics card has always demanded a premium. The problem with the 7970 is that everyone knows its reign at the top will be extremely short-lived. Unlike Fermi, which reigned supreme for almost 2 years, the 7970 will be there for 2 or 3 months. This makes for a very poor value proposition for new buyers.

IMO, anyone who spends more than £500 on one of these cards is mad, and anyone who spends more than £400 on one has more money than sense. Wait for Kepler and then buy a cut-price 7970 if you want the fastest AMD card. Nvidia will no doubt charge a similar premium, but Kepler should remain at the top of the single-GPU charts for 12-18 months, providing a better long-term proposition.
 
I get what he's saying though. AMD have not increased their power usage like with every other new gen, so the performance gap is lower than it should have been. It really is obvious AMD went very low on the stock clocks, and all it can be down to is having no real competition and wanting to get that power figure low.

I'm not arguing that power-containment isn't important - I've said on here for years that we will increasingly be limited by power consumption as process size reduces (e.g...).

I posted something on this recently, so I'll repost that rather than going over the details again:

Power restriction is certainly a major issue in GPU design, and as the process size shrinks (meaning more and faster switching transistors) this will only continue. With a ~300W practical limit on the heat output of a single chip, power-efficient processor designs will increasingly be the key to achieving performance.

Nvidia's research for Kepler and Maxwell seems to have been directed primarily towards power efficiency - at least according to their many claims about improving performance-per-Watt. If they've been successful, then we could see Kepler GPUs that can clock to fairly high speeds without drawing excessive power. If not, well, Nvidia will face exactly the same issue as AMD about where to draw the line in the sand between power draw and performance.

It looks like 28nm will be here for the next two years at least, so we'll see at least one new generation from both parties. Performance-per-Watt could well become the key metric for 28nm and beyond.
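To put some rough numbers on that power wall, here's a sketch using the textbook dynamic-power relation (P scales roughly with C*V^2*f); the baseline board power is purely hypothetical:

    # Dynamic power scales roughly with voltage^2 * frequency.
    def scaled_power(base_w, v_ratio, f_ratio):
        # Scale a baseline power figure by voltage and clock ratios.
        return base_w * v_ratio ** 2 * f_ratio

    base = 210.0   # hypothetical stock board power in watts

    # A 20% clock bump that needs a 10% voltage increase:
    print(scaled_power(base, 1.10, 1.20))   # ~305W - past the ~300W practical limit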



My issue is with constantly comparing the performance of a stock 6970 with a heavily overclocked 7970.

The stock-to-stock comparison is the most valid from the basic consumer point of view. Comparing overclocked performance can give more insight into the operation of the hardware and the quirks of the new process technology - but for it to have any validity you must compare overclocked vs overclocked (ideally for both performance and power consumption).

Picking and choosing results (overclocked vs stock, comparing 7-series vs 5-series, etc.) to reinforce a pre-existing assumption is just confirmation bias, and doesn't show much of anything. What we really need is a comparison table showing results across a wide range of games for a 6970 and a 7970, each at its maximum stable overclock, including power draw. From that we can draw some firm conclusions.



One final point: AMD have a very efficient way of restricting the power draw of the GPU in drivers (powertune). There would be nothing to stop AMD restricting the maximum power-draw to 225W (or 250W, or whatever else they wanted) and using higher stock clocks. In this way, performance would be maximised within the 225W window.
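As a toy sketch of what a powertune-style cap looks like in principle (entirely illustrative - the real thing lives in the driver/firmware and is far more sophisticated than this linear model):

    # Toy power limiter: scale the clock back only when the estimated
    # board power exceeds a fixed cap, rather than capping clocks directly.
    POWER_LIMIT_W = 225.0   # hypothetical cap

    def limited_clock(requested_mhz, estimated_power_w):
        if estimated_power_w <= POWER_LIMIT_W:
            return requested_mhz
        return requested_mhz * (POWER_LIMIT_W / estimated_power_w)

    print(limited_clock(1100, 200.0))   # light load: full 1100MHz
    print(limited_clock(1100, 260.0))   # heavy load: throttled to ~952MHz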

AMD have chosen not to do this - instead setting clocks at a level where power draw rarely exceeds the powertune limit. I suspect that once Kepler arrives we will see another card from AMD, with a higher stock clockspeed (maybe ~1100MHz?), a higher TDP and powertune limit, and a bigger cooler.
 