
7970: Another Disappointment from AMD

They are a business, in it to make money, so while they have the best card out there, even if it's only 20% faster, they are going to charge a premium. They could have charged more for the 5870 as it was top dog for a good while, but they never did; now they are, deal with it.
 
Bit-Tech said:
For now, AMD has certainly thrown down the gauntlet. The HD 7970 3GB is a huge improvement over the HD 6970 2GB and GTX 580 1.5GB, and it matches this performance with quiet operation and genuinely useful innovations such as the new ZeroCore technology

Toms Hardware said:
It pushes game performance in a big way, too. With months to go before Nvidia can retaliate with its upcoming Kepler architecture, AMD is able to claim it sells the fastest single-GPU graphics card—no small achievement for the company more known for its value proposition as of late.

HardOCP said:
AMD has taken the performance crown back, and currently has the fastest single-GPU video card for PC gaming. We experienced dramatic improvements over the Radeon HD 6970, and consistent improvements over an overclocked GeForce GTX 580. In our experiences, it was the newest DX11 games that pushed Tessellation which received the most improvement. If future games use more Tessellation, we may see the Radeon HD 7970 separate itself in a greater degree from the Radeon HD 6970 and GeForce GTX 580.

Hardware Canucks said:
Most of you reading this are interested in gaming performance and let’s be perfectly clear: the HD 7970 3GB represents a giant leap forward for AMD GPU performance. It left the HD 6970 in the dust and handily beat NVIDIA’s GTX 580 in nearly every single game. By looking at the results in individual cases, it’s apparent that AMD’s new flagship card excels in Shogun 2, Deus Ex and The Witcher 2 but it still retains a significant advantage over the GTX 580 in Battlefield 3 and Crysis 2, two areas where driver optimizations could improve performance further. The GTX 580 3GB makes things interesting at extreme detail settings but the result is still a rather convincing win for the HD 7970.

I can't quote every reviewer but I'm seeing a trend.

What I would point out is that when AMD went from the 4870 to the 5870 they also bumped the TDP up from 150 watts to 200 watts, plus they had a die shrink to go along with it. The 7970 only has a die shrink and a new design, with no extra power to play with. Typically, the big increases we have seen over the years have meant a big increase in power consumption, so AMD should be congratulated for making a very efficient design that gives us a big boost in performance without the extra heat and power.
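To put that in perf-per-watt terms, here's a rough sketch; the performance uplifts (~60% for the 5870, ~40% for the 7970) are ballpark assumptions of mine, not figures from the reviews:

```python
# Rough perf-per-watt comparison using the TDP figures from the post above.
# The performance ratios are assumptions for illustration, not review data.
generations = {
    # name: (TDP ratio vs predecessor, assumed performance ratio)
    "HD 5870 vs HD 4870": (200 / 150, 1.6),  # TDP up ~33%, assumed ~60% faster
    "HD 7970 vs HD 6970": (250 / 250, 1.4),  # same TDP, assumed ~40% faster
}
for gen, (tdp_ratio, perf_ratio) in generations.items():
    # perf-per-watt improvement = performance gain divided by power gain
    print(f"{gen}: perf-per-watt x{perf_ratio / tdp_ratio:.2f}")
```

Even on those rough numbers, the 7970's entire uplift comes "for free" in power terms, which is the efficiency point being made above.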
 

Most of those quotes don't really give any numbers and are just a lot of hyperbole. Some appear to suggest that in some cases the 7970 only trades blows with the GTX 580. If that's an indication of the best defence that can be given for the 7970, then I'd say my original comments were vindicated.
 

Those quotes are based on real numbers that the reviewers have spent days gathering; I think it's safe to say most of those people know what they're doing. And they do not suggest it trades blows with the GTX 580: one said the 3GB GTX 580 makes things interesting, but the results are a convincing win for the 7970.

Come January the 9th you can either buy a GTX 580 3GB (if you can find one) for £400 or a Radeon HD 7970 for £440. The Radeon costs 10% more for anywhere between 10% and 50% more performance; that's a lot of money, granted, but it is better value.
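A quick sanity check of that value claim, using the prices quoted above:

```python
# Price/performance check for the figures above (prices in GBP).
gtx580_price, hd7970_price = 400, 440
premium = hd7970_price / gtx580_price - 1          # 10% more money
for perf_gain in (0.10, 0.50):                     # the 10%-50% range above
    # value ratio > 1 means more performance per pound than the GTX 580
    value = (1 + perf_gain) / (1 + premium)
    print(f"+{perf_gain:.0%} performance for +{premium:.0%} price -> "
          f"{value:.2f}x performance per pound")
```

At the bottom of the range it's merely a wash on value; anywhere above that, the 7970 comes out ahead per pound spent.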
 
The other option would be to offer an intelligent rebuttal. Unless of course you're trying to audition for a job at AMD.

There is no intelligent rebuttal to make, especially to a post like yours. Plus, 99.99% of the people on this forum don't know squat (including me), hence it's mostly futile anyway. Criticise the end performance, fine, but I find it absurd when people criticise the actual design process, as if they could have done better, lol. In this respect, as I said before, your comments about transistors, clock speeds, etc. are complete nonsense. I don't claim to know better, but I trust AMD's design thought processes far more than yours.
 
Mmm, people aren't really getting that I'm not talking about the selling price at all: selling prices can go up and down depending on all kinds of strategies and business conditions. I'm purely interested in what AMD have been able to achieve with their technology per mm^2 of silicon, which will impact their profits; what they choose to charge the consumer is another issue entirely. If AMD's silicon is uncompetitive with Nvidia's, ultimately they will either have to make a loss or charge too much.
 

In summary: "your post was nonsense but I don't really know why". It is a terrible argument to say that just because you're not an expert in a field you can't criticise experts. Companies are assessed according to their relative results (aka 'competitiveness'), and if one is falling behind another in a design driven product, it's fair to say that their experts aren't performing.
 
If this thing was priced at a 6970 price point it would fly off shelves (or warehouses ;) )
 

It's just a matter of perspective:

Viewed from a "historically blind" perspective, they are great cards. The fastest GPU around, decent power consumption - what's not to like? If the reviews are taking this perspective then they are absolutely correct. After all, they can't judge the cards against people's expectations, or against the speculated performance of yet-unreleased hardware.

Taking into account the norms of GPU technology though, they haven't lived up to the expectations that many people had of them. The drop from 40nm to 28nm is (proportionally) the biggest we've seen in a long time, but that hasn't translated into performance the way that it has with previous die shrinks.


One thing has struck me though:
The 28nm process allows (in principle) a relative increase in transistor density of about 2.04 times (i.e. roughly +104%), whereas the actual transistor density has increased by 'only' 74%. This is quite out of character for previous generations, which have more closely followed the expected transistor packing density (you can check the numbers on the Wikipedia pages).

It will be interesting to see how Kepler fares in this regard. It could be that the more intricate design of GCN requires a somewhat looser transistor arrangement than the VLIW cores. But if Nvidia are also producing a ~70-80% increase in transistor density, then it's more likely indicative of issues with the 28nm process. That's not necessarily to say it's down to issues with the manufacturing process at TSMC - it could be more physical problems associated with running high-speed transistors at such small sizes. If this IS the case, then we might not see such large gains from Kepler either.
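The density shortfall is easy to check; a quick sketch using the numbers above:

```python
# Ideal vs actual density gain for the 40nm -> 28nm shrink discussed above.
ideal = (40 / 28) ** 2    # ~2.04x if density scales with the feature-size ratio squared
actual = 1.74             # the ~74% density increase quoted above
print(f"ideal: +{ideal - 1:.0%}, actual: +{actual - 1:.0%}")
print(f"achieved {actual / ideal:.0%} of the ideal scaling")
```

So Tahiti only realises roughly 85% of the density the node change allows on paper, which is the anomaly being pointed out.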
 
The 7970 has 60% more transistors than the 6970, so that's about the maximum increase in performance that we could have expected. As the chip is now very much a GPGPU, a lot of those extra transistors went into parts that won't improve graphics performance.

There is also the fact that the card will gain a lot of performance with new drivers. The 6970 gained 10-15% over the last year and it is likely that the 7970 will gain even more as it's a much larger change in design.

So it's certainly not a great GPU, but it's no Bulldozer repeat.
 
If AMD had launched Bulldozer 6 months before the Core i series and it was faster than the Core 2, I think they would have been happy.
 
OP is trolling!! :o

They have done what was required and released a card that is better than the current king of the hill.

Why is it a fail?

Flip it upside down: haven't Nvidia failed by allowing AMD to take the performance crown? I doubt Nvidia are anywhere near ready with their next gen.
 

I personally don't think it's anything to do with the process, but more to do with the TDP. A 7970 3GB is 250 watts TDP, which is the same as the HD 6970 2GB; previously the 4870 was 150 watts TDP, and the 5870 pushed it to 200 watts TDP. Looking at it like that suggests that AMD aren't willing to go near the 300 watt TDP limit for a single GPU, and that tells me Nvidia will follow the same path: the performance increases in its range of cards will be very similar to AMD's, we'll just have to wait longer.

I hope I'm wrong, because if there's an element of truth to my theory, it means the laws of physics are hampering the dramatic performance increases that we have seen previously. It could be the case that we have to wait >2 years to see any major performance jumps from here on out. :(
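The TDP progression described above, laid out as a quick sketch (figures from the posts in this thread; the ~300 W ceiling is the informal limit being discussed):

```python
# TDP progression quoted above, against the ~300 W single-GPU ceiling.
tdps = [("HD 4870", 150), ("HD 5870", 200), ("HD 6970", 250), ("HD 7970", 250)]
ceiling = 300
prev = None
for card, tdp in tdps:
    step = "" if prev is None else f" (+{tdp - prev} W)"
    print(f"{card}: {tdp} W{step}, {ceiling - tdp} W headroom")
    prev = tdp
```

Each of the earlier generations spent 50 W of headroom; the 7970 spent none, which is why the "power wall" reading of the numbers is at least plausible.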
 


+1
I was wondering if this was the case too. We may find out when companies like Intel bring out their next generation of CPUs (Haswell), and we see how the ever-decreasing node size is affecting performance and whether the laws of physics are starting to hamper the performance gains we are used to seeing.
 

This is an interesting response, and something that seemed odd to me too. As the poster below it indicates, AMD has only increased (useful?) transistor count by 64%, yet we would have expected around 100% by consideration of area alone. This seems to suggest that there is a lot of redundant silicon on the 7970. I read an article a while back that said that in the face of poor manufacturing processes, designers will double or triple up important elements that might not fab right, leading to redundant silicon. Not doing enough of this on an immature process is what led to Nvidia's problems with the GTX 480/470 release. So it could well be that 28nm is still not working at all well. By the time Nvidia come to market it might be working rather better, which would be bad news for AMD.

Against the most favourable reviews, I suppose an increase of ~40% on 64% more transistors isn't all that bad, especially if they can improve it with better drivers.

So perhaps it's a combination of a slight design fail and a big process fail.
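A back-of-envelope check of those two ratios (all figures taken from this thread, and only rough):

```python
# Back-of-envelope from the thread's figures: how far short of the "ideal"
# shrink the 7970 landed, and how well extra transistors became performance.
expected = 2.00   # ~2x transistors expected from area scaling alone
actual = 1.64     # the +64% transistor count quoted above
perf = 1.40       # ~+40% performance in the most favourable reviews
print(f"shortfall vs ideal transistor budget: {1 - actual / expected:.0%}")
print(f"performance scaling efficiency: {perf / actual:.2f}")
```

An ~18% shortfall against the ideal budget (the possible "process fail") and roughly 0.85x performance per extra transistor (the possible "design fail") matches the combination suggested above.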
 

Well, the key factor in performance will always be the number of transistors you have to play with. For a parallel device like a GPU, having twice the transistors allows you to build twice as many of your sub-components (compute cores, texture units, cache, whatever else). It won't necessarily double performance, but it's a good start. Of course, using more transistors leads to bigger, more expensive, less stable cores that produce more heat. The only way around these limiting factors is a reduction in the manufacturing process size.


You're definitely right about the heat output thing though... In the past, AMD and nvidia have been able to push their cards closer to the limits, in terms of clockspeed and voltage. This led to a general increase in power consumption (and so heat output) with each generation. This past couple of generations, as we've approached the 250-300W power consumption levels, we're starting to approach the limit of what can be cooled comfortably and quietly using air coolers, given that the heat is being emitted from such a small area (roughly 2cm x 2cm).
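A quick heat-flux estimate for those numbers (assuming a roughly 2 cm x 2 cm die, as stated above):

```python
# Heat-flux estimate: ~250-300 W emitted from a die of roughly 2 cm x 2 cm.
die_area_cm2 = 2 * 2
for watts in (250, 300):
    print(f"{watts} W / {die_area_cm2} cm^2 = {watts / die_area_cm2:.1f} W/cm^2")
```

That's in the region of 60-75 W per square centimetre, which is exactly why air cooling it quietly is becoming such a struggle.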

So, assuming power consumption stays capped at roughly today's levels (per-GPU), I agree we can't necessarily expect quite the same performance increases as in previous generations. I still maintain that process size is by far the biggest factor in GPU design though :p
 
This seems to suggest that there is a lot of redundant silicon on the 7970. I read an article a while back that said that in the face of poor manufacturing processes, designers will double or triple up important elements that might not fab right, leading to redundant silicon.

That certainly sounds plausible... It could also bode well for a refresh part later in the year.
 