'Final' 6990 specs

I wasn't stating that the 6970 would use 150 watts; how could it do that when Barts, even using 5D clusters, uses about the same amount? I was referring to raven's comment.

You jumped in and called him a troll because he stated that Jigger's idea of the 6970 using 150W was mad, which apparently you agree with, so I can't see the reason to call him a troll for his comment.
 
Agree with everything but the "will not improve power per shader" bit: it won't inherently do so, but it won't necessarily "not" do that either. Weird sentence.

I agree it's perhaps a bit strong to suggest that it "will not" improve, but pragmatically I find it to be highly unlikely. We will have to wait and see, but feel free to call me on it if I turn out to be wrong.

The 4D cluster uses less power than a 5D cluster, purely because if it's saving 10% die area, you're almost certainly using less power.

Power usage per cluster will undoubtedly drop, but from AMD's own slides we can speculate that power use per SP will rise: they state that a 4D cluster uses 10% less area than a 5D cluster... So a 10% reduction in area for a 20% reduction in the number of SPs. If you assume a uniform power draw per unit area (which is a reasonable base level to start from, all things being equal), you're looking at a 12.5% increase in power per shader.
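
To sanity-check that number, here's the same arithmetic written out (the 10% area and 20% SP figures are from the slides; the uniform power-per-area assumption is the one stated above):

```python
# Rough check of the power-per-SP claim, assuming power scales uniformly with die area.
cluster_area_4d = 0.90    # 4D cluster area relative to a 5D cluster (10% smaller, per AMD's slide)
sps_4d = 4 / 5            # 4 SPs per cluster instead of 5 (20% fewer)

power_per_sp = cluster_area_4d / sps_4d
print(f"Relative power per SP: {power_per_sp:.3f}x")   # 1.125x, i.e. ~12.5% higher
```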

Personally, I'm expecting roughly equal performance per Watt (which would in itself be a big achievement for a more advanced and scalable architecture, on the same process), but an increase in power-draw per shader.
 
I agree it's perhaps a bit strong to suggest that it "will not" improve, but pragmatically I find it to be highly unlikely. We will have to wait and see, but feel free to call me on it if I turn out to be wrong.



Power usage per cluster will undoubtedly drop, but from AMD's own slides we can speculate that power use per SP will rise: they state that a 4D cluster uses 10% less area than a 5D cluster... So a 10% reduction in area for a 20% reduction in the number of SPs. If you assume a uniform power draw per unit area (which is a reasonable base level to start from, all things being equal), you're looking at a 12.5% increase in power per shader.

Personally, I'm expecting roughly equal performance per Watt (which would in itself be a big achievement for a more advanced and scalable architecture, on the same process), but an increase in power-draw per shader.

Performance per watt for shaders, or the whole card, also kind of depends on what you're comparing to; performance per watt vs Cypress should be SIGNIFICANTLY improved.

I mean, simply factor in the fact that a 1920-shader Cypress would be roughly 18-20% faster and cost around 18% more power (as the memory size staying the same would mean no real increase there). A Cayman with 1920 shaders should honestly offer a fairly similar power draw: 20% more than 180W is give or take 215W, and if each Cayman shader provides 20% more performance then it's going to be 20% faster for a similar power draw, which will increase performance per watt.
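
Putting those rough numbers into a quick calculation (the 180W figure and the 20% scaling are the guesses above, not official specs):

```python
# Hypothetical 1920-SP Cypress, assuming power scales with shader count
# and memory power stays flat (both assumptions from the post above).
cypress_power = 180          # W, rough figure for the 1600-SP card
shader_scale = 1920 / 1600   # 20% more shaders

scaled_power = cypress_power * shader_scale
print(f"Estimated 1920-SP power: ~{scaled_power:.0f} W")   # ~216 W

# If each Cayman shader then does ~20% more work at that similar power draw,
# the card ends up ~20% faster for roughly the same watts: a perf-per-watt gain.
```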

Though if the 6970 is a 2GB card, then a better comparison will be against a 2GB 5870, because the memory power will change the apparent efficiency.

Vs Barts, I'd still expect a performance-per-watt increase, because again a very small increase in power per shader vs 20% more performance per shader will improve performance per watt quite easily. But with the front-end tweaks it already has, Barts is already significantly more efficient than Cypress.
 
What about a 150 watt 6950? Any chance of that?

Very, very unlikely. The main issue is that we have no idea how much power a 6970 uses; AMD were expecting over 225W for the 6970 and under for the 6950.

The huge issue no one's really brought up is that if the 6950 comes in very close to 225W but doesn't have an 8-pin connector, overclocking might not be fantastic. I'd expect both cards to have a 6-pin and an 8-pin, though.

I wouldn't bet against very close to 200W for a 6950 and 230W for a 6970, but that also relies on knowing just how cut down the 6950 is: if it's really a 1536-shader card, that's a 20% shader drop, whereas the 5850 only had a 10% shader drop.

It's still impressing me how little info on the cards has leaked.

I'm thinking 230-240W for a 6970 if it's 1920 shaders and 2GB of memory, and 205-215W give or take for a 6950 if it has 10% fewer shaders. If the 6950 is 1536 shaders and 1GB of memory, it could actually drop quite a lot of power, probably not far off 5870 power usage.
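
For what it's worth, those guesses hang together if you just scale the 6970 figure by the shader cut (everything below is speculation from this post, not leaked data):

```python
# Crude scaling of the guessed 6970 power by the possible 6950 shader cuts.
power_6970 = 235   # W, midpoint of the 230-240W guess
for cut, label in ((0.10, "10% fewer shaders"), (0.20, "1536 shaders (20% cut)")):
    print(f"6950 with {label}: ~{power_6970 * (1 - cut):.0f} W")
# ~212 W and ~188 W; dropping to 1GB of memory would take the second figure lower still.
```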
 
I mean performance-per-Watt for the entire card. I'm expecting it to be similar... It would be nice to see an improvement, and it may well happen, but I'm not expecting it to be significant. I agree that the 2GB 5870 would be the right Cypress card to compare to for this measure.

We've been down this road before so no need to rehash old arguments, but since the proportion of transistors given over to control logic within each SP group has increased (20% reduction in shaders for a 10% reduction in area), I'm expecting the extra power required to run these to roughly balance out the increase in efficiency that we will see from a 4D shader architecture.

It would be nice to see a small improvement though, and it's entirely realistic that we will. I wouldn't necessarily consider the architecture to be "a failure" if it doesn't show global performance-per-Watt improvements though, as the improved scalability will provide a platform for future generations.


edit:

As for the last bit, I think I'm going to have to just insist the 6970 will use 150W, just in case AMD pull a bait and switch and it actually ends up on the 28nm process, because then you'll have to buy me one too? :p

Okay sure :p

Pretty damn sure that my money is safe though!
 
Very interesting reading guys.
One thought did occur to me after going back to look at that power meter slide again.
[slide image: atipowercontainment.jpg]

The way I'm reading this power containment thing is that it will completely replace overclocking via core clock and vcore adjustments: you will set the desired TDP and the hardware/software will do its thing to keep below that TDP. Now, as drunkenmaster suggested earlier, this might be different clock speeds for different parts of the core, or something not quite as sophisticated. Hopefully, as the slide does say "dynamically adjust clocks for various blocks to enforce tdp", it really will be a case of more stream processors being brought up to speed as more power is needed, but we will have to wait and see.
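
Purely to illustrate the sort of loop the slide seems to describe, here's a toy sketch; none of this is AMD's actual algorithm, and the block names, numbers and functions are all made up:

```python
import random
from dataclasses import dataclass

TDP_TARGET_W = 225     # user-chosen cap (made-up value)
CLOCK_STEP_MHZ = 10

@dataclass
class Block:
    name: str
    clock: int       # MHz
    min_clock: int
    max_clock: int

def estimate_power(blocks):
    """Stand-in for the activity-counter based power estimate the slide hints at."""
    return 0.12 * sum(b.clock for b in blocks) + random.uniform(0, 30)

def enforce_tdp(blocks):
    power = estimate_power(blocks)
    for b in blocks:
        if power > TDP_TARGET_W:
            b.clock = max(b.min_clock, b.clock - CLOCK_STEP_MHZ)   # throttle this block
        elif power < 0.95 * TDP_TARGET_W:
            b.clock = min(b.max_clock, b.clock + CLOCK_STEP_MHZ)   # claw clocks back up

blocks = [Block("shader engine", 880, 500, 880), Block("front end", 880, 500, 880)]
for _ in range(100):    # one iteration per "tick"
    enforce_tdp(blocks)
print({b.name: b.clock for b in blocks})
```

In a scheme like that, "overclocking" largely becomes raising TDP_TARGET_W, which is exactly why the locked-cap worry below matters.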

The only downside I can see is that it might completely quack up overclocking: if the hardware/software doesn't allow you to set a higher TDP, then more performance is a no-no.
I'm sure the overclocking community will find a way round it but who knows.
 
I don't understand why people are getting so obsessed with power draw, surely it's not because of the relatively tiny increase in your electricity bill???

As long as the card is quiet, stable, pumping out high fps and not using an unreasonable amount of power, I couldn't care less about it.
 
...the way I'm reading this power containment thing is that it will completely replace overclocking via core clock and vcore adjustments

Absolutely...

Power containment will most certainly have an effect on overclocking; and as the power containment system becomes increasingly sophisticated, overclocking (in the traditional sense of adjusting clock speeds) becomes increasingly irrelevant.

Overclocking with a highly sophisticated power-containment system would then become adjusting the target TDP. I suspect that we will have some control over this, but that there will be a relatively conservative cap enforced. It is a very real concern that increasing the max power draw can cause damage to components - more so than traditional overclocking as a sophisticated power management system would work to try and keep the card as close to the max power draw as possible, in as wide a range of scenarios as possible. Whether this cap would be enforced in BIOS or at a hardware level is anyone's guess. If it's enforced at a hardware level it will be very difficult to circumvent.

Anyway, this is only the first iteration of an advanced power management system. We will have to wait and see how effective it is, how much flexibility it provides, and how much effect it has on overclocking.
 
As long as the card is quiet, stable, pumping out high fps and not using an unreasonable amount of power, I couldn't care less about it.

Greater power draw increases heat output in direct proportion, which makes it increasingly more difficult to cool the GPU efficiently and quietly (unless you use large fans that dump the heat back into the case). It also increases stress on board components and power regulation hardware, which further drives up costs. Not to mention that larger and more stable PSUs are required to run the cards...

The concern is not so much for the present (a ~300W ceiling is manageable on all fronts I mention above), but preparation for the future, where the increasingly large (in terms of #transistors) designs will undoubtedly have the potential for even greater power draw. A sophisticated power management system allows you to maximise performance under the constraint of reasonable power draw.
 
I mean performance-per-Watt for the entire card. I'm expecting it to be similar... It would be nice to see an improvement, and it may well happen, but I'm not expecting it to be significant. I agree that the 2GB 5870 would be the right Cypress card to compare to for this measure.

We've been down this road before so no need to rehash old arguments, but since the proportion of transistors given over to control logic within each SP group has increased (20% reduction in shaders for a 10% reduction in area), I'm expecting the extra power required to run these to roughly balance out the increase in efficiency that we will see from a 4D shader architecture.

It would be nice to see a small improvement though, and it's entirely realistic that we will. I wouldn't necessarily consider the architecture to be "a failure" if it doesn't show global performance-per-Watt improvements though, as the improved scalability will provide a platform for future generations.


edit:



Okay sure :p

Pretty damn sure that my money is safe though!

The only problem is the slides say the logic was simplified and made smaller ;) The shader size is reduced by 10% for the same performance; they don't mention the logic there, but other slides say both the thread dispatcher and core logic were simplified (read: reduced) because it's a far more even and simple setup, so core logic has also been reduced, which is, IIRC, what I argued before :p

The reason it's not 20% smaller for 20% fewer shaders (in a given space) is that they've not moved from 4 simple + 1 complex shader to 4 simple shaders, but to 4 "medium" shaders that can do a little more than the previous simple ones. The new shaders are bigger than the 4 simple ones, but smaller than the complex one.

The whole core right the way through is much more simplified, with basically identical shaders throughout. There's no chance in hell it will end up with the same performance per watt as Cypress, but I expect only a small boost over Barts (because Barts has a lot of improvements already). Most likely we're talking about a roughly 180W Cypress vs a probably 230-240W 6970 that's likely to be a good 40% faster. It would really need to be pushing 250W with the same amount of memory to not improve performance/watt over Cypress.
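
To put a number on that break-even point (the 180W and 40% figures are the guesses from this post, not measured results):

```python
# Perf-per-watt break-even for a hypothetical Cayman that's 40% faster than a 180W Cypress.
cypress_power = 180    # W (assumed)
speedup = 1.40         # assumed Cayman performance advantage

print(f"Break-even power: ~{cypress_power * speedup:.0f} W")   # ~252 W

for cayman_power in (230, 240, 250):
    gain = speedup / (cayman_power / cypress_power) - 1
    print(f"At {cayman_power} W: {gain:+.1%} perf/W vs Cypress")
# 230 W -> +9.6%, 240 W -> +5.0%, 250 W -> +0.8%
```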

As for another poster asking why we care about TDP, in this case it's because the 6990/dual GF110 (still really questionable whether that will happen) will almost certainly WANT to fit within the 300W bracket, otherwise LOTS of companies won't put it into their computers for sale. Yes, we on OCUK who might buy one (OK, those who post here but buy somewhere cheap :p) don't care if it uses 600W in a single slot, but when it comes to Dell sales vs OCUK sales, AMD/Nvidia care about Dell, and OCUK sales numbers don't affect them in the slightest.

As for overclocking, we've seen it pretty much confirmed there will be a TDP overclocking tool. In fact I wonder if TDP will replace clock speed entirely, or whether both will work together, but it seems very likely we'll see a TDP limit either in CCC or in AMD's Overdrive tool (the software for CPU overclocking) updated to offer the option.

Which will actually poop on my consistent advice that no one should bother using Furmark, as no game will load the card as much.

For instance, I normally argue that if you can overclock to say 900MHz in Furmark and no further while staying stable, that has no direct bearing on your maximum stable overclock in, say, Crysis, because a 580GTX for instance will pull 300W in Furmark and 250W in games, so a 580GTX might be stable 75MHz higher in Crysis (or 2MHz; basically Furmark tells you nothing).

However, if the GPU will automatically adjust clocks to fit within a TDP limit you set, then knowing where you're stable, temperature- and clock-wise, at 300W will finally be very useful.
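
A crude way to see why (the 580GTX wattages are the rough figures from the previous paragraph, the 772MHz stock clock is the reference spec, and the linear power-vs-clock model is just an assumption):

```python
# With a card-enforced power cap, the clock each workload settles at depends on
# how much power that workload burns per MHz. Very rough linear model.
cap_w = 300
stock_clock = 772                                # MHz, GTX 580 reference core clock
workloads = {"Furmark": 300, "Crysis": 250}      # W at stock clock (rough figures)

for name, stock_power in workloads.items():
    watts_per_mhz = stock_power / stock_clock    # crude linear assumption
    settled_clock = cap_w / watts_per_mhz
    print(f"{name}: ~{settled_clock:.0f} MHz before hitting the {cap_w} W cap")
# Furmark: ~772 MHz (no headroom at stock), Crysis: ~926 MHz (roughly 20% headroom)
```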
 
If these specs turn out to be accurate, the price is kept in check, and the reviews are positive, then I might look to sell off my 5970 and pick up one of these, particularly if they turn out to be decent overclockers. But I want to know how they're planning to keep temperatures within limits, as the 5970 is a bit quick to down-clock the second GPU when overclocked and under load.

Maybe they will put the VRM circuitry for the second core under the heatsink proper this time :p
 
I don't understand why people are getting so obsessed with power draw, surely it's not because of the relatively tiny increase in your electricity bill???

As long as the card is quiet, stable, pumping out high fps and not using an unreasonable amount of power, I couldn't care less about it.

I for one find the sound of a jet engine coming from my computer rather annoying! :p
 

The fact is, if the HD6990 is using 2x 1920 SP chips and is coming in under 300W, ATI have plenty of room to manoeuvre with the HD6970 card.

Most people expected the HD6990 would be a dual Barts Pro card, using a pair of ~1600 SP chips from the HD6950, with the ~1920 SP Barts XT going to make the HD6970. Either way something is not adding up properly.
 
The fact is, if the HD6990 is using 2x 1920 SP chips and is coming in under 300W, ATI have plenty of room to manoeuvre with the HD6970 card.

Most people expected the HD6990 would be a dual Barts Pro card, using a pair of ~1600 SP chips from the HD6950, with the ~1920 SP Barts XT going to make the HD6970. Either way something is not adding up properly.

Isn't the rumour that the 6990 uses two 6950 GPUs at 1920 shaders, and that the 6970 is supposedly a higher shader GPU? That'd make a lot more sense when it comes to the TDP of the 6970 compared to the 6990.
 
Most people expected the HD6990 would be a dual Barts Pro card, using a pair of ~1600 SP chips from the HD6950, with the ~1920 SP Barts XT going to make the HD6970. Either way something is not adding up properly.

I'm assuming you mean Cayman and not Barts there...

I agree that something is not "adding up" with regard to TDP and power draw, although by making use of an advanced power management system we could see a dual "full-fat" Cayman card that is constrained somewhat artificially to around 300W or a little more. This seems the only viable way a dual 1920SP Cayman card could operate within a ~300W window. How much this power containment affects performance remains to be seen.
 