
'Final' 6990 specs

Hi Duff-Man, I know you weren't suggesting the GTX580 yet :-)
I think we have some exciting times ahead ;)
I'm not sure if I get micro stutter, but to play Black Ops I have to disable CrossFire. I thought I would give BFBC2 a go and was rather surprised how smooth it was - lower FPS but smooth? Screen tearing seemed worse, though.
 
Hm, I'm waiting for the 6900 series and really looking forward to my purchase, but I'm now wondering... am I going to need a new PSU also?

I'm using a Corsair TX650, with:
MSI X58M
Core i7 920
6GB OCZ Gold
Intel SSD
2 WD Green 1tb drives
CD/DVD writer, a H50 and a few fans..
 
Well, think about it: ATI have a lot of redundancy in the tech (60% maybe), and this will most likely be the last serious round of 40nm cards, so there's little point holding back if you don't have to. The power usage looks like it's been massively improved on, and it seems yields are ideal.

I think it's likely, TBH.
 
Hm, I'm waiting for the 6900 series and really looking forward to my purchase, but I'm now wondering... am I going to need a new PSU also?

I'm using a Corsair TX650, with:
MSI X58M
Core i7 920
6GB OCZ Gold
Intel SSD
2 WD Green 1tb drives
CD/DVD writer, a H50 and a few fans..

I have the same PSU and was thinking exactly the same thing all day.
 
Well, think about it: ATI have a lot of redundancy in the tech (60% maybe)

I'm not sure what you mean by this... 60% redundancy in what?

The power usage looks like it's been massively improved on, and it seems yields are ideal.

The stated TDP of the 1120 stream processor Barts core is 150W, and it draws a fair amount more than this under full load (furmark). If you believe that a 1920 SP chip with more memory is going to operate at around 150W then you are deluding yourself.
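
As a rough sanity check of that, here is a naive scaling of the quoted Barts figure by shader count alone (illustrative only; it ignores clocks, voltage, memory and architectural differences):

    # Naive back-of-envelope scaling by shader count alone (illustrative only;
    # real power depends on clocks, voltage, memory and architecture).
    barts_sps = 1120      # stream processors in the quoted Barts part
    barts_tdp_w = 150     # its stated TDP
    cayman_sps = 1920     # rumoured SP count for the top Cayman part

    naive_w = barts_tdp_w * cayman_sps / barts_sps
    print(f"naive scaled estimate: {naive_w:.0f} W")   # ~257 W, well above 150 W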

Also, what evidence are you citing to suggest that "yields are ideal"?

Until the cards are released you are free to believe whatever you like, but it simply isn't going to happen.
 
Hm, I'm waiting for the 6900 series and really looking forward to my purchase, but I'm now wondering... am I going to need a new PSU also?

I'm using a Corsair TX650, with:
MSI X58M
Core i7 920
6GB OCZ Gold
Intel SSD
2 WD Green 1tb drives
CD/DVD writer, a H50 and a few fans..

I doubt it.

Spec in sig pulls just under 300W at load; your 650W PSU will be fine.
 
I'm not sure what you mean by this... 60% redundancy in what?
The stated TDP of the 1120 stream processor Barts core is 150W, and it draws a fair amount more than this under full load (furmark). If you believe that a 1920 SP chip with more memory is going to operate at around 150W then you are deluding yourself.

Also, what evidence are you citing to suggest that "yields are ideal"?

Until the cards are released you are free to believe whatever you like, but it simply isn't going to happen.



Well, if you look at this latest round of news, it seems very likely the HD6990 is set to use the 1920 core, and the 1920 SP chip is around 150 watts...

So let's say the 1920SP part is from the HD6970; then it's fair to think the card is going to come in at around 150 watts, is it not?

We can't take the Barts chip's numbers and apply them to Cayman, and you have to keep my original post in mind, but if we take the 150-160 watt max load numbers of the 1120 SP core, which might very well be a failed Cayman chip, then straight away we would have a viable sub-300 watt 2240 SP card.

So if this round of rumour is true and the 1920SP core is 150 watts, as it seems to be, then I think the HD6990 might be using two Barts Pro chips, and not the full-fat XT.

It's hard to say what the difference between the two will be, and it could just be clock speed, but it's far from "delusional" to suggest we are not going to see a higher-spec core than what seems to have been planned for the HD6990; if anything it could be looked at the other way round.
 
Well, if you look at this latest round of news, it seems very likely the HD6990 is set to use the 1920 core, and the 1920 SP chip is around 150 watts...

The "300W" TDP is a very flexible number. From the AMD slides that have been released it seems that AMD are implementing an adaptive power-draw limitation mechanism to ensure that cards do not exceed a particular power draw. How they will relate that power draw cap to the stated TDP remains to be seen.

So let's say the 1920SP part is from the HD6970; then it's fair to think the card is going to come in at around 150 watts, is it not?

No. For one thing, adding a second GPU to a board does not require quite double the power (there are some efficiency savings). For another, the AMD slides list the TDP of the 6970 as "<300W", and of the 6950 as "<225W".

We can't take the Barts chip's numbers and apply them to Cayman, and you have to keep my original post in mind, but if we take the 150-160 watt max load numbers of the 1120 SP core, which might very well be a failed Cayman chip, then straight away we would have a viable sub-300 watt 2240 SP card.

Cayman uses a "4D" SP setup (stream processors arranged into groups of four, rather than groups of five as in Barts). So, there is no way that a Barts core is a "failed Cayman" chip. They are completely different architectures. Even if it were somehow a reduced Cayman, then it would use significantly less power than the full-fat chip.

You're right that you can't directly compare the two architectures, but per-shader, Cayman will only require MORE power than Barts... Why? Because of the change to the 4D architecture. This should bring better "performance per stream processor" on average, and may even improve "performance per watt" (we will need to wait and see on this one), but it will not improve "power draw per shader".


So if this round of rumour is true and the 1920SP core is 150 watts, as it seems to be, then I think the HD6990 might be using two Barts Pro chips, and not the full-fat XT.

As above, there is nothing to suggest that the 1920 SP chip will be a 150W part. To be so would require it to use less power than the 1120SP Barts (which, as I have already explained, will not happen).

It's hard to say what the difference between the two will be, and it could just be clock speed, but it's far from "delusional" to suggest we are not going to see a higher-spec core than what seems to have been planned for the HD6990; if anything it could be looked at the other way round.

You are arguing from the point of view of hope, rather than realism. If you insist on continuing to take this approach then that's fine, but every time you reinforce it, it will just make you look more foolish when the chip is released.

Okay, how about this for putting my money where my mouth is... If the 1920SP Cayman part uses less than 150W, under normal gaming situations at 100% GPU load (so 2560res in a decent game, but no need for Furmark), then I will buy one for you :p
 
Don't know what Jigger is smoking, to be honest, but a good point has been raised: if the 6990 is 300W under max load but other loads are 250W, this is where the power management will come in, dynamically upclocking parts of the core. Take an example clock speed of 750MHz that in Furmark would produce 300W but in Crysis only 250W: if it can use CPU-like Turbo to hit the 300W TDP even in Crysis, it looks very much like Cayman might dynamically increase clock speeds, so at stock it would run 750MHz, but in Crysis it would scale that up to 825MHz or so to use the full 300W.
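
A minimal sketch of the sort of power-capped boost loop being described (purely illustrative; the cap, clocks and step size are assumptions, not AMD's actual algorithm):

    # Toy power-capped boost loop: raise the clock while the measured board
    # power stays under the TDP cap, back off when a heavy load exceeds it.
    TDP_CAP_W = 300
    BASE_MHZ  = 750
    MAX_MHZ   = 825
    STEP_MHZ  = 5

    def next_clock(current_mhz: int, measured_power_w: float) -> int:
        """Pick the clock for the next interval from the last power reading."""
        if measured_power_w > TDP_CAP_W and current_mhz > BASE_MHZ:
            return current_mhz - STEP_MHZ   # over budget (Furmark-like load): back off
        if measured_power_w < TDP_CAP_W and current_mhz < MAX_MHZ:
            return current_mhz + STEP_MHZ   # headroom left (Crysis-like load): boost
        return current_mhz

    # e.g. a game drawing 250W at 750MHz gets stepped up towards 825MHz,
    # while a stress test at 300W+ is held at (or pulled back towards) base clock.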

As for wattage:

Don't forget we don't know the TDP of a 6970 yet, so it's all guesswork. And don't forget the best-binned 5870 cores were used, with all shaders enabled, for the 5970, with double the memory, and it still came in inside that 300W limit. So you've got two roughly 180W cores running at 300W rather than 360W; likewise a 5850 was 151W, yet the 5970 uses full shader cores and manages with similar clocks (were they both 725MHz, or was the 5970 750MHz?).

In other words, the 5970 manages to come in at about the same power as two 5850s, but has 20% more shaders.
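
Putting the same rough figures side by side (these are the approximate numbers quoted in this thread, not measured values):

    # Two ~180W single-GPU cards naively suggest ~360W, yet the dual card was
    # built to stay inside the 300W limit with all shaders enabled.
    hd5870_w = 180                 # rough single-card figure used above
    hd5970_limit_w = 300           # limit the dual card had to fit inside
    naive_dual_w = 2 * hd5870_w
    saving = 1 - hd5970_limit_w / naive_dual_w
    print(f"naive doubling: {naive_dual_w} W, actual budget: {hd5970_limit_w} W "
          f"({saving:.0%} below naive doubling)")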

The difference between the best chips in the centre of the wafer and the worst chips on the outer edge is bigger than people estimate.

As for 60% redundancy, I have no idea where you came up with that. There isn't any, it's as simple as that; apart from via redundancy I don't think any is built in. Based on one picture that's in no way clear and can be interpreted a dozen different ways, people have always assumed there were extra shaders on a 4870 and therefore the 5870/6870/6970. It's incredibly unlikely, even more so with a small-core strategy, as you'd lose most of the advantage, and it's entirely unnecessary because they sell chips without fully working shaders as lower-end cards. But to guess at 60% is sheer lunacy.

If the 6970 is only 215W, it will still be very difficult to get a 6990 in at 300W; however, with better power management and clocking up when the load is smaller, some very fancy things might happen.

What's more interesting is that if it can clock different parts of the core separately, we could see epic performance across all games, because some games are ROP-limited, some tessellation-limited (not many yet, but in the future), some shader-limited, some memory-limited. What if the card is load-balancing every part of the core? If a game is waiting on ROPs, not shaders, then rather than clock everything up to 825MHz, why not keep the shaders (which aren't maxed out) at 750MHz and the ROPs up at 950MHz, for the same overall TDP?
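
As a hypothetical sketch of that idea (the block names, the crude linear power model and every number here are made up for illustration, not real Cayman figures):

    # Give spare power budget to whichever block the game is bottlenecked on,
    # keeping the other blocks at their base clocks.
    POWER_BUDGET_W = 300
    WATTS_PER_MHZ  = {"shaders": 0.20, "rops": 0.06, "memory": 0.04}  # crude linear model
    BASE_MHZ       = {"shaders": 750,  "rops": 750,  "memory": 1200}
    MAX_MHZ        = {"shaders": 950,  "rops": 950,  "memory": 1400}

    def rebalance(bottleneck: str) -> dict:
        """Spend whatever budget is left after base clocks on the bottleneck block."""
        clocks = dict(BASE_MHZ)
        used = sum(WATTS_PER_MHZ[b] * clocks[b] for b in clocks)
        spare = POWER_BUDGET_W - used
        boost = min(int(spare / WATTS_PER_MHZ[bottleneck]),
                    MAX_MHZ[bottleneck] - clocks[bottleneck])
        clocks[bottleneck] += max(boost, 0)
        return clocks

    # A ROP-limited game: shaders stay at 750MHz, ROPs get pushed to 950MHz.
    print(rebalance("rops"))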

This power management just isn't the Furmark-limit stuff Nvidia has; everything points to it being far more complex, and they already protected the VRMs against things like Furmark in the current gen.


Then you've got the whole process issue. Maybe they'll run several weeks of ultra-expensive wafers and take less profit to make higher-quality cores for 6990s; there's a tonne of options to use more expensive and better-quality metals, to insulate and prevent leakage, and to reduce the voltage being used. But if they do that, I don't think the 6990 will be as good value as previous dual-GPU cards, though still better than 2x 6970s, otherwise there's little to no point.
 
I'm also interested to see what AMDs dynamic power limitation features can bring to the table. There is a LOT of potential in this (far more than just restricting clockspeeds in furmark), so it'll be interesting to see how far they have developed the technology. Also, once you have the hardware capacity to measure power draw and adjust clockspeeds / GPU sub-components, you can improve the process later via drivers or firmware.

I think that this kind of advanced power management will become increasingly important with future GPU generations - particularly in the multi-GPU market. If nothing else, it will place more weight on strong "performance-per-Watt" engineering, which can only be a good thing.
 
This should bring better "performance per stream processor" on average, and may even improve "performance per watt" (we will need to wait and see on this one), but it will not improve "power draw per shader".

If the 1920SP Cayman part uses less than 150W, under normal gaming situations at 100% GPU load (so 2560res in a decent game, but no need for Furmark), then I will buy one for you :p

Agree with everything but the "will not improve power per shader" part: it won't inherently do so, but it won't necessarily *not* do so either. Weird sentence, I know.

We don't know the design or the transistors chosen for it; it could easily use less power per shader, or more, and it's almost impossible to say. There's a choice of a few different types of transistor in any process, and it could be that the "5th" transcendental shader in the 5D setup actually requires, simply due to spacing, design or something else, a higher-leakage transistor, or it might not. The 4D cluster could equally require the use of a leakier transistor.

The 4D cluster uses less power than the 5D cluster, purely because if it's saving 10% die area you're almost certainly using less power. But if a 5D cluster uses 1 watt, each shader might use 0.2W, while a 4D cluster using 0.9W might use 0.225W each - we really don't know. Better design, fewer vias, less power for the interconnects.
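
Writing out that hypothetical arithmetic (the watt figures are the made-up ones from the paragraph above, not measurements):

    # Whole-cluster power can fall while power *per shader* still rises.
    w_5d, shaders_5d = 1.0, 5
    w_4d, shaders_4d = 0.9, 4
    print(f"5D: {w_5d / shaders_5d:.3f} W per shader")   # 0.200 W
    print(f"4D: {w_4d / shaders_4d:.3f} W per shader")   # 0.225 W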

In reality I'd expect power per shader to stay fairly static, but performance per watt will go up quite a bit. I'd expect Barts with the same performance as Cayman to use more power, but it would also use WAY more than 150W.

As for the last bit, I think I'm going to have to insist the 6970 will use 150W, just in case AMD pull a bait-and-switch and it actually ends up on the 28nm process, because then you'll have to buy me one too? :p


Jigger and Psychas are mad. Raven's made no bold claims and he's not trolling, but the idea that the 6970 will use 150W is madness: you would literally have to have the 6970 as a 28nm core, with GloFo/AMD pulling off the most mind-blowing, secretive and brilliant launch of any tech ever. I'm going to give that a 0.0002% chance of happening.
 
I wasn't stating that the 6970 would use 150 watts - how can it do that when Barts, even with its 5D clusters, uses that much already? I was referring to Raven's comment.
 
I'm also interested to see what AMDs dynamic power limitation features can bring to the table. There is a LOT of potential in this (far more than just restricting clockspeeds in furmark), so it'll be interesting to see how far they have developed the technology. Also, once you have the hardware capacity to measure power draw and adjust clockspeeds / GPU sub-components, you can improve the process later via drivers or firmware.

I think that this kind of advanced power management will become increasingly important with future GPU generations - particularly in the multi-GPU market. If nothing else, it will place more weight on strong "performance-per-Watt" engineering, which can only be a good thing.

That's the thing: it's not too much of a stretch to believe they'll do it, even less when you consider their APUs will almost certainly have to have that kind of capability so Turbo can work seamlessly across the entire APU die. It will give the CPU and GPU on an APU insanely more flexibility: better gaming performance, and better CPU performance when the GPU is barely required.

Power gating/dynamic control was always coming even without APUs; power management is pretty key, and AMD/Nvidia have made pretty huge strides in idle power performance in the last couple of gens anyway.

AFAIK, Intel has "turbo" on their GPUs now, doesn't it - in the upcoming Sandy Bridge cores, that is; on current stuff I honestly don't know.

I think what it will essentially do is change what the 300W limit means, more than anything. A 5970 clocked to use 300W in Crysis would be clocked at a point where it was using 350W or more in Furmark, so with a feature like this you'd essentially have a higher real TDP on the 6990 vs a 5970, even if both were limited to 300W.
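
A quick illustration of that point, using the rough figures above and assuming power scales more or less linearly with clock over this small range:

    # With static clocks chosen so the worst case (Furmark) fits the cap, a game
    # never uses the full budget; a dynamic cap lets the game clock rise until
    # the game itself reaches the cap.
    tdp_cap_w      = 300
    game_w_at_base = 250   # hypothetical in-game draw at the base clock
    headroom = tdp_cap_w / game_w_at_base - 1
    print(f"game clock could rise roughly {headroom:.0%} before hitting the cap")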
 