And with so many fewer shaders there's no reason why it should be using so much power. All very odd. How did he measure the power usage?
For starters, the 580 was undervolted.

Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
Device ID is 6718, which should be the 6950; 6719 should be the 6970.
Not sure what is going on, silly as it is.
It doesn't matter the moment has gone.
Both Raven and Rroff claim to have sources and be in on the action when in fact they haven't a clue, like the rest of us.
Sad tbh.
How do you know that?
That's interesting, only it's the other way around, isn't it?
5850 - 6899, 5870 - 6898
6850 - 6739, 6870 - 6738
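Just to lay the IDs from this thread out in one place: if the "other way around" pattern from the 58xx/68xx pairs holds, 6718 would actually be the 6970 and 6719 the 6950 - that's an assumption from the pattern, not something confirmed anywhere here. A quick Python sketch:

```python
# Device IDs as quoted in this thread, kept as strings exactly as posted.
# The 69xx entries follow the "lower ID = higher-end card" pattern seen in
# the 58xx/68xx pairs -- an assumption, not a confirmed mapping.
DEVICE_IDS = {
    "6898": "HD 5870",
    "6899": "HD 5850",
    "6738": "HD 6870",
    "6739": "HD 6850",
    "6718": "HD 6970",  # assumed from the pattern
    "6719": "HD 6950",  # assumed from the pattern
}

def card_for(device_id: str) -> str:
    """Return the card name for a quoted device ID, or 'unknown'."""
    return DEVICE_IDS.get(device_id, "unknown")

print(card_for("6718"))  # -> HD 6970, if the pattern holds
```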
40% faster than a 580 GTX? Cool - now Wednesday's worth waiting for. Where was the post?
I'd love to think that was really the 6950 and the 6970 had, say, 1920 SP... but as I said before, I have a good source who says the 6970 is 1536 SP.
EDIT: Or maybe I "made that all up" like I "normally" do and just happened to get a fairly arbitrary number correct?
We can't count it out yet... if that did turn out to be the 6950 (which seems unlikely) then the 6970 would quite feasibly be in the 30+% faster than 580 ball park.
As I said before tho, my source is the same as my info on the 58xx cards - which anyone can look back and find I was correct about.
Again I'll point out: firstly, 1536 was in NO WAY arbitrary at all; secondly, LOTS of people were guessing 1536 even before the magical really early date you said it (weeks/months after others had guessed it).
Your source on the 5870: you say stuff like that, then say go look it up, then no one does, and in the future you'll claim that as "proof" you were right, as you are now.
The 5870 was expected by EVERYONE to be a 256-bit, 1600-shader card, so what magical new piece of information did you guess at that no one else did?
Yeah Gareth, we have been over this: he has updated to a test build of GPU-Z that shows the correct specs for the 6970 - that would be the one showing 1536 SPs. And the official release date is the 15th.
Ohh, I missed that... so the 6970 has fewer SPs than the 5870?
Yes, but given the numbers AMD have been rolling out, 1536 SP on the new architecture should give a 40-50% performance increase over 1600 SP on the old.
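If that claim is right, each new-architecture shader has to be doing roughly 1.5x the work of an old one. A quick back-of-envelope sketch (my own arithmetic, ignoring clock speed and memory bandwidth entirely):

```python
# If a 1536-SP card is 40-50% faster than a 1600-SP card, how much more
# work is each new shader doing? Purely illustrative; clocks and memory
# bandwidth are ignored.
old_sp, new_sp = 1600, 1536

for pct in (0.40, 0.50):
    per_sp_gain = (1 + pct) * old_sp / new_sp
    print(f"{pct:.0%} faster overall -> ~{per_sp_gain:.2f}x per-shader throughput")
```

That works out at roughly 1.46-1.56x per shader, which is a big per-unit jump for one generation.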
I don't recall seeing anyone claiming 1536 SP (other than where I got my info from) before I said it - but I'm happy to be proved wrong. Certainly no one here was making such claims, but I can believe it may have been mentioned months ago elsewhere, as these guys would have known for months.
When I was claiming 1600 SP for the 5870, the only people making that claim were myself and Charlie; pretty much everyone else was saying different numbers until Charlie came out with it, and then suddenly everyone changed their tune.
EDIT: To be fair, I think you were claiming 1600 SP would be the logical step forward around that time as well, as IIRC you've always held the same view re: architecture changes for ATI/nVidia generationally.
Because a general doubling has happened every generation since... errm, I'm not sure, it was so long ago - was it the GF 256 that was the first 1-pipeline card? Then 2, 4, 8, 16; then we got a little in between DX9 and DX10, with AMD adding shaders rather than pipelines. Even the 2900XT to 4870 wasn't unexpected, 320 to 800: the 2900XT was clearly cut down to fit at 80nm (it was meant for 65nm) and was probably a 400-shader part to start with.
But you're basically saying the only people claiming it were me and this really well-known guy who publishes his info on the interweb for everyone to see. I'm not sure you can claim public knowledge released by Charlie as "you were right about it".
The whole 32/28nm thing will throw a spanner in the works; realistically you'd have expected the equivalent of 2400 shaders at 32nm and 3200 at 28nm. Nvidia will WANT 1024 shaders in their next gen, but will they, for the first time in a long while, use their heads and go slightly smaller to bring yields up? Who knows - head says they can't be stupid enough to do it AGAIN, heart says Dear Leader really is that arrogant.
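For what it's worth, figures like 2400 and 3200 are just the idealised die-shrink arithmetic - shader count tracking transistor density, which scales with the square of the feature-size ratio. Real chips never scale that cleanly (power, yields, non-shader logic), so treat this as ballpark only:

```python
# Idealised shrink maths: assume shader count tracks transistor density,
# which scales with the square of the linear feature-size ratio.
# Real designs never hit this exactly.
base_node_nm = 40
base_shaders = 1600  # HD 5870 as the 40nm reference part

for node_nm in (32, 28):
    density_gain = (base_node_nm / node_nm) ** 2
    print(f"{node_nm}nm: ~{density_gain:.2f}x density -> "
          f"~{base_shaders * density_gain:.0f} shaders")
```

That gives roughly 2500 at 32nm and 3265 at 28nm - the same ballpark as the 2400/3200 quoted above.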
My information honestly didn't come from Charlie, but I do believe he did post about it a couple of days before I came out with it.
I'm not really an expert on this, but I can comprehend the general gist of the issue.