I've still yet to see an IQ reduction. I'll point out again that the 10.10s look BETTER in the screenshots linked: both on older drivers and on Fermi there's a noticeable difference in the brightness of the AF, with "banding" at various distances where the colour is inaccurate.
It seems people have decided to call this "better". The new drivers offer a different IQ, and it's not surprising that 10.9, which doesn't support the 6870 and its new quality AF, can't produce the same quality AF. In the newer drivers there's UNIFORM lighting from near to far; the same cannot be said for the 10.9 drivers or for Fermi.
I do love that Nvidia are spinning better lighting uniformity as "worse IQ", just because AMD's last-gen cards, and Nvidia's current-gen cards, seem to get the lighting incorrect.
Seriously, look at the pictures: both the old-driver and the Fermi pictures show differing levels of brightness on the ground, while the 10.10 on 68xx cards shows more uniform lighting.
AMD increase IQ, and Nvidia cry foul just because it's DIFFERENT to theirs, so they insist it's worse.
As for Rroff, again, I also find it hilarious that he was being "savaged". I asked a simple question, where he got his numbers from, that's it; I'm not sure how that's being savaged. As said, there are several prediction threads on various forums, and the one on SemiAccurate has a LOT of talk about sticking to a formation of 16-shader groups per cluster; it was WIDELY discussed for WEEKS before Rroff posted his "guess". 1536 was predicted by several people for the 6950, with 1920 for the 6970, mostly by people assuming a cluster size of 64 shaders, in the week before Rroff made his bold pronouncement.
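For what it's worth, the cluster arithmetic behind those predictions can be sketched in a few lines. The 64-shader cluster size is the assumption from those threads, not a confirmed spec, and the counts below are the predicted totals mentioned above:

```python
# Sketch of the prediction-thread arithmetic: do the predicted shader
# totals divide evenly into the assumed 64-shader clusters?
# (64 per cluster is the forum assumption, not a confirmed spec.)
CLUSTER = 64

for total in (1536, 1920):
    clusters, remainder = divmod(total, CLUSTER)
    print(f"{total} shaders -> {clusters} clusters, remainder {remainder}")
    # 1536 -> 24 clusters, 1920 -> 30 clusters, both with remainder 0,
    # which is why those two numbers kept coming up in the guesses.
```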
As said, I was questioning the "it's 7 to 14% faster" stuff you manage to come up with for every new card, which really is never correct; neither are your guesses. In PhysX discussions you have this habit of claiming victory in a past thread months later, then using that as fact in a new thread to try and win some new argument.
Well, yet again, all the leaks in the past few days suggest a 1920-shader 6970 and a 3840-shader 6990, making Rroff wrong, again.
I'd also say I'm not surprised if AMD go for a lower relative shader count on the 6950 vs the 5850. The 5850 only had a 10% drop in shaders but was capable of the same clocks, so the 33% cheaper 5850, when overclocked, was no more than 10% behind, often less, making the 5870 bad value. This time around I've suggested a bigger shader drop but a smaller drop in clock speed, giving a similar overall drop in speed to last gen, but less overclocking headroom and more difference between the cards, which will mean there's more reason to spend extra on the 6970 than there was on the 5870. I also won't be surprised to see 1920/1792 or 1664/1536 for the 6970, 6950 and, probably at some point, a 6930.
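The 5850-vs-5870 value point can be sketched with rough numbers. The 10% shader gap, the "33% cheaper" figure, and the same-clocks-when-overclocked claim come from the argument above; treating performance as simply shaders × clock is my own crude simplification, not a real performance model:

```python
# Crude value sketch: performance approximated as shaders * clock.
# 5870: 1600 shaders; 5850: 1440 shaders (the 10% drop mentioned above),
# both assumed overclocked to the same 850 MHz.
def relative_perf(shaders, clock_mhz):
    return shaders * clock_mhz

hd5870 = relative_perf(1600, 850)
hd5850_oc = relative_perf(1440, 850)

print(hd5850_oc / hd5870)  # ~0.9, i.e. within 10% of a 5870
# At roughly 67% of the price ("33% cheaper"), performance-per-pound:
print((hd5850_oc / 0.67) / (hd5870 / 1.0))  # ~1.34x the 5870's
```

Which is the whole point: when the cheaper card overclocks to within 10% at two-thirds of the price, the bigger card is hard to justify.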