Probably didn't uninstall his Nvidia drivers.
Well, I'm sure that AMD will price it competitively. It's been a long time since an AMD product was priced above its Nvidia performance equivalent, and I don't see any reason for that to change now. If the 6970 is truly only slightly above GTX 570 performance, I'm sure we will see it priced similarly or lower.
PowerTune is validated for a 130-250 W range, and it's entirely user-configurable: you can choose between maximum performance and minimum power use. If you want to run a game fast, choose the max setting and the card will draw ~250 W; if you don't mind losing some FPS, use the min setting and the card will only draw ~130 W. Of course, you can also pick settings anywhere between max and min.
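Roughly speaking, a cap like this works by estimating board power each interval and throttling the core clock whenever the estimate would exceed the user-set limit. Here's a minimal sketch of the idea; all the clocks, wattages, and names are hypothetical and this is not AMD's actual algorithm:

```python
# Minimal sketch of a PowerTune-style power cap (hypothetical values/names,
# NOT AMD's actual algorithm): estimate board power each interval and
# throttle the core clock so the estimate stays under the user-selected cap.

BASE_CLOCK_MHZ = 880.0   # Cayman's advertised core clock
MIN_CLOCK_MHZ = 500.0    # assumed throttle floor

def next_clock(estimated_power_w, power_cap_w, current_clock_mhz):
    """Scale the clock down in proportion to any overshoot of the cap."""
    if estimated_power_w <= power_cap_w:
        # Under the cap: ramp gently back toward full speed.
        return min(BASE_CLOCK_MHZ, current_clock_mhz * 1.02)
    # Over the cap: cut the clock in proportion to the overshoot.
    return max(MIN_CLOCK_MHZ, current_clock_mhz * (power_cap_w / estimated_power_w))

# Example: user drags the slider to the 130 W minimum while running a game
# that would pull ~200 W at full clocks.
clock = BASE_CLOCK_MHZ
for _ in range(5):
    est_power = 200.0 * (clock / BASE_CLOCK_MHZ)  # toy model: power scales with clock
    clock = next_clock(est_power, 130.0, clock)
    print(f"~{est_power:.0f} W -> clock {clock:.0f} MHz")
```

The point is just that the cap costs you clock speed (and therefore FPS) only when the workload would otherwise push past your chosen limit.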
If that's the case, AMD will have taken a major step backwards in efficiency. A 20% performance increase over a 5870 makes little sense when you compare it with the efficiency improvements they made with the 6800s.
I expect performance to be much greater than what we are seeing. I'm sure the new PowerTune feature will increase performance once the user adjusts it, and this guy with the 6970 obviously has no access to it yet.
Something looks wrong to me; Gibbo has posted stating that it's a good card for the price.
I've said this many times before, but it's worth mentioning again:
GPU design is not just about the current generation, but about setting a platform for the next couple of generations. It seems that AMD have taken a fairly large step forward this time around, in terms of architectural rearrangement, and a slight performance penalty can be expected as part of that. But if this allows the architecture to be scaled to larger design sizes on smaller manufacturing processes, then it can be a good thing in the long term.
We saw this with Fermi, and it's possible we will see this with Cayman also. In the case of Cayman, we already know that it was designed with a 32nm process in mind, which would presumably have led to a larger GPU design (i.e. more transistors), and so effective scalability would have played a larger part than at 40nm.
For this same reason, I have been saying that I would expect only a very small improvement over Cypress in terms of performance per watt (there's additional overhead from the increased control-logic circuitry). But we will certainly have to wait until Wednesday for any real power-consumption figures.
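To put some entirely made-up numbers on that: if Cayman were, say, 20% faster than Cypress while drawing around 220 W in games against the 5870's roughly 188 W board power, performance per watt would move by only a few percent. A quick sanity check:

```python
# Toy performance-per-watt comparison. The Cayman figures are pure
# assumptions for illustration; real numbers have to wait for the reviews.
cypress_perf, cypress_power_w = 1.00, 188.0   # HD 5870, ~188 W board power
cayman_perf,  cayman_power_w  = 1.20, 220.0   # assumed: +20% perf at ~220 W

cypress_eff = cypress_perf / cypress_power_w
cayman_eff  = cayman_perf / cayman_power_w
print(f"Cypress: {cypress_eff:.5f} perf/W")
print(f"Cayman:  {cayman_eff:.5f} perf/W ({(cayman_eff/cypress_eff - 1)*100:+.1f}%)")
```

Under those assumptions the improvement comes out to about +2.5%, i.e. basically a wash.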
Gibbo has already confirmed it's a lot higher though!
When was the last time you saw a salesman say that one of his products was poor value?
I mean, I'm not saying that he's wrong per se - just that what he says is not really indicative of anything at all. Gibbo's job is to shift the large quantities of GPUs that he has in stock, and generating hype is always going to be a part of that. He's just a good salesman.
But all that should also apply to the Barts-based cards, and look at how they turned out.
Not really... Barts was a Cypress-style architecture with an improved front-end, rebalanced shader-to-texture ratio, and other minor architectural tweaks.
Cayman seems to be a far more significant reworking of the architecture, taken with "one eye on the next couple of generations". This is where short-term (i.e. current-generation) inefficiencies are likely to creep in.