ATI cuts 6950 allocation

Well, I'm sure that AMD will price it competitively. It's been a long time since an AMD product was priced above its Nvidia performance equivalent, and I don't see any reason this would change now. If the 6970 is truly only slightly above GTX 570 performance, I'm sure we will see it priced similarly or lower.

Gibbo has already confirmed it's a lot higher though :(
 
Mfw this **** turns out to be true: [intense rage guy image]
 
I have just caved in and ordered a second GTX 580 before the prices go up and stocks run completely out. I had a 10% discount code which expires tomorrow, and I didn't want to risk missing out on a very cheap 580. All of the latest 6900 news has been negative, and some of the leakers appear to have real cards. As you get closer to release the leaks tend to get more accurate, so it now seems pretty obvious the 6970 will join a long list of slower-than-Fermi cards. Instead of Cayman being The Ferminator, it now looks like it will get Ferminated. If I thought the 6970 had a decent chance of beating the GTX 580 I would have held out.

Now I just need to hope my HX850 PSU can handle two 580s.

edit: I think it is likely that 6900 boxes will include a driver disk. I doubt most of the leakers/spoilers will be using hacked or unofficial drivers, although they may not yet be fully optimised.
 
I expect performance to be much greater than what we are seeing; I'm sure the new PowerTune feature will increase performance once the user enables it, and this guy with the 6970 obviously has no access to it.

If PowerTune is validated for 130-250 watts, you can choose between maximum performance and minimum power use. It is entirely user-configurable. If you want to run a game fast, choose the max setting and the card will draw ~250 watts; if you don't mind losing some FPS, use the minimum setting and the card will only draw ~130 watts. Of course, you are also able to choose other options between the max and min settings.
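In practice that description boils down to one rule: the user picks a power cap and the card clamps it to the validated range. A minimal sketch, assuming a made-up choose_power_cap helper (the real setting lived in AMD's Catalyst control panel, not in any public API):

```python
# Sketch of the PowerTune behaviour described above: a user-chosen
# power cap clamped to the validated 130-250 W range quoted in the
# post. choose_power_cap() is a hypothetical helper for illustration.

POWER_MIN_W = 130  # minimum validated board power
POWER_MAX_W = 250  # maximum validated board power

def choose_power_cap(requested_watts: int) -> int:
    """Clamp a requested power cap into the validated range."""
    return max(POWER_MIN_W, min(POWER_MAX_W, requested_watts))

print(choose_power_cap(300))  # 250 -> max performance, card eats ~250 W
print(choose_power_cap(100))  # 130 -> lose some FPS, card eats ~130 W
print(choose_power_cap(190))  # 190 -> anything in between is allowed
```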
 
Something looks wrong to me; Gibbo has posted stating that it's a good card for the price. Unless it's going to go for £200-235, I can't see a point in buying it at all.

I can't even see a point in going for a 6990 after experiencing the 4870X2 stuttering.

I have to say I'm holding my breath in hope that these are all incorrect.
 
If that's the case, AMD will have taken a major step backwards in efficiency. A 20% performance increase over a 5870 makes little sense when you compare it to the efficiency improvements they made with the 6800s.

I've said this many times before, but it's worth mentioning again:

GPU design is not just about the current generation, but about setting a platform for the next couple of generations. It seems that AMD have taken a fairly large step forward this time around, in terms of architectural rearrangement, and a slight performance penalty can be expected as part of that. But if this allows the architecture to be scaled to larger design sizes on smaller manufacturing processes, then it can be a good thing in the long term.

We saw this with Fermi, and it's possible we will see this with Cayman also. In the case of Cayman, we already know that it was designed with a 32nm process in mind, which would presumably have led to a larger GPU design (i.e. more transistors), and so effective scalability would have played a larger part than at 40nm.

For this same reason, I have been saying that I would expect only a very small improvement over Cypress in terms of performance per watt (additional overhead due to increased control logic circuitry). But we will certainly have to wait until Wednesday for any real power-consumption figures.
 
Going by the "leaks" so far, the performance bump is more like the one from the HD 4870 to the HD 4890.

It'll still be interesting to see how it performs when legit reviews start cropping up. I wonder if Guru3D will do their "preliminary" BS just to grab hits again.
 
I expect performance to be much greater than what we are seeing; I'm sure the new PowerTune feature will increase performance once the user enables it, and this guy with the 6970 obviously has no access to it.

So... by overclocking the card (albeit by adjusting the power threshold rather than by direct clock-speed manipulation) you expect performance to increase? That's not particularly surprising...

Something looks wrong to me; Gibbo has posted stating that it's a good card for the price.

When is the last time that you saw a salesman say that one of his products was poor value? :confused:

I mean, I'm not saying that he's wrong per se - just that what he says is not really indicative of anything at all. Gibbo's job is to shift the large quantities of GPUs that he has in stock, and generating hype is always going to be a part of that. He's just a good salesman.
 
At the heart of AMD's strategy is value for money. So, as it looks likely that the 6970 will come in as the second-fastest single-GPU card on the market, it will continue the pattern that we see year after year. The 4870 was 10-15% behind the 280 (13% on average, iirc).

It must be Antilles that will be 30-40% faster than a 580, maybe a bit more. We'll see; it would be nice if the usual predictable pattern was shaken up, but then again, at worst, the 6970 will still be a very fast card.
 
I've said this many times before, but it's worth mentioning again:

GPU design is not just about the current generation, but about setting a platform for the next couple of generations. It seems that AMD have taken a fairly large step forward this time around, in terms of architectural rearrangement, and a slight performance penalty can be expected as part of that. But if this allows the architecture to be scaled to larger design sizes on smaller manufacturing processes, then it can be a good thing in the long term.

We saw this with Fermi, and it's possible we will see this with Cayman also. In the case of Cayman, we already know that it was designed with a 32nm process in mind, which would presumably have led to a larger GPU design (i.e. more transistors), and so effective scalability would have played a larger part than at 40nm.

For this same reason, I have been saying that I would expect only a very small improvement over Cypress in terms of performance per watt (additional overhead due to increased control logic circuitry). But we will certainly have to wait until Wednesday for any real power-consumption figures.

But all that should also apply to the Barts-based cards, and look at how they turned out.
 
Gibbo has already confirmed it's a lot higher though :(

Not really. Price changes in the channel, with rebates going through to pay back the difference, are basically VERY common. On the very slim off-chance that they made a Barts that is 50% bigger (255 mm² vs 380 mm², give or take a few mm²; see the quick check below) but only 5-10% faster, and AMD made a big booboo, then AMD would still have a core almost 40% smaller than Nvidia's. They'd change the final pricing, anyone already purchasing cards would get the money back, and prices would, without question, offer better price/performance than Nvidia, because that's what AMD do.

Unless Nvidia go and decide to take a hit on every sale just for the sake of it, but I doubt they'd kill pricing THIS early.

Think the GTX 470 at £300+ at launch and how laughable that was, then how good the pricing was at £220 and below, and how frankly insanely good it was at £165.
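As an aside, the die-size arithmetic in the post above is easy to check. A quick sketch using only the figures quoted there (255 mm² and 380 mm² are the poster's rough estimates, not confirmed numbers):

```python
# Sanity-checking the "50% bigger" claim from the quoted die sizes.
# Both figures are rough estimates taken from the post above.

barts_mm2 = 255   # Barts die size quoted in the thread
cayman_mm2 = 380  # rumoured Cayman die size, give or take a few mm^2

growth_pct = (cayman_mm2 / barts_mm2 - 1) * 100
print(f"Cayman vs Barts: ~{growth_pct:.0f}% bigger")  # ~49%, i.e. "50% bigger"
```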
 
It could explain why OcUK are confident they have more than enough cards to last throughout Christmas. Poor performance + high price = fewer sales.
 
When is the last time that you saw a salesman say that one of his products was poor value? :confused:

I mean, I'm not saying that he's wrong per se - just that what he says is not really indicative of anything at all. Gibbo's job is to shift the large quantities of GPUs that he has in stock, and generating hype is always going to be a part of that. He's just a good salesman.

I do agree with your comment, but he knows people on these forums are not stupid, and I can't see fewer than 95% of the people on these forums doing their own investigation into how good a product really is.

Also, I can't see OcUK investing in a heavy volume of stock unless they are sure to shift the product, unless OcUK didn't do their own homework into whether a product is worth promoting or buying.

All I can ultimately say is that I don't care that much about the leaks, as we have no verification from sources that can be called trusted, or from AMD themselves.

It's all hearsay and conjecture.
 
But all that should also apply to the Barts-based cards, and look at how they turned out.

Not really... Barts was a Cypress-style architecture with an improved front-end, rebalanced shader-to-texture ratio, and other minor architectural tweaks.

Cayman seems to be a far more significant reworking of the architecture, taken with "one eye on the next couple of generations". This is where short-term (i.e. current-generation) inefficiencies are likely to creep in.
 
I've said this many times before, but it's worth mentioning again:

GPU design is not just about the current generation, but about setting a platform for the next couple of generations. It seems that AMD have taken a fairly large step forward this time around, in terms of architectural rearrangement, and a slight performance penalty can be expected as part of that. But if this allows the architecture to be scaled to larger design sizes on smaller manufacturing processes, then it can be a good thing in the long term.

We saw this with Fermi, and it's possible we will see this with Cayman also. In the case of Cayman, we already know that it was designed with a 32nm process in mind, which would presumably have led to a larger GPU design (i.e. more transistors), and so effective scalability would have played a larger part than at 40nm.

For this same reason, I have been saying that I would expect only a very small improvement over Cypress in terms of performance per watt (additional overhead due to increased control logic circuitry). But we will certainly have to wait until Wednesday for any real power-consumption figures.

No, we didn't; please stop saying this. Sorry, but the 2900 XT was a "new" generation, and the 8800 GTX was a new generation: a completely, radically different architecture (unified shaders) from what came before.

Fermi is NOT an unexpectedly different architecture; it's EXACTLY what anyone expected. It performs EXACTLY as you'd have expected a 512-shader 285 GTX to perform, in exactly the same ballpark. There is no loss in performance due to the architecture; the only "loss" came from being unable to release at the clock speed and shader count they meant to, and that had nothing to do with the architecture.

Expecting Cayman's performance per mm² or per shader to be LOWER than Cypress's is utterly, utterly laughable.
 
Not really... Barts was a Cypress-style architecture with an improved front-end, rebalanced shader-to-texture ratio, and other minor architectural tweaks.

Cayman seems to be a far more significant reworking of the architecture, taken with "one eye on the next couple of generations". This is where short-term (i.e. current-generation) inefficiencies are likely to creep in.


Improving the front end to the point that you can cut the core size by, what, over 25%, maintain very similar performance, radically increase performance in some situations (CrossFire, tessellation), and radically improve performance/W, performance/mm², and performance/£ is NOT minor any way you cut it.

The front end changed a LOT, and it's almost half the core.
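To put rough numbers on that: a quick sketch of the perf/mm² gain. Barts' 255 mm² comes from this thread; Cypress at ~334 mm² and Barts at ~95% of Cypress performance are assumptions based on figures widely reported at the time, so treat the output as a ballpark only:

```python
# Ballpark perf/mm^2 gain from the Barts front-end rework. The 255 mm^2
# figure is from this thread; ~334 mm^2 for Cypress and ~95% relative
# performance are assumed from contemporary reports.

cypress_mm2 = 334
barts_mm2 = 255
relative_perf = 0.95  # HD 6870 vs HD 5870, approximate

area_ratio = barts_mm2 / cypress_mm2
print(f"Die area cut by ~{(1 - area_ratio) * 100:.0f}%")                  # ~24%
print(f"Perf/mm^2 up by ~{(relative_perf / area_ratio - 1) * 100:.0f}%")  # ~24%
```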
 