So according to that article, the 7950 and 7970 will be around £220 and £285 respectively; if so, I might wait until the price drops slightly.
Plus 20% VAT and the usual UK price gouging.
3 gig per GPU, nice. Great move for a stock card and a much-needed bump up, but nothing new. I would love to see a doubled-up 6 gig per GPU arrangement, for laughs. A 6GB 7990, OMG.
http://www.brightsideofnews.com/new...-mix-gcn-with-vliw4--vliw5-architectures.aspx
Was expecting more if true.
I can't help but think "AMD vs. non power of 2 buses".
A completely new, superior architecture, more ability to offload CPU tasks onto the GPU in future, and a vast improvement in GPU power, likely 70%+, and you were expecting more?
What exactly?
Is it still going to use XDR2?
Well, more, just like my post said.
More shaders, more texture units, more clock speed, etc.
GCN shaders aren't in any way comparable to VLIW4 ones. 1536 VLIW4 shaders could be slower than an 8-shader GPU on some other architecture, and a GPU with 12,400 shaders could still be slower than a 6990; shader counts across architectures tell you nothing by themselves.
GCN is, well, we'll have to see how much more efficient, but it's closer to Nvidia's design in so far as it's not VLIW any more.
Worst case scenario with VLIW4, only 1/4 of the shaders in a cluster were working on an instruction per clock; that was already an efficiency increase over VLIW5, whose worst case, unsurprisingly, was 1/5. If you look at reviews, the 6970 is sometimes way out ahead of the 5870, other times right on top of it, but even then the minimums are usually anything up to 25-30% faster than the 5870, and the architecture change is why.
Best case, VLIW4 was WORSE than VLIW5 (4 slots per cluster versus 5), yet it is almost never slower and is often 20-30% ahead on maximums and minimums, with fewer shaders; that's purely efficiency.
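A minimal sketch of the slot-filling arithmetic above. The 1/4 and 1/5 worst-case figures come from the posts themselves; everything else here is illustrative, not measured:

```python
# Fraction of a VLIW cluster doing useful work in one clock,
# given how many of its issue slots the compiler managed to fill.
def utilisation(filled_slots, slots_per_cluster):
    return filled_slots / slots_per_cluster

# Worst case: only 1 slot in the cluster can be filled that clock.
vliw5_worst = utilisation(1, 5)   # 0.20 -> 20% of the cluster busy
vliw4_worst = utilisation(1, 4)   # 0.25 -> 25% of the cluster busy

# Best case: every slot filled. VLIW5 peaks higher per cluster (5 ops
# vs 4), yet VLIW4 usually wins in practice because its *average* fill
# rate is higher, which is exactly the efficiency point being made.
print(vliw4_worst, vliw5_worst)
```

So a narrower cluster trades a lower theoretical peak for a higher guaranteed floor, and the review numbers quoted above (better minimums on the 6970) are consistent with that trade.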
Now GCN essentially has fewer shaders, but the rate at which it fills all 4 shaders per cluster should be MASSIVELY higher. VLIW4 was probably averaging close to 2-2.5 instructions per 4-shader cluster; GCN should be pretty much 100% used every clock, or that is the intention.
In theory that means a Cayman, if it could fill every shader every clock, could gain anything from 40-50% with the same shader count; that is what GCN is. So circa 2000 GCN shaders SHOULD work roughly as fast as about 3000 Cayman shaders.
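To put rough numbers on that, here is a back-of-the-envelope sketch. The utilisation figures are the speculative ones from the posts above, not measurements, and the shader counts are illustrative:

```python
# Effective shader throughput = physical shaders * average fill rate.
# All inputs below are assumptions from the thread, not real data.
def effective_shaders(physical_shaders, utilisation):
    return physical_shaders * utilisation

# Cayman (VLIW4): ~2.25 of every 4 slots filled on average (assumed).
cayman = effective_shaders(1536, 2.25 / 4)   # 864.0 effective

# Hypothetical GCN part: ~2048 shaders at near-full utilisation (assumed).
gcn = effective_shaders(2048, 1.0)           # 2048.0 effective

print(round(gcn / cayman, 2))                # rough relative throughput
```

On these assumed numbers the GCN part ends up with well over twice Cayman's effective shader count, which is the kind of gap the 70%+ claim rests on; in practice, clocks, bandwidth, and drivers would pull the real figure around.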
It's a HUGE increase in power we're talking about here, HUGE.
All theory and speculation.
Look at what AMD said about Bulldozer for the past God knows how many years it's been coming.
Tosh, AMD told us what Bulldozer was; most people knew Bulldozer wasn't going to blow away the 2600K, and anyone expecting it to was mental.
Bulldozer wasn't speculation, it was a known architecture, and it was maybe 5-10% slower than people expected due to a few cache issues, and a little slower in single-thread due to scheduling problems, the latter of which was speculated about well before launch.
As with GCN, it's not speculation, it's known; an architecture simply works how it says it works. That's how life is: it's 0s and 1s and VERY predictable. It's pretty damn easy to predict roughly where performance will be; down to the last percent is almost impossible because of drivers, exactly where clock speeds end up on shipping products, and any bottleneck we don't know about.
The architecture speaks for itself. There is a DISTINCT difference between a VLIW architecture DESIGNED to use UP TO 4 or 5 shaders in a cluster, where it's a known issue that it won't always fill them up (that was the design trade-off), and this architecture, which is simply not like that: each shader is essentially individual and can be accessed independently.
There is no guessing here: the Cayman and GCN architectures aren't close, the shaders aren't close, and there isn't even the possibility that a ~2000-shader GCN will perform around the level of a 2000-shader Cayman.
Not true; many people are still running at 1080p, so they might be interested in a slightly faster, cooler, quieter 6900-series card for less money.
That's TOSH. You are claiming a performance increase, which must be speculation and theory, as no one knows anything about it outside of the developers etc. Also, AMD made many claims about Bulldozer over the years and none of them came true: some to do with release dates, some to do with performance against existing AMD and competing products.
I'll wait for real reviews and tests, not pull 70%+ out of the air.
wall of random text
You can tell the difference. "Raven: it's 40% faster than a GTX 580, because my friend's, cousin's, barber's, sister's, boyfriend's father had a premonition" is BS. "DM: the 6970 will be at most 15% bigger than a 5870, because they won't go over 400mm² as it's mental to do so; that puts performance somewhere in the 25-30% improvement range IF it's 15% bigger, a bit less if it's not quite that big." What happened? I posted that 5-6 months before release, and it was patently obvious to anyone with any basic knowledge of the subject area.
Stop talking about bold claims, because those are irrelevant; it was clear where Fermi would land performance-wise based on the architecture, and it was clear when it would be available despite six months of hot air and lies from Nvidia.
You can look at an architecture and make VERY good predictions based on it, predictions that aren't out of thin air.
more text going off topic
FFS, you're trolling.
Classy, swearing.
You brought up some random conversation between you and Raven (why? I don't know), brought up some random TOSH about Fermi (why? I don't know), all while making a claim about a 70% performance increase over current cards.
I will make it real simple for you: your educated guess of 70%+ is a guess, yes or no?