
ATI cuts 6950 allocation

Anyway, just assuming they are right that the 6870 is sub-50% behind the GTX 580, the 6970 has a circa 70% effective increase in shaders over Barts, so do the math.

One 4D shader should provide about the performance of one 5D shader, so shader for shader they are about 25% faster. Meaning 1536 shaders are equivalent to 1920 shaders, but with Barts' front end; going from 1120 up to 1920 is roughly a 71% increase in shader power.

A 1536-shader Barts would be gaining around 40% shader power. I'd frankly expect a 1536-shader Barts to beat a GTX 480 comfortably and not be far off a GTX 580 anyway, potentially beating a GTX 570 (with an increase in memory clock speeds to bump bandwidth to go with the extra shaders). Cayman could potentially be adding another 25% from shader performance with better shaders, and more 4D shader clusters mean better performance in worst-case scenarios.

But other limitations might mean that much shader power on a 256-bit bus, with similar internal bandwidth, won't scale up perfectly with shader power. There's also the possibility it could scale better with more tweaks to the memory controller and front end. I think the 1536-shader part should be close to a GTX 580, potentially 10-15% faster anyway; if it's got 1920 shaders, and the 4D shaders do give a 25% performance boost, it could utterly destroy the GTX 580.
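A quick back-of-envelope sketch of the arithmetic above, for anyone who wants to play with the numbers. The 1120/1536 counts and the 25% per-shader uplift are just the rumoured assumptions in this thread, nothing confirmed:

```python
# Back-of-envelope sketch of the post above. All inputs are the rumoured
# figures quoted in this thread, not confirmed specs.

BARTS_SHADERS = 1120        # HD 6870 (224 x 5D)
CAYMAN_SHADERS = 1536       # rumoured HD 6970 (384 x 4D)
PER_SHADER_UPLIFT = 1.25    # assumption: one 4D shader ~= 1.25x a 5D shader

# Express Cayman's shaders in "Barts 5D equivalents"
cayman_5d_equiv = CAYMAN_SHADERS * PER_SHADER_UPLIFT      # 1920
raw_increase = CAYMAN_SHADERS / BARTS_SHADERS - 1         # ~0.37 (the "around 40%")
effective_increase = cayman_5d_equiv / BARTS_SHADERS - 1  # ~0.71 (the "71%")

print(f"5D-equivalent shaders: {cayman_5d_equiv:.0f}")
print(f"Raw shader increase over Barts:       {raw_increase:.0%}")
print(f"Effective shader increase over Barts: {effective_increase:.0%}")
```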

Mighty essay that is. Most of it based on FUD about the incredible performance of these mythical "4D shaders".
The 6970 will be slower than the 5970. That's a fact.

The 6970 and 580 will trade blows in games and benches.
Nothing over 10% either way. People just need to calm down and not expect too much.
 

Looks legit. So it is 384*4D for Cayman XT, 352*4D for Cayman Pro, 224*5D for Barts XT and 192*5D for Barts Pro.

6970 - 71% more clusters than the 6870
6950 - 57% more clusters than the 6870

If they have indeed managed to improve the efficiency of a 4D cluster to anywhere near 5D levels, can we safely assume that the 6970 is indeed going to be 10-15% faster than the GTX580, almost 70% faster than the HD6870 in games and around 50% faster in parallel computing tasks? Would the HD6950 be almost as fast as the GTX580 as well, or would they hold back its performance by lowering the clocks? I'm very much inclined to give the HD6950 a try.
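For what it's worth, the cluster percentages above do check out if the leaked counts are right. A throwaway Python check, using only the unconfirmed figures from the table in this thread:

```python
# Rough check of the cluster ratios quoted above (leaked figures, not confirmed).
clusters = {
    "Cayman XT (6970)": 384,   # 4D clusters -> 1536 shaders
    "Cayman Pro (6950)": 352,  # 4D clusters -> 1408 shaders
    "Barts XT (6870)": 224,    # 5D clusters -> 1120 shaders
    "Barts Pro (6850)": 192,   # 5D clusters -> 960 shaders
}

barts_xt = clusters["Barts XT (6870)"]
for name in ("Cayman XT (6970)", "Cayman Pro (6950)"):
    extra = clusters[name] / barts_xt - 1
    print(f"{name}: {extra:.0%} more clusters than Barts XT")
# Prints ~71% for Cayman XT and ~57% for Cayman Pro.
```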
 
You cannot beat a 500mm² chip with a 389mm² AMD chip!
I don't care how efficient your shaders are.
You should have made it 400mm²+ with 1920 shaders. Poor.

I haven't done anything :(


Is this significant? :confused:

ATI/AMD cards in general tend to have a higher texture fill rate than Nvidia, with the pixel fill rate being the parameter that tends to be similar.

Thanks

That's how you calculate texture fill rate: TMUs times core clock. LOL.
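If anyone wants to plug the numbers in, here's a quick sketch of that calculation. The TMU counts and clocks are spec-sheet figures from memory, so treat them as approximate:

```python
# Texture fill rate = TMUs x core clock, as mentioned above.
# TMU counts and core clocks are from-memory spec figures; approximate only.
cards = {
    "GTX 480": (60, 700),   # (TMUs, core MHz)
    "GTX 580": (64, 772),
    "HD 5870": (80, 850),
    "HD 6870": (56, 900),
}

for name, (tmus, mhz) in cards.items():
    gtexels = tmus * mhz / 1000   # GTexels/s
    print(f"{name}: {gtexels:.1f} GTexels/s")
```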
 
You cannot beat a 500mm² chip with a 389mm² AMD chip!
I don't care how efficient your shaders are.
You should have made it 400mm²+ with 1920 shaders. Poor.

Just a trolling comment tbh. Just because Nvidia chooses to manufacture huge dies doesn't mean AMD have to as well, and tbh AMD have been all the better for it.

I'd like the 6970 to be faster, but if it trades blows yet offers better value and better availability then that's fine by me.
 
You cannot beat a 500mm² chip with a 389mm² AMD chip!
I don't care how efficient your shaders are.
You should have made it 400mm²+ with 1920 shaders. Poor.

I don't post much on this forum, but I do read it a lot, and I find it funny that several times in the past you've said you're not a fanboy because you've had both ATI/AMD and Nvidia cards; however, your comments suggest otherwise.

How can you say it's poor? Do you know the price of this card? Do you know its performance? Wish I had a crystal ball...

My thoughts are: HD6970 slightly slower than the GTX580 but £50 cheaper.
 
I don't post much on this forum, but I do read it a lot, and I find it funny that several times in the past you've said you're not a fanboy because you've had both ATI/AMD and Nvidia cards; however, your comments suggest otherwise.

How can you say it's poor? Do you know the price of this card? Do you know its performance? Wish I had a crystal ball...

My thoughts are: HD6970 slightly slower than the GTX580 but £50 cheaper.

Yes, I have both cards and use both companies with no allegiance, unlike most people on here.
But AMD's tactic of skimping on die space is really irritating.
We waited for nearly 1.5 years for a refresh and what they brought is Barts on steroids. Not good enough really, sorry.
 
Yes, I have both cards and use both companies with no allegiance, unlike most people on here.
But AMD's tactic of skimping on die space is really irritating.
We waited for nearly 1.5 years for a refresh and what they brought is Barts on steroids. Not good enough really, sorry.

Only you seem to have a problem with it.
 
Originally Posted by charlie
Why isn't anyone speculating on a switch between 1920 and 2560?

-Charlie

I saw this on the SemiAccurate forums, Charlie talking about the pic with the switch on the side of the card. If it were a switch between those SPU counts then I would crap my pants...

http://www.semiaccurate.com/forums/showthread.php?p=87632&posted=1#post87632

If that turns out to be true...

Dear god :eek:

That would be as fast as, if not faster than, my dual 5850s.

From what I've heard, Charlie is rather reliable. So, looks like this could be the graphics card to get. Christmas list lengthens :)
 
Yes, I have both cards and use both companies with no allegiance, unlike most people on here.
But AMD's tactic of skimping on die space is really irritating.
We waited for nearly 1.5 years for a refresh and what they brought is Barts on steroids. Not good enough really, sorry.

You didn't answer my questions so I'll assume you don't have an answer.

Barts isn't meant to replace or take over from the 58xx series; it's more of a replacement for the 57xx series, and Barts has a substantial performance lead over Juniper considering it's on the same process (40nm).

You say it's not good enough, but nVidia aren't doing any better; the GTX580 is roughly 15% faster than the GTX480. I guess in your eyes that's good enough?
 