ATI cuts 6950 allocation

Ahh k, I was talking about single screen as that's what the vast majority use. But I agree there should be variants for the higher-res multi-screen users.
 
It's not just multi-screen though. Both teams have 3D now, and ATI are making big noise about 3D on their cards. Am I right in assuming you need more memory for 3D?
 
I'm not sure... There will undoubtedly be some overhead, but I wouldn't have thought it would require twice as much memory or anything. There are still the same number of pixels to render, and there are no extra textures to store in memory. It's just that each frame must be rendered twice to give the "left eye" and "right eye" viewpoints.

It's an interesting question though... It would be good to see some feedback from someone with a 3D capable system. Perhaps running a quick benchmark in 2D and then in 3D, and comparing the memory-use log from Afterburner :)
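To put rough numbers on it, here's a back-of-envelope sketch of how much extra render-target memory stereo 3D might need at 1080p. The buffer layout is my assumption (one extra colour buffer per eye, depth buffer shared), not measured data; real drivers may allocate quite differently:

```python
# Rough estimate of render-target memory for mono vs stereo rendering.
# ASSUMPTION: double-buffered 32-bit colour plus one depth buffer for mono;
# stereo adds a second pair of colour buffers and reuses the depth buffer.

def buffer_mb(width, height, bytes_per_pixel=4):
    """Size of one screen-sized buffer in MiB."""
    return width * height * bytes_per_pixel / (1024 * 1024)

w, h = 1920, 1080
mono   = 2 * buffer_mb(w, h) + buffer_mb(w, h)   # 2 colour + 1 depth
stereo = 4 * buffer_mb(w, h) + buffer_mb(w, h)   # 4 colour + 1 depth

print(f"mono:   {mono:.1f} MiB")
print(f"stereo: {stereo:.1f} MiB")
print(f"extra:  {stereo - mono:.1f} MiB")
```

Either way the overhead is tens of MiB, not a doubling: the textures and geometry that dominate VRAM use are shared between the two eyes.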
 
I have a hard time seeing how it could be slower than a 580 with 1920 SPs, but if it is slower they will definitely have to price it under £360 to make it attractive.
 
try 5760x1080 ;)

1GB chokes.

Yes, but the thing is MOST people don't use Eyefinity. Like quad/tri-fire and quad-SLI, Eyefinity is something that even amongst high-end users is used by WAY less than 1% of people; across ALL users, way less than 0.01% of cards are going into setups like that.

For normal res

http://www.anandtech.com/show/3621/amds-radeon-hd-5870-eyefinity-6-edition-reviewed/6

There's one situation where the 5870 2GB shows a real boost in FPS (Crysis minimums), but both cards have minimums so low that I would be running lower settings, and therefore wouldn't run into the memory problem anyway.

http://www.anandtech.com/show/3621/amds-radeon-hd-5870-eyefinity-6-edition-reviewed/7

Three displays: barely any difference. They haven't used much AA, but look at the performance. L4D does have AA enabled and it's identical in performance. Most of the games don't have AA, the memory isn't hurting performance, AND most are unplayably slow already, so adding AA isn't going to give you anything but a slideshow.

Now, the difference is that a 6950 "might" have close to enough power to actually raise FPS enough in some of those titles that AA becomes possible, but it's unlikely; in several of those titles I'd want a 100% performance improvement just to play them as is, and AA would again make them unplayable.


http://www.anandtech.com/show/3621/amds-radeon-hd-5870-eyefinity-6-edition-reviewed/8

6 screens: a massive, massive difference in performance. But that's so few people that I wouldn't be remotely surprised if not a single person on this forum uses a 6-screen display setup.

2GB is a bit pointless for 99.999999999% of people on this forum, so I'm quite happy to pay less money to not have it.

I wouldn't be too surprised to see, hmm, RRPs of £250 and £350 for the 1GB 6950 and 2GB 6970, and then, potentially quite soon after launch, a 2GB 6950 right in the middle at around £300.

Remember how, with the memory controllers on mobos and CPUs, overclocking 2x512MB sticks could always get you further than 2x1GB sticks, etc. It's quite possible the 1GB 6950 will overclock further because it has higher-speed, lower-density chips (the 2GB version won't use twice as many chips, but twice the density per chip), and because the board has 10-15W less of memory to power in the first place.

For anything sub 2560x1600/4xAA, in general 2GB will be nothing but a waste.
 
The ever-reliable (:p) Fudzilla is reporting that the 6970 has 1536 stream processors. So, who knows...

http://www.fudzilla.com/graphics/item/21155-radeon-hd-6970-runs-at-880mhz

This is the only place I think AMD have gone wrong. I think most information does suggest 1920 SPs, but we really don't KNOW that yet, at all. If it turns out the 1920 SP figure is wrong, or if it's Nvidia people leaking slides to raise expectations massively (which would have a huge negative impact on AMD after the release of a much slower than expected card), then AMD could look pretty poop afterwards.

But then, if I was in the PR department at AMD, and news that the card had 20-25% more shaders than it was actually going to have started becoming the expected and predominant rumour, I might have someone reliable leak the real shader count. So it's potentially a good sign that no one has done that yet.
 
I still say it could just be someone not understanding the new architecture, and either miscalculating the SPs based on the old architecture or using a diagnostic tool that's still calculating them based on the old architecture. So it's likely to be either 1536 or 1920 rather than any other figure.
 
I said on SA that it's VERY possible someone has leaked 1920 because they misunderstood.

Remember the slides that are real (the architecture ones without performance numbers, leaked accidentally on the 22nd) said one key thing: the 4-way shader gives the same performance as the old 5-way cluster, for 10% less die size.

So vs Barts, it has 25% higher per-shader performance. That would mean an 896-shader "Cayman-based architecture" (896 x 1.25 = 1120) would match Barts. Which would mean 896 + 71% or so = 1536 shaders.

But 1536 x 1.25 = 1920. So essentially an AIB or someone, when told 1536 shaders, could have asked "so what would that mean in terms of old cards?" and someone at AMD told them "it would be like a 1920-shader Barts".
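The arithmetic behind that guess is easy to sanity-check. Note the 1.25x per-shader figure is just the assumption taken from the leaked slides, not a confirmed number:

```python
# Sanity check of the "1920 is 1536 in Barts-equivalent shaders" theory.
# ASSUMPTION from the leaked slides: a 4-way Cayman shader matches a
# 5-way Barts cluster, i.e. ~1.25x performance per shader.

per_shader_gain = 1.25

# 896 Cayman-style shaders would match Barts' 1120:
print(896 * per_shader_gain)    # 1120.0

# 1536 Cayman shaders expressed in "Barts-equivalent" shaders:
print(1536 * per_shader_gain)   # 1920.0, the possibly misread figure
```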

Now, a 1920-shader Cayman, at 25% faster shader-for-shader, would be insanely good, and not a huge die-size increase over Cypress (20% more shaders, with the front-end savings and the slightly bigger shaders roughly cancelling out, so it's only 20% bigger).

A 1536-shader card that's 25% faster per shader than Barts is going to be 70-75% faster than Barts, before you even consider internal bandwidth limitations, memory bandwidth limitations, bottlenecks, and other improvements that could be in Cayman but not Barts. Some fundamental differences, such as more blocks of shaders (meaning higher performance in the cases where you can only get one shader per block working) and simplified things like schedulers (with every shader identical and less balancing required; AMD hinted it takes less core logic to run the new shaders), could all raise performance and/or save space.
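A quick check of that 70-75% figure, again assuming Barts' 1120 shaders and the rumoured 1.25x per-shader gain (both inputs are from the thread's speculation, not confirmed specs):

```python
# Checking the "70-75% faster than Barts" claim: 1536 Cayman shaders at an
# assumed 1.25x per-shader throughput vs Barts' 1120 shaders at 1.0x.

barts_shaders, cayman_shaders = 1120, 1536
per_shader_gain = 1.25

speedup = (cayman_shaders * per_shader_gain) / barts_shaders
print(f"{(speedup - 1) * 100:.0f}% faster")   # ~71%, the low end of the claim
```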

For me a 1536-shader Cayman could be pretty damn good anyway, and a 1920-shader Cayman really could quite easily be 30-40% faster than a GTX 580.

The key thing really is not to freak out over reviews and go "only 1536 shaders, let's go find some GTX 580 stock".

But I still can't quite see it being only 1536 shaders. Vs Cypress, each shader will take up a little more space, but the reduced core-logic size, the (seemingly) reduced front-end size, and probably some savings here and there from the refined process mean I wouldn't expect a 1536-shader Cayman to be any bigger than Cypress, potentially even a little smaller. There's no real need not to go a little bigger this time around, and 1536 shaders would mean the 6950 had a really very low shader count, unnecessarily so.
 
For anything sub 2560x1600/4xAA, in general 2GB will be nothing but a waste.

Absolutely. But for high-resolution triple-screen gaming, 2GB is pretty much essential.

I can say first-hand that 1.5GB of frame buffer is not enough to run most games at 7680x1600 (a similar number of pixels to a six-screen 1080p setup) when AA is enabled. Without AA I haven't run into any problems yet (apart from Metro, where the framerate is predictably pathetic at that res :p).
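For reference, the raw pixel counts of the resolutions mentioned in this thread; this is straightforward arithmetic, nothing assumed:

```python
# Pixel counts for the setups discussed, showing why triple 2560x1600
# (7680x1600) is "a similar number of pixels" to six 1080p panels.

resolutions = {
    "triple 2560x1600 (7680x1600)": 3 * 2560 * 1600,
    "six 1920x1080":                6 * 1920 * 1080,
    "triple 1920x1080 (5760x1080)": 3 * 1920 * 1080,
    "single 2560x1600":             2560 * 1600,
}

for name, pixels in resolutions.items():
    print(f"{name}: {pixels / 1e6:.1f} Mpixels")
```

The two big setups land within about 1% of each other (~12.3 vs ~12.4 Mpixels), roughly triple the load of a single 2560x1600 screen.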

I agree with the principle of having only a separate, specialist product with enough memory for ultra-high-res triple-screen setups (like the 2GB 5870, or the 4GB 5970), but it would be good to at least have the option.

For single-screen gaming (even at 2560-res) there are very few applications where any more than 1GB is required.
 