
Picture of AMD "Cayman" Prototype Surfaces

Agreed. Intel had to swallow its pride and admit it got it wrong and AMD got it right, but in doing so it was able to turn things around again. Until Nvidia has a major rethink about what it wants from its consumer graphics division, it will find it hard to make money from the market (or at least the sort of returns it has enjoyed in the past).




True, but the problem is that if you get into bed with companies like Dell and HP, I'd imagine it's a bit like me making bread and jumping into bed with Tesco: sure, I can sell loads, as Tesco has the distribution and the supermarkets, but those ******** will beat me down on price until I make virtually nothing.

Tesco might treat you a little differently if you were one of the only two (major) discrete bread manufacturers and your bread tasted better than the rival manufacturer's and had fewer calories.
 
The amount of change, whether minor or major, is neither good nor bad in itself; it depends on whether the change is in the right direction.
Fermi was a fairly major change, wasn't it?

At the end of the day, minor changes are all that is needed for RV870. I'd much rather have predictable incremental performance than have them take too much of a risk and **** up the architecture, delaying the product six months only for it to underwhelm when it finally gets released.

The change was fairly significant while also being fairly insignificant. I mean, where do people think a 480-shader GTX 285 would have performed if it had simply got double the shaders and ROPs? Basically almost exactly where the GTX 480 is. Architecture changes for the good often simply make it possible to feed a larger number of shaders at the same efficiency with which a smaller core could feed a lower number. Some changes, like the ring bus, are technologically pretty amazing; 1024-bit internal memory bandwidth was quite simply amazing. However, it was WAY before its time and took WAY too many transistors and die space to implement, which significantly reduced the space left for everything else.

The 3870 dropped to a 256-bit external bus and a 512-bit internal ring bus, but still spent a HUGE number of transistors and a lot of die space on it, as well as being basically a rushed design (think GF104 versus GF100: smaller, not really better, slower and not as good, which is what the 3870 was; people forget it was mostly slower than the 2900XT, which was on 80nm), simply so they could stop producing the very expensive, low-yield 2900XT as soon as possible, just as Nvidia wants out of GF100 ASAP.

While GF100 tinkered with the ratios and the number of shaders in each cluster, the clusters themselves were very similar, with the ROPs and TMUs just in a different location; it's not altogether that big a change. This is what I've pointed out many times, though: the VAST majority of the performance increase from the 3870 to the 4870 was shader count and EFFECTIVE transistor increase, and the same is responsible for the vast majority of the performance increase from the GTX 285 to the GTX 480, and from the 4870 to the 5870. I say 'effective' because the shader style in the 3870 and 4870 was very, very similar, yet for a 15-20% increase in transistor count they got well over a 100% increase in shader count. That's largely because a lot of the transistors in the 3870 were wasted, doing very little; remove them and the effective transistor count was way lower than the actual transistor count.
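
To make that argument concrete, here's a rough sketch in Python using the commonly quoted shader counts (320 on the 3870, 800 on the 4870); the transistor-growth number is the post's own claim, not a die-shot measurement:

```
# "Effective transistor" sketch: big shader jump on a small transistor budget.
old_shaders, new_shaders = 320, 800   # commonly quoted 3870 / 4870 counts
transistor_growth = 1.20              # the post's "15-20%" claim, upper bound

shader_growth = new_shaders / old_shaders
print(f"shaders: +{shader_growth - 1:.0%}, transistors: +{transistor_growth - 1:.0%}")
# shaders: +150%, transistors: +20%
# If shader count roughly tracks *useful* transistors, a 150% shader jump on
# a 20% transistor budget implies much of the old die was doing very little.
```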

The fundamental issue is that not many transistors are wasted on the 5870: they haven't got a bus they can halve in size, they can't cut out three quarters of the internal bus, etc.

Tesco might treat you a little differently if you were one of the only two (major) discrete bread manufacturers and your bread tasted better than the rival manufacturer's and had fewer calories.

Yup, there are WAY too many farmers in the UK, and Tesco has them all competing with each other.

Dell is one of maybe ten companies that can buy GPUs in the volumes AMD can produce, and there's only one company in competition, one that can't compete on price AT ALL. AMD are sitting pretty, and Nvidia, with more expensive cores that perform less well, or WAY, WAY more expensive cores that perform only slightly better, have no ability to fight AMD on price or force AMD into price cuts.
 
Some pictures and info on the Barts XT part have surfaced. Could be 5850/5870-series performance for 5770/5830 money.
http://www.semiaccurate.com/2010/09/10/one-more-amd-graphics-card-leaks/
[Images: bartsxt1.jpg, bartsxt2.jpg]


The position of the VRMs is interesting and should hopefully help cooling.
 
It's very odd; people are saying it's not Cayman, but maybe it is. If the 5770 has been beefed up and given ridiculous bandwidth at 256-bit, then it could be, well, very, very good.

It depends; it's all an unknown right now. I'm now wondering if the price points will change based on what they can do with this process, i.e. move the 5770 price point up £30 but make it significantly faster, because with the 256-bit bus it basically becomes a 1GB-minimum card, and the ROPs/TMUs/shaders really need a decent bump, otherwise a 256-bit bus is complete overkill. It might just be that single card, because the 5770 is borderline on bandwidth already, and realistically they had no choice other than moving to 256-bit if they increased it. Without an increase, any extra shaders would likely go to waste.
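
To put the bandwidth point in rough numbers, here's a quick sketch in Python; the 4.8GT/s figure is the commonly quoted 5770/5870 memory data rate, used purely for illustration:

```
# Rough GDDR5 bandwidth calculator: bus width in bits, effective data
# rate in GT/s (transfers per second per pin).
def bandwidth_gb_s(bus_bits, data_rate_gt_s):
    return bus_bits / 8 * data_rate_gt_s  # bytes per transfer x transfers/s

print(bandwidth_gb_s(128, 4.8))  # 5770-style 128-bit bus: 76.8 GB/s
print(bandwidth_gb_s(256, 4.8))  # same memory on a 256-bit bus: 153.6 GB/s
```

Doubling the bus width doubles the bandwidth at the same memory clocks, which is why a 256-bit bus on an otherwise unchanged 5770 would be overkill.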

It's an odd situation. Did they stick on a 256-bit bus with only a fairly small shader/TMU/ROP increase, meaning it's got a completely disproportionate amount of bandwidth, which increases the cost in a place that doesn't bring much value/profit to AMD? Maybe, because of the jump up in bus width, they'll be almost forced to make it hugely better, so they can price the card at a level where it's worth the cash.

It would also explain Nvidia's price drops. If a 6770 gets a double-size bus, and likely a pretty large bump in shaders to make the bus worthwhile, they might have basically made themselves a cheaper 5850 and can price it somewhere in between the current 5770/5850, i.e. 5850 performance, and potentially a decent wedge better in certain circumstances, for £160 or so. The GTX 460 1GB would be utterly worthless.
 
SemiAccurate are saying a few partners have samples, and that they're at about 5850 speeds already.

Yup, it's a bit of a waste to take the die-size hit of a 256-bit bus without the hardware to use it.

So it's shaping up like the 6770 will replace the 5830/5850, maybe closer to the latter in speed, potentially saving on die size by dropping some of the "this 40nm process is so crap we had to add stuff" waste, with increased efficiency in the uncore/shaders dropping the size slightly, so they can sell it at, I would guess, around £160-170 in the UK for 5850 performance and potentially a decent wedge ahead in certain situations.

Min FPS should be up as well. Worst case on a 5-way shader, if each shader group can only issue one instruction, you only have 320 instructions per clock on a 5870; with a 4-way shader, assuming the same 1600 shader count, the worst case becomes 400 instructions. On top of that, when something complex had to be done, the best case and the worst case were both 320 instructions, because only one of the five lanes could do complex operations. So the best case for complex instructions goes from 320 up to 1600, a five-times improvement (which would almost never be seen, but 2-3x when needed could make a big difference), and 320 to 400 for simple instructions is still 25% better.
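
A quick sketch of that arithmetic in Python, under the same assumptions as above (1600 ALUs total, 5-wide versus 4-wide groupings, and only one complex-capable lane per 5-wide group):

```
ALUS = 1600  # assumed total shader/ALU count, as in the post

for width in (5, 4):
    groups = ALUS // width  # VLIW groups: 320 (5-wide) vs 400 (4-wide)
    print(f"{width}-wide: worst case {groups} instructions/clock")

# Complex ops: a 5-wide group has one special-function lane, so best and
# worst case are both 320/clock; if all four lanes of a 4-wide group can
# handle them, the best case rises to 1600/clock (5x). The simple-op worst
# case improves from 320 to 400, i.e. 25%.
```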

The 6870 and co, well, we'll have to see. Fast enough GDDR5, with the speed improvement, should remove any need for a bigger bus; memory is going up something like 30% in speed, which will be a huge bandwidth increase.

The biggest increase will be on the 6770, as they are basically changing the market for it. Not sure if the 6670 is becoming the old 5770 or not; there were rumours a while back about keeping it because it's a fantastic core in terms of size/yield/cost to AMD. But I'm sure an improved version would make sense.

The 5xxx series isn't spread out very well as it is: an 80-shader part, then the 5570/5670 and all the other variants of those two all having 400 shaders, an 800-shader card, and a 1600-shader card.

It looks like the next cards will be something along the lines of 2000, 1400, 800 and 400 shaders, and we'll see if they bump the bottom-end cards up to 100-200 shaders somewhere. A much more even spread. This gen, between £120-200 there's nothing to sit right in the middle; the 5830 is expensive to make, high power, and doesn't really work in that hole. You really want a card designed to be there: high yield, not expensive cores being sold cheap.

Problem is, that hole was the only reason the GTX 460 was successful. If they plug it with better cards, Nvidia has a huge core that's slower than a MUCH smaller core. Remember the GTX 460 is much slower than a smaller core already: GTX 460 1GB versus 5870, no contest, and the 5870 is around 12% smaller. If they replace the 5850 with a core a further 15-20% smaller than the current Cypress, potentially with even better performance, Nvidia are screwed across the entire range until probably Q3 next year.
 
I really hope they move away from the fan they have been using on reference cards for the last while, although I doubt it.

I wish/hope, but it's not gonna happen. WAY too many people around here think exhaust = 1,000,000°C lower case temps, which equals WAY higher GPU and CPU overclocks. It's madness.

5850 stock cooled, overvolted and overclocked: 85°C load at around 925MHz (needs a bit of juice to do that on mine). 5850 with a Prolimatech cooler, no exhaust: CPU overclock lost, none; CPU temp up maybe 1-2°C, max; GPU overclock up from 925MHz to 1050MHz; GPU load temp down to 45°C.

Exhaust coolers are awful because their fans move comparatively little air for an incredibly high amount of noise, and stuffing that air through a small slot at the back, past lots of fins, just increases the noise.

Horrible coolers. Then again, if Dell and the like insist on keeping the heat out of the case, they get what they ask for, as 90% of GPU sales end up with those types of companies.

One day, if we petition enough, we'll get dual-slot exhaust coolers on release for those who want them or use SLI/Crossfire, and, available at the same time, silent triple- or quad-slot coolers that help overclocking a ridiculous amount and keep temps ridiculously low. They take up loads of space, but the vast majority of us, who don't have several PCI-E and PCI devices, all have room for a crazily big cooler.

At the very least, go back to the heatsinks where the tops of the fins weren't bent over. :(

My 2900XT, 3870X2 and 4870X2: take off the shroud, strap on a £5 silent 120mm fan, and you get silent cooling, better temps and better overclocks. Yay. Then came the 4890, and they bent the fins over at the top of the sink, so strapped-on fans no longer work. :(
 
What did I just read?

Load of incoherent babble.

Well, it had something to do with a strap-on being good... :p

But seriously, I do have to agree with drunken: the standard coolers all the new cards come with nowadays are not the best for cooling; they're there to keep the mass-market pre-built box builders happy.
Consider the situation for them: use GPU A, costing them X pounds, and not have to spend any more than necessary on the other cooling components; or use GPU B, also costing them X pounds or maybe even more, and have to supply a better cooling solution for the rest of the system.

Just remember that most of these types of systems will sit on the desk or floor for their one- or two-year warranties and never be opened, just gathering dust and slowly heating up more and more.
 
AMD is using smoke screens, feeding media and forums with conflicting messages.
They send different info to distributors because they know distributors release info early, and if they use different info for every distributor, that's enough to create havoc. ;)
 
AMD is using smoke screens, feeding media and forums with conflicting messages.
They send different info to distributors because they know distributors release info early, and if they use different info for every distributor, that's enough to create havoc. ;)

Interesting, would like to hear more.
 
AMD is using smoke screens, feeding media and forums with conflicting messages.
They send different info to distributors because they know distributors release info early, and if they use different info for every distributor, that's enough to create havoc. ;)

You could also assume the info fed to forums, and reported on by the media etc., is info AMD wants them to know.
I doubt AMD is going to anticlimax its launch by leaking bigger numbers in advance than its products can produce; if anything, the opposite would be true, given their past traits.
 
You could also assume the info fed to forums, and reported on by the media etc., is info AMD wants them to know.
I doubt AMD is going to anticlimax its launch by leaking bigger numbers in advance than its products can produce; if anything, the opposite would be true, given their past traits.

ATI hardly let anything slip; they are super cautious about revealing anything, and I hear they have some very focused counter-espionage systems in place.

ATI leaks usually go:

"Surprise! New cards. They hard launch next week."
 
These 'mid-life refresh' cards are such boring non-events...

They are always the same, in that previous mid-range card performance becomes the new entry level, previous high-end card performance becomes the new mid-range, etc., etc...

I want a brand new architecture, dammit... :D
 
Yet there's next to no info out; almost everything is guessed, and some estimates are better than others.

Pictures of cards leaking is inevitable; that's why info leaks out in dribs and drabs. If someone says this card is X, with Y core clocks, Z memory clocks and A performance numbers, then AMD know who leaked the info. When almost everyone has a card, it's fairly safe to leak a picture of it; when quite a lot of people know the final clocks, it's fairly safe to give them. If AMD have given out cards ranging from (made-up numbers) 800-900MHz and someone gives an exact clock, they know who's leaked; if someone says final clocks are likely to be 800-900MHz, it doesn't implicate anyone.

From recollection, I went to a PR event for a, hmm, I can't remember which card now :p, the X800, maybe the 9700, years ago, and signed an NDA (long since expired :p). I saw the card and picked it up; it had no cooler on it and wasn't in a working system. We were led towards what performance we could expect and given rough estimates of clocks, but no definitive info.

Eight memory chips suggests, with almost certainty, a 256-bit bus (GDDR5 chips have 32-bit interfaces, so 8 × 32 = 256). If you take a 160mm² core with 800 shaders, that's pretty well balanced; add on 60mm² to take it from a 128-bit to a 256-bit bus and it will have twice the bandwidth it needs, cost more to make and have lower yields, yet offer next to no extra performance. It's a complete waste.

Realistically, if you're adding 60mm² for the extra bus, it's madness not to add a significant number of shaders too. Efficiency comes from balance, and efficiency is why the 4 and 5 series cards were so competitive: more performance from a lot less space means cheaper cards than the competition at similar performance levels.

A 5770 plus a 256-bit bus with the rest of the specs the same would cost £20-30 more to make, for maybe 5% more performance: basically worthless. A 5770 plus a 256-bit bus plus 400-600 more shaders would cost another £20 on top, but bump performance up by probably 50-60%.

So it's £20 more for a worthless 5% performance bump, or £40 more for a 60% performance bump; the latter makes far more sense and would be far more sellable.
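
As a sanity check on that trade-off, here's the performance-per-pound arithmetic in Python, using the post's own rough figures (illustrative guesses, not real BOM costs):

```
# Performance-per-pound for the two hypothetical 5770 upgrades.
options = {
    "wider bus only":      {"extra_cost": 20, "perf_gain": 0.05},
    "wider bus + shaders": {"extra_cost": 40, "perf_gain": 0.60},
}
for name, o in options.items():
    per_pound = o["perf_gain"] / o["extra_cost"] * 100
    print(f"{name}: +{o['perf_gain']:.0%} for £{o['extra_cost']} "
          f"-> {per_pound:.2f}% per £")
# wider bus only:      +5% for £20  -> 0.25% per £
# wider bus + shaders: +60% for £40 -> 1.50% per £
```

On those numbers the shader-heavy option returns six times the performance per extra pound, which is the post's point.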
 