
AMD 7 series latest spec info from AMD round up and discussion

Do we know if the move to GCN brings any direct graphics performance increase? Because otherwise it's only 33% more cores and 10% more MHz over the 6970 cards.

We don't know if it will make a direct graphics performance increase, as we have no benchmarks. However, from previous die shrinks, performance even on the same architecture always increases on a smaller process, so it's not as simple as core count. Over the current 6970 Cayman generation you have four main factors that will directly increase graphics performance for certain:

1) die shrink to 28nm

2) XDR2 RAM

3) massive increase in Radeon clusters and ROPs

4) new GPU architecture and compiler

After this you have core speed and stream processors.
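Just to put rough numbers on the core count and clock part of that: here's a back-of-envelope peak-FLOPS comparison against a 6970. The 2048-shader figure and the ~10% clock bump are the rumoured specs from this thread, not confirmed, and the peak number says nothing about real game performance.

```python
# Back-of-envelope peak single-precision FLOPS scaling. The "rumoured"
# figures (33% more shaders, ~10% more MHz) are thread speculation,
# not confirmed specs.

def peak_gflops(shaders, clock_mhz, ops_per_clock=2):
    """Peak SP GFLOPS: shaders * ops per clock (2 for FMA) * clock in GHz."""
    return shaders * ops_per_clock * clock_mhz / 1000.0

cayman = peak_gflops(1536, 880)      # HD 6970: ~2703 GFLOPS peak
rumoured = peak_gflops(2048, 968)    # +33% shaders, +10% clock (rumour)

print(f"Cayman peak:   {cayman:.0f} GFLOPS")
print(f"Rumoured peak: {rumoured:.0f} GFLOPS")
print(f"Naive scaling: {rumoured / cayman:.2f}x")   # ~1.47x
```

So on raw shader maths alone you'd only expect ~1.47x, which is why the architecture and memory changes matter so much.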
 
Grrr, more "new" information that's now month-plus-old rumours. XDR2 isn't confirmed, and Nordic don't know much, seeing as their latest story contradicts itself many times over.

"AMD has low stock on HD6000 and capacity problems... and huge demand from OEMs and orders backing up... but what we really hear is AMD has low demand."

So why the low stock, increased production of HD6000s and capacity problems if demand has dropped off? Likewise, they are calling 7-10k wafers for Q4 "low capacity and less than expected". This is bog-standard capacity for a new process, and EXACTLY what we thought would be available in Q4 as of 3-4 months ago.

Spinning old info, with no understanding, as new stories is incredibly irritating.

Again I'll point out that "proper" news sites won't run with it; crap news and rumours get repeated by the equivalent of the Daily Mail sites of the IT world.


XDR2 is not confirmed. It's from one leak that gets mentioned as a possibility on so many sites that it suddenly becomes "confirmed". Possible, but not all that likely. AFAIK no one is actually fabbing the stuff, which would make production problematic.


GCN (daft name) is losing its superscalar nature to a degree; it won't rely on a driver-based scheduler and should increase shader utilisation dramatically. Right now VLIW4/5 very rarely uses all 4 or 5 instruction slots, averaging 2-3. That's why peak FLOPS is awesome and dwarfs Nvidia, but sustained FLOPS is nowhere near that. Peak/sustained should be pretty close on GCN, much like Nvidia's architectures, so you could be looking at 30-40% higher shader usage. But until the outside world sees it, not a huge amount is known about where it will actually perform.

Basically a "GCN" 1536-shader card should pretty much destroy a Cayman, so a circa-2k-shader GCN card could be 80-100% faster, potentially.
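The peak-vs-sustained gap described above is easy to sketch numerically. The utilisation figures here (2.5 of 4 VLIW slots filled on average, near-full issue on GCN) are the poster's estimates, not measured numbers, so treat the output as illustration only.

```python
# Sketch of peak vs. sustained throughput under different issue-slot
# utilisation. All utilisation figures are assumptions from the thread,
# not benchmarks.

def sustained_gflops(peak, slot_utilisation):
    """Effective throughput given the fraction of issue slots actually used."""
    return peak * slot_utilisation

peak = 2703.0                                  # HD 6970 peak SP GFLOPS
vliw4 = sustained_gflops(peak, 2.5 / 4)        # ~2.5 of 4 slots filled
gcn_like = sustained_gflops(peak, 0.85)        # assumed near-full issue

print(f"VLIW4 sustained:          ~{vliw4:.0f} GFLOPS of {peak:.0f} peak")
print(f"Same peak, GCN-like issue: ~{gcn_like:.0f} GFLOPS")
print(f"Gain from scheduling alone: {gcn_like / vliw4:.2f}x")   # ~1.36x
```

That ~36% gain from scheduling alone, on top of the 33% extra shaders, is roughly where the "80-100% faster" speculation comes from.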

I don't know why Farooq is saying this is "confirmed" info from TSMC and AMD. Unless there's a new article I haven't seen as of today, AMD have confirmed nothing on specs, production, dates or process, and neither has TSMC.
 
The die shrink itself doesn't increase performance directly, other than allowing for higher clock rates (usually), so the two unknowns here are the XDR RAM and the new architecture. Radeon clusters/cores/SPs have increased by 33%, with the exception of the ROPs, which have increased by 100%. I guess this is the one of interest, because if the ROPs are just doing what they've always done, it does suggest they're expecting a near doubling of throughput from the new architecture, even though there's only 33% more of the new type of cores.
 
@Jokester I always thought that die shrinks would increase performance by a small amount because the electrons have a physically shorter distance to move? Can't remember where I heard this so might be wrong though...
 
It's because they've effectively got a shorter distance to move that allows you to increase the clock rate. If you keep the clock rate the same, the data might get to the next bit of logic quicker, but it just sits twiddling its thumbs until the next clock edge, at which point it gets processed.

Increasing the clock rate removes that thumb-twiddling time, and it's also what overclockers exploit: increasing volts and decreasing temperatures both decrease signal propagation times, which lets them raise the clock rate.
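The relationship above boils down to the clock period needing to cover the worst-case (critical-path) delay plus a safety margin. The delay numbers below are made up purely for illustration, but the shape of the maths is the point.

```python
# Toy illustration: shorter critical-path delay -> higher maximum clock.
# Delay figures are invented for the example, not real 40nm/28nm numbers.

def max_clock_mhz(critical_path_ns, margin_ns=0.1):
    """Max clock if each cycle must cover the critical path plus a margin."""
    period_ns = critical_path_ns + margin_ns
    return 1000.0 / period_ns   # period in ns -> frequency in MHz

old_node = max_clock_mhz(1.0)   # longer wires/gates: 1.0 ns critical path
new_node = max_clock_mhz(0.7)   # after a shrink: shorter critical path

print(f"Old node: ~{old_node:.0f} MHz max")   # ~909 MHz
print(f"New node: ~{new_node:.0f} MHz max")   # ~1250 MHz
```

Running at the old clock on the new node is exactly the "thumb twiddling" case: the signal arrives early and just waits for the next edge.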
 
I think they are gonna have to after what happened with Bulldozer.

I think all the Bulldozer dissatisfaction will be considerably reduced if these GPUs perform well.

Definitely. I mean, what were they doing all this time?! They can't have meant to take that long to produce something that performs worse than their last generation.
 

Ah, I get it now, cheers :)
 
Rumored dates here!

Oh look, the site several people have been linking to, insisting that it's PROOF that AMD won't have cards till mid-2012, is now reposting another story that recently got posted elsewhere.

All the while I've been saying they are internet whores who will post ANYTHING without a clue what's going on. They were SURE nothing would launch till 2012; now they are saying December.

Softpedia frankly shouldn't be linked at all; they are freaking awful, the gutter press of the internet.

Yeah but you can bet your ASS the price will. :mad:

Only if you go with a store who is happy to screw their customers completely ;)

It depends on the allocation, but new processes only ever end up with 5-10k wafers in the first month, so capacity is no different to every other Nvidia/AMD launch on a new process. Except Apple might be in on the act... then again Nvidia might not be this time around.

Softpedia are suggesting sales might not skyrocket as one might like. Firstly, sales are NEVER big on the high-end product, especially in the first months. Think of the 5870: TSMC had even bigger problems then, and due to a production fault they were down to around 3k wafers a month (so 7-10k wafers sounds pretty decent). That was with Nvidia launching their GT210 (rebranded to GT310 shortly after) before the 5870. There wasn't a huge amount of stock, but it also wasn't hard to get a 5870/5850 in the first couple of weeks.

Yet with a six-month lead on the high end and mid range, and with Nvidia releasing a low-end shrink first, AMD ended up with a huge sales run until Nvidia finally released the 480GTX, then the 460GTX not long after. AMD had sold 3-4 million DX11 GPUs while Nvidia, six months later, was on a few hundred thousand. And that all started with incredibly slow high-end sales.
 
Those GCN cards are very interesting. Am I right in thinking that means we could potentially accelerate any application with these GPUs? Or at least that devs would be more likely to implement GPU acceleration, as it would be easier to do? I can certainly say I like the idea of using a powerful GPU to accelerate any task :)


In a word, yes, but as always it's a lot more complicated than that. The compute unit supports C++ virtual functions and DLLs, and the memory can be addressed via x86 virtual memory. So assuming you have a program coded in C++ that uses virtual memory, it can be accelerated; how well optimised it ends up remains to be seen.

GCN is AMD's first full step towards its Fusion model, where GPU and CPU are combined. I would say early adopters will get a boost, but it needs a lot of refining. It's really up to developers to write programs that use the full power available on a GPU. However, AMD's GCN model makes this a lot simpler than on the current Nvidia and AMD 6 series. GPUs won't replace CPUs anytime soon though, as general tasks are still quicker on a CPU. That being said, floating point, vector, folding etc. are all far faster on a GPU, and this is where you will see the big gains.

Hope that clears it up for you
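To make the "floating point and vector work is where GPUs win" point concrete: SAXPY (y = a*x + y) is the canonical example of the kind of workload that maps well onto a GPU, because every element is independent. This is a pure-Python stand-in just to show the shape of the computation; a real GPU port would go through OpenCL or a similar API.

```python
# SAXPY (y = a*x + y): the textbook data-parallel workload. Every element
# is computed independently of the others, so on a GPU each element could
# run on its own thread. Pure Python here, purely for illustration.

def saxpy(a, x, y):
    # No iteration depends on any other -> embarrassingly parallel.
    return [a * xi + yi for xi, yi in zip(x, y)]

result = saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0])
print(result)   # [12.0, 24.0, 36.0]
```

Branchy, serially dependent code (the "general tasks" mentioned above) has no such independence, which is why it stays quicker on a CPU.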
 
Cool, so although it has lots of potential, we should expect nothing until it's more mature and developers adopt it as something worth putting effort into.
 
I remember when Bulldozer looked very promising, and look what a dog turd that turned out to be. Assuming the same management team had final say over the GPU development path, take all leaks with a pinch of salt.

Until I see real benchmarks I will not get excited :).
 
Assuming the same management team had final say over the GPU development path,


I would very much doubt the CPU and GPU divisions have the same management team...

Still... I echo your slight pessimism, given I was looking forward to BD... Strange things going on at AMD.
 