Exclusive: AMD working on Tonga GPU

Definitely some truth to these rumours.

R9 M295X, the M being for Mobile, as bru alluded to; that's probably the new low-power 280X in mobile form. The two R9 200 entries should be the R9 280 and R9 280X.

For the Tonga 280X to run in mobile form (M295X, in laptops) it needs to be drawing about ~100 Watts; anything more than that and you wouldn't be able to keep it cool.

As a guess I would think the desktop Tonga 280X would be running at about 160 Watts, 65 Watts down from the 7970's 225 Watts.
Which would mean a Tonga 1280SP 270X would be about 100 Watts, and a Tonga 265 about 80 Watts.
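As a back-of-the-envelope check on those guesses, here's a minimal sketch assuming power scales linearly with shader count from the guessed 160 W desktop 280X. The 2048 SP baseline, the 1024 SP figure for the 265 and the linear model are my assumptions, not from the post:

# Naive TDP scaling for the guessed Tonga line-up (Python).
# Assumptions (not from the post): 2048 SPs for the desktop 280X,
# 1024 SPs for the 265, and power scaling linearly with shader count.
BASE_SP, BASE_WATTS = 2048, 160.0

def estimated_tdp(shader_count):
    """Linear power-vs-shader-count estimate, in Watts."""
    return BASE_WATTS * shader_count / BASE_SP

for name, sps in [("Tonga 280X", 2048), ("Tonga 1280SP 270X", 1280), ("Tonga 265", 1024)]:
    print(f"{name}: ~{estimated_tdp(sps):.0f} W")
# -> ~160 W, ~100 W and ~80 W, matching the figures guessed above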
 
Judging from Beema/Mullins, I'd expect a 20-30% drop in power straight from the process, and it's possible to get more from the architecture. It depends on the size of the die and the design; most of the power improvement in the 750 Ti seems to really come from cache improvements. A LARGE amount of power on any GPU goes into communication with external memory, so optimising that with more cache is a huge advantage. But with the way the industry is going, these will be short-lived advantages. Once we have HBM you can get more bandwidth with significantly lower power usage anyway, so much of that power is saved automatically.

If AMD went the same route and upped the cache to optimise memory usage, that's a lot of work if they planned for, say, their first 20nm parts to use HBM; it makes more sense if HBM isn't coming for another couple of years in volume parts. It would be a fairly decent amount of architecture work that would be largely wasted once we move to different ways of linking memory to the die. The shorter the gap to HBM, the less that work is likely to pay off.
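Putting rough numbers on that 20-30% process-only figure against the 7970 mentioned earlier (purely illustrative):

# Applying a flat 20-30% process-only power drop to the 7970's 225 W.
TAHITI_WATTS = 225.0
for drop in (0.20, 0.30):
    print(f"{drop:.0%} drop: ~{TAHITI_WATTS * (1 - drop):.0f} W")
# -> ~180 W and ~158 W, which brackets the ~160 W desktop Tonga guess above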
 
R9 M295X, the M being for Mobile, as bru alluded to; that's probably the new low-power 280X in mobile form. The two R9 200 entries should be the R9 280 and R9 280X.

For the Tonga 280X to run in mobile form (M295X, in laptops) it needs to be drawing about ~100 Watts; anything more than that and you wouldn't be able to keep it cool.

As a guess I would think the desktop Tonga 280X would be running at about 160 Watts, 65 Watts down from the 7970's 225 Watts.
Which would mean a Tonga 1280SP 270X would be about 100 Watts, and a Tonga 265 about 80 Watts.

Well, that may be sorted then; it seems like a good prediction based on that information.

So as I said, this does make sense if they plan on a new high-end laptop card. I wonder how powerful the M295X will be, then, since the GTX 780M/880M is already very impressive for a laptop.
 
280X again?

7970 mk4?

Better not be another £300 card for the 5% of people who are prepared to splurge the price of a console on a single PC component.

The whole market is pretty awful. Nothing new in the mid-range is due this year, is there?

Maybe we've reached a plateau and a technology barrier. Maybe this is as good as it gets until they make some breakthroughs with new materials. Die shrinks will only be possible a few more times before you (practically speaking) can't get any smaller anyhow.

The future looks increasingly stagnant.
 
Better not be another £300 card for the 5% of people who are prepared to splurge the price of a console on a single PC component.

The whole market is pretty awful. Nothing new in the mid-range is due this year, is there?

Maybe we've reached a plateau and a technology barrier. Maybe this is as good as it gets until they make some breakthroughs with new materials. Die shrinks will only be possible a few more times before you (practically speaking) can't get any smaller anyhow.

The future looks increasingly stagnant.

Hopefully it'll turn out that we shouldn't be so pessimistic.

Things have been going well for Intel (up till 14nm, anyway), with them relentlessly increasing performance per watt by decent amounts. The only reason we didn't really notice is that they've ignored the desktop for the last ~4 years.

Graphics cards were also doing very well up till now, with performance doubling (or more) with every die shrink. Compare the GTX 580 to the GTX 780 Ti, the best Nvidia had to offer at 40nm and 28nm respectively: the 780 Ti is easily over 100% faster.

I think it's just that all the major chip makers have had issues moving to and/or below 20nm. Hopefully they'll sort it out for the next generation (16-10nm), since it's an engineering issue and not a physics issue.
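Taking the "performance doubles per die shrink" rule of thumb at face value, the compounding it implies looks like this (illustrative only; real per-node gains vary):

# Compound gains under the "doubling per die shrink" rule of thumb.
perf = 1.0
for node in ("40nm", "28nm", "20nm", "16-14nm"):
    print(f"{node}: {perf:.0f}x the 40nm baseline")
    perf *= 2.0
# -> 1x, 2x, 4x, 8x; the GTX 580 -> 780 Ti step is the 40nm -> 28nm doubling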
 
I think it's just that all the major chip makers have had issues moving to and/or below 20nm. Hopefully they'll sort it out for the next generation (16-10nm), since it's an engineering issue and not a physics issue.

^^ This.

It'll get there; it took a while to move from 45nm as well. Once 20nm is established and refined, the second wave of 20nm parts will be a massive step up from what we have now, in both power consumption and performance, and 14nm even more so. There are still a lot of performance and power-consumption improvements to be made over the next few node shrinks.
 
Just as an aside, there are plenty of people predicting that no one will go beyond 14nm, except maybe Intel.

Simply because the enormous costs and the increased rate of defects make the whole thing uneconomic.

Much like going to the Moon has been possible for decades but nobody does it, the theory is that further die shrinks beyond 14nm (or 10nm if you're optimistic) are possible, but nobody will be willing to spend the cash to build the fabs to make the chips.

I don't know if any of that is true or not, but it seems there is good reason to be realistic rather than optimistic for the future. Rate of progress has already slowed, and looks likely to slow further.
 
Just as an aside, there are plenty of people predicting that no one will go beyond 14nm, except maybe Intel.

Simply because the enormous costs and the increased rate of defects make the whole thing uneconomic.

Much like going to the Moon has been possible for decades but nobody does it, the theory is that further die shrinks beyond 14nm (or 10nm if you're optimistic) are possible, but nobody will be willing to spend the cash to build the fabs to make the chips.

I don't know if any of that is true or not, but it seems there is good reason to be realistic rather than optimistic for the future. Rate of progress has already slowed, and looks likely to slow further.

I doubt this will be the case, due to the market for phones/tablets/TVs/cars/etc.

It may cost a lot to go down nodes, but the cost of NOT going down nodes is likely much greater.

Plus, even if people do stop around 10nm, this doesn't mean processors won't get faster after that. There are many things companies are looking into to increase performance beyond die shrinks, such as better materials to support higher clock speeds, 3D chips (actual multi-layered chips, as opposed to just '3D transistors'/FinFETs), and even optical computing instead of electrical.
 