AMD Polaris architecture – GCN 4.0

It's worse than I thought. If I understand correctly, they only have one mid-range GPU for desktop in 2016, only ONE... so once again 2016 turns out to be a bust on the PC side; people have to wait for 2017. Yay!
 
Don't forget to factor in any tech improvements (even better color compression, etc.) that might improve things.

Do we know anything yet about the Polaris architecture, or just the die size etc.?
 
People complain about these tiny little increases in power, but who is it that keeps upgrading from GTX 980s to GTX 980 Tis? It seems these days that tech is getting stagnant and people are more than happy to buy a little performance increase here and there!
 
Where does it say only one mid-range GPU for 2016?

It'll be a "mid-range" £400 card ;) And a salvaged part for £300 :p But the big guns are apparently not due till next year.

The other GPU is a laptop/low-end card targeting console bracket performance.

And it's on record that AMD will only produce "two new GPUs" for 2016.
 
Where does it say only one mid-range GPU for 2016?

AMD Polaris to get two GPUs this year

Raja Koduri confirmed that AMD/RTG has two versions of FinFET GPUs in development: Polaris 11 and Polaris 10. The first processor is allegedly planned as a mid-range solution for desktops and something more of a high-end solution for notebooks. The latter should definitely make an appearance in the enthusiast portfolio by replacing the Fury X. Of course, those codenames should end up as something fancier and easier to remember as we get closer to the launch, as explained in this part of the interview.

one for desktop and one for notebook

Full article here: videocardz
At best I see them as 390 and 390X kind of GPUs.
 
AMD Polaris to get two GPUs this year

Raja Koduri confirmed that AMD/RTG has two versions of FinFET GPUs in development: Polaris 11 and Polaris 10. The first processor is allegedly planned as a mid-range solution for desktops and something more of a high-end solution for notebooks. The latter should definitely make an appearance in the enthusiast portfolio by replacing the Fury X. Of course, those codenames should end up as something fancier and easier to remember as we get closer to the launch, as explained in this part of the interview.

one for desktop and one for notebook

Full article here: videocardz
At best I see them as 390 and 390X kind of GPUs.

I don't think VC is correct. We know the first card is a low-end product, not a mid-range one. They've already officially said it will "bring console performance to notebooks".

Console performance =/= desktop mid-range. On a desktop, console performance is low-end. Notebooks be damned. Console performance is 7870 level. Nobody is calling that mid-range these days.

The 2nd GPU is the "mid-range" one. The "enthusiast" part doesn't exist / hasn't taped out yet.
 
You're aware the 980 Ti has higher power consumption than the 290X and Fury X, especially when overclocked, right? :)

I agree, that grammar is atrocious!

If you look carefully at the reputable benchmarks I linked from Anandtech, you can see that the overclocked 980 Ti has higher power consumption than the Fury X.

So no, I wasn't wrong about this.

Since the 980 Ti is an 'overclocker's dream', I feel it's very relevant to mention, as many people are overclocking these cards.

Yes you were!

Your first statement implies stock vs. stock, and even more so when overclocked. That is also a weird thing to say; you might as well say the Fury X is faster than the 980 Ti when you drop the clocks down on the 980 Ti.

Not that it really matters; I don't think any of us cares about a few pence on the leccy bill. Neither AMD nor Nvidia is bad for power draw, and so long as the TDP is dealt with sufficiently, it matters even less.
 
Hard to believe AMD/NV will have had to wait two years after the node came online (Dec 2014 -> late 2016) to get their top chips out. They used to get first dibs and phone chips had to wait. I don't like the modern era.
 
People are taking the "two GPUs" too literally.

The Fury X, Fury and Nano all use the same GPU (as do the 980 and 980 Ti on Nvidia's side?). So you could have "just" two GPUs but multiple differently specced cards based on each standard GPU?

Wouldn't surprise me if we get a full-fat GPU which has parts disabled, as we always do.
 
Hard to believe AMD/NV will have had to wait two years after the node came online (Dec 2014 -> late 2016) to get their top chips out. They used to get first dibs and phone chips had to wait. I don't like the modern era.

Not really; FinFET+ isn't that old. Besides, Samsung only started LPP FinFET mass production a few days ago, and the chips will still be fairly small. The reason for the delay is that anything under 28nm had too much leakage to be viable for GPUs; FinFET solved that problem.
 
I thought they said they were going to go top to bottom this time; now it's only two new ones and Rebrands Are Us, so I suppose the 290X will get the 490X moniker when it's wheeled out again.
 
Hard to believe AMD/NV will have had to wait two years after the node came online (Dec 2014 -> late 2016) to get their top chips out. They used to get first dibs and phone chips had to wait. I don't like the modern era.

14nm LPP went into production basically within the past couple of weeks, and Polaris went into production in the past two weeks as well. 14nm wasn't available two years ago; 20nm was, and it offered the transistor density but not the power improvement. Very few customers went 20nm, even among phone-chip makers, and of those who did, most had disappointing performance gains and trouble with costs, yields and other problems. Being the first shot at FinFETs, 16nm FF and 14nm LPE are really just the first 'recipe' for the processes. They had lower yields, which is fine on 50-125mm^2 mobile phone chips, where by and large the chips are priced at almost commodity level.

Take a $500 phone that costs $200 to make, giving $300 profit, versus a $25 chip that costs $20 to make, giving $5 profit. If you don't make that $5 profit on the chip but still make $300 on the device, it doesn't matter. For AMD, who sell the chips themselves, making a dramatically smaller profit doesn't work.
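To put rough numbers on that, here's a minimal sketch. The $500/$200 phone and $25/$20 chip figures come from the argument above; the wafer cost and die count are made-up illustrative values, not real figures.

```python
# Back-of-envelope: why low yields hurt a chip vendor far more than a
# device vendor. Wafer cost and dies-per-wafer are invented for
# illustration; only the phone/chip prices come from the post above.

def cost_per_good_die(wafer_cost, dies_per_wafer, yield_fraction):
    """Effective manufacturing cost of one *working* die."""
    return wafer_cost / (dies_per_wafer * yield_fraction)

phone_profit = 500 - 200   # the device vendor keeps ~$300 regardless

for y in (0.9, 0.7, 0.5):
    cost = cost_per_good_die(wafer_cost=8000, dies_per_wafer=500, yield_fraction=y)
    print(f"yield {y:.0%}: die costs ${cost:.2f}, profit on a $25 part ${25 - cost:+.2f}")
```

With these numbers, 90% yield leaves about $7 of profit per chip, 70% leaves about $2, and at 50% every chip sold loses money, while the phone maker's ~$300 per device barely moves. That asymmetry is why early, low-yield nodes suit device vendors better than chip vendors.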

The 'second gen' of FinFET is really just experience on the process giving a refined 'recipe', which brings, among other benefits, the higher yields required both for larger chips and for companies who make their profit on the chips, not the devices.

Other new nodes were still planar; FinFET is a huge, huge step in terms of finding yields, with big potential yield problems, and coupled with double patterning this is the single most complex transition in process history.

So no: when the process was ready, GPUs were amongst the first to go into full-scale production. Intel transitioned to FinFET on 22nm before the move to double patterning; the biggest difference between Intel's 22nm and 14nm is double patterning, and look how much trouble that caused the industry leader. Throw FinFET in as well, and having a 6-12 month period refining the first round of FinFET is a very smart move for AMD/Nvidia. Jumping in early would have meant dire yields and huge costs, actually reducing their profits (compared with selling high-yield 28nm chips), and waiting also avoids the need to tape out products twice.
 
I thought they said they were going to go top to bottom this time; now it's only two new ones and Rebrands Are Us, so I suppose the 290X will get the 490X moniker when it's wheeled out again.

The low end has effectively gone now. A 16nm, let's say 50mm^2, part would be no faster than an APU, and considering every single laptop or low-end desktop sold has an APU, such parts are nearly worthless now.

So most generations would start off with, on AMD's side, a 50-75mm^2 low-end chip, a 150-200mm^2 mid-range chip and a 250-350mm^2 high-end chip.

Scratch off the low end; mid-range and high-end chips are coming in 2016. A 350mm^2 chip or thereabouts would beat a Fury X, but not by a huge amount (well, a bigger-than-normal architecture change for AMD plus a second-gen HBM controller could change that). The mid-range will replace everything below that. 2017 will bring either mid-sized 10nm parts late in the year, or big 14nm products, probably 450-525mm^2 in size.
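As a rough sanity check on the Fury X claim, here's a sketch assuming (and this is an assumption, not a published figure) that the 28nm-to-14nm FinFET jump delivers roughly 2x logic density:

```python
# Rough check of "a ~350mm^2 14nm chip beats Fury X, but not hugely".
# ASSUMPTION: 28nm -> 14nm FinFET gives roughly 2x logic density.

DENSITY_GAIN = 2.0
FIJI_AREA_MM2 = 596      # Fury X (Fiji) die size on 28nm
CANDIDATE_MM2 = 350      # the hypothetical 14nm high-end die

effective_28nm = CANDIDATE_MM2 * DENSITY_GAIN
print(f"350mm^2 at 14nm ~= {effective_28nm:.0f}mm^2 of 28nm silicon, "
      f"or {effective_28nm / FIJI_AREA_MM2:.2f}x Fiji's transistor budget")
```

That works out to roughly a 1.17x transistor budget over Fiji: ahead of Fury X, but not by a huge amount unless architecture or clock-speed gains add more on top, which matches the estimate above.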

There will probably be a rebranded, dirt-cheap low-end card; it's a commodity, and some people need them for a failing old GPU or just for extra outputs. Why use an ultra-expensive 14nm process, which the performance needed in a low-end card doesn't remotely require, when you can use an ever-cheaper 28nm process instead? As people switch to newer processes, demand on the old one decreases, so its cost goes down. Likewise, every wafer start spent on a super-low-end card is one more wafer you can't use for high-end parts with far higher profit.

The low end has been pretty much rebrands for 5+ years because it's not about performance.
 
Scratch off the low end; mid-range and high-end chips are coming in 2016.

Only, these new "mid-range" GPUs are targeting laptops and aim to bring "console-class" performance.

For many of us here on this forum, that is what we'd expect from the "low-end" of the new 2016 GPUs.

The "high-end" as you yourself said will be roughly Fury performance, maybe up to 20% more (maybe less). Personally that's what I'd want from the "mid-range" of AMD's new line-up.

So those wanting Fury +50% - which could legitimately be called "high-end" - are not getting anything this year.

We're actually getting low-end and mid-range cards, based on their performance jump relative to the previous gen.

Although we don't know precisely how well the higher-end GPU will perform, it's a safe bet that the notebook-targeted GPU, with "console-class" performance, will be dire.

e: Put it this way: if the lesser of the two chips is not a good upgrade for a 380 or a 290, then it's not mid-range, it's low-end.
 
A low-end GPU is about providing outputs, video acceleration and actually letting your screen work. You don't want, and no one anywhere wants, console-level performance in a machine they don't game on. The suggestion that the low end should target console-level performance is beyond absurd, and no one thought the low end in 2016 would target that.


People just need to accept that big chips won't come out early any more. It was barely feasible at 65nm, it was a cluster**** at 40nm for the only company that tried it, and it didn't happen at 28nm; even when the big chip did come, it was expensive and had fairly low yields. The low-end GPU is still called a GPU, but we're not talking about a gaming product.

Realistically there used to be low end, mid-range and high end, except... the low end was close enough to mid-range that you could use it for gaming. But after a number of years, the need for the low end to be as small as possible for non-gamers meant it didn't (and shouldn't) scale upwards, so it stayed with as few shaders as possible.

So now we have low-end (no gaming), mid-range, high-end and ultra-enthusiast parts: low end <75mm^2, mid-range 125-175mm^2, high end 300-400mm^2, ultra high end 450-600mm^2.

The mid-range and high end will come out fairly early on a new process; the ultra high end simply can't and follows later, while the low end is simply immaterial to gamers, and new architectures will rarely provide a worthwhile improvement to it. By the nature of new processes, each level (except the low end) will beat the cards one level above it on the previous process. The high end will beat Fury X; the mid-range will be at 7970 level, maybe 290X. Size-wise, the mid-range would probably be best served landing between those two cards, in terms of better segmenting the performance brackets for AMD.


It's not sensible at all to say a likely sub-200mm^2 card (for the smaller core) should be a good upgrade over a 450mm^2 290; it's just not sensible. It's making up targets at random just to get disappointed that AMD didn't beat physics.
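For anyone who wants the yield side of that argument in numbers, here's a minimal sketch using the classic Poisson defect-yield model Y = exp(-D0*A). The defect density is an assumed figure for a young process, not a published 14nm/16nm number:

```python
import math

# Why big dies are disproportionately painful on a new process: classic
# Poisson defect-yield model, Y = exp(-D0 * A). D0 below is an assumed
# value for an immature node, not a published figure.

WAFER_AREA_MM2 = math.pi * (300 / 2) ** 2   # 300mm wafer, ignoring edge loss

def good_dies_per_wafer(die_area_mm2, d0_per_mm2):
    candidates = WAFER_AREA_MM2 / die_area_mm2        # crude die count
    die_yield = math.exp(-d0_per_mm2 * die_area_mm2)  # Poisson yield
    return candidates * die_yield, die_yield

D0 = 0.003  # assumed: 0.3 defects/cm^2 on a young process
for area_mm2 in (75, 150, 350, 550):  # the size brackets from the post
    good, y = good_dies_per_wafer(area_mm2, D0)
    print(f"{area_mm2:>3}mm^2: die yield {y:.0%}, ~{good:.0f} good dies/wafer")
```

With these illustrative numbers, the 75mm^2 part yields ~80% and ~750 good dies per wafer, while the 550mm^2 part yields ~19% and ~25 good dies: the cost per good die explodes with area on a young process, which is the economics behind big chips arriving later.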
 
A low-end GPU is about providing outputs, video acceleration and actually letting your screen work. You don't want, and no one anywhere wants, console-level performance in a machine they don't game on. The suggestion that the low end should target console-level performance is beyond absurd, and no one thought the low end in 2016 would target that.

People just need to accept that big chips won't come out early any more. It was barely feasible at 65nm, it was a cluster**** at 40nm for the only company that tried it, and it didn't happen at 28nm; even when the big chip did come, it was expensive and had fairly low yields. The low-end GPU is still called a GPU, but we're not talking about a gaming product.

Realistically there used to be low end, mid-range and high end, except... the low end was close enough to mid-range that you could use it for gaming. But after a number of years, the need for the low end to be as small as possible for non-gamers meant it didn't (and shouldn't) scale upwards, so it stayed with as few shaders as possible.

So now we have low-end (no gaming), mid-range, high-end and ultra-enthusiast parts: low end <75mm^2, mid-range 125-175mm^2, high end 300-400mm^2, ultra high end 450-600mm^2.

The mid-range and high end will come out fairly early on a new process; the ultra high end simply can't and follows later, while the low end is simply immaterial to gamers, and new architectures will rarely provide a worthwhile improvement to it. By the nature of new processes, each level (except the low end) will beat the cards one level above it on the previous process. The high end will beat Fury X; the mid-range will be at 7970 level, maybe 290X. Size-wise, the mid-range would probably be best served landing between those two cards, in terms of better segmenting the performance brackets for AMD.

It's not sensible at all to say a likely sub-200mm^2 card (for the smaller core) should be a good upgrade over a 450mm^2 290; it's just not sensible. It's making up targets at random just to get disappointed that AMD didn't beat physics.

You and I have very different ideas about what constitutes a low-end GPU, if you think a low-end dGPU is just about providing graphics outputs. You can do that with the most basic integrated graphics. You don't even need a dGPU for that.

That's one of the craziest things you've ever said, dm.

Listen, a 7870 is low-end in /today's/ dGPU range. Let alone the next gen.

A 380/290 is mid-range /right now/. If the 2016 "mid-range" cards aren't a big upgrade from a 380/290 then they aren't mid-range.

Unless you think no progress from 28nm to 14nm is perfectly fine. Or that all we want is power reduction from these new cards.

Heck no. Those of us on 970/380/290 (etc.) cards want a new /mid-range/ card that performs better, by a long way!!

e: Again, if a 380/290 is mid-range today, then a console-class (7870 perf) card cannot be mid-range tomorrow. Just can't work like that.

You're suggesting that "mid-range" performance can actually get worse with a new generation. That makes no sense. There is only one AMD GPU this year that appears able to beat a 380/290, and you're calling that the "high-end" GPU...

e2: AMD describe the lesser of the two GPUs as a "power-sipping" design for "thin and light notebooks". There is no way /in hell/ this GPU is beating a 290/380, so it really doesn't deserve to be called "mid-range". "Console class" is 30FPS at 1080p...
 
You and I have very different ideas about what constitutes a low-end GPU, if you think a low-end dGPU is just about providing graphics outputs. You can do that with the most basic integrated graphics. You don't even need a dGPU for that.

Ah, because you don't NEED a low-end discrete GPU... so they've literally stopped existing completely. I see we're debating using logic.

That's one of the craziest things you've ever said, dm.

Listen, a 7870 is low-end in /today's/ dGPU range. Let alone the next gen.

Saying it doesn't make it true, and it doesn't make the 290X any smaller or cheaper either. The 7870 was mid-range and the 7970 high end at launch; they are 200/350mm^2 respectively (give or take), and moving to the next process you'd expect to find similar performance in 100/175mm^2 cores... which is what I said. You randomly believe something mid-range, around the 175mm^2 size, should skip competing with a 7970 and just beat a 290X's 438mm^2 core, because you've decided that the silicon industry is wrong and that you decide how much things cost and what bracket they are in. Not pesky things like wafer costs, yields, die sizes, power usage... but your personal wishes and beliefs.

Good for you; for other people living in the real world, a 14nm 150-200mm^2 core isn't going to easily beat a 28nm 438mm^2 core. A 438mm^2 core on ANY process is NOT mid-range, at all, by any sensible measure.
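The same ~2x density assumption as before (an assumed figure, not a measured one) puts rough numbers on this:

```python
# ASSUMPTION: 28nm -> 14nm FinFET gives roughly 2x logic density.
DENSITY_GAIN = 2.0
HAWAII_MM2 = 438            # 290/290X (Hawaii) die size on 28nm
SMALL_POLARIS = (150, 200)  # the thread's guess for the smaller core

equivalent = HAWAII_MM2 / DENSITY_GAIN
print(f"Hawaii's transistor budget needs ~{equivalent:.0f}mm^2 at 14nm")
for mm2 in SMALL_POLARIS:
    print(f"{mm2}mm^2 at 14nm carries ~{mm2 / equivalent:.2f}x Hawaii's transistors")
```

At ~2x density, matching Hawaii's transistor count needs roughly 219mm^2 of 14nm silicon, so even the optimistic 200mm^2 guess comes in slightly under it; beating a 290 would have to come from architecture and clocks, not the shrink alone.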

You're suggesting that "mid-range" performance can actually get worse with a new generation. That makes no sense. There is only one AMD GPU this year that appears able to beat a 380/290, and you're calling that the "high-end" GPU...

e2: AMD describe the lesser of the two GPUs as a "power-sipping" design for "thin and light notebooks". There is no way /in hell/ this GPU is beating a 290/380, so it really doesn't deserve to be called "mid-range". "Console class" is 30FPS at 1080p...

This is all based on you deciding that the 7970/680/7970/780 Ti all belong together in the same performance bracket, despite being vastly different-sized cores, because you want them to.
 