AMD Polaris architecture – GCN 4.0

So they are making pleb chips on GF and the daddy on TSMC.

I said LPP was trash but it was fingers in ears and head in sand all round.
Actually, since the GF process has worse power consumption characteristics, and since we don't know whether they are using HBM2 or GDDR5/GDDR5X on the low-end part, the efficiency improvement might be somewhat understated.

If this is the case, that means only good things for the higher-end chips, as they have a much higher TDP and power limit to play with!!
 
I know 90% of your posts are trolling, but:

1) Anandtech are frequently wrong, and if this chip is GF it's a very big surprise. Even if it is GF, that doesn't mean production chips will be.

2) I seriously doubt any of the initial chips will be GF, as it'd be a nightmare logistically. Their 14nmFF plant is in New York. For any volume product (as opposed to test chips) they'd have to ship the GPU from NY to UMC in Taiwan or Samsung in South Korea to have it mounted on the interposer, then have the interposer shipped to China or Malaysia to be mounted on the PCB and have the cooler fitted. Zen or APUs make more sense for GF, though Zen is now also expected to be Samsung, at least initially.

3) Greenland is definitely Samsung. It's their big chip. It would defy all logic if any of their big chips (or maybe any of their '16 chips at all) were TSMC.

That's assuming they are using HBM2 for the small chip and not GDDR5/GDDR5X, which is cheaper and higher volume.

The GF/Samsung process is meant to be denser but have worse power consumption characteristics than the TSMC process, meaning it might make more sense for their higher-end chips to use TSMC.

Also, having their larger-volume chips on GF will probably help with the WSA they have with GF too.

There is also one other consideration: Apple. Apple is buying up lots of capacity from TSMC for 16nm, which probably means both Nvidia and AMD will be struggling for capacity for a few months. GF/Samsung already have a working process, so as Apple takes more of TSMC's output, AMD can use GF/Samsung capacity to get GPUs out quicker.

This is the thing: cards like the GTX 750 Ti were boring for most of us. However, that GPU started the rot in AMD's market share over the last 18 months, as it displaced AMD from quite a few laptops and OEM desktops. So I expect this 120mm² part might be a pipe cleaner like the HD 4770 was, but also an attempt to win back market share in prebuilt PCs from Nvidia, especially if the competing lower-end Nvidia GPUs are not being released for a while.
 
Remember when the Nvidia 900 series came out and used less power, and "everyone" was saying that nobody cares about power efficiency, we just want a powerful card?

Now that AMD seem to be releasing a power-efficient card, it suddenly seems like lots of people do care about this.

I think you will find that it was the complete opposite... Do you not remember the countless 970 vs 290 threads, and how the usual lot kept going on about power draw and efficiency being "superior" to the AMD cards and being the main reason to get a 970 over a 290? Heck, I think we even had a couple of threads praising how good the power efficiency and low power draw were... Someone worked out that the difference between a 290 and a 970, with the same hours of usage over 2 years, came to about a £50 difference, lol...
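For what it's worth, that £50 figure is easy to sanity-check with a back-of-the-envelope calculation. The wattage gap, gaming hours and unit price below are purely my own assumptions, not the original poster's numbers:

```python
# Back-of-the-envelope running-cost difference between two cards.
# Every figure here is an assumption for illustration, not a measurement.

power_delta_w = 100      # assumed extra draw of a 290 over a 970 under load (W)
hours_per_day = 5        # assumed gaming hours per day
days = 2 * 365           # two years
price_per_kwh = 0.14     # assumed electricity price (GBP per kWh)

extra_kwh = power_delta_w / 1000 * hours_per_day * days
extra_cost = extra_kwh * price_per_kwh

print(f"Extra energy over two years: {extra_kwh:.0f} kWh")
print(f"Extra cost: about £{extra_cost:.0f}")
```

With those assumptions it works out to roughly £50 over two years, i.e. pennies per week.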
 
I don't care about power, tbh. So long as the price is right and the performance is a worthwhile step up over what I have now, I'll be buying.

AMD hopefully won't mess this one up. I had money ready for the Fury X but I couldn't buy one, and then the issues with pump noise just completely put me off...

They need to make this a successful release, and any water-cooled version needs to come without pump issues.
 
Remember when the Nvidia 900 series came out and used less power, and "everyone" was saying that nobody cares about power efficiency, we just want a powerful card?

It's swings and roundabouts; the same was said by Nvidia fans when the GTX 480 was released. I think the truth is most enthusiast PC GPU buyers don't care about power usage as long as it's within reason. It's only the fundamentalists on either side who will cling to whatever minor victories they can, regardless of how insignificant.
 
Dear lord, power efficiency ALONE, with no performance improvement, is meaningless. The entire process and silicon manufacturing business is based around performance gained from improved power efficiency.

I know it's confusing for some people that the same words can be used with different meanings in different situations, but it's really pretty simple.

If you have a card that gets a 15000 score in a benchmark and uses 250W, then another card comes along and gets a 15000 score in the same benchmark but uses 150W... you get better power efficiency, but it doesn't improve performance.

If you have a new card that gets a 28000 score and uses 250W, you get increased performance. The only thing that allows this to work is power efficiency. Without it, that score would take 500-600W of power to achieve, and neither AMD nor Nvidia is willing to make a single high-end core that uses that much power.
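To put those hypothetical cards side by side, here's a trivial sketch of the arithmetic (the scores and wattages are just the made-up figures above):

```python
# Performance per watt for the three hypothetical cards described above.
cards = {
    "old card":       {"score": 15000, "watts": 250},
    "efficient card": {"score": 15000, "watts": 150},  # same performance, less power
    "faster card":    {"score": 28000, "watts": 250},  # more performance, same power
}

for name, card in cards.items():
    perf_per_watt = card["score"] / card["watts"]
    print(f"{name:15s} {card['score']:6d} points at {card['watts']:3d} W "
          f"= {perf_per_watt:.0f} points/W")
```

Both the "efficient" and the "faster" card have better performance per watt than the old one; the only difference is whether that headroom is spent on lower power or on more performance.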

Every single process that comes through, be it 150nm, 65nm or 14nm, brings with it a roughly 50% power reduction per transistor and twice the density... without BOTH of these together you don't get significantly faster chips. Twice the transistor density with the same power per transistor would double the power in a given area; with a 250 or 300W limit you couldn't actually get more performance. You'd just have the same chip at half the size, which is useful, but ultimately we already have that level of performance.

The 980 was boring because it brought an existing level of performance for a minimal saving in money from power usage.

On the other end of the scale, with the same transistor density but half the power per transistor, you're limited by reticle size (essentially the largest area the light source can project an image over). So ~600mm² is the limit, with very poor yields at that size; 500mm² is doable but not great, and expensive. Halving power could theoretically mean a 250W chip with twice the performance, but it would need to be 1000mm², which is literally not possible.

Double transistor density AND half the power allows you to make a new chip roughly the same size, at roughly the same power, with roughly double the transistor count. This is the fundamental basis of the entire chip fabrication industry... but people in this thread think performance per watt is 'new' or a change of direction.

Incidentally, this is precisely why 20nm sucked so hard: a massive cost increase (due to double patterning, a longer manufacturing process, worse yields, and a much more expensive and difficult tape-out), which is all theoretically fine except that it roughly doubled transistor density but massively missed the 50% power reduction target; it was closer to 20%. Finfets barely change density but drastically reduce power. 28nm to 14/16nm finfets is essentially one 'normal' node, with double the transistor density and a little over 50% power reduction (best case, as always).
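The same node-scaling argument in numbers. This is a toy model, not fab data; the 250W budget, the 500mm² die and the scaling multipliers are just the rough figures from the discussion above:

```python
# Toy model of node scaling, using the rough multipliers from the post above.
# The 250 W power limit and 500 mm^2 die are assumptions for illustration.

BASE_POWER_W = 250   # assumed single-GPU power limit
BASE_AREA_MM2 = 500  # assumed "big but still yieldable" die on the old node

scenarios = {
    # name: (transistor density multiplier, power-per-transistor multiplier)
    "x2 density only":                           (2.0, 1.0),
    "half power only":                           (1.0, 0.5),
    "full node (x2 density, half power)":        (2.0, 0.5),
    "20nm-like (x2 density, ~20% power saving)": (2.0, 0.8),
}

for name, (density, power_per_transistor) in scenarios.items():
    # Same die area as before: how much power would it draw?
    same_area_power = BASE_POWER_W * density * power_per_transistor
    # Stay inside the old power budget: how many transistors fit, and how big is the die?
    transistor_gain = 1.0 / power_per_transistor
    die_area_needed = BASE_AREA_MM2 * transistor_gain / density

    print(f"{name:44s} same-size die: {same_area_power:4.0f} W | "
          f"within {BASE_POWER_W} W: {transistor_gain:.2f}x transistors "
          f"on ~{die_area_needed:.0f} mm^2")
```

Only the full-node row gets you roughly double the transistors at the same die size and power; the 20nm-like row shows why doubling density without the power saving buys very little within a fixed budget, which is exactly why 20nm was so disappointing.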
 
How people can say power consumption (performance per watt) is irrelevant is beyond me. :P

Better performance per watt is needed to build faster GPUs within the same TDP!!

Come on people, take the stupid hats off.

We will see low-power, decent-performance laptop and entry-level PC GPUs, and high-end (250W TDP) performance monsters way beyond current cards. Improvements in performance per watt allow this...

Can't wait for the new tech; I've been waiting far too long for a real advance. I like the idea of a very low-power but good-performing gaming system in a small form factor for the lounge, and an absolute beast GPU for my desktop PC (Pascal Titan / full-fat Polaris '250W' high end). Roll on the new stuff.
 
I think you will find that it was the complete opposite... Do you not remember the countless 970 vs 290 threads, and how the usual lot kept going on about power draw and efficiency being "superior" to the AMD cards and being the main reason to get a 970 over a 290? Heck, I think we even had a couple of threads praising how good the power efficiency and low power draw were... Someone worked out that the difference between a 290 and a 970, with the same hours of usage over 2 years, came to about a £50 difference, lol...

I think it was in those threads that people were saying nobody cares about power efficiency.

It's swings and roundabouts; the same was said by Nvidia fans when the GTX 480 was released. I think the truth is most enthusiast PC GPU buyers don't care about power usage as long as it's within reason. It's only the fundamentalists on either side who will cling to whatever minor victories they can, regardless of how insignificant.

Well, indeed, I'm sure that, as with so many things, it does go around in circles. But that just shows how many people just troll these forums looking to "get one over" on the other side/team/vendor. It doesn't make it right just because both sides do it.

I can't be the only one who gets fed up of reading a group slating a technology or feature at one point, and then six months later posting links and praising the other side for pushing the future tech when they do the same thing.
 
So why do people think AMD's stuff is coming first??
It wasn't so long ago that people were saying Nvidia's is first, maybe as early as Feb/March?

I'm surprised we're seeing tech demos so early, but I don't think we should read too much into that :)

Didn't AMD themselves say "back-to-school period, 2016"? If so, nV will surely be first. Back-to-school is August, give or take.
 
Well, indeed, I'm sure that, as with so many things, it does go around in circles. But that just shows how many people just troll these forums looking to "get one over" on the other side/team/vendor. It doesn't make it right just because both sides do it.

I wasn't claiming it makes it right because both sides do it. I was pointing out the hypocrisy in your post while you were attempting to point out the hypocrisy in others. The irony was overflowing from your entire post.
 
Only a handful of people were saying that.

I think a lot of people understand that if you want cool, quiet cards that don't run up your bills while they render a desktop, then you need efficient chips. I think people also understand that if you want more graphical power out of your chips, you need to be more power efficient in order to get more transistors within your power budget.

I'm not quite sure where Googalymoogaly is getting his ideas from, because there aren't many people who really expect or want 1KW-plus graphics cards heating up their houses.
 
I think a lot of people understand that if you want cool, quiet cards that don't run up your bills while they render a desktop, then you need efficient chips. I think people also understand that if you want more graphical power out of your chips, you need to be more power efficient in order to get more transistors within your power budget.

I'm not quite sure where Googalymoogaly is getting his ideas from, because there aren't many people who really expect or want 1KW-plus graphics cards heating up their houses.

I didn't mention 1KW cards, I said around 300W.
 
I didn't mention 1KW cards, I said around 300W.

We're already there due to PCIE limits, so the only way to get more graphical power is more transistors within this power limit. Each transistor therefore has to be more efficient.

You're suggesting that lots of people don't care about power limits because they want more graphics power and don't care about power usage. Tell them they need to buy 1 KW plus cards, and they'll balk - look at how few people go SLI/Crossfire/Trifire/Quadfire. People who don't understand might post hyperbole, but tell them they need to spend that much more, and all of a sudden, it's not that important.

Efficiency is important, even if a few people don't understand that. Being power efficient isn't the same as being low power. Being power efficient means you can do more, not less.
 
Can we have a thread that stays on topic, and isn't a handbags at dawn stylee yawnfest? It's 2016? Doesn't that mean anything? :p
 
I wouldn't be surprised if AMD used HBM in all of their performance tiers. It's a matter of pushing down more than just the cost of HBM manufacturing; it also pushes down the cost of assembly.

Especially around interposers, considering they will be using HBM with their APUs etc.
 
I didn't mention 1KW cards, I said around 300W.

No, but you did say this.

Remember when the Nvidia 900 series came out and used less power, and "everyone" was saying that nobody cares about power efficiency, we just want a powerful card?

The word "everyone" is the salient point here. It was most certainly NOT "everyone" who was saying "nobody cares about power consumption". A few people may have been, but not "everyone". Now do you see why "some of us", but not "everyone", took the time to respond to your ludicrous claim?
 