
AMD Polaris architecture – GCN 4.0

Re: Bru.

Tessellation levels can and should be established during creation; that's Nvidia's job. Once it's in the engine it's down to the developer, so yes, ultimately the developer is responsible.
 
Re: Bru.

Tessellation levels can and should be established during creation; that's Nvidia's job. Once it's in the engine it's down to the developer, so yes, ultimately the developer is responsible.

Yes, that's correct... the developer is responsible... the same developer that has just had their game labeled with "TWIMTBP" and has used GameWorks, which coincidentally is only announced just before release (not sure if it was 4-8 weeks).
:eek:
 
2004:
Manufacturer: "We can't consistently get this performance from this chip, we'll have to sell them running at around 70% of their potential."
Customer: "Yay! I clocked this chip up 500MHz and got something for free!"

2016:
Manufacturer: "We can consistently get our chips running in the top 5-8% of their potential which means people are getting the most out of these chips by default."
Customer: "Boo! Poor overclocker!" :mad:

This, so very much, this.

If you are the kind of person to overclock, then you shouldn't be bothered by the higher power rating on a warrantied product that already has the clocks you would be trying to reach... in fact, it seems like you'd be happier about it.

Personally, I'd much rather have a 4.5GHz Zen that can't hit 4.6GHz no matter what I do, than a 3.5GHz Zen that I may only be able to sometimes get to hit 4.5GHz... if I get lucky in the silicon lottery.

On the flipside, I underclock aplenty... and I understand why Intel, for example, didn't clock Sandy Bridge very high.

If Intel had released Sandy Bridge CPUs with 4GHz or 4.5GHz SKUs, then that'd be the new level of performance they'd be expected to beat with each subsequent generation. By only topping out around 3.4GHz, they knew they could just push the default clocks up for the next generation if they didn't have any real IPC improvement. That very dynamic was an issue for getting Steamroller FX parts out: they couldn't clock well enough, and AMD had very, very highly clocked Piledriver parts in the wild. The extra 6~9% IPC could only just make up for the loss in clock rate, making it an unmarketable product.
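To make that tradeoff concrete, here's a rough sketch of the arithmetic, treating performance as scaling (to first order) with IPC × clock. The clock figures below are illustrative assumptions for the sake of the example, not official specs.

```python
# Performance scales, to first order, as IPC * clock.
# Clock figures here are illustrative assumptions, not official specs.

def relative_perf(ipc_gain, old_clock_ghz, new_clock_ghz):
    """Performance of the newer part relative to the older one (1.0 = parity)."""
    return (1.0 + ipc_gain) * new_clock_ghz / old_clock_ghz

# A Piledriver-class part at an assumed 4.7 GHz vs a successor with +9% IPC:
# the successor needs roughly 4.7 / 1.09 = ~4.31 GHz just to break even.
print(round(4.7 / 1.09, 2))                      # 4.31

# The same +9% IPC part clocked at only 4.0 GHz would actually be slower:
print(round(relative_perf(0.09, 4.7, 4.0), 3))   # 0.928
```

A high-clocked predecessor leaves very little room: a single-digit IPC gain is wiped out by even a modest clock regression.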
 
Hopefully something will change before Polaris releases. If it doesn't, I'll feel forced to buy Pascal, which is indeed a shame.

As much as I'm a fan of AMD, GameWorks has gained traction and now it's almost expected that every big game release features this 'wonderful' software.

Anytime I feel like I am being forced to buy a particular product, I do my best not to buy that product. As for GPUs, gameworks features work extremely well on the consoles - which are all AMD hardware - but they then work terribly on PCs. It's an obvious case of artificial handicapping by nVidia that AMD should really pursue legally... though history has shown that these types of issues are usually too complicated for most judges or juries to comprehend.

Here, of course, the difference in performance is usually not enough to be a concern - and AMD looks like they are probably making hardware changes specifically designed to work even better than nVidia in nVidia's strong areas. Combined with everything else they have going for them - and nVidia's entrenched optimizations for DX11 - AMD has more potential for realizing improved performance.
 
I strongly disagree with this statement.
Wasn't the Kepler performance kind of evidence of that too? :p That was acted on by Nvidia very shortly after the outrage, so it clearly indicated it was something within their control. Sorry if we don't have logs and handwritten confessions from Nvidia personnel who are open to losing their jobs by outing their own company, but the writing on the wall might be enough.

Evidence doesn't win in the court of Nvidia though. I'd try and reel it back on topic but let's be honest, there's no news just yet :? Sit-and-wait game, boys.
 
The 950 is 'officially' rated at 90W, so that would put little Polaris at 28W haha. So there must have been some cherry picking going on.

Big Polaris and Big Pascal will both end up using around 250watts give or take a few.

I saw there was some maths I could do...

Now assuming the above 2 statements hold true:

Let's first round Little Polaris up to 35W for Big Polaris, to cover the possibility that scaling the performance-per-watt up to 250W isn't perfect.

Now 250/35 = 7.something. So multiply the performance of the 950 by 7... hmm, how to quantify performance...

Let's say that the 980ti is roughly double a 970 and that a 970 is roughly double a 950. The 980ti would then be roughly 4 times a 950. If Big Polaris is 7 times a 950, then it would be somewhere around 1.75x a 980ti. Round that down to 1.7 for dodgy drivers/Nvidia gameworks/etc.

If my random guesswork calculations are anything to go by, then big Polaris may perform around 70% better than a 980ti and honestly, that wouldn't be too bad given the right price. But it sure sounds rather optimistic...

I may bookmark this so I can come back and laugh at this when Polaris finally releases. What do other folks think?
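For anyone who wants to poke at the guesswork above, here it is spelled out. Every number is the post's own assumption (the power figures and the 2x perf ratios), not a benchmark result.

```python
# The post's back-of-envelope estimate, spelled out.
# All inputs are assumptions from the post, not measured data.

scale = 250 // 35              # "250/35 = 7.something", rounded down to 7

perf_950 = 1.0                 # baseline: GTX 950
perf_980ti = 4 * perf_950      # assumption: 980 Ti = ~2x 970 = ~4x 950

big_polaris = scale * perf_950         # 7x a 950
print(big_polaris / perf_980ti)        # 1.75, i.e. ~75% faster than a 980 Ti
```

The whole chain hinges on perfect performance-per-watt scaling from a ~35W part to 250W, which is exactly where the post concedes it's optimistic.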
 

My guess is AMD has 2 GPUs slated for mid 2016. This will result in up to 4 SKUs, ranging from around 35W up to around 150W, with the 150W part being a replacement for Hawaii but faster and cheaper. All of them GDDR5.

Then towards the end of the year a 3rd 'big' GPU at around 250W with HBM2. And a cut down version.

So that's total 6 SKUs.
 
You need to consider that AMD is not going for a 600mm2 die this time, but probably something more like 300mm2 to 400mm2. Plus the top-end card will have stuff dedicated towards DP compute, which adds nothing to gaming performance.

I see something like a 30% to 40% improvement over a Fury X at most, IMHO, and if AMD do better than that, it will be a massive generational improvement at launch.
 
You need to consider that AMD is not going for a 600mm2 die this time, but probably something more like 300mm2 to 400mm2.


A 30-50% transistor shrink would still mean the same number of transistors can fit on a 400mm2 die as on a 28nm 600mm2 die, would it not? Add in clock speed increases (I hope) and things look rather good?
 
You need to consider that AMD is not going for a 600mm2 die this time, but probably something more like 300mm2 to 400mm2.

No vendor is. That's suicide on a new node.
A 400mm2 die or thereabouts with twice the transistor count at 14nm will do really well at the enthusiast end; a 14nm 300mm2 die will perform better than today's 980ti and Fury X.


Compute is important depending on the game etc. It can be a 30% difference between the 390 and the 970, or 25% between a Fury X and a 980ti at 1080p.


http://www.nordichardware.se/Grafik...erlaengtade-spel/Prestandatester.html#content
 
A 30-50% transistor shrink would still mean the same number of transistors can fit on a 400mm2 die as on a 28nm 600mm2 die, would it not? Add in clock speed increases (I hope) and things look rather good?

What Polaris offers is roughly a 300mm2 die matching the enthusiast end of the previous 28nm generation; the question is just how big a die AMD goes for, as 400mm2 is doable.

However, for the average Joe we will have a 390/970 performance card at a 380/960 price point, and it will be a small die.

The future is indeed interesting.
 
No vendor is. That's suicide on a new node.
A 400mm2 die or thereabouts with twice the transistor count at 14nm will do really well at the enthusiast end; a 14nm 300mm2 die will perform better than today's 980ti and Fury X.


Compute is important depending on the game etc. It can be a 30% difference between the 390 and the 970, or 25% between a Fury X and a 980ti at 1080p.


http://www.nordichardware.se/Grafik...erlaengtade-spel/Prestandatester.html#content

There will be uber chips waiting in the wings from both vendors.

They won't see the light of day until both vendors have milked the market for all it's worth with the mid range stuff first. This will also give them time to improve yields for later on when these chips are needed.

Die shrinks are getting harder to implement and take longer to bring to the market so 14/16nm is going to be with us for a very long time. This will make the need for uber chips an absolute must.
 
I suspect there are going to be some disappointed AMD fans, as I think AMD now realise that to be successful they have to play the games all the large corporates play: milking your customers, selling essentially the same products to different customer groups, price fixing, rebranding, exaggerated marketing, etc.

It goes on in so many industries. White goods manufacturers will sell the same internals with a different shell and warranty to customers with varying budgets. LCD screen manufacturers fined by the EU for price fixing. The aforementioned VW scandal. The GPU rebranding we've seen. There are so many examples, some illegal and most just taking advantage of the rules but really quite immoral imo.

I'm not saying AMD will do any of the above but they need to be more ruthless if their competition are which means less of a bargain for the consumer and may make deciding which way to jump a little more difficult. That said I don't see many of the savvy people on here struggling to unravel the corporate bull ;)
 
AMD aren't stupid... for all the flak they get, they are still run by people with at least half a brain, and it would take a bunch of complete brain-dead morons to blow their entire load in one shot and put out the most powerful product they could manage while holding nothing back. Come on, does anyone SERIOUSLY think that will happen and we the consumer won't get milked like we ALWAYS do?? It's not even about being ruthless, it's business 101 and pure common sense!
 
There will be uber chips waiting in the wings from both vendors.

They won't see the light of day until both vendors have milked the market for all it's worth with the mid range stuff first. This will also give them time to improve yields for later on when these chips are needed.

Die shrinks are getting harder to implement and take longer to bring to the market so 14/16nm is going to be with us for a very long time. This will make the need for uber chips an absolute must.

I don't see AMD holding back chips if they have them available!
Taking the performance crown by a nice margin would help them a lot.

That's if they're first to market anyway; I'm sure Nvidia would like to match them with midrange if they could lol, but I don't think so this time!
 