
AMD Polaris architecture – GCN 4.0

There were some interesting bits in yesterday's AMD webinar:

"Those two GPUs will surely have their cut-down versions and will have a wide range of different board designs."

"AMD also pointed out one of their key features in their Polaris architecture which is Asynchronous Compute."
"AMD is very keen on pushing this feature into upcoming titles and they stated they will drive it even more aggressively than before."
"they specified that they were able do get to get more performance out of each transistor not only thanks to 14nm FinFET process but also by virtue of the various optimizations within the Polaris architecture itself."

"We asked them if the efficiency improvements that AMD specified in their previous presentations are uniform for all GPUs AMD will offer or if it’s just for the most efficient GPU of the whole family. We were lucky enough to get my question answered with a little tiny bit of extra unexpected spice. AMD answered that there will be wide range of different board designs which means some will have different clocks than others. This means that some will be more efficient than others but generally the capability and improvements will be there."
"AMD mentioned AIB partners and that some of them will most likely try to drive the clocks as high as possible with their own designs, which would mean lower efficiency."
"AMD is not limiting AIB partners with clockspeeds for the sake of better efficiency"

This might mean they were mainly aiming for efficiency, not performance, but AIB partners might do the opposite. The earlier mention of a 175W TDP but only 120-130W real power consumption pointed in the direction that they have room for improvement.
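(Rough arithmetic on those figures: taking ~125W against a 175W TDP leaves around 50W, i.e. roughly 30% of board-power headroom for partners to spend on higher clocks.)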
 
Actually it was D.P. who posted it in the Pascal thread: https://forums.overclockers.co.uk/showpost.php?p=29515546&postcount=8854

IF Nvidia had async compute they would have ensured it was turned on for the 1000 series, to show proof in AOTS that they had improved considerably.

What about Hitman?
That game also clearly proves Nvidia is rubbish at async compute due to having no hardware support for it.


I am not sure why people are trying to defend Nvidia, as the more people try to say Nvidia has suitable async compute, the worse it will look when new games arrive.
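For anyone wondering what "async compute" actually means at the API level, here is a minimal D3D12 sketch (my own illustration, not anything from AMD, Nvidia or the games above; the helper name and struct are made up): the application creates a compute-only queue alongside the normal graphics queue and submits work to both. Whether those two streams actually overlap on the GPU is down to the hardware and driver, which is exactly what the argument is about.

```cpp
// Minimal D3D12 sketch of async compute: one graphics queue plus one
// compute-only queue. Error handling omitted for brevity.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Hypothetical container for the two queues (not a real D3D12 type).
struct Queues {
    ComPtr<ID3D12CommandQueue> graphics;
    ComPtr<ID3D12CommandQueue> compute;
};

Queues CreateAsyncComputeQueues(ID3D12Device* device)
{
    Queues q;

    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;      // graphics + compute + copy
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&q.graphics));

    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute/copy only
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&q.compute));

    // Command lists submitted to q.compute can, on hardware with independent
    // compute engines (e.g. GCN's ACEs), execute concurrently with rendering
    // submitted to q.graphics; on other hardware the driver may serialise them.
    return q;
}
```

The API lets any DX12 title create such a queue; the whole debate is about how much real concurrency different GPUs extract from it.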
 
Thanks for that webinar summary, finally some new info instead of the endless bickering here.
 
It pretty much sums up what I thought all along. Even though the reference models won't be the super performers, they will have decent performance for their price, and the aftermarket ones will be where the high clocks and performance are.

It also follows on from the info around that 1GHz 470X with 380-380X performance and a 60W TDP. That is probably the passively cooled part they were talking about, but once it has an extra 6-pin power connector (if needed) and better cooling it will probably clock right up to 1.4-1.6GHz and have 390-390X performance.

We might even see something like this:

470-470X = P11 Pro / P11 XT (low clocked)
480 = P11 (high clocks)
480X = P10 (low clocked, maybe using the Pro core)
490 = P10 Pro (higher clocked)
490X = P10 XT (higher clocked for performance)

Then Vega when it drops will fill the Fury segment.
 
So, in theory, you'll be able to have a big overclock over the stock speed on reference cards?

I can see why they were aiming mostly for efficiency: laptops sell far more than desktops, so if they can get a mobile GPU with good power usage/performance then they'll be onto a winner.
 
Hitman is an interesting one. When you use the developer's own canned benchmark AMD comfortably wins, even in DX11, which raises eyebrows. When you test independently of the AMD-sponsored developer's benchmark, things look quite different. http://www.pcgameshardware.de/Hitma...Episode-2-Test-Benchmarks-DirectX-12-1193618/

I wonder how many threads we would have had on this if the results were reversed and this was an NV-sponsored title.
 
I have no problem admitting that I think the Hawaii architecture was a bit more forward-thinking in terms of where low level API optimizations were heading.

But there was also no denying that, whatever memory bandwidth and ACE advantages it had, actual performance results were entirely mixed. And still are. Hawaii has only recently started to edge ahead, which is hardly a major victory against a card that came out nearly two years ago. Go back to when the 970 released and, for a good while, it was ahead of the 290/290X fairly comprehensively overall.

So for you to classify Hawaii as 'high end' and GP204 as 'low end' is not only a gross exaggeration, it is complete disinformation: untrue, and intended to push a reality that didn't exist.

Hawaii was high end, just as GK110 was high end at the time.
When the 20nm die shrink wasn't working out, AMD failed to redraw Hawaii with the GCN 1.2 (Tonga) improvements. We would effectively have seen Hawaii on a shrunken, refined die with refinements all round in power, memory throughput and video decoding.

When GM204 (970/980) was released it competed with and beat the Hawaii architecture and also the high-end GK110. It's not an exaggeration that GM204 was intended as mid range, even if it beat the previous high-end offerings from AMD/Nvidia. GM200 was always known about and was eventually released as the high-end offering.
It's exactly the same for GP104 in this situation: it's just a mid-range offering until they can mass supply GP100.

To compete with GM204 and to save R&D, AMD respun Hawaii, rebadged it as Grenada and managed to achieve a 10-15% improvement in driver overhead.
Effectively AMD had improved their shader throughput per engine by improving their scheduling efficiency. I predicted concerns with Fiji, and my theories were posted way before most people started looking into its performance problems. Fiji is a perfect example of a front-end problem, which leads me on to my final point.

From the limited documentation it seems AMD have fixed the front end and have a lot more refinements in other areas. I therefore predict a 10% improvement in front-end efficiency over a similar-shader GCN 1.2 part. If this is the case, then with another 10% in clock speed headroom I can see AMD arriving at around 20% over their previous performance tiers.
This would mean P11 could cover 270X up to Tonga (2048-shader) performance, and P10 could cover Hawaii up to Fury (3584-shader) performance. I don't think they can compete with the 1080, but it'll be around £200 cheaper than the 1080.
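(For what it's worth, two roughly 10% gains compound to about 1.10 × 1.10 ≈ 1.21, which is where the ~20% figure comes from.)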

This leaves AMD to redraw Fury as a Fury II with the front-end improvements and a considerably smaller die size.
 
I think I said this last time about that site (pcgameshardware): it uses a 20-30 second run in one bit of a level, which is not fully reflective of the game, to make an overall statement of speed.

The canned bench is at least a minute, if not two, and covers different types of load across a range of scenes, which is much better for making an overall statement of speed.

It would be like taking the best 30 seconds / fastest-FPS section of Heaven and trying to say that was the same as the overall speed of the benchmark. It's simply bad science...
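To put some toy numbers on that point, here is a small C++ sketch (the frame times are invented and the AverageFps helper is just something I wrote for the example) comparing the average FPS over a whole two-minute trace with the average over only its lightest 30-second slice; the short slice flatters the card considerably.

```cpp
// Illustration: a cherry-picked 30-second slice overstates overall speed.
#include <cstdio>
#include <vector>

// Average FPS over frames [begin, end) of a frame-time trace in milliseconds.
double AverageFps(const std::vector<double>& frameTimesMs, size_t begin, size_t end)
{
    double totalMs = 0.0;
    for (size_t i = begin; i < end; ++i) totalMs += frameTimesMs[i];
    return (end - begin) * 1000.0 / totalMs;
}

int main()
{
    // Fake 120-second trace: 90 s of heavy scenes (~25 ms/frame, ~40 fps)
    // followed by a 30 s light section (~12 ms/frame, ~83 fps).
    std::vector<double> trace;
    for (int i = 0; i < 90 * 40; ++i) trace.push_back(25.0);   // heavy scenes
    size_t lightBegin = trace.size();
    for (int i = 0; i < 30 * 83; ++i) trace.push_back(12.0);   // light 30 s slice

    double wholeRun = AverageFps(trace, 0, trace.size());
    double best30s  = AverageFps(trace, lightBegin, trace.size());

    std::printf("whole run: %.1f fps, best 30 s slice: %.1f fps\n",
                wholeRun, best30s);
}
```

With these made-up numbers the whole run averages about 51 fps while the light slice alone shows about 83 fps, which is the kind of gap you get when a short hand-picked section stands in for the whole game.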
 
So the Fury gains 16% in DirectX 12 and the 980 Ti basically gains hardly anything (maybe 3% tops), but the 980 Ti is still 7% faster, so the gain isn't enough for the Fury to close the gap.

The new 1080 is "better" at DX12 than the 980 Ti and is faster anyway in DX11, so faster is still faster in my book.

Yes, DX12 closes the gap between AMD cards and Nvidia, but AMD are never close enough in the first place for them to overtake.
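(Quick sanity check on those numbers: if the 980 Ti ends up 7% ahead after gaining ~3% while the Fury gains 16%, the implied DX11 gap was roughly 1.07 × 1.16 / 1.03 ≈ 1.20, i.e. about 20%. So DX12 narrows a ~20% deficit to ~7%, which is a big step but still not an overtake.)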

Nvidia just keeps winning by brute force.

I see the AMD cards as super-efficient 1.3-litre triple-turbocharged cars which still can't be as fast as Nvidia's V8 goliaths, no matter how hard they try.
 
It'd be the other way round in this case: the lazy 4-litre Fury getting kicked by the 2.5-litre turbo Nvidia. ;)
 
Hey, if P10 is 390X (8GB) performance for £200-£250 I'll be all over it, as the 390X is the card I'm currently looking to get.

That's not such a great improvement in price to performance. Currently you can find 970s/390s for that price, which themselves aren't too far off the 390X. I remember at the 300 series launch that the 970 even beat the 390X in some games. I'd say £200 max for 390X-performance Polaris would be 'improving on price to performance'.

If it's over £200, then it's got to do a whole lot better than the 390X to sell well, especially considering the 1070: even if it's pricier than the 970, its price to performance would make things harder for such a card from AMD.

I would argue that £300 is too expensive to be mainstream. The 970 was such a success partly because of how crap the 960 was.

But previously, cards like the 460 (<£200) have been the best-sellers. Also the 7850.

I remember back when the 560 Ti, at under £200, was among the most popular. And I also recall seeing a 7850 or two below the £100 mark, before the R9 200 series was even a thing. So I'd have to agree here, but we must take inflation into account, which then explains why the £250-ish cards are popular these days.

As for the 970, its success had much to do with the target of 1080p at 60fps, which the 970 was able to hit at maxed (or close to maxed) settings. People wanted a good 1080p experience and so bought that card, whereas that was harder to achieve on a 960, not to mention that the AMD cards at that price point perform better.

It would be cool to see the cards around the £200 mark become popular again, but many of us now have more disposable cash to spend and desire more than 1080p60 these days. I know my next upgrade will be whatever card can do 1440p90+ maxed settings. The reference FE 1080 can't even manage that consistently right now.
 
New AMD Polaris details emerge


AMD's Polaris webinar did not reveal much about their new GPU architecture, but that does not mean we learned nothing from the event. Most of the new information came from interesting tidbits during the Q&A session, where AMD discussed some of their key goals for the new Polaris GPU architecture.

While AMD did not give any clear performance numbers, they did state that they were able to get more performance out of each transistor in their upcoming GPUs, not only because of the jump from 28nm to 14nm FinFET but also due to several key optimisations within the Polaris architecture itself.
Looking at Nvidia's Pascal GPU architecture, we can see that most of the new architecture's performance gains come from enhanced clock speeds compared to its Maxwell counterparts, with Nvidia's main architectural changes coming in its new and more efficient memory controller and the dedicated controller that allows Nvidia's Simultaneous Multi-Projection to take place.

AMD has been a lot clearer when it comes to showing us where the improvements in their new Polaris architecture are, clearly stating that they have redesigned their graphics cores, their geometry units and their caching system, and have improved the design of their memory controller.

Polaris promises improved shader efficiency, enhanced memory compression and an improved geometry processor, all of which should give AMD some decent performance gains compared to their older GCN architectures. This, alongside AMD's move to 14nm FinFET, should allow AMD to greatly increase the per-core performance of Polaris GPUs, as well as greatly increase their power efficiency.
Sadly, right now not much is known about AMD's upcoming Polaris GPUs, but it seems that AMD will be focusing more on the mid-range and performance segments of the GPU market, rather than the enthusiast-grade market that Nvidia's GTX 1080 is currently aimed at.

Hopefully we will learn more about AMD's upcoming Polaris series GPUs over the next few weeks, but for now it looks like AMD will not be providing the GTX 1080 competitor that we have all been hoping for.
Source:
http://www.overclock3d.net/articles/gpu_displays/new_amd_polaris_details_emerge/1
 
As we have clearly seen from the opinions of people on these forums, and from the launch price of Pascal, it's not the tier of the card that determines its mainstream/enthusiast grading but the price, which means we could get affordable but powerful cards from AMD. But of course we don't know; it could also be the complete opposite, as in cheap and not that fast.
 