AMD Polaris architecture – GCN 4.0

What makes me wonder about those 480X leaks is the CrossFire scaling in 3DMark. The lowest result I could find gave 60% scaling for the Fury (I picked a card close to the Polaris card), but the CF setup in that slide was a good deal less than that, which makes me believe either drivers are holding it back, the slides are fake, there's some sort of throttling issue in CF due to heat, or perhaps something else I can't think of right now. With at least 60% scaling it should have come out at the very top, just edging out the 1080.

Drivers are quite likely to blame.
 
Things never scale that linearly though.

We will see when they are reviewed, I just think it makes sense to be realistic, especially given what AMD themselves have mentioned.

I know that, and they don't need to; the 980 Ti is not 40% faster than a 390X, it's <30%.

It may turn out that the best AMD have is another 390X. If so, there is something seriously wrong with their 14nm production, because it should actually be fairly easy for AMD to make a 980 Ti on 14nm given P10 is more than half the size of a 390X at twice the density.
Look at it another way: if an apparent 380X with a 30% overclock performs like a 390X (2048 shaders @ 1050MHz vs 2048 shaders @ 1375MHz, roughly equal to 2816 shaders @ 1050MHz), then 2560 shaders @ 1600MHz is a 390 with a 60% overclock.
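
A quick back-of-the-envelope sketch of that shaders-times-clock equivalence (it ignores memory bandwidth, IPC and throttling, so treat it as illustrative only, and the P10 specs are the rumoured ones from above):

[code]
# Rough throughput proxy: shader count * clock. All figures are the
# rumoured/known specs discussed in this thread, not confirmed numbers.
def eq_shaders(shaders, clock_mhz, ref_clock_mhz):
    """Shader count at ref_clock_mhz with the same shaders*clock product."""
    return shaders * clock_mhz / ref_clock_mhz

# Rumoured cut-down P10 (2048 SP @ 1375MHz) vs R9 390X (2816 SP @ 1050MHz):
print(eq_shaders(2048, 1375, 1050))   # ~2682, within ~5% of the 390X's 2816

# Rumoured full P10 (2560 SP @ 1600MHz) vs R9 390 (2560 SP @ 1000MHz):
print(2560 * 1600 / (2560 * 1000))    # 1.6 -> the "390 with a 60% overclock"
[/code]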

One thing is for sure: if AMD don't have a Polaris 980 Ti then they do not have a mainstream card; what they have is a second-tier, low-end card.
The first tier is the bulk of ownership, 970/390 owners. Those are the people who want to upgrade to 980 Ti performance for GTX 970/R9 390 money, and if AMD cannot cater to those people they will have to resign themselves to the position and relevance they currently experience for CPUs.
 
The problem is Samsung's 14nm process is actually about 1.9 times the density, and that is a theoretical best case. As transistors get smaller you have to be more and more careful with layout to avoid interference and thermal issues. Then there is the fact that simply doubling cores or clocks doesn't double performance.
The whole 2560-shader specification is still a rumour, as are clock speeds. Look at the power and clock speed advantages 14/16nm offers and you can see that a simply scaled-up 380X just won't make sense in the speed/thermal envelope Samsung are suggesting. A 1600MHz clock is a 60% increase over the 290X, which would result in similar power consumption, minus any architectural improvements. Architectural improvements are typically in the order of 10-20% per generation.

Then there is the fact that AMD are starting out from a far worse performance-per-watt position. They have a lot of ground to make up:
http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1080/27.html


AMD's most recent press releases talk about Polaris having twice the performance per watt of the 290X/390X. That would still put performance per watt about 25% behind Pascal, and that is the best-case scenario since these are AMD's marketing figures.
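
Spelling that arithmetic out (the ~2.65x Pascal figure is inferred from the TPU chart linked above, so treat it as approximate):

[code]
# Perf/W relative to a 290X/390X baseline of 1.0.
polaris = 2.0   # AMD's claimed 2x improvement
pascal = 2.65   # rough reading of the GTX 1080 perf/W chart linked above
print(round(1 - polaris / pascal, 2))  # ~0.25 -> about 25% behind
[/code]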

Polaris will be a big step up in performance and efficiency, but I don't think it will catch up with Pascal. That isn't a big deal; they can trade performance for power, and if they price the card right it will sell very well.
 
From the size of the chip it's predicted the full-fat desktop chips would have 2560 shaders (25% more) and be clocked at around 1600MHz (+15%).
That's a total of about 40% more GPU, which would put them right at the 980 Ti level, at least.

If (big if) Polaris comes with 2560 shaders and is clocked at 1600MHz it will actually be more like 8.2 TFLOPS, which is the same as the GTX 1080 at its base clock. Though it's never that simple of course, especially considering Fiji is already at 8.6. :)
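
For reference, the single-precision maths behind those figures (2 FLOPs per shader per clock for a fused multiply-add; the Polaris specs are rumours):

[code]
# FP32 TFLOPS = shaders * 2 (FMA = 2 ops per clock) * clock_MHz / 1e6
def tflops(shaders, clock_mhz):
    return round(shaders * 2 * clock_mhz / 1e6, 2)

print(tflops(2560, 1600))  # 8.19 -> rumoured full Polaris 10
print(tflops(2560, 1607))  # 8.23 -> GTX 1080 at its 1607MHz base clock
print(tflops(4096, 1050))  # 8.6  -> Fiji (Fury X)
[/code]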

Basically I would guess a 2560 shader Polaris at 1.6GHz would be somewhere between 1070 and 1080 performance.

Big if, of course. AMD are being exceptionally tight-lipped and I'm hoping it's because they want Nvidia to commit to 1070 clock speeds before finalising their own. Remember, in the last few GPU releases Nvidia have KO'd AMD with a well-timed right hook.

290X was swiftly followed by 780Ti
Fury X was preceded by 980Ti

It didn't help that in both cases AMD downright ****ed up with the stock coolers. The R9 290X reference card had a horrible, horrible cooler, and the pump whine on the Fury X was an astonishingly poor own goal.

I'm hoping AMD have learned from these mistakes, but then again as Einstein (allegedly) said "insanity is doing the same thing over and over again and expecting different results". :)
 
I found this interesting:
https://www.semiwiki.com/forum/content/3693-leading-edge-foundry-landscape.html

Samsung also reports that 14nm will have 0.55x the area of 28nm (for both LPE and LPP), that is a 1.82x density improvement. If TSMC sees a 1.9x improvement for 20nm over 28nm and another 1.05x at 16nm over 20nm, they would see a 2.00x density improvement for 16nm versus 28nm (please note TSMC’s 16nm process is really what everyone else is calling 14nm, the number 14 is apparently unfavorable in Taiwan). Assuming both companies have similar density at 28nm then TSMC could potentially have a density advantage at 16nm.
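
The compounding in that quote is just multiplied scaling factors; a quick sanity check of the numbers:

[code]
# Density factors from the SemiWiki piece quoted above.
samsung_14nm = 1 / 0.55   # 0.55x area -> ~1.82x density
tsmc_16nm = 1.9 * 1.05    # 20nm-vs-28nm gain times 16nm-vs-20nm gain
print(round(samsung_14nm, 2), round(tsmc_16nm, 2))  # 1.82 1.99 (~2.00x)
[/code]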
 
With the heat being so focused inside a smaller chip, would adding a heat spreader to the top like a CPU not help matters? Obviously it could be larger to dissipate the heat over a larger surface. It just relies on the chip makers not cheaping out like Intel did with poor compound between the chip and the spreader. :confused:
 

I don't know where you got 1.9x; I'm going from Samsung's own gumph, which states 50% higher density.

AMD's slides say 2.2x the perf per watt. That's AMD's own marketing, so who knows, but we do know that Polaris 11 was shown running at less than half the power consumption of a GTX 950 at the same performance. So they haven't just improved their own efficiency by more than double, they more than doubled Maxwell's efficiency.

To remind you, the GTX 950 system power consumption was 157 watts while the Polaris 11 system (all the same components other than the GPU) was 86 watts. Those 86 and 157 watts were for the whole system, so probably about 110 watts for the 950 and 40 watts for Polaris 11.

What's 2x a 950 at 100% scaling? A 970? A 970 is a 390, so an 80-watt 390? What's P10's power consumption?
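
A rough sketch of that subtraction (the ~47W rest-of-system draw is an assumption made to fit those estimates, not a measured number):

[code]
# GPU-only estimates from the demo's whole-system readings.
PLATFORM_W = 47  # assumed draw of CPU/board/etc. outside the GPU
for name, system_w in [("GTX 950 system", 157), ("Polaris 11 system", 86)]:
    print(name, "->", system_w - PLATFORM_W, "W GPU (approx.)")
# GTX 950 system -> 110 W GPU (approx.)
# Polaris 11 system -> 39 W GPU (approx.)
[/code]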
 
Yeah, if the 290/X had come with a custom cooler it would have been better.

I remember the heatsink the 2900 XT came with: solid copper. That thing was beastly... mind you, the chip needed it. :eek:

[Image: the HD 2900 XT reference cooler]
 

Those power usage figures are completely irrelevant because no settings were provided and the frame rate was capped. AMD cards get a big benefit when capping the frame rate and drop power; Nvidia cards don't.

AMD's performance-per-watt figure varies by which GPU they refer to: sometimes it's Tonga and sometimes Hawaii. Whichever comparison you make, and even taking AMD's word for it, the best-case scenario still puts AMD at a performance-per-watt disadvantage.

For density, see my post above.
 

That's a convenient blanket statement that holds no water for me. You forget, I had both a 290 and a 970.

The only way you can cap Hawaii's power consumption is by using its PowerPlay adjustments, and the same is true for the 970. Capping the frame rate on Hawaii alone has an effect, but nothing like capping the power.
 

Well, AMD disagree with you, but I assume you know better than AMD:
http://www.amd.com/en-us/innovations/software-technologies/technologies-gaming/frtc

And we aren't even talking about Hawaii but Polaris, which could have even more advanced power-saving features for down-clocking and idling in frame-cap scenarios.

Maxwell just doesn't have that capability; capping the FPS does nothing to the power consumption.
 

No, it's right. I play Star Citizen a lot; it's CPU limited, a lot. My 970 never gets hotter than 45°C, whereas in any other game it sticks to around 62 to 64°C at the clocks you see in my signature.

I also do a lot of work in game engines (Unreal/CryEngine) for many hours at a time. To keep the long-term stress on the GPUs down I have a setting in MSI AB that caps the power consumption to 50%. Sometimes I forget to switch it back to normal before playing Star Citizen, and at the same FPS the GPU runs at around 35°C.

I did exactly the same thing with the 290 and the resulting differences were exactly the same.

Capping FPS reduces power consumption on both cards, but capping the power consumption itself is even more effective, on both cards; it's why I do it that way in UE and CryEngine.
 


https://forum.beyond3d.com/posts/1916875/

https://forum.beyond3d.com/posts/1916883/
 
I can show you this now that this revision of the game is no longer under NDA.

Look at the temperatures: 35 to 37°C. That's CPU bound and, in effect, FPS capped.

 
CPU bound is not the same as frame-rate limiting, though. When CPU bound, Nvidia GPUs will end up down-clocking, which reduces power to some extent, but that is not nearly as useful as AMD's clock-gating, which is why AMD made a big press release about it and Nvidia didn't. With a frame-rate limit, Maxwell and earlier GPUs still render at a high clock speed to finish each frame as quickly as possible; the idle time before the next frame is not fully leveraged, because lowering and raising the clock speed takes time and energy.
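
A toy model of why racing to idle at full clock can cost more energy per frame than rendering at a lower sustained clock (this uses the textbook dynamic-power approximation, P ~ f*V^2 with V scaling with f, not measured Maxwell or Polaris behaviour):

[code]
# Energy per frame under a frame cap, illustrative numbers only.
def energy_per_frame(clock_ghz, base_clock_ghz=1.0, base_power_w=100, work_ms=10.0):
    power_w = base_power_w * (clock_ghz / base_clock_ghz) ** 3  # P ~ f^3
    render_ms = work_ms * base_clock_ghz / clock_ghz            # time to finish the frame
    return power_w * render_ms / 1000                           # joules per frame

print(energy_per_frame(1.0))  # 1.0 J  -> race to idle at full clock
print(energy_per_frame(0.6))  # ~0.36 J -> same frame at 60% clock, less energy
[/code]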
 
If they are unveiled on the 1st of June, what's the usual wait until release?

Depends. Some cards are announced/unveiled and immediately available (970/980, 980 Ti etc.), some take a couple of days (390 series), some a week (Fury X), some 3+ weeks (1080/1070 and Fury Pro), and longer still for the Fury Pro Duo :p

If they're being tight-lipped due to 1070 potential/results then we could see a week or less; if they're just plain having issues then God knows, could be July time.
 