
Radeon RX 480 "Polaris" Launched at $199

I do not think it is the X version. It is very likely that the RX 480 is a cut-down part; they did exactly this last generation by releasing the cut-down parts first in the mainstream segment (R9 380, 370). There seem to be other pointers as well, but I'll leave them aside for now.

So, the performance gains. You keep saying 7%, but 7-8% would be from the extra CUs themselves; the actual gains would be higher.

We do not know if the RX 480 is conservatively clocked, but it does seem to be the case. A 390 shrink would clock to around 1.5-1.6 GHz pretty easily from the process change alone. They could not possibly have sacrificed speed in favour of IPC to the extent that these won't do the same or similar, given the massive headroom 14nm FinFET allows over 28nm. I feel 1.26 GHz is very conservative; it's only around 160 MHz above their 28nm big chips when the process allows 500-700 MHz more (even more theoretically).

Secondly, since it is relatively easy to adjust the memory controller to support GDDR5X, I believe the RX 480X will come with GDDR5X once supply increases, as it is very short at the moment (hence the 1080s being so scarce).

So the net gain would be an 11.1% increase in both clock speed (1.26 -> 1.4 GHz) and CUs (36 -> 40), plus 25% more bandwidth. Gaming performance gains? I wouldn't be surprised if it's close to 20%.
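Purely as a back-of-envelope sketch of that estimate (the 0.7 game-scaling factor below is an assumption for illustration, not a measured figure):

```python
# Rough sketch of the speculated RX 480 -> "480X" gain using the numbers above.
clock_gain = 1.40 / 1.26        # ~1.11 (1.26 GHz -> 1.4 GHz)
cu_gain = 40 / 36               # ~1.11 (36 CUs -> 40 CUs)

raw_gain = clock_gain * cu_gain                 # ~1.23x theoretical throughput
game_scaling = 0.7                              # assumed sub-linear scaling in real games
game_gain = 1 + (raw_gain - 1) * game_scaling   # ~1.16x

print(f"raw throughput gain: {raw_gain - 1:.0%}")   # ~23%
print(f"estimated game gain: {game_gain - 1:.0%}")  # ~16%, nearer 20% if the extra bandwidth helps
```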

Yeah, I've mentioned this before: a full 40 CUs and a clock bump could give a nice boost for a 480X card, but that isn't enough to make a 490 part, which requires a new, bigger chip.

To clarify again, 40 CUs would give a theoretical 11% boost, but in the real world other bottlenecks will limit the scaling to sub-linear, so in actual games you'd see about a 7-8% increase.


As for clock speed, it all depends how much of the node improvement AMD spend on increasing clocks and how much on reducing power consumption. You can increase clock speeds by about 65-70% at most, but only at the same power consumption. AMD have stated twice the performance per watt, which is about what you'd expect for a new generation on a new node: something like 70-80% from the node shrink and the rest from architectural improvements. I'm not saying the clock speeds are anywhere near the maximum, just that we don't really know how much AMD are leaning on the node shrink to reduce power versus increase clock speed.
I do think a 10-15% clock speed bump is very realistic, especially once the process matures.

So yep, maybe there is a card that is about 20% faster than the 480, but that wouldn't be a 490X that could compete with the 1080.
 
Being the same silicon, if there's 250 MHz of headroom to clock a 480X up, wouldn't you just be able to overclock a 480 to that speed, since it wouldn't be anywhere near its limit? Then the gains would be mostly from the extra physical cores.

Yes, but this is always the case, and it's why, when you see downclocked cut-down cards, you can often overclock them back up to the speed of the faster chip if you are tech-savvy enough. There is sometimes some speed binning going on, but it tends just to be bad luck if you have a card that can't reach the speeds of the faster chip.
 

I suspect that as there are two Vega chips, the slower will be the 490 series and the faster will be the titan/1080ti rival.

I think Polaris will be solely 470/480.
 

Wait, 490 or 490X, are you jumping 2 steps here?


The 980Ti is barely more than 20% faster than the 390X. No one is expecting AMD to compete with the 1080; that's a low-volume £600+ card, and that's not what they are interested in right now.

But the 1070? That's not safe.
 
Wouldn't adding GDDR5X make it too expensive - something best left for the top-end premium cards?

Vega will be using HBM2; they have mentioned this already. Vega 11 may make do with GDDR5X, though we do not know that, leaving only Vega 10 to use HBM2.

The RX 480X will be around $300, as they have mentioned. GDDR5X will not be impossible on a 236 mm² chip, especially once supply increases.

Other than the process change, the non-X cards have always been somewhat conservatively clocked, especially on the first iteration of an architecture (7850, anyone?). I'm pretty sure the card has a decent amount of headroom for the X version to slot in.

Indeed. I would be incredibly surprised if AMD started kitting out a $199 card with GDDR5X.

Also, there is the fact that if it is only 390/390X performance, it isn't going to need more bandwidth anyway; the 8 GHz GDDR5 will be fine.

$300, not $200. I'm not saying they will kit out the RX 480 with GDDR5X for free.

Also, they have reduced bandwidth compared to the 390X. The uarch improvements, new cache and better compression will certainly reduce the need to a great extent, but if the RX 480 is at 390X performance then the X version, being quite a bit faster, would benefit from the extra bandwidth. So, coupled with 11% more CUs, 10-15% more clock speed and GDDR5X, they can justify an extra $100 for 15-20% more performance and 4GB of extra memory.
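For what it's worth, here is a quick sketch of how that pricing would stack up in performance-per-dollar terms, using only the prices and the 15-20% range quoted in this thread (the 17.5% midpoint is my assumption):

```python
# Hypothetical value comparison for a $299 "RX 480X" vs the $199 RX 480.
# Uses the prices and performance range quoted above; 17.5% is just the midpoint.
rx480 = {"price": 199, "perf": 1.000}
rx480x = {"price": 299, "perf": 1.175}

for name, card in (("RX 480", rx480), ("RX 480X (speculative)", rx480x)):
    value = card["perf"] / card["price"] * 100
    print(f"{name}: {value:.2f} relative perf per $100")
```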
 
For a bit of sport I'm going to make a prediction.

Based off this graph:


R9 390: 2560 shaders @ 1000 MHz

RX 480: 2304 shaders @ 1266 MHz (+26.5% clock)

2304 shaders = 90% of 2560; 0.90 × 1.265 ≈ 1.14, so about 114% of the R9 390's theoretical throughput.

The R9 390 is at 56% in that graph.
Performance difference to the R9 390 ≈ 14%, so 56% × 1.14 ≈ 64%.

RX 490: 2560 shaders @ 1450 MHz?
= RX 480 + ~10% more shaders + ~15% more MHz ≈ +25% raw; at 0.7 scaling that's about +17.5% performance.

My prediction is that an RX 480 / 490 comparison graph would look something like this.


A sum-up across many games:

61% × 1.21 ≈ 73%, exactly where the 980 Ti sits in relation.
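To make the arithmetic above easier to follow, here it is written out as a small script; all of the inputs are the figures from this post (the graph percentages and the 0.7 scaling factor), not measured data:

```python
# Reproducing the back-of-envelope prediction above.
r9_390_score = 56.0                     # R9 390's position in the referenced summary graph (%)

# RX 480 vs R9 390: 90% of the shaders at +26.5% clock
rx480_vs_390 = (2304 / 2560) * (1266 / 1000)    # ~1.14
rx480_score = r9_390_score * rx480_vs_390       # ~64%

# Speculative "RX 490": 2560 shaders @ 1450 MHz, with an assumed 0.7 game-scaling factor
raw_gain = (2560 / 2304) * (1450 / 1266) - 1    # ~0.27 (the post rounds this to +25%)
rx490_score = rx480_score * (1 + 0.7 * raw_gain)

print(f"RX 480 predicted: {rx480_score:.0f}%")  # ~64%
print(f"RX 490 predicted: {rx490_score:.0f}%")  # ~76%
```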


Ah what the hell. I'm bored.

In that review the 390x doesn't feature in any of the graphs for any of the games, but then suddenly appears in the relative performance comparison?

Anyway, I guess it depends where you look.

This set of figures paints a different picture. I have run the 1440p numbers (sad I know).

http://www.guru3d.com/articles_pages/nvidia_geforce_gtx_1080_review,1.html

55%
-10%
3.6%
30%
15%
66%
71%
41%
28%
31%
49%
24%
30%
21%
30%
34%


32% faster including Hitman and Ashes (async titles)

37.5% faster without Hitman and Ashes (async titles)

Even including the DX12 async titles, which are the 980Ti's big weakness, the 980Ti is on average 32% faster than the 390X at 1440p.
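For transparency, those averages come straight from the sixteen per-game deltas listed above; something like this reproduces them (treating the -10% and 3.6% entries as the two async titles is my reading, not stated in the source data):

```python
# Recompute the 1440p averages from the per-game deltas listed above (%).
deltas = [55, -10, 3.6, 30, 15, 66, 71, 41, 28, 31, 49, 24, 30, 21, 30, 34]

avg_all = sum(deltas) / len(deltas)                       # ~32.4%
# Assumption: the -10% and 3.6% results are Hitman and Ashes (the async titles).
non_async = [d for d in deltas if d not in (-10, 3.6)]
avg_non_async = sum(non_async) / len(non_async)           # 37.5%

print(f"including async titles: {avg_all:.1f}% faster")
print(f"excluding async titles: {avg_non_async:.1f}% faster")
```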

I think "barely more than 20%" faster is stretching it a bit and you know it :p

I'm only really disputing this as I have always thought (and seen from my own reading of reviews) that the 980Ti was about 30% faster on average than the 980 (and therefore the 390X). The numbers from the Guru3D review reflect this. However, I don't know where the numbers are plucked from for the TechPowerUp review, as that card doesn't feature in any of the tests in the 1080 review chart you posted?
 

There are a couple of anomalies in that, for example Fallout 4, where they test with all the Nvidia stuff turned on, which we know kills AMD performance stone dead, to an unplayable level.
As we see in that graph, with all of that turned on the 980TI is 75% faster; in fact a GTX 770 comes out much faster than a 290. That's the level of discrepancy with it.

Another example of that is Anno, again a massive performance discrepancy, AMD's own fault in this case.

Those things do add up to a much greater difference.
No doubt why you picked Guru3D.

I guess we can both be accused of cherry picking to suit our own arguments.

But, aside from the one game Anno, no AMD user would have all the Nvidia stuff on as it makes the game unplayable.

I would argue TPU offer a more realistic picture of performance levels; they certainly test a much greater number and range of games, and they don't turn all the GameWorks stuff on wherever they find it. Guru3D, it seems to me, like to ignore reality and act more like a marketing arm for Nvidia.

Of course you will have a reason ready as to why Guru3D is much better than TPU; I can't think what that might be.

In any case, Guru3D are not the place I look for performance comparisons, not unless I want to know how well Nvidia do with their GameWorks titles (now seemingly defunct).

Without all the Nvidia stuff, Guru3D are much the same as TPU.
 

Surely exactly the same could be said for AMD-affiliated games like Hitman and Ashes though? :confused:

Also, I only picked Guru3D because I have always liked their style and layout. They are my "go to" site for reviews, as it were. I would be quite happy to do the same with any other review, but not now as I'm going to bed :p
 
Yes.
AMD could, and I believe have, implemented tessellation and front-end improvements in Polaris which should deal with the unusually high tessellation in Nvidia's GameWorks titles.

Perhaps eventually Nvidia will find a way to run async compute as well as AMD do, which is an architectural DX12 feature.
 
Actually, no. It can be hooked into DX11 as well, if MS were ever bothered to add it in a new version. I do not remember who said it, but it was either the AOTS guys or the Hitman developers (the post was floating around here).
 
Do you honestly think MS will add DX12 features and tech to DX11? This is a company that was absolutely happy with Windows Store games being severely limited by UWP, and is only now giving back normal everyday functions like turning off V-Sync, allowing proper fullscreen and more.
 
Do you honestly think MS will add DX12 features and tech to DX11?
They already have.

Volume tiled resources, rasterizer ordered views, conservative rasterization - all DX12 features that have been implemented into DX11.3.

This is a company that was absolutely happy with Windows Store games being severely limited by UWP, and is only now giving back normal everyday functions like turning off V-Sync, allowing proper fullscreen and more.
It is a work-in-progress, yes. That's how things work. It's not like they just took those things out for laughs. It is a fairly ambitious project that has required a lot of 'starting over'.
 