Radeon RX 480 "Polaris" Launched at $199

Yer, I expect AMD have done something very clever if they are going with 32 ROPs (even though I refuse to accept that it isn't 64). They can indeed be very cunning. Take the 7XXX series as an example: it had hidden performance, and a while after launch (10 months, I think) AMD brought out the 12.11 drivers, which let the 7970 stretch its legs and surpass the 680. Hopefully they will do something similar with this 480.

Unless the primitive discard accelerator needs a lot of driver tweaking to get the best performance out of it, I don't see there being as big a jump in GCN performance as we saw with the Crimson drivers around the launch of the 390X, due to lower driver overhead.
 
ROPs are used to draw the final frame for output. When doing that there is sometimes "overdraw": you draw something that ends up hidden (which is a waste of ROP work) and then draw the object that hides it on top of it.

The primitive discard accelerator probably reduces overdraw, which (together with the increased clock frequency) likely makes 32 ROPs more than enough for the resolutions the 480 is meant for.
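
To picture what that wasted ROP work looks like, here is a toy sketch: a quick software model that draws two overlapping quads back-to-front and counts how many pixel writes end up hidden. The quad sizes and the whole model are made up for illustration; it says nothing about how the real hardware or the discard accelerator is implemented.

# Toy model of overdraw: draw two overlapping quads back-to-front and count
# how many pixel writes are wasted on surfaces that end up hidden.
# Purely illustrative - not how the real hardware works.

WIDTH, HEIGHT = 64, 64

def count_writes(quads):
    owner = [[None] * WIDTH for _ in range(HEIGHT)]   # which quad last wrote each pixel
    writes = 0
    for name, (x0, y0, x1, y1) in quads:
        for y in range(y0, y1):
            for x in range(x0, x1):
                owner[y][x] = name                    # painter's algorithm: later quads overwrite
                writes += 1                           # every write costs ROP work
    visible = sum(1 for row in owner for p in row if p is not None)
    return writes, writes - visible                   # total writes, overdrawn (wasted) writes

# A far quad filling the screen, then a near quad drawn on top of most of it.
quads = [("far", (0, 0, 64, 64)), ("near", (8, 8, 56, 56))]
total, wasted = count_writes(quads)
print(f"pixel writes: {total}, wasted on hidden pixels: {wasted}")

In this toy case roughly a third of the pixel writes are overdraw, which is the kind of wasted work a discard mechanism in front of the ROPs would help cut down.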
 
I mean, if it takes 10 months to produce drivers that suddenly 'unlock' the proper performance from a card, it is not 'being clever'. Quite the opposite: it is poor initial driver support. The same could be argued about the Crimson drivers for Hawaii.
 
So the people barking up the "but it's only got 32 ROPs, it's a flop" tree are going to have to find another tree to bark up :)

Thanks for clearing that up.
 
Thank you. Good to hear some info on that. Not all is lost then. Just amazed they didn't go with 48 or something though.
 
Fair point, but they do get there eventually. The Crimson drivers are another example of unlocking some decent gains in the GCN cards that were previously hidden.
 
I tend to look at it slightly differently. Over the years AMD's GCN has seemed more advanced than what Nvidia bring to the table. As games make use of newer features, AMD's cards perform better. Drivers probably help, but to me it's more the advanced architecture coming into its own.
 
Personally I couldn't care less what the paper spec is as long as it's pumping out enough frames to warrant its price and deliver what is expected. Not sure why people were getting so hung up on the 32 ROP thing.

One more day of guessing and we'll be ready to see what's to come. Is there an official time we can actually buy the card on Wednesday? If I can't buy in the morning before work I'll have to get my wife to put an order in for me, and I don't wanna miss out (hopefully nobody will).
 
I think it is rather the opposite of what you are saying about GCN being more advanced.

GCN is much more brute-force: huge compute potential, but far fewer transistors spent on fully exploiting that compute performance. AMD tried with async compute and spent considerable transistors on hardware-based scheduling, but without fixing the actual bottlenecks in the GPU design.

Kepler, and especially Maxwell and Pascal, have a lot more finesse. They have less theoretical performance because, instead of just throwing more and more compute units at the GPU, they dedicate more and more of the transistor budget to keeping the existing compute resources fully utilized and removing all kinds of bottlenecks, including DX11 limitations. Therefore there is far less to be gained from async compute or DX12, because the hardware is not limited to the same extent.
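
To put rough numbers on the "huge compute potential" point: peak FP32 throughput is just shaders x 2 (an FMA counts as two ops) x clock. A quick back-of-envelope sketch below; the shader counts and clocks are approximate reference figures from memory, so treat the results as ballpark only.

# Back-of-envelope peak FP32 throughput: shaders * 2 ops (FMA) * clock.
# Shader counts and clocks are approximate reference figures, for illustration only.
def peak_tflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz * 1e6 / 1e12

gpus = {
    "R9 390X (GCN)":     (2816, 1050),
    "GTX 980 (Maxwell)": (2048, 1126),
    "RX 480 (Polaris)":  (2304, 1266),
}

for name, (shaders, clock) in gpus.items():
    print(f"{name:18s} ~{peak_tflops(shaders, clock):.1f} TFLOPS peak FP32")

On paper the 390X has roughly 30% more compute than the 980, yet in most DX11 games they trade blows, which is exactly the utilization gap being described.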
 
As time goes on and games start using more features, the GCN architecture keeps giving, whereas Kepler and Maxwell keep falling away. In DX12, which is more advanced, GCN's superiority is very clear. The brute-force GTX 980 Ti does not seem to be hampered half as much as the weaker GTX 980 and below. Kepler vs Hawaii in this instance is a non-contest.

For instance, at this point in time would you rather have a 290 or a GTX 780 in any DX11 game, never mind a DX12 game? The same goes for a GTX 980 vs a 390X in DX12. At the high end I would still have a GTX 980 Ti over the Fury X, as in DX12 it seems to just about hold its own while being way superior in DX11.
 
That is partly correct: what Nvidia did with Maxwell was remove some hardware and instead handle that functionality in the driver, optimising the code before it is sent to the card, to improve performance. Their hardware is more optimised for serial workloads, though.
 
It's not that they are using more features; Maxwell actually has more DX12 feature support than Hawaii or Fiji. What is happening is that games are using more raw compute relative to the amount of geometry and texturing, which will favor compute-heavy cards, e.g. the later GCN models.

I don't know what the future trends will be. Compute will become more important, but there is still a long way to go in improving geometry, tessellation and texturing. There are a lot of games out now with very low resolution textures, or models where you can still easily see the vertices.
 
Not bad. Hopefully some more to come from it with the custom cards too.

Still seems to score very low in the Steam VR test for whatever reason. That's a weird benchmark though. My overclocked 780 is a match for a moderately-overclocked 970 in most games, but only gets a 5.8 in that test, which is miles off a 970.

He did the VR test:
[Rumor] RX480 AIB Card Leaked and Tested!

Someone explained to me that the Steam VR test is not about absolute performance but about frame variance. The RX 480 could be getting some very low minimums from time to time, which affects the end result heavily. It could be a driver bug; it is strange that AMD would use that benchmark as one of the two officially released performance figures.
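
Nobody outside Valve has published the exact formula, but a toy model shows how a score built around frame-time consistency can come out low for a card with a perfectly good average. The frame-time lists and the metrics below are made up purely to illustrate the effect; the only real number is the ~11.1 ms budget a 90 Hz headset allows per frame, and this is not the actual SteamVR scoring.

# Toy illustration of why a consistency-based VR score and a raw fps average
# can disagree. The frame times below are invented; 90 Hz VR leaves ~11.1 ms
# per frame, and that budget is the only real number here.
import statistics

BUDGET_MS = 1000 / 90   # ~11.1 ms per frame at 90 Hz

steady = [10.5] * 1000                      # always just inside the budget
spiky  = [9.0] * 930 + [30.0] * 70          # better average, but 7% of frames spike badly

for name, frames in [("steady", steady), ("spiky", spiky)]:
    avg_fps = 1000 / statistics.mean(frames)
    over    = sum(f > BUDGET_MS for f in frames) / len(frames)
    p99     = sorted(frames)[int(0.99 * len(frames))]
    print(f"{name}: avg {avg_fps:.0f} fps, {over:.0%} of frames over budget, 99th-percentile frame {p99:.1f} ms")

Both cards average around 95 fps, but the spiky one pushes 7% of its frames well past the budget, and any metric weighted towards minimums or variance will punish that heavily. If the RX 480 is hitting occasional stalls like that (driver bug or otherwise), a low SteamVR score alongside decent game benchmarks would not be contradictory.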
 
All I know is that my 290@1050 gets 7.1 in VR bench :p So something just doesn't add up. For some reason RX480 is rubbish in this benchmark.
 
I don't really buy into the "Maxwell supports more features" point. You can support everything yet do everything poorly. GCN so far looks to be superior in DX12. I do agree that DX12 helps with GCN's problems in DX11. To me, the fact that AMD brought these cards out well before Maxwell and they are starting to overtake it shows that in a way GCN is superior. Maxwell's strength is lower power for similar performance, but in a year's time I see it as less performance for less power. Nvidia have more R&D so they can release a card for the times, but I like to buy something that will last, like GCN.
 