AMD Polaris architecture – GCN 4.0

The inverse is also true of course:
http://www.anandtech.com/show/10067/ashes-of-the-singularity-revisited-beta/6
The Fury X at 1080p loses performance when async is switched on. Other resolutions and settings can shift the performance benefits/costs around endlessly, so it gets really hard to optimize appropriately even just on AMD. Once you throw in NVidia GPUs you need a totally different set of optimizations.
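To illustrate how messy that gets, here's a rough sketch (purely hypothetical names, not from any real engine) of the kind of startup check an engine could do to decide whether the async path is even worth enabling for a given GPU and settings combination:

Code:
// Hypothetical heuristic: measure both paths on the current GPU/resolution
// and only keep async compute if it is actually faster here, since it can
// help or hurt depending on hardware and settings.
#include <chrono>
#include <functional>

// renderFrame stands in for a real engine's frame submission.
double MeasureFrameTimeMs(const std::function<void()>& renderFrame, int frames = 100) {
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < frames; ++i) renderFrame();
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(end - start).count() / frames;
}

bool ShouldUseAsyncCompute(const std::function<void()>& syncPath,
                           const std::function<void()>& asyncPath) {
    // Require a measurable win before enabling the more complex path.
    return MeasureFrameTimeMs(asyncPath) < MeasureFrameTimeMs(syncPath) * 0.98;
}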

For someone calling it the poster child for async, you obviously missed where its developers said it wasn't used all that extensively, and that future games could use it a lot more effectively.

Again, this is their first shot at DX12; the first shot at DX11, DX10, DX9 and so on is always suboptimal. You take a new API, you design an engine you think will work great... lo and behold, you realise some mistakes were made and plenty of things can be improved, and the next version of the engine on the same API is better.

Maybe you're new, but that is actually how the world works. Make a car, then make a new car and fix the things you screwed up last time. Pick up the guitar, learn a bit, practice over time, get better and play more complex pieces. Bake a cake, it turns out horribly, keep doing it and get better at it.

In pretty much the first game to use async there is a very large benefit, 20% at high res with extreme settings... and it will get better over time.
 
+1
 
There are actually several games that get little to no benefit from using DX12.

Problem is, we don't know what is using async shaders unless the developers specifically tell us. And even then, we don't know to what degree or which specific implementation.

It is not some binary function.
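For anyone wondering what "using async compute" even means at the API level, here's a minimal D3D12 sketch (error handling omitted, assuming you already have a device): you create a second, compute-only command queue alongside the graphics queue, and how much work you actually push onto it, and how much of that overlaps, is entirely down to the engine, driver and hardware.

Code:
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue) {
    // Normal graphics queue (accepts graphics, compute and copy work).
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

    // Separate compute-only queue: this is the "async compute" queue.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // The engine then records some compute work (post-processing, particles,
    // lighting, etc.) onto computeQueue and synchronises with graphicsQueue
    // via ID3D12Fence. Whether it actually runs concurrently is up to the
    // GPU and driver, which is why "uses async" says so little on its own.
}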

And increased development complexity is not just like getting better at the guitar. Development costs for big-budget games are already through the roof. DX12 and these sorts of specific implementations already require a whole lot of extra work, and that extra work now has to be done alongside separate development paths for different APIs, since no dev is going to want to develop solely for W10 users. That means a working DX11 branch is still absolutely necessary.

Developers will get better with async shaders and DX12, but it's not going to be the smooth or quick transition that some people are acting like it'll be.
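Roughly what keeping both paths alive looks like in practice (names are made up, just to show the shape of the problem): one renderer interface, two backends, and every feature has to work in both.

Code:
#include <memory>

struct IRenderBackend {
    virtual ~IRenderBackend() = default;
    virtual void DrawFrame() = 0;
};

// Single-queue path that has to keep working for pre-W10 users.
struct D3D11Backend : IRenderBackend { void DrawFrame() override { /* ... */ } };

// Multi-queue path with explicit sync, async compute, manual memory management.
struct D3D12Backend : IRenderBackend { void DrawFrame() override { /* ... */ } };

std::unique_ptr<IRenderBackend> CreateBackend(bool osSupportsDx12) {
    // W10 users can take the DX12 path; everyone else still needs DX11.
    if (osSupportsDx12) return std::make_unique<D3D12Backend>();
    return std::make_unique<D3D11Backend>();
}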
 
Most DX12 games today are patched in, not engines built from the ground up for DX12.
DX12 will, however, see faster adoption because consoles are the main target for game developers.

Polaris, and soon Vega, will be the main GPUs leading the way into the future.

I call it DX12
 
A star to guide us !!!
 
 
You do forget that all AMD cards are just as strong in Vulkan/Mantle.
 
Faster than what? So far the adoption rate is no different from any other new API revision in the past, and it isn't looking like it will be much different (largely because development cycles are the same).

Still tickled that some people thought all the DX11 games would be updated to DX12 within a few months of DX12's release and all new games would be DX12/Mantle.
 
Except Fiji-based cards, which are slower than Hawaii in Mantle.

That's because DICE had to write specific memory workarounds to deal with WDDM 1.0-1.3 for each GCN version.

Andersson stated they had better performance and stability on WDDM 2.0 and did not need the workarounds.
 
GCN 1.0 has 2 ACEs.


Seems in reality 2nd gen Maxwell would be closer to 1+1 than 1+31 when compared to AMD's architecture.

Pascal vs Polaris is interesting: it looks like Polaris is 1+4(?), while the way they've done Pascal looks on paper like the equivalent of 1+1 in GCN terms, though with the prioritisation etc. it functionally behaves more like 1+2. Between Pascal's likely clock speed advantage and the nature of real-world data, the optimal point is likely to fall between the two architectures, giving a broadly equal result unless you synthetically load up either architecture.
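You can't read the ACE count straight out of an API, but as a rough proxy you can at least query how many compute-capable queues the driver exposes. A minimal Vulkan sketch (assuming you already have a VkPhysicalDevice):

Code:
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

void PrintComputeQueueCounts(VkPhysicalDevice gpu) {
    uint32_t familyCount = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &familyCount, nullptr);
    std::vector<VkQueueFamilyProperties> families(familyCount);
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &familyCount, families.data());

    for (uint32_t i = 0; i < familyCount; ++i) {
        const bool graphics = families[i].queueFlags & VK_QUEUE_GRAPHICS_BIT;
        const bool compute  = families[i].queueFlags & VK_QUEUE_COMPUTE_BIT;
        if (compute)
            std::printf("family %u: %u queue(s)%s\n", i, families[i].queueCount,
                        graphics ? " (graphics+compute)" : " (compute-only)");
    }
}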
 
Depends how cut down Pascal gets at the mid range, which is what will go up against Polaris. Vega will be high end and, by all accounts, quite different, so it might handle AS differently.
 