
AMD’s DirectX 12 Advantage Explained – GCN Architecture More Friendly To Parallelism Than Maxwell

That didn't make an awful lot of sense. Surely Nvidia will also be hindered by the game being in alpha.

What doesn't add up to me is that some benches show the Fury X beating the 980 Ti by a frame or two, yet others show the 290X almost neck and neck with it and leave out the Fury results.

Surprised a benchmark thread hasn't been made yet; this forum could probably come up with far more realistic results than the "review" sites.
 

Yes, it's very confusing and annoying that, with all the hype about DX12, we don't have a single benchmark or game out to show it off even though it's been available in beta for months. What was Microsoft thinking?
 
Am I missing something here? I just did a search for Ashes of the Singularity benchmarks of the 980 Ti vs Fury X, and the Fury X only wins by a very small amount at 4K; not exactly the "8 cores vs 2 cores" difference mentioned in this thread. The Fury X wins by a small amount at 4K in some DX11 games as well.

Because the 8-vs-2 cores difference is only PART of the entire equation. It's just the likely reason for the improvement AMD sees.
 
Some excellent discussion on this matter here: http://hardforum.com/showthread.php?t=1873640

Page 3 gets really good when Mahigan and razor1 directly interact with each other.

Interesting stuff, and no closer to a real answer... Mahigan admits he's wrong on more than one occasion, though the theory that Maxwell isn't parallel seems to be gone, which brings us back to no real explanation of why Nvidia's DX12 is running slower than their DX11, other than the obvious.

Quite reasonable to assume Fiji is more efficient with async than without, but there is no explanation of why Nvidia is actually losing performance going from 11 to 12.
 
No, he coded for the DX12 spec, not for anything vendor-specific.

It's not that straightforward. It isn't a case of "it's coded in the DX12 way"; it's a case of "it's coded one of the ways available in DX12", and it just so happens they have chosen a way that favours AMD (similar to how id and Blizzard used OpenGL for Doom 3 and WoW but used a code path that favoured Nvidia).

You can't really blame the devs for only coding it one way instead of supporting multiple vendors, especially when it keeps costs down. However, when they code it in a way that favours the vendor with <20% market share (who happens to have their name on the box) to the detriment of the vendor with >80% market share, it looks quite fishy.

Not that games with Nvidia's name on the box have never suffered from this, lol, oh how they have :P
 

I don't see that he admits he is wrong on more than one occasion (only once, where he agreed with him). I can see razor1 backing down more times than one, and even one poster commenting that he had done it before.

If anything, Mahigan actually e-mailed him the presentation, which the other guy did not have.

But ultimately, who cares?

There are no DX12 games out yet anyway.

We will get a better view of things over the next year.
 

You do realise that people have been going on about how async shaders give them "free performance" on consoles, and they have been talking about it for a reasonable amount of time?

The PS4 has an 8-ACE GPU (ACE being AMD's term for its Asynchronous Compute Engines; the same arrangement as the R9 290 and 390 series), and AFAIK the Xbox One has a 2-ACE design like the HD 7970 had.

Considering that consoles are the lead platform for many games, you add two and two together?

Plus, with Pascal putting more effort into compute again, I expect use of async shaders will suddenly be no problem then! :p

But what's more interesting is how the R9 290X and GTX 780 Ti will perform in DX12 games, since they are contemporaries.

The GTX 970 and GTX 980 are much newer designs.

But even then, I expect Pascal and Arctic Islands will just be better oriented towards DX12 anyway, though I still think the consoles will be the limiting factor.
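The "free performance" claim above is essentially a scheduling argument: independent compute jobs that would otherwise queue up behind graphics work can instead run alongside it on separate queues. A toy model (purely illustrative, with made-up frame timings, not real GPU code) sketches why more compute queues can shorten a frame when there is a lot of compute work to hide:

```python
def frame_time_serial(gfx_ms, compute_jobs_ms):
    # No async compute: every compute job waits behind the graphics work.
    return gfx_ms + sum(compute_jobs_ms)

def frame_time_async(gfx_ms, compute_jobs_ms, num_queues):
    # Async compute (idealised): jobs are spread across independent queues
    # that all run concurrently with the graphics queue.
    queues = [0.0] * num_queues
    for job in sorted(compute_jobs_ms, reverse=True):
        # Greedy longest-job-first assignment to the least-loaded queue.
        i = queues.index(min(queues))
        queues[i] += job
    return max(gfx_ms, max(queues))

# Hypothetical per-frame compute workload (ms) alongside 10 ms of graphics.
jobs = [4.0, 4.0, 3.0, 3.0, 3.0, 3.0, 2.0, 2.0]
print(frame_time_serial(10.0, jobs))      # 34.0 ms, everything serialised
print(frame_time_async(10.0, jobs, 2))    # 12.0 ms with 2 queues
print(frame_time_async(10.0, jobs, 8))    # 10.0 ms with 8 queues: fully hidden
```

The numbers are invented, but the shape of the result matches the argument: with enough queues, compute is entirely hidden behind graphics, which is where the "free performance" framing comes from.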
 

Yes, but if you read the discussion on HardOCP, Maxwell DOES support async, maybe marginally less well than Fiji, but not to the extent that it should hurt performance vs DX11.
 
What I predict will happen is that both sides will say their uarch is better, then this will drop:

http://forums.overclockers.co.uk/showthread.php?t=18689120

It will perform better on Maxwell, of course. Then another game will perform better on AMD hardware, and the argument will continue.

Then Pascal and Arctic Islands will drop and offer better DX12 and VR performance once we get enough DX12 games, and everyone will forget the previous cards! :p
 

?? Maxwell has always supported async shaders, AFAIK; I don't think that has ever been in question.

Plus, I would take the DX12 and DX11 numbers with a pinch of salt. I expect Nvidia has done some stellar work optimising the overhead in their drivers, so they will see smaller performance improvements overall. And remember, is the difference between DX11 and DX12 even statistically significant? A few frames here and there is probably just measurement error.

AMD, OTOH, is more bottlenecked under DX11, and with the later GCN cards supporting async shaders better, you are just seeing a larger percentage improvement over a lower DX11 base score.

I think it's more an indication of the driver-overhead issues AMD has with DX11 causing problems, so the DX12 uplift looks bigger.

After all, it's nothing new; look how much Mantle helped in CPU-limited situations with AMD cards.
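The percentage point above is just arithmetic: the same DX12 frame rate produces a much bigger headline uplift when the DX11 baseline is dragged down by driver overhead. A quick sketch with invented frame rates:

```python
def uplift_pct(dx11_fps, dx12_fps):
    # Percentage improvement of DX12 over the DX11 baseline.
    return (dx12_fps - dx11_fps) / dx11_fps * 100

# Hypothetical numbers: both vendors land at ~60 fps under DX12,
# but one starts from a lower, overhead-limited DX11 baseline.
print(round(uplift_pct(55, 60), 1))  # strong DX11 driver: 9.1 (% uplift)
print(round(uplift_pct(40, 60), 1))  # overhead-limited DX11: 50.0 (% uplift)
```

Same DX12 endpoint, very different headline percentages, which is why a big "DX12 gain" can say more about the DX11 starting point than about DX12 itself.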
 
A lot of that discussion is going over my head, but I think one of the takeaways is that DX12 could end up being trouble for developers who aren't versed in low-level programming. Today, we have games that perform better or worse on either Nvidia GPUs or AMD GPUs, and it sounds like this disparity could grow significantly as driver importance is reduced and developers get much more access to the specific functionality of the GPUs. So unless AMD and Nvidia converge in their architecture designs, benchmarks might start being even more about how a game was developed than a clear reflection of the relative capabilities of the cards.
 
Someone mind linking to some benchmarks?

I keep seeing comments and arguments that AMD does DX12 better, along the lines of:

"Since the release of Ashes of the Singularity, a lot of controversy has surrounded AMD's spectacular results over Nvidia's underwhelming ones."

But all the ones I can find show both running neck and neck in DX12, with poor AMD DX11 performance.
 