AMD’s DirectX 12 Advantage Explained – GCN Architecture More Friendly To Parallelism Than Maxwell

What a delightfully, completely rose tinted view. Thank you for that. :)
 
Do people have such short memories? Do we not remember the rumours that Nvidia was basically asking MS to add some fluff to the DX12 spec that only Nvidia could support?

Do we remember Nvidia basically forcing MS to remove certain DX10 features that sped up hardware but that they didn't support, so ALL gamers had to suffer because Nvidia made a worse card than they should have? Then, when games added support for those features via DX10.1, Nvidia PAID developers to actually REMOVE the DX10.1 path from the game because it made them look bad.

What seems pretty clear is that DX12 is quite openly based on the GCN architecture, and AMD fully supports effectively every useful performance feature. Nvidia threw their toys out of the pram at basically being asked to support Mantle reincarnated, and got a few pointless throwaway features added that won't get much support or offer much benefit, just so they could market having more features/more DX12 support.

That is how the situation reads to me, and how it appears to be playing out in terms of which DX12 features are being used, which ones offer performance improvements and are being adopted by devs, and what Nvidia appears to be lacking.

These are all the reasons I can't stand Nvidia: they repeatedly, at every stage, hold back features and performance for the sake of marketing and appearing to be the best. One company is stifling new features because they didn't bother to support them, pushing back the usage of such features by years - tessellation, and the features that wound up in DX10 instead of DX10.1. Nvidia knew the DX10 spec for ages, failed to achieve it, and asked MS to **** everyone. Nvidia screwed their own customers by not supporting performance-enhancing features, and rather than ride out the bad press that would come with that, they got MS to remove those performance-enhancing features instead. They are so utterly anti-consumer I can't stand it, and I get irked by those who blindly support them while Nvidia go out of their way to screw their own customers. It's madness.

Nvidia time and time again choose to inhibit new features, and then they also go the other way: they use over-tessellation as a weapon to win benchmarks while providing THEIR OWN USERS with no benefit. Think about it - you literally can't see any IQ difference beyond a certain level of tessellation. Actively designing the hardware to tessellate more takes transistors away from other functions that could provide a real performance benefit. You're paying for the die space that feature takes up, and for the time and money the design team put into making it, JUST to win a benchmark in a way that offers no benefit at all to their own users. How much better would Nvidia cards be if that time and money went into something that increased performance or IQ for their own users? Rather than spending time getting MS to remove useful features from DX10, maybe they could have just supported them. Rather than paying devs to remove DX10.1 from their game, maybe add DX10.1 support to their refreshes or the next generation of cards... nope.
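(To put rough, purely illustrative numbers on the tessellation point: a patch covering around 16x16 pixels tessellated at factor 64 produces on the order of 64 x 64 x 2 = 8,192 triangles, i.e. roughly 0.03 pixels per triangle. GPUs rasterise in 2x2 pixel quads, so once triangles drop much below a few pixels each you are burning shading and setup work for no visible IQ change whatsoever.)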

I don't know about all of that, but certainly there are plenty of people willing to fight tooth and nail to defend their favoured brand.

It narks me too, because the brand they defend so vigorously is laughing at them, and who can blame them? They shaft their disciples, who in turn go out of their way to make up excuses for it or argue it's not happening, often to the point of looking utterly ridiculous.

It's surreal, and these so-called Hardware Enthusiasts?!?! deserve everything those vendors do to them; if you think about it for a second, it is entirely of their own making. Some of these people are happy to see one of the only two vendors dead, like some sort of victory for the higher power they belong to, and will gladly suffer the consequences.

But hey, we are the apparently intelligent life on this planet that invented the ridiculous concept of religion, so what does that tell you? :p
 
What a delightfully, completely rose tinted view. Thank you for that. :)
To be fair though, for cards such as the 970 and 980, with their lower power consumption while staying competitive on performance, surely we all wondered at some point "how did Nvidia do it!?"

I wouldn't be surprised if the 900 series really doesn't support DX12 as well as AMD does in terms of performance and benefits. The ironic thing is that even so, it still wouldn't improve AMD's sales and market situation much: by the time there are enough DX12 games, Nvidia will already have new-generation cards on a new architecture that run as well as, if not better than, AMD's offerings. The only people that may suffer are probably those holding onto the current Nvidia cards.

It just follows the tradition of AMD cards tending to last longer/age better for those that don't upgrade regularly. I'm just glad I went for a 290X instead of a 780 back then.
 
I am of a similar opinion, although a bit disappointed the audio capability was not utilised to its potential.

TrueAudio will have its time soon. The Xbox One and PlayStation 4 both have the required DSPs built in.

I say the above since AMD's 14/16nm lineup will have it, including APUs.

Would be awesome if we got Aureal A3D-style wave-traced environments again using first-order reflections, giving real environmental reverberation, directional 3D audio, material dampening, etc.
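For anyone curious how first-order reflections were done, the classic approach (and the heart of A3D-style wavetracing) is the image-source method: mirror the sound source across each wall and treat the mirror image as a delayed, attenuated virtual source. A minimal sketch in C++ - purely illustrative, not taken from any real audio SDK:

```cpp
#include <cstdio>

// Minimal sketch of the image-source method for first-order reflections.
// All names here are illustrative, not from any real audio SDK.
struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  scale(Vec3 v, float s){ return {v.x * s, v.y * s, v.z * s}; }

// Reflect the sound source across a wall plane (n must be unit length,
// plane is n . x = d). The listener then hears a "virtual" source at the
// mirrored position, delayed and attenuated by the longer path length.
Vec3 imageSource(Vec3 source, Vec3 n, float d) {
    float dist = dot(n, source) - d;           // signed distance to the wall
    return sub(source, scale(n, 2.0f * dist)); // mirror across the plane
}

int main() {
    Vec3 src        = {2.0f, 1.5f, 0.0f};
    Vec3 wallNormal = {1.0f, 0.0f, 0.0f};      // wall at x = 0
    Vec3 img = imageSource(src, wallNormal, 0.0f);
    std::printf("virtual source at (%g, %g, %g)\n", img.x, img.y, img.z);
    // The extra path length vs. the direct path gives the reflection's
    // delay; the wall's material would scale its gain (dampening).
}
```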
 
Who wants to point me to the "best case" AMD review for Ashes? Preferably one that shows what settings they used.

Edit: okay, downloaded it, but every time I change any of the video settings it just says "restart required", the only thing I can click is "exit", and when I restart it hasn't changed anything.
 
What's funny is the frothing from the green team fanatics - all because AMD cards are matching the very best of Nvidia's cards - and beating them at minimum frame rates...


Unless of course a review site is using 'Nvidia-approved' settings, like turning certain things down or off...
 
Nvidia complaining about the DX spec, about MS and about AMD - gosh, is this 2015 or 2000 again? I wonder how much Nvidia will throw at MS this time to 'tweak' the spec, since they did it with DX8 back in 2000.

I also remember the GeForce FX line being advertised as DirectX 9 cards when they could barely run DirectX 9 games!

I'm hoping the DirectX 12 situation improves for NVidia, as I'm starting to think I wasted money buying a 980 Ti if a cheaper 290X/390X can really match it with DirectX 12... The whole reason for buying a 980 Ti was to future-proof my PC a bit more, and I made sure I chose a GPU with FULL DX12 support at the hardware level!! I'm starting to doubt NVidia already...
 
What I don't understand is why people would think the AMD cards are not 'fully' DX12 compatible, as this 12.1 extra just seems to be for devs, or for features that aren't really substantial in the scheme of things.

It has been clear for many, many months that AMD were going to perform well using Mantle or other low-level APIs. Granted, the driver updates have been terrible leading up to now, but that's nothing worth crying over.

The 980 Ti is a great card. If it pains you then just sell it on while there is time but I am sure the green team will convince you otherwise. :p
 
This does all seem to be based off one benchmark; it may not be completely representative of things going forward.
A lot of games that have DX12 added in may not lend themselves to using Async Compute as heavily as Ashes does, because they were designed around DX11.

Oxide was quite an early adopter of Mantle, which may have exposed similar abilities, and so Oxide may have designed Ashes with this in mind.

Also, didn't Oxide do the Star Swarm tech demo, which seemed designed entirely to make AMD look good compared to Nvidia? So there may be close ties between AMD and Oxide even now, and Oxide may once again be trying to make AMD look good by pushing a feature they know Nvidia isn't currently doing as well (like when Crysis 2 used lots of tessellation to make Nvidia look good - it doesn't mean every game used that much tessellation).
Oxide say they made a moderate amount of use of Async Compute, but 'moderate' is a relative word. It could be that they used just enough to make AMD look better than Nvidia, and because they could have used more, they call the current amount 'moderate'. Other companies may use it much less because they're not trying to make AMD look good, resulting in much more balanced performance between the two vendors.
I mean, look at Mantle: Oxide were early adopters of that, and in 20 months we've had what, 9 Mantle games? So just because Oxide do it doesn't mean everyone will.

Maybe when games built from the ground up for Vulkan/DX12 are released this will get used more, but by then we'll probably have the next generation of cards (hopefully on 16/14nm) and it may be a different story.

Of course, it's also entirely possible that most developers will use this, some a lot more than Oxide have, and Nvidia owners will struggle to run games in DX12 mode. Then all PC gamers who look at reviews and aren't tied to a specific vendor will buy AMD over Nvidia, and we'll get stories about Nvidia being in financial trouble because profits are down.
Pascal will come out from Nvidia and will improve things a bit, but won't fully match AMD's support, so another generation will be dominated by AMD.

And I own 3 x 290Xs, 2 x 290s, a 7950 and have 2 x Fury Xs on pre-order (still) so I'm not just trying to justify my Nvidia purchases (I also own 3 x 980s).
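For anyone wondering what 'async compute' actually means at the API level: D3D12 lets an engine create a compute queue alongside the graphics queue, and the hardware is then allowed to overlap the two. A minimal, hypothetical sketch (error handling omitted, and obviously not Oxide's actual code):

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Rough sketch of "async compute" at the D3D12 API level: a separate
// compute queue that the GPU may execute alongside the graphics queue.
// 'device' is assumed to be an ID3D12Device created elsewhere.
void createQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& gfxQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics + compute + copy
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&gfxQueue));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute + copy only
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

    // Work submitted to computeQueue is *allowed* to overlap the graphics
    // queue; whether it actually runs concurrently is up to the hardware
    // and driver, which is exactly where GCN and Maxwell appear to differ.
}
```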
 
What I don't understand is why people would think the AMD cards are not 'fully' DX12 compatible, as this 12.1 extra just seems to be for devs, or for features that aren't really substantial in the scheme of things.

Order-independent transparency in hardware is actually good for improving performance and scene quality. A hardware implementation can sort the transparencies faster and more efficiently than a software scheme, allowing more complex transparent views within a scene with minimal performance penalty.

If you think about it, how many games do you know that have large numbers of transparent objects in a scene? Because I cannot think of many.
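For contrast, the 'software scheme' usually means sorting transparent draws back-to-front on the CPU every frame and blending in that order. A minimal sketch with made-up names:

```cpp
#include <algorithm>
#include <vector>

// The classic software scheme for transparency: sort transparent draws
// back-to-front by view-space depth each frame, then blend in that order.
// Illustrative only; per-object sorting breaks down when objects
// interpenetrate, which is exactly what hardware/per-pixel OIT fixes.
struct TransparentDraw {
    float viewDepth;   // distance from the camera
    int   meshId;      // whatever identifies the draw
};

void sortBackToFront(std::vector<TransparentDraw>& draws) {
    std::sort(draws.begin(), draws.end(),
              [](const TransparentDraw& a, const TransparentDraw& b) {
                  return a.viewDepth > b.viewDepth; // farthest drawn first
              });
}
```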
 
This does all seem to be based off one benchmark; it may not be completely representative of things going forward.

What I found interesting was that console developers were still new to this and were able to get 30% extra performance from using async shaders. I want this to be used on the PC if that's the case, because it looks like my card will benefit from it. Nvidia have supposedly been working with Microsoft for years on DX12, so there is no excuse for them not to have this working well in their latest cards. If indeed Nvidia cards (Maxwell 2 in particular) can't make good use of a big DX12 feature, then developers should still make good use of it, given how lucrative the console market is, and keep the feature in the PC version. If Nvidia end up looking bad, so be it - they should have had hardware that can deliver in DX12 for their customer base, who are beyond loyal.
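As a rough illustration of where a figure like that 30% can come from (numbers entirely made up): say a frame spends 10 ms on graphics work that leaves some shader cores idle, then 3 ms on compute work (lighting, post-processing) run serially afterwards - 13 ms total. Overlap the compute with the graphics on a second queue and the frame approaches 10 ms, which is 30% more frames per second without either workload getting any faster on its own.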
 
Order-independent transparency in hardware is actually good for improving performance and scene quality. A hardware implementation can sort the transparencies faster and more efficiently than a software scheme, allowing more complex transparent views within a scene with minimal performance penalty.

If you think about it, how many games do you know that have large numbers of transparent objects in a scene? Because I cannot think of many.

Yeah, and sparse volume textures and conservative rasterization...

I cannot see Microsoft missing the boat on all these goodies for their own console. I am not saying they are unwanted - more so that the hardware was built and shipped before these extras you speak of materialised. For the bulk of the features, the 12.0 cards will make use of them.
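That matches how the API treats them: the 12.1 extras are optional capability bits an engine queries at runtime, so a 12.0 card simply takes a fallback path. A hedged sketch of the query (error handling and device creation omitted):

```cpp
#include <d3d12.h>

// The 12.1 "extras" under discussion (conservative rasterization, and ROVs,
// which is what enables the hardware OIT mentioned above) are exposed as
// capability bits, so an engine queries them rather than assuming them.
bool reportDx121Features(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                           &opts, sizeof(opts))))
        return false;

    bool rovs = opts.ROVsSupported;            // rasterizer ordered views
    bool conservative = opts.ConservativeRasterizationTier
                        != D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED;
    // A 12.0 card (e.g. GCN at the time) reports these as unsupported and
    // the engine simply takes a fallback path - nothing else is lost.
    return rovs && conservative;
}
```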
 
If you think about it, how many games do you know that have large numbers of transparent objects in a scene? Because I cannot think of many.

Stuff like dense grass/foliage, etc. can really benefit from that kind of thing - that said, I have a slightly hardware-assisted software routine for handling it which is ridiculously fast without needing anything more advanced than DX7.
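Whatever that particular routine is, the long-standing sort-free trick for dense foliage is alpha testing, or, with MSAA, alpha-to-coverage, which turns the shader's alpha into a coverage mask so draw order stops mattering. An illustrative D3D12 snippet (not the poster's method, just the relevant pipeline state):

```cpp
#include <d3d12.h>

// One long-standing way to get cheap order-independent foliage without
// sorting at all: alpha-to-coverage, which converts the pixel shader's
// alpha into MSAA sample coverage. Everything else omitted for brevity.
void enableAlphaToCoverage(D3D12_GRAPHICS_PIPELINE_STATE_DESC& pso)
{
    pso.BlendState.AlphaToCoverageEnable = TRUE;        // alpha drives MSAA mask
    pso.BlendState.IndependentBlendEnable = FALSE;
    pso.BlendState.RenderTarget[0].BlendEnable = FALSE; // no blending, no sorting
    // Depth writes can stay on, so draw order for the grass stops mattering.
    pso.DepthStencilState.DepthEnable = TRUE;
    pso.DepthStencilState.DepthWriteMask = D3D12_DEPTH_WRITE_MASK_ALL;
}
```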
 