
AMD’s DirectX 12 Advantage Explained – GCN Architecture More Friendly To Parallelism Than Maxwell

Kinda makes me wonder whether Nvidia were lying again, or if there's something else going on... I guess we'll find out as more DX12 games come out. The fact that Nvidia asked for Async Compute to be disabled rings alarm bells.

Well, TBF they also claimed they were the first to support Asynchronous Timewarp for VR, and then Tom Forsyth, Oculus VR's Software Architect, said the R9 290 series was in fact the first to do so. Although it made me LOL that AMD marketing failed so hard on that one!


Problem now is that we *know* they're gonna ask every developer to do this!! At least until they can support it properly!!! Not fair at all imo :mad::mad::mad:

The only problem is that probably every multi-platform game is eventually going to use it, as there has been noise about the consoles benefiting a lot from it performance-wise.

I see a new GameWorks feature now...! :p

ANTI-ASYNCHRONOUS SHADERS!

But TBF, Nvidia does support one feature AMD does not, so it's going to be swings and roundabouts here.
 
at least two, actually: conservative rasterization and rasterizer ordered views
they also support a higher tier of tiled resources

AMD have a higher tier of resource binding

how these all play out in games (if any of them get used at all) remains to be seen
if the nvidia demos are anything to go by, we could have a situation where a game runs faster on AMD hardware, but looks much prettier on nvidia
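
For anyone curious what their own card actually reports, here's a minimal sketch (my own illustration, assuming nothing beyond an already-created ID3D12Device; not taken from any linked article) that queries exactly the optional features being compared above via CheckFeatureSupport:

Code:
#include <windows.h>
#include <d3d12.h>
#include <cstdio>

void PrintOptionalFeatures(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (SUCCEEDED(device->CheckFeatureSupport(
            D3D12_FEATURE_D3D12_OPTIONS, &options, sizeof(options))))
    {
        // Conservative rasterization and rasterizer ordered views:
        // the two 12_1 features Maxwell exposes and current GCN does not.
        std::printf("Conservative rasterization tier: %d\n",
                    (int)options.ConservativeRasterizationTier);
        std::printf("Rasterizer ordered views:        %s\n",
                    options.ROVsSupported ? "yes" : "no");

        // Tiled resources and resource binding: where GCN's
        // resource binding tier 3 comes in.
        std::printf("Tiled resources tier:            %d\n",
                    (int)options.TiledResourcesTier);
        std::printf("Resource binding tier:           %d\n",
                    (int)options.ResourceBindingTier);
    }
}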
 
DX12 is very exciting, but without understanding what each of the respective architectures offers, it's hard to tell who's actually going to get the most out of it. Although AMD will naturally have more to gain, since their drivers carry a much larger CPU overhead in pre-DX12 games.
 
Kinda makes me wonder whether Nvidia were lying again, or if there's something else going on... I guess we'll find out as more DX12 games come out. The fact that Nvidia asked for Async Compute to be disabled rings alarm bells.

If they are actually lying about their level of DX12 support for Maxwell cards, then it'll be an even bigger scandal than the 970 having 3.5 GB. It could explain why they've been keeping quiet about anything AOTS related since they made that false statement about MSAA. Personally I don't want to believe that they're lying but I wouldn't put it past them.
 
Kinda makes me wonder whether Nvidia were lying again, or if there's something else going on... I guess we'll find out as more DX12 games come out. The fact that Nvidia asked for Async Compute to be disabled rings alarm bells.

they didn't ask for async to be disabled; the developer disabled async for nvidia hardware because they found that trying to use their async code lowered performance for nvidia even further

the developer hasn't actually said what options nvidia asked them to disable

it's an assumption some people are making, but at this point I wouldn't call it a "fact"
in fact, the developer says nvidia was trying to get them to remove certain "settings" from the benchmark... async isn't a setting in the demo
 
if the nvidia demos are anything to go by, we could have a situation where a game runs faster on AMD hardware, but looks much prettier on nvidia

It will depend on how much time the devs put into their game/code. Look at CDPR and their BS PR, with graphics that look very good in promotional materials and end up at console levels (with some extras here and there) in the final version.

No doubt Nvidia will play its cards as best as possible (even dirty if it comes to that) and so will AMD. Probably what AMD has going for them is the extra "free" optimization that goes into console development on the same architecture as theirs.

they didn't ask for async to be disabled; the developer disabled async for nvidia hardware because they found that trying to use their async code lowered performance for nvidia even further

the developer hasn't actually said what options nvidia asked them to disable

it's an assumption some people are making, but at this point I wouldn't call it a "fact"

Most likely it was MSAA. Async shaders could have been as well, but that would mean extra coding to make everything work some other way, probably less efficiently. Or both. :D
 
http://wccftech.com/directx-12-supp...12-1-gcns-resource-binding-tier-3-intels-rov/
TL;DR: So summarizing, all IHVs fully and completely support the DirectX 12 API. No hardware vendor can claim 100% support of all specifications and the differences are usually negligible in nature. That said, if one is deciding by features observable by the end user and gaming experience, the slight advantage and edge goes to Nvidia with its Feature Level 12_1 support. Keep in mind however, that developers usually code for the lowest common denominator, which means Nvidia’s edge depends entirely on how many devs use it.

So here's the quandary: you throw GameWorks at a title, ensuring it works best on NV DX12 with ROV and CR, and the game looks pretty.

OR

You aim at the console market with resource binding tier 3 and tiled resources, so PS4/Xbone play flashy DX12 games at 1080p with a reasonable frame rate.

It will come down to where the most money is. Look at Ubisoft and Assassin's Creed on console, for example.
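
Since the whole 12_0 vs 12_1 argument comes down to what the driver reports, here's a second minimal sketch along the same lines (again just my own illustration, assuming an existing ID3D12Device and nothing vendor-specific) that asks for the highest feature level the adapter supports:

Code:
#include <windows.h>
#include <d3d12.h>
#include <cstdio>

void PrintMaxFeatureLevel(ID3D12Device* device)
{
    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
        D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1
    };

    D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
    levels.NumFeatureLevels        = sizeof(requested) / sizeof(requested[0]);
    levels.pFeatureLevelsRequested = requested;

    if (SUCCEEDED(device->CheckFeatureSupport(
            D3D12_FEATURE_FEATURE_LEVELS, &levels, sizeof(levels))))
    {
        // 0xc100 = feature level 12_1 (Maxwell), 0xc000 = 12_0 (current GCN)
        std::printf("Max supported feature level: 0x%x\n",
                    (unsigned)levels.MaxSupportedFeatureLevel);
    }
}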


andybird you are wrong:

NVidia asked oxide to disable Async shaders on nv cards.

http://www.guru3d.com/news-story/nv...12-benchmark-to-disable-certain-settings.html
 
they didn't ask for async to be disabled; the developer disabled async for nvidia hardware because they found that trying to use their async code lowered performance for nvidia even further

the developer hasn't actually said what options nvidia asked them to disable

it's an assumption some people are making, but at this point I wouldn't call it a "fact"
in fact, the developer says nvidia was trying to get them to remove certain "settings" from the benchmark... async isn't a setting in the demo

They clearly stated that Nvidia has asked them to do so, not the other way round.
 
I'm not an expert, but I think AMD's GPUs basically feed shader and compute work through the same memory pool. It's a lot like AMD's Heterogeneous System Architecture on the APU side, where parallel and serial workloads work as one; much like HSA can run floating-point operations in parallel, graphics shading and compute can be streamed through the shader engines in parallel instead of serially, which has the potential to be a huge leg up in performance.

This is a very different architecture from traditional GPUs, and Nvidia would have to build it from the ground up. They may also have to find a different way of doing the same thing, as HSA is AMD IP.

Maybe this is what Nvidia's unified memory is going to be doing?

Only 7/8 months to wait till we find out. :)
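
To make the "parallel graphics and compute" idea a bit more concrete, this is roughly what async compute looks like at the D3D12 API level. A minimal sketch, assuming an existing ID3D12Device and pre-recorded command lists; it's my own illustration, not code from any game or demo mentioned here, and whether the two queues actually overlap on the GPU is entirely down to the hardware and driver underneath:

Code:
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

struct AsyncComputeQueues
{
    ComPtr<ID3D12CommandQueue> graphics;
    ComPtr<ID3D12CommandQueue> compute;
    ComPtr<ID3D12Fence>        fence;
    UINT64                     fenceValue = 0;
};

bool CreateQueues(ID3D12Device* device, AsyncComputeQueues& out)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;       // graphics + compute + copy
    if (FAILED(device->CreateCommandQueue(&desc, IID_PPV_ARGS(&out.graphics))))
        return false;

    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;      // compute-only queue
    if (FAILED(device->CreateCommandQueue(&desc, IID_PPV_ARGS(&out.compute))))
        return false;

    return SUCCEEDED(device->CreateFence(
        0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&out.fence)));
}

void SubmitFrame(AsyncComputeQueues& q,
                 ID3D12CommandList* computeList,      // e.g. a simulation/lighting pass
                 ID3D12CommandList* gbufferList,      // graphics work independent of it
                 ID3D12CommandList* compositeList)    // graphics work that consumes it
{
    // Kick off compute and independent graphics work; on hardware that can
    // overlap the two queues (the point of async compute) they run concurrently.
    q.compute->ExecuteCommandLists(1, &computeList);
    q.compute->Signal(q.fence.Get(), ++q.fenceValue);

    q.graphics->ExecuteCommandLists(1, &gbufferList);

    // GPU-side wait (the CPU is not blocked): the composite pass must not
    // start until the compute results are ready.
    q.graphics->Wait(q.fence.Get(), q.fenceValue);
    q.graphics->ExecuteCommandLists(1, &compositeList);
}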
 
andybird you are wrong:

NVidia asked oxide to disable Async shaders on nv cards.

http://www.guru3d.com/news-story/nv...12-benchmark-to-disable-certain-settings.html

They clearly stated that Nvidia has asked them to do so, not the other way round.


This article is just based on the same post from a guy on a forum; it is not official in any way, shape or form. You have no idea how this info came about: was it an official request from Nvidia, or just two guys joking around over the phone? We have no idea, so don't take it as gospel. :(
 
what he said was "I believe the initial confusion was because Nvidia PR was putting pressure on us to disable certain settings in the benchmark, when we refused, I think they took it a little too personally."

where in that does it say "nvidia asked us to disable async compute"

it says "certain settings"... async compute isn't a setting, and it also says that they, the developer, actually disabled async compute on nvidia hardware, not because anyone asked them too but because they found their async compute code was even slower than disabling it

the "settings" that nvidia asked to be removed were not removed, async compute for nvidia hardware WAS removed, therefore async compute is not the the "settings" being referred to

basic reading and comprehension failure
 
what he said was "I believe the initial confusion was because Nvidia PR was putting pressure on us to disable certain settings in the benchmark, when we refused, I think they took it a little too personally."

where in that does it say "nvidia asked us to disable async compute"

it says "certain settings"... async compute isn't a setting, and it also says that they, the developer, actually disabled async compute on nvidia hardware, not because anyone asked them too but because they found their async compute code was even slower than disabling it

basic reading and comprehension failure

AFAIK, Maxwell doesn't support Async Compute, at least not natively. We disabled it at the request of Nvidia, as it was much slower to try to use it than to not.

Whether or not Async Compute is better is subjective, but it definitely does buy some performance on AMD's hardware. Whether it is the right architectural decision for Maxwell, or is even relevant to its scheduler, is hard to say.

http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/1210#post_24357053

That's assuming, of course, that he/she is who they claim to be.
 
what he said was "I believe the initial confusion was because Nvidia PR was putting pressure on us to disable certain settings in the benchmark, when we refused, I think they took it a little too personally."

where in that does it say "nvidia asked us to disable async compute"

it says "certain settings"... async compute isn't a setting, and it also says that they, the developer, actually disabled async compute on nvidia hardware, not because anyone asked them too but because they found their async compute code was even slower than disabling it

the "settings" that nvidia asked to be removed were not removed, async compute for nvidia hardware WAS removed, therefore async compute is not the the "settings" being referred to

basic reading and comprehension failure

Yes, you have failed. Take off those green-tinted glasses and you'll see.

Just like the only politician recently to actually say it how it is is being vilified in the media (that person being Jeremy Corbyn), Oxide have used sharp but caged language against NV and have sailed quite close to the wind doing it.

What's telling is that NV haven't come out and rebutted them, which says to me (and thousands of others) that Oxide are correct.
 
Nvidia wanted Oxide to disable async compute for everyone in the benchmark, but Oxide refused and left it enabled for everyone else. The Oxide dev also stated that there was a performance regression when using async with Nvidia hardware, which is why it was disabled for all current Nvidia cards.
 
what he said was "I believe the initial confusion was because Nvidia PR was putting pressure on us to disable certain settings in the benchmark, when we refused, I think they took it a little too personally."

where in that does it say "nvidia asked us to disable async compute"

it says "certain settings"... async compute isn't a setting, and it also says that they, the developer, actually disabled async compute on nvidia hardware, not because anyone asked them too but because they found their async compute code was even slower than disabling it

the "settings" that nvidia asked to be removed were not removed, async compute for nvidia hardware WAS removed, therefore async compute is not the the "settings" being referred to

basic reading and comprehension failure

Why does that even matter? Whether or not they asked for it to be disabled, enabling it clearly made their cards perform worse, hence why they disabled it.

the only 'vendor' specific code is for Nvidia where we had to shutdown async compute.
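
For context, a "vendor-specific" path like the one described there usually boils down to checking the adapter's PCI vendor ID and choosing a different submission path. A purely hypothetical sketch (my own, NOT Oxide's code; the names and policy are made up for illustration, only the vendor IDs are standard):

Code:
#include <windows.h>
#include <dxgi.h>

constexpr UINT kVendorNvidia = 0x10DE;   // standard PCI vendor IDs
constexpr UINT kVendorAMD    = 0x1002;

// Hypothetical policy matching what the dev describes: keep async compute on
// for everyone except current Nvidia hardware, where it measured slower.
bool ShouldUseAsyncCompute(IDXGIAdapter1* adapter)
{
    DXGI_ADAPTER_DESC1 desc = {};
    if (FAILED(adapter->GetDesc1(&desc)))
        return false;                    // play it safe if the query fails

    return desc.VendorId != kVendorNvidia;
}

// Elsewhere, the dispatch site would just pick a queue based on that flag:
//   useAsync ? computeQueue->ExecuteCommandLists(...)   // separate compute queue
//            : graphicsQueue->ExecuteCommandLists(...); // same queue as graphics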
 