AMD’s DirectX 12 Advantage Explained – GCN Architecture More Friendly To Parallelism Than Maxwell

Since the release of Ashes of the Singularity, a lot of controversy has surrounded AMD's spectacular results over NVIDIA's underwhelming ones. Was this DX12 benchmark gimped in order to run faster on AMD's hardware? Apparently not, as Overclock.net member 'Mahigan' shed some light on why there are such dramatic differences between AMD's and NVIDIA's results.

What’s also interesting here is that Mahigan has provided a number of slides to back up his claims (which is precisely why we believe this explanation is legit).

As Mahigan pointed out, Maxwell's Asynchronous Thread Warp can queue up 31 Compute tasks and 1 Graphics task, whereas AMD's GCN 1.1/1.2 has 8 Asynchronous Compute Engines (each able to queue 8 Compute tasks, for a total of 64), coupled with 1 Graphics task handled by the Graphics Command Processor.
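
For context on the queue terminology: under DX12 an engine opts into this hardware parallelism by creating separate command queues for graphics and compute work. Below is a minimal, illustrative D3D12 sketch (my own, not from Mahigan's post; device creation and error handling omitted). Note that queue counts here are API-side; how many tasks actually execute concurrently is down to the hardware scheduler, which is where the 31+1 vs 64 figures above come in.

```cpp
// Illustrative sketch only: one graphics (direct) queue plus one compute
// queue -- the API-side mechanism behind "asynchronous compute".
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // Direct queue: accepts graphics, compute and copy command lists.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

    // Compute queue: compute and copy only. Work submitted here may overlap
    // with graphics work -- whether it genuinely runs concurrently depends
    // on the GPU's scheduler, i.e. the hardware queue counts quoted above.
    D3D12_COMMAND_QUEUE_DESC cmpDesc = {};
    cmpDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&cmpDesc, IID_PPV_ARGS(&computeQueue));
}
```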

[Mahigan's slides (three images)]

http://www.dsogaming.com/news/amds-...re-more-friendly-to-parallelism-than-maxwell/
 
This doesn't surprise me, IF true.

AMD have limited resources to work with compared to Intel and Nvidia.

It seems a while ago they made the decision to plan for the future with their CPU and GPU architectures. They put all their eggs in one basket: multi-core CPUs and a DX12-era graphics landscape.

Hence why, when they did it, the FX CPU line was poor and AMD's GPUs seemed behind Nvidia's.

However, they will now start to see some fruits as we move into the DX12 landscape.

Intel and Nvidia have the resources to adapt though, and will just change their model to maintain their respective crowns.
 
Fascinating read, Boom. For all the talk of Nvidia greatness with conservative rasterization in DX12, it looks like something with this much impact isn't even on the feature list. This looks pretty important to me.

No wonder AMD have been pushing hard for parallelism over the last few years, and no wonder Nvidia are annoyed by the Ashes benchmark.
 
This is all well and good, but how come AMD only got minimal gains using their own (Mantle) API? Minimums were raised, of course, which is great.

At a guess, I'd say once game engines are built from the ground up using DX12, AMD cards will shine more :)
 
This is all well and good, but how come AMD only got minimal gains using their own (Mantle) API? Minimums were raised, of course, which is great.

At a guess, I'd say once game engines are built from the ground up using DX12, AMD cards will shine more :)

I think you answered your own question. All Mantle games had it added to an existing engine rather than built from the ground up.
I agree DX12 will show better results when games start using it with DX12 in mind.
 
Hopefully it only takes nVidia a generation or two to adapt their architecture to fall more in line with DX12's needs. Much as I want an AMD win, I don't want AMD in the position nVidia is in now. A 40-60% market split is where I want both companies; I think two generations of AMD outperforming nVidia would do that.
 
Hopefully it only takes nVidia a generation or two to adapt their architecture to fall more in line with DX12's needs. Much as I want an AMD win, I don't want AMD in the position nVidia is in now. A 40-60% market split is where I want both companies; I think two generations of AMD outperforming nVidia would do that.

Pascal will be the one; they wouldn't let it slip any longer :)
 
Kinda makes a lot of sense that AMD would have a lead in parallelism as they have been focused on that for years. I'm sure Nvidia can adapt pretty quickly given they can just throw money at it until their arms get tired.
 
Hopefully it only takes nVidia a generation or two to adapt their architecture to fall more in line with DX12's needs. Much as I want an AMD win, I don't want AMD in the position nVidia is in now. A 40-60% market split is where I want both companies; I think two generations of AMD outperforming nVidia would do that.


Wouldn't AMD leading for a while be good for the industry? We might then see more balanced game development, with open-standard GameWorks-style libraries that give gamers more choice overall.
 
I'd say AMD having a 40-60% share would discourage devs from using GameWorks anyway; alienating a large portion of users isn't a great idea. Alienating ~20% isn't much of a risk, hence the use of it.
 
Wouldn't AMD leading for a while be good for the industry? We might then see more balanced game development, with open-standard GameWorks-style libraries that give gamers more choice overall.

Good in theory, but I'm afraid it won't happen except for a few select titles.
Just look at the theory of HSA: if implemented in gaming tech it could alleviate a lot of CPU-GPU overhead, but no devs have touched it.

Maxwell 2 has Hyper-Q and 2 DMA engines, but even in dual mode the maximum supported is 32 queues; AMD has 64 with GCN 1.1/1.2.

Pascal will probably sort this situation out, and with AMD's current market share and devs late to the DX12 standard, it won't be an issue for Nvidia by the time they are ready to tape out.
 
I don't think Pascal will be as focused on parallelism as GCN currently is. With Greenland supposedly based on a newer super-GCN architecture, AMD should technically have the architectural advantage. But it's still early days for DX12, and we'll have to wait and see how Nvidia and AMD GPUs perform in other DX12 applications before we can determine how much of an advantage AMD actually has.
 
I don't think Pascal will be as focused on parallelism as GCN currently is. With Greenland supposedly based on a newer super-GCN architecture, AMD should technically have the architectural advantage. But it's still early days for DX12, and we'll have to wait and see how Nvidia and AMD GPUs perform in other DX12 applications before we can determine how much of an advantage AMD actually has.

Let's hope it kicks major ass :cool:
 
Am I missing something? From most of the tests I've seen, outside of synthetic individual feature tests (where it's swings and roundabouts), the TX and FX are mostly on par in DX12 results. People seem to be kicking up a lot of fuss that AMD sees more dramatic increases from DX11 to DX12, questioning why nVidia doesn't, and missing the fact that nVidia have much better DX11 results in the first place.

Taking comments like:

“But what about poor AMD DX11 performance? Simple. AMD's GCN 1.1/1.2 architecture is suited towards Parallelism. It requires the CPU to feed the graphics card work. This creates a CPU bottleneck, on AMD hardware, under DX11 and low resolutions (say 1080p and even 1600p for Fury-X), as DX11 is limited to 1-2 cores for the Graphics pipeline (which also needs to take care of AI, Physics etc). Replacing shaders or re-compiling shaders is not a solution for GCN 1.1/1.2 because AMD's Asynchronous Compute Engines are built to break down complex workloads into smaller, easier-to-work workloads. The only way around this issue, if you want to maximize the use of all available compute resources under GCN 1.1/1.2, is to feed the GPU in Parallel… in come Mantle, Vulkan and DirectX 12.”


While there may be some elements of that to it - actually one of the big reasons is that, from the 337 drivers onwards, nVidia went deep into the DX11 API and optimised data before it gets to the drivers themselves* (which AMD largely haven't) - a lot of people seem to have missed the implications of this chart:

[chart]

Including technical people who should know better (props to wccftech, who actually picked up on it) - but DX11/12 and the different architectures need a lot deeper diving to do the subject justice than they're getting on tech sites atm.

EDIT: Again, an example of how AMD are sticklers for how things "should be done" in the face of the actual messy reality, and how forward-thinking features often end up biting them in the rear, due to a mixture of things never actually working out in the idealised fashion and AMD bringing them to the table too early to make a difference. On the flipside, the architecture should perform well in the future once DX12 etc. become dominant, which is a positive in the long term; but by then the TX will be consigned to history, nVidia will be on an architecture that works with the realities of DX12, and likewise people probably won't be using the FX.


* This goes beyond just recompiling shaders or trying to make the workload more parallel for the GPU, etc.; it actually alleviates bottlenecks at that level before you even throw the GPU architecture into the equation.
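
As a footnote to the "feed the GPU in Parallel" point in the quote above: the structural change DX12 makes is that command recording is no longer funnelled through 1-2 driver threads. A hedged, illustrative sketch of that idea (my own hypothetical helper, not from the post; fencing, error handling and cleanup omitted):

```cpp
// Illustrative sketch: each CPU thread records its own D3D12 command list,
// then everything is submitted in a single call -- in contrast to DX11,
// where submission is effectively serialised through 1-2 cores.
#include <d3d12.h>
#include <thread>
#include <vector>

void RecordInParallel(ID3D12Device* device,
                      ID3D12CommandQueue* directQueue, // must be a DIRECT queue
                      unsigned workerCount)
{
    std::vector<ID3D12CommandAllocator*>    allocators(workerCount);
    std::vector<ID3D12GraphicsCommandList*> lists(workerCount);
    std::vector<std::thread>                workers;

    for (unsigned i = 0; i < workerCount; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i], nullptr,
                                  IID_PPV_ARGS(&lists[i]));
        // Each worker records into its own list: no shared lock, so recording
        // scales across cores instead of serialising on one driver thread.
        workers.emplace_back([list = lists[i]] {
            // ... record this thread's share of draw/dispatch calls ...
            list->Close();
        });
    }
    for (auto& t : workers) t.join();

    // One submission hands the GPU everything the threads produced.
    std::vector<ID3D12CommandList*> submit(lists.begin(), lists.end());
    directQueue->ExecuteCommandLists(static_cast<UINT>(submit.size()),
                                     submit.data());
}
```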
 
I'm surprised that the Nvidia cards were as competitive as they were in a benchmark that started out as a Mantle showcase (to the point where Oxide claimed it would bring a previously impossible number of onscreen effects - impossible only for AMD's DX11 driver, as it turned out) and has obviously been designed and optimised for the strengths of AMD's architecture. If that's the best it can do under such circumstances, then I think those proclaiming Nvidia's doom are going to be somewhat disappointed.

I think we should wait for more than one sponsored game before we proclaim the sky is falling.
 
Well, now at least there is some light shone on this controversy! Looks like AMD's hardware is currently slightly better suited to DX12 than nVidia's - well, the draw-call performance side of things anyway. But knowing nVidia, they will change this with Pascal!
 
I'm surprised that the Nvidia cards were as competitive as they were in a benchmark that started out as a Mantle showcase (to the point where Oxide claimed it would bring a previously impossible number of onscreen effects - impossible only for AMD's DX11 driver, as it turned out)

In actuality, the number of light sources that the DX12 version of the game uses could never be run in DX11. This could also be a reason why there is some performance regression in the DX12 benchmarks for the Nvidia cards.
 
This isn't about the sky falling (except for Nvidia fanboys, who abound on this forum) but about the fact that GPU technology is very complicated, and the two giants have different strengths which don't bear out in benchmark results, since you only see the result rather than the process that begets it. In other words, if all you know about GPU tech is the specs of the GPUs, and you judge by that and benchmark results, your ability to intelligently comment is severely limited (after all, how many people will put in the thousands of hours required to understand it?). Therefore it should be no surprise that most discussions are simply shouting matches between people who can at best say something like 'omg amd card z does 25 fps in x/y game & requires 330w while nvidia card w does 27 & requires 280w; amd suxx why cant they do anything right & fix drivers ffs!'

Just yesterday someone was lamenting how he had bought ATI/AMD since forever but couldn't take it any longer because his CrossFire setup had problems in WoW. Laughable, because it just goes to show how clueless he is: SLI works even worse in WoW and less often (multi-GPU solutions don't always work with it), and the problems are down to Blizzard's software team rather than AMD's or Nvidia's. Yet there he is mis-attributing blame, with tens of others jumping on his bandwagon even though they are as clueless as he is. Sad state of affairs, but it is what it is.
 