
AMD’s DirectX 12 Advantage Explained – GCN Architecture More Friendly To Parallelism Than Maxwell

workload is likely - my GTX 980 is starting to tank hard above 6000 unique units

That still doesn't explain why they get a perf drop going from 11 to 12 when Star Swarm gets a perf boost... I have no problem with AMD's performance being better with async, but Nvidia getting a performance drop when other DX12 demos get a boost is still odd.

I edited my last post, you might want to check :)
 
But the dev already said that they disabled async on Ashes for NVIDIA hardware, so what else is causing the drop in performance from 11 to 12, when Star Swarm, using the same engine, shows a boost from 11 to 12?

I think posting up the Star Swarm DX12 Anandtech article is a massive own goal as far as trying to prove Ashes isn't being deliberately optimised for AMD hardware, as Star Swarm shows NVIDIA can and should get a performance improvement going from DX11 to DX12.

Whatever it is about Ashes that causes a drop in perf from 11 to 12 should be optional for NVIDIA hardware, which is probably what Nvidia asked for: options. Not disabled entirely or removed, but the option to turn it down, like turning down tessellation in other games.

Such as you can turn down the massive over-tessellation in GameWorks games then, huh...
 
That still doesn't explain why they get a perf drop going from 11 to 12 when Star Swarm gets a perf boost... I have no problem with AMD's performance being better with async, but Nvidia getting a performance drop when other DX12 demos get a boost is still odd.

I edited my last post, you might want to check :)

Star Swarm's Nitrous engine build is from mid-2014 - it hasn't been updated other than to use tier 1 D3D12. Now we have tier 3 D3D12 hardware, and the latest Nitrous engine - which Star Swarm doesn't have - is what Ashes uses.

That, IMO, is the difference.
 
That sounds like a completely made up excuse, well done.

What? You do fully understand that Star Swarm is the engine tech demo from 2013, last updated in mid-2014, and here we have the game using the latest version - the game being Ashes of the Singularity, which will use the latest 2015 version of the game engine.

Well done on not understanding how technology evolves. Star Swarm was the tech demo - Ashes is the game.
 
What? You do fully understand that Star Swarm is the engine tech demo from 2013, last updated in mid-2014, and here we have the game using the latest version - the game being Ashes of the Singularity, which will use the latest 2015 version of the game engine.

Well done on not understanding how technology evolves. Star Swarm was the tech demo - Ashes is the game.

But it makes Green look bad! Clearly you're talking nonsense.
 
What? You do fully understand that Star Swarm is the engine tech demo from 2013, last updated in mid-2014, and here we have the game using the latest version - the game being Ashes of the Singularity, which will use the latest 2015 version of the game engine.

Well done on not understanding how technology evolves. Star Swarm was the tech demo - Ashes is the game.

Your timeline doesn't fit: "tier 3" hardware has been out for years, and the apparent "DX12" article you linked to is from Feb 2015.
 
Honestly, with every page of grief about NV performance you look more and more funny.
Is it really that hard to accept that NV is not on top in something?

We will see the real performance when more DX12 benchmarks and games arrive.
 
Just put layte on ignore; it's pretty clear all his posts are NV talking points copy-pasted straight from the reputation manager forums.
 
Orangey once again showing that he cannot debate people who try to argue their point in a calm and collected manner. Faced with somebody who backs up their points, he resorts to name calling and making a big song and dance about putting them on ignore.

You should grow up son, putting your fingers in your ears and going lalala when adults are talking isn't going to solve anything.

I would say "with luck somebody will quote this so you will see it", but people who make a big song and dance about ignoring people on forums never actually do anything of the sort.
 
LOL, what's he squawking about, I wonder.

No doubt panicking and double-checking his next lot of copypastes haven't already been used elsewhere.
 
http://hardforum.com/showpost.php?p=1041825513&postcount=125

Well that's not good for Nvidia if true. Looks like AMD were actually onto something.

If you'll excuse me, I'm off to go eat my hat.

A GTX 980 Ti can handle both compute and graphics commands in parallel. What it cannot handle is asynchronous compute - that is to say, the ability for independent units (ACEs in GCN and AWSs in Maxwell/2) to function out of order while handling error correction.

It's quite simple if you look at the block diagrams of both architectures. The ACEs reside outside of the Shader Engines. They have access to the Global Data Share cache, the L2 R/W cache pools in front of each quad of CUs, as well as the HBM/GDDR5 memory, in order to fetch commands, send commands, perform error checking or synchronize for dependencies.

The AWSs in Maxwell/2 reside within their respective SMMs. They may have the ability to issue commands to the CUDA cores within their respective SMMs, but communicating or issuing commands outside of their respective SMMs would demand sharing a single L2 cache pool. This cache pool has neither the space (sizing) nor the bandwidth to function in this manner.

Therefore enabling async shading results in a noticeable drop in performance - so noticeable that Oxide disabled the feature and worked with NVIDIA to get the most out of Maxwell/2 through shader optimizations.

It's architectural. Maxwell/2 will NEVER have this capability.
I'm not an expert, but I think AMD's GPU shader and compute throughput basically runs through the same memory pool. It's a lot like AMD's Heterogeneous System Architecture on the APU side, where parallel and serial workloads work as one: much as HSA can compute floating point operations in parallel through the high-speed engine, shader streaming is computed in parallel instead of serially, so it has the potential to get a huge leg up in performance.

HSA on the APU side looks like this: [HSA block diagram]
This is a very different architecture to traditional GPUs, and Nvidia would have to build it from the ground up; they may also have to find a different way of doing the same thing, as HSA is AMD IP.
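To make the above a bit more concrete, here is a minimal D3D12-style sketch of what "async compute" looks like from the application side (purely illustrative - not taken from Ashes or the Nitrous engine, and error handling is trimmed): the app creates a compute queue alongside the normal direct/graphics queue and submits work to both, and it is then entirely up to the GPU and driver whether those queues genuinely run concurrently (as GCN's ACEs allow) or end up serialized via context switches.

```cpp
// Minimal sketch: a direct (graphics) queue plus a separate compute queue in D3D12.
// The API only exposes the queues; whether work submitted to them actually runs
// concurrently is down to the hardware and driver.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        std::printf("No D3D12-capable device found\n");
        return 1;
    }

    // The "direct" queue accepts graphics, compute and copy commands.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // A separate compute-only queue: this is where "async compute" work goes.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // A fence is how the two queues synchronize when one depends on the other,
    // e.g. the graphics queue waiting for a compute pass to finish.
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    // In a real engine you would now record command lists and call
    //   gfxQueue->ExecuteCommandLists(...);
    //   computeQueue->ExecuteCommandLists(...);
    // using computeQueue->Signal(fence.Get(), n) / gfxQueue->Wait(fence.Get(), n)
    // where there are cross-queue dependencies.
    std::printf("Created a direct (graphics) queue and a compute queue\n");
    return 0;
}
```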
 
AMD's thoughts on the matter:

Oxide effectively summarized my thoughts on the matter. NVIDIA claims "full support" for DX12, but conveniently ignores that Maxwell is utterly incapable of performing asynchronous compute without heavy reliance on slow context switching.

GCN has supported async shading since its inception, and it did so because we hoped and expected that gaming would lean into these workloads heavily. Mantle, Vulkan and DX12 all do. The consoles do (with gusto). PC games are chock full of compute-driven effects.

If memory serves, GCN has higher FLOPS/mm2 than any other architecture, and GCN is once again showing its prowess when utilized with common-sense workloads that are appropriate for the design of the architecture.

https://www.reddit.com/r/AdvancedMi...ide_games_made_a_post_discussing_dx12/cul9auq

http://forums.anandtech.com/showpost.php?p=37667289&postcount=560
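On the FLOPS/mm² point, a rough back-of-envelope using the commonly quoted figures (approximate numbers, and raw throughput per area is obviously only one metric): Fiji (Fury X) delivers about 8.6 TFLOPS of FP32 on a ~596 mm² die, i.e. roughly 14 GFLOPS/mm², while GM200 (Titan X) delivers about 6.1 TFLOPS on a ~601 mm² die, roughly 10 GFLOPS/mm².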
 
Kinda makes me wonder whether Nvidia were lying again, or if there's something else going on... I guess we'll find out as more DX12 games come out. The fact that Nvidia asked for Async Compute to be disabled rings alarm bells.

The problem now is that we *know* they're gonna ask every developer to do this!! At least until they can support it properly!!! Not fair at all imo :mad::mad::mad:
 