
AMD’s DirectX 12 Advantage Explained – GCN Architecture More Friendly To Parallelism Than Maxwell

http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/1250#post_24358081

http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/1250#post_24358107


AMD's LiquidVR latency is far better than Nvidia's: AMD latency ~11 ms, Nvidia latency ~25 to 27 ms.

Carmack:
To avoid simulation sickness in the general population, the motion-to-photon latency should never exceed 20 milliseconds (ms).
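As a quick arithmetic check of the figures above (a minimal sketch; the latency numbers and the 20 ms threshold are taken from the posts, not measured here):

```python
def within_comfort_budget(latency_ms: float, budget_ms: float = 20.0) -> bool:
    """True if motion-to-photon latency stays at or under Carmack's 20 ms threshold."""
    return latency_ms <= budget_ms

# Figures quoted in the thread: AMD LiquidVR ~11 ms, Nvidia ~25-27 ms
print(within_comfort_budget(11.0))   # AMD: under budget
print(within_comfort_budget(26.0))   # Nvidia midpoint: over budget
```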

It looks like these asynchronous shaders are quite the ace in AMD's pocket. If true, of course.
 
You do know you can turn off all the gameworks features, right?
Oh look if it isn't typical response 101.

An earlier article/quote from a reviewer has already shown that, unlike games that use tessellation in general, GameWorks titles tend to deliberately enforce excessive levels of tessellation that offer no meaningful return visually, most likely purely for the sake of helping Nvidia artificially widen the performance lead over AMD in benchmarks.

The legacy from Crysis 2 has become part of GameWorks, it would seem.
 
We knew game-doesn't-works was a response to the console sweep; what we didn't know was that it had a deadline to destroy AMD before async/DX12 came to fruition. No wonder NV were so aggressive with it, blatantly showing their hand with the overtessellation scandal and such.

Also explains why Ubi were the only company to go all-in with it, their execs are pure evil just like NV's and only think in terms of short-term profit.
 
And people will still continue to say GameWorks is for the "benefit" of PC gaming regardless...despite the fact that all these so-called "innovations" and "features" seem to bring little benefit, and cause games to struggle with the most important and fundamental thing - running properly without dodgy performance issues.

Now that DX12 is launched, I hope Nvidia focuses more on making the most out of it in the field than playing with proprietary features in GameWorks, except maybe for PhysX.

I get more smoke in some of my games now, oh, and foggy sun shafts in FC4, looks exactly like this.



Guess what that is ^^^, it's not GameWorks. :p
 
https://forum.beyond3d.com/posts/1868868/

It's a real shame that Mahigan is actively avoiding posting his copy/paste on the one forum that would be the perfect place to discuss this. I wonder why.

One interesting thing from all this. AMD are not looking to be in a very good position at all if you read between the lines. Oxide have put together an engine that favours GCN to such an extent that it messes with Maxwell (overuse of async shading being the new tessellation for AMD sponsored games perhaps) enough for Nvidia to request that async shading is disabled for their cards, yet Fiji is only competitive with the 980ti rather than pulling ahead by a noticeable margin.

We really need more tests on this.
 
yet Fiji is only competitive with the 980ti rather than pulling ahead by a noticeable margin.
Considering they're pretty much on par in current DX11 benchmarks (the Ti a little ahead for the most part), then if nVidia were being as gimped by this as they're crying, surely the Fury would pull WAY ahead? I'm actually impressed nVidia manages to pull even considering the architectural differences. And before any greens start whining that the Ti is faster when overclocked - no ****, when you see an OC Fury to compare to then it becomes apples for apples and a valid comparison.

We really need more tests on this.
Couldn't agree more. I've said since AOTS bench release that making judgments on a single benchmark is a dumb idea, you need much more information.

Well some users are speculating that the reason why Ark's DX12 has been held up in its current form is because it's giving more of a boost to AMD than NV.

Can't have an nVidia title giving boosts to AMD, that'd be MADNESS! :O
 
One interesting thing from all this. AMD are not looking to be in a very good position at all if you read between the lines. Oxide have put together an engine that favours GCN to such an extent that it messes with Maxwell (overuse of async shading being the new tessellation for AMD sponsored games perhaps) enough for Nvidia to request that async shading is disabled for their cards, yet Fiji is only competitive with the 980ti rather than pulling ahead by a noticeable margin.

Real question is, are they over using async compute. This post from Oxide dev is very interesting:

http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/1200#post_24356995

Our use of Async Compute, however, pales with comparisons to some of the things which the console guys are starting to do. Most of those haven't made their way to the PC yet, but I've heard of developers getting 30% GPU performance by using Async Compute. Too early to tell, of course, but it could end being pretty disruptive in a year or so as these GCN built and optimized engines start coming to the PC.


We really need more tests on this.

Agreed
 
Well some users are speculating that the reason why Ark's DX12 has been held up in its current form is because it's giving more of a boost to AMD than NV.

Unreal Engine - they are to Nvidia what Oxide, DICE and Crytek are to AMD.

I think it's more likely because Unreal Engine's DX12 is full of bugs; it's still early days and the engine is in Alpha Preview.
 
https://forum.beyond3d.com/posts/1868868/

It's a real shame that Mahigan is actively avoiding posting his copy/paste on the one forum that would be the perfect place to discuss this. I wonder why.

One interesting thing from all this. AMD are not looking to be in a very good position at all if you read between the lines. Oxide have put together an engine that favours GCN to such an extent that it messes with Maxwell (overuse of async shading being the new tessellation for AMD sponsored games perhaps) enough for Nvidia to request that async shading is disabled for their cards, yet Fiji is only competitive with the 980ti rather than pulling ahead by a noticeable margin.

We really need more tests on this.


I'm no expert, but how are AMD not looking good if they support async shading and NV don't :confused:

Going through some of that thread it seems it's favouring AMD's arch.

I will state a lot of it is over my head though.
 
can you name any games that do that?

I would assume many do, considering that GameWorks was designed to provide high quality shader code within a library. This being the reason why AMD say they can't optimise most GameWorks games - because they can't see the shader code.

GameWorks can go far deeper into a game engine than just the extra features of tessellation etc.
 
AMD have always designed and built architectures with an idea to usher in new technologies or new and better ways of doing things.

The trouble is they often don't get it right, and Intel/Nvidia just do the here and now; as a result AMD are often left frustrated because their architecture isn't delivering fully in the here and now.

This time it might actually have worked with having their architecture in all the consoles and new API's coming on stream.
 
Real question is, are they over using async compute. This post from Oxide dev is very interesting:

http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/1200#post_24356995

Agreed

It would seem from the same post they are not over using it.

I suspect that one thing that is helping AMD on GPU performance is D3D12 exposes Async Compute, which D3D11 did not. Ashes uses a modest amount of it, which gave us a noticeable perf improvement. It was mostly opportunistic where we just took a few compute tasks we were already doing and made them asynchronous, Ashes really isn't a poster-child for advanced GCN features.

So we can deduce that this game is not really favouring AMD that much, or they would have pushed async a whole lot more.
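What the Oxide dev describes - taking compute tasks that were already being done and issuing them asynchronously so they overlap other work - can be illustrated with a CPU-side analogy (just a sketch in Python; real async compute runs on the GPU's graphics and compute queues, not on threads, and the task names here are made up):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def graphics_pass():
    # stand-in for rasterization work on the graphics queue
    time.sleep(0.05)
    return "frame rendered"

def compute_task(name):
    # stand-in for a compute job (e.g. lighting or particle simulation)
    time.sleep(0.03)
    return f"{name} done"

# Serial (D3D11-style): each task waits for the previous one
start = time.perf_counter()
graphics_pass()
compute_task("lighting")
compute_task("particles")
serial = time.perf_counter() - start

# Overlapped (async-compute-style): compute jobs run alongside the graphics pass
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(graphics_pass),
               pool.submit(compute_task, "lighting"),
               pool.submit(compute_task, "particles")]
    results = [f.result() for f in futures]
overlapped = time.perf_counter() - start

assert overlapped < serial  # overlapping hides the compute time
```

The point of the analogy is only that work which was already being done gets hidden behind other work, which matches the "opportunistic" description in the quote.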
 
You gotta love pieces like that, that say "I won't make many comments about bias and unfair play", knowing full well that all the posts that follow will only be about that side of things.

For me the most interesting piece of info in there was.
Since we've started, I think we've had about 3 site visits from NVidia, 3 from AMD, and 2 from Intel (and 0 from Microsoft, but they never come visit anyone)
Obviously a far cry from the "AMD never get hands-on with developers and NVidia are permanently camped in their offices" attitude some have suggested.

The BIG question is, when did NVidia know that Async Compute could be the next big thing - before or after Pascal's initial design?

If they were in time then the whole thing is pretty much a non-issue; it might mean AMD get a couple of 'really' good benchmark wins before the next generation of GPUs hits us in the middle of next year, which certainly should benefit AMD as a company.

If NVidia wasn't in time then it might make the next gen of GPUs quite interesting.

Only time will tell. :)
 
You gotta love pieces like that, that say "I won't make many comments about bias and unfair play", knowing full well that all the posts that follow will only be about that side of things.

For me the most interesting piece of info in there was.

Obviously a far cry from the "AMD never get hands-on with developers and NVidia are permanently camped in their offices" attitude some have suggested.

The BIG question is, when did NVidia know that Async Compute could be the next big thing - before or after Pascal's initial design?

If they were in time then the whole thing is pretty much a non-issue; it might mean AMD get a couple of 'really' good benchmark wins before the next generation of GPUs hits us in the middle of next year, which certainly should benefit AMD as a company.

If NVidia wasn't in time then it might make the next gen of GPUs quite interesting.

Only time will tell. :)

I think Nvidia knew all about it, as Maxwell 2 moved a little in the async direction compared to Kepler. I fully expect Nvidia with Pascal to have moved their architecture over to suit DX12 more.
 