
Async Compute is super hard & too much work for devs

To be honest I am getting a bit tired of these DX/API rubbish.

I mean, think back to DX11... Tessellation was supposed to be the next big thing, but guess what? Looking back now, it didn't make any groundbreaking difference for our games (well, except helping Nvidia win more benchmarks :p).

I am starting to think developers should really stop focusing so much on graphics gimmicks and go back to the core: making games that are fun to play. There are just too many games with fancy graphics that don't have good gameplay these days.

Low-level access is useful for games like Total War and MMOs, but for other games I think DX12 will most probably be no more impressive than DX11 was... and to make things worse, MS seems dead set on making sure it benefits them first, with users a lower priority.
 
Async, like anything new, requires new skills; it's new tech.

I'm surprised to see one studio say "oh... it's too hard" when others are not finding it so hard and are getting good gains from it.
Maybe IO Interactive need to look at updating their skill base.

Making a full game seems to be too hard for them, so it's no surprise.
 
http://wccftech.com/async-compute-boosted-hitmans-performance-510-amd-cards-devs-super-hard-tune/

Hitman developer IO Interactive admitted Async Compute is super hard to tune and requires too much work to gain just a small 5 to 10% performance boost on AMD GCN, with no difference on Nvidia GPUs. Async Compute is not the magic wand AMD marketing claimed.

It seems many developers are not interested in Async Compute: it is super hard to tune, requires too much effort for just a little performance boost, and is really a waste of time and not worthwhile on PCs.

What were they smoking when they described a 5 to 10% performance boost as small? It gives the impression that what we think of as devs are little more than script junkies, leaving the real devs working on the engine rather than the individual project or game.
 
There is a lot more to DX12 than async.

Originally the big selling point for low-level APIs was an increased number of draw calls, but so far we haven't seen much evidence of that except maybe in AotS.
Aside from performance, I was expecting much better graphics in DX12 games, but so far Tomb Raider and Hitman don't seem to show any improvement over the DX11 versions.
 
It's new, so of course it's going to be hard for some devs. An up-to-20% performance boost is something devs cannot ignore on the consoles. Besides, async isn't just about performance; it's also about latency, which is crucial in VR. Async is the way things should go in the future, whether it's hard or not, and it could just be a bad DX12 implementation; maybe Vulkan's tools are better.
 
Originally the big selling point for low-level APIs was an increased number of draw calls, but so far we haven't seen much evidence of that except maybe in AotS.
Aside from performance, I was expecting much better graphics in DX12 games, but so far Tomb Raider and Hitman don't seem to show any improvement over the DX11 versions.

There is so much more to DX12 than just draw calls or async compute.
They've barely scratched the surface, and much like the DX9-to-DX11 transition, we won't see a proper DX12 game until devs drop DX11 altogether and start from scratch with DX12.

Even then, we will still have Nvidia and AMD games, as both sides support different features, so various games are going to look or play better on each vendor's hardware.
 
I'm a very simple person and don't understand half of what I read on Anandtech and other sites but async sounds similar to me to when CPUs were going multi-core. I remember people saying then how hard it was to write multi-threaded programs and a fair few forum "experts" (not here, I was browsing elsewhere then) proclaiming that developers would never embrace it and it was going to simply die a death.

Is programming for async on GPU broadly similar to programming for multi-threading was for CPUs? Or am I comparing oranges to monkeys here?
 
I think arguably yes, you could say it's similar to hyper-threading/multi-core (in my limited understanding), in that it allows you to deconstruct the pipeline and have various parts worked on out of order, to better utilise parts of the chain that aren't being used at a given time and wouldn't be under the older progression.

Ergo, GPU parts A/C might sit idle while part B is being worked on under synchronous execution, whereas with asynchronous execution parts A/C can be doing something else at the same time as part B does its thing.

Like multi-CPU, you don't get a flat 100% gain, because it doesn't work like that, but it does reduce the latency of a static pipeline: parts that would otherwise sit dormant, waiting until the pipeline reaches them again, can be put to use.
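The overlap described above can be sketched with a CPU-thread analogy. This is purely an illustration: real async compute schedules work on separate GPU hardware queues, not Python threads, and the stage names and timings here are made up.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def stage(name, seconds):
    # Emulate a pipeline stage with a sleep; on a real GPU this would be
    # shader work submitted to either the graphics or the compute queue.
    time.sleep(seconds)
    return name

# Synchronous: part B runs while parts A/C sit idle, then A/C run after.
t0 = time.perf_counter()
stage("B: graphics", 0.2)
stage("A/C: compute", 0.2)
sync_elapsed = time.perf_counter() - t0

# "Async": B and A/C overlap because they use different resources.
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(stage, "B: graphics", 0.2),
               pool.submit(stage, "A/C: compute", 0.2)]
    results = [f.result() for f in futures]
async_elapsed = time.perf_counter() - t0

print(f"serial: {sync_elapsed:.2f}s, overlapped: {async_elapsed:.2f}s")
```

The overlapped version finishes in roughly the time of the longest stage rather than the sum of both, which is the whole point of filling otherwise-idle units.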

Rroff or another developer/programmer would be able to explain it in much greater detail no doubt :)
 
In some ways it is more like allowing the application developer to implement their own form of hyper-threading on the GPU. HT will try to split up incoming work in a way that better utilises the broad capabilities of the CPU; async gives the developer the ability to utilise the broader capabilities of the GPU when handling different types of workload. But if they get it wrong, all sorts of nasty things can happen.
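A rough illustration of why getting it wrong hurts: if the two overlapped workloads contend for the same unit, the overlap buys nothing. Again a CPU-thread analogy rather than real GPU code; the lock here is a stand-in for a shared GPU resource, and the timings are arbitrary.

```python
import time
import threading
from concurrent.futures import ThreadPoolExecutor

shared_unit = threading.Lock()  # stands in for a GPU unit both workloads need

def contended_stage(seconds):
    # Both stages need the same unit, so they serialise on the lock
    # despite being submitted "asynchronously".
    with shared_unit:
        time.sleep(seconds)

t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(contended_stage, 0.2) for _ in range(2)]
    for f in futures:
        f.result()
elapsed = time.perf_counter() - t0

# Total time is still roughly the sum of both stages: no overlap gained.
print(f"contended overlap: {elapsed:.2f}s")
```

This is why the tuning matters per card and per workload: the gain only appears when the overlapped work genuinely uses different parts of the chip.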
 
Just a small 5-10% performance boost? That is a pretty amazing performance boost, surely? Now, I have no idea what is involved in achieving it, but writing off a 5-10% boost seems surprising.
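For scale, it is worth putting 5-10% into frame-rate terms. A quick sketch; the 60 fps baseline is just an assumption for illustration:

```python
base_fps = 60.0

def with_boost(fps, boost):
    # Returns (new fps, new frame time in ms) for a fractional uplift.
    new_fps = fps * (1.0 + boost)
    return new_fps, 1000.0 / new_fps

for boost in (0.05, 0.10):
    fps, ms = with_boost(base_fps, boost)
    print(f"{boost:.0%} uplift: {fps:.1f} fps, {ms:.2f} ms/frame")
```

At 60 fps, a 5-10% uplift lands you at 63-66 fps, a few milliseconds shaved off every frame, for free on supported hardware.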

What were they smoking when they described a 5 to 10% performance boost as small? It gives the impression that what we think of as devs are little more than script junkies, leaving the real devs working on the engine rather than the individual project or game.

In fairness, I believe 5-10% is/was the performance difference between a 290 and a 290X. A lot of people, including myself, decided it wasn't a big enough performance increase to pay the extra £100 or so for the 290X.
Back then it wasn't seen as a huge performance difference; not sure why that would've changed now.

If AMD and Nvidia's new high-end cards are 5-10% faster than the current cards I'm unsure if people will still consider 5-10% an "amazing" performance boost.
 
^^ I think ultimately it's too early to say async isn't capable of bigger performance increases. As I posted in another thread, AMD has kind of jumped the gun in a way; with future architectures and a more mature software environment the gains from it are likely to make much more sense.
 
A 5-10% boost is like a generational change for Intel CPUs, and everyone raves when that happens.

So why is it a problem when we can get 5-10% at the minimum/average from async?

Just because it's only free performance for AMD parts? That is the only reason.
 
A 5-10% boost is like a generational change for Intel CPUs, and everyone raves when that happens.

So why is it a problem when we can get 5-10% at the minimum/average from async?

Just because it's only free performance for AMD parts? That is the only reason.

Because of the consoles, which support async and aren't easy to upgrade every 6 months like you do with a PC.
 
lol :p

Async Compute is a pretty big market. The Xbone supports it, the PS4 supports it, and every AMD card released since 2013 supports it.


Nvidia's base is tiny from a whole-market perspective.

And there are articles saying that back in 2013 Sony expected that when the PS4 hit its midlife period (aka 2016), async compute would give a big boost to graphics, tapping the hardware resources more efficiently for physics etc. And rightly so, because it has 8 async compute units, while the Xbox has 4.

It's tiny on the PC.
 
http://wccftech.com/async-compute-boosted-hitmans-performance-510-amd-cards-devs-super-hard-tune/

Hitman developer IO Interactive admitted Async Compute is super hard to tune and requires too much work to gain just a small 5 to 10% performance boost on AMD GCN, with no difference on Nvidia GPUs. Async Compute is not the magic wand AMD marketing claimed.

It seems many developers are not interested in Async Compute: it is super hard to tune, requires too much effort for just a little performance boost, and is really a waste of time and not worthwhile on PCs.

I would say that could well be true for anyone writing their own game engine. But the big players that create and licence game engines to others can and will take the time to make use of every little corner of DX12, and as their engines are used by many different studios, it's going to be a modest but significant gain for AMD.
 
a 5 - 10% boost is like a change in generation for Intel CPU's. And everyone mouth gasms when that happens.

so why is it a problem when we can get 5 - 10% at the minimum/average from async?

Just because it is only free performance for AMD parts? that is the only reason.

It's not free though, is it? Not if the developers need to spend time on it that could be spent doing other things.
 
I would say that could well be true for anyone writing their own game engine. But the big players that create and licence game engines to others can and will take the time to make use of every little corner of DX12, and as their engines are used by many different studios, it's going to be a modest but significant gain for AMD.

It won't necessarily be down to the engine. It might help, but IO saying it needs to be tuned for every card would indicate it also needs to be tuned for the specific effects being used, so it will be down to the game devs as well as having a strong baseline to start from.

It also means the PC won't be able to fully leverage optimisation work done on the consoles either.
 