
Async Compute is super hard & too much work for devs

It's not free though, is it? Not if the developers need to spend time on it that could be spent doing other things.

We're forgetting that time is also meant to be saved by DirectX 12 when porting to PC. So any time spent on GCN on console should save time on the PC version of the code. At least that's what I have always taken from all this.

DirectX 12 brings PC and console development much closer, saving time and money.
 
AMD should be throwing money at devs for this, really. That 5-10% isn't that much, but it's enough to show a lead in bench threads, and that is a massive selling point!
 
We're forgetting that time is also meant to be saved by DirectX 12 when porting to PC. So any time spent on GCN on console should save time on the PC version of the code. At least that's what I have always taken from all this.

DirectX 12 brings PC and console development much closer, saving time and money.

Surely that means that games like ROTTR should be fully optimised for DX12 and built from the ground up on DX12, as that's what the Xbox is running (or a similar low-level API)? So we should be seeing fully optimised DX12 games right away if there's a console version?

Plus, that time would surely only be saved if the game was DX12-only, not on DX12 in addition to DX11?
 
Maybe, just maybe, none of Tomb Raider's DirectX 12 code was taken from the console version?

ROTTR is still a DirectX 11 game under the hood; we've yet to see a game designed from the ground up for DirectX 12.
Time will tell how well all this turns out. At the moment, DirectX 12 is just tagged on.
 
In some ways it is more like allowing the application developer to implement their own form of hyper-threading on the GPU. HT tries to split up incoming work in a way that better utilises the broad capabilities of the CPU; async gives the developer the ability to utilise the broader capabilities of the GPU when handling different types of workload. But if they get it wrong, all sorts of nasty things can happen.

Yeah, I like this multicore CPU analogy.

Think of your GPU as a CPU, and now think of Cinebench: a single-core CPU will render one little square at a time (that is serial compute), while an 8-core CPU will render eight squares at a time (that is parallel compute). The latter is a lot faster than the former.
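To make that analogy concrete, here's a minimal C++ sketch (my own illustration, not from any engine or benchmark mentioned here): renderTile() is a hypothetical stand-in for one Cinebench square, and the parallel version simply hands one tile to each of eight threads.

Code:
// Minimal sketch of the serial vs. parallel "Cinebench squares" analogy.
// renderTile() is a hypothetical stand-in for the expensive per-tile work.
#include <thread>
#include <vector>

void renderTile(int tileIndex)
{
    // ... pretend this does a lot of rendering work for one square ...
    (void)tileIndex;
}

int main()
{
    const int tileCount = 8;

    // Serial compute: one core grinds through the tiles one at a time.
    for (int i = 0; i < tileCount; ++i)
        renderTile(i);

    // Parallel compute: eight threads each take a tile, like eight squares
    // rendering at once on an 8-core CPU.
    std::vector<std::thread> workers;
    for (int i = 0; i < tileCount; ++i)
        workers.emplace_back(renderTile, i);
    for (std::thread& t : workers)
        t.join();
}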

So, an HD 6970 GPU has a single shader pipeline, so it can only perform one task at a time, whereas an R9 290X/390X/Fury X has 8 shader pipelines (ACE units), so it can run eight tasks in parallel.

The HD 6970, when asked to perform a new task, will put that task in a queue until it's finished with its existing task; this introduces latency and decreases performance. The R9 390X will have 7 free threads that it can use to perform the task, so no added latency.

But it doesn't just do this automatically; you have to program your project to make use of multi-threading.
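To give a rough idea of what "programming for it" looks like on the API side, here is a minimal D3D12 sketch (my own illustration, assuming you already have a valid ID3D12Device; device creation and the actual compute work are omitted). Creating a second, compute-only queue is what lets the application submit compute work alongside graphics; whether the hardware actually runs the two queues concurrently is down to the GPU and driver.

Code:
// Sketch only: create a compute-only queue next to the usual direct (graphics)
// queue so compute work can be submitted independently of graphics work.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

ComPtr<ID3D12CommandQueue> CreateComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;   // compute-only queue
    desc.Priority = D3D12_COMMAND_QUEUE_PRIORITY_NORMAL;

    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;
}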

I clearly remember back when multicore CPUs were introduced; you also had dissenting voices saying "this is too hard, it's pointless, it doesn't make any difference". Oh, how wrong they were.

Personally, I think developers who don't like this should keep their opinions to themselves. They should shut up, recognise it for what it is, and learn how to use it, because naysayers of A-Sync are just showing themselves up as incompetent.
 
& too much work for devs

It seems many developers are not interested in Async Compute; it's super hard to tune and requires too much effort to achieve just a little performance boost, which is really a waste of time and not worthwhile on PCs.

Where did devs say that, or was this pulled from where the sun don't shine?

MS stated only the big boys (the likes of Unreal/Crytek) will get full use of DX12 and the rest will simply stick to DX11.

It's not free though, is it? Not if the developers need to spend time on it that could be spent doing other things.

Same as GW's: it takes extra time and a bung to use it :p

FC4/Primal is a perfect example: if GW's was that easy to implement, you'd imagine it would have carried over to Primal. Some argue the devs want to use it because it saves them time.

AMD should be throwing money at devs for this, really. That 5-10% isn't that much, but it's enough to show a lead in bench threads, and that is a massive selling point!

+1

You can shout from the rooftops all you want, but if you don't pay devs to include it then it's a lot of noise for nothing.
 
We already know exactly how this is going to play out on the Nvidia side.

The new Pascal architecture is going to be much better at Async Compute, and Nvidia will make sure that GameWorks reflects this, leaving all of us 900-series-and-below users needing to upgrade. :rolleyes:

---

Then, on the flip side, it's nice to see something driving advancements in games, so really, can we complain?
 
Maybe, just maybe, none of Tomb Raider's DirectX 12 code was taken from the console version?

ROTTR is still a DirectX 11 game under the hood; we've yet to see a game designed from the ground up for DirectX 12.
Time will tell how well all this turns out. At the moment, DirectX 12 is just tagged on.

We have; Oxide completely rebuilt their Nitrous engine from the ground up to take advantage of low-abstraction APIs. The engine is highly multi-threaded, which it needs to be with the sheer amount of AI and pathfinding going on. Essentially, DX11 just runs above the normal engine in its own module. Their rendering system has been heavily multi-threaded from the start; they just needed a front end to use it with.
 
We have; Oxide completely rebuilt their Nitrous engine from the ground up to take advantage of low-abstraction APIs. The engine is highly multi-threaded, which it needs to be with the sheer amount of AI and pathfinding going on. Essentially, DX11 just runs above the normal engine in its own module. Their rendering system has been heavily multi-threaded from the start; they just needed a front end to use it with.


I think A-Sync is one of those things that was ported over from Mantle.

It would explain why Oxide have it right off the bat.
 
I think A-Sync is one of those things that was ported over from Mantle.

It would explain why Oxide have it right off the bat.

Yeah, from their work with Mantle and working with AMD, I think they have had a good head start. Just a shame that DICE had issues with it.
 
It won't necessarily be down to the engine (it might help), but io saying it needs to be tuned for every card would indicate it also needs to be tuned for the specific effects being used, so it will be down to the game devs as well as having a strong baseline to start from.

It also means that the PC won't be able to fully leverage optimisation done on consoles either.

If that's the case, and it needs to be optimised per card, then this has a lot less value than I thought. It brings us into the Nvidia model where cards "degrade" with age, because a significant part of their performance is actually dependent on very customised code (hence why AMD cards age well). Good for consoles, though, I guess, as they'll get work put in on them that PCs don't.

AMD should be throwing money at devs for this, really. That 5-10% isn't that much, but it's enough to show a lead in bench threads, and that is a massive selling point!

Agrees with Gregster. Falls off chair.
 
Yeah, from their work with Mantle and working with AMD, I think they have had a good head start. Just a shame that DICE had issues with it.

DICE had issues with Mantle too; BF4 is Frostbite 3, which is the same engine used for BF3.

It looks like they took an existing engine and stitched the API into it, whereas Oxide built their engine around the API.

As for other engines, we know UE4 has DX12 support, complete with A-Sync.

CryEngine 5 was launched recently; it's a DX12 engine and appears to be a complete rebuild from the last EaaS engine (3.8).
My first impressions are that I don't like the new engine as much as I did 3.8. I have not yet looked at its nitty-gritty, so I don't know if it's A-Sync capable; the information is not obvious at a glance.
Edit: 3.8 did have a level of asynchronous compute in DX11 already, though not quite the same thing; it was used for the ray-tracing multi-threaded CPU calculations.

I have a couple of things I want to finish in 3.8 and then I will have a good look at 5.
 
It's not free though, is it? Not if the developers need to spend time on it that could be spent doing other things.

+1

Time is money for game devs.

For a 5% or 10% performance increase, they are not going to bother.

Game devs are going to want to maximise profit, not performance.
 
+1

Time is money for game devs.

For a 5% or 10% performance increase, they are not going to bother.

Game devs are going to want to maximise profit, not performance.

(5 to 10%)? It's really not a good idea to listen to a developer who also says "it's too hard".

There is no reason why this technology should not yield a great deal more than that.
Some developers are getting more.....

And surely we want developers to explore new technologies? If we do, maybe we shouldn't be so childish as to put a measure on the benefits for us. What are we saying? That developers should not bother modernising unless it yields a 20% performance boost for us?

Really? Are we all just elite console gamers now?

Edit: misread that ^^^^ somewhat. :)

Inevitably, yes, you're right. However, some developers will modernise while others will cross their arms and say "burrrh.. it's too hard".
I know whose games I will and will not buy.
 
Don't listen to this PR talk. What are they gonna say, "yeah, we could do it and it would be a big improvement for AMD, but we don't want to spend money on that because AMD has a low % share"?
 
AMD and Microsoft said that DX12 is more work for developers; they stated that several times.

But you get advancements from it, like more control over everything, and you can pack in more visuals because the CPU overhead is gone and you can issue more draw calls.

But you only really get more graphics performance with DX12 when you use async compute.

And the benchmarks already show that Nvidia is getting no benefit from DX12 right now.

AMD does, because of the draw-call overhead being removed plus async compute.

And if devs want more performance, they will probably have to go with it.
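For anyone wondering what "more control / less CPU overhead" actually looks like, here is a rough, hypothetical D3D12 sketch (not taken from any engine mentioned in this thread): each worker thread records its own command list, and everything is submitted to the graphics queue in one go. Pipeline state, root signature, resource setup, and error checking are all omitted for brevity.

Code:
// Sketch: multithreaded command-list recording, one of the main ways DX12
// cuts per-draw-call CPU overhead compared with DX11.
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

void RecordAndSubmitFrame(ID3D12Device* device, ID3D12CommandQueue* directQueue,
                          int threadCount)
{
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(threadCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(threadCount);
    std::vector<std::thread>                       workers;

    for (int i = 0; i < threadCount; ++i)
    {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));

        // Each thread records its own slice of the frame's draw calls.
        workers.emplace_back([&lists, i]
        {
            // ... SetPipelineState / DrawInstanced calls would go here ...
            lists[i]->Close();
        });
    }
    for (std::thread& t : workers)
        t.join();

    // One submission for the whole frame on the direct (graphics) queue.
    std::vector<ID3D12CommandList*> raw;
    for (ComPtr<ID3D12GraphicsCommandList>& l : lists)
        raw.push_back(l.Get());
    directQueue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}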
 
Give it a few years and I'm sure async will be commonly used for most PC games, particularly once support for it gets built into all the big engines. As with the introduction of most technologies though, it'll take a while for devs to get used to programming for it. We've been through many similar technology introductions in the past that have yielded similar reactions from game developers. Not much new here.
 