Aquanox Dev: We’d Do Async Compute Only With An Implementation That Doesn’t Limit NVIDIA Users

Caporegime · Joined: 24 Sep 2008 · Posts: 38,284 · Location: Essex innit!
While interviewing Digital Arrow about Aquanox Deep Descent, we asked them whether they intended to use Async Compute in their game. Here's their reply:

We aim to develop a game that is enjoyable to everyone who wishes to join the world of Aqua. Implementing and/or focusing on technologies that would limit certain people from accessing the game is entirely against our philosophy of being a community focused developer. If at any point, there will be an implementation possible that will not limit NVIDIA card users, then we will certainly explore this option as well.

This isn’t really surprising. According to the latest Jon Peddie Research report, NVIDIA has reached 81% of the discrete GPU market share; it wouldn’t be prudent at all for developers to focus on a technology that may not translate very well for the majority of their potential user base.
This is why it is unlikely that many games in the near future will make Async Compute central to their development. Besides, there are other DirectX 12 features to exploit, some of which (such as Conservative Rasterization and Rasterizer Ordered Views) are only available on NVIDIA hardware right now; Digital Arrow is currently evaluating DirectX 12 options for Aquanox Deep Descent.

Read more: http://wccftech.com/aquanox-dev-async-compute-implementation-limit-nvidia-users/#ixzz3lPs91YEx

I expected to read things like this soon enough. Even though Nvidia will have Async (albeit not as fast as GCN), developers won't want to alienate the majority of GPU users.
 
According to the latest Steam hardware survey, Nvidia are at around 52% with AMD at 25%. That 81% figure was just for new sales in a quarter; it doesn't mean it is actual market share.

And having Async on AMD, if they are making a console version of the game, has no detriment to Nvidia. It is just something Nvidia don't support as well, and disabling it for unsupported hardware would be fine.

Now they would be complete hypocrites if the game suddenly has stupidly high levels of tessellation.
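The per-vendor gating described above can be sketched as a toy check (Python; the function and vendor strings are invented for illustration — a real engine would query the graphics API for multi-queue support rather than match names):

```python
# Toy sketch: enable async compute only on hardware that is known to
# benefit, and fall back to a serial path everywhere else.

def supports_async_compute(vendor: str, architecture: str) -> bool:
    """Hypothetical capability check; real engines ask the driver/API."""
    # GCN parts expose hardware compute queues; assume support there.
    if vendor == "AMD" and architecture == "GCN":
        return True
    # Conservatively disable elsewhere rather than risk a slowdown.
    return False

def choose_render_path(vendor: str, architecture: str) -> str:
    """Pick a render path string based on the capability check."""
    if supports_async_compute(vendor, architecture):
        return "async-compute"
    return "serial"
```

The point of the sketch is that gating the feature off for unsupported hardware costs those users nothing relative to the status quo.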
 
According to the latest Steam hardware survey, Nvidia are at around 52% with AMD at 25%. That 81% figure was just for new sales in a quarter; it doesn't mean it is actual market share.

And having Async on AMD, if they are making a console version of the game, has no detriment to Nvidia. It is just something Nvidia don't support as well, and disabling it for unsupported hardware would be fine.

Now they would be complete hypocrites if the game suddenly has stupidly high levels of tessellation.
Not even that... if they adopt any GameWorks feature, it would already go against the beliefs that they claim :p

Oh, and Ubisoft should definitely take note from Digital Arrow, but I guess that's not going to happen, being Nvidia's biggest partner and all.
 
Not a surprise, but then I expect Nvidia's next range will be quite capable at it, so it's all just talk right now.
No doubt. Nvidia always tends to have the lead at each new generation launch, but supporting older cards is not exactly their favourite thing to do.

It is interesting that Nvidia was really high profile in banging the marketing drum that they have superior DX12 support compared to AMD, as they have more cards "compatible" with DX12 dating back to the 400 series... but now the question is: can cards at a lower tier than the 980Ti actually deliver meaningful performance in actual use? I guess that is the biggest question.

On one hand it would be good that people with older Nvidia cards (i.e. those still braving on with the GTX580, GTX670/GTX680 etc.) can get a taste of DX12, but on the other hand, it could also mean it holds DX12 back from being utilised to its maximum potential :(

Surely developers can implement Async Compute with different levels, much like AA, and allow people with cards that perform better at Async Compute to turn the setting up, while those with lower-performing cards turn it down?
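The graded-setting idea could look something like this minimal sketch (Python; the level names and queue counts are invented for illustration, not taken from any real engine):

```python
# Toy sketch: expose async compute as a graded setting, like AA levels,
# so cards that scale well with it can turn it up.

ASYNC_LEVELS = {
    "off": 0,   # single graphics queue only
    "low": 1,   # one extra compute queue
    "high": 4,  # several compute queues for cards that scale well
}

def compute_queue_count(level: str) -> int:
    """Map a user-facing setting to a number of async compute queues."""
    if level not in ASYNC_LEVELS:
        raise ValueError(f"unknown async compute level: {level}")
    return ASYNC_LEVELS[level]
```

In practice whether extra queues help at all depends on the hardware scheduler, which is exactly why a user-adjustable setting is attractive.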
 
It would be unfair to AMD users if async compute were not implemented at all just because Nvidia cards do not properly support it. I don't see how Nvidia users would be disadvantaged if it were implemented for AMD cards but turned off for Nvidia cards.
 
We've not seen any real-world benefit of AMD's Async Compute yet, so what does it matter? It's all just paper talk and marketing at the moment.

The way it was talked about surrounding Ashes of the Singularity, you'd have expected NVidia to be getting half the frame rate in it, yet they're still faster than AMD.
 
It would be unfair to AMD users if async compute were not implemented at all just because Nvidia cards do not properly support it. I don't see how Nvidia users would be disadvantaged if it were implemented for AMD cards but turned off for Nvidia cards.
Actually I thought the whole point of Async Compute was to increase performance? I simply don't understand how adopting Async Compute would have a negative impact on Nvidia users, when the best performance from their card (be it with Async Compute enabled or disabled) would remain unchanged.

Hypothetically, let's say the 390 and 970 have the same performance with Async Compute disabled, and enabling it boosts the 390 to 125-130% performance, while the 970 stays at 100% with Async Compute left disabled, because enabling it might actually cost the 970 performance. So how exactly is that a worse experience for Nvidia users than before, except for knowing that they are not getting the same benefit as AMD users because Nvidia didn't include the necessary hardware on their cards? That really has me scratching my head.
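Working those hypothetical numbers through (Python; the 60fps baseline and the 25% uplift are purely illustrative figures, not benchmarks):

```python
# Both cards start from the same baseline; async compute lifts only the
# 390, while the 970 runs its unchanged serial path.

baseline_fps = 60.0

fps_390_async = baseline_fps * 1.25  # +25% from async compute
fps_970 = baseline_fps * 1.00        # feature disabled, no change

print(fps_390_async)  # 75.0
print(fps_970)        # 60.0
```

The 970 ends up exactly where it started; the only difference is relative, which is the poster's point.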

We've not seen any real-world benefit of AMD's Async Compute yet, so what does it matter? It's all just paper talk and marketing at the moment.

The way it was talked about surrounding Ashes of the Singularity, you'd have expected NVidia to be getting half the frame rate in it, yet they're still faster than AMD.
Sadly the 980Ti/Titan X is not the entire Nvidia product range...
 
AMD flavoured devs will go the route of Over-Asyncellation :D

An even playing field is good, though I wouldn't mind seeing AMD ahead in the odd game, in fairness.
 
Exactly

Why is my 970 (the most popular mid-range Nvidia GPU) getting destroyed by the 290, what happened there mmj? :p

Try turning off temporal AA, which is a pointless blur filter, and then see where the land lies ;)
As you've said, the 970 is a mid-range card, so why would you expect to run every game on maximum settings? At ~26fps a 290 isn't going to be playable either,
and neither is a Fury X at 31, nor a 980Ti at 38.

At actual playable settings on all four cards you are maybe one setting better off on the top-tier cards... if anything it just goes to show what good bang for buck both a 970 and a 290 are, and with the bigger market share a 970 is more likely to have games optimised around it than the 290.

The way people are talking, "destroyed" sounds like the 290 gets double the frame rate of a 970, but it's less than a 20% difference.

This news isn't really news; we all knew that different devs were going to take different approaches to this... if async compute is extra work then I guess it just depends on who comes knocking with the chequebook open.
 
Exactly

Why is my 970 (the most popular mid-range Nvidia GPU) getting destroyed by the 290, what happened there mmj? :p

It could be down to memory bandwidth (512-bit bus vs 256-bit bus) in a game such as Ashes of the Singularity, or the fact that AMD have started aggressively overriding tessellation factors to get maximum performance, whereas NVidia's older GPUs process whatever is asked of them without any corner-cutting.

You can't really blame AMD and their partners for the marketing blitz about async compute, as it's all they can do really. They're getting absolutely trounced in all areas, so what can they do to actually market their products? Find a feature that their GPUs have and NVidia's don't, then try to blow it out of all proportion with help from some of their industry friends.

It still makes me laugh that people say AMD are not good at marketing. They put more effort into shaping people's views on forums such as this, through media articles and their embedded reps, than any other company; Intel and NVidia just do all of their talking through their products.
 
Actually I thought the whole point of Async Compute was to increase performance? I simply don't understand how adopting Async Compute would have a negative impact on Nvidia users, when the best performance from their card (be it with Async Compute enabled or disabled) would remain unchanged.

Hypothetically, let's say the 390 and 970 have the same performance with Async Compute disabled, and enabling it boosts the 390 to 125-130% performance, while the 970 stays at 100% with Async Compute left disabled, because enabling it might actually cost the 970 performance. So how exactly is that a worse experience for Nvidia users than before, except for knowing that they are not getting the same benefit as AMD users because Nvidia didn't include the necessary hardware on their cards? That really has me scratching my head.

That's basically what I said... :confused::confused::confused:
 
Consoles will ensure async is used. Since GCN in the Xbox One supports it, I expect devs to make sure that feature is used to the fullest. Microsoft will demand it. Devs may reduce the impact in PC versions or code a different path for Nvidia cards... like the AoTS devs have.
 
mmj, AMD have the tiniest powerhouses available yet keep putting up pics of the card on black backgrounds; where are the Steam boxes or an equivalent of the Battlebox program in a SFF...

If AMD were even remotely good at marketing, they would be doing what Nvidia do: making you think you need it. That's why Nvidia have double the userbase.
 