Only AMD has true Async Compute - Doom Devs

AMD will try to push as many games as possible towards async compute, while Nvidia will try to keep devs on DX11 for as long as it can.

+1

However, I really cannot see why this cannot be done by the devs for both sides. If async is added for AMD cards and other features are added for Nvidia cards (to give both cards optimum performance), then that is what we should all be hoping for, surely?

I really don't give a toss if your card is faster than mine; as long as any and every game runs on my card to the best of its ability, I will be happy. What no one should want is for either AMD or Nvidia cards to suffer poor performance for the benefit of the other.

Not sure I have said that right, but I hope you get what I mean.

Why aren't devs putting out games that make the best use of both sets of cards... there you go, I got there in the end. :p
 

I agree with this sentiment exactly, as should everybody imo.
 

nVidia seem to be trying to push Vulkan (on their terms) rather than holding onto DX11.

nVidia can run the kind of pipeline that AMD has stamped 'Async' over the top of: put the same variables in and you get the same result back, even though the two go about it in different ways. That doesn't mean the performance is necessarily optimal, though, or that it can be utilised effectively.
 
I agree with what you are saying. For people like us it was easy to understand, but if you watch parts of it on YouTube without a good understanding then I can see why people might be taken in. It was good marketing from my point of view, as I saw it work for real. Go back and check the thread and you will see a few on here who fell for it.

I don't think anyone "fell for it" in the way you are suggesting. There are several people of a typically AMD favouring variety that are trying to ignore the fact of what was said and make out that they "fell for it" to suit their agenda, or just to get a rise out of "the other side".
 
nVidia seem to be trying to push Vulkan (on their terms) rather than holding onto DX11.

nVidia can run the kind of pipeline that AMD has stamped 'Async' over the top of: put the same variables in and you get the same result back, even though the two go about it in different ways. That doesn't mean the performance is necessarily optimal, though, or that it can be utilised effectively.

No they can't, otherwise the AoTS devs wouldn't have needed to take the Async Compute path out for Nvidia cards.
They specifically said that Nvidia drivers reported support for async when queried but did not actually do anything when asked to use it.
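For anyone lost in the back-and-forth: the whole argument is about overlapping compute work with graphics work so otherwise-idle shader time gets used. A toy scheduling model (plain Python, made-up numbers, nothing vendor-specific) shows the shape of the win:

```python
# Toy model of why async compute helps. Graphics passes often leave
# shader cores partly idle (e.g. shadow-map rendering), and async
# compute lets compute jobs fill those gaps instead of running
# serially afterwards. All durations are made-up illustrative units,
# not measurements from any real GPU.

def serial_time(graphics_ms, compute_ms):
    """Compute runs only after all graphics work has finished."""
    return sum(graphics_ms) + sum(compute_ms)

def async_time(graphics_ms, idle_fraction, compute_ms):
    """Compute fills the idle fraction of the graphics workload first;
    whatever doesn't fit still runs serially at the end."""
    total_graphics = sum(graphics_ms)
    idle_budget = total_graphics * idle_fraction  # shader time going spare
    leftover = max(0.0, sum(compute_ms) - idle_budget)
    return total_graphics + leftover

graphics = [4.0, 3.0, 5.0]   # per-pass graphics cost in ms (illustrative)
compute = [2.0, 1.5]         # compute jobs in ms (illustrative)

print(serial_time(graphics, compute))         # 15.5
print(async_time(graphics, 0.25, compute))    # 12.5 — most compute hidden in idle time
```

The model also shows why the benefit varies by architecture: a GPU whose graphics passes leave little idle time (a small `idle_fraction`) has little for async compute to fill, which is consistent with the different gains people see on different cards.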
 
Why aren't devs putting out games that make the best use out of both sets of cards....there you go, I got there in the end. :p

Yes, in a world full of rainbows that would be swell. :D
Nvidia cannot push for async for the simple reason that it has nothing to gain and a lot to lose. AMD cards as far back as the 7900 series gain 20-30% performance from it, which practically pushes every card up to the next performance segment. How would Nvidia users feel when the majority of games released put a cheaper card 20% ahead of the one they own, like a 960 owner watching a 380 match a 970, or a 390X almost matching a 980 Ti? There is a reason Nvidia has everything to gain from keeping async to a handful of titles rather than seeing the majority of released games use it: the backlash wouldn't be pretty, and neither would the damage to the firm's reputation.
So do I believe Nvidia is staying idle on the async front? No I don't. I believe they are actively fighting against it, because that's just common sense, or at the very least trying to slow down adoption to buy time, so that users on the 700-900 series move to Pascal/Volta, which are better equipped for it.
Nothing to win, a lot to lose.
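The "next segment" point is just multiplication: a 20-30% uplift on a mid-range card lands it on the tier above. A toy example with made-up relative-performance index numbers (illustrative placeholders, not benchmark data):

```python
# Made-up relative performance index (bigger = faster). These are
# illustrative placeholder numbers, not benchmark results.
index = {"R9 380": 100, "GTX 960": 105, "GTX 970": 128}

async_uplift = 1.25                  # midpoint of the claimed 20-30% gain
boosted_380 = index["R9 380"] * async_uplift

print(boosted_380)                   # 125.0 — right up against the 970's tier
```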
 
Respek lol. :p
I think DP is working overtime. Damage control is hurting his fingers or his keyboard.

The really silly thing is, Nvidia promised repeatedly to bring async compute drivers to Maxwell, month after month, game after game, insisting it supported it. Now apparently there are 'new' hardware features in Pascal that enable async... but, you know, it was promised for Maxwell. If Maxwell supported it in hardware, as Nvidia claimed many times, there was never any reason not to have drivers for it from day one of DX12 availability.

But then, the last time I looked, Nvidia still claimed Fermi was getting DX12 drivers. Even after Nvidia apparently had MS water down DX12 by adding a further, lower feature level just to include Fermi, Fermi still isn't supported.

DP insisted forever that Maxwell did async compute; now he's insisting Pascal does it, even though his explanation/understanding of async compute changes with every post. The initial argument after the 1080 info was that it has pre-emption, so it supports async now. When it was pointed out how wrong that was, the tune changed, but he still insists it has it with zero proof anywhere. About as much proof as there is of Fermi DX12 drivers and Maxwell async compute support.
 
No they can't, otherwise the AoTS devs wouldn't have needed to take the Async Compute path out for Nvidia cards.
They specifically said that Nvidia drivers reported support for async when queried but did not actually do anything when asked to use it.

Maxwell is/was quite broken software-wise; Pascal is another story, but both support the (compute) functionality in hardware. Maxwell especially needs developers to understand how they are loading it up, rather than just throwing work at it and hoping for the best.
 
I don't think anyone "fell for it" in the way you are suggesting. There are several people of a typically AMD favouring variety that are trying to ignore the fact of what was said and make out that they "fell for it" to suit their agenda, or just to get a rise out of "the other side".

Multiple people in chat during the event, on the reddit thread, and I believe in the thread here made various statements, from "zomg, it's twice as fast as Titan X" to "wait, now it's twice as fast" and everything in between. A LOT of people were confused because he spent most of the time after introducing that slide leaving out the 'in VR' qualifier when making that statement, and it clearly misled people: plenty were saying "cool, twice as fast as a Titan X".
 
I don't like how they were showing off a slide of dual 480s running at 52% efficiency or whatever that was. Even if it was a bit faster than the 1080, it would have been better to see 100fps with 90%+ efficiency. Or why not just pick a different game that showed their max potential?
 
Multiple people in chat during the event, on the reddit thread, and I believe in the thread here made various statements, from "zomg, it's twice as fast as Titan X" to "wait, now it's twice as fast" and everything in between. A LOT of people were confused because he spent most of the time after introducing that slide leaving out the 'in VR' qualifier when making that statement, and it clearly misled people: plenty were saying "cool, twice as fast as a Titan X".

These people are otherwise known as idiots. There was a good 10-15 minutes going over the non-gaming performance, and aside from the 'power of 10' slide at the end (which was partly disorganised due to a technical malfunction) the presentation very plainly had VR up in big letters. I think there is maybe one instance of him saying 2X TX where the slide in the background doesn't say VR = 2X (I can't remember if it's on the summary slide or not).

No one with any degree of intelligence could come away from that presentation thinking the 1080 was twice a Titan X in non-VR gaming performance.
 
I don't like how they were showing off a slide of dual 480s running at 52% efficiency or whatever that was. Even if it was a bit faster than the 1080, it would have been better to see 100fps with 90%+ efficiency. Or why not just pick a different game that showed their max potential?

It is very strange, but they did show it at 1440p getting 62fps in CrossFire. At ~50% utilization that suggests a single card is getting around 35fps.

Comparing it to older cards, have a look at the following link:
http://www.hardocp.com/article/2016/04/01/ashes_singularity_day_1_benchmark_preview/3#.V088qL5G52U

It looks like it is about 390X performance in a DX12 game, and probably better in DX11 games due to GCN 4.0. This is all speculation since we don't know the exact settings used, but judging by the 58fps for the 1080 I think it may have been the Crazy settings.
Not too bad for a $199 card.
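The ~35fps single-card guess implicitly assumes CrossFire scaling of roughly 1.75-1.8x rather than a perfect 2x. A one-liner (plain Python, speculative numbers, not official data) makes the relationship explicit:

```python
# Back-of-envelope: what the 62 fps CrossFire figure implies for a
# single RX 480, as a function of assumed CrossFire scaling.
# scaling = 2.0 means perfect doubling; 1.0 means no gain from the
# second card. Both readings are speculative, not official numbers.

def single_card_fps(pair_fps, scaling):
    """Estimated single-card frame rate given the dual-card result."""
    return pair_fps / scaling

print(single_card_fps(62.0, 2.00))   # 31.0 fps if scaling were perfect
print(single_card_fps(62.0, 1.77))   # ~35.0 fps, the guess in the post above
```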
 
The really silly thing is, Nvidia promised repeatedly to bring async compute drivers to Maxwell, month after month, game after game, insisting it supported it. Now apparently there are 'new' hardware features in Pascal that enable async... but, you know, it was promised for Maxwell. If Maxwell supported it in hardware, as Nvidia claimed many times, there was never any reason not to have drivers for it from day one of DX12 availability.

But then, the last time I looked, Nvidia still claimed Fermi was getting DX12 drivers. Even after Nvidia apparently had MS water down DX12 by adding a further, lower feature level just to include Fermi, Fermi still isn't supported.

DP insisted forever that Maxwell did async compute; now he's insisting Pascal does it, even though his explanation/understanding of async compute changes with every post. The initial argument after the 1080 info was that it has pre-emption, so it supports async now. When it was pointed out how wrong that was, the tune changed, but he still insists it has it with zero proof anywhere. About as much proof as there is of Fermi DX12 drivers and Maxwell async compute support.

What a load of rubbish.

Maxwell does do async compute. You can write some test code and see the results for yourself. People have done this and shown that async works with current drivers on Maxwell.

Do you ever research anything before going on an anti-nvidia rampage?
 