
3D Mark Time Spy not using true A-Sync, Maxwell A-Sync switched off!

Soldato
Joined
19 Feb 2011
Posts
5,849
So it appears that 3D Mark may well not be using true async compute but instead a concurrent version; whether this favours one vendor or another is probably debatable.

Also, it seems that async is switched off in 3D Mark for Nvidia at the driver level.

Some threads to read

http://steamcommunity.com/app/223850/discussions/0/366298942110944664/

http://www.overclock.net/t/1605674/computerbase-de-doom-vulkan-benchmarked/220#post_25351958

https://www.reddit.com/r/Amd/comments/4t5ckj/apparently_3dmark_doesnt_really_use_any/

https://www.reddit.com/r/Amd/comments/4t6gz3/futuremark_developer_responds_to_accusations_of/

Can't say I'm surprised to be honest. Concurrent is probably the best middle ground for both card vendors: under it both Nvidia and AMD cards work, whereas if it were true async and basically only AMD cards got the benefit, it would look bad on someone (Nvidia or 3D Mark? You choose).

It's interesting though, but this is what market share gives you. Some will say the benchmark is skewed towards an Nvidia bias; I say not really, it's skewed towards meeting both AMD and Nvidia in the middle. Really, AMD lose out, as their cards can do so much more when async is implemented correctly, which tends to favour their hardware.

What it does point at, though, is what people already suspected: that Maxwell cannot do async correctly, if at all, and definitely not at a hardware level, and it seems Pascal cannot do it at a hardware level either.

Interesting times ahead. I for one think Time Spy can still be used as an honest indication of how your card will work under DX12, as some devs may not put much work into true async, but if they do, expect AMD cards to perform a lot better.
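
For anyone wondering what "true" async actually looks like at the API level, here's a rough idea: in D3D12 the application records its compute work onto a second, COMPUTE-type command queue that sits alongside the normal DIRECT (graphics) queue, and whether the GPU genuinely overlaps the two is entirely down to the driver and hardware. A minimal sketch below, purely illustrative (my own made-up function, nothing to do with Time Spy's actual code):

```cpp
// Illustrative only: creating a graphics queue plus a separate compute queue in D3D12.
// Error handling omitted for brevity.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;    // accepts graphics, compute and copy work
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

    D3D12_COMMAND_QUEUE_DESC cmpDesc = {};
    cmpDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;   // accepts compute and copy work only
    device->CreateCommandQueue(&cmpDesc, IID_PPV_ARGS(&computeQueue));

    // These calls succeed on every DX12 card. A driver that can't (or won't) run the two
    // queues in parallel is free to serialise the compute work behind the graphics work --
    // the application only exposes the opportunity for overlap, it can't force it.
}
```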
 
Caporegime
Joined
18 Oct 2002
Posts
33,188
Mostly what I'm gathering is that this is BS. Concurrent in general means parallel, though that isn't what it means in every usage in this context. Most of this is users on random forums not really understanding what is going on.

Without seeing the actual code, what people say is relatively meaningless. FM are also only claiming async compute is turned off for Maxwell specifically (though it's always worth remembering that Nvidia stated Maxwell had async and that a driver would be coming soon for Ashes... a year ago, when they were still promising DX12 drivers for Fermi).

However, that doesn't mean the async compute is effective or well used. More importantly, people seem to be trying to turn it into 'AMD has async compute and Nvidia doesn't'; the reality is that AMD is doing DX12 better full stop, and it's not dependent on async compute. One benchmark doesn't mean much either way, and neither does one game. I didn't think AMD sucked because the poor DX12 implementation in RotTR sucked, and that was finally 'fixed' in the end as well.

To date, AMD gets a significant boost in DX12 game after game. Neither Pascal nor Maxwell makes the gains under Vulkan in Doom that AMD does; in fact, in no game does Nvidia get the boost it shows in this benchmark.

One problem, as always, is that it's a benchmark: there is no game code, no unpredictability, it's far too, let's say, optimised compared to game code, and it's code that lends itself to heavy optimisation. Take Tomb Raider, Doom or Starcraft: you can't, as a driver team, go "well, after 70 frames X will happen and after 900 frames Y will happen, so let's literally prepare the driver for each change as best as possible to get the best result". A benchmark being completely repeatable makes it 'meh' in general.

Let's say 9/10 games show massive DX12 gains for AMD compared to Pascal and Maxwell; one benchmark showing better gains for Nvidia with async should only tell you that it's either overly optimised for Nvidia or AMD haven't optimised for it. It's zero indication of real-world DX12 gaming, because for that we have... real-world DX12 gaming, which disagrees with it entirely.


On async, most people still have a fundamental misunderstanding of it, and too many people are using the "it's purely down to AMD not being utilised well in DX11 and Nvidia don't have that problem" argument, which is incorrect.

Interestingly, the Nvidia guys who insist Nvidia absolutely don't need async because they don't suffer poor utilisation like AMD... will have a hard time explaining the async gain in 3dmark.
 
Soldato
Joined
13 Mar 2008
Posts
9,638
Location
Ireland
3D Mark is only a benchmark app so who really cares..:confused:

The FPS performance you get in real games is where it counts..;)

It'll be used in reviews just like the other 3D Mark benchmarks, and will be used to judge how good a card is for the consumer.

So people should care; it's FutureMark after all, and their stuff is used all the time to gauge the general performance of GPUs.

Whether or not this is a nasty play by NV is something else, though; I found it interesting that the developer said this.

NVIDIA have been saying for ages that Maxwell will get driver support for it; is it going to disappear the way Fermi's support for it and for Vulkan did?
The reason Maxwell doesn't take a hit is because NVIDIA has explicitly disabled async compute in Maxwell drivers. So no matter how much we pile things to the queues, they cannot be set to run asynchronously because the driver says "no, I can't do that". Basically NV driver tells Time Spy to go "async off" for the run on that card.
 
Soldato
Joined
27 Nov 2005
Posts
24,697
Location
Guernsey
It'll be used in reviews just like the other 3D Mark benchmarks, and will be used to judge how good a card is for the consumer.

So people should care; it's FutureMark after all, and their stuff is used all the time to gauge the general performance of GPUs.
Sorry, but when buying a new GPU, how many FPS I will get in the games I play is far more important to me than how high a 3D Mark benchmark score it's going to give...

But as they say everyone is different..;)
 
Soldato
OP
Joined
19 Feb 2011
Posts
5,849
Sorry, but how many FPS I get in real games is far more important to me than how high a 3D Mark benchmark score is...

But as they say everyone is different..;)

This is true of a lot of people; however, people will check game benchmarks and also stuff like 3D Mark scores online to compare a card against a previous or alternative card.

Fact is, the 3D Mark benchmark will be used by some people to compare cards when purchasing; it's a fact of life. While it may give no indication of real-world performance, it will be used as a judgement by people.

It's not an issue for me as I don't buy Nvidia products, and I tend to treat a lot of this stuff with a pinch of salt. Most benchmarks are flawed anyhow, with people putting reference cards up against overclocked custom cards etc. to skew results.

Best practice is to check real-world performance by asking users on forums about their experiences etc.; at best, review sites can be used as a rough guide to what to expect. I generally don't even read them, as I don't like to put money into the pockets of dishonest people by reading their websites when they are blatantly biased towards one manufacturer or another.
 
Soldato
Joined
13 Mar 2008
Posts
9,638
Location
Ireland
Sorry, but when buying a new GPU, how many FPS I will get in the games I play is far more important to me than how high a 3D Mark benchmark score it's going to give...

But as they say everyone is different..;)

And if someone wants to see how well their new card does in DX12, a review site will most likely use this as a reference, just like many used Tomb Raider when it didn't have async either; but many ignored Hitman and others.

While it may not influence you, it most likely will for the vast majority. Looking at PCPer and other sites, they have a host of games that are Gameworks titles and run them with those settings on, but then choose Tomb Raider for DX12.

This benchmark will make the list, and what I find worrying is that users and NVIDIA keep saying Maxwell can do async, and does so via the drivers.

Yet here we have FutureMark saying:
The reason Maxwell doesn't take a hit is because NVIDIA has explicitly disabled async compute in Maxwell drivers. So no matter how much we pile things to the queues, they cannot be set to run asynchronously because the driver says "no, I can't do that". Basically NV driver tells Time Spy to go "async off" for the run on that card.

It doesn't sit well with me, really; especially that the 980 Ti is somehow beating the Fury X in this DX12 test, when in games they match or the Fury X pulls ahead.

Who knows, maybe the drivers will fix it all, or people will just look at game performance; but as it stands, the majority of reviewers and their audience will be taking these numbers to heart as a gauge of DX12 performance, especially since many people dismiss Hitman as broken, and do the same for other DX12 titles if they show Maxwell cards taking a hit or seeing no improvement.
 
Associate
Joined
30 Mar 2009
Posts
388
This was posted by one of the devs:

Whole thread (and the Reddit threads - all six or seven of them - and a couple of other threads in other places - you guys have been posting this everywhere...) have been forwarded to the 3DMark dev team and our director of engineering and I have recommended that they should probably say something.

It is a weekend and they do not normally work on weekends, so this may take a couple of days, but my guess is that they would further clarify this issue by expanding on the Technical Guide to be more detailed as to what 3DMark Time Spy does exactly.

Those yelling about refunds or throwing wild accusations of bias are recommended to calm down ever so slightly. I'm sure a lot more will be written on the oh-so-interesting subject of DX12 async compute over the coming days.
 
Soldato
OP
Joined
19 Feb 2011
Posts
5,849
This was posted by one of the devs:

A fairly condescending reply from someone I guess is from 3D Mark? I will admit the hysteria is a bit OTT; it's almost pitchfork-and-flaming-torch levels of witch-huntery lol...

But that is a bit of a condescending reply. I will say the damage has already been done, however; some people will no doubt never trust their benchmarks again, though admittedly 99% of them will be AMD users.

I find it funny, though, that Nvidia are saying one thing about Maxwell and async while pretty much every dev is saying otherwise, even stating it's turned off..

Someone's lying, and the finger seems to be pointing in Nvidia's direction. They will come up smelling of roses anyhow, though, as they will soon render the Maxwell GPUs obsolete via driver updates as is their norm, so it's no loss to them and soon to be forgotten.

That my friends is the sad state of the GPU world.
 
Man of Honour
Joined
13 Oct 2006
Posts
91,158
With all the hysteria, people would do well to keep the following in mind - quoting slightly out of context:

Mahigan said:
See why understanding what is actually happening behind the scenes is important rather than just looking at numbers? Not all Asynchronous Compute implementations are equal. You would do well to take note of this.

I wonder if nVidia are also trying to use their market share to deny AMD something that would increase performance on their platform, aside from any of the technical story - if so, in the long run they are going to back themselves into a corner.
 
Soldato
Joined
4 Feb 2006
Posts
3,204
If the code was attempting to use Async and drivers didn't support it then you would expect some performance drop unless they are using an alternate code branch or reducing the load somehow for specific cards. All cards should execute the code in the same way otherwise it risks making the benchmark results invalid.

Since we actually see the non-async cards gaining, it looks like there is no real async compute being used. FM claim that Time Spy is a showcase of async compute etc., but what is their interpretation of async compute? They need to clarify that soon.
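
On the "all cards should execute the code in the same way" point: in D3D12 the cross-queue ordering is expressed with fences, so a submission path like the hypothetical sketch below would be identical on every vendor (the names are made up, this is not Futuremark's code). Whether the compute work actually overlapped the graphics work before the wait point is the driver's business and is invisible to the application:

```cpp
#include <d3d12.h>

// Illustrative only: submit compute work on one queue, then make the graphics queue
// wait (on the GPU timeline) for a fence the compute queue signals.
void SubmitFrame(ID3D12CommandQueue* graphicsQueue,
                 ID3D12CommandQueue* computeQueue,
                 ID3D12Fence* sharedFence,
                 UINT64 fenceValue,
                 ID3D12CommandList* const* computeLists, UINT numComputeLists,
                 ID3D12CommandList* const* graphicsLists, UINT numGraphicsLists)
{
    computeQueue->ExecuteCommandLists(numComputeLists, computeLists);    // the "async" compute work
    computeQueue->Signal(sharedFence, fenceValue);                       // mark it finished

    graphicsQueue->Wait(sharedFence, fenceValue);                        // GPU-side wait, no CPU stall
    graphicsQueue->ExecuteCommandLists(numGraphicsLists, graphicsLists); // work that consumes the results
}
```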
 
Man of Honour
Joined
13 Oct 2006
Posts
91,158
If the code was attempting to use Async and drivers didn't support it then you would expect some performance drop

Thing is, that doesn't necessarily happen - it very much depends on the nature of the data you are using and how you are trying to feed it to the hardware.
 
Soldato
Joined
7 Aug 2013
Posts
3,510
More importantly, people seem to be trying to turn it into 'AMD has async compute and Nvidia doesn't'; the reality is that AMD is doing DX12 better full stop, and it's not dependent on async compute.
It is absolutely dependent on the method of async compute being utilized. There is nothing else about the hardware that would give it an advantage.

Don't ignore that most of these titles run better on AMD cards even using DX11, meaning that these are simply being built with AMD cards in mind as a priority from the get-go, not that they have some inherent DX12 advantage outside of their async compute abilities.
 
Soldato
Joined
7 Aug 2013
Posts
3,510
Best practice is to check real world performance by asking users on forums their experiences etc, at best review sites can be used as a rough guide to what to expect. I generally dont even read them either as i dont like to put money into the pockets of dishonest people by reading their websites when they are blatantly biased to one manufacturer or another.
Yea, because as we all know, users on forums would never be biased and make biased claims about their experiences....

Review sites themselves are great and exactly what we should be looking at. They are comprehensive and do loads of comparison testing that an ordinary user could not (or would not) do. And they have experience doing it, so they generally know how to avoid rookie benchmarking errors.

People who think all these review sites are 'bought' or something are just being paranoid due to their own biases, often trying to dismiss things they don't want to hear/see with accusations of being bought or of personal bias. As always, it's the fanboys who tend to see fanboys everywhere else where there are none. Just paranoia and a reflection of their own biases in the end.

It's the artificial benchmarks that we should always treat with caution. Not because they are biased or bought or anything, but because they only represent how cards will run in that one specific app, which isn't terribly useful for knowing what real gaming performance will be. These benches seem more useful for simply giving users a quick and easy way to run benchmarks themselves and compare with others. The problem comes when people try to leap to conclusions based on them.
 
Soldato
Joined
4 Feb 2006
Posts
3,204
It is absolutely dependent on the method of async compute being utilized. There is nothing else about the hardware that would give it an advantage.

Don't ignore that most of these titles run better on AMD cards even using DX11, meaning that these are simply being built with AMD cards in mind as a priority from the get-go, not that they have some inherent DX12 advantage outside of their async compute abilities.

Async compute only accounts for around a 10% performance increase in most scenarios. The reason AMD is doing better in low-level APIs like DX12, Vulkan and Mantle is that these APIs reduce the driver overhead AMD suffers in DX11.
If AMD somehow managed to make their DX11 driver overhead similar to the DX12 driver's, then we would automatically see the boost in older games.

The massive boost we got in Doom under Vulkan is a prime example of how driver overhead affects performance; the OpenGL driver is akin to the DX11 driver in that case.
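
To put some purely made-up illustrative numbers on that (my own example, not measured data): say a card goes from 60 fps in DX11 to 78 fps in DX12, a 30% jump. If async compute on its own is only worth roughly 10%, that accounts for 60 to about 66 fps; the rest of the way to 78 fps (another ~18%) would have to come from the reduced driver/CPU overhead and better multithreaded command submission that the low-level API gives you.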
 
Caporegime
Joined
18 Oct 2002
Posts
33,188
It is absolutely dependent on the method of async compute being utilized. There is nothing else about the hardware that would give it an advantage.

Don't ignore that most of these titles run better on AMD cards even using DX11, meaning that these are simply being built with AMD cards in mind as a priority from the get-go, not that they have some inherent DX12 advantage outside of their async compute abilities.

Yup, in Doom AMD clearly beats Nvidia in DX11 and DX12. It doesn't matter how well AMD does in DX11 or DX12; it's the gain going from DX11 to DX12 that matters, and it's MUCH bigger for AMD regardless of initial DX11 performance. There is nothing else about the hardware, except that even in games with minimal or no async compute implementation AMD gains more in DX12 than Nvidia. Async compute certainly helps and gives a varying increase in performance.

It's funny, one day Nvidia guys are saying there is way more to DX12 than just async compute, the next you guys are saying the only difference is async compute.
 
Soldato
Joined
7 Aug 2013
Posts
3,510
Async compute only accounts for around a 10% performance increase in most scenarios.
I highly doubt it's that little. Do you have a source on that figure or just a raw guess?

Async compute certainly has the potential for some very noticeable gains. It will obviously depend on the implementation and the specific application it is being used in, though.

We definitely know that it is being used on consoles to much greater effect.

The reason AMD is doing better in low-level APIs like DX12, Vulkan and Mantle is that these APIs reduce the driver overhead AMD suffers in DX11.
If AMD somehow managed to make their DX11 driver overhead similar to the DX12 driver's, then we would automatically see the boost in older games.

The massive boost we got in Doom under Vulkan is a prime example of how driver overhead affects performance; the OpenGL driver is akin to the DX11 driver in that case.
I'm fully aware of AMD lacking in DX11/OpenGL drivers compared to Nvidia, and that this will account for some of the performance gain (especially in Doom). But this is software, not hardware. It certainly doesn't mean that AMD is just inherently 'better' at DX12; just that when you release the bottlenecks, they had more to gain.
 
Permabanned
Joined
8 Jul 2016
Posts
430
Lol, people are quoting Mahigan? He is the same guy who said that AMD would be first to launch next-gen cards, that Polaris would be on par with high-end Pascal, and that Nvidia Pascal was 6 months behind Polaris, quoting some other references.

There are some people who just want a BS reason not to believe that Nvidia is faster than AMD.
 
Soldato
Joined
7 Aug 2013
Posts
3,510
Yup, in Doom AMD clearly beats Nvidia in DX11 and DX12. It doesn't matter how well AMD does in DX11 or DX12; it's the gain going from DX11 to DX12 that matters, and it's MUCH bigger for AMD regardless of initial DX11 performance. There is nothing else about the hardware, except that even in games with minimal or no async compute implementation AMD gains more in DX12 than Nvidia. Async compute certainly helps and gives a varying increase in performance.

It's funny, one day Nvidia guys are saying there is way more to DX12 than just async compute, the next you guys are saying the only difference is async compute.
I'm not an 'Nvidia guy', first off. I'm just not anti-Nvidia, but I can see how that would confuse a certain segment of y'all there....

Anyways, you are deliberately twisting what has been said in some half-hearted attempt to accuse people of hypocrisy, but you're way off the mark. There *is* more to DX12 than just async compute. If you disagree, you're simply wrong. The point being made here is that AMD's advantage comes from their async compute capabilities, not from the other aspects of DX12. I know that's not difficult to grasp, so please don't try to twist it around any more.
 