Ashes of the Singularity Coming, with DX12 Benchmark in thread.

Has anyone else noticed how well the GTX 770 does with DX12? The performance increase puts it ahead of the 980 Ti/Titan X if the AoS benchmarks are to be believed :eek:.... I wonder why this is?
 
That had me laughing quite a bit and pretty much sums up what most people must be thinking by now :D

You'd think so, but look what happened after the 970 3.5GB debacle. People are still lapping them up.

Having said that though, anyone buying a card which is missing or does not fully support a major DX12 feature such as Async Shaders needs to think twice. The card would only be viable for a year or less before everything gradually moves to DX12.
 
When is the benchmark going to be available for public download? Is it just beta testers that can use it now?

Founders ends soon; they will then be giving founders "beta keys" to hand out to people for multiplayer testing. I haven't explicitly seen anything saying they are going to do a free benchmark-only version; I think these beta keys are the closest thing, and they haven't said how many of those they'll be doing.
 
You'd think so, but look what happened after the 970 3.5GB debacle. People are still lapping them up.

Having said that though, anyone buying a card which is missing or does not fully support a major DX12 feature such as Async Shaders needs to think twice. The card would only be viable for a year or less before everything gradually moves to DX12.

First off, there was no 970 debacle, and the 970 I owned ran everything just as well as the 290X I owned, so I'm not sure why you keep repeating the same thing.

Secondly, whilst ACE is a good thing for AMD in this bench, with it turned off for Nvidia, what does that mean for AMD or Nvidia? As I see it, AMD closed the gap in what is a very CPU-heavy, CPU-demanding game. With the bench thread for this game now being run by Kaap, it will become more apparent what is what, but can you tell me what difference it makes having it running on AMD and not on Nvidia? Genuine question, as it is the second time I have seen you say this.
 
The guys at Beyond3D have done some testing on this (using GPUView):

https://forum.beyond3d.com/threads/dx12-async-compute-latency-thread.57188/page-26#post-1870028

The primary thing that GPUView revealed is that GCN considers the compute portion of the test as compute, while Maxwell still considers it graphics. This tells us either that A) Maxwell has a smaller or different set of operations that it considers compute, B) Maxwell needs to be addressed in a certain/different way for it to consider the work compute, or C) it's another corner case or driver bug. And it's possible that whatever was happening in the Ashes benchmark that was causing the performance issues is the same thing that's happening here. But we've got enough examples of stuff actually using the compute queue, from CUDA to OpenCL, that it's absolutely functional.

So first we need to find some way, in DX12, to send a concurrent graphics workload alongside a compute workload that Maxwell recognizes as compute, and see if there's any async behavior in that context. Unless the async test can be modified to do this, I think its utility has run its course, and it's revealed a lot along the way.

And then we need to figure out why it's not being used in the current version of the test, and whether it's not being used for a legitimate reason rather than because of a bug or programming error - i.e. that this is one of the many things GCN can treat as compute that Maxwell can't. I can certainly believe GCN is more capable in this regard, but I still find it very difficult to believe that NVIDIA outright lied about Maxwell 2 and its async capabilities.
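
For anyone wondering what "using the compute queue" actually looks like at the API level, here's a minimal D3D12 sketch (my own illustration, not code from the Beyond3D test) of creating a dedicated compute queue alongside the usual direct/graphics queue - this is the mechanism any concurrent graphics-plus-compute test has to go through:

```cpp
// Minimal sketch, not from the Beyond3D test: a D3D12 app asking for a
// dedicated compute queue next to the normal direct (graphics) queue.
// 'device' is assumed to be a valid ID3D12Device; error handling omitted.
#include <windows.h>
#include <d3d12.h>

void CreateQueues(ID3D12Device* device,
                  ID3D12CommandQueue** graphicsQueue,
                  ID3D12CommandQueue** computeQueue)
{
    // Direct queue: accepts graphics, compute and copy command lists.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(graphicsQueue));

    // Compute queue: compute/copy work only. Whether the GPU actually runs
    // this concurrently with the direct queue is down to the hardware and
    // driver - the API only expresses the opportunity, it doesn't promise it.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(computeQueue));
}
```

The GPUView traces are essentially showing whether work submitted to that second queue ends up on a separate hardware queue (what GCN appears to do) or gets folded back onto the graphics queue (what Maxwell appears to be doing in this test).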
 
Let them, I can't see how that would be the case; the capabilities are likely very much there. They also tested async compute scenarios in other games, and the utilisation makes much more sense there. Fair play to them - it's not something I would really have bothered to look into until such time as performance was degraded in a game I could actually play in front of me, and not in a canned benchmark ;)

AMD are getting very good at this type of thing after all.

Oh, but you have to see the potential funny side in the turn of events if the architecture really is incapable. Possible hate campaigns claiming AMD are deliberately pushing asynchronous compute on developers when it's not required, maybe? :p
 
You mean DICE being in bed with AMD? :p

I'm a realist, let's keep it that way ;).

Everything that we see from our perspective is a projection of what is perceived as influence - AMD have found a potential selling point. Currently this just seems like something to latch onto, especially if it does transpire that Maxwell 2 isn't capable of performing these workloads in parallel (which I don't think is the case).
 
I have a feeling it will be the following:
1.)Oh! Noes! It supports Async Shaders turned up to 11!! Its making Nvidia look ****e. But Nvidia has betterz driverz and morr markitsharez.AMD has rubbish tesslelamations!
2.)Oh! Noes! It supports Gameworks turned to 11!! Its makes AMD look the crapest. Nvidia propriety everything and do nothing open and like money! Barstewards!
3.)Consoles gamers just play game.

:p:D
 
I have a feeling it will be the following:
1.)Oh! Noes! It supports Async Shaders turned up to 11!! Its making Nvidia look ****e. But Nvidia has betterz driverz and morr markitsharez.
2.)Oh! Noes! It supports Gameworks turned to 11!! Its makes AMD look the crapest. Nvidia propriety everything and do nothing open and like money! Bartstewards! AMD has rubbish tesslelamations!
3.)Consoles gamers just play game.

:p:D

lol, or it's just a case that Maxwell 2.0 can perform both queues in the same fashion as GCN can... nobody in here is qualified to say either way.
 
It doesn't matter if it does or doesn't; it's the same arguments anyway, going back to the days of the FX and 9000 series, the X800 and 6000 series....

You saw exactly the same bickering when Nvidia said it supported certain features in DX12 which AMD did not, and every level of deflection was used by some to negate the advantage and by others to overstate it.

It's the same here. Since Ashes "might" do better on AMD hardware and uses async shaders, it's all of a sudden invalid as a benchmark to some and ultra-valid to others, and so on.

It will be the same when the Arc DX12 patch drops, and if it runs better on Nvidia hardware it will soon be:

"AMD was lying,Maxwell is better for DX12 despite all the AMD async shaders crap lies"

I expect the Arc DX12 benches to be the next fest.

Then Deus Ex will drop, and if it runs better on AMD hardware:

"Nvidia was lying,GCN is better for DX12 due to async shaders and Nvidia made crap lies"

Then Mirror's Edge...

Then Tomb Raider...

The whole point is that both companies are approaching DX12 hardware from different directions - they are both going to have strengths and weaknesses. Different games will target this - it's the way it's been for years.

We are not dealing in absolutes here.

Remember we are on the first generation of DX12 hardware - both Pascal and Arctic Islands are probably where it's going to be.
 
Oh, but you have to see the potential funny side in the turn of events if the architecture really is incapable. Possible hate campaigns claiming AMD are deliberately pushing asynchronous compute on developers when it's not required, maybe? :p

The issue with that statement is that it's not comparable with what you're comparing it to (which I'm assuming is the tessellation "issue"?). Forcing extra (<<< note that word) tessellation adds no visual benefit; it serves only to cripple AMD cards while leaving nVidia performance largely unaffected. There's no plus side to over-tessellation.

More use of async is not the same; async is a performance enhancer. You can't "overuse async" any more than you can overuse multithreading - it will give performance improvements up to the point where the hardware can no longer keep up. Unfortunately nVidia's ability in this area is significantly more limited than AMD's (if they have the ability at all, though I'm 90% certain they can use async), so nVidia won't be able to reap the benefits of it. But I'm also fairly sure devs will put in an alternate code path for nVidia (such as in Ashes) that doesn't use async. A-Sync is not there to gimp nVidia; it's there as a more efficient method of utilising GPU power.
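
To make the "alternate code path" idea concrete, here's a rough, purely illustrative sketch - the function name, config flag and vendor policy are all mine, not anything from Ashes or any real engine - of how a renderer might decide whether to take its async compute path:

```cpp
// Hypothetical sketch of a vendor/config-gated async compute toggle.
// Names and policy are illustrative only - not Oxide's or anyone else's code.
#include <dxgi.h>
#include <cstdint>

bool ShouldUseAsyncCompute(IDXGIAdapter1* adapter, bool forceOnFromConfig)
{
    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);

    const uint32_t kVendorAMD    = 0x1002;  // PCI vendor ID for AMD
    const uint32_t kVendorNVIDIA = 0x10DE;  // PCI vendor ID for NVIDIA

    if (forceOnFromConfig)
        return true;                        // user explicitly opted in
    if (desc.VendorId == kVendorAMD)
        return true;                        // async path measured as a win on GCN
    if (desc.VendorId == kVendorNVIDIA)
        return false;                       // fall back to the single-queue path
    return false;                           // unknown hardware: play it safe
}
```

When that returns true the renderer submits its compute work on a separate compute queue; when it returns false the same work just goes down the single graphics queue. The rendered output should be identical either way - the only difference is how much of the GPU sits idle on hardware that can genuinely overlap the two.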

Think of it as nVidia being a dual-core i3 processor and AMD being a Core 2 Quad or something (the numbers aren't going to match, but it's a metaphor, so...). In multithreaded apps the quad would be able to significantly outperform the i3, but the i3 will outperform the C2Q in single- or low-threaded apps due to higher IPC. That's what we're seeing with a-sync at the minute, along with other issues muddying the results a little.

TL;DR: a-sync =/= tessellation.
 
Oh, you took it seriously. I can't reply to your post rationally because you seem to know more about the advantages of parallel compute than what's actually been shown so far. Apologies for the nerve it hit though; you're right - AMD's flailing tessellation performance isn't comparable, as long as it has other perks...
 
I've seen the comparison made a few times before, Silent, that's all. I wasn't sure if you were joking or not, but replied anyway ;P
 