That had me laughing quite a bit and pretty much sums up what most people must be thinking by now

Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
Has anyone else noticed how well the GTX 770 does with DX12? The performance increase puts it ahead of the 980 Ti/Titan X if the AoS benchmarks are to be believed... I wonder why this is?
When is the benchmark going to be available for public download? Is it just beta testers that can use it now?
You'd think so, but look what happened after the 970 3.5GB debacle. People are still lapping them up.
Having said that, though, anyone buying a card that lacks, or does not fully support, a major DX12 feature such as async shaders needs to think twice. The card would only be viable for a year or less before everything gradually moves to DX12.
The primary thing that GPUView revealed is that GCN treats the compute portion of the test as compute, while Maxwell still treats it as graphics. This tells us either that A) Maxwell has a smaller or different set of operations that it considers compute, B) Maxwell needs to be addressed in a certain/different way before it treats the work as compute, or C) it's another corner case or driver bug. And it's possible that whatever was causing the performance issues in the Ashes benchmark is the same thing that's happening here. But we've got enough examples of things actually using the compute queue, from CUDA to OpenCL, that the queue itself is absolutely functional.
So first we need to find some way, in DX12, to send a concurrent graphics workload alongside a compute workload that Maxwell actually recognises as compute (the two-queue setup is sketched below), and see if there's any async behaviour in that context. Unless the async test can be modified to do this, I think its utility has run its course, and it's revealed a lot along the way.
And then figure out why that isn't happening in the current version of the test, and whether it's for a legitimate reason rather than a bug or programming error; that is, whether this is one of the many things GCN can treat as compute that Maxwell can't. I can certainly believe GCN is more capable in this regard, but I still find it very difficult to believe that NVIDIA outright lied about Maxwell 2 and its async capabilities.
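For anyone wanting to poke at this themselves, here is a minimal sketch of what "a graphics workload alongside a compute workload" means at the D3D12 API level. This is not the async test's actual code; the function and variable names are made up, and PSO/root-signature setup, synchronisation and error handling are omitted. The idea is simply one DIRECT queue and one COMPUTE queue on the same device, each fed its own command list, submitted with nothing forcing them to serialise. Whether the driver and hardware then actually overlap the two (which is what a tool like GPUView would show) is exactly the open question.

```cpp
// Minimal sketch, not the benchmark's code: create a graphics (DIRECT) queue
// and a separate COMPUTE queue on the same device, then submit work to both
// so a tool like GPUView can show whether the compute work lands on its own
// hardware queue. PSOs, resources, fences and error handling are omitted.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void SubmitGraphicsAndComputeConcurrently()
{
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // One queue per engine type: DIRECT can take graphics + compute + copy,
    // COMPUTE is the queue async compute work is supposed to go through.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    D3D12_COMMAND_QUEUE_DESC cmpDesc = {};
    cmpDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;

    ComPtr<ID3D12CommandQueue> gfxQueue, cmpQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));
    device->CreateCommandQueue(&cmpDesc, IID_PPV_ARGS(&cmpQueue));

    // Allocators and command lists, one per queue type.
    ComPtr<ID3D12CommandAllocator> gfxAlloc, cmpAlloc;
    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT, IID_PPV_ARGS(&gfxAlloc));
    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_COMPUTE, IID_PPV_ARGS(&cmpAlloc));

    ComPtr<ID3D12GraphicsCommandList> gfxList, cmpList;
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT, gfxAlloc.Get(), nullptr, IID_PPV_ARGS(&gfxList));
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_COMPUTE, cmpAlloc.Get(), nullptr, IID_PPV_ARGS(&cmpList));

    // ... record draw calls on gfxList and Dispatch() calls on cmpList here ...
    gfxList->Close();
    cmpList->Close();

    // Submit both with no fence between them, so the driver/hardware is free
    // to overlap the work if it can actually run the queues concurrently.
    ID3D12CommandList* g[] = { gfxList.Get() };
    ID3D12CommandList* c[] = { cmpList.Get() };
    gfxQueue->ExecuteCommandLists(1, g);
    cmpQueue->ExecuteCommandLists(1, c);

    // A fence per queue would then tell you when each stream finished.
}
```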
That had me laughing quite a bit and pretty much sums up what most people must be thinking by now
Oh, but you have to see the potential funny side in the turn of events if the architecture really is incapable. Possible hate campaigns claiming AMD are deliberately pushing asynchronous compute on developers when it's not required, maybe?
You mean DICE being in bed with AMD?
I'm a realist, let's keep it that way!
I have a feeling it will be the following:
1.)Oh! Noes! It supports Async Shaders turned up to 11!! Its making Nvidia look ****e. But Nvidia has betterz driverz and morr markitsharez.
2.)Oh! Noes! It supports Gameworks turned to 11!! Its makes AMD look the crapest. Nvidia propriety everything and do nothing open and like money! Bartstewards! AMD has rubbish tesslelamations!
3.)Consoles gamers just play game.
lol, or it's just a case that Maxwell 2.0 can handle both queues in the same fashion GCN can... nobody in here is qualified to say either way.
Oh, but you have to see the potential funny side in the turn of events if the architecture really is incapable. Possible hate campaigns claiming AMD are deliberately pushing asynchronous compute on developers when it's not required, maybe?
The issue with that statement is that it's not comparable with what you're comparing it to (which I'm assuming is the tessellation "issue"?). Forcing extra (<<< note that word) tessellation adds no visual benefit; it serves only to cripple AMD cards while leaving nVidia performance largely unaffected. There's no plus side to over-tessellation.
More use of async is not the same: async is a performance enhancer. You can't "overuse async" any more than you can overuse multithreading; it gives performance improvements up to the point where the hardware can no longer keep up. Unfortunately nVidia's ability in this area is significantly more limited than AMD's (if they have the ability at all, though I'm 90% certain they can use async), so nVidia won't be able to reap the benefits of it. But I'm also fairly sure devs will be putting in an alternate code path for nVidia (such as in Ashes) that doesn't use async (see the sketch at the end of this post). Async is not there to gimp nVidia; it's there as a more efficient method of utilising GPU power.
Think of it as nVidia being a dual-core i3 processor and AMD being a Core 2 Quad or something (the numbers aren't going to match, but it's a metaphor, so...). In multithreaded apps the quad would significantly outperform the i3, but the i3 will outperform the C2Q in single- or lightly-threaded apps due to higher IPC. That's what we're seeing with async at the minute, along with other issues muddying the results a little.
TL;DR: async =/= tessellation.
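To make the "alternate code path" point concrete, here is a hedged sketch of what such a fallback can look like. The names are invented and this is not Ashes' actual code; it just assumes a D3D12 renderer. The key point is that the compute dispatch itself is identical either way, the only thing the vendor check changes is which command list (and therefore which queue) the work lands on, so nothing extra is forced on anyone.

```cpp
// Hedged sketch only (function/variable names are made up): one plausible
// shape for a per-vendor async-compute fallback. The same dispatch is
// recorded either on a COMPUTE-type command list (submitted to its own queue,
// so it may overlap the frame's graphics work) or on the DIRECT-type list
// (so it simply runs in line with the graphics work).
#include <windows.h>
#include <d3d12.h>
#include <dxgi.h>

// Well-known PCI vendor IDs.
constexpr UINT kVendorAMD    = 0x1002;
constexpr UINT kVendorNVIDIA = 0x10DE;

// Hypothetical policy: only take the separate-compute-queue path where it is
// known to help (vendor ID used here as a stand-in for a real capability or
// benchmark check).
bool UseAsyncCompute(IDXGIAdapter1* adapter)
{
    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);
    return desc.VendorId == kVendorAMD;
}

void RecordFrameCompute(bool useAsync,
                        ID3D12GraphicsCommandList* gfxList,  // DIRECT-type list
                        ID3D12GraphicsCommandList* cmpList,  // COMPUTE-type list
                        ID3D12PipelineState* computePso,
                        ID3D12RootSignature* computeRootSig,
                        UINT threadGroupsX)
{
    // Dispatch() is legal on both DIRECT and COMPUTE command lists; only the
    // queue the list is executed on decides whether the work can overlap the
    // rendering, so the fallback costs nothing but the potential overlap.
    ID3D12GraphicsCommandList* target = useAsync ? cmpList : gfxList;
    target->SetPipelineState(computePso);
    target->SetComputeRootSignature(computeRootSig);
    target->Dispatch(threadGroupsX, 1, 1);
}
```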