In that TechReport bench they used a factory-overclocked version of the 980 Ti, though. Aftermarket cards are fine to use in tests instead of reference cards, but they should be labelled as such in the result sheets as well.
It's not realistic to expect them to hunt down a 980 Ti running at the specced reference clocks, since such cards hardly exist; my bog-standard reference card boosts to close to 1300 MHz out of the box.
How have they used Async more in AOTS?
And have you actually looked at the AOTS bench thread? lol
I would say it's impossible; the 'reference' clocks are more of a minimum guarantee than anything. I have yet to see an Nvidia GPU that doesn't boost a few hundred megahertz above its advertised speeds.
As for Humbug crying about a top-end-clocking 980 Ti 'skewing' results: you can get one for the price of a Fury X, so how is that skewing anything? AMD's practice of reducing the tessellation factor to a low 'optimum' level for their GPUs for maximum performance, rather than rendering at the application-requested setting, skews benchmark results more than anything else.
Since the Nvidia cards aren't seeing any real difference between DX11 and DX12, I can only assume that AMD has simply improved their DX12 drivers a lot over their DX11 drivers, rather than DX12 itself providing any significant boost.
If async compute were being used significantly, I would expect some real difference between the async codepath and the Nvidia codepath of the game, since the Nvidia version of the game does not use async at all.
What I find head-scratching is how, in AOTS, DX11 gets more frames than DX12 on Nvidia. I know Mantle was great for AMD and gave some great performance boosts, but it still didn't overtake Nvidia on DX11, so it just shows how well Nvidia had DX11 coded, or how badly DX12 runs on Nvidia at the moment (pick one).
I think it is great to see a couple of early DX12 demos and benches, but truthfully it is far too early to call, and I expect some massive improvements once devs and drivers have matured for DX12.
What I want to see in benchmarks is more mid-range builds, with cards like my 670, not just 980s and Furys with a 960 as the lowest, and not always the latest CPUs, mainly i7s. It's like they think PC users only have the latest hardware, when most have slightly older but still really good hardware, like an i5 3570K and a 670, for example.
For the average Joe, Mantle offered far better performance than anything else.
It did overtake Nvidia's DX11, stop kidding yourself. The benchmarks on here speak for themselves, and yes, I am ignoring runs that need some heavy OC.
Just looked at all of our Mantle vs DX11 bench results, and Nvidia is winning all of them, so stop being blind. And discount overclocking on a site called Overclockers as much as you like; the numbers are there and they don't lie.
Anandtech has 7970/680 results on various CPUs.
I put in a lot of hours on Mantle, and while Nvidia's DX11 is great, what you got on screen wasn't on the same level as Mantle. Mantle was miles ahead and gave us insight into what's coming with DX12; can't wait for titles to land.