If your DX12 performance isn't better than the driver's DX11 performance, you're doing it wrong.
This is effectively best-case (again, within reason) AMD GCN performance vs. worst-case (mostly -- Oxide probably isn't intentionally slowing down Nvidia hardware) Nvidia GM20x performance. All you need to do is look at the DX11 numbers; that is the bar to clear, on both sides. AMD set the bar very low because they didn't optimize their DX11 drivers much at all. Nvidia set the DX11 bar as high as possible, showing where developers need to start, not where they should finish.
I'm also pretty skeptical of some of the claims and language coming from Oxide (http://www.oxidegames.com/2015/08/16/the-birth-of-a-new-api/).

"All IHVs have had access to our source code for over a year, and we can confirm that both Nvidia and AMD compile our very latest changes on a daily basis and have been running our application in their labs for months." Sure, but this is an AMD-partnered game, so AMD is actively working with the devs while Nvidia isn't.

"Some optimizations that the drivers are doing in DX11 just aren’t working in DX12 yet." Or, put another way: Nvidia's DX11 driver optimizations are better than Oxide's DX12 code optimizations.

"This in no way affects the validity of a DX12 to DX12 test, as the same exact workload gets sent to everyone’s GPUs." Um... see above: you're running AMD-tuned code on Nvidia hardware and then claiming that doesn't affect the validity? I call bunk.
In other words, I would be extremely hesitant to make blanket statements about what DX12 will and won't do for various GPU architectures based on a single game from a developer that is actively receiving help from only one of the GPU vendors. If we were looking at a game with an Nvidia TWIMTBP logo, and Nvidia was doing great while AMD struggled, I'd be saying the exact same thing. Pointing at high-level hardware descriptions and theoretical GFLOPS to justify the current performance numbers is silly, because the current numbers are already skewed.
Why is AMD performing better on a game that carries an AMD logo and isn't even in public beta yet? (And remember that the beta stage is when a lot of optimization takes place!) Because of course it is; if it were anything else, we would be really dismayed.
Why isn't Oxide actively monitoring the performance of their shaders on all GPUs? Why did Nvidia have to do that work? Oxide is the developer, and they should be held largely accountable for the game's performance.
As for AMD's optimized shader code, the only requirement is that it not perform worse on Nvidia hardware than Oxide's original shaders did. But it sounds like the optimization work Oxide has done without AMD's help may not be all that thorough to begin with. And parts of the engine can and will change, up to and beyond the point when the game ships.
More than anything, it feels like this was Oxide yelling "FIRST!!11!!" and posting a "real-world DX12 gaming benchmark". But like any gaming benchmark, the only thing it truly shows is how fast this particular game -- at this particular point in time -- runs on current hardware and drivers.
Ashes is looking more interesting as a way to figure out what the recommended minimum CPU should be than as a way of evaluating AMD and Nvidia GPUs against each other. Hell, the benchmark instructions even recommend testing on the AMD R9 Fury X, 390, 380, and 370... but on the Nvidia side, only the GTX 980 Ti. They already know their current code is so badly optimized on Nvidia hardware that they only want the press looking at the fastest Nvidia GPUs.