Mmmm, what's that standard DX12 benchmark? Ah yes, TimeSpy, which cards dominate? 'Special optimisation', more like Panos clutching at straws again...
TimeSpy has been known to be rigged since 2016; your ignorance of the facts doesn't mean I am clutching at straws.
To boot, it is coded to strictly favor Pascal's hack-job async implementation, namely compute preemption, as per Nvidia's DX12 "do's" and "don'ts" list pushed down the throats of engine makers and developers. And we see, when proper DX12/Vulkan async compute games appear, how badly Nvidia cards perform and how much better AMD cards fare.
Preemption means TimeSpy is coded the same way the DX11 Firestrike is: running a single path. Otherwise it would have been outright impossible to run on Kepler & Maxwell GPUs, since neither supports DX12 parallelism the way AMD has since Hawaii (290/290X) all the way back in 2013, and Pascal's performance would have been abysmal because it doesn't support proper parallel async compute.
However, running preemption significantly hampers AMD GPUs' performance, completely castrating a far more advanced design that is in line with the DX12 specifications.
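To make the distinction concrete, here is a minimal sketch of what "DX12 parallelism" means at the API level. This is my own illustration, not code from TimeSpy or any engine; the function name and variables are hypothetical. The point is simply that an async-compute renderer creates a dedicated COMPUTE queue next to the DIRECT (graphics) queue, which hardware like GCN's ACEs can schedule concurrently, while a single-path engine never creates the second queue at all.

```cpp
// Hypothetical sketch: creating the two queues an async-compute DX12 renderer uses.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

HRESULT CreateQueues(ID3D12Device* device,
                     ComPtr<ID3D12CommandQueue>& graphicsQueue,
                     ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // DIRECT queue: accepts graphics, compute and copy commands.
    // A single-path ("preemption style") engine submits everything here.
    D3D12_COMMAND_QUEUE_DESC gfx = {};
    gfx.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    HRESULT hr = device->CreateCommandQueue(&gfx, IID_PPV_ARGS(&graphicsQueue));
    if (FAILED(hr)) return hr;

    // Separate COMPUTE queue: compute and copy only. Work submitted here
    // can run in parallel with the DIRECT queue on capable hardware.
    D3D12_COMMAND_QUEUE_DESC comp = {};
    comp.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    return device->CreateCommandQueue(&comp, IID_PPV_ARGS(&computeQueue));
}
```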
Similarly, we see with MS DXR how many corners Nvidia is cutting while trying to push their own version of ray tracing instead of the one Microsoft supports in its spec.
Here is the Pascal implementation of DX12 in 2016 against AMD Polaris/Fury. TimeSpy had to be written following those restrictions, otherwise it wouldn't have run at all on Pascal or other Nvidia cards. Superior architecture, you say?
Here is the execution of the different pipelines. Pre-emption is what TimeSpy runs on, no different from the Firestrike execution pipeline. Async shaders is per the DX12 spec, and is found in AOTS for example. And the last one is a botched, driver-level version Nvidia was trying to implement at some point in the future.
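And here is a rough sketch of how those two submission models look in code. Again this is my own hedged illustration with hypothetical names, not anything pulled from 3DMark's source: in the single-path model everything is pushed through the one graphics queue and runs serially, while in the async-shaders model compute work goes to its own queue and a GPU-side fence wait is used only where graphics actually depends on the compute results, so the two workloads can overlap.

```cpp
// Hypothetical sketch of the two submission models; all names are placeholders.
#include <d3d12.h>

// Pre-emption / single-path style: graphics and compute command lists
// both go through the DIRECT queue, so the GPU executes them serially.
void SubmitSinglePath(ID3D12CommandQueue* graphicsQueue,
                      ID3D12CommandList* const* lists, UINT count)
{
    graphicsQueue->ExecuteCommandLists(count, lists);
}

// Async-shaders style (the DX12-spec path): compute runs on its own queue,
// and a fence makes the graphics queue wait on the GPU timeline only at the
// point where it consumes the compute results. The CPU is never blocked and
// both queues can stay busy on hardware with parallel schedulers (GCN ACEs).
void SubmitAsyncCompute(ID3D12CommandQueue* graphicsQueue,
                        ID3D12CommandQueue* computeQueue,
                        ID3D12CommandList* computeList,
                        ID3D12CommandList* graphicsList,
                        ID3D12Fence* fence, UINT64& fenceValue)
{
    // Kick off compute independently of graphics.
    ID3D12CommandList* cl[] = { computeList };
    computeQueue->ExecuteCommandLists(1, cl);
    computeQueue->Signal(fence, ++fenceValue);   // mark compute completion

    // GPU-side wait: graphics stalls only here, not for the whole frame.
    graphicsQueue->Wait(fence, fenceValue);
    ID3D12CommandList* gl[] = { graphicsList };
    graphicsQueue->ExecuteCommandLists(1, gl);
}
```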
Also, you forgot something as recent as World War Z (not even 2 months ago) on Vulkan Ultra: even the V56 beats the RTX2080 (with performance similar to the RTX2080Ti), while the V64 & RVII completely trash the RTX2080Ti by a big margin at 1080p and 2560x1440, and it only barely manages to pull 4 FPS more at 4K. Maybe Strange Brigade is another good example? How about Doom?
How about activating DX12 on Warhammer 2? My GTX1080 saw a 30% FPS drop in DX12 (even Kaapstad's Pascal Titan did), while the FuryX only lost 1 FPS when we ran the benchmark comparison back then. And the list goes on and on. See how AMD cards perform in Battlefield V under DX12. Or will you say "the game is AMD optimized" here as well, when it is a heavily Nvidia-sponsored game?
And more Vulkan games are coming out, and faster now, because of Macs.