
AMD’s DirectX 12 Advantage Explained – GCN Architecture More Friendly To Parallelism Than Maxwell

Things like dense grass, foliage, etc. can really benefit from that kind of thing - that said, I have a slightly hardware-assisted software routine for handling it which is ridiculously fast without needing anything more advanced than DX7.

So if you have a scene where you shoot someone's glasses off his head, and they fly across the room between a wine bottle and a semi-transparent leaf of a plant while the sun shines on them through the window, then the NV cards will show their stuff? :)
 
Yeah and sparse volume textures and conservative rasterization...

I cannot see Microsoft missing the boat on all these goodies for their own console. I am not saying they are unwanted - more that the hardware had been built and shipped before these extras you are speaking of materialised. For the bulk of features, the 12.0-supporting cards will make use of them.

There are far more console gamers than PC gamers - game devs make games for where the money is, and it's not the PC market.
 
It may only be one benchmark, but it could be a sign of things to come. AMD owe Nvidia a slap in the face for all those DX11 stunts Nvidia pulled with games like Crysis 2, and the pointless tessellation it got developers to put in parts of the game that nobody would see (like under water), which crippled Radeon cards.

There were also similar cries when The Witcher 3 came out that the game had such high levels of tessellation that it crippled AMD cards, but when tessellation was forced at a driver level to run at a lower setting, it looked the same and performance was fixed. It's not just DX11, either: there was the incident in Batman: Arkham Asylum that effectively locked AMD users out of applying anti-aliasing, and the GameWorks implementation in Watch Dogs.
 
Does this benchmark look any different on Nvidia or AMD hardware, or does it run exactly the same?

If the answer is yes, then is there a point to be made about needless amounts of async compute being used? Isn't this just the same as the tessellation of Geralt's hair in The Witcher 3 - use more for no benefit, just to hurt the opposition?
 
Does this benchmark look any different on Nvidia or AMD hardware, or does it run exactly the same?

If the answer is yes, then is there a point to be made about needless amounts of async compute being used? Isn't this just the same as the tessellation of Geralt's hair in The Witcher 3 - use more for no benefit, just to hurt the opposition?
Async shaders aren't adding extra data the way tessellation does; it's just about running multiple shader programs in parallel.
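To make that distinction concrete, here is a loose CPU-side analogy (a hypothetical sketch, not actual GPU scheduling): two independent workloads can be submitted serially or concurrently, and because neither adds data or depends on the other, the output is identical - only the scheduling differs.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for independent GPU workloads: each "pass"
# only transforms its own input; neither depends on the other's output.
def graphics_pass(pixels):
    return [p * 2 for p in pixels]      # e.g. a geometry/shading pass

def compute_pass(samples):
    return [s + 1 for s in samples]     # e.g. a lighting/physics pass

pixels, samples = [1, 2, 3], [10, 20, 30]

# Serial submission: one queue, one job at a time.
serial = (graphics_pass(pixels), compute_pass(samples))

# "Async" submission: both jobs in flight at once on separate workers,
# loosely analogous to graphics and compute queues filling idle units.
with ThreadPoolExecutor(max_workers=2) as pool:
    f1 = pool.submit(graphics_pass, pixels)
    f2 = pool.submit(compute_pass, samples)
    parallel = (f1.result(), f2.result())

# No extra work was created, unlike piling on tessellation;
# the same results simply overlap in time.
assert serial == parallel
```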
 
Async works best with a lot of on-screen action - a large-scale RTS with 6,000 units on screen, for example.

Async in Ashes' case, and in many other cases, is used for lighting calculations. With DX12 you can produce many more real light sources and more accurate shading than with DX11, so running those calculations in parallel is a large boost to latency and performance when you have many light sources on screen.
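The reason many-light shading parallelises so well is that each light's contribution is independent of the others - the final result is just a sum of per-light terms. A minimal sketch (assumed names and toy values, simple Lambertian diffuse only):

```python
def diffuse(normal, light_dir, intensity):
    # Lambertian term: N . L, clamped to zero for back-facing light.
    ndotl = sum(n * l for n, l in zip(normal, light_dir))
    return max(ndotl, 0.0) * intensity

normal = (0.0, 1.0, 0.0)            # surface facing straight up
lights = [
    ((0.0, 1.0, 0.0), 1.0),        # directly overhead
    ((1.0, 0.0, 0.0), 0.5),        # grazing from the side: N.L = 0
    ((0.0, -1.0, 0.0), 0.8),       # from below: clamped to zero
]

# Total shading is a sum of independent per-light contributions,
# so the per-light work can be split across parallel queues freely.
total = sum(diffuse(normal, d, i) for d, i in lights)
print(total)  # 1.0 + 0.0 + 0.0 = 1.0
```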
 
Async shaders aren't adding extra data the way tessellation does; it's just about running multiple shader programs in parallel.

I think his point is: if Nvidia can render the same features at the same fidelity in serial without a performance penalty, then forcing async compute, which apparently doesn't run as well on Nvidia hardware, is needlessly handicapping it.
 