
DirectX 12 destroys competing APIs with up to 1600% improvement: AMD/Nvidia.

I'm super excited for DX12's minimums as well. All review sites should post minimum FPS; it's so critical to a good experience. In one of the benchmark threads AMDMatt's Fury X setup is trading blows with the Titan X (multi-GPU configurations), but the Furys have minimums in the 10s while the Titan Xs have minimums in the 50s and 60s. Not that I'm trying to spark an argument, but minimums tell a big story about what your experience is going to be like.

As well as that, I'm looking forward to properly utilised multi-GPU support in games.

The draw call improvement should, I believe, allow for much better draw/view distance without such crippling performance hits too, using Arma 3 as an example. That will be lovely.
 

Titan Xs are awesome when it comes to minimums and absolutely walk it in the hardest next-gen games at max settings at 2160p. This has nothing to do with DX12 or Mantle; it's solely down to the fact that the cards have 12GB of VRAM.

The downside is that you need several of them to pull it off; if you want a single card you should look at a GTX 980 Ti or Fury X.
 
Star Swarm was a good indicator of actual performance differences in games. The Mantle version was a lot faster than DX11, so I expect DX12 to give a similar boost with Ashes of the Singularity.
 

Yeah, it was nearly as fast as DX11 on equivalent Nvidia cards.

Hopefully by the end of 2015 there will be a few DX12 games out to compare properly.
 
While that is true, and it was very difficult for developers to optimize through the stack of API abstraction, Windows management, driver voodoo etc., DX12 won't remove the need for driver optimizations. One of the major issues is that there are different GPU vendors with different architectures, and each vendor has several different chips. A game developer cannot optimize for all the different GPUs. Moreover, even if they wanted to, developers don't know the details of the hardware, and then there is the issue of new hardware released after the game launches; e.g. a recently released game wouldn't have had optimization support for the Fury cards.

So AMD/Nvidia will still have to put a lot of effort into optimizing games, but game developers can do some optimizations themselves, with more predictable results, without driver voodoo and API abstraction interfering to such a degree. Of course this takes more work from developers, so again we will see smaller indie devs use middleware like GameWorks to reduce development costs.

Yeah, of course game devs won't optimise for every GPU out there. But things should be at a level where they can write general shader code that works OK on all cards, and with support/sponsorship they might write alternate shader paths more optimised for GCN or Nvidia cards.

Most driver optimisation won't be on a per-game basis though, since the driver is dumb now. Most of it will be to improve the standard API calls and optimise the translation algorithms.

With the game engine handling GPU state and memory, I don't think they can use their current driver tricks to replace entire stacks of shader code for a specific game, since the engine now prepares and then sends everything directly to the GPU. That is the whole point of the lower abstraction.
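To make the "engine packs everything and sends it to the GPU" point concrete, here's a rough sketch of the DX12 submission path (C++, heavily trimmed; the allocator, queue, pipeline state, root signature, back buffer and RTV handle are all assumed to have been created during setup, and the function name is just for the example). The application itself records state, resource transitions and draws into a command list and hands it to the queue, rather than the driver deciding any of that behind the scenes.

Code:
#include <d3d12.h>

// Hypothetical per-frame helper: the engine, not the driver, records GPU
// state, resource transitions and draw calls, then submits them directly.
void RecordFrame(ID3D12GraphicsCommandList* cmdList,
                 ID3D12CommandAllocator* allocator,
                 ID3D12CommandQueue* queue,
                 ID3D12PipelineState* pso,
                 ID3D12RootSignature* rootSig,
                 ID3D12Resource* backBuffer,
                 D3D12_CPU_DESCRIPTOR_HANDLE rtv)
{
    allocator->Reset();
    cmdList->Reset(allocator, pso);              // GPU state chosen by the engine
    cmdList->SetGraphicsRootSignature(rootSig);

    // The app owns resource state now: transition the back buffer into a
    // render-target state (work the DX11 driver used to do behind your back).
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource = backBuffer;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_PRESENT;
    barrier.Transition.StateAfter = D3D12_RESOURCE_STATE_RENDER_TARGET;
    cmdList->ResourceBarrier(1, &barrier);

    cmdList->OMSetRenderTargets(1, &rtv, FALSE, nullptr);
    cmdList->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    cmdList->DrawInstanced(3, 1, 0, 0);          // one pre-packed draw call

    // Transition back for presentation, close the list and submit it.
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter = D3D12_RESOURCE_STATE_PRESENT;
    cmdList->ResourceBarrier(1, &barrier);
    cmdList->Close();

    ID3D12CommandList* lists[] = { cmdList };
    queue->ExecuteCommandLists(1, lists);        // straight onto the GPU queue
}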
 


What you say applies equally to DX11, DX10, DX9, OGL 4, OGL 3 etc. In an ideal world a developer would write a single DX11 shader that runs equally well on all the different GPUs and the drivers would only optimize the API and draw calls. In the real world it doesn't work that way, because different GPUs have different strengths and weaknesses, so it always makes sense to have GPU-specific optimizations. Moreover, as the no-free-lunch theorem suggests, domain-specific knowledge will always yield improved results. So it may be that one GPU vendor doesn't do game-specific optimizations, but if the other vendor does then they will get much better performance.
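As a rough illustration of how a developer could pick one of those GPU-specific paths themselves rather than waiting on the driver (the ShaderPath names are made up for this example; the vendor IDs are the standard PCI IDs for AMD and Nvidia), the adapter's vendor ID reported by DXGI is enough to branch on:

Code:
#include <dxgi.h>   // link against dxgi.lib

// Illustrative only: select a shader path based on the primary adapter's
// PCI vendor ID. The generic path remains the fallback for anything else.
enum class ShaderPath { Generic, GcnOptimised, NvidiaOptimised };

ShaderPath ChooseShaderPath()
{
    ShaderPath path = ShaderPath::Generic;        // safe default for any card
    IDXGIFactory1* factory = nullptr;
    IDXGIAdapter1* adapter = nullptr;

    if (SUCCEEDED(CreateDXGIFactory1(__uuidof(IDXGIFactory1),
                                     reinterpret_cast<void**>(&factory))) &&
        factory->EnumAdapters1(0, &adapter) != DXGI_ERROR_NOT_FOUND)
    {
        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);

        if (desc.VendorId == 0x1002)      path = ShaderPath::GcnOptimised;     // AMD
        else if (desc.VendorId == 0x10DE) path = ShaderPath::NvidiaOptimised;  // Nvidia

        adapter->Release();
    }
    if (factory) factory->Release();
    return path;
}

In practice that decision would normally live in the engine's shader/material system rather than a one-off check, but the point stands: the game, not the driver, gets to choose the path.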

DX12 does allow developers much more control over resources, but the draw calls still go through the driver and will still get optimized. DX12 gives developers with the ability and resources the chance to do better than the AMD/Nvidia driver, but at the same time it will make it even easier for poor or resource-constrained developers to make mistakes.


It is like programming in straight C versus a managed language like C#. With great power comes great responsibility: managing your own memory can improve performance, but it can lead to lots of other issues. Nvidia and AMD will both be very busy with drivers in the future, but the top developers and game engines will make a much better job of things than they can currently do with DX11.
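To put the "managing your own memory" analogy in D3D12 terms, here is the usual fence idiom, trimmed down to a sketch (the device, queue and uploadBuffer are assumed to exist already). Nothing stops you releasing a buffer the GPU is still reading, so the app has to signal a fence on the queue and wait for the GPU to pass it, where the DX11 runtime would have kept the resource alive for you.

Code:
#include <windows.h>
#include <d3d12.h>

// Sketch of explicit lifetime management: wait for the GPU to reach a fence
// before releasing a resource it may still be reading. In DX11 the
// runtime/driver did this reference tracking for you.
void WaitThenRelease(ID3D12Device* device, ID3D12CommandQueue* queue,
                     ID3D12Resource*& uploadBuffer)
{
    ID3D12Fence* fence = nullptr;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, __uuidof(ID3D12Fence),
                        reinterpret_cast<void**>(&fence));

    const UINT64 fenceValue = 1;
    queue->Signal(fence, fenceValue);             // GPU writes 1 when it gets here

    if (fence->GetCompletedValue() < fenceValue)  // GPU hasn't reached it yet: block
    {
        HANDLE evt = CreateEvent(nullptr, FALSE, FALSE, nullptr);
        fence->SetEventOnCompletion(fenceValue, evt);
        WaitForSingleObject(evt, INFINITE);
        CloseHandle(evt);
    }

    uploadBuffer->Release();                      // now safe: the GPU is done with it
    uploadBuffer = nullptr;
    fence->Release();
}

Skip the wait, or fence the wrong value, and you get exactly the class of bug the driver used to shield you from, which is the "easier to make mistakes" part.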
 
AMD appear to be clutching at straws again: cherry-picked numbers and the standard smoke-and-mirrors approach. The 'FPS per inch' crap they came out with on the Nano is just bad.
 
What you say applies equally to DX11, DX10, DX9, OGL 4, OGL 3 etc. In an ideal world a developer would write a single DX11 shader that runs equally well on all the different GPUs and the drivers would only optimize the API and draw calls.

The main reason this does not work, as mentioned before, is the abstraction and the lack of communication between the driver and the game engine. But with the engine now controlling everything, all the optimisation has to be done in the game's rendering engine, so a best-case set of shader code can be used and analysed by the game devs.

The driver is just dumb: the rendering engine now compiles and packs the shaders and GPU state into draw calls. Before, with more abstracted APIs, a stream of shader code was sent to the driver and the driver did that packing, so shaders could easily be replaced in the driver for a particular game. That will be harder, or impossible, with low-abstraction APIs, since the rendering engine has already organised and packed the draw calls and GPU state information.
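A trimmed sketch of the "already organised and packed" point (the compiled bytecode blobs and root signature are assumed to exist; several PSO fields are left at defaults): in D3D12 the shaders are baked into an immutable pipeline state object when the engine creates it, so there is no per-game shader stream handed to the driver the way there was with DX11.

Code:
#include <d3d12.h>

// Sketch: the engine bakes compiled shader bytecode and most GPU state into
// one immutable pipeline state object up front; command lists then bind the
// whole packaged state at once.
ID3D12PipelineState* BuildPipeline(ID3D12Device* device,
                                   ID3D12RootSignature* rootSig,
                                   const void* vsBytecode, SIZE_T vsSize,
                                   const void* psBytecode, SIZE_T psSize)
{
    D3D12_GRAPHICS_PIPELINE_STATE_DESC desc = {};
    desc.pRootSignature = rootSig;
    desc.VS = { vsBytecode, vsSize };   // shader bytecode is part of the PSO,
    desc.PS = { psBytecode, psSize };   // fixed when the engine creates it
    desc.RasterizerState.FillMode = D3D12_FILL_MODE_SOLID;
    desc.RasterizerState.CullMode = D3D12_CULL_MODE_BACK;
    desc.BlendState.RenderTarget[0].RenderTargetWriteMask = D3D12_COLOR_WRITE_ENABLE_ALL;
    desc.SampleMask = 0xFFFFFFFF;
    desc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE;
    desc.NumRenderTargets = 1;
    desc.RTVFormats[0] = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;

    ID3D12PipelineState* pso = nullptr;
    device->CreateGraphicsPipelineState(&desc, __uuidof(ID3D12PipelineState),
                                        reinterpret_cast<void**>(&pso));
    return pso;
}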

This is also why developers can more easily debug performance issues with certain cards: they now have direct feedback and control, so they can then improve their shaders.

Of course there will be optimisation in drivers, but it will be to improve the API calls between the driver and GPU for a specific architecture; the optimisations won't revolve around game-specific things. The past updates to the Mantle driver for BF4 were to do with feature additions or bug fixes for the API itself, nothing to do with shader code etc., which is how it should be.
 
I agree that minimums are important, but the length and frequency of the minimums need to be spelt out as well. Assuming a game hits the same highs, a game that has a 9 fps low that only occurs once, during the fade-in at the start of a level, will be a whole lot better than a game that has, say, a 40 fps minimum but hits it every few seconds.
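That distinction is easy to pull out of a frame-time log. A rough sketch (plain C++; frame times in milliseconds, and the 40 fps threshold is just the example figure from the post): report the single worst frame, the 99th-percentile "1% low" frame time, and how many separate dips below the threshold occurred and how long they added up to.

Code:
#include <algorithm>
#include <cstdio>
#include <vector>

// Summarise a frame-time log (ms per frame) so a one-off worst frame can be
// told apart from frequent, sustained dips below a chosen FPS.
void SummariseFrameTimes(const std::vector<double>& frameMs,
                         double lowFpsThreshold = 40.0)
{
    if (frameMs.empty()) return;

    std::vector<double> sorted = frameMs;
    std::sort(sorted.begin(), sorted.end());

    double worstMs = sorted.back();                       // single worst frame
    size_t idx99   = std::min(sorted.size() - 1,
                              static_cast<size_t>(sorted.size() * 0.99));
    double p99Ms   = sorted[idx99];                       // "1% low" frame time

    // Count separate dips below the FPS threshold and their total duration.
    const double slowMs = 1000.0 / lowFpsThreshold;       // 40 fps -> frames over 25 ms
    int dips = 0;
    double dipTotalMs = 0.0;
    bool inDip = false;
    for (double ms : frameMs) {
        if (ms > slowMs) {
            if (!inDip) { ++dips; inDip = true; }
            dipTotalMs += ms;
        } else {
            inDip = false;
        }
    }

    std::printf("min FPS (worst frame): %.1f\n", 1000.0 / worstMs);
    std::printf("1%% low FPS:           %.1f\n", 1000.0 / p99Ms);
    std::printf("dips below %.0f FPS:   %d, totalling %.0f ms\n",
                lowFpsThreshold, dips, dipTotalMs);
}

Two runs with the same headline minimum can then look completely different once you see that one had a single 9 fps frame at a level fade-in while the other spent whole seconds below 40 fps.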
 