Probability over the next 4+ years that DX12 Feature Level 12_1 won't deliver tangible benefits?

Microsoft, ATI and Nvidia were apparently already planning something like DX12 back in 2005. ATI and Nvidia wanted lower-level access and a properly multithreaded API. Lord knows why it took so long; points finger at M$.

I don't believe anything Microsoft says on the matter. I'm not saying it isn't true, but MS has had every incentive to deliberately gimp PCs over the past decade and more. They have been developing low-level APIs all that time; those APIs found their way to their consoles but never to PC.

Console users pay MS a recurring fee just to play games on their machines.
MS tried to bring an Xbox Live equivalent to PC and failed in spectacular fashion; no PC gamer will pay for the privilege of playing games on their own machine.
That's one of PC gaming's advantages. Another is graphical quality and performance, and this at least is something MS can control, since they have the API monopoly. I think the reason we haven't seen any of Microsoft's truly modern low-level APIs on PC over the last 10+ years is deliberate.

Now they have no choice.
 
You have to wonder the reason why AMD/NVIDIA are not talking about DX12 readiness/performance more...

Possible reasons might be
  • By agreement with M$, e.g. so as not to dilute the DX12 compatibility/benefits message
  • Neither party has a clear home-run DX12 advantage, and they don't want to hurt current sales.
 
You have every game under Mantle as an example, Star Swarm/Ashes of the Singularity, you have 3DMark, you have Nvidia's Agni's Philosophy and the one with the mech, etc.
 

There simply aren't many games out there at an advanced enough stage for comparisons to be particularly useful. The only real exception is Star Swarm; you can see some benchmarks here:

http://www.anandtech.com/show/8962/the-directx-12-performance-preview-amd-nvidia-star-swarm/4


but even that is an artificial/synthetic benchmark designed to highlight how much better draw-call performance is with DX12. Real games likely won't be that extreme except under some circumstances.
 

You couldn't be more wrong about Star Swarm. It is an engine running AI and effects with thousands of units on screen, no different from how a game would work. You can't even use it as a repeatable benchmark, since it is dynamic.

Yes, it highlights the better draw-call performance, but it is still an engine demonstration.

Also, there is a game coming out which will do exactly what Star Swarm is doing: 'Ashes of the Singularity'. Look it up; it uses the same Oxide engine that Star Swarm uses.
 

I never said it doesn't run AI or isn't a complete game engine, but the benchmark is designed to stress draw-call performance and show off what DX12 can do. Most games simply won't be like that: some will, some will be barely any different from today's games, and many will be somewhere in between, with other bottlenecks limiting performance.
 

You would be surprised at how many draw calls devs would use if they were not as restricted as they currently are, especially when it comes to implementing lighting. DX11 is severely limited in the number of real light sources it can draw (around 6 to 10, depending on how well they implement it), whereas DX12 can draw thousands of real light sources with ease.

Many games over the past decade have pushed far higher draw-call counts than DX11 can handle, but only in their console versions. On PC they have to use every trick in the book to limit calls. It is not a matter of being efficient; they are forced to work within the limitations of the API to reduce overhead. It is more an issue of the API being bloated than a problem with the games themselves.

Take GTA 4 as an example. It is not that the game was poorly optimised; it was hitting a massive wall with DirectX 9's draw-call limit. It could have run better on DX10.

Many game engines written for consoles are well optimised, considering the CPU performance available there. It is just that when porting to PC the games suddenly come up against this severe API wall.
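To make the "API wall" concrete, here is a toy back-of-the-envelope model of the CPU-side draw-call budget per frame. The per-call overhead figures are illustrative assumptions for the sake of the arithmetic, not measured numbers for any real driver or API:

```python
# Toy model: how many draw calls fit in one frame's CPU budget.
# The per-call overhead values below are hypothetical, chosen only
# to illustrate why lower API overhead multiplies the call budget.

def max_draw_calls(frame_ms: float, per_call_us: float) -> int:
    """Number of draw calls that fit in one frame, given a fixed
    CPU cost per call (in microseconds)."""
    return int((frame_ms * 1000) / per_call_us)

FRAME_MS = 1000 / 60  # ~16.7 ms per frame at 60 fps

# Hypothetical per-call driver overhead:
dx11_style = max_draw_calls(FRAME_MS, per_call_us=25)  # high-overhead path
dx12_style = max_draw_calls(FRAME_MS, per_call_us=2)   # low-overhead path

print(dx11_style)  # 666
print(dx12_style)  # 8333
```

Under these made-up numbers, cutting per-call overhead by roughly an order of magnitude raises the budget by the same factor, which is the whole pitch of the low-level APIs discussed above.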
 
With more draw calls and transparency come more textures and polygons. Do the GPUs have enough power to make use of them?

More draw calls do not necessarily mean more polygons or effects.

At the moment developers have to do something called batching: they make a list of commands and condense them into a single draw call, so an object can be rendered in one call covering its model and all of its textures.

It is not a matter of efficiency; it is a necessity, required to work within the draw-call limit of the API.

Objects can have multiple parts, each requiring its own draw call, but those can be condensed into a single batch.

But batching has the limitation of reducing variability, whereas you could change things on the fly if everything used its own draw call.
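The batching idea above can be sketched as grouping draw commands that share some state (here, a texture) into one call. This is a simplified illustration only; the commands, mesh names, and textures are invented for the example and do not correspond to any real graphics API:

```python
from collections import defaultdict

# Each draw command is (mesh, texture). Naively, every command is
# its own draw call; batching condenses commands that share a
# texture into a single call. All names here are made up.

commands = [
    ("soldier_body",   "camo_tex"),
    ("soldier_helmet", "camo_tex"),
    ("tree_trunk",     "bark_tex"),
    ("tree_leaves",    "leaf_tex"),
    ("soldier_boots",  "camo_tex"),
]

naive_calls = len(commands)  # one draw call per command

batches = defaultdict(list)
for mesh, texture in commands:
    batches[texture].append(mesh)  # condense by shared texture

batched_calls = len(batches)  # one draw call per batch

print(naive_calls)    # 5
print(batched_calls)  # 3
```

The trade-off mentioned above also falls out of this sketch: once the three camo meshes live in one batch, they can no longer be toggled or varied individually between calls.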
 
It's what MS and graphics professionals generally call it, not just AMD; "async compute" is not an AMD trademark or marketing name (Hyper-Q is a trademark and marketing term). I've seen a couple of feature tables erroneously claiming that Maxwell v2 supports async compute, but if Maxwell v2 supported it, then so would Kepler and Maxwell v1 (both have Hyper-Q). There's no support except emulation, and Hyper-Q is emulation.

Hyper-Q pre-Maxwell v2 does not work in mixed graphics and compute mode. Maxwell v2 does support mixed mode, so you are completely wrong on this.
 