
Ashes of the Singularity Coming, with DX12 Benchmark in thread.

Not quite sure if it was posted before, but here we go:
Wow, lots more posts here; there are just too many things to respond to, so I'll try to answer what I can.

/inconvenient things I'm required to ask or they won't let me post anymore
Regarding screenshots and other info from our game, we appreciate your support but please refrain from disclosing these until after we hit early access. It won't be long now.
/end

Regarding batches, we use the term "batches" because we are counting both draw calls and dispatch calls. Dispatch calls run compute shaders, draw calls run normal graphics shaders. Though people sometimes call dispatches draw calls, they are different, so we thought we'd avoid the confusion by not calling everything a draw call.
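
To make the distinction concrete, here's a minimal, hypothetical C++ sketch (not Oxide's actual engine code) of a submission wrapper that counts graphics draws and compute dispatches together as batches; the BatchCounter type and its method names are invented for illustration.

Code:
#include <d3d12.h>
#include <cstdint>

// Hypothetical wrapper: every command that does GPU work, whether a
// graphics draw or a compute dispatch, is counted as one "batch".
struct BatchCounter {
    uint64_t drawCalls = 0;      // graphics shader work
    uint64_t dispatchCalls = 0;  // compute shader work

    uint64_t Batches() const { return drawCalls + dispatchCalls; }

    void DrawIndexed(ID3D12GraphicsCommandList* cl,
                     UINT indexCount, UINT instanceCount)
    {
        cl->DrawIndexedInstanced(indexCount, instanceCount, 0, 0, 0);
        ++drawCalls;
    }

    void Dispatch(ID3D12GraphicsCommandList* cl,
                  UINT groupsX, UINT groupsY, UINT groupsZ)
    {
        cl->Dispatch(groupsX, groupsY, groupsZ);
        ++dispatchCalls;
    }
};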

Regarding CPU load balancing on D3D12, that's entirely the application's responsibility. So if you see a case where it's not load balancing, it's probably the application, not the driver/API. We've done some additional tuning of the engine even in the last month and can clearly see use cases where we can load 8 cores at maybe 90-95%. Getting to 90% on an 8-core machine makes us really happy. Keeping our application tuned to scale like this is definitely an ongoing effort.
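
As a rough illustration of what "the application's responsibility" means in practice, the hypothetical sketch below records one D3D12 command list per worker thread and then submits them all in a single ExecuteCommandLists call; the recordChunk callback and the way the frame is partitioned are assumptions for the example, not how Oxide's engine actually does it.

Code:
#include <d3d12.h>
#include <thread>
#include <vector>

// Hypothetical per-thread recording: D3D12 lets each thread fill its own
// command list, so spreading the recording work is up to the application.
void SubmitFrame(ID3D12CommandQueue* queue,
                 const std::vector<ID3D12GraphicsCommandList*>& threadLists,
                 void (*recordChunk)(ID3D12GraphicsCommandList*, size_t))
{
    std::vector<std::thread> workers;
    for (size_t i = 0; i < threadLists.size(); ++i) {
        // Each worker records its slice of the frame into its own list.
        workers.emplace_back([&, i] { recordChunk(threadLists[i], i); });
    }
    for (auto& w : workers) w.join();

    // Close every list, then hand the whole frame to the GPU at once.
    std::vector<ID3D12CommandList*> submit;
    for (auto* cl : threadLists) {
        cl->Close();
        submit.push_back(cl);
    }
    queue->ExecuteCommandLists(static_cast<UINT>(submit.size()), submit.data());
}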

Additionally, hitches and stalls are largely the application's responsibility under D3D12. In D3D12, essentially everything that could cause a stall has been removed from the API. For example, the pipeline objects are designed such that the dreaded shader recompiles won't ever have to happen. We also have precise control over how long a graphics command is queued up. This is pretty important for VR applications.
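
A hedged sketch of the idea behind D3D12 pipeline state objects: every shader/state combination is compiled into an ID3D12PipelineState up front (for example during loading) and only looked up at draw time, so nothing has to be recompiled mid-frame. The cache class and its 64-bit key below are hypothetical.

Code:
#include <d3d12.h>
#include <wrl/client.h>
#include <unordered_map>
#include <cstdint>

using Microsoft::WRL::ComPtr;

// Hypothetical PSO cache: every state combination the game can use is
// created during loading, so no compilation can stall a frame later.
class PsoCache {
public:
    // Called at load time for each known state combination.
    void Add(uint64_t key, ID3D12Device* device,
             const D3D12_GRAPHICS_PIPELINE_STATE_DESC& desc)
    {
        ComPtr<ID3D12PipelineState> pso;
        if (SUCCEEDED(device->CreateGraphicsPipelineState(&desc, IID_PPV_ARGS(&pso))))
            cache_[key] = pso;
    }

    // Called at draw time: a lookup only, never a compile.
    ID3D12PipelineState* Get(uint64_t key) const
    {
        auto it = cache_.find(key);
        return it != cache_.end() ? it->second.Get() : nullptr;
    }

private:
    std::unordered_map<uint64_t, ComPtr<ID3D12PipelineState>> cache_;
};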

Also keep in mind that the memory model for D3D12 is completely different from D3D11's, at an OS level. I'm not sure you can honestly compare things like memory load against each other. In D3D12 we have more control over residency, and we may, for example, intentionally keep something unused resident so that there is no chance of a micro-stutter if that resource is needed. There is no reliable way to do this in D3D11. Thus, comparing memory residency between the two APIs may not be meaningful, at least not until everyone's had a chance to really tune things for the new paradigm.
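
For reference, a minimal sketch of the kind of explicit residency control being described, using ID3D12Device::MakeResident and Evict; which heaps to keep resident is entirely an application policy, and the helper names here are made up.

Code:
#include <d3d12.h>
#include <vector>

// Hypothetical policy: keep a set of currently unused heaps resident so
// there is no paging hitch if they are suddenly needed again.
void KeepWarm(ID3D12Device* device, const std::vector<ID3D12Pageable*>& heaps)
{
    if (!heaps.empty())
        device->MakeResident(static_cast<UINT>(heaps.size()), heaps.data());
}

// When memory pressure forces it, explicitly give the memory back.
void Cooldown(ID3D12Device* device, const std::vector<ID3D12Pageable*>& heaps)
{
    if (!heaps.empty())
        device->Evict(static_cast<UINT>(heaps.size()), heaps.data());
}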

Regarding SLI and CrossFire situations, yes, support is coming. However, those options in the ini file probably do not do what you think they do, just FYI. Some posters here have been remarkably perceptive about the different multi-GPU modes that are coming, and let me just say that we are looking beyond just the standard CrossFire and SLI configurations of today. We think that multi-GPU setups are an area where D3D12 will really shine (once we get all the kinks ironed out, of course). I can't promise when this support will be unveiled, but we are committed to doing it right.
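
For context, D3D12 also exposes an explicit multi-adapter model in which the application enumerates GPUs itself instead of relying on a driver-managed SLI/CrossFire link. The sketch below is a generic illustration of that model, not a description of what Oxide is building.

Code:
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

// Generic explicit multi-adapter setup: enumerate every adapter and create
// a D3D12 device for each one the runtime accepts.
std::vector<ComPtr<ID3D12Device>> CreateAllDevices()
{
    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return devices;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
    }
    return devices;
}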

Regarding async compute, a couple of points on this. First, though we are the first D3D12 title, I wouldn't hold us up as the prime example of this feature; there are probably better demonstrations of it. This is a pretty complex topic, and fully understanding it requires significant knowledge of the particular GPU in question that only an IHV can provide.
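
For anyone unfamiliar with the term: async compute in D3D12 boils down to the application creating an additional compute-type command queue alongside the graphics queue; whether the work actually overlaps on the GPU depends on the hardware and driver. A minimal, hypothetical setup sketch:

Code:
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Hypothetical async compute setup: a second, compute-only queue that the
// GPU may service concurrently with the graphics queue (how much real
// overlap you get depends on the specific GPU and driver).
ComPtr<ID3D12CommandQueue> CreateComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;        // compute-only queue
    desc.Priority = D3D12_COMMAND_QUEUE_PRIORITY_NORMAL;

    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;
}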

We actually just chatted with Nvidia about async compute; indeed, the driver hasn't fully implemented it yet, even though it appeared as if it had. We are working closely with them as they fully implement async compute. We'll keep everyone posted as we learn more.

Also, we are pleased that D3D12 support in Ashes should be functional on Intel hardware relatively soon (actually, it's functional now; it's just a matter of getting the right driver out to the public).

Thanks!

As I wrote before, this is a splash in the ocean at this moment.

P.S.: Fermi WDDM 2.0 driver coming soon.
 

Might be an idea to post the link to where it's from :p

http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/2130#post_24379702

Edit:

Side note: I bet this is the reason for the ARK DX12 patch delay.
 
Well, I hope they get it working on Maxwell... I've enjoyed my switch to my 980 Ti a lot, especially adaptive sync on my old 120Hz BenQ. It smashes DX11, but I did buy it with a pretty big eye on DX12 as well. Even if Nvidia don't get the same gains as AMD with async, it'd be nice to get something!
 
Actually, a good point was brought up in that thread, and one that I had forgotten. One of the major changes from Fermi to Kepler (and indeed Maxwell) was the move from hardware-based scheduling to software-based scheduling to improve performance per watt:

http://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/3

AMD went the other way, which meant increased power consumption, but Nvidia probably has to do more software fine-tuning as a result.

So again, it looks like a different way of doing things.

It might be interesting to see how Mirror's Edge performs, as that is confirmed to be an Nvidia title and uses async shaders.
 
Sorry, it's late at night. Believe it or not, I can also make mistakes. I even bookmarked it.

For the side note: I wouldn't judge any vendor on this DX12 matter at all for the next six months. It will cause more harm than it will provide informative answers.

To be fair, DX12 doesn't really interest me per se.

Being mainly a Linux user, it's Vulkan I'm interested in, if they ever finalize the spec, that is.

As both APIs do async shading, it looks like it will affect both of them, plus the platforms, as regards working out the ins and outs of it all.
 
https://developer.nvidia.com/sites/...works/vr/GameWorks_VR_2015_Final_handouts.pdf

All our GPUs for the last several years do context switches at draw call boundaries. So when the GPU wants to switch contexts, it has to wait for the current draw call to finish first.

So, even with timewarp being on a high-priority context, it's possible for it to get stuck behind a long-running draw call on a normal context. For instance, if your game submits a single draw call that happens to take 5 ms, then async timewarp might get stuck behind it, potentially causing it to miss vsync and cause a visible hitch.
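
Assuming the work can be split at all, one possible mitigation (sketched below with an arbitrary chunk size) is to break a very long draw into several smaller ones, so the GPU reaches a draw-call boundary, and therefore a possible context-switch point, more often.

Code:
#include <d3d12.h>
#include <algorithm>

// Hypothetical mitigation: instead of one draw covering a huge instance
// count, issue it in chunks so the GPU hits a draw-call boundary (a
// possible context-switch point for async timewarp) more frequently.
void DrawInChunks(ID3D12GraphicsCommandList* cl,
                  UINT indexCountPerInstance, UINT totalInstances,
                  UINT chunkSize = 256)
{
    for (UINT first = 0; first < totalInstances; first += chunkSize) {
        UINT count = std::min(chunkSize, totalInstances - first);
        cl->DrawIndexedInstanced(indexCountPerInstance, count, 0, 0, first);
    }
}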
 
It might be interesting to see how Mirror's Edge performs, as that is confirmed to be an Nvidia title and uses async shaders.

Where is it confirmed that it is an Nvidia title?
All I remember is Nvidia posting "me too" videos about certain DX12 titles, which is in no way, shape or form a confirmation of any agreements.
 
Where is it confirmed that it is an Nvidia title?
All I remember is Nvidia posting "me too" videos about certain DX12 titles, which is in no way, shape or form a confirmation of any agreements.

It is confirmed somewhere, but I'm on my phone so I'm not able to link it.
 

This is in no way indicative that ME is a GameWorks title:

http://www.geforce.co.uk/whats-new/...w-generation-of-blockbusters-at-gamescom-2015

Here that guy is saying that they're going to show a couple of exciting games, and later in the video he doesn't actually present Catalyst as a GameWorks title, while he explicitly mentions Siege being a GameWorks title, same as The Witcher.
So nothing concrete. In my opinion, Nvidia is just playing with words around Catalyst to stay relevant across all upcoming games.
Catalyst runs on the Frostbite engine, and that engine (at least the latest incarnation) was to some extent developed with AMD alongside.
If Nvidia is involved in that title, I believe, and most of all hope, it will be the same as GTA V: some bits for Nvidia users, other bits for AMD users.
 
Nvidia sure are getting behind some exciting games. Great times, and I can't wait. MGS to play when I get home, and I'm looking forward to it :)

I hope ME does have GameWorks.
 