Ashes of the Singularity Coming, with DX12 Benchmark in thread.

Do we have any idea how the AMD 8-core FX chips are faring in this game?

They get a nice boost in DirectX 12 over 11 on both sides. But since this is an RTS it is very CPU heavy to begin with, so the cards can't be pushed as hard as they could be. It's nothing to do with DX12 not being fully utilised; the game is simply very heavy on CPU calculations and is being CPU bottlenecked by the engine more than by the drivers.
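To picture what "CPU bottlenecked" means here, a toy model in plain Python (all numbers invented, nothing from the actual benchmark): once the CPU side of the frame dominates, a faster GPU stops raising the frame rate.

```python
# Toy model of a CPU-bound frame: CPU simulation/submission and GPU
# rendering overlap, so the frame can't finish faster than the slowest
# stage. A faster GPU doesn't help if the CPU is the slow side.
def frame_time_ms(cpu_ms, gpu_ms):
    return max(cpu_ms, gpu_ms)

def fps(cpu_ms, gpu_ms):
    return 1000.0 / frame_time_ms(cpu_ms, gpu_ms)

# With 25 ms of CPU work per frame, halving GPU time changes nothing:
print(fps(25.0, 20.0))  # 40.0 fps
print(fps(25.0, 10.0))  # still 40.0 fps, CPU bottlenecked
```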

PCper results here
 
But wait, this can't be right lol

We are being told that AMD cards perform better with DX12, but let's check out the GTX 980 Ti when we increase the draw calls.

Average - A win for the Fury X


High - A win for the GTX 980 Ti


Surely the GTX 980 Ti should fall further behind as the draw calls go up, or could this bench be fatally flawed?

The R9 390 is doing quite well. People who got the aftermarket R9 290 cards early on are not doing too badly for their money!
 
Well, one thing has come out of this, and that's that AMD's DX11 drivers are clearly not multithreaded like nVidia's. As such they place more burden on the CPU, hence lower frame rates at lower resolutions or on lower-end CPUs, where nVidia cards perform great.

Once you start increasing the resolution it tends to balance out.

Also, this game (Ashes) is heavily CPU dependent, like all RTS games, since pathfinding sucks up a huge amount of resources and increasing the number of units just makes it worse. Remember Supreme Commander? A killer of CPUs, that game: put 1000 units max per side with 7 AIs and, even with a third-party hack to balance out the threading, it would bring the latest and greatest CPUs to their knees. The less CPU overhead the better, and that's the idea of DX12/Vulkan. I don't see drivers making too much of a difference any more with a close-to-metal API; that's the idea behind these new APIs, to allow the game to pull the resources it needs with minimal communication through the CPU and driver.
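To see why unit counts hammer the CPU, here is a minimal grid BFS pathfinder (a stand-in sketch, not the actual Ashes or SupCom pathfinder): each unit that requests a path expands a large chunk of the map, and total work grows roughly linearly with the number of units pathing.

```python
from collections import deque

# Minimal BFS pathfinder on a grid; returns how many nodes it expanded,
# i.e. how much work one unit's path request cost.
def bfs_path_cost(grid, start, goal):
    w, h = len(grid[0]), len(grid)
    seen = {start}
    q = deque([start])
    expanded = 0
    while q:
        x, y = q.popleft()
        expanded += 1
        if (x, y) == goal:
            return expanded
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h and grid[ny][nx] == 0 \
                    and (nx, ny) not in seen:
                seen.add((nx, ny))
                q.append((nx, ny))
    return expanded  # no path found; all reachable nodes expanded

grid = [[0] * 50 for _ in range(50)]  # open 50x50 map
one_unit = bfs_path_cost(grid, (0, 0), (49, 49))
# Work scales roughly with the number of units asking for paths:
thousand_units = 1000 * one_unit
print(one_unit, thousand_units)
```

A real RTS uses smarter search and caching, but the scaling pressure is the same: more units, more path requests, more CPU time per simulation tick.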

If anything this game shows a best-case scenario; RPG, racing and FPS games will always be video-card bottlenecked long before the CPU, unlike RTS games.

I can't see nVidia being able to excel any more than they currently do, as their drivers are good and their DX11 drivers are well multithreaded, unlike AMD's. If anything DX12 has levelled the playing field, since AMD's single-threaded drivers will no longer be a performance limitation; it has eased the burden on both nVidia and AMD in the driver department.

Since nVidia have pretty well-optimised drivers, I can understand them being upset and attacking the developer, because the advantage they have enjoyed for so long is now gone.
 
The R9 390 is doing quite well. People who got the aftermarket R9 290 cards early on are not doing too badly for their money!

290Xs would do quite well too.:D

Edit: I am waiting for a Fury X to arrive at the moment, so I may give the 290Xs a go on this as they are paired up with a 5960X. My 290Xs are pretty good overclockers, so it could be interesting.:)
 
2% margins, Kaap; there is no reason why the 980 Ti should fall further behind with Nvidia's DX11 drivers.

@ D.P

Do you understand draw calls? It really doesn't look like you do; if you did, you would understand why this happened: a little phenomenon called being "CPU bound".

Why are you clutching at this so hard? It's insane.

Now, I know you do; you are one of the few people on here who understand what's going on at the back end of these things, so when you see something like that it should be immediately obvious to you what is going on. Why you opt to pretend you don't understand it and have half the forum explain it to you is beyond me.

This thread is bonkers, like a lot of threads around here of late.
 
Something from their latest journal.

http://www.ashesofthesingularity.com/journals

The units of Ashes aren’t instances of each other. What we mean by that is that every single unit in Ashes is a unique object. We could (if we had the art budget) ensure that every single one of the thousands of different units had slight variations. We don’t mean cosmetics only but we could let a particular unit have a particular weapon or change in their model. This will have significant benefits in the future.

Per-unit damage and variations in weapon loadouts would be so awesome. Takes me back to playing the Earth series; that had the most epic tech system and unit customisability. You could select what wheels, turret, cannon, engine, tracks or legs a unit had. So ****ing sweet. A shame it was sidelined so much; the game itself was epic.

Would love a new game that had Earth 2160's tech and unit creation system.
 
Well, one thing has come out of this, and that's that AMD's DX11 drivers are clearly not multithreaded like nVidia's. As such they place more burden on the CPU, hence lower frame rates at lower resolutions or on lower-end CPUs, where nVidia cards perform great.

...

I can't see nVidia being able to excel any more than they currently do, as their drivers are good and their DX11 drivers are well multithreaded, unlike AMD's. If anything DX12 has levelled the playing field, since AMD's single-threaded drivers will no longer be a performance limitation; it has eased the burden on both nVidia and AMD in the driver department.

Since nVidia have pretty well-optimised drivers, I can understand them being upset and attacking the developer, because the advantage they have enjoyed for so long is now gone.

It's not so much due to multi-threading improvements (which do help) or better drivers as such; people seem to have skipped over the implications of the changes from the R334/7 drivers onwards.

AMD are still, for the most part, taking whatever the DX11 API hands them and doing the best they can with it. nVidia have gone a step further and are actively intercepting certain DX11 functions, generally those that interface with the driver directly, and optimising how they work before they hit the driver itself.
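As a loose illustration of that interception idea (plain Python, invented names, nothing to do with the real driver internals): a shim that sits in front of the expensive layer can filter out redundant state changes so far fewer calls ever reach it.

```python
# Invented example: a shim intercepts "API" calls and drops redundant
# state changes before they reach the costly layer underneath.
class Driver:
    def __init__(self):
        self.calls = 0

    def set_state(self, key, value):
        self.calls += 1  # every call that reaches here is expensive


class InterceptingShim:
    def __init__(self, driver):
        self.driver = driver
        self.cache = {}

    def set_state(self, key, value):
        # Only forward calls that actually change something.
        if self.cache.get(key) != value:
            self.cache[key] = value
            self.driver.set_state(key, value)


raw, shim = Driver(), InterceptingShim(Driver())
for _ in range(100):  # a naive engine re-sets the same state each frame
    raw.set_state("blend", "opaque")
    shim.set_state("blend", "opaque")
print(raw.calls, shim.driver.calls)  # 100 vs 1
```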
 
It is not the margin that is important; it's the fact that it did not get bigger for the AMD cards under conditions that are supposed to favour them: high draw calls.

This is a problem with AMD's DX11 draw calls vs Nvidia's; Nvidia's are just a lot better, so the AMD GPU suffers more. It seems to be particularly bad with the Fiji cards, which does explain why they appear to be in some way strangled considering their size and power.
 
This is a problem with AMD's DX11 draw calls vs Nvidia's; Nvidia's are just a lot better, so the AMD GPU suffers more. It seems to be particularly bad with the Fiji cards, which does explain why they appear to be in some way strangled considering their size and power.

He means the DX12 results in the picture up the page, the ones that are marginal: their results showing the 980 Ti ahead of the Fury X in the draw-call-heavy scene.

But I think it is mostly statistics and a CPU-bound scenario, as other results show the Fury X being ahead on a more powerful CPU in the same draw-call-heavy scene. That large table of results up the page used a 4770K.
 
He means the DX12 results in the picture up the page, the ones that are marginal: their results showing the 980 Ti ahead of the Fury X in the draw-call-heavy scene.

DX11 or 12?

Edit: DX12 lol... now my head is getting messed up. I don't get the point he's making with that.
 
He means the DX12 results in the picture up the page, the ones that are marginal: their results showing the 980 Ti ahead of the Fury X in the draw-call-heavy scene.

But I think it is mostly statistics and a CPU-bound scenario, as other results show the Fury X being ahead on a more powerful CPU in the same draw-call-heavy scene. That large table of results up the page used a 4770K.

Right, there could still be some issue with Fiji on lesser CPUs even in DX12, although it's marginal and probably means nothing at all.
This is a hugely complex benchmark with a lot of information to decipher; if approached from a predisposition of it being flawed, it's easy to misunderstand a lot of it.

In actual fact, for me this benchmark is showing up a lot of very interesting things.

People should step back from Nvidia's rant and look at it objectively. Nvidia have perverted this whole thing, sadly.
 
I think most people are missing the fact that this is a very CPU-heavy genre of game, and this game itself is the most CPU heavy in this genre to date, considering the scale of the game.

So unless we can get results for a 4770K at stock from elsewhere, for extra verification, I am thinking that it is still becoming CPU bound.
 
I think most people are missing the fact that this is a very CPU-heavy genre of game, and this game itself is the most CPU heavy in this genre to date, considering the scale of the game.

So unless we can get results for a 4770K at stock from elsewhere, for extra verification, I am thinking that it is still becoming CPU bound.

Yeah, I see where you are coming from.

At the moment I'm inclined to believe this, as a benchmark, is deliberately tweaked to push a massive number of calls to give the benchmark purpose.

When you think that in DX12 an 8-thread i7 can push 15 to 20x as many calls as it would in DX11, it's hard for me to believe that this level of call demand is normal for this game. It certainly doesn't look like it should require that much; there are a lot of points of light and instances in it, but not 15m worth of calls.

How the actual game behaves will also be interesting.
 
Yeah, I see where you are coming from.

At the moment I'm inclined to believe this, as a benchmark, is deliberately tweaked to push a massive number of calls to give the benchmark purpose.

When you think that in DX12 an 8-thread i7 can push 15 to 20x as many calls as it would in DX11, it's hard for me to believe that this level of call demand is normal for this game. It certainly doesn't look like it should require that much; there are a lot of points of light and instances in it, but not 15m worth of calls.

How the actual game behaves will also be interesting.

The benchmark is running a full game simulation; that means AI, pathfinding and weapon ballistics (each shell is a physical object with its own physics, like in Supreme Commander). And all of that is before graphics become involved.

You also have to take into consideration that each new unit bumps up the work on the AI and pathfinding threads, so what they have saved in overhead in DX12 is also going towards enabling more units.

I think that if you could pause the game simulation, then most of the CPU benchmarks would be far closer to the 5960X benchmark.

I don't even think the game is pushing DX12 anywhere near the draw call numbers that the GPUs can handle, considering how close the DX11 Nvidia results are to their DX12 results, with the DX11 API results being around 1.2 to 2 million draw calls depending on the processor. Just because you can push that many draw calls does not mean they will, but they will certainly use the extra budget they now have.
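A quick back-of-envelope sketch (Python, using the thread's own ballpark figures, so treat every number as rough): dividing API throughput by a target frame rate gives the per-frame draw call budget, which shows how much headroom the DX12 multiplier adds.

```python
# Per-frame draw call budget = API throughput / target frame rate.
# Figures below are the rough numbers quoted in this thread, not
# measured values.
def calls_per_frame(calls_per_second, target_fps=60):
    return calls_per_second / target_fps

dx11_budget = calls_per_frame(2_000_000)        # ~2M calls/s in DX11
dx12_budget = calls_per_frame(2_000_000 * 15)   # thread's ~15x multiplier

print(int(dx11_budget), int(dx12_budget))  # ~33333 vs ~500000 per frame
```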

Just found it: considering the benchmark details from AMDMatt's results, the batches go nowhere near what DX12 is capable of pulling. And looking at his result, he was GPU bound the entire time, although his 5960X could have been managing over 100 fps if the GPU could keep up.
[benchmark result screenshot]
The funniest part about the above image is that 11k draw calls in DX11 is already a long stretch for most games. The API test might show that a driver can do more, but it is only rendering simple textured cuboids and a simple global light.

The only thing that makes this a reproducible benchmark is that the unit movements and attacks are scripted in their execution, but they still perform all of their pathfinding and ballistic calculations.

This is, as it states on the benchmark itself, a "full system test".

I think another major problem is that this benchmark gives a lot of information, but none of the websites showed all of this extra information or explained the benchmark correctly.
 
Interesting. ok :)

I also just thought of something else: the temporal AA has a massive amount of CPU overhead from what I remember, if it uses the same type as Star Swarm.

When I disabled that in Star Swarm I went from low double digits to over 30 fps.

So even on high settings and with heavy draw calls, the more CPU-limited benchmarks could get a good boost in performance.
 