Ashes of the Singularity Coming, with DX12 Benchmark in thread.

If it's anything like the first Supreme Commander it will be a great game. We haven't had any recent RTS releases AFAIK, so I can't wait.

Agreed, Planetary Annihilation never lived up to the hype, and Square Enix had Supreme Commander 2 noobified to try to compete with StarCraft 2 (despite the fact that the target audience all look down on baby RTS games like SC2).
 
I may be in the minority here, but I am really not looking forward to the DX12 future from the way it's shaping up.

If we get to the point, as it looks like we will, where £500 CPUs actually offer noticeable improvements over £250 ones, then that frees the GPUs up to run free, and we're back to the Y2K era of having to replace our systems every year to keep running stuff at decent settings :(

People can complain all they like about how game improvements have slowed dramatically over the past 5+ years, but I kind of like the fact that you can still play new games at decent settings using a 3-year-old setup.

Maybe I'm just getting old lol, I remember the past and have no desire to relive it :P

This benchmark is an extreme example: an i5 can push 13m draw calls per second at 33ms latency (30 FPS). Trust me, nothing is going to issue more than 5m draw calls a second for a while yet.

Like the Star Swarm benchmark, this one has been tweaked to issue vast numbers of draw calls where an ordinary game wouldn't.
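To put those figures in perspective, here's the back-of-the-envelope maths (a rough sketch using the 13m/sec and 33ms numbers quoted above; the "typical" per-frame count is my own ballpark assumption):

```python
# Draw-call budget maths for the figures quoted above.
draw_calls_per_sec = 13_000_000  # what an i5 managed under DX12
frame_time_ms = 33               # ~30 FPS
fps = 1000 / frame_time_ms       # ~30.3 frames per second

# Draw calls available per frame at that rate:
calls_per_frame = draw_calls_per_sec / fps
print(f"{calls_per_frame:,.0f} draw calls per frame")  # ~429,000

# A typical game issues a few thousand draw calls per frame, so even a
# (hypothetical) heavy workload of 5,000 calls/frame leaves huge headroom:
typical_calls_per_frame = 5_000
frames_supported = draw_calls_per_sec / typical_calls_per_frame
print(f"enough for {frames_supported:,.0f} frames/sec of a typical game")
```

That's roughly 429,000 draw calls available every frame, which is why the benchmark has to be deliberately tweaked to get anywhere near the limit.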
 

True. Although this game is doing more, it is not pushing draw calls to the extreme; the performance gain is coming from the reduced driver overhead. But that does not mean the CPU is doing nothing. I was quite surprised by AMDMatt showing all 12 threads being used on his 5960X at 100%.
 

He has 3 or 4 Fury-X
 
So people are still ignoring the fact that 'as the resolution is upped, the frames go up' in favour of posts like this.

In the olden days, we at least used to have some intelligent discussions :(

I honestly can't see where the framerate increases as resolution goes up (certainly not in the DX11 to DX12 scaling R9 390X and GTX 980 results that Ubersonic linked).

e.g.
R9 390x, 5960X:
1080p Low vs 1600p Low DX11 43.1 -> 43
1080p Low vs 1600p Low DX12 78 -> 70.8
1080p High vs 1600p High DX11 36.6 -> 34.9
1080p High vs 1600p High DX12 53.8 -> 45.8

GTX 980, 5960X:
1080p Low vs 1600p Low DX11 71.4 -> 70.7
1080p Low vs 1600p Low DX12 78.3 -> 67.1
1080p High vs 1600p High DX11 57 -> 48.4
1080p High vs 1600p High DX12 50.3 -> 42.3
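The DX11 to DX12 change in those figures is easy to quantify. A quick sketch using only the FPS numbers listed above (the dict layout is just my own way of organising them):

```python
# FPS figures quoted above: (DX11, DX12) per card/resolution/preset.
results = {
    ("R9 390X", "1080p", "Low"):  (43.1, 78.0),
    ("R9 390X", "1600p", "Low"):  (43.0, 70.8),
    ("R9 390X", "1080p", "High"): (36.6, 53.8),
    ("R9 390X", "1600p", "High"): (34.9, 45.8),
    ("GTX 980", "1080p", "Low"):  (71.4, 78.3),
    ("GTX 980", "1600p", "Low"):  (70.7, 67.1),
    ("GTX 980", "1080p", "High"): (57.0, 50.3),
    ("GTX 980", "1600p", "High"): (48.4, 42.3),
}

def uplift(dx11, dx12):
    """Percentage change going from DX11 to DX12."""
    return (dx12 - dx11) / dx11 * 100

for key, (dx11, dx12) in results.items():
    print(*key, f"{uplift(dx11, dx12):+.1f}%")
```

The 390X gains in every case (up to roughly +81% at 1080p Low), while the GTX 980 actually loses FPS under DX12 in three of the four settings.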




What I can see is that the i3-4330 / FX8370 / FX6300 are all CPU limited (and that NVIDIA's DX11 drivers have less overhead than AMD's).

On both the 5960X and 6700K, the DX11 results are CPU limited at both 1080p and 1600p (although less so at 1600p on the GTX 980, due to NVIDIA's better DX11 drivers).

NVIDIA's DX12 drivers appear poorly optimised (in some cases slower than DX11, due to how well optimised their DX11 drivers are).
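The "CPU limited" call comes straight from the shape of the numbers: if FPS barely moves when resolution goes up, the GPU isn't the bottleneck. A minimal sketch of that heuristic (the 5% threshold is my own assumption, not anything from the benchmark):

```python
def looks_cpu_limited(fps_low_res, fps_high_res, tolerance=0.05):
    """If raising the resolution barely changes FPS, the CPU (or driver
    overhead) is the bottleneck rather than the GPU."""
    drop = (fps_low_res - fps_high_res) / fps_low_res
    return drop < tolerance

# R9 390X, DX11 Low: 43.1 FPS at 1080p vs 43.0 at 1600p -> CPU limited
print(looks_cpu_limited(43.1, 43.0))   # True
# R9 390X, DX12 High: 53.8 vs 45.8 -> the GPU is doing the limiting
print(looks_cpu_limited(53.8, 45.8))   # False
```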
 

I was only running one card at the time. I will have to test again with all four running.
 
Matt, nice to see such great CPU scaling. But it all makes me wonder why AMD's 8-core CPUs didn't show any good performance in PCPer's testing. They were actually slower than i3s, and Oxide Games have themselves said that they should be trading blows with i7s.

Well, guess there is reason to wait for new patches :)
 

I had read that multi-GPU support was not being implemented for DX12 until the pre-beta. Or do you have a special build?

Would be nice to see some numbers if it is working.
 
Exactly, and they perform about equally. I don't get why Nvidia are so upset by it; what did they want to see, AMD falling way behind?
After all their DX12 PR, maybe.

It's all good really :)

What they wanted to see was something representative of the DX12 performance they see in other game engines, not something that is obviously filled with bugs, which is exactly what Nvidia publicly stated. Really not hard to figure out, is it?

When the first DX12 benchmark is a complete dud, it is very disappointing.
 


The bugs are in Nvidia's Drivers. :)

'Nvidia mistakenly stated that there is a bug in the Ashes code regarding MSAA. By Sunday, we had verified that the issue is in their DirectX 12 driver. Unfortunately, this was not before they had told the media that Ashes has a buggy MSAA mode. More on that issue here. On top of that, the effect on their numbers is fairly inconsequential. As the HW vendor's DirectX 12 drivers mature, you will see DirectX 12 performance pull out ahead even further.'



We've offered to do the optimization for their DirectX 12 driver on the app side that is in line with what they had in their DirectX 11 driver. Though, it would be helpful if Nvidia quit shooting the messenger.

http://forums.oxidegames.com/470406
 

It's a brand new API; it's obviously going to have issues. That hardly makes it a "complete dud". Nvidia's public statement was a pile of BS which was already debunked by the developer. You're delusional if you think any PR crap Nvidia comes up with is even remotely accurate, and that goes for AMD too.
 
The numbers are all over the place, so indeed on some websites certain combos of settings, CPU and GPU make some sense, but with other combos and websites the scaling is completely wrong.

Then when you say things like "Nvidia's DX12 drivers are poorly optimised", that is disproven by other results. E.g. the 770 in a CPU-bound setting got a 180% boost going from DX11 to DX12, but the 980 Ti lost something like 5%. DX12 should have the biggest positive impact on a combo that is CPU limited and has mountains of additional GPU resources untapped, and low-end GPUs should see minimal improvements, yet the results are the exact opposite, because quite frankly the game engine is completely flawed and no meaningful conclusions can be drawn from it.
 
It's the same story with DX11 in pretty much anything, Kepler vs Maxwell.

For whatever reason, be it drivers or something in the hardware, Kepler does not pull as many draw calls as Maxwell.
 

I'm pretty sure Matt did test in CPU benchmark mode, which simulates a future infinitely fast GPU (it discards frames). This benchmark has some neat features in it.
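The idea behind that mode is simple: time only the CPU-side frame work and throw the frames away, which is equivalent to having an infinitely fast GPU. A toy sketch of the concept (the per-frame workload here is invented, purely for illustration):

```python
import time

def cpu_benchmark(num_frames, cpu_work):
    """Measure CPU-side frame time only; frames are discarded rather
    than presented, simulating an infinitely fast GPU."""
    start = time.perf_counter()
    for _ in range(num_frames):
        cpu_work()  # game logic, batching, command-list building, etc.
        # No GPU submit / present: the frame is simply discarded.
    elapsed = time.perf_counter() - start
    return num_frames / elapsed  # theoretical CPU-bound FPS ceiling

# Hypothetical stand-in for a frame's worth of CPU work:
def fake_frame():
    sum(range(10_000))

print(f"{cpu_benchmark(100, fake_frame):.0f} FPS (CPU-bound ceiling)")
```

Whatever number comes out is the framerate you could never exceed no matter how fast the GPU is, which is why it's a useful way to show CPU scaling.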
 
The bugs are in Nvidia's drivers, and possibly affect any DirectX 12 game/benchmark:

http://forums.ashesofthesingularity.com/470406/page/1 said:
I won't try to speak for Nvidia PR.

What I can say, with absolute certainty: The MSAA issue they described will happen on any DirectX 12 game currently. That's why we were surprised they tried to describe the issue as a "bug" in our code.

I don't expect it to be an issue for long though. Even as recently as last week, Nvidia released an updated driver that made significant performance gains with Ashes.
 
There is an option to enable AFR, but it does not appear to work, as the results were similar to my single-GPU scores.

One thing I did notice though: with AFR ticked the CPU load dropped over all threads, even though the FPS remained the same. I made the mistake of running a lower Temporal AA setting than earlier, so that explains the slight difference in FPS from my first results.

All four GPUs showed usage in AB, but this is normal as CrossFire was not physically disabled in CCC.

 