Ashes of the Singularity Coming, with DX12 Benchmark in thread.

... and possibly affect any DirectX 12 game/benchmark

Yeah, it didn't make much sense for Nvidia to point the finger of blame at Oxide; MSAA is part of DX11/12 and has nothing to do with the engine code.
Nvidia should know this.
 
What they wanted to see was something representative of the DX12 performance they see in other game engines, not something that is obviously filled with bugs, which is exactly what Nvidia publicly stated. Really not hard to figure out, is it?

When the first DX12 benchmark is a complete dud, it is very disappointing.

Ashes is doing what DX12 is designed to do: issue tons of draw calls. If that is not representative of DX12, I don't know what is. The 3DMark API test showed the 290X on par with high-end Nvidia cards, and now that we have a real-world test giving the same results, the fanboys are calling it useless. :rolleyes:
Is it so hard for you guys to accept that maybe AMD has improved their drivers significantly?

I suppose it's understandable that the Nvidia crew will see this as disappointing since their manufacturer of choice is not performing faster than AMD.
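For anyone unfamiliar, the "tons of draw calls" point is about DX12's cheap command submission: draws are recorded into command lists with very little per-call driver overhead, because validation and state compilation are paid up front in the pipeline state object. A minimal sketch of the pattern, assuming the device, root signature, and per-unit constant buffers were all created during setup (names here are illustrative, not Oxide's code):

```cpp
#include <d3d12.h>
#include <vector>

// Record one cheap draw per unit. The heavy lifting (validation, shader
// state) was done when the pipeline state object was built, which is what
// lets an RTS push thousands of draws per frame under DX12.
void RecordUnitDraws(ID3D12GraphicsCommandList* cmdList,
                     ID3D12PipelineState* unitPso,
                     const std::vector<D3D12_GPU_VIRTUAL_ADDRESS>& perUnitCbs,
                     UINT indexCount)
{
    cmdList->SetPipelineState(unitPso);
    for (D3D12_GPU_VIRTUAL_ADDRESS cb : perUnitCbs)
    {
        // Root parameter 0 is assumed to be a root-level CBV per unit.
        cmdList->SetGraphicsRootConstantBufferView(0, cb);
        cmdList->DrawIndexedInstanced(indexCount, 1, 0, 0, 0);
    }
}
```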
 
...
I suppose it's understandable that the Nvidia crew will see this as disappointing since their manufacturer of choice is not performing faster than AMD.

I don't think you're quite grasping it.

Personally, AMD could have technology that finally gets someone back on the moon for all I care. It could have a graphics card a million times faster than Nvidia's; I don't care a jot, and I'm the biggest Nvidia fanboy going.

It baffles me, though, that a shoddy benchmark from a game that is very cosy in bed with AMD is now gospel for how good AMD/DX12 is.

This reeks of Mantle all over again.
 
Instead of arguing about AMD DX11 => DX12 performance, why is no one looking at Nvidia DX12 vs AMD DX12? Surely that's what it's all going to boil down to in the end?


This +100


The only thing I would say about this benchmark is, if this is indicative of the final game, then you definitely don't want to be running it on an AMD card without DX12. Truly shocking performance.

I've just read that the shocking performance on AMD in DirectX 11 is deliberate, to make the DX12 result look better. Well, I suppose it takes all sorts to wear a tin foil hat. :p


LtMatt, I assume that the Theoretical CPU Framerate result goes up and down with higher or lower CPU clocks? (Rough sketch of what I take it to mean at the end of this post.)



Oh and one last thing. Called it back on post 19 ;)

It's going to be interesting, but I can imagine that whichever team turns out to be slower, there will be cries of 'it's not a fair test because XYZ'.
Fun times ahead. :)
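On the Theoretical CPU Framerate question above: the benchmark's internals aren't public, so this is only a guess, but presumably the figure is just the frame rate the CPU side could sustain if the GPU were infinitely fast, i.e. the inverse of the per-frame simulation and submission cost. If so, it would indeed rise and fall with CPU clocks whenever that cost is compute-bound:

```cpp
// Hypothetical reading of "Theoretical CPU Framerate" (an assumption, not
// anything Oxide has documented): invert the CPU-side frame cost and
// ignore the GPU entirely. Higher clocks shrink the cost, raising the figure.
double TheoreticalCpuFps(double cpuFrameSeconds)
{
    return 1.0 / cpuFrameSeconds; // e.g. 16 ms of CPU work -> ~62.5 fps
}
```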
 
The irony, at least for me, is that Ashes is/will be the quintessential DX12 vehicle. You don't need DX12 to see strands of wavy hair on a middle-aged swashbuckler, but you do need it for the thousands of assets in a large-scale RTS battle.
 
Or 900 thousand strands of grass and 80 thousand trees and shrubs, to make one's world of vegetation look like something that actually resembles reality.
 
OK, so riddle me this. According to Legit Reviews, Oxide said this in the release notes:

MSAA is implemented differently on DirectX 12 than DirectX 11. Because it is so new, it has not been optimized yet by us or by the graphics vendors. During benchmarking, we recommend disabling MSAA until we (Oxide/Nvidia/AMD/Microsoft) have had more time to assess best use cases.

But when Nvidia said effectively the same thing, Oxide instead said the MSAA code in DX11 and DX12 was identical and couldn't be the source of any problems.

Eh?
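For what it's worth, "implemented differently" is at least plausible on its face: in DX12 the engine, not the driver, owns the MSAA surfaces and records the resolve step itself, so there genuinely is new code on both sides. A minimal sketch of that explicit resolve, assuming both resources were created with matching formats and the RESOLVE_SOURCE/RESOLVE_DEST barriers are recorded elsewhere (names illustrative):

```cpp
#include <d3d12.h>

// In DX12 the engine explicitly resolves its multisampled render target
// into a single-sample texture. DX11 exposed ResolveSubresource too, but
// the driver had far more freedom to reorganise MSAA storage and resolves
// behind the application's back, which is the optimisation gap Oxide's
// release note points at.
void ResolveMsaaTarget(ID3D12GraphicsCommandList* cmdList,
                       ID3D12Resource* msaaColor,  // e.g. a 4x MSAA target
                       ID3D12Resource* resolved)   // single-sample output
{
    cmdList->ResolveSubresource(resolved, 0, msaaColor, 0,
                                DXGI_FORMAT_R8G8B8A8_UNORM);
}
```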
 
...
If this is indicative of the final game, then you definitely don't want to be running it on an AMD card without DX12. Truly shocking performance.

I've just read that the shocking performance on AMD in DirectX 11 is deliberate, to make the DX12 result look better. Well, I suppose it takes all sorts to wear a tin foil hat. :p
...

Or, with all DX11 OS options having a free upgrade path to DX12, they didn't bother putting any effort into making the DX11 code particularly awesome, because what would be the point? (It does detract a bit from comparisons if so, but then it's always going to be hard to tell how well optimised two different code paths may be!)

Still, hard to argue that you wouldn't want to be running it on AMD DX11 looking at the early results!
 
Looks to be working for me:

[screenshots: 79b.jpg and 79a.jpg]

I know you like synthetic benches, but for things like DX12 is it really best to use a bench that deliberately tries to avoid any kind of CPU bottleneck while testing the GPU, thus ignoring one of the main points of DX12?

Different benchmarks are suited to different things and scores won't necessarily change based on resolution if CPU-bound - just because it isn't relevant to your setup doesn't make it a useless bench. Given the bench shows GPU scaling just fine with a high-end CPU I'm not really sure what you're so worked up about anyway. Yes, being able to separately bench CPU & GPU is nice, but real games all use both so combined tests have relevance too. I'm not saying synthetic ones that try to avoid bottlenecks on one side or the other shouldn't exist, but to suggest anything that uses more of the system is pointless suggests you've forgotten that other people do more than bench their machines.

Of course, if we're not thinking about buying this game then it being 'real game performance' or a moderate approximation of this becomes far less relevant :p

Edit: Not saying this is a great bench, may be super-flawed for some reason or another, just saying that it being possible to be CPU or GPU bound is not inherently bad.

The problem is a 6700K seems to do just as well as a 5960X.

AMDMatt is using a 5960X, which shows 100% usage on all cores/threads.

The 5960X has twice the cores and threads of a 6700K, so something does not add up. :eek:

The bench cannot be CPU-bottlenecked if you get the same scores with both CPUs, but then why does the 8-core/16-thread 5960X run at 100%?

Coming at it from the other direction, why are the 1080p and 1600p results so close on the GPUs?

Is this bench CPU-bottlenecked, GPU-bottlenecked, both, or just rubbish? Nothing adds up. :)
 
Kaap

There is an option to enable AFR, but it does not appear to work, as the results were similar to my single-GPU scores.

One thing I did notice, though: with AFR ticked the CPU load dropped across all threads, even though the FPS remained the same. I made the mistake of running a lower Temporal AA setting than earlier, so that explains the slight difference in FPS from my first results.

All four GPUs showed usage in AB, but this is normal as CrossFire was not physically disabled in CCC.

[screenshot: 20ieuqe.jpg]
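For anyone wondering what a working AFR mode should look like: alternate frame rendering simply round-robins whole frames across the GPUs, so frame rate should approach the single-GPU figure times the GPU count, minus sync overhead. A toy model of the scheduling, purely illustrative rather than how the driver actually exposes it:

```cpp
// Toy AFR model: frame N is rendered by GPU (N % gpuCount). Identical FPS
// to a single card, as reported above, suggests the option isn't actually
// engaged even though the tick-box changes CPU load.
int GpuForFrame(int frameIndex, int gpuCount)
{
    return frameIndex % gpuCount;
}
```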
 
Kaap, don't forget Matt's CPU scores are way higher than his GPU FPS. So it's no wonder you are getting the same scores with a 6700K, which is also enough to feed one Fury X in DX12.

And it's actually faster under DX11 (which is natural) because of better single-threaded performance.
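That is the piece that makes Kaap's numbers add up: the delivered frame rate is roughly the lower of what the CPU can feed and what the GPU can render, so a faster CPU changes nothing once the GPU is the limiter, while a wide chip can still show high utilisation if the engine spreads work over every thread. A toy illustration with made-up figures:

```cpp
#include <algorithm>
#include <cstdio>

int main()
{
    // Made-up figures for illustration only.
    const double cpuFps = 90.0; // frames/s the CPU side could feed
    const double gpuFps = 45.0; // frames/s the GPU can actually render
    // Delivered rate is bounded by the slower stage, so swapping a 6700K
    // for a 5960X moves cpuFps, not the minimum.
    std::printf("delivered: ~%.0f fps\n", std::min(cpuFps, gpuFps));
    return 0;
}
```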
 
It stands to reason to me that AMD would have a lead in performance and drivers, given they have been optimising for Mantle since at least the HD 7xxx series, and aspects of Mantle made it into both Vulkan and DX12. It seems they would have a pretty decent lead in DX12 optimisations, be that hardware or software.

Nvidia's response seemed a bit panicky, which only gives credence to the speculation of some that AMD's hardware may be better at DX12. Perhaps, though, they just have a long memory, and ATI's massive dominance in early DX9 gave them a few shivers.

Personally, I think it's a non issue at the moment but time will tell I guess.

Think I'll just enjoy the show.
 
...
The only thing I would say about this benchmark is, if this is indicative of the final game, then you definitely don't want to be running it on an AMD card without DX12. Truly shocking performance.
...



If this is anything like DX12 then game developers simply won't touch it.

Luckily for gamers the benchmark is hopelessly wrong.
 
OK, so riddle me this. According to Legit Reviews, Oxide said this in the release notes:



But when Nvidia said effectively the same thing, Oxide instead said the MSAA code in DX11 and DX12 was identical and couldn't be the source of any problems.

Eh?

This also confuses me....

"Some optimizations that the drivers are doing in DX11 just aren’t working in DX12 yet. Oxide believes it has identified some of the issues with MSAA and is working to implement workarounds on our code."
 