and possibly affect any DirectX 12 game/benchmark
Yeah, it didn't make much sense for Nvidia to point the finger of blame at Oxide; MSAA is part of DX11/12, it has nothing to do with the engine code.
Nvidia should know this.
Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
and possibly affect any DirectX 12 game/benchmark
What they wanted to see was something representative of the DX12 performance they see in other game engines, not something that is obviously filled with bugs, which is exactly what Nvidia publicly stated. Really not hard to figure out, is it?
When the first DX12 benchmark is a complete dud, it is very disappointing.
Ashes is doing what DX12 is designed to do: use tons of draw calls. If that is not representative of DX12, I don't know what is. The 3DMark API test showed the 290X on par with high-end Nvidia cards, and now that we have a real-world test giving the same results, the fanboys are calling it useless.
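To put "tons of draw calls" in concrete terms, here's a minimal sketch of the kind of per-frame loop a DX12 renderer records. It's a generic illustration, not Oxide's actual code; the function, parameters and counts are made up for the example:

```cpp
#include <d3d12.h>
#include <cstdint>

// Records one draw per unit on an already-open command list.
// Device/PSO/root signature creation is omitted; the parameters are assumed
// to exist already and are purely illustrative.
void RecordUnitDraws(ID3D12GraphicsCommandList* cmdList,
                     ID3D12PipelineState* pso,
                     ID3D12RootSignature* rootSig,
                     uint32_t unitCount)
{
    cmdList->SetPipelineState(pso);
    cmdList->SetGraphicsRootSignature(rootSig);
    cmdList->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

    for (uint32_t i = 0; i < unitCount; ++i)
    {
        // Per-unit data (transform, material index, etc.) would be bound here;
        // a single root constant stands in for it in this sketch.
        cmdList->SetGraphicsRoot32BitConstant(0, i, 0);
        cmdList->DrawIndexedInstanced(36, 1, 0, 0, 0); // placeholder mesh
    }
}

// The DX12 win is that several such lists can be recorded on worker threads
// and then submitted together, e.g.:
//   ID3D12CommandList* lists[] = { cl0, cl1, cl2, cl3 };
//   queue->ExecuteCommandLists(4, lists);
```

In DX11 every one of those draws goes through the driver's immediate context on a single thread, which is exactly the overhead the 3DMark API test measures.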
Is it hard for you guys to accept that maybe AMD has improved their drivers significantly?
I suppose it's understandable that the Nvidia crew will see this as disappointing since their manufacturer of choice is not performing faster than AMD.
Instead of arguing about AMD DX11 => DX12 performance, why is no one looking at Nvidia DX12 vs AMD DX12? Surely that's what it's all going to boil down to in the end?
It's going to be interesting, but I can imagine that whichever team turns out to be slower, there will be cries of 'it's not a fair test because XYZ'.
Fun times ahead.
The irony, at least for me, is that Ashes is/will be the quintessential DX12 vehicle. You don't need DX12 to see strands of wavy hair on a middle-aged swashbuckler, but you do need it for the thousands of assets in a large-scale RTS battle.
MSAA is implemented differently on DirectX 12 than DirectX 11. Because it is so new, it has not been optimized yet by us or by the graphics vendors. During benchmarking, we recommend disabling MSAA until we (Oxide/Nvidia/AMD/Microsoft) have had more time to assess best use cases.
...
If this is indicative of the final game, then you definitely don't want to be running it on an AMD card without DX12. Truly shocking performance.
I've just read that the shocking performance on AMD in DirectX 11 is deliberate, to make the DX12 result look better. Well, I suppose it takes all sorts to wear a tin foil hat.
...
OK, so riddle me this: according to Legit Reviews, Oxide said this in the release notes;
Does that enable you to shapeshift into objects?
Looks to be working to me.
I know you like synthetic benches, but for things like DX12 is it really best to use a bench that deliberately tries to avoid any kind of CPU bottleneck while testing the GPU, thus ignoring one of the main points of DX12?
Different benchmarks are suited to different things, and scores won't necessarily change with resolution if CPU-bound; just because it isn't relevant to your setup doesn't make it a useless bench. Given the bench shows GPU scaling just fine with a high-end CPU, I'm not really sure what you're so worked up about anyway. Yes, being able to separately bench CPU and GPU is nice, but real games use both, so combined tests have relevance too. I'm not saying synthetic ones that try to avoid bottlenecks on one side or the other shouldn't exist, but to suggest anything that uses more of the system is pointless suggests you've forgotten that other people do more than bench their machines.
Of course, if we're not thinking about buying this game, then it being 'real game performance', or a moderate approximation of it, becomes far less relevant.
Edit: Not saying this is a great bench; it may be super-flawed for some reason or another. Just saying that it being possible to be CPU- or GPU-bound is not inherently bad.
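On the "separately bench CPU & GPU" point: a combined test can still report both sides in the same run if it pairs a CPU timer with GPU timestamp queries. Very rough sketch of the idea, assuming the device, queue, timestamp query heap and readback buffer already exist; the names are illustrative, not from Ashes or any other benchmark:

```cpp
#include <d3d12.h>
#include <chrono>
#include <cstdio>

// Times one frame two ways: CPU cost of building/submitting the command list,
// and GPU cost via two timestamp queries bracketing the work.
// Assumes the query heap (D3D12_QUERY_HEAP_TYPE_TIMESTAMP, Count >= 2) and the
// readback buffer (>= 2 * sizeof(UINT64)) were created elsewhere.
void MeasureFrame(ID3D12GraphicsCommandList* cmdList,
                  ID3D12CommandQueue* queue,
                  ID3D12QueryHeap* tsHeap,
                  ID3D12Resource* readback,
                  void (*recordScene)(ID3D12GraphicsCommandList*))
{
    const auto cpuStart = std::chrono::steady_clock::now();

    cmdList->EndQuery(tsHeap, D3D12_QUERY_TYPE_TIMESTAMP, 0);   // GPU start
    recordScene(cmdList);                                       // all the draws
    cmdList->EndQuery(tsHeap, D3D12_QUERY_TYPE_TIMESTAMP, 1);   // GPU end
    cmdList->ResolveQueryData(tsHeap, D3D12_QUERY_TYPE_TIMESTAMP,
                              0, 2, readback, 0);
    cmdList->Close();

    ID3D12CommandList* lists[] = { cmdList };
    queue->ExecuteCommandLists(1, lists);

    const auto cpuEnd = std::chrono::steady_clock::now();
    const double cpuMs =
        std::chrono::duration<double, std::milli>(cpuEnd - cpuStart).count();

    UINT64 gpuTicksPerSecond = 0;
    queue->GetTimestampFrequency(&gpuTicksPerSecond);

    // After waiting on a fence (omitted), map `readback`, subtract the two
    // UINT64 timestamps and divide by gpuTicksPerSecond for GPU seconds.
    std::printf("CPU build+submit: %.2f ms (GPU frequency: %llu ticks/s)\n",
                cpuMs, static_cast<unsigned long long>(gpuTicksPerSecond));
}
```

If the CPU number stays flat while the GPU number grows with resolution, you're GPU-bound, and vice versa; that is presumably what a 'Theoretical CPU Framerate' style figure is getting at.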
There is an option to enable AFR, but it does not appear to work, as the results were similar to my single-GPU scores.
One thing I did notice though: with AFR ticked, the CPU load dropped over all threads, even though the FPS remained the same. I made the mistake of running a lower Temporal AA setting earlier, so that explains the slight difference in FPS from my first results.
All four GPUs showed usage in AB, but this is normal as CrossFire was not physically disabled in CCC.
Or with all LEGAL DX11 OS options having a free upgrade path to DX12
This +100
The only thing I would say about this benchmark is: if this is indicative of the final game, then you definitely don't want to be running it on an AMD card without DX12. Truly shocking performance.
I've just read that the shocking performance on AMD in DirectX 11 is deliberate, to make the DX12 result look better. Well, I suppose it takes all sorts to wear a tin foil hat.
LtMatt, I assume that the Theoretical CPU Framerate result goes up and down with higher or lower CPU clocks?
Oh, and one last thing: called it back on post 19.
OK, so riddle me this: according to Legit Reviews, Oxide said this in the release notes;
But when Nvidia said effectively the same thing, they instead said the code for MSAA in DX11 and DX12 was identical and couldn't be the source of any problems.
Eh?
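For what it's worth, the "implemented differently" part is at least plausible at the API level: in DX12 the engine has to record the MSAA resolve and the resource state transitions itself, whereas in DX11 you just called ResolveSubresource on the context and the driver tracked the states. A rough sketch of the DX12 side (a generic illustration, not Oxide's code; all parameters are assumed to exist already):

```cpp
#include <d3d12.h>

// Resolves a multisampled render target into a single-sample texture in D3D12.
// The application owns the state transitions; in D3D11 the runtime tracked
// them and a single ID3D11DeviceContext::ResolveSubresource call was enough.
void ResolveMsaaTarget(ID3D12GraphicsCommandList* cmdList,
                       ID3D12Resource* msaaRT,       // e.g. 4x MSAA colour target
                       ID3D12Resource* resolvedTex,  // single-sample destination
                       DXGI_FORMAT format)
{
    D3D12_RESOURCE_BARRIER barriers[2] = {};

    barriers[0].Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barriers[0].Transition.pResource   = msaaRT;
    barriers[0].Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barriers[0].Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barriers[0].Transition.StateAfter  = D3D12_RESOURCE_STATE_RESOLVE_SOURCE;

    barriers[1].Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barriers[1].Transition.pResource   = resolvedTex;
    barriers[1].Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barriers[1].Transition.StateBefore = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    barriers[1].Transition.StateAfter  = D3D12_RESOURCE_STATE_RESOLVE_DEST;

    cmdList->ResourceBarrier(2, barriers);

    // The resolve itself is the same operation as in D3D11.
    cmdList->ResolveSubresource(resolvedTex, 0, msaaRT, 0, format);

    // Transition back so the resolved image can be sampled or presented.
    barriers[0].Transition.StateBefore = D3D12_RESOURCE_STATE_RESOLVE_SOURCE;
    barriers[0].Transition.StateAfter  = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barriers[1].Transition.StateBefore = D3D12_RESOURCE_STATE_RESOLVE_DEST;
    barriers[1].Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    cmdList->ResourceBarrier(2, barriers);
}
```

The resolve call does the same thing in both APIs; what changed in DX12 is that the engine owns the surrounding bookkeeping.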