
Ashes of the Singularity Coming, with DX12 Benchmark in thread.

Those slides highlight perfectly why the benchmark is flawed: DX12 results end up slower on the 980 Ti, so it's pretty damn obvious something is broken somewhere, either in the game engine or the Nvidia drivers. But since many of the AMD results are completely flawed too, I would go with engine problems primarily. Nvidia have stated as much.

I was aiming more towards the Fury X getting a massive gain at 1080p in DX12, ending up with its performance where it should be. But nice diversion there ;)

Also, as I mentioned before, those situations are getting CPU bound for the reasons I stated in the post about the Fury X vs the 980 Ti, so at higher res/detail the cards can perform a little better.

But the Nvidia cards losing a few fps in the DX12 situation can be explained as I mentioned earlier: it can be a case where DX12 is showing only up-to-date frames, whereas DX11 will be queuing up older frames, so the fps can become artificially inflated.

You can check the above yourself by changing the 'Frames to render ahead' option in the NV control panel: if you set it to 1 or 0 from the default of 3, you will lose fps but gain responsiveness, as the game will only show up-to-date frames.
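If you want a feel for why a deeper render-ahead queue can raise fps while hurting responsiveness, here's a toy Python simulation (my own sketch with made-up timings, not how any real driver is implemented): spiky CPU frame-prep times, a fixed GPU render time, and a configurable queue depth.

```python
def simulate(queue_depth, cpu_times_ms, gpu_time_ms):
    submit = 0.0        # when the CPU finishes preparing the current frame
    done = []           # GPU completion time of each frame
    latencies = []
    for i, t in enumerate(cpu_times_ms):
        start_prep = submit
        if i >= queue_depth:
            # queue full: CPU waits until frame i - queue_depth is
            # pulled off the queue by the GPU before preparing frame i
            start_prep = max(start_prep, done[i - queue_depth] - gpu_time_ms)
        submit = start_prep + t
        start_render = max(submit, done[-1] if done else 0.0)
        done.append(start_render + gpu_time_ms)
        latencies.append(done[-1] - start_prep)
    avg_fps = 1000.0 * len(cpu_times_ms) / done[-1]
    return avg_fps, sum(latencies) / len(latencies)

# spiky CPU: mostly 2 ms per frame, every 4th frame takes 20 ms; GPU needs 7 ms
times = [2.0, 2.0, 2.0, 20.0] * 250
print(simulate(3, times, 7.0))   # deeper queue: higher fps, higher latency
print(simulate(1, times, 7.0))   # shallow queue: lower fps, lower latency
```

In this toy model the deeper queue absorbs the 20 ms CPU spikes so the GPU rarely idles, at the cost of more frames of buffering between input and display.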
 
Good on AMD to actually have a nice performance gain from the start. Looks like Nvidia and my 980 Ti SLI have some catching up to do...
 
I was aiming more towards the Fury X getting a massive gain at 1080p in DX12. But nice diversion there ;)
Yes, it is a nice gain, the same gains that the 980 Ti should be making, which, as I said, just shows that something is very broken somewhere.

Also, as I mentioned before, those situations are getting CPU bound for the reasons I stated in the post about the Fury X vs the 980 Ti, so at higher res/detail the cards can perform a little better.

There is absolutely zero logic there at all. If a game is CPU bound at 1080p, the results cannot be better at higher resolutions or details. The results may not drop much, but they certainly can't increase unless there is a major flaw. With the 980 Ti results it is obvious there is something very wrong.

But the Nvidia cards losing a few fps in the DX12 situation can be explained as I mentioned earlier: it can be a case where DX12 is showing only up-to-date frames, whereas DX11 will be queuing up older frames, so the fps can become artificially inflated.

Again, absolutely zero logic here.

You can check the above yourself by changing the 'Frames to render ahead' option in the NV control panel: if you set it to 1 or 0 from the default of 3, you will lose fps but gain responsiveness, as the game will only show up-to-date frames.
I don't think you understand what that option is doing.
 
The results I have seen so far are total garbage. Any benchmark that does not scale with resolution is totally flawed, and any DX12 bench that does not scale with resolution even more so, as the whole idea of the new API is to remove the CPU bottleneck.

I will only take any new DX12 bench seriously when it scales in the same way as Heaven 4 both with resolution and number of GPUs.

I expect there will be a new version of the Heaven bench that uses DX12 and I think that will be a far better guide.

Really?

First off, DX12 isn't there to remove CPU overhead, it's to REDUCE it. This isn't a new concept, and the idea behind it is to then use more of the CPU now that it's available.

I.e. 1 million draw calls use 30% of the CPU on DX11, and this is reduced to 2% on DX12. Now you can straight up take that saving, or, as in the case of this game, use the reduced overhead to increase the draw calls dramatically, so now 10 million calls use maybe 15-20% of the CPU. The difference being that on DX11, 10 million draw calls would use 300% of your available CPU power.
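Just to make those numbers concrete (these are the hypothetical figures from the paragraph above, not measurements), a quick Python back-of-envelope:

```python
# Hypothetical per-call costs: 1M draw calls cost 30% of the CPU on DX11
# but only 2% on DX12.
dx11_cost_per_call = 30.0 / 1_000_000   # CPU % per draw call under DX11
dx12_cost_per_call = 2.0 / 1_000_000    # CPU % per draw call under DX12

calls = 10_000_000                      # crank the draw calls up 10x
print(f"DX11: {calls * dx11_cost_per_call:.0f}% CPU")  # 300% -> impossible
print(f"DX12: {calls * dx12_cost_per_call:.0f}% CPU")  # 20% -> easily doable
```

So the same CPU that would be drowned by 10 million calls under DX11 has plenty left over under DX12 for the game itself.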

The idea behind reducing overhead is to then reallocate that spare power. This is also an RTS, and RTSs in general are, and always have been, massively CPU limited outside of the graphics side. So removing the DX11 overhead allows the fps to jump, while changing resolution DOESN'T change the amount of CPU being used for the game itself and all the AI involved.

In many, many RTSs, performance doesn't scale well with resolution, if at all.



http://www.tomshardware.co.uk/overclocked-graphics-gpu-afterburner,review-31955-14.html

The 5870 is faster (by 0.1 fps) at 1920 than at 1650. Yes, old cards, but meh; Supreme Commander is the go-to RTS I remember and the first result I found. This isn't uncommon or even remotely new.

There are many more possibilities around efficiency and performance. CPU cores are designed to focus load onto fewer cores because it saves power. It could be a quirk that at 1080p the driver overhead means it can work on two cores and power down the other two. At 4K, the driver overhead means the driver spreads the CPU load across four cores, which actually frees up a few percent of spare power from the main game CPU thread, giving a boost to the game threads rather than the driver threads.
 
Yes, it is a nice gain, the same gains that the 980 Ti should be making, which, as I said, just shows that something is very broken somewhere.

There is absolutely zero logic there at all. If a game is CPU bound at 1080p, the results cannot be better at higher resolutions or details. The results may not drop much, but they certainly can't increase unless there is a major flaw. With the 980 Ti results it is obvious there is something very wrong.

Again, absolutely zero logic here.

I don't think you understand what that option is doing.

But why should the 980 Ti show gains as massive as the Fury X's? Maybe the situation is that the 980 Ti is already running near its hardware limit without DX12, since the Nvidia DX11 drivers already have nearly half the overhead of the AMD DX11 drivers.

And on frames to render ahead: it is a fact that with DX11 the driver will prepare some frames ahead of the GPU, and will even send those older frames to the GPU to increase apparent performance (FPS), but it adds latency. I think you don't know what the 'Frames to render ahead' option does.

You only have to try the option yourself: you will get lower fps if you decrease it from its default of 3.

And there are plenty of articles, some nearly a decade old, about modifying the number of frames that drivers will render ahead to improve input latency.
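To put rough numbers on the latency cost (a minimal sketch with made-up figures, not measurements from this game):

```python
# Each queued frame adds roughly one frame time of input latency, which is
# why dropping the render-ahead limit from 3 to 1 feels more responsive
# even if the fps counter dips a little.
def added_latency_ms(fps, frames_ahead):
    frame_time_ms = 1000.0 / fps
    return frames_ahead * frame_time_ms

print(added_latency_ms(60, 3))   # 50 ms of extra latency at 60 fps
print(added_latency_ms(60, 1))   # about 16.7 ms
```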
 
Really?

First off, DX12 isn't there to remove CPU overhead, it's to REDUCE it. This isn't a new concept, and the idea behind it is to then use more of the CPU now that it's available.

I.e. 1 million draw calls use 30% of the CPU on DX11, and this is reduced to 2% on DX12. Now you can straight up take that saving, or, as in the case of this game, use the reduced overhead to increase the draw calls dramatically, so now 10 million calls use maybe 15-20% of the CPU. The difference being that on DX11, 10 million draw calls would use 300% of your available CPU power.

The idea behind reducing overhead is to then reallocate that spare power. This is also an RTS, and RTSs in general are, and always have been, massively CPU limited outside of the graphics side. So removing the DX11 overhead allows the fps to jump, while changing resolution DOESN'T change the amount of CPU being used for the game itself and all the AI involved.

In many, many RTSs, performance doesn't scale well with resolution, if at all.



http://www.tomshardware.co.uk/overclocked-graphics-gpu-afterburner,review-31955-14.html

The 5870 is faster (by 0.1 fps) at 1920 than at 1650. Yes, old cards, but meh; Supreme Commander is the go-to RTS I remember and the first result I found. This isn't uncommon or even remotely new.

There are many more possibilities around efficiency and performance. CPU cores are designed to focus load onto fewer cores because it saves power. It could be a quirk that at 1080p the driver overhead means it can work on two cores and power down the other two. At 4K, the driver overhead means the driver spreads the CPU load across four cores, which actually frees up a few percent of spare power from the main game CPU thread, giving a boost to the game threads rather than the driver threads.

I remember getting dodgy results in Supreme Commander too. Never could explain why.
 
Nice to see the Nvidia peeps up in arms for a change. :p

That said, those performance numbers do look flawed. DX12 is VERY new, so I think it needs a bit more time to mature.
 
If the shoe was on the other foot, I can see these people saying something very different and not calling it garbage or flawed at all, lol, because Nvidia would have the gains. AMD have worked with a very similar API (Mantle) which does pretty much the same as, or at least something similar to, what DX12 does: reduce the CPU overhead. Like Drunken said, RTSs usually are CPU limited, and all DX12 is doing is reducing that overhead.

For all we know the game could still be quite CPU limited, as we don't entirely know how the game is using the CPU; usually physics, AI and the like are big factors in an RTS. And you can see in the charts that the game seems to favour IPC performance, which also kind of reflects this! Now, increasing the resolution normally makes things more GPU bound than CPU bound, right? But if this bench/game is more CPU limited than we are assuming at, say, 1080p, it's not too hard to see how we could get a similar FPS after a resolution jump, because for all we know we are not GPU limited yet: the GPU gets more workload and can handle it, hence similar FPS.
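The "similar fps across resolutions when CPU bound" idea can be sketched with a trivial model (my own toy numbers, nothing measured from the game): frame time is whichever of the CPU or GPU takes longer.

```python
# Toy model: each frame costs max(cpu work, gpu work). CPU work (AI,
# physics, sim) doesn't change with resolution; GPU work does.
def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_ms = 16.0                                       # resolution-independent
gpu_ms = {"1080p": 8.0, "1440p": 12.0, "4K": 24.0}  # made-up numbers
for res, g in gpu_ms.items():
    print(res, round(fps(cpu_ms, g), 1))
# 1080p and 1440p both land on 62.5 fps (CPU bound); only 4K drops (GPU bound)
```

In this sketch the fps only starts falling once the GPU side finally overtakes the fixed CPU cost, which is exactly the "resolution jump with similar FPS" behaviour described above.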

We all know AMD has worked very hard on DX12, and before that there was Mantle, which is very similar, so they have a good head start. Nvidia on the other hand have worked very hard on DX11 to make sure it put Mantle in a bad light, or at least didn't let it look godlike!

That's why Nvidia are awesome in DX11 games and destroy AMD most of the time. However, it doesn't appear Nvidia have been working as hard on DX12, because they focused on DX11. The charts seem to reflect this! That's how I see it so far!
However, I'm still investing in the Inno3D 980 Ti Hybrid when I get the money for it! As we know, Nvidia are quite good in the driver department, so we could see improvements quite quickly, especially by the time the game hits the shelves.
 
When it comes to DX12 and Mantle/Vulkan, for example, CPU benchmarks need to be the top priority, not GPU benchmarks. You need to compare the best CPU and the worst CPU to see how each affects the card, whether Nvidia or AMD. Obviously it's more time consuming, since changing the CPU is more difficult than changing the GPU, but hopefully more reviews get to it.
 
An RTS game like that will probably benefit from the Fury X's lopsided, shader-focused architecture, so Nvidia probably do have a point in that it won't be reflective of other DX12 games. The Fury X is a bit of a shading monster, but the rest of the architecture (HBM aside) is a bit pathetic compared to Nvidia's.
 
1.) Using an FX6300 is fail.
2.) Is it a static benchmark?
3.) Negative scaling with a more efficient API. Sounds legit.

You can't take anything away from AMD here, but it's obvious that there's something wrong with Nvidia's stuff here (Which is an Nvidia negative)
 
Gotta agree that there is something wrong on the Nvidia side, as I would have assumed they would have slightly better FPS gains. But I wouldn't go as far as saying the benchmark is garbage etc.
 
1.) Using an FX6300 is fail.
2.) Is it a static benchmark?
3.) Negative scaling with a more efficient API. Sounds legit.

You can't take anything away from AMD here, but it's obvious that there's something wrong with Nvidia's stuff here (Which is an Nvidia negative)

How about the 770 Ti getting a 180% performance boost? Is that a negative?

Ignore this result if anyone reads it later on; WCCFtech fudged this particular test and chose the theoretical 'CPU' performance benchmark with the 770 instead of the 'Full system test' option.
 