
Ashes of the Singularity Coming, with DX12 Benchmark in thread.

Most of the reviews show Nvidia losing performance under DX12, hence the discussions. But this is one game... in the other demos we've heard of, Nvidia gains performance under DX12, so what is happening with this one is up for debate. Nvidia says it's the developers' fault, the developers blame Nvidia, and round and round it goes on the forums.
Doesn't really feel like there's much to debate, honestly. We can theorize, but that's about it.
 
This is just straight up bizarre. ArsTechnica's benchmarks indicate that the 980 Ti actually *loses* performance under DX12.

I don't know what to make of it. It was always clear that AMD were behind in their drivers when it came to DX11 performance, but was their GPU architecture always that far ahead of the game that when 'unlocked' with DX12, they perform just as well as much more expensive Nvidia cards? Have Nvidia simply focused too much on DX11 serial functionality?

Interesting times ahead, it looks like.

I really wouldn't read too much into it at all. The benchmark is flawed and is in no way indicative of general DX12 performance.

When low-end Kepler cards get huge boosts and high-end Maxwell V2s see a performance decrease, it is a pretty sure sign the benchmark is seriously wrong, or that there is a major driver bug on Nvidia's side. Either way, it is a meaningless comparison to make.


Hopefully we will see a new benchmark from 3DMark, Unigine, etc.
 

The 770 never got a 180% performance boost in the end. I double-checked, and wccftech ran the wrong benchmark with that card: they ran the CPU theoretical benchmark instead of the full-system one by mistake. It even says so on the benchmark screenshots if you look.
 
Could it be that in DX12 a lot more is going on, and that AMD's driver overhead is cut and the wider bus is helping?


No, because low-end Nvidia cards see a boost with DX12. The wider AMD bus makes little difference; what ultimately counts is bandwidth, and although AMD have a solid lead there, there are other factors to consider such as compression. You can see that when overclocking the memory of a 980 Ti there is very little performance gain, simply because it is not limited by bandwidth. Besides which, changing the API won't change the bandwidth limits. Bandwidth is only really going to be a factor at 4K.
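The bandwidth argument can be sanity-checked with a toy roofline-style model: a frame is limited by whichever is slower, the shading work or the memory traffic, so a memory overclock only pays off when the traffic term dominates. All numbers below (frame cost, traffic, bandwidths) are illustrative assumptions, not measured values.

```python
def frame_time_ms(compute_ms, traffic_gb, bandwidth_gbps):
    """Frame time is the max of compute time and memory-transfer time."""
    transfer_ms = traffic_gb / bandwidth_gbps * 1000.0
    return max(compute_ms, transfer_ms)

# Hypothetical 1440p-class frame: 8 ms of shading, 1.5 GB of memory traffic.
base = frame_time_ms(8.0, 1.5, 336.0)   # ~336 GB/s, 980 Ti-class bandwidth
oc   = frame_time_ms(8.0, 1.5, 370.0)   # +10% memory overclock
print(f"stock: {base:.2f} ms, mem OC: {oc:.2f} ms")  # compute-bound: no change

# At a hypothetical 4K load the traffic term can dominate, and only then
# does the extra bandwidth show up as a gain.
base_4k = frame_time_ms(8.0, 4.0, 336.0)
oc_4k   = frame_time_ms(8.0, 4.0, 370.0)
print(f"4K stock: {base_4k:.2f} ms, 4K mem OC: {oc_4k:.2f} ms")
```

Changing the API moves neither term here, which is the point: an API swap doesn't change bandwidth limits.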


The problem most likely is that the developers are simply not experienced with the unique aspects of DX12. There is much more work required of developers in order to maximize performance compared with DX11. Somewhere they have likely done something sub-optimal for the Maxwell architecture that is leading to the performance drops.
 

The point still stands that low end Kepler cards seem to get a positive boost while high end Maxwells sometimes see a performance drop. That is definitive proof that there is a bug somewhere.
 
How is the benchmark flawed, though? Specifically.

I agree something strange is going on, and I agree we shouldn't be jumping to conclusions, but I'm not exactly going to write it off as meaningless, either. I'll need more evidence before being comfortable doing that.
 
Clearly AMD's drivers are working as expected with DX12 while NVIDIA's are not. This isn't the only DX12 application that shows a decrease in performance on NVIDIA Maxwell. My GTX 960 ran an Unreal Engine DX11 demo at 180 FPS, but the DX12 version ran at only 120 FPS. In another Unreal Engine demo I lost around 20-30 FPS due to DX12.

NVIDIA has a problem with DX12 currently. It will be sorted before any actual games come out, so I don't know why they're trying to discredit a benchmark that doesn't show their GPUs in the most positive light.
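Converting the FPS figures above into per-frame times shows the absolute size of the reported regression (this is just arithmetic on the numbers in the post, not new measurements):

```python
def frame_ms(fps):
    """Convert frames per second to milliseconds per frame."""
    return 1000.0 / fps

dx11, dx12 = 180.0, 120.0
delta = frame_ms(dx12) - frame_ms(dx11)
drop_pct = (dx11 - dx12) / dx11 * 100.0

print(f"DX11: {frame_ms(dx11):.2f} ms/frame")                 # 5.56 ms
print(f"DX12: {frame_ms(dx12):.2f} ms/frame")                 # 8.33 ms
print(f"extra cost: {delta:.2f} ms ({drop_pct:.0f}% FPS drop)")  # 2.78 ms, 33%
```

A 60 FPS headline drop sounds dramatic, but in frame-time terms it is under 3 ms of extra cost per frame, which is the kind of overhead a driver fix or a different codepath could plausibly remove.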
 


Increasing the resolution increases FPS on both AMD and Nvidia.
Increasing graphical detail from low to high increases FPS on both AMD and Nvidia.
Low-end Nvidia cards get a significant boost going to DX12 when they are the cards that should be least affected; high-end Nvidia cards get a performance drop when they should see the biggest gains.
Some of the CPU scaling is completely broken.


The benchmark is flawed for the purposes of trying to estimate future performance gains from DX12 in mature games.

Some of the results within the set may make sense and be representative but it is hard to know what results are meaningful.



If you want to see what DX12 will bring, look at what performance gain Mantle brought to AMD cards, apply that to both AMD and Nvidia, and that will give you a good ballpark. If anyone thinks that Nvidia's cards will be 5% slower with DX12 than DX11, they are deluding themselves.
 

My Fury also lost performance with DX12 on that Unreal tech demo. It's not just Nvidia.
 

What demos in particular? You have to be careful, because some DX12 demos include more advanced graphical effects that require significantly more GPU grunt and hence give slower results. These are effects that just aren't possible on DX11, or would have a much bigger performance hit there.

This is going to be a general trend in DX11 vs DX12 benchmarks going forward. The APIs can do different things, so apples-to-apples comparisons won't be straightforward.

I know for a fact, from speaking to people who develop with Unreal, that they get a good performance bump on Nvidia GPUs with DX12 and are happy with Nvidia's DX12 performance. Moreover, the UE4 DX12 codepath is still in development, which is why they haven't released any suitable benchmarks.


DX12 is much harder to develop for, requires more time and resources from developers, and they need to know more details about the hardware to get good performance. With DX11 the driver and Nvidia/AMD would make the smart decisions; now the game developer has to make those smart decisions, and they won't always get it right.
 
Increasing the resolution increases FPS on both AMD and Nvidia

No, it doesn't on the whole (in the PCPer article it only occurs on the i3 and the AMD chips, which are presumably CPU-limited anyway)

Increasing graphical detail from low to high increases FPS on both AMD and Nvidia

As above

Low-end Nvidia cards get a significant boost going to DX12 when they are the cards that should be least affected; high-end Nvidia cards get a performance drop when they should see the biggest gains.
Some of the CPU scaling is completely broken.

The 770 results have already been debunked.

The only thing meaningless around here is carrying on with the same pointless arguments.
 

http://www.computerbase.de/2015-08/...of-the-singularity-unterschiede-amd-nvidia/2/

The Kepler cards actually saw a regression in performance when paired with a high-end Intel CPU. The only times the Nvidia cards saw performance improvements was when they were paired with low- to mid-range CPUs.

There more than likely is a bug somewhere, or there is something happening in the Nvidia DX11 driver that is not in the DX12 path in the game. But we need to understand what is happening with the DX11 path compared to the DX12 one.

We just don't see it with the AMD cards, since the AMD DX11 driver is always falling behind in this benchmark.
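That CPU-pairing pattern is what you would expect if the main thing a lower-overhead API changes is the per-draw-call CPU cost. A minimal sketch, with all per-draw costs and the 10 ms GPU cost invented purely for illustration: a frame takes the longer of CPU submission time and GPU render time, so cutting submission cost only lifts FPS on configurations that were CPU-bound to begin with.

```python
def fps(draws, per_draw_us, gpu_ms):
    """Frame rate when a frame costs max(CPU submission time, GPU time)."""
    cpu_ms = draws * per_draw_us / 1000.0
    return 1000.0 / max(cpu_ms, gpu_ms)

DRAWS = 10_000
GPU_MS = 10.0  # fixed GPU-side cost per frame (assumed)

# Assume the DX12-style path cuts per-draw CPU cost to a quarter of the
# DX11-style path; a slow CPU pays more per draw than a fast one.
for label, scale in [("slow CPU", 3.0), ("fast CPU", 0.4)]:
    dx11 = fps(DRAWS, 2.0 * scale, GPU_MS)
    dx12 = fps(DRAWS, 0.5 * scale, GPU_MS)
    print(f"{label}: DX11 {dx11:.0f} fps -> DX12 {dx12:.0f} fps")
```

In this toy model the fast CPU is GPU-bound under both APIs and sees no change, while the slow CPU sees a large uplift, matching the low- to mid-range CPU results. It can't explain an outright regression on fast CPUs, though, which still points at a bug or a missing optimization in the DX12 path.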
 
Some of the results within the set may make sense and be representative but it is hard to know what results are meaningful.
That's all I'm saying.

I'm not even remotely close to drawing any conclusions about any of this. But I'm also not going to dismiss it as completely meaningless until we get more data suggesting this is just an outlying scenario, or that it is unrepresentative for whatever reason (purpose-made benchmarks aren't always the best indicators).

So yeah, just saying: I'm waiting for more information to come in before I start believing anything either way.
 
I reckon Ashes is one of those games where the increased draw calls will use as many stream processors/CUDA cores as possible, hence why AMD is getting a decent boost.

Once we see other DX12 games that are basically standard fare, which could easily have been done in DX11, Nvidia will probably move ahead. Unless of course AMD have truly caught up on the driver side.

Fable Legends is out this month, so that will be interesting.
 
In DX11 the lowly 960 is beating the Fury X: http://www.computerbase.de/2015-08/...-nvidia/2/#abschnitt_benchmarks_in_1920__1080

Do people really think this benchmark tells us anything meaningful? In how many games has the 960 been faster than the Fury X?


Complete junk.

Let's wait and see what happens when we get a DX12 benchmark that supports 4-way GPU setups; then we will be able to really tell if the extra draw calls are actually doing anything or not.

Sadly this benchmark does not support more than single-card usage.
 