
Ashes of the Singularity Coming, with DX12 Benchmark in thread.

Looks like it's working to me:

79b.jpg


79a.jpg

The problem with the above is the scores are not reflecting the changes in resolution.

It could be argued that some of the CPUs will find running this bench difficult, but I am pretty sure that if a 5960X with 8 cores/16 threads is struggling to show the difference, something is very wrong with the bench. :)

Edit: looking at the 6700K scores, I would say the 5960X is only using 4 cores/8 threads.
 
It could be argued that some of the CPUs will find running this bench difficult, but I am pretty sure that if a 5960X with 8 cores/16 threads is struggling to show the difference, something is very wrong with the bench. :)

Looking at the graph I *assume* that the test is GPU limited, which would make sense, as it's basically Supreme Commander with better graphics, and that hardly stresses a modern CPU.

Hence the 6700K and 5960X getting the same score at the highest setting (unless ofc the 6700K's higher speed and IPC are being perfectly offset by the 5960X's extra cores, which is very unlikely lol).
 
The problem with the above is the scores are not reflecting the changes in resolution.

It could be argued that some of the CPUs will find running this bench difficult, but I am pretty sure that if a 5960X with 8 cores/16 threads is struggling to show the difference, something is very wrong with the bench. :)

Edit: looking at the 6700K scores, I would say the 5960X is only using 4 cores/8 threads.

It is not scaling well on processors other than the 5960X because it is becoming CPU bound. This benchmark is not a pure GPU benchmark like the Unigine ones; it is running a FULL game simulation during the run, so if some cores are bottlenecking the system it will not scale with resolution. Far more is going on than with a simple FPS benchmark. But that can reveal that the game may not be scaling past a certain core count yet, as you mentioned, since it seems to be bottoming out at 4 cores/threads. The 5960X and 6700K, with their better IPC, can muscle through that compared to the 8370/FX-6300/i3 etc.

And you can see when the system becomes CPU bound in the benchmark itself, since it tells you what your theoretical FPS should be for a given CPU. But PCPer and a few other sites did not give any screenshots that show what is going on.
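The "slowest stage wins" idea in the post above can be sketched as a toy model. This is not Oxide's actual simulation, and every number below is invented purely for illustration:

```python
# Toy model of the point above: the frame rate you observe is capped by
# whichever stage is slower - the CPU-side game simulation or the
# GPU-side rendering. All numbers are invented for illustration.

def observed_fps(cpu_fps_limit, gpu_fps_limit):
    """Observed FPS is the minimum of the two stage limits."""
    return min(cpu_fps_limit, gpu_fps_limit)

# A CPU-bound chip: raising the resolution lowers the GPU limit, but the
# score barely moves until the GPU limit drops below the CPU limit.
cpu_limit = 40  # hypothetical simulation-limited fps on a slower CPU
for res, gpu_limit in [("1080p", 90), ("1440p", 60), ("1600p", 45), ("4K", 25)]:
    print(res, observed_fps(cpu_limit, gpu_limit))
```

Under this model the score sits flat at the CPU limit across resolutions, which is exactly the "not scaling with resolution" pattern described above.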
 
I have no problem with a 6700K and 5960X getting the same score - it's GPU bound.
When they have multi-card working, then let's see what happens with 4 cores vs 8 cores on, say, 3 or 4 cards.

That still doesn't explain why increasing the resolution actually increases the fps. If it's GPU bound, giving the GPU more to do should not mean it produces more frames.
 
That still doesn't explain why increasing the resolution actually increases the fps. If it's GPU bound, giving the GPU more to do should not mean it produces more frames.

It is only doing that in situations where the game is CPU bound, so at higher quality/resolution it can load the GPU better. But the increase is only tiny. This situation is not occurring with the 6700K or 5960X because the game becomes GPU bound on them.

The GPU is becoming better loaded in the CPU-bound situations, but even then the fps increase is small compared to the fps increases between DX11 and DX12 on the 5960X/6700K.
 
People got completely confused about the Star Swarm benchmarks and they are currently confused about this one as well. Oxide does benchmarks like no other ;) usually fps is not the measure of performance.
 
Because Star Swarm wasn't a benchmark - benchmarks need to be repeatable.

It was a tech demo. That's all. But all the sites took it for a benchmark when the DX12 patch came in. Same here. The AotS benchmark has so many data points to consider, and all of them are linked together, that all those benchmarketers are just confusing themselves, and people get hung up on these one-sided numbers. This is the reason you get some paradoxical numbers.
 
Oxide are calling this one a benchmark... none of the reviewers have commented on how repeatable it is or how many runs they did (that I can see), so where it falls is currently unknown until people get their hands on it and we see some more testing of different scenarios.

That would certainly explain the odd results. If the reviewers are averaging runs and not mentioning it, that would explain getting higher fps at higher res if it's just a rounding error.

Edit: PCPer say they are averaging runs, but don't say how many they did or how repeatable it was.
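If reviewers really are averaging a handful of noisy runs, a tiny run-to-run spread is enough to put the higher resolution fractionally ahead. A quick sketch with entirely hypothetical per-run numbers:

```python
from statistics import mean, stdev

# Hypothetical per-run fps for one card at two resolutions where it is
# CPU bound: the true performance is effectively identical, and
# run-to-run noise alone decides which average comes out on top.
runs_1080p = [43.1, 42.6, 43.4]
runs_1600p = [43.0, 43.5, 43.3]

print(f"1080p: {mean(runs_1080p):.2f} fps (spread {stdev(runs_1080p):.2f})")
print(f"1600p: {mean(runs_1600p):.2f} fps (spread {stdev(runs_1600p):.2f})")
# The 1600p average lands fractionally higher even though nothing got faster.
```

With three runs and a few tenths of a frame of noise, the "higher res, higher fps" result needs no deeper explanation than measurement variance.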
 
Oxide are calling this one a benchmark... none of the reviewers have commented on how repeatable it is or how many runs they did (that I can see), so where it falls is currently unknown until people get their hands on it and we see some more testing of different scenarios.

That would certainly explain the odd results. If the reviewers are averaging runs and not mentioning it, that would explain getting higher fps at higher res if it's just a rounding error.

Oxide was calling Star Swarm a benchmark as well ;) But what they were doing was just showcasing what different APIs can do while maintaining a certain fps.
In AotS the data output is a bit more organised, but it still shows a lot of different numbers which influence each other. As is usually the case, reviewers just rush things out and rarely take a better look at what they are rushing out.
And then we have everyone talking about how AMD drivers are bad for DX11, how Nvidia is crying, and how performance increases when increasing resolution. Excel charts on their own will never tell us the full picture.
It is a shame that we have review sites who have no clue what they are actually benchmarking.

On the other hand, the game code is in an alpha state, DX12 has been out for just several weeks, drivers are young, and the game was showcased on AMD hardware many times. Why some people jump the gun and declare these the end-all results, I have no clue, though I must admit, watching Nvidia release a driver for this game and a few days later cry that it is buggy and not representative is extremely fun. Can we imagine what kind of performance we would have gotten out of Nvidia if they had not released game-ready drivers? Some would say the game was completely unplayable before Nvidia released the new drivers ;) But there is no need to concentrate on that, since again, everything is at an early stage.
 
Good find there Humbug. :)

Nvidia writes bad code and drivers, doesn't know they are doing it, and blames someone else. Seen that pattern a lot from them.

Seems Ashes allows us to peek into DX12 and what it will do for us gamers.
DICE's Frostbite engine with DX12 is going to be cool to watch as well.
 
That still doesn't explain why increasing the resolution actually increases the fps. If it's GPU bound, giving the GPU more to do should not mean it produces more frames.

Because 1080p high is more of a load than 1600p low.

It isn't high vs low; it's 1080p low vs 1600p low for DX12, and 1080p high vs 1600p high for DX11.

In both cases it's low vs low or high vs high, not high vs low.

I'm not sure what you're trying to say, maybe you looked at the chart wrong?

Increasing the resolution does not increase the FPS, except when comparing 1080p high to 1600p low, which is understandable as it's lessening the load.
 
I have access to the same build as the reviewers are using, so here are a few spoilers of how the results look. Additionally, results are also placed in a My Documents folder; however, the file is pretty long. It looks like the results screen displays all the important info though, which is good.

When you select the High preset there are two other AA options you can increase: MSAA can go to x4, and Temporal AA can be set to Ultra from High.

Xxq11Zy.jpg

CS9MzO1.jpg

Jq3l3cV.jpg

I can confirm the benchmark is consistent. I'm seeing pretty much the same score on each run, depending on the settings used and batch count.

I had to run borderless mode to grab a screenshot as no overlay was working with the benchmark.
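That kind of consistency claim can be put into numbers by looking at the spread of repeated runs relative to their mean. A small sketch, with hypothetical scores standing in for real repeated-run results:

```python
from statistics import mean, stdev

def relative_spread(scores):
    """Standard deviation as a fraction of the mean; lower = more repeatable."""
    return stdev(scores) / mean(scores)

# Hypothetical scores from repeated runs at fixed settings and batch count.
scores = [44.8, 45.1, 44.9, 45.0]
print(f"mean {mean(scores):.2f} fps, run-to-run spread {relative_spread(scores):.1%}")
```

A spread well under 1% across runs would back up the "pretty much the same score each run" observation; anything larger would suggest reviewers need to average several runs.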
 
Game released, AMD performance is gash: developers and Nvidia are to blame.

This demo is released, Nvidia performance is sub-par (apparently): the benchmark is to blame.

Corrected. Haven't you seen the million green posts telling you the benchmark is clearly gash?
 
People, really, stop looking at small differences in frames. If you are GPU bound in DX11 and DX12, it's only normal that you can sometimes get a better score in DX11, as it's not a synthetic benchmark and the workload is a bit different each time. For a non-synthetic benchmark it's surprisingly consistent.

First tests look promising. However, there really is something odd going on with AMD CPUs. Let's wait for new versions of the drivers and the game to see how things evolve.

Edit: AMDMatt's post shows that CPU scaling for 8 cores is there. But the game is just very demanding on the GPU as well (unlike Star Swarm), which is why we don't see big differences between the fastest 4-core and 6/8-core parts.

Matt, would it be possible for you to run the CPU test in that benchmark and post the CPU core usages?
 
Corrected. Haven't you seen the million green posts telling you the benchmark is clearly gash?

So people are still ignoring the fact that 'as the resolution is upped, the frames go up' in favour of posts like this.

In the olden days, we at least used to have some intelligent discussions :(

In short, by any reasonable meaning of the phrase, Ashes is absolutely a real benchmark. We wouldn’t recommend taking these results as a guaranteed predictor of future DX12 performance between Red and Green — Windows 10 only just launched, the game is still in pre-beta, and AMD and Nvidia still have issues to iron out of their drivers. While Oxide strongly disputes that their MSAA is bugged for any meaningful definition of the word, they acknowledge that gamers may want to disable MSAA until both AMD and NV have had more time to work on their drivers. In deference to this view, our own benchmarks have been performed with MSAA both enabled and disabled.

http://www.extremetech.com/gaming/2...he-singularity-amd-and-nvidia-go-head-to-head
 