
Ashes of the Singularity Coming, with DX12 Benchmark in thread.

Or the benchmark is broken. Considering that the whole "Kepler lost performance" thing was widely disproven as an AMD fanboy myth, AND the Maxwell architecture is proven to be 1.5-2.0x more efficient than Kepler, I really wouldn't pay any attention to this so-called 'benchmark'.

Why do some people find it so hard to understand that Nvidia made a massive increase in performance per watt with Maxwell, when all the facts are there to see in plain daylight?

Thanks, but I'll stick with option one: Nvidia held back the 780.
 

I don't agree with D.P. on much, but on the Kepler vs Maxwell thing I do. Maxwell is just a better architecture, with better draw call efficiency. That is why a GTX 970 is a match for a GTX 780 Ti.

Another reason given for Kepler being held back is that the R9 290 is now a match for the 780 Ti, when before even the 290X was slightly behind. AMD have just upped their game with drivers released just after Maxwell v2.
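To put rough numbers on that claim, here is a minimal back-of-the-envelope sketch comparing theoretical FP32 throughput of the two cards (reference boost clocks and shader counts from the public spec sheets; treat the output as a ballpark, not a measurement):

```python
# Rough theoretical FP32 throughput: 2 FLOPs (one FMA) per shader per clock.
# Reference boost clocks; real cards vary with factory OC and GPU Boost.
cards = {
    "GTX 780 Ti (Kepler)": {"shaders": 2880, "clock_mhz": 928},
    "GTX 970 (Maxwell)":   {"shaders": 1664, "clock_mhz": 1178},
}

for name, spec in cards.items():
    tflops = 2 * spec["shaders"] * spec["clock_mhz"] * 1e6 / 1e12
    print(f"{name}: ~{tflops:.2f} TFLOPS")

# GTX 780 Ti: ~5.35 TFLOPS, GTX 970: ~3.92 TFLOPS. If the two trade blows
# in games despite that gap, Maxwell is extracting noticeably more gaming
# performance per theoretical FLOP than Kepler.
```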
 
Do people forget that Kepler is a compute card for Nvidia? Thus it opens up with DX12, since that API can take advantage of theoretical performance more easily.
And the people downplaying this game engine and benchmark simply have no clue what that benchmark puts out in its results. Oxide have their own way of benchmarking; it is different from the simple FPS output of other benchmarks. It was similar with the Star Swarm benchmark/demo, which everyone misunderstood completely.
I will take the utmost pleasure in watching Nvidians try to come up with more excuses when other DX12 titles come out, if they have the same outcome as AotS.
At the moment the excuses are:
1) it's only one game
2) it's a bad benchmark (because we have no clue what it benchmarks)
3) it's still in alpha
4) this gen of cards is irrelevant for DX12, and everyone is rich enough to upgrade their cards every year.
 

Pretty much. One thing is for sure: the Maxwell cards will not receive the same 180% increase that some older cards got. Where is the extra power coming from, exactly? Especially since Maxwell cards are all so much more efficient, I'm told...
 
No, Maxwell is a marvel of an architecture, so to speak, since it is designed for one thing and one thing only: gaming. It is poor at any sophisticated compute task. Thus you get a good power-to-performance ratio, since the chip is not cluttered with stuff that is useless for gaming. And as that guy on overclock.net said, it is perfectly fit for serial tasks. That's why Maxwell excels in Folding@home projects with small numbers of atoms, while AMD cards love projects with huge numbers of atoms.
Nvidia did a great job optimising their DX11 drivers for Maxwell, but since drivers are less influential under DX12, Nvidia cannot do much more magic. Obviously they can still work on making their drivers better (like at least working on Win 10, for a change).
I do understand that this is just one game, and an early one at that, but the performance is frikkin logical:
DX12 is closer to the metal, and AMD has much better metal (theoretical performance) than Nvidia's cards. So it is no surprise that DX12 brings out more from AMD, because there is more raw performance in AMD cards than in Nvidia's.
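The "better metal" point is easy to sanity-check with the same back-of-the-envelope FP32 model as above; again, these are reference spec-sheet numbers, not measured results:

```python
# FP32 TFLOPS = 2 FLOPs (one FMA) * shaders * clock.
cards = {
    "R9 Fury X (GCN)":      {"shaders": 4096, "clock_mhz": 1050},
    "GTX 980 Ti (Maxwell)": {"shaders": 2816, "clock_mhz": 1075},  # boost clock
}

for name, spec in cards.items():
    tflops = 2 * spec["shaders"] * spec["clock_mhz"] * 1e6 / 1e12
    print(f"{name}: ~{tflops:.2f} TFLOPS")

# Fury X: ~8.60 TFLOPS vs 980 Ti: ~6.05 TFLOPS. The two are roughly even
# in DX11 games, so if DX12 gets closer to that theoretical ceiling, the
# GCN card has more untapped headroom to expose.
```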
 

The 770 never got a 180% increase in the end. Wccftech used the wrong benchmark for those results. If you go on their page you will see in their screenshots that they chose the "CPU" benchmark instead of the "Full System" benchmark.

Other websites benchmarked the 770 properly, and it came out beneath the 780 Ti, where it should be.
 

You're right, I never noticed that.

Oh well, I take back what I said about 780s still being held back at driver level :D
 
Not sure if anyone has mentioned this in here, but AMD's performance in this DX12 game seems to mirror their 3DMark API Overhead test results. In DX11 AMD's draw call throughput was virtually half of Nvidia's, but in DX12/Mantle the results were about the same for both.

We will see if this is consistent once more DX12 tests come out.
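For anyone unfamiliar with that 3DMark test: it keeps issuing more tiny draw calls per frame until the frame rate drops below a floor, then reports draw calls per second. A rough, API-agnostic sketch of the methodology (submit_draw_call is a hypothetical stand-in for the real D3D11/D3D12 submission path, which is where the per-call driver overhead being measured actually lives):

```python
import time

def submit_draw_call():
    """Hypothetical stand-in for one tiny D3D11/D3D12 draw submission."""
    pass  # in the real test, per-call API/driver overhead is paid here

def draw_call_throughput(fps_floor=30.0, calls_per_frame=1000):
    """Escalate draw calls per frame until FPS drops below fps_floor,
    then report draw calls per second (the 3DMark-style score)."""
    while True:
        t0 = time.perf_counter()
        for _ in range(calls_per_frame):
            submit_draw_call()
        frame_time = time.perf_counter() - t0
        fps = 1.0 / frame_time if frame_time > 0 else float("inf")
        if fps < fps_floor:
            return calls_per_frame * fps
        calls_per_frame = int(calls_per_frame * 1.2)  # ramp up the load

# A low-overhead API (DX12/Mantle) sustains far more calls per frame
# before the CPU drags FPS under the floor, so it scores much higher.
print(f"~{draw_call_throughput():,.0f} draw calls/sec (stub workload)")
```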
 
Oh my, I was sure I could feed my Fury X in Ashes for 100% GPU usage. But no way with my old 3770K. Just look how it struggles under load.

Afterburner shows minimum GPU usage at 90%, so I've got to say that my CPU and GPU are quite evenly matched in Ashes in general. I mostly get CPU-bound in normal scenarios, where the GPU would like to go 80 fps but the CPU can only feed it 75 (in those situations all 8 threads of the CPU are running at 100% usage).

Clocks for this run: [email protected], Fury X @ 1100/550MHz

Now, if I had a more modern i7, like a 5770K or the new Skylake, I'm sure with all that improved IPC I would be able to keep it at 100% load all the time with these settings. Guess I need to upgrade that side next :)
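As a quick sanity check on those numbers: if the CPU can only prepare frames at 75 fps while the GPU could render at 80, the expected GPU utilisation is roughly the ratio of the two. A simplification that ignores buffered frames and pacing, using the figures from the post above:

```python
# Simplified model: a CPU-bound GPU idles for the slice of each frame
# it spends waiting on the CPU. Figures taken from the post above.
gpu_capable_fps = 80.0   # what the GPU could render if never starved
cpu_limited_fps = 75.0   # what the CPU can actually feed it

print(f"Expected GPU usage: ~{cpu_limited_fps / gpu_capable_fps:.0%}")  # ~94%

# Close to Afterburner's 90%+ minimum reading: the Fury X is only lightly
# starved, so the CPU and GPU really are quite evenly matched here.
```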
 

I wouldn't look at it like that, mate. It's got a lot of CPU-intensive stuff going on. I'm sure without that stuff it wouldn't struggle feeding your GPU at all; it's the other stuff that's making it struggle! Your CPU is fine bro :D
 

I know it's fine. Ashes will most likely be the most CPU-intense game coming out for a few years. But it does feel quite funny to see 100% CPU usage on an i7 and still be CPU-'necked :)

I wish I had quicker RAM to test the difference between DDR3-1600 and faster modules.
 
So, people who have bought it: what do you think of the game?

Only played a couple of maps so far. Both ended with me alt-tabbing out of the game and crashing that way :D

The game itself looks quite promising. Those who like Supreme Commander will surely like this as well. Btw, the benchmark shows one map with many units, but there are actually bigger maps in the game, with space for a lot more units :)
 
The 9 series is probably going to be crap for DX12 games, who cares?

The GTX 480 is a DX11 card, who uses that?

Speak for yourself. Many of us are still on Fermi. Like it or not, deal with it.

Edit: in fact, I got a pretty good result in DX11. Well, no chance to try DX12 yet since no driver has been released.
 
No performance improvements in DX12 at all for me in AotS.

[Screenshots: AotS benchmark results on Catalyst 15.7 (aots2.png) vs 15.8 (aotsb.png)]
 