
Ashes of the Singularity Coming, with DX12 Benchmark in thread.

So AMD released a 'special' driver just for that review... why can't they be so proactive with other titles? :p

Maybe this is the beginning of something beautiful. We have been waiting four years for AMD's technology to finally bear fruit; fingers crossed that it will happen.

I will say one thing though... if this IS an indication of what will happen when a lot more DX12 games start to come out, then I can see Nvidia's Gameworks sponsorship going into overdrive, let alone tessellation levels.

Good on yer RTG :)
 
[attached image: IIA1q0A.png]
 

Why does Fury X + 980 Ti have better performance than 980 Ti + Fury X?
Is it that the primary card matters, or am I missing something?
This mixed multi-GPU is a beautiful thing, but come to think of it, there is something that will poison it. I will let you guess...
Gameworks titles
 
If there's this much difference between AMD/NV in image quality because they've switched something off on NV hardware, it makes the whole benchmark useless.

Skip to 1:14 to see what I mean

 
Thinking about this a little more, it would be interesting to have a breakdown of shaders/framerate for each card. If DX12 allows the full potential of cards to be unlocked, is it really that surprising that the Fury X with ~4000 shaders beats a card (980 Ti) with ~2800? That's 17.5% more FPS at 1440p with 45% more shaders.

EDIT: Alternatively you could look at cost (£) per frame. Looking at the cheapest Fury X and 980 Ti currently on here: 980 Ti £8.42 per frame at 1440p, Fury X £7.16 per frame at 1440p.
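The per-shader and cost-per-frame arithmetic is easy to sketch. The shader counts below are the real figures for the two cards (4096 vs 2816, matching the ~45% gap quoted), but the prices and frame rates are hypothetical placeholders for illustration, not the thread's measured numbers.

```python
def cost_per_frame(price_gbp: float, fps: float) -> float:
    """Pounds spent per unit of average FPS delivered."""
    return price_gbp / fps

def fps_per_shader(fps: float, shaders: int) -> float:
    """Average FPS per 1000 shader units, for cross-card comparison."""
    return fps / shaders * 1000

# Hypothetical price/FPS numbers, for illustration only.
fury_x = {"shaders": 4096, "price_gbp": 470.0, "fps": 65.6}
gtx_980ti = {"shaders": 2816, "price_gbp": 470.0, "fps": 55.8}

ratio_shaders = fury_x["shaders"] / gtx_980ti["shaders"]  # ~1.45, i.e. 45% more shaders
ratio_fps = fury_x["fps"] / gtx_980ti["fps"]              # ~1.18, i.e. ~17.5% more FPS
```

As the reply below notes, FPS-per-shader only tells you relative efficiency; the shader units themselves differ between the two architectures.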
 
Oh boy, someone grab the popcorn. This is about to get ugly :D

Disappointed with the multi-GPU scaling for both vendors.

The multi-GPU scaling is only that bad because in multi-GPU he is hitting a heavy CPU limit. This game is very CPU heavy; if it were less CPU bound in multi-GPU, the FPS would have been closer to double.

Just imagine the fury X in crossfire hitting 150 ish FPS :P
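The CPU-limit argument can be sketched as a simple model: a second GPU multiplies the GPU-side frame rate, but the result is clamped by however fast the CPU-side simulation can feed frames. All numbers here are hypothetical, chosen only to show the shape of the effect.

```python
def multi_gpu_fps(single_gpu_fps: float, num_gpus: int, cpu_fps_cap: float,
                  scaling_efficiency: float = 0.95) -> float:
    """Idealised multi-GPU frame rate, clamped by a CPU-side cap.

    scaling_efficiency models per-GPU overhead (AFR sync, etc.);
    cpu_fps_cap models the point where the game simulation, not
    the GPUs, limits how fast frames can be issued.
    """
    gpu_limit = single_gpu_fps * num_gpus * scaling_efficiency
    return min(gpu_limit, cpu_fps_cap)

# Hypothetical: one card at 70 fps, CPU able to feed at most 90 fps.
one_card = multi_gpu_fps(70.0, 1, 90.0)   # GPU-limited, under the cap
two_cards = multi_gpu_fps(70.0, 2, 90.0)  # hits the 90 fps CPU wall, not ~140
```

Under this model, apparent "bad scaling" is really the benchmark's simulation cost showing through: raise the CPU cap and the same two cards would scale to near double.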

Thinking about this a little more, it would be interesting to have a breakdown of shaders/framerate for each card. If DX12 allows the full potential of cards to be unlocked, is it really that surprising that the Fury X with ~4000 shaders beats a card (980 Ti) with ~2800? That's 17.5% more FPS at 1440p with 45% more shaders.

You also cannot compare the number of shaders between two very different architectures. The AMD ones are more than likely smaller individual units. You can work out relative performance, but that is all.
 
The multi-GPU scaling is only that bad because in multi-GPU he is hitting a heavy CPU limit. This game is very CPU heavy; if it were less CPU bound in multi-GPU, the FPS would have been closer to double...
Anandtech were running this on a recent six-core Intel CPU clocked at 4.2GHz. Given the rendering engine is making use of multiple command queues, I'm surprised it is hitting a CPU bottleneck this easily. It would have been interesting to see some pics of the CPU/thread utilisation. All maxed at 100%?
 
Anandtech were running this on a recent six-core Intel CPU clocked at 4.2GHz. Given the rendering engine is making use of multiple command queues, I'm surprised it is hitting a CPU bottleneck this easily. It would have been interesting to see some pics of the CPU/thread utilisation. All maxed at 100%?

People have shown it before. And since this is an RTS it is VERY CPU heavy. The benchmark is not just a graphical, fully scripted scene; it is running a full game simulation at the same time, so it is not just a GPU benchmark but a full system benchmark, with all of the AI and pathfinding running for every unit.

This was a major problem when the benchmark first came out: people did not understand what they were looking at. They just thought it was a pure GPU/graphical benchmark with a fully scripted scene, like in other games.
 
Yeah, RTS games require insane CPU power. Supreme Commander still bogs down any CPU out today once the unit count gets into the 1000s.
I'm playing a lot of Planetary Annihilation and it's the same. It starts off at a solid 120fps but can go as low as 10 on a large system with multiple armies.
 
Yeah, RTS games require insane CPU power. Supreme Commander still bogs down any CPU out today once the unit count gets into the 1000s.
I'm playing a lot of Planetary Annihilation and it's the same. It starts off at a solid 120fps but can go as low as 10 on a large system with multiple armies.

Quite a bit of that comes from driver overhead. In Ashes there are 4000 to 8000 units in a scene at any one time. There are far more cycles used for unit AI in Ashes compared to what Supreme Commander was putting out.

But SupCom would have been far better if it had been made with a DX10/11 renderer rather than DX9 at the time.
 
So AMD released a 'special' driver just for that review... why can't they be so proactive with other titles? :p

What special driver? The latest is the 16.1.1 hotfix, which mentions nothing about AoS. Unless there's some new version which is still not available?
 