• Competitor rules

    Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.

Ashes of the Singularity Coming, with DX12 Benchmark in thread.

So we are just accepting that DX11 should run faster than DX12 and that increasing the resolution actually increases FPS?
umkay

Yes, because if you try and explain the Nvidia discrepancy it means you're fanboying Nvidia.

Nvidia should be given criticism, but anyone taking this benchmark as "legit" isn't thinking with their objective hat on.

I see Orangey's getting in on the action.
 
This is the important paragraph for me. :)

Being fair to all the graphics vendors

Often we get asked about fairness, that is, usually in regards to treating Nvidia and AMD equally. Are we working more closely with one vendor than another? The answer is that we have an open access policy. Our goal is to make our game run as fast as possible on everyone’s machine, regardless of what hardware our players have.

To this end, we have made our source code available to Microsoft, Nvidia, AMD and Intel for over a year. We have received a huge amount of feedback. For example, when Nvidia noticed that a specific shader was taking a particularly long time on their hardware, they offered an optimized shader that made things faster which we integrated into our code.

We only have two requirements for implementing vendor optimizations: We require that it not be a loss for other hardware implementations, and we require that it doesn’t move the engine architecture backward (that is, we are not jeopardizing the future for the present).

I'd rather you just stick to the rules and not mention competitors.
 
Really?

First off, DX12 isn't meant to remove CPU overhead, it's meant to REDUCE it. This isn't a new concept, and the idea behind it is to then use the CPU time that's freed up.

I.e. if 1 million draw calls use 30% of the CPU on DX11 and that drops to 2% on DX12, you can either just take that saving or, as in the case of this game, spend it on dramatically more draw calls. So now 10 million draw calls might use 15-20% of the CPU, whereas under DX11 those 10 million calls would need 300% of your available CPU power, which is impossible.

The idea behind reducing overhead is to then reallocate that spare power. This is also an RTS, and RTSs in general have always been massively CPU-limited outside of the graphics side. So removing the DX11 overhead allows the FPS to jump, and changing resolution DOESN'T change the amount of CPU being used for the game simulation itself and all the AI involved.
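The draw-call arithmetic above can be sketched as a toy calculation. Note the 30%/2% overhead figures are the illustrative numbers from the post, not measured values:

```python
# Toy model of the draw-call overhead arithmetic (illustrative numbers only).

DX11_CPU_COST_PER_MILLION_CALLS = 0.30  # 30% of the CPU per 1M draw calls (hypothetical)
DX12_CPU_COST_PER_MILLION_CALLS = 0.02  # 2% of the CPU per 1M draw calls (hypothetical)

def cpu_load(draw_calls_millions: float, cost_per_million: float) -> float:
    """Fraction of total CPU time spent just submitting draw calls."""
    return draw_calls_millions * cost_per_million

# Same 1M calls: DX12 frees up ~28% of the CPU for game logic and AI.
print(cpu_load(1, DX11_CPU_COST_PER_MILLION_CALLS))   # 0.3
print(cpu_load(1, DX12_CPU_COST_PER_MILLION_CALLS))   # 0.02

# Or spend the savings on 10x the draw calls: ~20% of the CPU under DX12,
# versus an impossible 300% of the CPU under DX11.
print(cpu_load(10, DX12_CPU_COST_PER_MILLION_CALLS))  # 0.2
print(cpu_load(10, DX11_CPU_COST_PER_MILLION_CALLS))  # 3.0
```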

In many, many RTSs, performance doesn't scale well with resolution, if at all.



http://www.tomshardware.co.uk/overclocked-graphics-gpu-afterburner,review-31955-14.html

the 5870 is faster (by 0.1 fps) at 1920 than at 1650. Yes, old cards, but meh; Supreme Commander is the go-to RTS that I remember and the first result I found. This isn't uncommon or even remotely new.

There are other possibilities too: efficiency and power management. CPUs are designed to concentrate load onto fewer cores because it saves power. It could be a quirk where, at 1080p, the driver overhead is small enough that it can work on two cores and power down the other two, while at 4K the driver spreads its CPU load across all four cores, which actually frees up a few percent of spare power for the main game thread, giving a boost to the game threads rather than the driver threads.

So much waffle so little knowledge.

On a single GPU I have not found an RTS that does not scale with resolution, and I have played quite a few. Then again, that was on DX11; if this is not the case with DX12, there must be something wrong with the API.

What I would recommend is that you actually use some of the hardware you are talking about and also do some benching, as you don't seem to know that good, well-written benchmarks like Heaven, Fire Strike, Vantage etc. do scale with resolution. You only start seeing this break down with multi-GPU setups or tiny resolutions.

Before people draw conclusions about DX12 we need to see results on some reliable benches and games. Also I think both the AMD and NVidia results look wrong so it is not a brand thing.

Without getting wrapped up in the differences, the bench is a strange one: in all my years of benching, the numbers don't go up as the resolution goes up. That is a basic fact.

+1

More pixels = more work = lower FPS regardless of what API or bench you are using.:)
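As a rough sanity check of the "more pixels = more work" argument, the per-frame pixel counts are pure arithmetic (this only covers GPU-side shading work; CPU-side work per frame is unchanged by resolution):

```python
# Pixel counts per frame at two common resolutions.
pixels_1080p = 1920 * 1080   # 2,073,600 pixels
pixels_4k    = 3840 * 2160   # 8,294,400 pixels

# 4K pushes exactly 4x the pixels of 1080p, so per-frame GPU shading
# work roughly quadruples while the CPU-side simulation cost stays flat.
print(pixels_4k / pixels_1080p)  # 4.0
```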
 
Ah so.. tracked down the game and I'd like to buy me that lifetime membership, but I can't buy anything as all my gaming money is on steam and it looks like I can only buy the alpha on Stardock's site.

I would love to be wrong about this, so feel free to correct me.. anyone.. anyone at all..?

sadface.com >>>:(<<<<
 
Shame we can't all agree with each other. DX12 is good for everyone.

Spot on. Whilst AOTS is a CPU bench and shows how DX12 helps with the CPU overheads, it isn't a benchmark that I would ever use as a basis for GPU benching. Get something like Heaven/3DMark and some games and then we will have a clearer indication of what we can fanboy over :D
 
Watching the Digital Foundry run on the 390, it does look great for AMD mind and a very nice improvement from 11 to 12 :cool:

DX12 could be very good indeed but people must see it for what it is rather than what they would like it to be.

I am sure both Unigine and Futuremark will be releasing DX12 benchmarks that are very polished and well balanced, when this happens we will see what the new API really can do.:)
 
Spot on. Whilst AOTS is a CPU bench and shows how DX12 helps with the CPU overheads, it isn't a benchmark that I would ever use as a basis for GPU benching. Get something like Heaven/3DMark and some games and then we will have a clearer indication of what we can fanboy over :D

It looks great for AMD so far. Nvidia drivers do seem like they need a bit of work. Both still have work to do, AMD could easily improve their DX11 CPU overhead further like they've been doing for the past few months.
 
Spot on. Whilst AOTS is a CPU bench and shows how DX12 helps with the CPU overheads, it isn't a benchmark that I would ever use as a basis for GPU benching. Get something like Heaven/3DMark and some games and then we will have a clearer indication of what we can fanboy over :D

What is strange is I have not seen a single bench for AOTS using a 295X2 as this is the fastest most demanding card available and puts a lot of demands on the system with 2 GPUs using the same PCI-E slot.
 
Spot on. Whilst AOTS is a CPU bench and shows how DX12 helps with the CPU overheads, it isn't a benchmark that I would ever use as a basis for GPU benching. Get something like Heaven/3DMark and some games and then we will have a clearer indication of what we can fanboy over :D

All the performance increases will be in RTSs and MMOs really; the improvements we will get in existing FPSs and other games will probably be small, but we could push those games in ways we never could before. AOTS, alpha status aside, should pretty much be the absolute pinnacle of what DX12 and the other low-level APIs can do.
 
What is strange is I have not seen a single bench for AOTS using a 295X2 as this is the fastest most demanding card available and puts a lot of demands on the system with 2 GPUs using the same PCI-E slot.

No multi-GPU support as yet, which is a bit odd as they really worked that into DX12 from what I've been reading.
 
DX12 could be very good indeed but people must see it for what it is rather than what they would like it to be.

I am sure both Unigine and Futuremark will be releasing DX12 benchmarks that are very polished and well balanced, when this happens we will see what the new API really can do.:)

Agreed Kaap. I am keen as mustard to see some DX12 bench runs and I would really enjoy seeing some decent games come of it.

It looks great for AMD so far. Nvidia drivers do seem like they need a bit of work. Both still have work to do, AMD could easily improve their DX11 CPU overhead further like they've been doing for the past few months.

I remember the Star Swarm bench thread going pear-shaped, and even I threw the toys out of the pram a few times... Stardock don't do normal bench tests it seems, so whilst I can see a nice gain in the DF video for AMD, I will reserve judgement for another bench that is more GPU and CPU demanding.
 
There need to be more i5 benchmarks really, since it's a very popular CPU. Either way, these graphics card wars need to die down; we should not be arguing about which GPU does better. Instead we should be asking how much performance we gain from removing the "CPU bottleneck", not the GPU bottleneck, and that would involve comparing different CPUs with the same GPU.

Looking at that DX12 980 Ti review compared to DX11, it seems to get about one frame per second less, so basically no improvement at all. It is weird, but at the same time the review specifically mentioned GPU testing, not CPU, and the test bench used a very high-end Intel CPU; Mantle has shown that high-end CPUs do not benefit much.
 
No multi-GPU support as yet, which is a bit odd as they really worked that into DX12 from what I've been reading.

That is not how DX12 works at all. Developers have to write multi-GPU support into their rendering engine themselves. What DX12 did is allow developers to do this in the engine, which is more efficient than the current method of bolting it on top of DX11.

Spot on. Whilst AOTS is a CPU bench and shows how DX12 helps with the CPU overheads, it isn't a benchmark that I would ever use as a basis for GPU benching. Get something like Heaven/3DMark and some games and then we will have a clearer indication of what we can fanboy over :D

It can also be used as a GPU benchmark. If you read around some of the info and look at a few benchmark screencaps, the benchmark itself reports multiple metrics; one of them is the FPS that the CPU is outputting for a given scene.

So it essentially shows you whether you are GPU-bound or not. Not every site showed these screencaps, though; PC Per has a lot of good info, but it would have been good to get a screen showing what is going on with the i3 and 8370. It is more than likely CPU-bound.

And in the screenshots I have seen, the Titan X is GPU-bound in the demo; even the Fury X, 980/Ti and 390 are GPU-bound. But this is when they are all run on a 5960X.

This is what I mean: the top CPU FPS metric is the performance the CPU could output if you had an infinitely fast GPU to render the CPU's output.

[Attached screenshot: benchmark results panel showing the CPU framerate metric]

So the above picture is essentially showing us that if the game had an infinitely fast GPU to render to, it would be getting 95 FPS average for that CPU.
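The "CPU framerate" idea can be expressed as a simple bottleneck model: each frame must pass through both the CPU (simulation plus submission) and the GPU (rendering), so the slower stage sets the pace. This is a sketch of the concept; the parameter names are assumptions, not the benchmark's actual output fields:

```python
def effective_fps(cpu_fps: float, gpu_fps: float) -> float:
    """Each frame passes through both stages; the slower one caps the framerate."""
    return min(cpu_fps, gpu_fps)

# CPU could feed 95 FPS but the GPU only renders 60: you see 60 (GPU-bound).
print(effective_fps(95, 60))               # 60
# With an "infinitely fast" GPU, the CPU figure becomes the ceiling.
print(effective_fps(95, float("inf")))     # 95
```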

Agreed Kaap. I am keen as mustard to see some DX12 bench runs and I would really enjoy seeing some decent games come of it.

I remember the Star Swarm bench thread going pear-shaped, and even I threw the toys out of the pram a few times... Stardock don't do normal bench tests it seems, so whilst I can see a nice gain in the DF video for AMD, I will reserve judgement for another bench that is more GPU and CPU demanding.

Star Swarm was still a good bench and close to what a game is doing. But this Ashes benchmark runs a full game simulation: although the movement of the bots and their attack patterns is scripted, they are still performing normal AI routines for firing patterns and pathfinding, and it has full sound etc.
 