Ashes of the Singularity Coming, with DX12 Benchmark in thread.

DX12 is in its absolute infancy. We have no idea how it is going to evolve over its life cycle.

Remember that the way game designers design games is going to change too. No doubt Nvidia, with their bigger budget, more engineers and closer relations with game devs, will benefit again just as they did during the DX11 era.

Just saying it's not quite as easy as it was with DX11. DX11 alone had a large amount of CPU overhead, which needed driver tweaks by Nvidia and eventually AMD.

DX12 was designed to have far lower CPU overhead; there's only so much you can squeeze out of an API that is already far more efficient than DX11. Nvidia cannot pull a rabbit out of a hat on that one. DX12 relies on the dev to do the optimisation for the most part.
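
To put "the dev does the optimisation" into concrete terms, here's a minimal C++ sketch of the kind of work DX12 hands to the application: recording and submitting your own command lists. The device, queue, allocator and pipeline state are assumed to exist already, and the names are illustrative rather than taken from any real engine.

[CODE]
// Minimal D3D12 sketch: the application records and submits its own command
// lists; under DX11 the driver built most of this work behind the scenes.
// Assumes device, queue, allocator and pipelineState were created earlier;
// error handling and fencing are omitted for brevity.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void RecordAndSubmit(ID3D12Device* device,
                     ID3D12CommandQueue* queue,
                     ID3D12CommandAllocator* allocator,
                     ID3D12PipelineState* pipelineState)
{
    ComPtr<ID3D12GraphicsCommandList> cmdList;

    // Command-list creation and reuse are owned by the app, not the driver.
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                              allocator, pipelineState,
                              IID_PPV_ARGS(&cmdList));

    // ... record state changes and draws here, e.g.
    // cmdList->DrawInstanced(vertexCount, 1, 0, 0);

    cmdList->Close();  // the app decides when recording ends

    // Submission is explicit and cheap: no hidden driver worker thread.
    ID3D12CommandList* lists[] = { cmdList.Get() };
    queue->ExecuteCommandLists(1, lists);

    // Synchronisation (fences) is also the application's job.
}
[/CODE]

The point being: there's very little driver left in that path for either vendor to "optimise" behind your back; most of the frame cost now sits in code the developer writes.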

As you said, it is early days; if any tweaks are made to DX12, they'll be done to the benefit of both vendors as Microsoft optimises the API further. Both vendors seem very keen on DX12, and it's a good step forward for gaming.
 
Maybe with DX12 AMD has indeed caught up in the driver department. In that case we should see performance that reflects the actual hardware rather than driver tweaks.

Nvidia may or may not get more performance from their drivers, but I have a feeling the lead they always had with DX11 has narrowed significantly in DX12.
 
Did anyone else read opeth's post as if he was vehemently agreeing with his other post? :D

Yup.:D

^

This man knows what he's talking about.

Just found this review from ExtremeTech.

At first glance, these results may not seem impressive. The magnitude of AMD’s improvement from DX11 to DX12 is undercut by Nvidia’s stellar DX11 performance. The Fury X beats or ties Nvidia in both our benchmarks, and that’s definitely significant for AMD, considering that the Fury X normally lags the GTX 980 Ti, but Microsoft didn’t sell DirectX 12 as offering incremental, evolutionary performance improvements. Is the API a wash?

We don’t think so, but demonstrating why that’s the case will require more testing with lower-end CPUs and perhaps some power consumption profiling comparing DX11 to DX12. We expect DirectX 12 to deliver higher performance than anything DirectX 11 can match in the long run. It’s not just an API – it’s the beginning of a fundamental change within the GPU and gaming industry.

Consider Nvidia. One of the fundamental differences between Nvidia and AMD is that Nvidia has a far more hands-on approach to game development. Nvidia often dedicates engineering resources and personnel to improving performance in specific titles. In many cases, this includes embedding engineers on-site, where they work with the developer directly for weeks or months. Features like multi-GPU support, for instance, require specific support from the IHV (Independent Hardware Vendor). Because DirectX 11 is a high-level API that doesn't map cleanly to any single GPU architecture, there's a great deal that Nvidia can do to optimize its performance from within its own drivers. That's even before we get to GameWorks, which licenses GeForce-optimized libraries for direct integration as middleware (GameWorks, as a program, will continue and expand under DirectX 12).

DirectX 12, in contrast, gives the developer far more control over how resources are used and allocated. It offers vastly superior tools for monitoring CPU and GPU workloads, and allows for fine-tuning in ways that were simply impossible under DX11. It also puts Nvidia at a relative disadvantage. For a decade or more, Nvidia has done enormous amounts of work to improve performance in-driver. DirectX 12 makes much of that work obsolete. That doesn’t mean Nvidia won’t work with developers to improve performance or that the company can’t optimize its drivers for DX12, but the very nature of DirectX 12 precludes certain kinds of optimization and requires different techniques.

AMD, meanwhile, faces a different set of challenges. The company’s GPUs look much better under D3D 12 precisely because it doesn’t require Team Red to perform enormous, game-specific optimizations. AMD shouldn’t assume, however, that rapid uptake of Windows 10 will translate into being able to walk away from DirectX 11 performance. DirectX 12 may be ramping up, but Ashes of the Singularity and possibly Fable Legends are the only near-term DX12 launches, and neither is in finished form just yet. DX11 and even DX9 are going to remain important for years to come, and AMD needs to balance its admittedly limited pool of resources between encouraging DX12 adoption and ensuring that gamers who don’t have Windows 10 don’t end up left in the cold.

As things stand right now, AMD showcases the kind of performance that DirectX 12 can deliver over DirectX 11, and Nvidia offers more consistent performance between the two APIs. Nvidia’s strong performance in DX11, however, is overshadowed by negative scaling in DirectX 12 and the complete non-existence of any MSAA bug. Given this, it’s hard not to think that Nvidia’s strenuous objections to Ashes had more to do with its decision to focus on DX11 performance over DX12 or its hardware’s lackluster performance when running in that API.
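
To make the "far more control over how resources are used and allocated" point concrete, here's a rough sketch (mine, not from the article) of what explicit resource creation looks like under D3D12: the application picks the heap type and owns the state transitions that DX11 drivers used to manage for you.

[CODE]
// Illustrative sketch only: in D3D12 the application chooses where a resource
// lives (upload, default or readback heap) and manages its state transitions,
// decisions DX11 left to the driver. Error handling omitted.
#include <d3d12.h>
#include "d3dx12.h"  // CD3DX12_* helpers shipped with the official D3D12 samples

void CreateUploadBuffer(ID3D12Device* device, UINT64 sizeInBytes,
                        ID3D12Resource** outBuffer)
{
    // The app picks the heap type: UPLOAD for CPU-written data,
    // DEFAULT for GPU-local memory, READBACK for GPU-to-CPU copies.
    CD3DX12_HEAP_PROPERTIES heapProps(D3D12_HEAP_TYPE_UPLOAD);
    CD3DX12_RESOURCE_DESC   desc = CD3DX12_RESOURCE_DESC::Buffer(sizeInBytes);

    device->CreateCommittedResource(&heapProps,
                                    D3D12_HEAP_FLAG_NONE,
                                    &desc,
                                    D3D12_RESOURCE_STATE_GENERIC_READ,
                                    nullptr,  // no optimised clear value for buffers
                                    IID_PPV_ARGS(outBuffer));

    // From here on, resource barriers and residency are the app's problem too.
}
[/CODE]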

100% agree with Joel Hruska's conclusion opeth, thanks mate.:)

On another note:

Any mods able to step in here and help put a stop to the hating in here, please:

In the olden days, we at least used to have some intelligent discussions :(

^
This. As greg said, the discussions were better than they are now; you can't have a discussion any more, only fights and snipes @ OcUK. :o

I've not been posting in here much due to the relentless arguments. Please sort it out; this used to be a great place to come :( and we really need it back. :)

The 3 strike rule launched and it was great, everyone got on and it was good again. Is it still enforced, or could you reinstate it?
 
This is a BENCHMARK, based on the game. It is not THE GAME. It is meant to stress the machine - both CPU & GPU, or one or the other depending on configuration. It's meant to expose bottlenecks and show limitations.

It's running on the game engine, running actual game code, e.g. AI. Why is freeing the CPU from graphics a big deal for an RTS game? Because the CPU then has more time to run the rest of the game engine, reducing the slowdown when, say, hundreds of units are on screen. Not sure why you think "THE GAME" will not max the CPU?
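
As a toy illustration of that frame-budget argument (everything below is made up for the example, not taken from the Ashes engine), the structure an explicit API allows looks roughly like this: draw recording spread over worker threads, while the main thread spends its time on simulation.

[CODE]
// Toy sketch of why cheaper, parallel draw submission matters to an RTS frame.
// All function names and numbers below are invented for illustration.
#include <thread>
#include <vector>

void UpdateUnitAI(int unit)         { /* pathfinding, targeting, ... */ }
void RecordDrawsForChunk(int chunk) { /* record one command list for a slice of the scene */ }
void SubmitFrame()                  { /* ExecuteCommandLists + Present */ }

int main()
{
    const int unitCount  = 1000;  // "hundreds of units on screen"
    const int chunkCount = 4;     // one command list per worker thread

    for (int frame = 0; frame < 3; ++frame)
    {
        // Under DX12, draw recording can be spread across worker threads
        // instead of funnelling through a single driver-owned thread...
        std::vector<std::thread> workers;
        for (int c = 0; c < chunkCount; ++c)
            workers.emplace_back(RecordDrawsForChunk, c);

        // ...which leaves the main thread's budget for the part of an RTS
        // that actually scales with unit count: the simulation itself.
        for (int u = 0; u < unitCount; ++u)
            UpdateUnitAI(u);

        for (auto& w : workers) w.join();
        SubmitFrame();
    }
}
[/CODE]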


Hopefully we'll see the same for CPUs, since I remain totally unconvinced by HT ... I have it disabled on my 3770k unless I know I'm going to be doing something that will significantly benefit from it, because I hate the stutter it often causes in normal desktop use and gaming.

2002 called - it wants its Pentium 4 Hyper-Threading complaints back.

If you are having stuttering on your desktop then it's something you have changed. Issues with HT, even in games, are few and far between; Windows 7 onwards does a much better job of scheduling threads to avoid running on a logical core when a physical core is available. If you are still doing anything from the old days, such as manually setting affinity, parking cores or changing priorities, then that is where things start to trip up.
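
If anyone wants to see what the scheduler is actually working with, here's a small Windows-only sketch (a standard Win32 call, nothing exotic) that counts physical cores versus logical processors; more than one logical processor per core just means HT/SMT is active on it.

[CODE]
// Minimal Windows sketch: count physical cores vs logical processors, i.e. the
// topology the scheduler uses when it prefers an idle physical core over a
// hyper-threaded sibling. Error handling kept minimal.
#include <windows.h>
#include <cstdio>
#include <vector>

int main()
{
    DWORD len = 0;
    GetLogicalProcessorInformation(nullptr, &len);  // first call just reports the size needed

    std::vector<SYSTEM_LOGICAL_PROCESSOR_INFORMATION> info(
        len / sizeof(SYSTEM_LOGICAL_PROCESSOR_INFORMATION));
    GetLogicalProcessorInformation(info.data(), &len);

    int physicalCores = 0, logicalProcessors = 0;
    for (const auto& entry : info)
    {
        if (entry.Relationship == RelationProcessorCore)
        {
            ++physicalCores;
            // Each set bit in the mask is one logical processor on this core;
            // two or more bits means HT/SMT is enabled on it.
            for (ULONG_PTR mask = entry.ProcessorMask; mask != 0; mask &= mask - 1)
                ++logicalProcessors;
        }
    }

    std::printf("%d physical cores, %d logical processors\n",
                physicalCores, logicalProcessors);
    return 0;
}
[/CODE]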
 
The DX11 performance of the AMD cards in the bench is shocking, and it's nothing to do with the hardware; this is an epic and arrogant fail by the game devs.

The way I see it, the game's DX11 mode was just a placeholder until they got the DX12 mode sorted. If a GPU isn't DX12 compatible then it isn't going to be good enough to run this game anyway, so the devs don't need to focus resources on it.
 
Nvidia would've been better off just saying nothing rather than releasing this crappy PR statement, which does nothing other than make them look stupid.

Do you not think there is a possibility they were happy to look stupid, as most forums are now talking about Nvidia and DX12 rather than about AMD?

The thing with Nvidia is they don't do anything without reason, and that reason is usually self-promotion; for me this is just another example. We all know that most issues will have been ironed out before release anyway; this was just a way to gain publicity.

Even on the off chance it was a slip-up because AMD are ahead in terms of DX12, they've still come out on top with more people talking about them. I say well played, Nvidia, well played.
 
An RTS game like this will be far more shader intensive than geometry intensive. Fury X is a shading monster, there is no denying that, but it's relatively pants when it comes to the geometry/tessellation side of things. Add to that, we don't know what level of tessellation AMD are running, as they don't actually adhere to what the application asks for.
 
Do you not think there is a possibility they were happy to look stupid, as most forums are now talking about Nvidia and DX12 rather than about AMD?

The thing with Nvidia is they don't do anything without reason, and that reason is usually self-promotion; for me this is just another example. We all know that most issues will have been ironed out before release anyway; this was just a way to gain publicity.

Even on the off chance it was a slip-up because AMD are ahead in terms of DX12, they've still come out on top with more people talking about them. I say well played, Nvidia, well played.

Not so much when it's bad PR.

DX12 is great for everyone though; they should probably just focus on that. Shame they can't work on DX12 without releasing some crap PR statement alienating the developer; I mean, that looks so good for Nvidia. A large amount of their users will probably believe whatever stuff Nvidia farts out in a PR statement.

Before anyone decides to flame me, I should add that AMD are no better with releasing crappy PR statements. At least Nvidia doesn't have Roy Taylor. :p
 
This. As greg said, the discussions were better than they are now; you can't have a discussion any more, only fights and snipes @ OcUK. :o

I've not been posting in here much due to the relentless arguments. Please sort it out; this used to be a great place to come :( and we really need it back. :)

The 3 strike rule launched and it was great, everyone got on and it was good again. Is it still enforced, or could you reinstate it?

This.

This sub is an absolute joke; any semi-intelligent or genuine reply/question is met with childish dismissal.

Sod the three-strike rule; a few in this sub need perma'd.
 
It's running on the game engine, running actual game code, e.g. AI. Why is freeing the CPU from graphics a big deal for an RTS game? Because the CPU then has more time to run the rest of the game engine, reducing the slowdown when, say, hundreds of units are on screen. Not sure why you think "THE GAME" will not max the CPU?




2002 called - it wants its Pentium 4 Hyper-Threading complaints back.

If you are having stuttering on your desktop then it's something you have changed. Issues with HT, even in games, are few and far between; Windows 7 onwards does a much better job of scheduling threads to avoid running on a logical core when a physical core is available. If you are still doing anything from the old days, such as manually setting affinity, parking cores or changing priorities, then that is where things start to trip up.

Do you know what a stress test is? It seems not.

Also, yes, there are CONSTANT issues with HT even now, and no, I don't. The results I mentioned re: FX8370 vs. 5960X have been reproduced by others ... and the problem is ameliorated by disabling HT. The stutter and freezing in desktop use (in my use), and the frame drops in games, disappear when HT is disabled, except under highly threaded workloads where it can benefit. This is something I have never encountered on the FX6xxx and FX8xxx chips I've built for others, ever, and have encountered on every single HT-enabled Intel CPU I've ever used (never used a Skylake or Haswell-E, but at least the latter certainly still exhibits this behaviour).
 
The vocal few, as they say.

But one other reason I just thought of for why the AMD and Nvidia hardware are neck and neck: the game is not employing tessellation, or there are only low amounts of tessellation by default, considering it is an RTS game.

So it shows that the hardware is near matched in terms of performance.

Unless DX12 also improves tessellation throughput on AMD hardware.

But who knows, the DX12 results are awesome for an RTS of this scale.
 
One game I'd love to see with DX12 is Crysis 3. That would be amazing.

I would love to see an MMO get low-abstraction API support; I can see WoW getting it fairly quickly when Vulkan drops, seeing that Blizzard are backers of it.

Another game would be Planetside 2. It would be such a better game, even if it was just bumped to DX11. Still stuck on DX9, it takes the ****. They should drop it now, considering they have gone 64-bit only, so there's no XP support at all (XP 64-bit doesn't count).
 
So the bottom line of this benchmark is that the 980 and the 390X are pretty much tied, give or take a frame or two (according to the PCPer article).

11 pages of drivel and argument to get to that point, which of course we pretty much knew even before this benchmark was released.

We know the pecking order of the current cards; anyone who thinks that DirectX 12 is going to make any meaningful amount of difference is just deluding themselves. Yes, of course, one side will be better in some games and the other in other games, but the overall pecking order of cards won't change at all.
 
This.

This sub is an absolute joke; any semi-intelligent or genuine reply/question is met with childish dismissal.

Sod the three-strike rule; a few in this sub need perma'd.

This thread was always going to be met with a garrison given the content, so I chose not to post. Especially given that a heavy talking point is the now exceedingly obvious overhead issues that have held back AMD for years. Like anything with marketing undertones, though, people should take all results with a pinch of salt until both the game is available and the driver optimisation has settled.
 