
Ashes of the Singularity Coming, with DX12 Benchmark in thread.

In fact, it doesn't. It's only lowly forum members that say that.

If Nvidia have had that code for months like Oxide say, and have had updates to the code to compile on a daily basis like Oxide say, then I'm pretty sure that Oxide have something they need to look at. Oxide are a two-bit dev heavily involved with AMD, so, like AMD, they will say anything to have a go at Nvidia. In fact Oxide have said themselves that there are issues with their MSAA code.

If it's a bug in the MSAA code of the game then it affects AMD too... Don't know where you got the idea that somehow it's rigged in AMD's favour.

Glad you think I'm a "lowly forum member"; more than can be said for you. You lost the moral high ground by making that insult.
 
They never said the problem was with their MSAA code; they said the MSAA path in the current DX12 drivers is broken/suboptimal. Hence Nvidia and everyone else need to fix it in their drivers. And they even offered to implement a workaround by using a DX11-type MSAA shader instead.

Oxide believes it has identified some of the issues with MSAA and is working to implement workarounds on our code.
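
For anyone unsure what that workaround actually means: an MSAA resolve is conceptually just averaging each pixel's sub-samples, and a "DX11-type MSAA shader" does that resolve in the developer's own shader code rather than relying on the driver's resolve path. A minimal sketch of the idea in plain C++ (illustrative names only, nothing to do with Oxide's actual code):

```cpp
#include <array>

// Toy multisample resolve. With 4x MSAA each pixel stores four
// sub-samples; the resolve step averages them into one output colour.
// A "shader resolve" workaround does exactly this per pixel in the
// developer's own shader instead of the driver's resolve path.
constexpr int kSamples = 4;

struct Color { float r, g, b; };

Color resolvePixel(const std::array<Color, kSamples>& samples) {
    Color out{0.0f, 0.0f, 0.0f};
    for (const Color& s : samples) {
        out.r += s.r; out.g += s.g; out.b += s.b;
    }
    out.r /= kSamples; out.g /= kSamples; out.b /= kSamples;
    return out;
}

int main() {
    // One pixel's four sub-samples (e.g. an edge crossing the pixel).
    std::array<Color, kSamples> px = {{{1, 0, 0}, {1, 0, 0}, {0, 0, 0}, {0, 0, 0}}};
    Color resolved = resolvePixel(px); // 50% red: a smoothed edge
    return resolved.r > 0.0f ? 0 : 1;
}
```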
 
By that logic every dev involved with Nvidia is a two-bit dev too, right?

Nvidia cards are getting the same performance as AMD, so what's the problem?

If you are going to reply, reply to what I wrote. I already responded to the AMD performance point a few posts back.
 
If it's a bug in the MSAA code of the game then it affects AMD too... Don't know where you got the idea that somehow it's rigged in AMD's favour.

Glad you think I'm a "lowly forum member"; more than can be said for you. You lost the moral high ground by making that insult.

Nice ninja edit. I was talking about myself too in the forum member comments. We're nobodies. Oxide are a nothing dev with AMD influences; Nvidia are a major corporation.

Again, like fs123, you seem to not respond directly to what's written and add in assumptions to fit. I have never once talked about anything being in AMD's favour.
 
Oxide believes it has identified some of the issues with MSAA and is working to implement workarounds on our code.

Just as I said. And a nice way to cut out the important part about them using the DX11 MSAA code as a workaround for the time being.

Also, if you go on their forum, one of their devs replied here:

Nvidia mistakenly stated that there is a bug in the Ashes code regarding MSAA. By Sunday, we had verified that the issue is in their DirectX 12 driver. Unfortunately, this was not before they had told the media that Ashes has a buggy MSAA mode. More on that issue here. On top of that, the effect on their numbers is fairly inconsequential. As the HW vendor's DirectX 12 drivers mature, you will see DirectX 12 performance pull out ahead even further.

I won't try to speak for Nvidia PR.

What I can say, with absolute certainty: The MSAA issue they described will happen on any DirectX 12 game currently. That's why we were surprised they tried to describe the issue as a "bug" in our code.

I don't expect it to be an issue for long though. Even as recently as last week, Nvidia released an updated driver that made significant performance gains with Ashes.
 
Nice ninja edit. I was talking about myself too in the forum member comments. We're nobodies. Oxide are a nothing dev with AMD influences; Nvidia are a major corporation.

Again, like fs123, you seem to not respond directly to what's written and add in assumptions to fit. I have never once talked about anything being in AMD's favour.

Couldn't care less what you have to say at this point. You lost any credibility by insulting me. Then again, you never had any.
 
Agreed, Planetary Annihilation never lived up to the hype, and Square Enix had Supreme Commander 2 noobified to try and compete with StarCraft 2 (despite the fact the target audience all look down on baby RTS games like SC2).

Planetary Annihilation took a while but it's gotten there. It's had so many patches/fixes that it is now a really really good game compared with when it first came out.

I just noticed they have released an update for it called "Titans". It looks like there are now experimental units along with 16 other units. :)
 
Except Mantle has nothing to do with DX12.

I'm not so sure about that, given Microsoft specifically mentioned Mantle in their DX12 presentations. I don't remember the exact context; it could have just been gratitude, but it wouldn't surprise me if the mapping was very similar. There are only so many ways to get from Liverpool to Manchester quickly, if you get my meaning.
 
Wasn't it that Star Swarm benchmark?

http://www.oxidegames.com/about/

Brian Wade and a few other partners have extensive experience in RTS games, hence the development of the Nitrous engine and Ashes of the Singularity. Basically, Oxide is not a novice outfit. I loved SC and LOTR: The Battle for Middle Earth I & II, which makes me more excited for AotS.

Brian Wade, partner, has been a leader in software development for nearly three decades. He won a prestigious BAFTA for his work as the lead programmer of Civilization V, and served as lead programmer for Command and Conquer 3: Kane’s Wrath, The Lord of the Rings: The Battle for Middle Earth II: The Rise of the Witch-king, and A Force More Powerful as well. In addition to his time in the games industry, Brian has nearly 20 years of experience developing both mission-critical and simulation software for the military. Beyond demonstrating leadership capabilities in bringing many successful software projects to release, Brian is a skilled gameplay programmer with specializations in artificial intelligence, physics and game systems design.
 
The level of stupidity in this thread, even by senior and usually clued-up posters, is breathtaking.

This is a BENCHMARK, based on the game. It is not THE GAME. It is meant to stress the machine - both CPU & GPU, or one or the other depending on configuration. It's meant to expose bottlenecks and show limitations.

Actual in-game performance (where there aren't such crazy loads) isn't that bad on AMD DX11 at all ... not quite as quick as NVIDIA, but perfectly acceptable. I strongly suspect in-game performance of AMD DX12 will show a more significant lead than the benchmark did.

The low DX11 performance isn't an illustration of OMG-LUL-AMD-DRIVERS-ARE-TEH-SUCKZORZ-NVIDIA-ARE-80-to-90-%-FASTER-AND-I'M-NOW-GOING-TO-PRETEND-THIS-IS-THE-CASE-IN-ACTUAL-GAMES-ASIDE-FROM-PROJECT-GIMPWORKS(CARS). It's an illustration of NVIDIA having a significant lead in CPU-side draw-call throughput on DX11 ... something most games never even get close to running into at the moment. The Oxide Engine, as people should know by now, is designed to take advantage of massively more draw calls than any other extant game / engine (first on Mantle, now on DX12 and later on Vulkan). This is a unique situation as of now. No other game or gaming benchmark requires as many draw calls (even in DX11), hence the disparity between AMD & NVIDIA.

AMD no doubt could have improved the DX11 driver specifically for this game / engine, or in general. But why would they? They have finite resources and games & engines requiring huge numbers of draw calls simply are not going to ever be designed around DX11. So they're far better off concentrating their efforts on the work which brought us this major advancement in the first place; Mantle, and now Vulkan & DX12 (and Mantle's new home in Liquid VR - which incidentally thrashes GWVR due to low level code, lower latency and access to more draw calls).
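
For anyone wondering why the new APIs change the draw-call picture so much: under DX11 every draw funnels through one immediate context, so submission is effectively serialised, while DX12 / Mantle / Vulkan let each thread record its own command list and hand the finished lists to the GPU queue together. A rough conceptual model in plain C++ (no real graphics API calls; all names are illustrative):

```cpp
#include <mutex>
#include <thread>
#include <vector>

// Stand-in for one recorded draw command (state, resources, counts).
struct DrawCall { int meshId; int instanceCount; };

// DX11-style: a single immediate context; every thread's draws are
// serialised through one lock (in reality, through one driver thread).
struct ImmediateContext {
    std::mutex lock;
    std::vector<DrawCall> submitted;
    void draw(const DrawCall& c) {
        std::lock_guard<std::mutex> g(lock); // the serialisation point
        submitted.push_back(c);
    }
};

// DX12-style: each thread records into its own command list, lock-free,
// and the finished lists are executed together at the end of the frame.
using CommandList = std::vector<DrawCall>;

int main() {
    const int numThreads = 4, drawsPerThread = 10000;

    // DX11-style submission: all threads contend on the one context.
    ImmediateContext ctx;
    {
        std::vector<std::thread> workers;
        for (int t = 0; t < numThreads; ++t)
            workers.emplace_back([&] {
                for (int i = 0; i < drawsPerThread; ++i)
                    ctx.draw({i, 1}); // every call takes the lock
            });
        for (auto& w : workers) w.join();
    }

    // DX12-style recording: no shared state until the final hand-off
    // (the equivalent of executing the command lists on a queue).
    std::vector<CommandList> lists(numThreads);
    {
        std::vector<std::thread> workers;
        for (int t = 0; t < numThreads; ++t)
            workers.emplace_back([&, t] {
                for (int i = 0; i < drawsPerThread; ++i)
                    lists[t].push_back({i, 1}); // no lock needed
            });
        for (auto& w : workers) w.join();
    }
    return 0;
}
```

Scale the thread count up and the second path keeps scaling, while the first one just hits the lock harder. That, in miniature, is the draw-call disparity this benchmark exposes.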

As for a 5960X (OC'd or not) being slower than a 6700K: how on earth is this surprising, in single card or dual CF / SLI? The 6700K has higher IPC and clocks, and the game is fundamentally NOT CPU bound with these CPUs at any modern resolution. It may even remain the case in quad CF / SLI, as DX12 / Mantle / Vulkan liberate the CPU to such an extent anyway. This may well change in future, and I also suspect that even the new APIs / completely new engines may struggle to efficiently load 16 logical cores (HT). I suspect the results could differ if HT was disabled on the 5960X and it was 8 physical cores vs 8 logical cores (4 physical). Besides ... a 5960X being beaten by vastly cheaper processors in games is nothing new either. Some tests last week showed an 8-core FX roundly thumping it (and I suspect any Intel CPU with HT enabled) in GPU-bound scenarios in DX11 games in terms of lack of frame drops and frame times (as opposed to average FPS).

Further to the last paragraph, people are going to need to get used to fully GPU-bound scenarios as the norm, in both a lot of 'old' DX11 titles and the vast majority of new Vulkan / DX12 games. As we've seen re: GPUs (until it suited NVIDIA to stop), there was a shift in the discussion to frame 'quality' over pure FPS. Hopefully we'll see the same for CPUs, since I remain totally unconvinced by HT ... I have it disabled on my 3770k unless I know I'm going to be doing something that will significantly benefit from it, because I hate the stutter it often causes in normal desktop use and gaming. On this topic, I really hope that AMD's implementation of SMT on Zen isn't blighted by the same issues ... something their CMT earth-moving CPUs don't have issues with. If Zen doesn't have the HT-stutter-like issues, this could be a big advantage for them.
 
Had a look around about Oxide. Started in 2013, made up of veteran developers of good calibre, apparently. They are primarily a game engine developer, looking to make modern engines. No idea if it's just RTS game engines or what, though. Probably formed alongside Mantle, I imagine.
 
The problem with AMD is they come off as a company with too few resources and too many fronts to fight on.

It's only a matter of time until they get stretched again and fall behind in a certain area.

Whether it's CPUs, GPU drivers, game optimisations, CrossFire performance, etc., etc.

They cannot fight both Intel and Nvidia; it's impossible.

^

This man knows what he's talking about.

Just found this review from ExtremeTech.

At first glance, these results may not seem impressive. The magnitude of AMD’s improvement from DX11 to DX12 is undercut by Nvidia’s stellar DX11 performance. The Fury X beats or ties Nvidia in both our benchmarks, and that’s definitely significant for AMD, considering that the Fury X normally lags the GTX 980 Ti, but Microsoft didn’t sell DirectX 12 as offering incremental, evolutionary performance improvements. Is the API a wash?

We don’t think so, but demonstrating why that’s the case will require more testing with lower-end CPUs and perhaps some power consumption profiling comparing DX11 to DX12. We expect DirectX 12 to deliver higher performance than anything DirectX 11 can match in the long run. It’s not just an API – it’s the beginning of a fundamental change within the GPU and gaming industry.

Consider Nvidia. One of the fundamental differences between Nvidia and AMD is that Nvidia has a far more hands-on approach to game development. Nvidia often dedicates engineering resources and personnel to improving performance in specific titles. In many cases, this includes embedding engineers on-site, where they work with the developer directly for weeks or months. Features like multi-GPU support, for instance, require specific support from the IHV (Independent Hardware Vendor). Because DirectX 11 is a high level API that doesn’t map cleanly to any single GPU architecture, there’s a great deal that Nvidia can do to optimize its performance from within their own drivers. That’s even before we get to GameWorks, which licenses GeForce-optimized libraries for direct integration as middleware (GameWorks, as a program, will continue and expand under DirectX 12).

DirectX 12, in contrast, gives the developer far more control over how resources are used and allocated. It offers vastly superior tools for monitoring CPU and GPU workloads, and allows for fine-tuning in ways that were simply impossible under DX11. It also puts Nvidia at a relative disadvantage. For a decade or more, Nvidia has done enormous amounts of work to improve performance in-driver. DirectX 12 makes much of that work obsolete. That doesn’t mean Nvidia won’t work with developers to improve performance or that the company can’t optimize its drivers for DX12, but the very nature of DirectX 12 precludes certain kinds of optimization and requires different techniques.

AMD, meanwhile, faces a different set of challenges. The company’s GPUs look much better under D3D 12 precisely because it doesn’t require Team Red to perform enormous, game-specific optimizations. AMD shouldn’t assume, however, that rapid uptake of Windows 10 will translate into being able to walk away from DirectX 11 performance. DirectX 12 may be ramping up, but Ashes of the Singularity and possibly Fable Legends are the only near-term DX12 launches, and neither is in finished form just yet. DX11 and even DX9 are going to remain important for years to come, and AMD needs to balance its admittedly limited pool of resources between encouraging DX12 adoption and ensuring that gamers who don’t have Windows 10 don’t end up left in the cold.

As things stand right now, AMD showcases the kind of performance that DirectX 12 can deliver over DirectX 11, and Nvidia offers more consistent performance between the two APIs. Nvidia’s strong performance in DX11, however, is overshadowed by negative scaling in DirectX 12 and the complete non-existence of any MSAA bug. Given this, it’s hard not to think that Nvidia’s strenuous objections to Ashes had more to do with its decision to focus on DX11 performance over DX12 or its hardware’s lackluster performance when running in that API.
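
To make the review's point about "far more control over how resources are used" concrete: under DX11 the driver silently tracks what state every texture is in, whereas under DX12 the application must declare each transition itself (via resource barriers). A toy model in plain C++, with illustrative types rather than the real API:

```cpp
#include <cassert>

// Toy model of explicit resource-state tracking. In DX11 the driver
// infers these transitions; in DX12 the app declares them itself
// (the real API call is ResourceBarrier -- these types are illustrative).
enum class ResourceState { RenderTarget, ShaderResource, Present };

struct Texture {
    ResourceState state = ResourceState::Present;
};

// The app-side barrier: assert the 'before' state the app claimed is
// true, then move to the new state. No driver guesswork, no driver cost.
void transition(Texture& t, ResourceState from, ResourceState to) {
    assert(t.state == from && "app declared the wrong 'before' state");
    t.state = to;
}

int main() {
    Texture backBuffer;
    // One frame: render into the texture, sample it, then present it.
    transition(backBuffer, ResourceState::Present,        ResourceState::RenderTarget);
    // ... geometry passes draw into it ...
    transition(backBuffer, ResourceState::RenderTarget,   ResourceState::ShaderResource);
    // ... a post-process pass samples it ...
    transition(backBuffer, ResourceState::ShaderResource, ResourceState::Present);
    return 0;
}
```

Getting these wrong is now the developer's bug rather than the driver's, which is exactly why per-game driver heroics matter less under DX12.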

Eventually Nvidia will find a way to stretch the gap again. Just a matter of time.
 
Eventually Nvidia will find a way to stretch the gap again. Just a matter of time.

Doubt it. How are they going to do that with a low-level API? It's not quite as easy as a few driver tweaks like with DX11. Nvidia were also supposedly working on DX12 with Microsoft; so much for that. Where are the improvements to show that they were working with MS?
 
Couldn't care less what you have to say at this point. You lost any credibility by insulting me. Then again, you never had any.

That was a little irony joke you played on me there; don't worry, I got it ;)

What are Oxide Games famed for? I am not much of an RTS fan, so excuse my ignorance.

As Oxide, they are famed for nothing. I think they have a DX11 engine; who hasn't? They were created to take up Mantle tech, but now that that is dead it looks like they are creating AMD-optimised games.
 
Doubt it. How are they going to do that with a low-level API? It's not quite as easy as a few driver tweaks like with DX11. Nvidia were also supposedly working on DX12 with Microsoft; so much for that. Where are the improvements to show that they were working with MS?

DX12 is in its absolute infancy. We've got no idea how it is going to evolve over its life cycle.

Remember that the way game designers design games is going to change too. No doubt Nvidia, with their bigger budget, more engineers, and closer relations with game devs, will benefit again just as they have done during the DX11 era.
 
Doubt it. How are they going to do that with a low-level API? It's not quite as easy as a few driver tweaks like with DX11. Nvidia were also supposedly working on DX12 with Microsoft; so much for that. Where are the improvements to show that they were working with MS?

What are you basing this on? This one benchmark?
 