
AnandTech's look at DX12 in action with Star Swarm.

Just looked at the CPU used, and again it's a very expensive high-end Core i7-4960X. There is more to it than just threads, IPC, cache and other aspects, so simply reducing how many cores are used and lowering clock speeds is not an accurate way of emulating how other CPUs scale. They should have at least used an AMD CPU as well, because you simply cannot use an Intel CPU to represent one.
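To put rough numbers on why (a quick sketch of mine with completely made-up IPC figures, nothing measured):

    #include <cstdio>

    int main() {
        // Hypothetical, illustrative numbers only: sustained throughput
        // depends on IPC (plus cache, memory latency, etc.), not just
        // core count and clock speed.
        double cores = 4.0, clockGHz = 4.0;   // identical on both "chips"
        double intelIPC = 2.0, amdIPC = 1.3;  // invented per-architecture IPC
        printf("Intel-ish chip: %.1f G instr/s\n", cores * clockGHz * intelIPC);
        printf("AMD-ish chip:   %.1f G instr/s\n", cores * clockGHz * amdIPC);
        // Same cores, same clock, very different throughput -- which is why
        // downclocking and disabling cores on an i7 can't emulate AMD scaling.
        return 0;
    }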
 
Microsoft appears to have done a good job on the whole. It will likely be years before games take full advantage of what DX12 brings, and in the meantime DX11 will no doubt fade into retirement over a prolonged period, but at least the API is no longer going to be such a constraint on the graphical fidelity of our games.
 
To be fair, they do cover the fact that the Mantle benchmarks have a small-batch optimisation pass that increases CPU load and frees up GPU resources, and that makes up most of the difference. If DX12 offers Mantle's gains in a package that requires less effort from devs (one API for all vendors), then we should see better support in theory. I'm excited! Bring on the new games. I will reserve judgement until then, though.
 
What's going to come first, DX12 or the new AMD cards? It will be interesting to see DX12 and a DX12 game with a built-in benchmark, and then scores across the board for AMD and Nvidia GPUs, to see exactly how it compares to DX11. Basically, we're going to need a game that supports both DX11 and DX12, and I'm not even sure that's possible?
 
Results look promising for the future of gaming. Star Swarm is not a benchmark, but it shows how these APIs can run with very high numbers of individual draw calls and is close to what a game could be. There is a lot of AI running in the demo, far more than people give it credit for, all while every ship, gun and bullet is being individually modelled in real time.
Issuing many draw calls is not inefficient in itself; it's required if you want to draw many unique objects. Some may say it's inefficient, but that is only because they are stuck in the mindset of older, managed DirectX, where draw call overhead is a problem.
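A little toy example of what I mean (my own sketch, nothing to do with Star Swarm's actual code): instancing can only collapse identical objects into one call, so truly unique objects still cost a draw each.

    #include <cstdio>
    #include <map>
    #include <utility>
    #include <vector>

    struct Object { int meshId; int materialId; };

    int main() {
        // 10,000 objects, but only 50 unique mesh/material combinations.
        std::vector<Object> scene;
        for (int i = 0; i < 10000; ++i)
            scene.push_back({i % 10, (i / 10) % 5});

        // Group identical objects: each group could be one instanced draw.
        std::map<std::pair<int, int>, int> batches;
        for (auto const& o : scene)
            ++batches[{o.meshId, o.materialId}];

        printf("naive draws:     %zu\n", scene.size());    // 10000
        printf("instanced draws: %zu\n", batches.size());  // 50
        // Star Swarm's ships/guns/bullets are individually modelled, so they
        // can't be batched away -- per-draw API overhead becomes the limit.
        return 0;
    }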

If anything, it is a better representation of the CPU reductions that will be seen in games than a benchmark is, as a benchmark is scripted and heavily weighted to show GPU performance, although it will still be nice to have one for a raw GPU performance comparison.

We already see the great CPU benefits in multiplayer on BF4 etc., especially in map areas where draw call issues crop up.

On the performance of the 980, it's simply the case that it was still being heavily bottlenecked.
One thing about GPU load monitors: they show the load on the driver more than the actual GPU load. The DirectX pipeline can be fully loaded, with the GPU load reading 98-100%, while the card itself has many shaders going unutilised.
This is where CPU bottlenecking occurs; essentially, at the quality settings used in Star Swarm, the 980 was still very CPU-bound under DirectX 11.
Another reason for the large gap is that the Nvidia cards have a tendency to boost to over 200MHz above their stated boost clock, while the unknown 290X was most likely only boosting to 1050MHz, if it boosted at all.

Essentially, alongside AMD's poor driver performance, they are also trying to show the disparity between giving developers the tools to perform the optimisations themselves and needing to optimise in drivers on a situation-by-situation basis, which only increases driver size and complexity. As shown with Nvidia, the latter required a lot of work to get the performance they did, and even then the 980 was still very bottlenecked.

Other reasons for the dramatic increase in performance under DirectX 12 on the 980 are the same as with Mantle: having a monolithic pipeline and being able to parallelise more work within it, so the card does more per cycle, with the pipeline structured and predictable in the way the developer wants to utilise it, all while better utilising all of the card's resources. It's all on the Mantle developer slides from a year or so ago.
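For anyone curious, here's a bare-bones sketch of that model (a hypothetical CommandList type of my own, not the real D3D12 interfaces): every core records its own command list, then everything is submitted in one predictable order.

    #include <cstdio>
    #include <functional>
    #include <string>
    #include <thread>
    #include <vector>

    struct CommandList { std::vector<std::string> cmds; };

    static void recordChunk(CommandList& cl, int first, int count) {
        for (int i = first; i < first + count; ++i)
            cl.cmds.push_back("draw object " + std::to_string(i));
    }

    int main() {
        const int kThreads = 4, kPerThread = 2500;
        std::vector<CommandList> lists(kThreads);
        std::vector<std::thread> workers;

        // DX11 funnels submission through one driver thread; here each core
        // records its own chunk of the frame in parallel.
        for (int t = 0; t < kThreads; ++t)
            workers.emplace_back(recordChunk, std::ref(lists[t]),
                                 t * kPerThread, kPerThread);
        for (auto& w : workers) w.join();

        // "Submit": in real D3D12 this is ExecuteCommandLists on a queue.
        size_t total = 0;
        for (auto const& cl : lists) total += cl.cmds.size();
        printf("recorded %zu commands across %d threads\n", total, kThreads);
        return 0;
    }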

Just throwing out my ideas on the subject, and hello to all, since this is my first post after just lurking. If anyone spoke to me on the BF4 forums a while ago, I'm the same Mauller, so hello again.
 
Nice vid :)

You guys seen this one too? http://youtu.be/D1bTr96ZHrc

Thanks, and nope, I hadn't seen that before. Good find, and it really shows what DX12 can and will do with all the cores.

Here is Fable Legends running on a 980/970 with DX12 + UE4 (another I hadn't seen).


And this guy talks about DX12 and what devs have said about the performance improvements. Devs have been playing down the gains because they just sound like BS, but gains of 40% and more are the norm.


A voice worse than mine but talks some good sense. :D
 

Awesome vids! Really showing DX12 in some of its glory now :) I'm just happy to see more than just AMD jumping on this, improving the API to get closer to the metal and remove the CPU overhead, as well as adding new and better features.

Just shows how much our hardware as PC gamers is getting held back. Just can't wait to see games coming out using DX12 now in all its glory lol.
 

Yep, and we will be hearing of DX12 games soon, hopefully. Fable Legends looks pretty decent on UE4, but it's not my cuppa in truth. What would be nice is to see a DX11 version and then a DX12 version, so we gamers can compare like for like. Maybe some older games will get some DX12 love.
 
I think another issue with the 40-300% figures we read is that the performance improvement metrics are not all talking about the same thing.

Some of the improvement comes from reductions in CPU load, allowing the game engine's performance to improve, as it is no longer being stalled by the DirectX driver.

Others come from utilisation improvements in the GPU hardware.

Others come from software performance improvements resulting from both of those.

It's why they don't want to release performance metrics: not all of them are easy to quantify into a single figure without making that figure look ridiculous to many people.
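A toy calculation (all numbers invented by me) shows how the same change produces different headline figures depending on what you measure:

    #include <algorithm>
    #include <cstdio>

    int main() {
        // Rough model: in a pipelined renderer, frame time ~ max(CPU, GPU).
        double cpuMs = 20.0, gpuMs = 12.0;          // DX11-ish: CPU-bound
        double frame11 = std::max(cpuMs, gpuMs);    // 20 ms -> 50 fps

        double cpuMs12 = cpuMs * 0.4;               // 60% less CPU work
        double frame12 = std::max(cpuMs12, gpuMs);  // 12 ms -> ~83 fps

        // CPU load fell 60%, yet fps only rose ~67% -- now GPU-bound, so
        // which single number do you quote?
        printf("fps gain: %.0f%%\n", (frame11 / frame12 - 1.0) * 100.0);
        return 0;
    }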
 
DX12 throughput is considerably higher than DX11's; what happens with the 970?

The 970 is already struggling in some titles, and VRAM usage under Mantle is higher; if DX12 exhibits the same, the 970 will struggle even more than it does now.

Oh look, it's Tommy the AMD superman :D

Funny thing is I've seen a couple of benchies, whoops missus :p. I think you'll be surprised ;)
 
At 45 minutes into that podcast there is a very interesting comment from Brad: AMD is working on something "crazy" by all accounts. I'd definitely urge anyone to listen to the full podcast; it's extremely informative and in some ways kind of eye-opening.
 

Have a gander at physically based rendering. Refinements of those techniques are going to bring the biggest "next generation" jump in visuals IMO, once some of those effects can be done without such heavy specular making everything a little too shiny.
 
After listening to that interview, one thing does stand out: DX12 is going to love more cores, so the high-end Intels and the top-end AMD CPUs are really going to shine.

Tempted to trade in my 4770K for an 8350 or something before everyone realises hahaha
 

I'd wait till it's actually being used and benchmarks are performed before making a knee-jerk reaction, to be honest.

I'd be shocked if a 4770K suddenly played second fiddle to an FX83, regardless of the number of cores being used.
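Rough Amdahl's law numbers back that up (all figures invented, just to illustrate the maths):

    #include <cstdio>

    // Amdahl's law: speedup = 1 / ((1 - p) + p / n) for parallel fraction p.
    static double speedup(double p, int cores) {
        return 1.0 / ((1.0 - p) + p / cores);
    }

    int main() {
        double p = 0.7;  // assume 70% of the frame parallelises under DX12
        // Hypothetical per-core speed: call a 4770K core ~1.6x an FX core.
        printf("4 fast cores: %.2fx\n", 1.6 * speedup(p, 4));  // ~3.4x
        printf("8 slow cores: %.2fx\n", 1.0 * speedup(p, 8));  // ~2.6x
        // More cores help, but per-core speed multiplies everything.
        return 0;
    }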
 