Anandtech's look at DX12 in action with Star Swarm.

From the following article

"DX12 Is Able to Handle 600K Draw Calls, 600% Performance Increase Achieved on AMD GPUs. "

http://www.dsogaming.com/news/dx12-is-able-to-handle-600k-draw-calls-600-performance-increase-achieved-on-amd-gpus/

The Podcast by Wardell

https://www.youtube.com/watch?v=47cnFWK0dRM

A summary:
- Anandtech: AMD cards go from 7fps to 43fps running the Nitrous DX12 engine.
- 600% performance increase on AMD cards running a full DX12 game engine.
- CPU cores and their performance will matter more in future games.
- DirectX 12 on Xbox in November.
- DirectX 12 runs better than Mantle on AMD cards.
- DX9 handles a maximum of around 6,000 draw calls vs 600,000 on DX12 (a rough sketch of that arithmetic is at the end of this post).
- Naysayers are eating crow!
- "Marketing chose the 40% increase slogan as the real gains were too unbelievable for the masses to digest."
- Limitless light sources in games.
- 1,000+ AI characters on screen.
- Toy Story / Lord of the Rings graphics, no deferred rendering.
- Future CPU-bound Xbox exclusive DX12 games will see these gains, but DX12 games will initially show around 30% gains as they transition from DX11 to fully new DX12 game engines using ESRAM etc.
- Not so much for cross-platform games.
- Fable Legends is a DX12 game.
- AMD/MS have mega DX12 news at GDC.
- Stardock developed the Star Swarm DX12 demo in 2 months. Star Swarm would see similar performance gains on Xbox One, i.e. 600%. They have something major at GDC in the Microsoft booth.
- Stardock are developing the "Nitrous" DX12 game engine. The Star Control game will make sense on console. They will license the engine to 3rd parties after releasing 2 games on Nitrous.
- Phil Spencer is managing expectations: when DX12 arrives on Xbox in November, the games released then won't look much different until the new DX12 game engines arrive.


I highlighted the bit I found especially interesting. :p
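
The draw-call numbers aren't magic, by the way; they are basically just the frame budget divided by the per-draw CPU cost. A minimal sketch of that arithmetic, with made-up per-call costs (illustrative only, not measured DX9/DX12 figures):

```cpp
// Illustrative only: rough model of how per-draw-call CPU overhead caps
// the number of draws you can submit per frame. The microsecond costs
// below are made-up placeholders, not measured DX9/DX12 figures.
#include <cstdio>

int main() {
    const double frame_budget_us = 16667.0;     // ~60 fps frame budget in microseconds
    const double dx9_cost_per_draw_us  = 2.5;   // hypothetical heavy driver/runtime overhead per draw
    const double dx12_cost_per_draw_us = 0.025; // hypothetical thin-API overhead per draw

    std::printf("DX9-style budget:  ~%.0f draws/frame\n", frame_budget_us / dx9_cost_per_draw_us);
    std::printf("DX12-style budget: ~%.0f draws/frame\n", frame_budget_us / dx12_cost_per_draw_us);
    return 0;
}
```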
 
No one noticed or mentioned that Anandtech say DX12 usage increased the power draw of the 980 by 50W?

http://www.anandtech.com/show/8962/the-directx-12-performance-preview-amd-nvidia-star-swarm/6

Under DX11 the 980 uses a massive... 19W less than the ultra power-hungry 290X: 222W vs 241W at the wall. Under DX12 the difference is 14W: 271W vs 285W. So the 980 system jumps 49W going from DX11 to DX12.

What did I say, specifically, that no one else even mentioned or hinted at in that Maxwell power consumption thread? I said voltage regulation during idle periods was the main reason Maxwell looked "efficient": not architecture or some process magic, but API inefficiency being handled well. It's why a GTX 980 draws a similar amount to a 290X at full load, but drops power usage hundreds of times during idle gaps to reduce the average. I also speculated that under DX12, with significantly less idle time, Maxwell would use significantly more power.
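
A minimal sketch of that race-to-idle arithmetic, with made-up wattage and utilisation figures (none of them measured GTX 980 numbers):

```cpp
// Illustrative only: average power under a "race to idle" duty cycle.
// The wattages and utilisation figures are made-up placeholders, not
// measured GTX 980 numbers.
#include <cstdio>

int main() {
    const double p_load_w = 180.0;  // hypothetical power while the GPU is actually busy
    const double p_idle_w = 20.0;   // hypothetical power while it sits idle mid-frame

    const double util_dx11 = 0.70;  // CPU/API-bound: GPU busy ~70% of the frame
    const double util_dx12 = 0.98;  // GPU-bound: busy almost the whole frame

    auto avg = [&](double util) { return util * p_load_w + (1.0 - util) * p_idle_w; };

    std::printf("avg power, DX11-like load: %.0f W\n", avg(util_dx11)); // ~132 W
    std::printf("avg power, DX12-like load: %.0f W\n", avg(util_dx12)); // ~177 W
    return 0;
}
```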


EDIT: as for the AMD/MS mega news at GDC, there is a rumour going around that Nvidia is due to break some bad news re: DX12 support around/after April this year. I suspect it will be something along the lines of some fairly major features missing from Nvidia hardware; I think the rumour mentioned Kepler as being the main issue.
 
You do realise that is system power draw and not GPU power draw?

Did you have a point somewhere? GPU load goes up and CPU load actually goes down drastically (compare the numbers), yet the system with the supposedly 100W-higher-TDP GPU is using only 14W more under DX12 and 19W more under DX11.

DX11 vs DX12: CPU load goes down and GPU usage goes up, so the power increase is down to the GPU, not the CPU, and it's actually likely to be larger than the 49W, as CPU power usage probably decreased at the same time.

The massively efficient Maxwell is barely using less than a supposedly awfully inefficient 290X, but the main point was that DX12 drastically increased power usage on the GTX 980, something I speculated would happen months ago, making a bit of a mockery of its efficiency.

Its 'efficiency' is that it's not being used all that much. Give it a sustained load and the power saving of the architecture disappears.
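
A trivial sketch of that attribution: only the 49W system delta comes from the review figures, and the CPU saving is a made-up placeholder:

```cpp
// Illustrative only: splitting the measured system power delta between
// CPU and GPU. The CPU saving is a made-up placeholder; only the 49 W
// system delta comes from the review numbers.
#include <cstdio>

int main() {
    const double system_delta_w = 271.0 - 222.0; // 980 system, DX12 minus DX11 (review figures)
    const double cpu_delta_w    = -10.0;         // hypothetical: CPU does less work under DX12
    const double gpu_delta_w    = system_delta_w - cpu_delta_w;

    std::printf("system +%.0f W, cpu %.0f W  =>  gpu +%.0f W\n",
                system_delta_w, cpu_delta_w, gpu_delta_w);
    return 0;
}
```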
 
Did you have a point somewhere? GPU load goes up and CPU load actually goes down drastically (compare the numbers), yet the system with the supposedly 100W-higher-TDP GPU is using only 14W more under DX12 and 19W more under DX11.

DX11 vs DX12: CPU load goes down and GPU usage goes up, so the power increase is down to the GPU, not the CPU, and it's actually likely to be larger than the 49W, as CPU power usage probably decreased at the same time.

The massively efficient Maxwell is barely using less than a supposedly awfully inefficient 290X, but the main point was that DX12 drastically increased power usage on the GTX 980, something I speculated would happen months ago, making a bit of a mockery of its efficiency.

Its 'efficiency' is that it's not being used all that much. Give it a sustained load and the power saving of the architecture disappears.

Solid Points!
 
^^ I think his point was that if the API bottleneck was removed, meaning the CPU was able to work harder*, then not all of the extra power draw can be ascribed to the GPU, and if you've got a fair performance differential then potentially quite a bit of it could be down to the CPU and not the GPU.

* API bottlenecks are often called CPU bottlenecks, but in fact during those situations the CPU often isn't actually working flat out, just utilised as far as it's capable of being with that workload.
 
In some situations, yes, but the CPU in this situation is doing less work, which is why every comparison shows a significantly smaller CPU-time area on the graphs. The game is working either way: it's updating and the AI is running regardless; the DX overhead is extra work that is no longer being done. Bottlenecks don't always mean the most work possible is being done, but in this case pretty much the entire problem with DX11 is that it is inefficient, creating drastically more work and overhead than is required. The simplicity, and most of the advantage, of DX12/Mantle is that there is very, very little overhead in the driver: it's the game, plus telling the GPU what to do in as little work as possible. DX11 is the game, plus telling the GPU what to do in as stupidly inefficient a way as possible, doing ten times as much work (a rough sketch of that submission model is at the end of this post).

Regardless, GPU performance increases drastically in both cases and the systems are the same except for the GPU. People like to say that a 290X uses 100W more than a GTX 980, yet the difference between the systems ranges from 14-19W.
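
A rough sketch of the submission model being described, reduced to plain C++ with an abstract command buffer (this is not real D3D12 code): the engine records work across several cores with nothing fat in the driver to serialise on, then hands it all to the queue in one cheap step.

```cpp
// Illustrative only: a thin, explicit API lets the engine record command
// buffers on several cores in parallel and hand them to the GPU queue in
// one go. None of this is real D3D12 code.
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

struct Command { int draw_id; };
using CommandBuffer = std::vector<Command>;

// Each worker records its slice of the frame's draws into its own buffer,
// with no shared driver state to serialise on.
static void record(CommandBuffer& cb, int first, int count) {
    for (int i = 0; i < count; ++i)
        cb.push_back(Command{first + i});
}

int main() {
    const int total_draws = 600000;
    const int workers = 4;
    std::vector<CommandBuffer> buffers(workers);
    std::vector<std::thread> threads;

    const int per_worker = total_draws / workers;
    for (int w = 0; w < workers; ++w)
        threads.emplace_back(record, std::ref(buffers[w]), w * per_worker, per_worker);
    for (auto& t : threads) t.join();

    // "Execute": a single cheap hand-off of all recorded work to the queue.
    size_t submitted = 0;
    for (const auto& cb : buffers) submitted += cb.size();
    std::printf("submitted %zu draws recorded across %d threads\n", submitted, workers);
    return 0;
}
```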
 
No one noticed or mentioned that Anandtech say DX12 usage increased the power draw of the 980 by 50W?

Most likely because other people probably bothered to read and understand the entire article before desperately looking for some spin to counter how poor the 290X was in comparison. If you had done the same you would see that the 290X is the bottleneck here, not the CPU.

What we find is that Star Swarm and DirectX 12 are so efficient that only our most powerful card, the GTX 980, finds itself CPU-bound with just 2 cores. For the AMD cards and other NVIDIA cards we can get GPU bound with the equivalent of an Intel Core i3 processor, showcasing just how effective DirectX 12’s improved batch submission process can be. In fact it’s so efficient that Oxide is running both batch submission and a complete AI simulation over just 2 cores.

So while the 290X is hitting a wall (and allowing the CPU to idle), the 980 is really pushing the system, which pulls more power through the CPU in order to feed the mighty 980. So even while it is trashing the 290X in framerate and pulling more CPU power, it is still less total load.
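
A small sketch of the CPU-bound/GPU-bound distinction that quote is getting at: frame time is set by whichever side is slower, and the other side idles. The millisecond figures are made up, loosely chosen to echo the 7fps/43fps numbers earlier in the thread, not Star Swarm measurements.

```cpp
// Illustrative only: whichever of CPU submission time or GPU render time
// is larger sets the frame time; the other side idles. Numbers are
// made-up placeholders, not Star Swarm measurements.
#include <algorithm>
#include <cstdio>

static void frame(const char* label, double cpu_ms, double gpu_ms) {
    const double frame_ms = std::max(cpu_ms, gpu_ms);
    std::printf("%s: cpu %.1f ms, gpu %.1f ms -> %.1f fps (%s-bound)\n",
                label, cpu_ms, gpu_ms, 1000.0 / frame_ms,
                cpu_ms > gpu_ms ? "CPU" : "GPU");
}

int main() {
    frame("DX11, fast GPU  ", 140.0, 20.0); // API overhead dominates: ~7 fps
    frame("DX12, fast GPU  ", 10.0, 23.0);  // GPU-bound: ~43 fps
    frame("DX12, slower GPU", 10.0, 35.0);  // also GPU-bound: this GPU hits its wall first
    return 0;
}
```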
 
EDIT: as for the AMD/MS mega news at GDC, there is a rumour going around that Nvidia is due to break some bad news re: DX12 support around/after April this year. I suspect it will be something along the lines of some fairly major features missing from Nvidia hardware; I think the rumour mentioned Kepler as being the main issue.

So what, nvidia and Sony are effectively out of the running when new DX12 games are released?
 
This isn't news; Nvidia already admitted that Kepler would not have full DX12 support and won't support conservative rasterisation, for a start. It's not surprising that they will be announcing other new features that won't be supported on 3-year-old hardware - you would bloody well hope so too.
 
So what, nvidia and Sony are effectively out of the running when new DX12 games are released?

I have no idea what either of those things means in relation to what I said. Sony? Where in sweet Jesus is Sony involved in this? For one thing, the PS4 has an AMD GCN-based GPU. Second, an old Nvidia architecture not supporting DX12 features isn't particularly big news, although they (and the Nvidia guys on here) made a massive deal about the statements about existing cards supporting it, while AMD weren't quick to announce which cards supported what.
 
Erm, no, actually Nvidia were the first to announce that Kepler and Fermi would not have full support for DX12, whereas AMD just said that "all GCN cards will support DX12" without saying which cards might or might not have full support.

The point people were picking up on was that Nvidia said Fermi onwards would have basic support, so 480 onwards, whereas for AMD users it is GCN only, so not as far back.
 
So what, nvidia and Sony are effectively out of the running when new DX12 games are released?

I think the Maxwell cards are DX12 compatible, but you never know with Nvidia. They might come out with a 'revised' Maxwell with full DX12 support, which would render the GTX 970/980 old hat within 6 months.
 
I have no idea what either of those things means in relation to what I said. Sony? Where in sweet Jesus is Sony involved in this? For one thing, the PS4 has an AMD GCN-based GPU. Second, an old Nvidia architecture not supporting DX12 features isn't particularly big news, although they (and the Nvidia guys on here) made a massive deal about the statements about existing cards supporting it, while AMD weren't quick to announce which cards supported what.

I was only asking!

Thanks for the reply anyway. My 980 is good for DX12, as is my son's PS4. The 670 in the media server, on the other hand... I think that'll be hitting the post office on its way to a new owner soon.

Now, if I had to hang off until the 8gb 980ti is released, I could justify shifting the 980 into the media server and....
 
I was only asking!

Thanks for the reply anyway. My 980 is good for DX12, as is my son's PS4. The 670 in the media server, on the other hand... I think that'll be hitting the post office on its way to a new owner soon.

Now, if I had to hang off until the 8gb 980ti is released, I could justify shifting the 980 into the media server and....

Your 670 will still support DX12.
The PS4 won't be getting DX12, though?
 
Worth noting DM is misusing efficiency. The system with the 980, although only 14W under the system with the 290X, is pulling 50% more frames. So even if it were to draw the same.... you get the rest.
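
The perf-per-watt point in numbers: the wattages are the review's system figures, while the fps values are placeholders picked to reflect the rough "50% more frames" claim, not exact measurements.

```cpp
// Illustrative only: efficiency as frames per watt at the wall. The
// wattages are the review's DX12 system figures; the fps values are
// placeholders chosen to reflect the "~50% more frames" claim.
#include <cstdio>

int main() {
    const double w_980  = 271.0, fps_980  = 43.0;
    const double w_290x = 285.0, fps_290x = 29.0;

    std::printf("980 system:  %.3f fps/W\n", fps_980 / w_980);   // ~0.159
    std::printf("290X system: %.3f fps/W\n", fps_290x / w_290x); // ~0.102
    return 0;
}
```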
 
Erm, no, actually Nvidia were the first to announce that Kepler and Fermi would not have full support for DX12, whereas AMD just said that "all GCN cards will support DX12" without saying which cards might or might not have full support.

The point people were picking up on was that Nvidia said Fermi onwards would have basic support, so 480 onwards, whereas for AMD users it is GCN only, so not as far back.

Which hardware will be DX12-compatible? AMD said all of its Graphics Core Next-based Radeon GPUs (i.e. Radeon HD 7000 series and newer) will work with the new API. Nvidia pledged support for all Fermi, Kepler, and Maxwell (i.e. GeForce GTX 400 series and newer) parts. The keynote included a demo of Forza 5 running in DirectX 12 mode atop an Nvidia GPU. Finally, Intel said the integrated graphics in its existing Haswell processors will also have DX12 support. All in all, Microsoft estimates that 100% of new desktop GPUs, 80% of new gaming PCs, and 50% of PC gamers will be able to take advantage of the new API.

GCN is the HD 7000 series onwards. Nvidia's first statements mentioned nothing about limits, just flat out said Fermi onwards, which is a full generation earlier than AMD (the GTX 480's direct comparison card was the 5870).

They were also banging on and on about having a huge market share of DX12 GPUs ready for the DX12 launch, blah blah blah. They were very much taking the "everything for years will support it" route from day one. Later on they were like "oh yeah, support.... well, kinda", after the big announcements.


http://blogs.nvidia.com/blog/2014/03/20/directx-12/

Direct from Nvidia: "DX12 on all DX11 GPUs Nvidia has shipped".

From what I can tell, Nvidia wasn't really talking about what Kepler couldn't do in DX12 till Maxwell was released.


Oh, and btw...

http://www.extremetech.com/gaming/1...ption-by-50-boosts-fps-by-60-in-new-tech-demo

Intel's IGP, using DX12, reduced power by 50%, all of it CPU-side saving, because there is vastly less work being done. But sure, that 50W increase came directly from the CPU... it was working hard both before and after, yet we're to believe an 80W CPU went from using almost nothing to magically using 50W more, all under load.

And we can already see Maxwell using more power under GPGPU loads than under gaming (ostensibly DX11-based) loads. But sure, it's totally the CPU using way more power.

Again, a GPU that is supposedly so much less efficient that it uses 100W more is still only adding 14-19W at the wall, with everything else in the system being the same.
 
Most likely because other people probably bothered to read and understand the entire article before desperately looking for some spin to counter how poor the 290X was in comparison. If you had done the same you would see that the 290X is the bottleneck here, not the CPU.

So while the 290X is hitting a wall (and allowing the CPU to idle), the 980 is really pushing the system, which pulls more power through the CPU in order to feed the mighty 980. So even while it is trashing the 290X in framerate and pulling more CPU power, it is still less total load.

Most likely drivers, but we'll see I guess.
 