
AMD’s DirectX 12 Advantage Explained – GCN Architecture More Friendly To Parallelism Than Maxwell

In actuality, the number of light sources that the DX12 version of the game uses could never be run under DX11. This could also be a reason why there is some performance regression in the DX12 benchmarks for the Nvidia cards.

I'd wager the regression is due to the code base being heavily optimised for AMD's architecture, given that they are sponsoring the game and it was originally put up as a Mantle showpiece. Not that there is anything wrong with this, just please don't treat it as the last word on DX12 performance. Developer optimisation is more important than ever under DX12.

Fable should be a better test of where things lie with DX12. A sample of one title with very clear vendor involvement really is not enough to make wild claims about winners and losers.

People said AMD should be a bit better with their PR; well, they certainly are. This benchmark has been PR gold for them, and hopefully a kick to an ever more complacent Nvidia.
 
The thing is, though, not all games will have massively parallelised workloads like an RTS. It's not easy to do unless you have a million different things going on concurrently, which describes an RTS game to a tee.

I'd wager that in the long run there'll be more DX12 games where Nvidia's serial approach and emphasis on geometry over compute will see them come out on top.
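To make the "parallelised workloads" point concrete: the big CPU-side win in DX12 is that draw submission can be recorded on several threads at once, whereas DX11 effectively funnels it through one. A rough toy model of that difference, with made-up per-draw costs rather than anything measured from a real engine:

    // Toy model only: "recording a draw" is simulated with busy work, and the
    // DX12-style path just splits that recording across worker threads.
    #include <chrono>
    #include <cstdio>
    #include <thread>
    #include <vector>

    static void record_draws(int count) {
        volatile int sink = 0;                      // stand-in for per-draw CPU cost
        for (int i = 0; i < count; ++i)
            for (int j = 0; j < 2000; ++j) sink = sink + j;
    }

    int main() {
        const int draws = 20000;                    // an RTS-like draw-call count
        const int workers = 4;

        auto t0 = std::chrono::steady_clock::now();
        record_draws(draws);                        // DX11-style: one thread does it all
        auto t1 = std::chrono::steady_clock::now();

        std::vector<std::thread> pool;              // DX12-style: each thread records a slice
        for (int i = 0; i < workers; ++i)
            pool.emplace_back(record_draws, draws / workers);
        for (auto& t : pool) t.join();
        auto t2 = std::chrono::steady_clock::now();

        using ms = std::chrono::duration<double, std::milli>;
        std::printf("serial submission:   %.1f ms\n", ms(t1 - t0).count());
        std::printf("parallel submission: %.1f ms\n", ms(t2 - t1).count());
    }

On a quad-core the split path finishes in roughly a quarter of the time, which is exactly the kind of headroom an RTS throwing tens of thousands of draws per frame needs.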
 
Oh dear, not long before the NVIDIA damage control regulars arrive here to try and manage this situation.

Good job AMD, for great DX12 performance :)
 
I'd wager the regression is due to the code base being heavily optimised for AMD's architecture, given that they are sponsoring the game and it was originally put up as a Mantle showpiece.

Nvidia had access to the code base for over a year and even submitted new shaders when they noticed some were running slowly on their hardware.

They should have known how things would look for their own hardware long before the public benchmark dropped; they have had internal benchmarks to use for a long while now.

The thing is, though, not all games will have massively parallelised workloads like an RTS. It's not easy to do unless you have a million different things going on concurrently, which describes an RTS game to a tee.

I'd wager that in the long run there'll be more DX12 games where Nvidia's serial approach and emphasis on geometry over compute will see them come out on top.

Most games are actually very sparse when it comes to unique objects and dynamic environments. Parallel workloads become more of a must-have when it comes to rendering denser and more varied scenery.

Many open-world games have little happening within them and few unique objects that are not part of the map itself.

And most FPS games restrict combat to open wastelands or confined spaces to limit the number of draw calls.

But in the future, when we want more dynamic and unique environments, and open worlds with large numbers of NPCs visible, parallel submission is the only way to go if you want frame times to remain low.

Nvidia made their hardware good for what we have with DX11; it just happens that it was not as future-proof as AMD's.

But all in all, it is not like Nvidia lost a lot of performance, and it just shows the strength of their drivers and hardware with the serialised workloads of DX11.
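To put rough numbers on the frame-time point above: at 60 fps you have about 16.7 ms of CPU time per frame, so the draw calls you can afford scale with per-call overhead and with how many threads share the submission work. A back-of-the-envelope sketch, assuming a made-up 0.002 ms CPU cost per draw call and perfect scaling across threads (which no real engine gets):

    // Back-of-the-envelope: how many draw calls fit into a 60 fps CPU budget?
    // The per-call cost is an assumed round number, not a measured figure.
    #include <cstdio>

    int main() {
        const double budget_ms = 1000.0 / 60.0;     // ~16.7 ms per frame at 60 fps
        const double cost_per_draw_ms = 0.002;      // assumed CPU cost per draw call
        for (int threads : {1, 2, 4, 8}) {
            double max_draws = budget_ms / cost_per_draw_ms * threads;
            std::printf("%d submission thread(s): ~%.0f draw calls per frame\n",
                        threads, max_draws);
        }
    }

Whether real scenes ever hit those numbers is another matter, but it shows why a serial submission path becomes the ceiling for dense, dynamic worlds.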
 
EDIT: Again, an example of how AMD are sticklers for how things "should be done" in the face of the actual messy reality, and of how forward-thinking features often end up biting them in the rear, due to a mixture of things never actually working out in the idealised fashion and of bringing them to the table too early to make a difference. On the flipside, the architecture should perform well in the future once DX12 etc. become dominant, which is a positive in the long term, but by then the TX will be consigned to history, Nvidia will be on an architecture that works with the realities of DX12, and likewise people probably won't be using the FX.


* This goes beyond just recompiling shaders or trying to make the workload more parallel for the GPU, etc.; it means actually alleviating bottlenecks at that level before you even throw the GPU architecture into the equation.

Yes AMD are sticklers for doing things right and not changing the code.

But then AMD released drivers for The Witcher 3 (amongst many other games) that worked, because the Witcher 3 devs were working to DX11, not to the additions Nvidia bolted on to bypass DX11. Nvidia spent two years ploughing money and time into DX11 hacks as DX11 was dying, which probably contributed to the big performance losses on Kepler over the past year, losses that were only finally addressed after six months of Kepler owners making hundreds of thousands of posts of complaints on Nvidia's forums. Why? Because when you're making extensions to the API based on your architecture, it's a lot of extra driver work that needs to be done. So I would suggest that Nvidia were only optimising for Maxwell because they didn't have the time to optimise every game for every architecture, or at least they didn't bother until enough users complained after months of crappy performance.

How many dodgy drivers have Nvidia put out recently? I can't remember the last game I played on AMD where I had instability or problems on a single card; Crossfire only had a problem in a few games. So AMD could have ploughed loads of time and money into lots of DX11 hacks, mostly to win some API benchmarks, for an end-of-the-line API where all the hacks mean a huge amount of per-game driver work to get them working properly, the sort of thing that has caused millions of users problems in the past year? Yeah... bad AMD for being sticklers and using the actual API the devs are using.

OpenGL lets you add these extensions properly. It takes a long time to get them approved, but once they are, devs can target those extensions and make sure the game works properly with them from the start. That is also the 'proper' way to extend the API.
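For anyone curious what using an extension "properly" looks like in practice: the game checks whether the driver actually advertises it before taking that path, rather than the driver silently rewriting what the game asked for. A minimal sketch, assuming an OpenGL 3+ context and a loader such as GLEW are already set up; GL_NV_command_list is just one example of a real vendor extension you might gate on:

    // Check whether a vendor extension is exposed before using it, and fall back
    // to the plain GL path if it isn't. Assumes a current GL 3+ context exists.
    #include <GL/glew.h>
    #include <cstdio>
    #include <cstring>

    bool has_extension(const char* name) {
        GLint count = 0;
        glGetIntegerv(GL_NUM_EXTENSIONS, &count);
        for (GLint i = 0; i < count; ++i) {
            const char* ext =
                reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
            if (ext && std::strcmp(ext, name) == 0) return true;
        }
        return false;
    }

    void choose_render_path() {
        if (has_extension("GL_NV_command_list")) {
            std::printf("using the vendor fast path\n");
        } else {
            std::printf("using the standard GL path\n");
        }
    }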

I'm thankful that every game I've tried has worked flawlessly on a single card for as long as I can remember. I'm thankful a TWIMTBP game like The Witcher 3 worked great for me on a single card, on a driver I think I installed about three months before the game came out. I'm thankful money wasn't wasted on DX11 when the majority of games from here on out will be DX12.

But you go ahead and be proud of the non-sticklers at Nvidia: the dozens of awful drivers, the constant need for multiple drivers to get one that doesn't crash with most AAA titles, the six months of laughable Kepler performance where your up-to-£900 cards were performing worse than £150 Maxwell cards because Nvidia couldn't be bothered with you any more.
 
This isn't about the sky falling (except for Nvidia fanboys, which abound on this forum) but about the fact that GPU technology is very complicated, and the two giants have different strengths which don't bear out in benchmark results, since you only see the result rather than the process that begets it. In other words, if all you know about GPU tech is the specs of the GPUs and you judge by that and by benchmark results, your ability to comment intelligently is severely limited (after all, how many people will put in the thousands of hours required to understand it?). Therefore it should be no surprise that most discussions are simply shouting matches between people who can at best say something like 'omg amd card z does 25 fps in x/y game & requires 330w while nvidia card w does 27 & requires 280w; amd suxx why cant they do anything right & fix drivers ffs!'

Just yesterday someone was lamenting how they had bought ATI/AMD since forever but couldn't take it any longer because his Crossfire setup had problems in WoW. Laughable, because it just goes to show what an idiot he is: SLI works even worse in WoW, and less often (multi-GPU solutions don't always work with it), and the problems are down to Blizzard's software team rather than AMD's or Nvidia's. Yet here he is mis-attributing blame, with tens of others jumping on his bandwagon even though they are as clueless as he is. Sad state of affairs, but it is what it is.

OMG, where have you been hiding? A sensible poster who puts some reasonable points in the Graphics forum? :o

Oh dear, not long before the NVIDIA damage control regulars arrive here to try and manage this situation.

Good job AMD, for great DX12 performance :)

:cool::D
 
None of their other benches are sponsored by AMD or Nvidia, so I think we are quite safe there.

We could also do with some DX12 games to test on in the near future, and preferably nothing to do with GameWorks after the Batman fiasco. :D

I guess they will do a few different types of benchmark in the end. One benchmark alone can't tell you everything about DX12.

No doubt, like with Ashes (which only goes up to 60k draw calls in its heavy bench), they will have a few different batch levels etc.
 
Oh dear, not long before the NVIDIA damage control regulars arrive here to try and manage this situation.

Good job AMD, for great DX12 performance :)

You may be right, but they are too busy defending G-Sync at the moment. Besides, this architectural difference may be too difficult for some of them to understand, let alone defend :D
 
Yes AMD are sticklers for doing things right and not changing the code.

But then AMD released drivers for The Witcher 3 (amongst many other games) that worked, because the Witcher 3 devs were working to DX11, not to the additions Nvidia bolted on to bypass DX11. Nvidia spent two years ploughing money and time into DX11 hacks as DX11 was dying, which probably contributed to the big performance losses on Kepler over the past year, losses that were only finally addressed after six months of Kepler owners making hundreds of thousands of posts of complaints on Nvidia's forums. Why? Because when you're making extensions to the API based on your architecture, it's a lot of extra driver work that needs to be done. So I would suggest that Nvidia were only optimising for Maxwell because they didn't have the time to optimise every game for every architecture, or at least they didn't bother until enough users complained after months of crappy performance.

How many dodgy drivers have Nvidia put out recently? I can't remember the last game I played on AMD where I had instability or problems on a single card; Crossfire only had a problem in a few games. So AMD could have ploughed loads of time and money into lots of DX11 hacks, mostly to win some API benchmarks, for an end-of-the-line API where all the hacks mean a huge amount of per-game driver work to get them working properly, the sort of thing that has caused millions of users problems in the past year? Yeah... bad AMD for being sticklers and using the actual API the devs are using.

OpenGL lets you add these extensions properly. It takes a long time to get them approved, but once they are, devs can target those extensions and make sure the game works properly with them from the start. That is also the 'proper' way to extend the API.

I'm thankful that every game I've tried has worked flawlessly on a single card for as long as I can remember. I'm thankful a TWIMTBP game like The Witcher 3 worked great for me on a single card, on a driver I think I installed about three months before the game came out. I'm thankful money wasn't wasted on DX11 when the majority of games from here on out will be DX12.

But you go ahead and be proud of the non-sticklers at Nvidia: the dozens of awful drivers, the constant need for multiple drivers to get one that doesn't crash with most AAA titles, the six months of laughable Kepler performance where your up-to-£900 cards were performing worse than £150 Maxwell cards because Nvidia couldn't be bothered with you any more.

DX11 dying? You cannot be serious!!!

The only API that has died recently is Mantle. RIP.

As for Nvidia drivers being poor recently, I would not argue there.

Having said that, AMD drivers are just as bad for other reasons.

Witcher 3 on AMD cards is absolutely dreadful.

Witcher 3 is not that great on Nvidia cards, but it is more than 4x as fast as on AMD cards at 2160p.

Witcher 3 maxed, 2160p:

On my AMD cards: 18 fps

On my Nvidia cards: 80 fps
 
I guess they will do a few different types of benchmark in the end. One benchmark alone can't tell you everything about DX12.

No doubt, like with Ashes (which only goes up to 60k draw calls in its heavy bench), they will have a few different batch levels etc.

Hopefully Unigine will sort out the Heaven bench performance for the Fury X, as it is scoring way short of what it should. This is one area in which the bench is flawed at the moment.
 
DX11 dying? You cannot be serious!!!

The only API that has died recently is Mantle. RIP.

As for Nvidia drivers being poor recently, I would not argue there.

Having said that, AMD drivers are just as bad for other reasons.

Witcher 3 on AMD cards is absolutely dreadful.

Witcher 3 is not that great on Nvidia cards, but it is more than 4x as fast as on AMD cards at 2160p.

Witcher 3 maxed, 2160p:

On my AMD cards: 18 fps

On my Nvidia cards: 80 fps

What AMD cards are you using? I run at 1080p and get 55-60 fps (HairWorks on) with the latest drivers. Even with 1440p VSR it runs between 40 and 50 fps. AA tanks the fps, so maybe you have that enabled.
 
Witcher 3 is not that great on Nvidia cards, but it is more than 4x as fast as on AMD cards at 2160p.

Witcher 3 maxed, 2160p:

On my AMD cards: 18 fps

On my Nvidia cards: 80 fps

Wait, what? Even with HairWorks maxed I still get over 60 for the most part at 4K. I don't usually see you painting such inaccurate pictures, so I'm a little surprised o.O Even when W3 first released I got playable framerates at 4K (with HairWorks off, because it was totes broken) on my 7970s, let alone the Furys. I know some folk had issues, but that's on both sides.
 
There's an article on ExtremeTech showing the 980 Ti vs the Fury X. They basically match each other now in performance, trading very small blows here and there in this one particular game. AMD's performance just caught up with Nvidia's by going to DX12.

I don't see this as a bad thing at all, but I see many fanboys acting as if AMD are miles ahead now, when all they did is catch up. In fact, DX11 performance is still in Nvidia's favour.

Am I missing something, or is that about right?
 
Pretty much nail on the head, chaos.

When it comes to GPU-limited situations, the cards tend to match.

It is mainly at lower resolutions, where CPU throughput and driver overhead cause AMD cards to fall behind, as the drivers cannot feed the cards fast enough.

Although higher resolutions can see some performance improvement in DX12 due to better hardware utilisation, such as with async shaders.
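For anyone wondering what "async shaders" actually are at the API level: in D3D12 you can create a compute queue alongside the graphics queue and submit compute work the GPU is free to overlap with rendering, which is the case GCN's ACEs were built for. A minimal sketch, assuming `device` is an already-created ID3D12Device and skipping all error handling:

    // Minimal sketch: a dedicated compute queue next to the graphics queue, so
    // compute work can overlap with rendering ("async compute" / "async shaders").
    // Assumes a valid ID3D12Device*; error handling omitted for brevity.
    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    void create_queues(ID3D12Device* device,
                       ComPtr<ID3D12CommandQueue>& graphicsQueue,
                       ComPtr<ID3D12CommandQueue>& computeQueue) {
        D3D12_COMMAND_QUEUE_DESC gfx = {};
        gfx.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;      // graphics + compute + copy
        device->CreateCommandQueue(&gfx, IID_PPV_ARGS(&graphicsQueue));

        D3D12_COMMAND_QUEUE_DESC comp = {};
        comp.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;    // compute-only queue
        device->CreateCommandQueue(&comp, IID_PPV_ARGS(&computeQueue));

        // Command lists of type COMPUTE go to computeQueue; the GPU may run them
        // concurrently with graphics work, with ID3D12Fence used to synchronise
        // wherever the results are consumed.
    }

How much you actually gain from it depends on how much idle ALU time the graphics workload leaves, which is why the benefit tends to show up where the GPU, not the driver, is the bottleneck.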
 
Wait, what? Even with HairWorks maxed I still get over 60 for the most part at 4K. I don't usually see you painting such inaccurate pictures, so I'm a little surprised o.O Even when W3 first released I got playable framerates at 4K (with HairWorks off, because it was totes broken) on my 7970s, let alone the Furys. I know some folk had issues, but that's on both sides.

Yes, either he's having some driver problems or he's using some old 5850s :p

edit: If he has everything maxed then there is a known issue with AMD cards and AA. AA significantly tanks performance, and CD Projekt/AMD have not fixed that issue AFAIK.
I just tested with everything maxed at 1440p VSR and got 22 fps (no tweaks via CCC), then disabled AA only and got 45 fps straight away, so it's definitely AA causing the problem.
 