
Nvidia DX12 driver holding back Ryzen


This might explain the issue or part of it

Thanks for posting this. Very interesting, and it seems a very likely explanation for the observed behaviour. I like that he doesn't try to paint it as anyone "cheating" or "being at fault"; he simply explains the different approaches Nvidia and AMD have taken and under what circumstances the strengths and weaknesses of each approach show up.
 
Note I'm not talking about Ryzen here, as obviously it's only just released and needs driver work. But if anyone says AMD is great at DX12 but crap at DX11, while at the same time saying NV's DX12 is crap and its DX11 great, that's just bigging up AMD and slating NV; it's not really saying each vendor is good at one API and weak at the other.

How many times are we to witness pap DX12/11 performance across both vendors, with one win apiece, before it sinks in?
But AMD is not great at DX12; just look at the industry-standard synthetic Time Spy benchmark. Wherefore art thou, AMD? :p
 
But AMD is not great at DX12; just look at the industry-standard synthetic Time Spy benchmark. Wherefore art thou, AMD? :p

Time Spy uses a general/"neutral" DX12 path that mostly relies on DX11-level functionality, in an attempt to have a level playing field. That's unlike games, which do (or should) have hardware-specific codepaths.

As such it's not a full representation of what DX12 can do for games. Developers are supposed to follow the Best Practices that both NVIDIA and AMD presented at GDC, outlining it all, where they stated that if you cannot manage hardware-specific code paths you should just stick to DX11.

Futuremark themselves have stated

3DMark Time Spy engine is specifically written to be a neutral, "reference implementation" engine for DX12 FL11_0 ( FL11 = Feature Level 11 )

If you want to benchmark with a game engine, run a game made using that engine.
https://forums.anandtech.com/thread...ctx-12-benchmark.2480259/page-4#post-38363396

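Since the thread keeps coming back to "hardware-specific codepaths", here is a minimal sketch (my own illustration, not taken from the GDC talks or from Futuremark) of how an engine might pick a vendor-tuned DX12 path at startup by reading the adapter's PCI vendor ID through DXGI. The function names and the three-way renderer split are assumptions for illustration only.

```cpp
// Hypothetical sketch of vendor-specific codepath selection for DX12.
// Build on Windows and link with dxgi.lib. The renderer split is illustrative.
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

enum class GpuVendor { Nvidia, Amd, Other };

GpuVendor DetectPrimaryGpuVendor()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return GpuVendor::Other;

    ComPtr<IDXGIAdapter1> adapter;
    if (FAILED(factory->EnumAdapters1(0, &adapter)))   // primary adapter
        return GpuVendor::Other;

    DXGI_ADAPTER_DESC1 desc{};
    adapter->GetDesc1(&desc);

    // PCI vendor IDs: 0x10DE = NVIDIA, 0x1002 = AMD.
    if (desc.VendorId == 0x10DE) return GpuVendor::Nvidia;
    if (desc.VendorId == 0x1002) return GpuVendor::Amd;
    return GpuVendor::Other;
}

int main()
{
    switch (DetectPrimaryGpuVendor())
    {
    case GpuVendor::Nvidia:
        std::puts("Using an NVIDIA-tuned path (e.g. conservative async compute).");
        break;
    case GpuVendor::Amd:
        std::puts("Using an AMD/GCN-tuned path (e.g. heavier async compute use).");
        break;
    default:
        std::puts("Falling back to a neutral FL11_0-style reference path.");
        break;
    }
}
```

The point of the sketch is just that a game can branch per vendor, which is exactly what a neutral benchmark like Time Spy deliberately avoids doing.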
 
Time Spy uses a general/"neutral" DX12 path that mostly relies on DX11-level functionality, in an attempt to have a level playing field. That's unlike games, which do (or should) have hardware-specific codepaths.

As such it's not a full representation of what DX12 can do for games. Developers are supposed to follow the Best Practices that both NVIDIA and AMD presented at GDC, outlining it all, where they stated that if you cannot manage hardware-specific code paths you should just stick to DX11.

Futuremark themselves have stated


https://forums.anandtech.com/thread...ctx-12-benchmark.2480259/page-4#post-38363396

So it's not an industry standard DX12 benchmark?
 
It looks like AMD Ryzen's poor performance in some DX12 games is due to Nvidia's DX12 driver. The interesting bit starts at 10:37.



TL;DW:
AdoredTV tested a Ryzen and an i7 with a 1070 and RX 480 mGPU and made a potentially shocking discovery.
The 1800X virtually catches up to the i7 7700K when dual RX 480s are used in DX12 mode, but significantly lags behind it when a 1070 is used. Since nearly every Ryzen review used an Nvidia GPU, this revelation could invalidate a lot of DX12 benchmarks.


To further validate the above findings, another YouTuber did a similar test in The Division and also found that Ryzen ran better with an AMD RX 470 than with an Nvidia 1060 in DX12.


If we look at this the other way round it could be said that -

AMD are guilty of holding back NVidia GPU performance with Ryzen.
 

This might explain the issue or part of it

So basically AMD is not "really" limited in DX11; rather, developers need to spread the workload across other threads and leave the main one for rendering. Since this isn't the default working mode, AMD need to actually improve communication with the devs so the code is adapted for the GCN architecture.
 
So basically AMD is not "really" limited in DX11; rather, developers need to spread the workload across other threads and leave the main one for rendering. Since this isn't the default working mode, AMD need to actually improve communication with the devs so the code is adapted for the GCN architecture.

As mentioned in the video though, it isn't as trivial as it sounds and is actually a massive undertaking. Nvidia actually intercepts some work before it would normally hit the driver to enable what they do, which is beyond what developers can really manage, as they don't have access to either the driver source or the full DX source, which makes things a lot more complex.
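For anyone wondering what "spread the workload across other threads and leave the main one for rendering" looks like in practice, here is a rough sketch (my own, assuming a simple per-frame job split, not code from the video): the heavy simulation work runs on a worker thread while the main thread only submits draw calls. The function names and frame structure are made up for illustration.

```cpp
// Minimal sketch: keep the thread that submits draw calls free by pushing
// CPU-heavy game/sim work onto a worker thread each frame.
#include <future>
#include <cstdio>

struct FrameWork { int id; };

// Stand-ins for real engine work.
void SimulateGameplay(const FrameWork& w) { /* AI, physics, culling ... */ }
void SubmitDrawCalls(const FrameWork& w)  { std::printf("draw frame %d\n", w.id); }

int main()
{
    for (int frame = 0; frame < 3; ++frame)
    {
        FrameWork work{frame};

        // Kick the CPU-heavy, non-rendering work onto another thread...
        auto sim = std::async(std::launch::async, SimulateGameplay, std::cref(work));

        // ...so this "main"/render thread only feeds the API with draw calls,
        // which is where a single-threaded submission path would otherwise
        // get starved.
        SubmitDrawCalls(work);

        sim.wait();  // join before starting the next frame
    }
}
```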
 
If we look at this the other way round it could be said that -

AMD are guilty of holding back NVidia GPU performance with Ryzen.

Please kaap, don't tell me you actually believe this? You're better than that m8. You're just playing devil's advocate, right?.. right?
 
Please kaap, don't tell me you actually believe this? You're better than that m8. You're just playing devil's advocate, right?.. right?

What I am saying is my statement above is just as silly as the one in the OP.

No hardware manufacturer is going to show their own product in a bad light just to make their rivals look bad in a market that does not matter to them (CPUs).
 
What I am saying is my statement above is just as silly as the one in the OP.

No hardware manufacturer is going to show their own product in a bad light just to make their rivals look bad in a market that does not matter to them (CPUs).

Phew.. I thought for a second you had lost your mind /joke. Personally I do think the Nvidia drivers are not working as they should with Ryzen, but I don't for a second believe it's intended. It's a bug; it simply has to be, for the exact reason you said. No one, especially not Nvidia given their pride/ego, would gimp their own hardware that much on purpose to make the competition look bad in another field (GPU vs CPU). What I won't be surprised by is if Nvidia, like everyone else, has for the past many years been testing on and optimising for Intel platforms.
 
Hmm, interesting that one factor seems to be that faster memory helps offset the bottleneck. As I'm on a quad-channel platform with the RAM tuned to ****, and my second PC also has some pretty special RAM in it, I wonder if that would also explain some of my other results, i.e. whenever I've tested 2nd-gen Kepler cards the numbers have been way faster in newer games than often shown in reviews (which usually use a dual-channel consumer platform).

EDIT: While I believe he is wrong about Deus Ex and RoTTR using the same engine, they are very similar, and the Dawn Engine as used in Deus Ex seems pretty meh at the best of times.
 
The fact Nvidia released a 'special' DX12 driver a few weeks back is no coincidence now.

But you can't fix in software what you lack in hardware. It was always Nvidia's plan to bring a proper DX12 card to market after the current 1000 series.
 
Lots of vids are showing the same results now. Are there any vids yet that say otherwise? NV users looking to upgrade who are aware of the poor NV performance aren't going to be keen on going Ryzen.
 