Nvidia DX12 driver holding back Ryzen

I'm talking about the AMD GPU driver, e.g. Crimson, NOT game patches.

this is my point:
if AMD knows, for example, that switching between CCXs on an AMD CPU costs 140ns (which it does), they could code their AMD GPU driver to instruct the threads on Ryzen to use the first 4 cores more effectively, increasing their lead over Nvidia on an AMD CPU system.
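A minimal sketch of what "instructing the threads to use the first 4 cores" could look like in software: pinning worker threads to one CCX via CPU affinity. This is purely illustrative (Linux-only), and the assumption that the first four OS-enumerated cores share a CCX is taken from the post above, not from AMD documentation:

```python
import os
import threading

# Assumption: the first four logical cores (as enumerated by the OS) share
# one CCX, so keeping threads there avoids the ~140 ns cross-CCX hop.
# We take the first 4 cores the OS reports so the sketch runs anywhere.
FIRST_CCX = set(sorted(os.sched_getaffinity(0))[:4])

results = []

def worker():
    # pid 0 = the calling thread on Linux; restrict it to the chosen cores
    os.sched_setaffinity(0, FIRST_CCX)
    results.append(os.sched_getaffinity(0))

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# every worker now only ever runs on the first-CCX cores
assert all(aff <= FIRST_CCX for aff in results)
```

A driver can't literally do this from Python, of course; the point is just that scheduling hints like this are plain OS calls, well within a driver's reach.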

a good/interesting test would be

Test 1: Ryzen + RX480 + original Tomb Raider driver
Test 2: Ryzen + RX480 + latest updated driver

how big is the difference?

Once AMD optimised their GPU driver for Tomb Raider, they would generally keep optimising further.

So how big is the gap between Test 1 & 2?
If the gap is large, then AMD's GPU drivers are helping with the tuning for AMD CPUs,

thus increasing their lead over an AMD+Nvidia setup.

That's not to say this is wrong, or that Nvidia/Intel drivers are amazing.

The point is: would an AMD+AMD combo be much more effective
(if AMD are actually tuning CPU instructions in their GPU driver)?

Nvidia did it with DX11 + Intel, so why not AMD+AMD?

They can do some extra optimising between the two, sure.
That's a little advantage that comes with being the brand that has both CPUs and GPUs in the stable.
As you said, there's nothing wrong with that, and as long as they are not harming the competition's
performance or holding back optimisations that could also benefit Nvidia GPUs in some way, it's fair.
If true, it'll be good news once we have Vega, as it might give it a little leg up over the competition.
 
a good/interesting test would be

Test 1: Ryzen + RX480 + original Tomb Raider driver
Test 2: Ryzen + RX480 + latest updated driver

You'd also have to test the drivers on an Intel CPU or something like that, to rule out any application-specific optimisations between the two driver revisions, so you can separate that contribution from the performance difference seen in the Ryzen testing.
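The arithmetic behind that control is straightforward: subtract whatever the new driver gained on the Intel box (generic app/driver tuning) from what it gained on Ryzen, and what's left is the Ryzen-specific part. A sketch with made-up FPS numbers:

```python
# Sketch: isolate the Ryzen-specific driver gain by subtracting the generic
# gain measured on an Intel control system. All FPS numbers are placeholders.
def ryzen_specific_gain(ryzen_old, ryzen_new, intel_old, intel_new):
    generic = (intel_new - intel_old) / intel_old  # tuning any CPU benefits from
    total = (ryzen_new - ryzen_old) / ryzen_old    # everything Ryzen gained
    return total - generic                         # leftover: Ryzen-specific tuning

# e.g. Ryzen went 90 -> 108 FPS (+20%), Intel 100 -> 110 FPS (+10%),
# leaving roughly a 10% Ryzen-specific improvement
print(f"{ryzen_specific_gain(90, 108, 100, 110):.0%}")
```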
 
Nothing to do with AMD backing all the Tomb Raider games then... also having a very deep involvement with them :p

ROTTR was an Nvidia-backed PC title. They removed all the async compute and made it as non-DX12 as they could on release.
Then they added some extra HQ textures to make sure the memory usage went over the 4GB limit of the Fiji competition, so no, it wasn't an AMD title.

Rise of the Tomb Raider is an NVIDIA GameWorks title. :eek:

What he said :D
 
AMD did put it on consoles, on their GPUs, and even managed to integrate it with the 2 new APIs on PC. Maybe you cannot patent async compute as such, i.e. simultaneous graphics and compute processing, but I am pretty sure the way it's done by AMD has probably been patented, and finding another way without heavy drawbacks is not an easy task.
All of this is just guesswork on my part, because at first I thought Nvidia was intentionally slowing down the feature to benefit as long as they can from DX11, but 2 years later seems a bit much, with no alternative in sight, so maybe there is more to it.

This is all tinfoil stuff; 2 years isn't a long time to hold a DX12 implementation back.
They haven't even got their hardware-ready Volta cards on the table yet due to delays,
so they'll want to hold it back for another year if need be.
The second Volta's out, they'll do a turnaround and push forward with it to convince all
the Pascalytes that they need Volta.
That's another bit of tinfoil though. :p
 
Yet another potential annoyance to add to the minefield of hardware buying for PC gaming. It is getting pretty tedious. If you want good DX11 performance you have to go Nvidia; if you want good DX12 you go AMD (well, you will when Vega comes out, potentially, as there is no competition at the high end at the moment). Now it looks like which brand of CPU further changes things in this regard. Then add all that to the fact that you have to decide which GPU manufacturer you are going to stick with for adaptive sync (a reason I won't buy an adaptive sync monitor) and the whole thing is a **** show.

It's becoming less and less about the games in some ways, especially with all these constant buggy releases and early access rubbish.
 
While it's the oldest dismissive trick in the book, explaining away NV gimping Ryzen as just AMD having had a months/years head start to remove overheads... well, I'm lost for words.
 
Deus Ex: Mankind Divided uses, or at least has all the files for, nVidia's hair physics stuff, which is odd.

I know they had PureHair in the presentations, but there's no mention of what is actually used in the retail game.

http://segmentnext.com/2015/08/17/deus-ex-mankind-divided-uses-pure-hair-successor-to-tressfx/

According to this, Eidos said:

We also present Pure Hair, an evolution of the well-known TressFX hair simulation and rendering tech, developed internally by Labs. Compared to the previous version, we have significantly improved rendering, employing PPLL (per-pixel linked list) as a translucency solution. We have also significantly enhanced simulation and utilized async compute for better workload distribution.

I doubt they dropped it, but yeah, it's not in the options.
 
I doubt they dropped it, but yeah, it's not in the options.

Yeah, I dunno; it was in a lot of the PR. There wouldn't be an option for it, as the physics hair is baked into the models as standard in DX:MD, not switchable like in Tomb Raider. There doesn't seem to be any sign of related files or configuration options for PureHair though, and while they might just be there as part of the cloth (etc.) simulation used in the game, all the support is there for nVidia's PhysX hair stuff.
 
I think people are too focused on the Nvidia thing and on the Adored TV video being the first to show this. Ryzen is brand new. It has the power, but there is some issue with regard to gaming. It's not bad for gaming; it just doesn't line up with the power the chip has in other, non-gaming situations.

Ryzen will need further microcode releases. It's going to take a little while for developers to get to grips with Ryzen.

It's just way too early to make a final judgement on Ryzen's gaming performance. And it's way too early to be blaming AMD, Nvidia, Microsoft or anybody. It's a completely new CPU architecture in its first month.
 
All I can say is AMD are going in the right direction for us gamers. Once we get updates from board vendors, plus optimisations in games and software as well as some microcode optimisations, I can see Ryzen being great for us gamers. Finally getting us on 8 cores.
 
It's in their new Infinity Fabric interconnect system:

http://segmentnext.com/2017/01/17/amd-infinity-fabric-details/

  • The bandwidth will scale from 30-50 GB/s for notebooks to around 512 GB/s for the Vega GPU.
Now this might only be for industrial use, like Nvidia's NVLink, but it's still interesting.

I can really see it scaling that high on Naples with Vega Instinct.
Supporting 8-channel memory and 128 PCIe lanes is going to need a hella fast interconnect, especially with a 32-core processor.
 
It looks like Nvidia may be teaming up with Microsoft ahead of the Windows Creators Update to make sure we use their DX12 driver. This window popped up this morning wanting to reinstall my Windows, and I don't have any Nvidia hardware or software installed......



Now it seems I have to keep a close eye on updates so they don't hose my system.
 
It's in their new Infinity Fabric interconnect system:

http://segmentnext.com/2017/01/17/amd-infinity-fabric-details/

  • The bandwidth will scale from 30-50 GB/s for notebooks to around 512 GB/s for the Vega GPU.
Now this might only be for industrial use, like Nvidia's NVLink, but it's still interesting.

Riiiight thanks! After a quick look it does look more like an industrial solution than consumer one. But I'm sure we'll get the trickle down :)
 
Riiiight thanks! After a quick look it does look more like an industrial solution than consumer one. But I'm sure we'll get the trickle down :)

Seeing as Ryzen, Vega and AM4 are all new, they could have incorporated the support in the design. If it works over PCIe, and the article does mention both on- and off-die support, we could well see it at a consumer level. It might also help with AMD's plan to reduce the amount of frame buffer.
 