
AMD’s DirectX 12 Advantage Explained – GCN Architecture More Friendly To Parallelism Than Maxwell

This is the relevant part of the reply.

Originally Posted by Kollock

Personally, I think one could just as easily make the claim that we were biased toward Nvidia, as the only 'vendor'-specific code is for Nvidia, where we had to shut down async compute. By vendor-specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature as functional, but attempting to use it was an unmitigated disaster in terms of performance and conformance, so we shut it down on their hardware. As far as I know, Maxwell doesn't really have Async Compute, so I don't know why their driver was trying to expose that. The only other thing that is different between them is that Nvidia falls into Tier 2 class binding hardware instead of Tier 3 like AMD, which requires a little bit more CPU overhead in D3D12, but I don't think it ended up being very significant. This isn't a vendor-specific path, as it's responding to capabilities the driver reports.

--
P.S. There is no war of words between us and Nvidia. Nvidia made some incorrect statements, and at this point they will not dispute our position if you ask their PR. That is, they are not disputing anything in our blog. I believe the initial confusion was because Nvidia PR was putting pressure on us to disable certain settings in the benchmark, when we refused, I think they took it a little too personally.

If he did not mean MSAA, then it was async compute. But the non-optimal MSAA performance in DX12 affects all vendors.
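The vendor-specific path Kollock describes, checking the adapter's Vendor ID and overriding what the driver reports, can be sketched roughly like this. The `DriverCaps` struct and `useAsyncCompute` function are hypothetical stand-ins, not Oxide's actual code; the vendor IDs are the real PCI IDs that DXGI reports in `DXGI_ADAPTER_DESC::VendorId`.

```cpp
#include <cstdint>

// Real PCI vendor IDs as reported by DXGI adapter enumeration.
constexpr uint32_t kVendorNvidia = 0x10DE;
constexpr uint32_t kVendorAmd    = 0x1002;

// Hypothetical capability struct standing in for what the driver reports.
struct DriverCaps {
    uint32_t vendorId;
    bool     reportsAsyncCompute;  // what the driver claims to support
};

// Sketch of the decision described in the post: the driver claims async
// compute is available, but on Nvidia (Maxwell) using it hurt performance
// and conformance, so the engine overrides the reported capability for
// that vendor ID.
bool useAsyncCompute(const DriverCaps& caps) {
    if (!caps.reportsAsyncCompute)
        return false;                   // driver doesn't expose it at all
    if (caps.vendorId == kVendorNvidia)
        return false;                   // driver says yes, experience says no
    return true;
}
```

This is the distinction Kollock draws: the code path responds to what the driver reports, with one explicit vendor-ID override layered on top.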
 

it doesn't say what you are implying, basic reading skills

he says they, the devs, have disabled async compute on nvidia hardware (and doesn't mention nvidia asking them to), he then also says that nvidia asked them to remove a setting and they refused

any logical reading of that text deduces that async can't be the setting they are referring to, because async has been removed for nvidia hardware, but the devs also refused to remove another setting, without saying what setting

if you can't read plain English I really don't know how else to have a sensible conversation
 

Nvidia do have full DX12 support, at least in the sense that they meet the feature requirements Microsoft laid out; this feature just wasn't made one of them. I'm a little baffled by that, as it's arguably one of the biggest features you can have in an API.

But then, had it been included, Nvidia would not be able to claim DX12 compatibility.

I think it's a bit sleazy for Nvidia to then bang on about Conservative Rasterization, in that at some point Microsoft must have agreed to label Nvidia's Maxwell v2 as going beyond AMD, all the way up to 11 (DX12.1).
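The split being described, where a card can meet the mandatory DX12 requirements yet sit at different tiers for optional features like resource binding or Conservative Rasterization, can be modelled with a small sketch. The types and function below are simplified stand-ins, not the Windows SDK types; in real D3D12 you would fill `D3D12_FEATURE_DATA_D3D12_OPTIONS` via `ID3D12Device::CheckFeatureSupport` and read `ResourceBindingTier` and `ConservativeRasterizationTier` from it.

```cpp
// Simplified stand-ins for the Windows SDK enums (not the real headers).
enum class BindingTier { Tier1 = 1, Tier2 = 2, Tier3 = 3 };
enum class ConsRaster { NotSupported = 0, Tier1 = 1, Tier2 = 2, Tier3 = 3 };

struct FeatureOptions {
    BindingTier binding;     // GCN reports Tier3, Maxwell v2 reports Tier2
    ConsRaster  consRaster;  // Maxwell v2 reports a tier, GCN reports NotSupported
};

// Feature level 12_1 hinges on optional features such as conservative
// rasterization (plus ROVs, omitted here for brevity), not on the binding
// tier, which is how each vendor can "beat" the other on a different part
// of the same spec.
bool claims12_1(const FeatureOptions& o) {
    return o.consRaster != ConsRaster::NotSupported;
}
```

Under this model, a Maxwell v2-style card (binding Tier 2, conservative raster present) qualifies for 12.1 while a GCN-style card (binding Tier 3, no conservative raster) does not, even though GCN sits higher on the binding tier Kollock mentioned.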
 
it doesn't say what you are implying, basic reading skills

he says they, the devs, have disabled async compute on nvidia hardware (and doesn't mention nvidia asking them to), he then also says that nvidia asked them to remove a setting and they refused

any logical reading of that text deduces that async can't be the setting they are referring to, because async has been removed for nvidia hardware, but the devs also refused to remove another setting, without saying what setting

if you can't read plain English I really don't know how else to have a sensible conversation

In actuality, because he was not specific about which settings, people can take it either way based upon the context of the discussion before his addendum.
 
So AMD supports part of the DX12 spec better than Nvidia and Nvidia supports part of the DX12 spec better than AMD.

Please tell me how this is different from DX9, DX10 or DX11?

Not sure why there's all this arguing, I really don't.
 
Maybe this is what Nvidia's unified memory is going to be doing?

Only 7/8 months to wait till we find out. :)

I think it probably is, I just hope it doesn't require a separate code path. Parallel async compute is in all the consoles, so once devs get to grips with it we should see the benefits on PC.

If Nvidia's solution requires separate coding it could be a bit of a problem for them.
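For context on what "parallel async" buys you: async compute in D3D12 means recording work onto a second command queue of type `D3D12_COMMAND_LIST_TYPE_COMPUTE` so the GPU can overlap it with graphics work. As a rough CPU-side analogy only (the function and names here are hypothetical, not D3D12 API), two threads stand in for the two hardware queues:

```cpp
#include <atomic>
#include <thread>

// Rough CPU-side analogy for async compute: two "queues" (threads) chew
// through work concurrently instead of one queue doing everything serially.
// Hypothetical sketch; real D3D12 uses ID3D12CommandQueue objects and the
// GPU scheduler decides how much work actually overlaps.
int completedWithAsync(int graphicsJobs, int computeJobs) {
    std::atomic<int> done{0};
    auto work = [&done](int jobs) {
        for (int i = 0; i < jobs; ++i)
            ++done;  // stand-in for one GPU job completing
    };
    std::thread graphics(work, graphicsJobs);
    std::thread compute(work, computeJobs);
    graphics.join();
    compute.join();
    return done.load();
}
```

The "separate code path" worry above is about exactly this: if one vendor's hardware can't actually run the two streams concurrently, devs either eat the serialization cost or branch their renderer per vendor.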
 
and if the consoles don't support it then you can bet 90% of DX12 games won't support it either, as consoles are where the money is

It depends on how well they get the GameWorks program into action for DX12 games, and whether they are willing to put extra effort into enabling these features by working with the devs, or giving financial aid (sponsorship) for the PC version.

But, TBH, I still don't see why all the crying over AMD being better at one part of the DX12 spec than Nvidia, when the same people were going on about how Nvidia had superior DX12.1 feature support which AMD didn't!
 
That still doesn't say that NVIDIA asked them to disable async for AMD hardware, and it doesn't mention what setting they refused to remove

No, it doesn't.

You asked

where in that does it say "nvidia asked us to disable async compute"

Which is what I replied to, with:

AFAIK, Maxwell doesn't support Async Compute, at least not natively. We disabled it at the request of Nvidia, as it was much slower to try to use it than to not.
 
Yes, but if you look at the message I was quoting, you could clearly see there was a context I was replying to: mauller saying that NVIDIA tried to strong-arm the dev into disabling async for AMD hardware :rolleyes:

http://forums.overclockers.co.uk/showpost.php?p=28505744&postcount=341

Now you are just putting words into my mouth. :p

I said that they wanted it disabled in general for the benchmark, which you could assume based upon the addendum.

I never said strong-arm; now you are trying to make me look bad. Shame on you, go to your corner. :p
 
So AMD supports part of the DX12 spec better than Nvidia and Nvidia supports part of the DX12 spec better than AMD.

Please tell me how this is different from DX9, DX10 or DX11?

Not sure why there's all this arguing, I really don't.

Do people have such short memories? Do we not remember the rumours that Nvidia was basically asking MS to add some fluff stuff to the DX12 spec that only Nvidia could support.

Do we remember Nvidia basically forcing MS to remove certain DX10 features that sped up hardware but that they didn't support, so ALL gamers had to suffer as a result of Nvidia making a worse card than they should have? Then, when games added support for those features using DX10.1, they PAID developers to actually REMOVE DX10.1 from the game because it made them look bad.

What seems pretty clear is that DX12 is ostensibly based on the GCN architecture: AMD supports effectively every useful performance feature fully, while Nvidia threw their toys out of the pram at basically being asked to support Mantle reincarnate, and got a few additional pointless throwaway features that won't get much support or offer much benefit, so they could market having more features/DX12 support.

That is how the situation reads to me, and how it appears to be playing out in terms of which DX12 features are being used, offering performance improvement, being adopted by devs and what Nvidia appears to be lacking.

These are all the reasons I can't stand Nvidia: they repeatedly, at every stage, hold back features and performance for the sake of marketing and appearing to be the best. One company is stifling new features because they didn't bother to support them, pushing back usage of such features by years: tessellation, and the features that wound up in DX10 instead of DX10.1. Nvidia knew the DX10 spec for ages, failed to achieve it, and asked MS to **** everyone. Nvidia screwed their own customers by not supporting performance-enhancing features, and rather than ride out the bad press that would come with that, they instead got MS to remove those features. They are so utterly anti-consumer I can't stand it, and I get irked by those who blindly support them while Nvidia go out of their way to screw their own customers. It's madness.

Nvidia time and time again choose to inhibit new features, and then they also go the other way: they use over-tessellation as a weapon to win benchmarks while providing THEIR OWN USERS with no benefit. Think about it: you literally can't see any IQ difference beyond a certain level of tessellation. Actively designing the hardware to tessellate more takes transistors away from other functions that could provide a performance benefit. You're paying for the die space that feature takes up, and for the design team putting time and money into making it, JUST to win a benchmark that offers no benefit at all to their own users. How much better would Nvidia cards be if that time and money went into something that increased performance or IQ for their own users? Rather than spending time getting MS to remove useful features from DX10, maybe they could have just supported them. Rather than paying devs to remove DX10.1 from their game, add DX10.1 to their refreshes or the next generation of cards... nope.
 