
AMD’s DirectX 12 Advantage Explained – GCN Architecture More Friendly To Parallelism Than Maxwell

Is Razor1 connected with Nvidia?

He works for a software company from what I can gather, with no links to a hardware OEM, unlike the other guy.

Razor1 has a long posting history on H Forum and B3D; you can go through his posts if you wish. Mahigan has come out of nowhere, fully armed with AMD PR and slides, has the ability to get direct answers from Oxide, tries to be super accurate on AMD tech details whilst making incorrect assumptions on the Nvidia side of things, and is copy-pasting the same stuff all over the place.
 

He has also given his history: he stated that he did work for AMD, but he doesn't any more.
 
He worked for ATI, not AMD, and he left in 2004. Yes, he got one or two things wrong when it came to NV, but he got most things right, so I'm not going to dismiss everything he said over such minor things when he is right overall, looking at the big picture.
 
A shame, that, as it was interesting reading, but he got things clearly wrong about Nvidia, so that kinda spoilt it for me. Razor1 has been on the money every time, certainly knows his stuff, and cleared up a few bits that I didn't have a clue about.
 
He worked for ATI, not AMD, and he left in 2004. Yes, he got one or two things wrong when it came to NV, but he got most things right, so I'm not going to dismiss everything he said over such minor things when he is right overall, looking at the big picture.

Yeah, but let's ignore it when the other guy is wrong too. So, using that special logic, we must ignore both of them now, since they are 100% incorrect now.

Both companies are going to have strengths and weaknesses in hardware going into DX12.

We saw it with DX8, DX9, DX10 and DX11: games targeted specific features depending on the dev and the sponsorship.

But it's also funny how one guy must be PR and the other guy must not be. E-peen wars and all that jazz are sooooo funny.

:D

I still predict the following: both sides will say their uarch is better, then this will drop:

http://forums.overclockers.co.uk/showthread.php?t=18689120

It will perform better on Maxwell, of course. Then another game will perform better on AMD hardware, and then the argument will continue.

Then Pascal and Arctic Islands will drop and offer better DX12 and VR performance by the time we get enough DX12 games, and everyone will forget the previous cards! :p


:p
 

Yep, neither of them is 100% correct. I have read both of their posts across many forums, and I even had to pick Razor1 up on one of his assumptions over at Rage3D, which he admitted he got wrong.
 
A shame, that, as it was interesting reading, but he got things clearly wrong about Nvidia, so that kinda spoilt it for me. Razor1 has been on the money every time, certainly knows his stuff, and cleared up a few bits that I didn't have a clue about.

Were you reading the same thread as everyone else? Razor1 backtracked so far on a number of occasions I thought he would end up on HardOCP!

Srsly though, they are both right and both wrong, and both admitted as much.
 
By the time DX12 becomes common in triple-A game titles, the cards that we have now will be as useless as an ashtray on a motorbike. Fury X and TX/980 Ti will be the things sold on the big auction site or via the MM on here.
 
[Various] Ashes of the Singularity DX12 Benchmarks - Page 121
[Various] Ashes of the Singularity DX12 Benchmarks - Page 122


I suspect that one thing that is helping AMD on GPU performance is that D3D12 exposes Async Compute, which D3D11 did not. Ashes uses a modest amount of it, which gave us a noticeable perf improvement. It was mostly opportunistic, where we just took a few compute tasks we were already doing and made them asynchronous; Ashes really isn't a poster child for advanced GCN features.

Our use of Async Compute, however, pales in comparison to some of the things the console guys are starting to do. Most of those haven't made their way to the PC yet, but I've heard of developers getting 30% GPU performance by using Async Compute. Too early to tell, of course, but it could end up being pretty disruptive in a year or so as these GCN-built and optimized engines start coming to the PC. I don't think Unreal titles will show this very much, though, so likely we'll have to wait and see. Has anyone profiled Ark yet?

Personally, I think one could just as easily make the claim that we were biased toward Nvidia, as the only vendor-specific code is for Nvidia, where we had to shut down async compute. By vendor-specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature was functional, but attempting to use it was an unmitigated disaster in terms of performance and conformance, so we shut it down on their hardware.

AFAIK, Maxwell doesn't support Async Compute, at least not natively. We disabled it at the request of Nvidia, as it was much slower to try to use it than to not.

Whether or not Async Compute is better is subjective, but it definitely does buy some performance on AMD's hardware. Whether it is the right architectural decision for Maxwell, or is even relevant to its scheduler, is hard to say.

Also, the developer clearly states that nVIDIA, AMD and Intel are working closely together, with AMD not complaining even though there is a PR deal between them and the publisher. This isn't a case of "let's make it all run right on AMD so that nVIDIA looks bad" or some Crysis 2 style conspiracy.
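
For anyone wondering what "took a few compute tasks we were already doing and made them asynchronous" actually looks like in D3D12 terms, here's a minimal sketch, my own illustration and not Oxide's code (SubmitAsyncCompute and RecordComputeWork are made-up names, and device/fence creation and error handling are omitted). The gist is that the work goes on a second COMPUTE-type queue that can overlap with graphics, with a fence only where the results are consumed:

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Hypothetical stand-in for a compute pass the engine already had
// (e.g. a post-processing or simulation dispatch).
void RecordComputeWork(ID3D12GraphicsCommandList* /*cmdList*/) { /* Dispatch(...) etc. */ }

// Submit a compute workload on its own COMPUTE-type queue so the driver and
// hardware can overlap it with graphics, instead of serialising everything
// on the direct (graphics) queue. Error handling omitted for brevity.
void SubmitAsyncCompute(ID3D12Device* device,
                        ID3D12CommandQueue* graphicsQueue,
                        ID3D12Fence* fence,
                        UINT64& fenceValue)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;   // the "async" queue

    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

    ComPtr<ID3D12CommandAllocator> alloc;
    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_COMPUTE,
                                   IID_PPV_ARGS(&alloc));
    ComPtr<ID3D12GraphicsCommandList> cmdList;
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_COMPUTE,
                              alloc.Get(), nullptr, IID_PPV_ARGS(&cmdList));

    RecordComputeWork(cmdList.Get());
    cmdList->Close();

    ID3D12CommandList* lists[] = { cmdList.Get() };
    computeQueue->ExecuteCommandLists(1, lists);

    // The graphics queue waits only at the point where it actually consumes
    // the compute results; until then both queues can run concurrently,
    // which is where GCN's ACEs come into play.
    computeQueue->Signal(fence, ++fenceValue);
    graphicsQueue->Wait(fence, fenceValue);
}
```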

By the time DX12 becomes common in triple-A game titles, the cards that we have now will be as useless as an ashtray on a motorbike. Fury X and TX/980 Ti will be the things sold on the big auction site or via the MM on here.

Most likely they'll do just fine on common resolution(s).
 
Kollock (Oxide):
AFAIK, Maxwell doesn't support Async Compute, at least not natively. We disabled it at the request of Nvidia, as it was much slower to try to use it than to not.

Whether or not Async Compute is better is subjective, but it definitely does buy some performance on AMD's hardware. Whether it is the right architectural decision for Maxwell, or is even relevant to its scheduler, is hard to say.

Kollock (Oxide):
Wow, there are lots of posts here, so I'll only respond to the last one. The interest in this subject is higher than we thought. The primary evolution of the benchmark is for our own internal testing, so it's pretty important that it be representative of the gameplay. To keep things clean, I'm not going to make very many comments on the concept of bias and fairness, as it can completely go down a rat hole.

Certainly I could see how one might think that we are working closer with one hardware vendor than the other, but the numbers don't really bear that out. Since we've started, I think we've had about 3 site visits from Nvidia, 3 from AMD, and 2 from Intel (and 0 from Microsoft, but they never come visit anyone ;(). Nvidia was actually a far more active collaborator over the summer than AMD was; if you judged from email traffic and code check-ins, you'd draw the conclusion we were working closer with Nvidia rather than AMD. ;) As you've pointed out, there does exist a marketing agreement between Stardock (our publisher) and AMD for Ashes. But this is typical of almost every major PC game I've ever worked on (Civ 5 had a marketing agreement with Nvidia, for example). Without getting into the specifics, I believe the primary goal of AMD is to promote D3D12 titles, as they have also lined up a few other D3D12 games.

If you use this metric, however, given Nvidia's promotions with Unreal (and integration with GameWorks), you'd have to say that every Unreal game is biased, not to mention virtually every game that's commonly used as a benchmark, since most of them have a promotion agreement with someone. Certainly, one might argue that Unreal being an engine with many titles should give it particular weight, and I wouldn't disagree. However, Ashes is not the only game being developed with Nitrous. It is also being used in several additional titles right now, the only announced one being the Star Control reboot. (Which I am super excited about! But that's a completely different topic. ;))

Personally, I think one could just as easily make the claim that we were biased toward Nvidia, as the only vendor-specific code is for Nvidia, where we had to shut down async compute. By vendor-specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature was functional, but attempting to use it was an unmitigated disaster in terms of performance and conformance, so we shut it down on their hardware. As far as I know, Maxwell doesn't really have Async Compute, so I don't know why their driver was trying to expose that. The only other thing that is different between them is that Nvidia falls into Tier 2 class binding hardware instead of Tier 3 like AMD, which requires a little bit more CPU overhead in D3D12, but I don't think it ended up being very significant. This isn't a vendor-specific path, as it's responding to capabilities the driver reports.

From our perspective, one of the surprising things about the results is just how good Nvidia's DX11 perf is. But that's a very recent development, with huge CPU perf improvements over the last month. Still, DX12 CPU overhead is far, far better on Nvidia, and we haven't even tuned it as much as DX11. The other surprise is the min frame times, with the 290X beating out the 980 Ti (as reported on Ars Technica). Unlike DX11, minimum frame times are mostly an application-controlled feature, so I was expecting them to be close to identical. This would appear to be GPU-side variance, rather than software variance. We'll have to dig into this one.

I suspect that one thing that is helping AMD on GPU performance is that D3D12 exposes Async Compute, which D3D11 did not. Ashes uses a modest amount of it, which gave us a noticeable perf improvement. It was mostly opportunistic, where we just took a few compute tasks we were already doing and made them asynchronous; Ashes really isn't a poster child for advanced GCN features.

Our use of Async Compute, however, pales in comparison to some of the things the console guys are starting to do. Most of those haven't made their way to the PC yet, but I've heard of developers getting 30% GPU performance by using Async Compute. Too early to tell, of course, but it could end up being pretty disruptive in a year or so as these GCN-built and optimized engines start coming to the PC. I don't think Unreal titles will show this very much, though, so likely we'll have to wait and see. Has anyone profiled Ark yet?

In the end, I think everyone has to give AMD a lot of credit for not objecting to our collaborative effort with Nvidia even though the game had a marketing deal with them. They never once complained about it, and it certainly would have been within their rights to do so. (Complain, anyway; we would have still done it. ;))

--
P.S. There is no war of words between us and Nvidia. Nvidia made some incorrect statements, and at this point they will not dispute our position if you ask their PR. That is, they are not disputing anything in our blog. I believe the initial confusion was because Nvidia PR was putting pressure on us to disable certain settings in the benchmark; when we refused, I think they took it a little too personally.
http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/1200#post_24356995
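
Worth pulling out the distinction Kollock draws there: the only truly vendor-specific path (checking the adapter's Vendor ID to shut async compute off on Nvidia) versus simply responding to capabilities the driver reports (the Tier 2 vs Tier 3 binding difference). A minimal sketch of the two mechanisms, again my own illustration rather than Oxide's code; RenderCaps and QueryCaps are made-up names, while the PCI vendor IDs and the D3D12 calls are standard:

```cpp
#include <d3d12.h>
#include <dxgi.h>

// Standard PCI vendor IDs.
constexpr UINT kVendorNvidia = 0x10DE;
constexpr UINT kVendorAmd    = 0x1002;

// Made-up container for what the renderer decides at startup.
struct RenderCaps {
    bool useAsyncCompute;
    D3D12_RESOURCE_BINDING_TIER bindingTier;
};

RenderCaps QueryCaps(IDXGIAdapter1* adapter, ID3D12Device* device)
{
    RenderCaps caps = {};

    // 1) Vendor-specific path: per the quote, the one place Ashes keys off
    // the Vendor ID is to turn async compute OFF on Nvidia, even though
    // the driver reported the feature as functional.
    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);
    caps.useAsyncCompute = (desc.VendorId != kVendorNvidia);

    // 2) Capability-driven path: the resource binding tier is whatever the
    // driver reports (Tier 2 on Maxwell, Tier 3 on GCN per the quote), and
    // the renderer adapts without ever checking who made the GPU.
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                &opts, sizeof(opts));
    caps.bindingTier = opts.ResourceBindingTier;

    return caps;
}
```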
 
Guess what that was.....
And people will still continue to say GameWorks is for the "benefit" of PC gaming regardless... despite the fact that all these so-called "innovations" and "features" seem to bring little benefit, while causing games to struggle with the most important and fundamental thing: running properly, without dodgy performance issues.

Now that DX12 has launched, I hope Nvidia focuses more on making the most of it than on playing with proprietary GameWorks features, except maybe for PhysX.
 