AMD’s DirectX 12 Advantage Explained – GCN Architecture More Friendly To Parallelism Than Maxwell

DX11 dying? You cannot be serious!!!

The only API that has died recently is Mantle. RIP.

As to NVidia drivers being poor recently, I would not argue there.

Having said that, AMD drivers are just as bad, for other reasons.

Witcher 3 on AMD cards is absolutely dreadful.

Witcher 3 is not that great on NVidia cards, but it is more than 4x as fast as on AMD cards at 2160p.

Witcher 3 maxed out at 2160p:

On my AMD cards: 18 fps

On my NVidia cards: 80 fps

Don't worry mate, it lives on in Vulkan, where we can all remember it! And it's paved the way for all this API goodness, so it will always be remembered. DX11 hasn't died yet, true, but it's about to die a slow and painful death. And there's nothing wrong with AMD's single-card drivers; there's just too much confusion around AMD. Where AMD loses out is poor or missing drivers for CrossFire. nVidia, on the other hand, well, just look at the driver thread: release after release of awful drivers, with people having to roll back to much earlier drivers to get something stable.

But it's not all bad for either camp really, is it? It's just over-exaggerated, though it is slightly worse for nVidia users at the moment, driver-wise.
 
Nvidia had access to the code-base for over a year and even submitted new shaders when they noticed some were running slow on their hardware.

They should have known how things would be for their own hardware long before the public benchmark dropped. They have had internal benchmarks to work with for a long while now.

Very true, but with DX12 the onus is on the developer to optimise for a specific architecture rather than the GPU manufacturer. You have to wonder how much effort was put into the Nvidia side of things given the development history and sponsorship of the title.

I think what we are seeing is something close to a best/worst-case scenario, depending on whether you have a green or red hat.
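
To put the "onus on the developer" point in concrete terms, here is a rough sketch of the kind of bookkeeping that used to be the driver's problem under DX11 and is now the application's under DX12. This is generic D3D12, not Oxide's actual code; the device and back buffer are assumed to have been created elsewhere and are passed in as placeholders.

    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // 'device' and 'backBuffer' are assumed to be created elsewhere (swap chain
    // setup omitted); this only shows the explicit state tracking DX12 demands.
    void TransitionBackBuffer(ID3D12Device* device, ID3D12Resource* backBuffer)
    {
        ComPtr<ID3D12CommandAllocator> alloc;
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT, IID_PPV_ARGS(&alloc));

        ComPtr<ID3D12GraphicsCommandList> cmdList;
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT, alloc.Get(), nullptr,
                                  IID_PPV_ARGS(&cmdList));

        // In DX11 the driver tracked resource states for you; in DX12 the
        // application must issue the transition itself, every frame, in the right place.
        D3D12_RESOURCE_BARRIER barrier = {};
        barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
        barrier.Transition.pResource   = backBuffer;
        barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
        barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_PRESENT;
        barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_RENDER_TARGET;
        cmdList->ResourceBarrier(1, &barrier);
        cmdList->Close();
    }

Get that sort of thing wrong, or tune it only for one vendor's fast paths, and the driver can no longer quietly fix it for you, which is why sponsorship of a title suddenly matters a lot more.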
 
You have to wonder how much effort was put into the Nvidia side of things given the development history and sponsorship of the title.

AOTS is a massive, much-needed PR win for AMD, leaving Nvidia scrambling for damage control!

The AMD/Oxide collaboration using fully tuned Mantle/DX12 API builds got them the win. AMD's gains can be argued over in here, but the non-tech-savvy gamers (a far, far bigger target audience) will only see the massive gains and take note, as most don't buy anything above a 970/390.

DX12 the onus is on the developer to optimise for a specific architecture rather than the GPU manufacturer.

It's the same thing :p

The onus should be on the developer to optimise for DX12 not a specific architecture.

But it won't work that way with AAA titles (as Ashes has shown). Game sponsorship is going to decide the DX12 winners, as the sponsored side will have the clear advantage with optimised code; the other side will basically get tanked.

The GameWorks/Mantle/FreeSync-vs-G-Sync/ramgate/FCAT/VRAM rows will pale compared to what's coming with DX12 + GameWorks :eek:

And this is where it will get very, very interesting. Nvidia are churning out bucketloads of titles, and GE titles are very thin on the ground. I know there are a few coming, but if AMD don't get the titles rolling in to match, Nvidia will wipe the floor with them!
 
And this is where it will get very, very interesting. Nvidia are churning out bucketloads of titles, and GE titles are very thin on the ground. I know there are a few coming, but if AMD don't get the titles rolling in to match, Nvidia will wipe the floor with them!

The best thing about GE titles is that they work great on all cards! Thanks AMD!! ;) :D
 
Lol at people seemingly shocked by this news. It's obvious looking at AMD's GPU architecture that it's more suited to parallelism than Nvidia's. Nvidia has done the thing they generally do of planning for RIGHT now, whereas AMD planned for the future with GCN. (You only have to look at Kepler and Maxwell, and at GCN's performance now compared to when Kepler released, to see proof of that.)

By the time DX12 games are out, Pascal will be too, and Nvidia will have similar DX12 performance to AMD as it will be using a more parallel architecture.
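
For what "more friendly to parallelism" actually means at the API level: DX12 lets the application create a compute queue alongside the graphics queue, and GCN's ACEs are built to run both streams at the same time. A minimal generic sketch follows (same includes and assumed device as the snippet above; again, this is not code from Ashes):

    // Creates a graphics (direct) queue plus an independent compute queue
    // (async compute) and shows the fence the app must use to order them.
    void CreateQueues(ID3D12Device* device)
    {
        D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
        gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
        ComPtr<ID3D12CommandQueue> gfxQueue;
        device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

        D3D12_COMMAND_QUEUE_DESC computeDesc = {};
        computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
        ComPtr<ID3D12CommandQueue> computeQueue;
        device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

        // Work on the two queues may overlap on the GPU; ordering where it
        // matters is the application's job, done with a fence.
        ComPtr<ID3D12Fence> fence;
        device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
        computeQueue->Signal(fence.Get(), 1); // compute marks its batch as finished
        gfxQueue->Wait(fence.Get(), 1);       // graphics waits before using the results
    }

How much real overlap you actually get out of that is down to how the hardware schedules the two queues, which is exactly where the GCN-versus-Maxwell argument comes in.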
 
I think it is great that the Fury X matches Nvidia's 980Ti in this and good to see.

Except in cases where there is excessive tessellation, and when the game is purely GPU bound, the Fury X has always matched the 980Ti.

But it now matches the 980Ti across all resolutions, when it didn't before, which is what people are getting their knickers in a twist over.
 
What are the chances Fable Legends will show a similar result to AoTS? It doesn't seem to show any real DX12 features, such as excessive draw calls, and looks like any other DX11 game.
 
This doesn't surprise me, IF true.

AMD have limited resources to work with compared to Intel and Nvidia.

It seems a while ago they made the decision to plan for the future with their CPU and GPU architectures. They put all their eggs in the basket of multi-core CPUs and a DX12-era graphics landscape.

Hence why, when they did it, the FX CPU line was poor and AMD's GPUs were seemingly behind Nvidia's.

However, now they will start to see some of the fruits as we move into the DX12 landscape.

Intel and Nvidia have the resources to adapt, though, and will just change their model to maintain their respective crowns.

Mantle = DX12
Console win
Windows 10

AMD's plan has been to prepare for DX12 and to create the technology to excel there. Building the software for their own hardware alongside DX12 means they already have the tech for it, while Nvidia is still selling old technology with the 980Ti. Pascal won't save them there.
 
Except in cases where there is excessive tessellation, and when the game is purely GPU bound, the Fury X has always matched the 980Ti.

But it now matches the 980Ti across all resolutions, when it didn't before, which is what people are getting their knickers in a twist over.

This was/is just not true. It would have sold many, many more and not taken such a pasting if it "always matched" the Ti. :rolleyes:

However, it's great that in DX12 it might actually catch up to or beat the competition. :)
 
All the planning for the future in the world isn't going to help if the company doesn't survive to actually see the future. ;)

As for 'Pascal won't save NVidia': well, it might not do what 'you' expect it to do, but I'm pretty sure it will do what NVidia want it to do. After all, it is not as if they don't know how to build a GPU, or anything.
 
It's been known for a good year or two now that AMD have been targeting 2016 for a return to form in their products. They look to have a good DX12 product in GCN, and Zen sounds like it could bring them back into the mix with Intel.

They have to nail this on both fronts for a good turnaround, but it is doable.
 
This thread is comedy gold, so much fail in one place.

I'll simply wait for some actual DX12 benchmarks to come out.
 
I think it is great that the Fury X matches Nvidia's 980Ti in this and good to see.
Seconded, I don't understand all this fuss around AMD and Nvidia's DX12 performance vs DX11 from some on the Nvidia side.

Nvidia's DX11 performance seemed to improve significantly after Mantle was integrated into BF4 (my average fps in Crysis 3 jumped by 15-20 after the Nvidia "Mantle-like" driver). It seems reasonable to conclude that Nvidia were doing some hardcore, resource-intensive optimisation work to get around the limitations of dx11 (they also like to boast how their driver is more complicated than the Windows OS).

If you read the posts on overclock.net thoroughly, it's said that due to architectural differences AMD could not do the same driver-level optimisations and shader replacements as Nvidia in DX11 and get the same benefits, even if they had the resources to do so.

All AMD has managed to do in terms of DX12 performance is match Nvidia at all resolutions or beat them by the odd handful of frames; this is hardly blowing them out of the water and is very good news for consumers as it means a more competitive landscape and should be welcomed in my opinion.
 
Seconded, I don't understand all this fuss around AMD and Nvidia's DX12 performance vs DX11 from some on the Nvidia side.

Nvidia's DX11 performance seemed to improve significantly after Mantle was integrated into BF4 (my average fps in Crysis 3 jumped by 15-20 after the Nvidia "Mantle-like" driver). It seems reasonable to conclude that Nvidia were doing some hardcore, resource-intensive optimisation work to get around the limitations of dx11 (they also like to boast how their driver is more complicated than the Windows OS).

If you read the posts on overclock.net thoroughly, it's said that due to architectural differences AMD could not do the same driver-level optimisations and shader replacements as Nvidia in DX11 and get the same benefits, even if they had the resources to do so.

All AMD has managed to do in terms of DX12 performance is match Nvidia at all resolutions or beat them by the odd handful of frames; this is hardly blowing them out of the water and is very good news for consumers as it means a more competitive landscape and should be welcomed in my opinion.

This! AMD put more emphasis and focus on Mantle and their hardware than they did on DX11, as at the time they weren't winning against nVidia, so from what I can see they took a bet on the future. As we all know, DX12 was being developed at the same time as Mantle, and AMD knew about this because they approached Microsoft, but Microsoft didn't want anything from Mantle. So AMD fell behind in DX11 optimisations, i.e. in current games, basically.

nVidia knew about Mantle and took the jump to beat AMD in the DX11 department, which would also make Mantle look bad! If nVidia could make their cards look better performing, or at least make Mantle's gains look minimal, it would put Mantle in a light that's kinda meh... and nVidia sort of accomplished this task!

That's why nVidia have lower driver overhead on the DX11 CPU side of things and get better performance gains. Many people overlook this fact, and now that DX12 is starting to come forward it appears AMD's hardware and drivers are taking a leap forward compared to nVidia's, which to me is no surprise! I won't get into the tessellation side of things in certain games which were over-tessellated, where AMD hardware isn't as good as nVidia's, lol. But it's the same thing as above really.
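
On the CPU overhead point, the big structural change is that DX11 work typically funnels through a single immediate context plus the driver's own worker threads, which is where Nvidia's heavy driver optimisation paid off, whereas DX12 lets the game record command lists on several threads and submit them in one go. A rough sketch, again generic D3D12 rather than anything from a shipping engine; the device and graphics queue are assumed as in the earlier snippets, and RecordDrawCalls is a placeholder for the game's own recording code:

    #include <thread>
    #include <vector>

    // Placeholder: the game's own scene recording for one chunk of the frame.
    void RecordDrawCalls(ID3D12GraphicsCommandList* list);

    // 'device' and 'gfxQueue' assumed to exist as in the earlier snippets.
    void RecordFrameInParallel(ID3D12Device* device, ID3D12CommandQueue* gfxQueue)
    {
        const int kThreads = 4;
        std::vector<ComPtr<ID3D12CommandAllocator>>    allocs(kThreads);
        std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(kThreads);
        std::vector<std::thread> workers;

        for (int i = 0; i < kThreads; ++i) {
            device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT, IID_PPV_ARGS(&allocs[i]));
            device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT, allocs[i].Get(), nullptr,
                                      IID_PPV_ARGS(&lists[i]));
            workers.emplace_back([&, i] {
                RecordDrawCalls(lists[i].Get()); // each thread fills its own command list
                lists[i]->Close();
            });
        }
        for (auto& t : workers) t.join();

        // One submission of everything that was recorded in parallel.
        std::vector<ID3D12CommandList*> raw;
        for (auto& l : lists) raw.push_back(l.Get());
        gfxQueue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
    }

With a thin DX12 driver doing far less behind the scenes, most of the CPU-side advantage Nvidia built up in DX11 stops mattering, and the comparison shifts back towards the hardware.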
 
If we are going to trust a random person's PoV from overclock.net, this discussion wouldn't be fair without this:

http://www.overclock.net/t/1569897/...singularity-dx12-benchmarks/490#post_24325434


The pertinent points:
if your DX12 performance isn't better than the drivers' DX11 performance, you're doing it wrong.


this is effectively best-case (again, within reason) AMD GCN performance vs. worst-case (mostly -- it's probably not intentionally slowing down Nvidia hardware) Nvidia GM20x performance. All you need to do is look at the DX11 performance. That is the bar to clear, on both sides. AMD set the bar very low because they didn't optimize their DX11 drivers much at all. Nvidia set the DX11 bar as high as possible to show where the developers need to start, not where they should finish.

I'm also pretty skeptical of some of the claims and language coming from Oxide. (http://www.oxidegames.com/2015/08/16/the-birth-of-a-new-api/) "All IHVs have had access to our source code for over a year, and we can confirm that both Nvidia and AMD compile our very latest changes on a daily basis and have been running our application in their labs for months." Sure, but it's an AMD game and so AMD is actively working with the devs while Nvidia isn't. "Some optimizations that the drivers are doing in DX11 just aren’t working in DX12 yet." Or, Nvidia has better DX11 driver optimizations than we have DX12 code optimizations. "This in no way affects the validity of a DX12 to DX12 test, as the same exact workload gets sent to everyone’s GPUs." Um... see above: you're running AMD-tuned code on Nvidia hardware, and then saying this doesn't affect the validity? I call bunk.


In other words, I would be extremely hesitant about making blanket statements regarding what DX12 will and won't do for various GPU architectures based on a single game from a developer that is actively receiving help from only one of the GPU vendors. If we were looking at a game with an Nvidia TWIMTBP logo and Nvidia was doing great while AMD was struggling, I'd be saying the exact same thing. Looking at high level descriptions of the hardware and theoretical GFLOPS and using that to back up the current performance is silly, because the current performance is already skewed. Why is AMD performing better on a game with an AMD logo that isn't even in public beta yet ? (And remember that the beta stage is when a lot of optimizations take place!) Because if it was anything else, we would be really dismayed.


Why isn't Oxide actively monitoring the performance of their shaders on all GPUs? Why did Nvidia have to do the work? Oxide is the developer, and they should be largely held accountable for their performance.

As for AMD's optimized shader code, the only requirement is that it not perform worse on Nvidia hardware than the original Oxide shader code. But it seems like the level of optimizations Oxide has made without help from AMD may not be all that great to begin with. And parts of the engine can and will change, up to and beyond the time when the game ships.

It feels like more than anything, this was Oxide yelling "FIRST!!11!!" and posting a "real-world DX12 gaming benchmark". But like any and all gaming benchmarks, the only thing the benchmark truly shows is how fast this particular game -- at this particular point in time -- runs on the current hardware and drivers.

Ashes is looking more interesting as a way to see what type of CPU is the recommended minimum than as a way of evaluating the AMD and Nvidia GPUs against each other. Hell, the instructions for the benchmark even recommended testing it on AMD R9 Fury X, 390, 380, and 370... but on the Nvidia side, only the 980 Ti is recommended. They know already that their current code is so badly optimized on Nvidia hardware that they only want the press to look at the fastest Nvidia GPUs.
 
Damage control misdirecting the uninformed. Oxide have been working with Nvidia on it for a year, Oxide even used code Nvidia submitted to them, and Nvidia also have access to the source code.

Yes, Oxide has even used some code to improve shader performance for Nvidia, which is more than can be said for some devs who used GameWorks. Some are even banned from using AMD code due to Nvidia restrictions.
 