
Oculus Employees: "Preemption for context switches is best on AMD, Nvidia possibly catastrophic"

It's not something that has been a big focus until very recently - nVidia managed to get pre-emption working for compute stuff in general (on hardware that theoretically shouldn't support it at all), but last I heard they hadn't addressed the situation with dynamic parallelism (which is relevant to async). I wouldn't count them out when it actually comes to it being needed, though. (EDIT: It's also currently only enabled for debugging use AFAIK, so it won't perform well in gaming use as things stand.)

The reason it didn't perform well is that Nvidia told Oxide that Async Compute is not fully implemented in the driver yet.

http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/2130#post_24379702

Regarding Async Compute, a couple of points on this. First, though we are the first D3D12 title, I wouldn't hold us up as the prime example of this feature; there are probably better demonstrations of it. This is a pretty complex topic, and fully understanding it will require significant understanding of the particular GPU in question that only an IHV can provide. I certainly wouldn't hold Ashes up as the premier example of this feature.

We actually just chatted with Nvidia about Async Compute, indeed the driver hasn't fully implemented it yet, but it appeared like it was. We are working closely with them as they fully implement Async Compute. We'll keep everyone posted as we learn more.

EDIT: At the end of the day Maxwell is going to be a liability when it comes to DX12. It's been engineered to be very good at DX11 and that is it, but anyone who really cares about DX12 performance won't want to be on any of the current GPUs if developers actually make proper use of what DX12 can bring to the table (e.g. AMD supported tessellation when nVidia didn't at all, and we all know what happened there).

You, David Kanter and others should not have jumped to conclusions. Maxwell is not going to be a liability when it comes to DX12 once Async Compute is fully implemented in the driver. Nvidia did the same thing with the Star Swarm benchmark: when it came out, AMD bragged that Nvidia's DirectX 11 would never match Mantle performance. Nvidia was very angry at AMD's smear marketing, then spent months fully addressing CPU overhead in the driver. When Nvidia released their DirectX 11 wonder driver, Nvidia's DirectX 11 destroyed AMD's Mantle in Star Swarm.

It will be very interesting to see how well Async Compute performs once it is fully implemented; it will probably be faster than, or at least match, AMD on latency.
 
It will be very interesting to see how well Async Compute performs once it is fully implemented; it will probably be faster than, or at least match, AMD on latency.

It will be interesting to see nvidia owners unhappy, since nvidia kinda lied about async shaders, and their software-based solution will be horrible and much slower than AMD's hardware solution, especially in latency.


You see what I did there? Next time, when you are trying to be a crystal ball, please use facts and not speculation.
 
If it's anything like all the hoo-ha with Ashes of the Singularity, it'll be a driver fix in the short term anyway.

Yeah right, I did not know that you could fix latency with drivers :/ If the hardware is not capable of doing certain things, it is not capable, whatever wonders drivers can do.
And where is that confidence about nvidia drivers coming from? Recently nvidia, for the life of them, cannot provide stable Win 10 drivers for their cards, and some of their cards don't even have a driver. And you think all the async shader saga in Ashes is a coincidence? You think nvidia was caught by surprise by Oxide? It was said that nvidia was quite involved with Oxide over the summer, so they knew what was coming, yet they could not fix their async shader stuff - why, if you think it is easily and quickly fixable? nVidia is also not having a great time in the folding@home project with the new work units, which keep crashing and erroring out on every attempt, while those same WUs are perfectly stable on AMD hardware. And the funny thing is, AMD completely ignores F@H, while nvidia is heavily invested in that project, with a couple of guys constantly checking in with the project for optimisations.
 
This.

I don't know why anyone even begins to care how good the current cards are at DX12. By the time DX12 is mainstream (i.e. by the time a lot of games are coded for it) we will be on the next generation(s) of cards anyway.

I mean, let's face it, most on here will be buying a new card every year or two anyway, so whether Maxwell or Fury can run DX12 well or not is pretty irrelevant.

No we won't; multiple AAA titles are heading out with DX12, and it's being patched into multiple games. This is not DX9-11: the reason uptake took ages back then was that not a single pre-existing card had support for those APIs. Six months after DX11 launched, probably 5% of people who had bought a GPU in the previous 4 years had DX11 support - a very small market for a game dev to target, VERY small, and with no support for that API before it launched.

With DX12, 80+% of people who bought cards in the past 4 years have DX12 support, so game devs can target 80+% of the market instantly. This time you also have Mantle: almost two years of 50+ devs playing around with a low-level API on PC, multiple games released supporting it and multiple mainstream engines supporting it. With a low-level API the whole point is that 99% of the control goes to the game; the game will be largely programmed identically for Mantle or DX12 on the game side, managing its own memory, and it's just the calls to the driver that change. Game devs have been preparing for low-level support on the PC for 18+ months and they already have huge, widespread hardware support.
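To put that "only the driver calls change" point in concrete terms, here is a minimal toy sketch (not real Mantle or D3D12 code; all the class and function names are made up purely for illustration) of engine-side code that owns memory placement and command recording itself, with only a thin submit call differing per backend:

```python
# Toy illustration (not real API code): in a low-level API the game/engine
# owns memory placement and command recording; only the thin submit call
# to the driver differs between backends. All names here are hypothetical
# stand-ins, not actual Mantle or D3D12 entry points.

class FakeDX12Backend:
    name = "DX12"
    def submit(self, command_list):
        print(f"[{self.name}] submit({command_list})")

class FakeMantleBackend:
    name = "Mantle"
    def submit(self, command_list):
        print(f"[{self.name}] submit({command_list})")

class EngineRenderer:
    """Engine-side code: identical regardless of which backend is plugged in."""
    def __init__(self, backend, heap_size):
        self.backend = backend
        self.heap = bytearray(heap_size)   # engine manages its own memory heap
        self.offset = 0

    def alloc(self, size):
        # Explicit sub-allocation: the engine, not the driver, decides placement.
        addr = self.offset
        self.offset += size
        return addr

    def draw_frame(self):
        vb = self.alloc(64 * 1024)          # vertex buffer placement chosen by engine
        cb = self.alloc(256)                # constant buffer placement chosen by engine
        command_list = f"draw(vb@{vb}, cb@{cb})"
        self.backend.submit(command_list)   # only this thin call differs per API

for backend in (FakeDX12Backend(), FakeMantleBackend()):
    EngineRenderer(backend, heap_size=1 << 20).draw_frame()
```

The same engine-side allocation and command recording runs unchanged against either stand-in backend, which is the sense in which Mantle experience carries over to DX12.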

There will probably be a couple of dozen DX12 titles out before the next generation of hardware hits in Q3 next year (at the earliest, really). It's complete nonsense to suggest DX12 isn't important.
 
No we won't; multiple AAA titles are heading out with DX12, and it's being patched into multiple games. This is not DX9-11: the reason uptake took ages back then was that not a single pre-existing card had support for those APIs. Six months after DX11 launched, probably 5% of people who had bought a GPU in the previous 4 years had DX11 support - a very small market for a game dev to target, VERY small, and with no support for that API before it launched.

With DX12, 80+% of people who bought cards in the past 4 years have DX12 support, so game devs can target 80+% of the market instantly. This time you also have Mantle: almost two years of 50+ devs playing around with a low-level API on PC, multiple games released supporting it and multiple mainstream engines supporting it. With a low-level API the whole point is that 99% of the control goes to the game; the game will be largely programmed identically for Mantle or DX12 on the game side, managing its own memory, and it's just the calls to the driver that change. Game devs have been preparing for low-level support on the PC for 18+ months and they already have huge, widespread hardware support.

There will probably be a couple of dozen DX12 titles out before the next generation of hardware hits in Q3 next year (at the earliest, really). It's complete nonsense to suggest DX12 isn't important.

It is not important to nvidia buyers, since nvidia has trained them well to upgrade every year ;) nvidia has broken async shaders in Maxwell, and the VR hardware is kinda weak? No problem, you just need to upgrade next year to next-gen cards which will have everything fixed for you :D Yeah, consumer friendly.

But yeah, regarding the DX12 adoption rate, what you are saying does make sense. All the previous DX versions were exclusive to new cards, hence the longer adoption rate.
 
No we won't; multiple AAA titles are heading out with DX12, and it's being patched into multiple games. This is not DX9-11: the reason uptake took ages back then was that not a single pre-existing card had support for those APIs. Six months after DX11 launched, probably 5% of people who had bought a GPU in the previous 4 years had DX11 support - a very small market for a game dev to target, VERY small, and with no support for that API before it launched.

With DX12, 80+% of people who bought cards in the past 4 years have DX12 support, so game devs can target 80+% of the market instantly. This time you also have Mantle: almost two years of 50+ devs playing around with a low-level API on PC, multiple games released supporting it and multiple mainstream engines supporting it. With a low-level API the whole point is that 99% of the control goes to the game; the game will be largely programmed identically for Mantle or DX12 on the game side, managing its own memory, and it's just the calls to the driver that change. Game devs have been preparing for low-level support on the PC for 18+ months and they already have huge, widespread hardware support.

There will probably be a couple of dozen DX12 titles out before the next generation of hardware hits in Q3 next year (at the earliest, really). It's complete nonsense to suggest DX12 isn't important.

You also forgot to mention that the Xbox One already has games in development with DX12 in mind. All new console games will be using low-level API code, and since the consoles use GCN there is a high chance that some of the big DX12 features, such as async shaders, will be used.
 
You also forgot to mention that the Xbox One already has games in development with DX12 in mind. All new console games will be using low-level API code, and since the consoles use GCN there is a high chance that some of the big DX12 features, such as async shaders, will be used.

I'm very skeptical about console influence on AMD performance on the desktop. I was optimistic about this when the consoles launched: though code will be adapted to suit AMD cards, to this day we still have nvidia involvement with GameWorks and other stuff. So at the moment I am still pessimistic; nVidia will always find a way to twist arms, or pay someone off, and get their works included.
 

Well, for all of AMD's bluster, nvidia are still faster in Ashes even without async compute... and now the dev says they are working with nvidia on getting async working as well.

Now we have another pre-release comment that something isn't yet optimised... These VR headsets are 4-6 months away from being available, so yeah, I think anyone betting on AMD having a very clear advantage in DX12 or VR is going to be disappointed by the time anything reaches an actual release date.
 
No we won't; multiple AAA titles are heading out with DX12, and it's being patched into multiple games. This is not DX9-11: the reason uptake took ages back then was that not a single pre-existing card had support for those APIs. Six months after DX11 launched, probably 5% of people who had bought a GPU in the previous 4 years had DX11 support - a very small market for a game dev to target, VERY small, and with no support for that API before it launched.

With DX12, 80+% of people who bought cards in the past 4 years have DX12 support, so game devs can target 80+% of the market instantly. This time you also have Mantle: almost two years of 50+ devs playing around with a low-level API on PC, multiple games released supporting it and multiple mainstream engines supporting it. With a low-level API the whole point is that 99% of the control goes to the game; the game will be largely programmed identically for Mantle or DX12 on the game side, managing its own memory, and it's just the calls to the driver that change. Game devs have been preparing for low-level support on the PC for 18+ months and they already have huge, widespread hardware support.

There will probably be a couple of dozen DX12 titles out before the next generation of hardware hits in Q3 next year (at the earliest, really). It's complete nonsense to suggest DX12 isn't important.

I don't think so

Game devs did the absolute minimum with Mantle, and all they were concerned about was making the maximum profit.

If game devs are allowed to get away with it they will do the same with DX12 - welcome to the world of broken games.
 
Well, for all of AMD's bluster, nvidia are still faster in Ashes even without async compute... and now the dev says they are working with nvidia on getting async working as well.

Now we have another pre-release comment that something isn't yet optimised... These VR headsets are 4-6 months away from being available, so yeah, I think anyone betting on AMD having a very clear advantage in DX12 or VR is going to be disappointed by the time anything reaches an actual release date.

AMD may not have a clear advantage, but they have certainly brought their API overhead down to around the same level as Nvidia's in their DX12 drivers. We should see the true ability of the hardware, unlike in DX11 where AMD was, and still is, behind.
 
No we won't; multiple AAA titles are heading out with DX12, and it's being patched into multiple games. This is not DX9-11: the reason uptake took ages back then was that not a single pre-existing card had support for those APIs. Six months after DX11 launched, probably 5% of people who had bought a GPU in the previous 4 years had DX11 support - a very small market for a game dev to target, VERY small, and with no support for that API before it launched.

With DX12, 80+% of people who bought cards in the past 4 years have DX12 support, so game devs can target 80+% of the market instantly. This time you also have Mantle: almost two years of 50+ devs playing around with a low-level API on PC, multiple games released supporting it and multiple mainstream engines supporting it. With a low-level API the whole point is that 99% of the control goes to the game; the game will be largely programmed identically for Mantle or DX12 on the game side, managing its own memory, and it's just the calls to the driver that change. Game devs have been preparing for low-level support on the PC for 18+ months and they already have huge, widespread hardware support.

There will probably be a couple of dozen DX12 titles out before the next generation of hardware hits in Q3 next year (at the earliest, really). It's complete nonsense to suggest DX12 isn't important.

Dozens? Going to be hundreds!
No seriously, where are those games?

EDIT: From reddit.
Here are the known DX12 titles and their ETA; feel free to add more if I missed some:
Q4 2015: Star Wars Battlefront (Vulkan or DX12) - I hope they have a server browser!! GRR. Ark: Survival Evolved (due for a DX12 patch; it's in Early Access).
Feb 2016: Deus Ex: Mankind Divided, Mirror's Edge (looking forward to this myself!), Hitman (same engine as Deus Ex, from Square Enix)
Early 2016: Fable, Rise of the Tomb Raider
2016: Ashes of the Singularity

The problem is: are those going to be DX11 games with a few DX12 features bolted on, like back in the day with DX9-10?
The adoption rate is way different, but I still doubt we'll get real DX12 games for a couple of years.
 
There is a difference between DX12 support and DX12 usage in an engine that actually takes advantage of what the API can do...

And at the end of the day, if nVidia cards were totally hopeless at it, given how big a slice of the userbase they currently are, not many developers are going to produce something that alienates the bigger part of their potential audience.
 
I don't think so

Game devs did the absolute minimum with Mantle, and all they were concerned about was making the maximum profit.

If game devs are allowed to get away with it they will do the same with DX12 - welcome to the world of broken games.

Kaap, are you serious? You are comparing Mantle/AMD, who are scraping the bottom of their finances, with DX12 and MS, who have much more influence on game devs.
Mantle was developed as a test vehicle for AMD to introduce a lazy world to low-level APIs. I doubt AMD ever intended to take over the world with Mantle on their own; actually, Mantle did take over the world, in the shape of Vulkan and DX12.
But anyway, there is no point discussing this further, since everyone here has a different crystal ball available ;)
 
It will be interesting to see nvidia owners unhappy, since nvidia kinda lied about async shaders, and their software-based solution will be horrible and much slower than AMD's hardware solution, especially in latency.


You see what I did there? Next time, when you are trying to be a crystal ball, please use facts and not speculation.


And yet your post is full of speculation, with some incorrect facts.

From what we have seen so far, the NVidia solution is lower latency but with only a 32-list capacity per pass; whether this is fully hardware, software, middleware or I-don't-know-where, we just don't know yet, and we may never know.
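As a rough illustration of the shapes being described (the numbers below are invented for the example, not taken from the graphs), here is a toy model where one GPU handles async lists in passes of up to 32 with a small cost per extra pass, and the other runs up to 128 concurrently at a flat cost and then roughly doubles:

```python
# Toy model with made-up numbers, only to illustrate the two curve shapes
# discussed in this thread; not measured data from either vendor.
import math

def stepped_latency(n_lists, batch=32, base_ms=5.0, per_pass_ms=1.7):
    """Latency grows in small steps every `batch` lists (the stepped shape)."""
    passes = math.ceil(n_lists / batch)
    return base_ms + (passes - 1) * per_pass_ms

def flat_then_double(n_lists, concurrent=128, base_ms=10.0):
    """Flat latency up to `concurrent` lists, then extra serialised rounds (the flat shape)."""
    rounds = math.ceil(n_lists / concurrent)
    return base_ms * rounds

for n in (16, 32, 64, 128, 256):
    print(f"{n:4d} lists: stepped = {stepped_latency(n):5.1f} ms   "
          f"flat = {flat_then_double(n):5.1f} ms")
```

With these invented numbers the stepped curve starts lower, roughly meets the flat one around 128 lists, and only creeps up slowly beyond that, which is the behaviour the graphs below are being read as showing.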

[Image: async1.jpg - async compute latency graph]

[Image: async2.jpg - NVidia]

[Image: async3.jpg - AMD]


As for NVidia lying about their async shaders, when did they do that then?
Can you post up a link to their material on async shaders that says something that can be proven to be a lie - and I mean proven, not just "some guy on the internet said so"?

Overall it is way too early to tell anything yet. With only the one benchmark, of something that can certainly give some odd results, I really don't think any of us can say anything is definitely accurate on the subject at this time.
 
bru, use your brain man :D I don't use facts in my speculation, same as Athlon's post does not contain any facts. I just reversed the sides in my 'speculation' to show how ridiculous his post can be ;) Read between the lines.
He is using his crystal ball to speculate without any evidence to confirm his speculations.
Regardless of my joke post, I try not to speculate about how and when nvidia will fix everything in the world and everyone will be happy.

Also, you do realise we are talking about a different latency here.
 
From those graphs I know which I think looks best: a continuous, smooth, constant line, or a line that starts low and gradually gets worse?

That's all I am going to say on this matter, because for me the real talking comes when games start dropping.
 
I agree with Gregster on this. It's nice for those on AMD not planning on upgrading yearly, as it gives more longevity, but those that like high-end cards will likely be upgrading at 14nm anyway. So Nvidia offer the best DX11 performance at the moment, and will possibly offer the best DX12 performance next year with Pascal as well. Let's be honest, most of us will upgrade when the die shrink lands anyway, so DX12 performance today isn't that big of a selling point. If AMD can keep an advantage vs Pascal then that would be great for AMD, but I imagine Pascal will come in guns blazing for DX12. It's only really worth worrying about when there are actually DX12 games.
 
From those graphs I know which I think looks best: a continuous, smooth, constant line, or a line that starts low and gradually gets worse?

That's all I am going to say on this matter, because for me the real talking comes when games start dropping.

Then you've misunderstood the graphs... What they show is that NVIDIA has lower latency dealing with smaller sets, and only reaches the same high level as AMD when dealing with the maximum set size AMD can handle... If you increased the set size again, AMD's latency would double, whereas Nvidia's latency would make another small step up.

What it shows is that if you optimise your code for AMD's set size then nvidia is equal on latency, but if you optimise for nvidia then AMD will be behind
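To make "optimising for a set size" a bit more concrete, here is a hypothetical sketch (the batch sizes are assumptions chosen for the example, not vendor-published limits) of an engine chunking its queued async jobs to a per-GPU batch size:

```python
# Hypothetical illustration only: split queued async jobs into batches sized
# to whatever a given GPU handles best before latency steps up. The batch
# sizes below are assumptions for the example, not published hardware limits.

def batched(jobs, batch_size):
    """Yield the job list in chunks of at most `batch_size`."""
    for i in range(0, len(jobs), batch_size):
        yield jobs[i:i + batch_size]

jobs = [f"compute_job_{i}" for i in range(200)]

for target, batch_size in (("gpu_a", 32), ("gpu_b", 128)):
    passes = list(batched(jobs, batch_size))
    print(f"{target}: {len(jobs)} jobs submitted in {len(passes)} batches of <= {batch_size}")
```

Tuning for the smaller batch size costs nothing on the hardware that handles bigger sets, but tuning for the bigger one pushes the smaller-set hardware into extra passes, which is the asymmetry being argued about here.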
 
I agree with Gregster on this. It's nice for those on AMD not planning on upgrading soon, as it gives more longevity, but those that like high-end cards will likely be upgrading at 14nm anyway. So Nvidia offer the best DX11 performance at the moment, and will possibly offer the best DX12 performance next year with Pascal as well. Let's be honest, most of us will upgrade when the die shrink lands anyway, so DX12 performance today isn't that big of a selling point. If AMD can keep an advantage vs Pascal then that would be great for AMD, but I imagine Pascal will come in guns blazing for DX12.

On paper both Nvidia's Pascal and AMD's Greenland look about the same, so it's impossible to know who will come out on top. Driver-wise AMD seem to be on par with Nvidia in DX12, so it will be the hardware that is the decider.
 