Doom to use async compute

The developer is saying that the Xbone version is also using async compute; if it were Xbone only, they would not have used the word "also". So are you stating that the developer is wrong and that it has been removed?

How old is that quote? From what I've read there were changes made after Nvidia partnered up with them for the PC version. Presumably replacing async compute was one of them.
 
About 7 days. :)
http://tombraider.tumblr.com/post/140859222830/dev-blog-bringing-directx-12-to-rise-of-the-tomb

Feel free to search the exact quote; you won't find anything older. Edit: Nope, found one from February. Still, unless anyone can state otherwise, I'll take the word of the developer over that of the PR machine of a company that has lied about developer intentions in the recent past.
Edit 2: Hah, that was Google being stupid; I still haven't found an occurrence of that quote older than 7 days.
 
Clearer than the words of the developer stating that they use it.


Such as hobbling Hitman so that a 390X beats a 980 Ti in DX11. ;)

Is it right that the 390X outperforms the Ti in Hitman?
If so, that's great news.
It's called getting a taste of your own medicine.
AMD doing something other than just talk is long overdue.
 
Matt, I heard that Hitman uses async compute, yet AMD cards see nearly zero performance uplift going from DX11 to DX12 :) What's going on there?

Most likely a buggy game that needs a few patches. They redid Hitman several times, probably because they didn't know what direction to take the game in, and now it's episodic just to get something out of the door.

Personally I'll be waiting until the entire game is out before paying too much attention to it.
 
Yeah, I haven't played Absolution yet, so no rush to get this Hitman ;)
 
Is it right that the 390X outperforms the Ti in Hitman?
If so, that's great news.
It's called getting a taste of your own medicine.
AMD doing something other than just talk is long overdue.

Oh I agree, no arguments from me. It's about time AMD started to fight fire with fire. Plus the hypocrisy from some about it is purely delicious.
 
About 7 days. :)
http://tombraider.tumblr.com/post/140859222830/dev-blog-bringing-directx-12-to-rise-of-the-tomb

Feel free to search the exact quote; you won't find anything older.

I asked because I thought that statement from a dev was discussed somewhere on this forum ages ago.

As I've said in other threads, what Nvidia has been doing is what it should be doing. It ain't pretty, but it's business and it's cut-throat. If I was running the show I'd do it, and so would everyone else.
Up to now AMD was acting like it was rolling over and taking it; this response is long overdue.
 
Devs would be stupid not to use async compute when a large proportion of sales come from the console market. As more games show the gains that can be achieved, no doubt it will become standard practice in all game development.

They'd be stupid to use it when there's only about 20 people who are able to use it on the PC. :p

I'd expect it to be stripped out of just about every port if Nvidia haven't got it on Pascal.
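
To make it a bit more concrete, here's roughly what "using async compute" looks like on the API side in D3D12: a second compute-only queue alongside the usual graphics queue, with a fence to keep the two in step. This is just an illustrative sketch under my own assumptions (the function and variable names are made up, and it presumes you already have a device and recorded command lists), not anyone's actual engine code. Stripping async compute from a port more or less comes down to submitting that same compute work on the direct queue instead.

```cpp
// Hypothetical sketch of the async compute submission pattern in D3D12.
// Assumes a valid ID3D12Device and already-recorded command lists; the queues
// and fence would normally be created once at startup, not per call.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void SubmitWithAsyncCompute(ID3D12Device* device,
                            ID3D12CommandList* gfxList,
                            ID3D12CommandList* computeList)
{
    // The normal graphics ("direct") queue every D3D12 renderer already has.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // A second, compute-only queue. On hardware that can schedule the two
    // concurrently (e.g. GCN's ACEs) this work overlaps with the graphics
    // work; otherwise it simply runs one after the other.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // Kick off both streams of work.
    gfxQueue->ExecuteCommandLists(1, &gfxList);
    computeQueue->ExecuteCommandLists(1, &computeList);

    // Fence so the graphics queue waits (GPU-side) for the compute results
    // before any pass that consumes them.
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    computeQueue->Signal(fence.Get(), 1);
    gfxQueue->Wait(fence.Get(), 1);
}
```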
 
About 7 days. :)
http://tombraider.tumblr.com/post/140859222830/dev-blog-bringing-directx-12-to-rise-of-the-tomb

Feel free to search the exact quote; you won't find anything older. Edit: Nope, found one from February. Still, unless anyone can state otherwise, I'll take the word of the developer over that of the PR machine of a company that has lied about developer intentions in the recent past.
Edit 2: Hah, that was Google being stupid; I still haven't found an occurrence of that quote older than 7 days.

I think we need to be very careful reading that quote. Anyone thinking that async compute is currently running in TR is not reading what has been written and is in fact projecting their thoughts onto what has been written.

Below is the full section

"The above advantage we feel is the most important one for Rise of the Tomb Raider, but there are many more advantages that make us excited about DirectX 12. Another big feature, which we are also using on Xbox One, is asynchronous compute."

Breaking that down

"The above advantage we feel is the most important one for Rise of the Tomb Raider,
(This refers to everything above that line and all the improvements in TR from DX12 to date, among which, you will note, async compute was never mentioned.)

"but there are many more advantages that make us excited about DirectX 12"
(This is the part we are not reading; it puts a full stop on what they have done with DX12 so far and lets them talk about DX12 in a general sense.)

"Another big feature, which we are also using on Xbox One, is asynchronous compute."

This only refers to async compute in a general sense, as a big feature of DX12, and they go on to make a connection to its use on the Xbox One.

Will async compute come to TR? Maybe, but Nvidia has to get its driver out first before I think it can...
 
Is it right that the 390X outperforms the Ti in Hitman?
If so, that's great news.
It's called getting a taste of your own medicine.
AMD doing something other than just talk is long overdue.

They're certainly squeezing every drop of performance out of the 290X. I'm all for longevity, but that's surely hurting sales for AMD.
 
Yes, the Xbox One has 2 ACEs and the PS4 has 8 ACEs, I think.


Yes, both consoles have async compute capabilities. They are less capable than current desktop GPUs, but they are still capable ;)


Cheers guys. Well, I can see why AMD appears to be shaping up in recent DX12 games. Not just that, but I guess porting games from consoles to PC hardware (AMD GPUs) shouldn't be as much effort compared to Nvidia GPUs.
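
On the porting point, if anyone is curious how many of those compute queues their own PC card exposes, you can enumerate the queue families through Vulkan. The snippet below is just a small self-contained sketch for illustration (my own code, not from any of the games discussed here); it only assumes the Vulkan headers and loader are installed. Roughly speaking, on GCN cards the compute-only families are what async compute work gets scheduled onto, via the ACEs mentioned above.

```cpp
// Hypothetical, standalone sketch: list the queue families each Vulkan GPU
// exposes. A family with the COMPUTE bit but not the GRAPHICS bit is a
// dedicated compute queue family, which is where async compute work goes.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main()
{
    VkApplicationInfo app = {};
    app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.apiVersion = VK_API_VERSION_1_0;

    VkInstanceCreateInfo ci = {};
    ci.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    ci.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&ci, nullptr, &instance) != VK_SUCCESS)
        return 1;

    uint32_t gpuCount = 0;
    vkEnumeratePhysicalDevices(instance, &gpuCount, nullptr);
    std::vector<VkPhysicalDevice> gpus(gpuCount);
    vkEnumeratePhysicalDevices(instance, &gpuCount, gpus.data());

    for (VkPhysicalDevice gpu : gpus) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(gpu, &props);
        std::printf("%s\n", props.deviceName);

        uint32_t famCount = 0;
        vkGetPhysicalDeviceQueueFamilyProperties(gpu, &famCount, nullptr);
        std::vector<VkQueueFamilyProperties> fams(famCount);
        vkGetPhysicalDeviceQueueFamilyProperties(gpu, &famCount, fams.data());

        for (uint32_t i = 0; i < famCount; ++i) {
            bool graphics = (fams[i].queueFlags & VK_QUEUE_GRAPHICS_BIT) != 0;
            bool compute  = (fams[i].queueFlags & VK_QUEUE_COMPUTE_BIT) != 0;
            std::printf("  family %u: %u queue(s), graphics=%d, compute=%d%s\n",
                        i, fams[i].queueCount, graphics, compute,
                        (compute && !graphics) ? "  <- compute-only" : "");
        }
    }

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```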
 
If they could guarantee that a 7970 would match a 780 in three years, or that a 290X would jump from 780 to 980 performance, then they would sell better on release. The problem is you just don't know for sure. Why don't they just work really hard on the drivers for the release to maximise initial performance? If you look at 290X reviews compared to how it performs now, you wouldn't think it's the same beast.
 
Maturity! It's a feature!

lol

Nah, I know what you're saying and I never got it either. Why don't they perform anywhere near as well on release as they do now? What's holding them back so much at release? I feel the Fury X is suffering the same fate and is only just starting to get slightly better.

The only thing I can think of is the driver level! They don't seem to work on the card at the driver level towards the end of production, near release, to get it performing well; instead they do it afterwards on a game-by-game basis, optimising and squeezing more performance out of it.
 
The 7970 was the first GCN iteration. Whatever the engineers come up with, the software guys need to learn the architecture in order to take advantage of it, and that happens over time. AMD was very excited about GCN and what it could do in theory, and we can see that theory becoming reality now.
If AMD had come out and said straight away, "Guys, our GPUs can eventually outperform future competitor products", no one would have believed it and they would have been laughed at. Instead they gave us some subtle hints on how capable GCN is as an architecture.
Nvidia's Maxwell was designed as a lean architecture, excellent for certain workloads, which is why Nvidia's software guys were able to get quite a lot out of it from the get-go. I'm sure Kepler would have improved in a similar way to GCN, but I guess Nvidia concentrated their efforts on Maxwell, since that was their cash cow. If they had a team working on Kepler optimisations, Maxwell optimisations would have suffered.
It will be interesting to see how Maxwell ages once Pascal is out. I wouldn't be surprised if it falls flat on its face soon after.
 
The fact that Maxwell had much of its compute hardware sliced out to focus on pure DX11 gaming performance is probably going to be the main reason it falls behind, even more so than Kepler did.

Pascal is a full GPU architecture with plenty of compute put back in, and with DX12 and the way it likes to handle things, I'm sure Maxwell is going to look mighty bad once even more games switch to DX12 and Vulkan.

They were good cards, for the short term it seems.
 
In the GPU space, AMD (and ATI before them) have always developed very forward-looking products, but they've also struggled immensely to get the balance right and capitalise on it, often jumping the gun to their own detriment. This time around it's something they could really work to their advantage, as the architecture has great synergy with the nature of sub-20nm planar nodes, and with DX12 and similar APIs.

Regarding Kepler though: outside of maybe The Witcher 3, which I've never played, I'm not really seeing Kepler falling behind (that isn't to say it couldn't be running better than it is if they were spending more time on optimising it, but that isn't something I can qualify). If you dig into a lot of benchmarks, you'll find they are either reusing old Kepler numbers without actually retesting the older cards when they say they have, using numbers given to them by AMD or Nvidia, or testing Kepler cards limited to the reference on-paper clocks. The numbers you see are often anything up to 20% (though normally more like 10-15%) lower than what users will be experiencing in the real world, before you add in any end-user overclocking.
 