Only AMD has true Async Compute - Doom Devs

Which was made perfectly clear to apply specifically to deep learning, and the number was substantiated with real-world benchmarks proving it was accurate.

So where exactly are you going with this?


All the performance numbers Nvidia announced have been fairly accurate; if anything, they have been under-reporting slightly. They said things like the 1080 is 80% faster than the 980, when it is commonly 75-82% faster in a majority of games.


According to Nvidia and their own internal benchmarks on deep learning, which has nothing to do with gaming.

Nvidia have been repeating the same thing, "it's coming", for a year, and they continue with the same line.

How is it that AMD can get it up and running while Nvidia have gone from one generation to another and still not managed it?

Soon™
 
As if on cue, DP is here to spout his pro-Nvidia propaganda all over another AMD thread... when will it ever end? ;)

It's like you're in the NDL. Do they send you a members' keyring and a woolly hat?
 
Don't know how many times he states it's twice as fast as the TX. :p

The guy is a genius; timing opens wallets... ;)

There is an earlier segment on non-VR performance (which is plainly non-VR gaming) where, albeit with graphs that are kind of funky, it is plainly stated as being around 1.6-1.7x the performance of a 980 and around 1.2-1.3x the performance of a TX, before it moves on to a VR segment. It would be kind of weird, having spent some time promoting its normal gaming performance as one thing, to then start claiming it was another later on, especially in what is plainly marked as a VR segment.

As if on cue, DP is here to spout his pro-Nvidia propaganda all over another AMD thread... when will it ever end? ;)

It's like you're in the NDL. Do they send you a members' keyring and a woolly hat?

It's a stamp book for posting - one page gets you an nVidia pen, 10 pages a t-shirt, and the first person to fill the entire book gets Jen's leather jacket from last year.
 
Do the XBONE and PS4 GPUs have support for Async Compute? (Not a troll, I genuinely don't know.)

If so, then the game devs may well take advantage of it, and AMD GPUs might do very well in some games. If the consoles don't, however, I can't see it being a big thing for the game devs for the most part.
 
Yes, they do ^^^

As if on cue, DP is here to spout his pro-Nvidia propaganda all over another AMD thread... when will it ever end? ;)

It's like you're in the NDL. Do they send you a members' keyring and a woolly hat?

According to Nvidia, they worked with Microsoft on DX12 two years before Mantle was even conceived, and yet it's AMD who got a fundamental DX12 function up and running on day one while Nvidia still can't get it working.

But soon, eh?
 
There is an earlier segment on non-VR performance (which is plainly non-VR gaming) where, albeit with graphs that are kind of funky, it is plainly stated as being around 1.6-1.7x the performance of a 980 and around 1.2-1.3x the performance of a TX, before it moves on to a VR segment. It would be kind of weird, having spent some time promoting its normal gaming performance as one thing, to then start claiming it was another later on, especially in what is plainly marked as a VR segment.

Watch again. Just after the VR segment he is still going on about twice the perf of the TX. He shows a video of the 1080, and after it's finished he's still rambling on about twice the perf. It was genius, as at least two people I know who are PC gamers were telling me how great it was, and when I asked why, they said twice the performance of the Titan X, lol. I then had to kill their dreams. SHAME on you, Jen.
 
Do the XBONE and PS4 GPUs have support for Async Compute? (Not a troll, I genuinely don't know.)

If so, then the game devs may well take advantage of it, and AMD GPUs might do very well in some games. If the consoles don't, however, I can't see it being a big thing for the game devs for the most part.

Async is fairly fundamental to how the consoles do next-gen graphics - however, it isn't a straightforward benefit, as the optimal way to load up the console hardware is a static target, while on PC different GPUs can have very different requirements in terms of how best to utilise their compute capabilities.
 
Watch again. Just after the VR segment he is still going on about twice the perf of the TX. He shows a video of the 1080, and after it's finished he's still rambling on about twice the perf. It was genius, as at least two people I know who are PC gamers were telling me how great it was, and when I asked why, they said twice the performance of the Titan X, lol. I then had to kill their dreams. Shame on you, Jen.

Yeah, but in the entire first segment he is very plain that it isn't 2x the TX in non-VR, so to ignore that and instead go with the later 2x claim, which is plainly the tail end of the VR section, is a bit of a strange or desperate thing to do.
 
There is an earlier segment on non-VR performance (which is plainly non-VR gaming) where, albeit with graphs that are kind of funky, it is plainly stated as being around 1.6-1.7x the performance of a 980 and around 1.2-1.3x the performance of a TX, before it moves on to a VR segment. It would be kind of weird, having spent some time promoting its normal gaming performance as one thing, to then start claiming it was another later on, especially in what is plainly marked as a VR segment.

I agree with what you are saying, but that doesn't take away the fact that when summarising (after the product vid) he marketed the **** out of a product by claiming it was 'twice as fast as the TX' - as I said, genius.
 
Yeah, but in the entire first segment he is very plain that it isn't 2x the TX in non-VR, so to ignore that and instead go with the later 2x claim, which is plainly the tail end of the VR section, is a bit of a strange or desperate thing to do.

The fact that people on here and in real life were going on about twice the perf tells me it had the effect it was meant to. Why else would he keep repeating it, other than to try to drum this fact into people's heads?
 
Preemption is a hardware implementation. Otherwise, can you explain how to do pixel-level preemption in software without using magic?

I was talking about async, not preemption, and I still don't believe preemption and async compute are the same thing.
To me, preemption is a tool used to prioritise queues, submitting one workload at a time, while async lets you submit multiple workloads queued simultaneously, getting rid of the bubbles that waste space that could have been used for more instructions, so it is a more efficient way of doing things.
At least that's my understanding of it, and keep in mind I am not a developer; I just read articles like everyone else and try to understand correctly what it's all about.
If I have everything wrong, feel free to correct me.
 
I was talking about async, not preemption, and I still don't believe preemption and async compute are the same thing.
To me, preemption is a tool used to prioritise queues, submitting one workload at a time, while async lets you submit multiple workloads queued simultaneously, getting rid of the bubbles that waste space that could have been used for more instructions, so it is a more efficient way of doing things.
At least that's my understanding of it, and keep in mind I am not a developer; I just read articles like everyone else and try to understand correctly what it's all about.
If I have everything wrong, feel free to correct me.

They are absolutely not the same thing, but both can be used together to offer a superior solution, i.e. AMD's Asynchronous Compute Engines (ACEs) do both.
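
To make the distinction a bit more concrete, here is a rough sketch of how the two ideas show up on the API side in D3D12. This is illustrative only, not how the ACE hardware itself works; the helper name and the assumption that a device already exists are mine. "Async compute" is expressed by submitting work on a separate compute queue that runs alongside the graphics queue, while queue priority is the closest thing the API exposes to the scheduling/preemption side.

```cpp
// Illustrative, hypothetical helper: async compute = a second, COMPUTE-type
// command queue scheduled alongside the DIRECT (graphics) queue. Queue priority
// is merely a scheduling hint; actual preemption granularity is a property of
// the hardware and driver, not of this API call.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Assumes `device` was created elsewhere (e.g. via D3D12CreateDevice).
void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // Graphics (direct) queue: can accept any command list type.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

    // Separate compute queue: compute command lists submitted here may run
    // concurrently with graphics work - this is the "async compute" part.
    D3D12_COMMAND_QUEUE_DESC compDesc = {};
    compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    compDesc.Priority = D3D12_COMMAND_QUEUE_PRIORITY_HIGH; // scheduling hint only
    device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&computeQueue));
}
```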
 
According to Nvidia and their own internal benchmarks on deep learning, which has nothing to do with gaming.

Nvidia have been repeating the same thing, "it's coming", for a year, and they continue with the same line.

How is it that AMD can get it up and running while Nvidia have gone from one generation to another and still not managed it?

Soon™

So Nvidia made a claim that Pascal is 10x faster than Maxwell at deep learning, never commented on gaming performance in the slightest, proved that it was 10x faster with a 3rd-party benchmark, and yet you are angry that Nvidia told the truth and Pascal isn't 10x faster at gaming.

Nvidia couldn't have been more clear.

When AMD pay Oxide to develop an engine that showcases the Mantle API on AMD hardware, it is not surprising in the slightest that Oxide supports AMD hardware quickly. The 1080 has only just started shipping, so how do you expect them to develop something over a weekend?
 
I was talking about async, not preemption, and I still don't believe preemption and async compute are the same thing.
To me, preemption is a tool used to prioritise queues, submitting one workload at a time, while async lets you submit multiple workloads queued simultaneously, getting rid of the bubbles that waste space that could have been used for more instructions, so it is a more efficient way of doing things.
At least that's my understanding of it, and keep in mind I am not a developer; I just read articles like everyone else and try to understand correctly what it's all about.
If I have everything wrong, feel free to correct me.

I think you need to read up on async compute before making such statements.

For starters, Pascal and Maxwell absolutely allow parallel and asynchronous execution of compute workloads.
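
For what it's worth, here is a minimal sketch of the kind of cross-queue submission a DX12 title uses for this, continuing the illustrative D3D12 example above. The function name is hypothetical and the device, queues and command lists are assumed to have been created already; whether the GPU genuinely overlaps or preempts the two workloads is down to the hardware and driver, which is exactly what the argument here is about.

```cpp
// Illustrative, hypothetical continuation of the queue sketch above: the
// graphics queue signals a fence, the compute queue waits on it GPU-side, then
// executes its own command list. Whether the two streams of work actually
// overlap is decided by the GPU and driver, not by this code.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Assumes the device, both queues and both command lists already exist.
void SubmitAsyncCompute(ID3D12Device* device,
                        ID3D12CommandQueue* graphicsQueue,
                        ID3D12CommandQueue* computeQueue,
                        ID3D12CommandList* gfxList,
                        ID3D12CommandList* computeList)
{
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    // Submit graphics work and mark its completion point on the fence.
    ID3D12CommandList* gfxLists[] = { gfxList };
    graphicsQueue->ExecuteCommandLists(1, gfxLists);
    graphicsQueue->Signal(fence.Get(), 1);

    // GPU-side wait: the CPU is not blocked here, and later submissions to the
    // graphics queue can still be scheduled around this compute work.
    computeQueue->Wait(fence.Get(), 1);
    ID3D12CommandList* computeLists[] = { computeList };
    computeQueue->ExecuteCommandLists(1, computeLists);
}
```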
 
So Nvidia made a claim that Pascal is 10x faster than Maxwell at deep learning, never commented on gaming performance in the slightest, proved that it was 10x faster with a 3rd-party benchmark, and yet you are angry that Nvidia told the truth and Pascal isn't 10x faster at gaming.

Nvidia couldn't have been more clear.

News to me; where is this benchmark compared with Maxwell?

When AMD pay Oxide to develop an engine that showcases the Mantle API on AMD hardware, it is not surprising in the slightest that Oxide supports AMD hardware quickly. The 1080 has only just started shipping, so how do you expect them to develop something over a weekend?

So you would admit Mantle is in DX12?
 
From what I can see, AMD has always had better support for DX12. Nvidia's strategy is very clear: they'll only fully support DX12 and all of its features when DX12 becomes relevant enough. I suspect the next generation of Nvidia cards will be fully DX12 compliant. For me, Nvidia have shown a much better sense of timing than AMD, based on the last couple of years of their GPUs' performance.

In short, Nvidia always seem to provide the raw performance when you need it - in the here and now. AMD always appear to be talking about the future, but it never seems to translate into class-leading performance in the present.
 
From what I can see, AMD has always had better support for DX12. Nvidia's strategy is very clear: they'll only fully support DX12 and all of its features when DX12 becomes relevant enough. I suspect the next generation of Nvidia cards will be fully DX12 compliant. For me, Nvidia have shown a much better sense of timing than AMD, based on the last couple of years of their GPUs' performance.

In short, Nvidia always seem to provide the raw performance when you need it - in the here and now. AMD always appear to be talking about the future, but it never seems to translate into class-leading performance in the present.

There is a lot of negativity toward Nvidia for their lack of DX12 support, and kudos for AMD.

Nvidia are not stupid; they don't let things like that fly if they can help it. The best they can do is keep repeating "oh, we haven't enabled it yet", which is a cop-out.
 