
Oxide Developer: “NVIDIA Was Putting Pressure On Us To Disable Certain Settings In The Benchmark”

Twisting things much? ;) Nvidia didn't ask Oxide to add anything. They asked them to disable/remove a feature/setting.

Good try though.

LOL.

That's much worse than asking them to add a feature or support for it. Infinitely so. They weren't just asking Oxide to disable async compute on NVIDIA hardware (which Oxide did, because it didn't work and reduced performance there), they were asking them to disable it completely, so that AMD couldn't benefit from it either (and they do - massively).
 
AMD fessed up quite quickly over pCARS when caught with their pants down. Going by ramgate, it could be a while before Nvidia give us any follow-up. Who knows, they might even push the boat out with some damage control - either that, or just laugh at their customers again.
 
LOL.

That's much worse than asking them to add a feature or support for it. Infinitely so. They weren't just asking Oxide to disable async compute on NVIDIA hardware (which Oxide did, because it didn't work and reduced performance there), they were asking them to disable it completely, so that AMD couldn't benefit from it either (and they do - massively).

Well that's what I was trying to say. Are you responding to Gregster?
 
That statement doesn't make me partisan. There are DX12 titles arriving this fall.

There are titles arriving in Q1 2016.

If these titles use Async shading, a 290x might compete with a GTX 980 Ti.

Therefore there's reason to claim..

"NVIDIA might need Pascal and soon".

You read anything negative towards NVIDIA, who arguably deserve it if this all pans out to be true, as partisanship.

Why? Because you're partisan.

Don't push your problems dealing with this information on me. Deal with it yourself. Take a deep breath and think about who deserves your anger, which you clearly have, me or NVIDIA?

Objectively speaking... It isn't me. I've worked to shed light on this issue. Spent a week glued to a laptop while my wife brought me sandwiches and coffee. (she's the best).

I can now disappear, until the next (wtf is happening here?) moment. Do you think I've enjoyed dealing with people like you? It has taken so much patience on my part. Death threats? Check! Insults and belittling? Check! The internet is filled with crazy folks.

Now I have to deal with you. Not exactly amusing, but worth every single insult thrown my way.

When I was a teenager, I acted like you. I was a partisan hack who defended my beloved hardware choices by any means necessary. I've grown up and learned that this is absolutely pointless. It didn't benefit me (my ego and sense of self-worth were tied to the size of my GPU).

Take a step back and look at what has transpired over the last few weeks. I think the entire PC gaming community will end up benefiting from all of this.


peace.
http://hardforum.com/showpost.php?p=1041827260&postcount=72
 
No one's going to use the async shaders, as the market's not big enough. It's only AMD, who have only got a piddly 15% share of it or something; it's not going to be worth it to them, catering for a tiny market like that.
 
No one's going to use the async shaders, as the market's not big enough. It's only AMD, who have only got a piddly 15% share of it or something; it's not going to be worth it to them, catering for a tiny market like that.

With the consoles starting to make use of them, I can't see them not getting used on the PC. With the consoles being more like PCs than ever, it would seem stupid from a developer standpoint not to try and use what they have already created. If Nvidia are involved with the game, then that's another story.

Edit: Beaten to it.
 
No one's going to use the async shaders, as the market's not big enough. It's only AMD, who have only got a piddly 15% share of it or something; it's not going to be worth it to them, catering for a tiny market like that.

Love that logic: if any new tech is at a stage to be implemented in games, but the dominant manufacturer doesn't have it, then that technology gets scrapped or delayed for a generation of cards. Where would the industry be if that was the reasoning behind every decision? This also shows why we need competition, if it has come to this type of attitude.

That aside, isn't the main gripe that Nvidia sold, and are still selling, these cards as DX12-ready? Many people bought Maxwell cards specifically with DX12 and the future in mind, only for Nvidia to have been less than honest about it, so people's purchases are now worth much less for the very reason they bought them (cf. DX12 marketing going back to the 400 series!). It's the duplicity of the marketing that is the issue. As someone on reddit said: 'planned obsolescence'.
 
No one's going to use the async shaders, as the market's not big enough. It's only AMD, who have only got a piddly 15% share of it or something; it's not going to be worth it to them, catering for a tiny market like that.

Even by sales numbers (which isn't the same thing as market share, but whatever), AMD were at their lowest 18%.
Count on those that made Mantle games, plus some others. Why would you not do it, when you know every GCN GPU (from the 7xxx series onwards) can get better performance, and therefore lower system requirements?
 
This feature was implemented by Lionhead Studios. We integrated it and intend to make use of it as a tool to optimize the XboxOne rendering.

https://docs.unrealengine.com/lates...ing/ShaderDevelopment/AsyncCompute/index.html

As Unreal Engine 4 has had AsyncCompute implemented by Lionhead Studios for the XB1, and iirc it's a cross-platform MP title, will the PC version be running AsyncCompute too?

That aside, isn't the main gripe that Nvidia sold, and are still selling, these cards as DX12-ready? Many people bought Maxwell cards specifically with DX12 and the future in mind, only for Nvidia to have been less than honest about it, so people's purchases are now worth much less for the very reason they bought them (cf. DX12 marketing going back to the 400 series!). It's the duplicity of the marketing that is the issue. As someone on reddit said: 'planned obsolescence'.

Who's getting a Maxwell refund this time?:p

Can we get a poll please?:D
 
"the only 2 cards which might benefit from DX12 are the Titan X (confirmed 2 DMA engines) and the 980ti (suspected), Maxwell Gen 1 will no see any performance boost"

Can anyone explain the bit about the DMA engines to me please? And does this affect asyncCompute?
 
"the only 2 cards which might benefit from DX12 are the Titan X (confirmed 2 DMA engines) and the 980ti (suspected), Maxwell Gen 1 will no see any performance boost"

Can anyone explain the bit about the DMA engines to me please? And does this affect asyncCompute?

Stands for "Direct Memory Access." DMA is a method of transferring data from the computer's RAM to another part of the computer without processing it using the CPU. While most data that is input or output from your computer is processed by the CPU, some data does not require processing, or can be processed by another device. In these situations, DMA can save processing time and is a more efficient way to move data from the computer's memory to other devices.

For example, a sound card may need to access data stored in the computer's RAM, but since it can process the data itself, it may use DMA to bypass the CPU. Video cards that support DMA can also access the system memory and process graphics without needing the CPU. Ultra DMA hard drives use DMA to transfer data faster than previous hard drives that required the data to first be run through the CPU.

Not sure on whether it has anything to do with AsyncCompute though, sure someone will be able to answer that.

http://techterms.com/definition/dma
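To make that definition concrete, here is a toy model of the difference between CPU-mediated I/O and a DMA-style transfer. Everything in it (the class names, the one-cycle-per-byte cost, the "sound card") is made up for illustration; real DMA is done by hardware controllers, not Python objects.

```python
# Toy model of why DMA saves CPU time: with DMA, the device/controller
# copies a whole block from RAM itself; without it, the CPU moves the
# data byte by byte and burns cycles doing so.

RAM = bytearray(b"audio-samples-" * 4)  # 56 bytes of pretend sample data

class CPU:
    def __init__(self):
        self.cycles_spent = 0

    def copy_to_device(self, src, device):
        # Programmed I/O: charge one CPU cycle per byte moved.
        for b in src:
            device.buffer.append(b)
            self.cycles_spent += 1

class DMAController:
    def transfer(self, src, device):
        # DMA: the controller moves the whole block; no per-byte CPU work.
        device.buffer.extend(src)

class SoundCard:
    def __init__(self):
        self.buffer = bytearray()

cpu = CPU()
card_a, card_b = SoundCard(), SoundCard()

cpu.copy_to_device(RAM, card_a)        # CPU busy for the whole copy
DMAController().transfer(RAM, card_b)  # CPU cycle count untouched

print(cpu.cycles_spent)                # -> 56
print(card_a.buffer == card_b.buffer)  # -> True: same data either way
```

Same data arrives at the device both ways; the only difference is who did the moving, which is the whole point of DMA.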
 
This is interesting.

Direct Compute and Shader Combined rendering latency

GTX 980 Ti: a clear delay here.

[latency graph removed]

Tahiti (GCN 1.0) 7970 / R9 280: nothing, nada, nichts... absolutely simultaneous and parallel. That looks like AMD's HSA for GPUs at work here. Awesome.

[latency graph removed]
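The pattern in those latency plots - compute waiting behind graphics on one card, the two overlapping on the other - can be sketched with a toy timing model. The durations and the use of Python threads as stand-in "queues" are purely illustrative; real GPUs schedule this in hardware.

```python
# Toy model of serialized vs. overlapped graphics+compute, mimicking the
# shape of the latency comparison above. sleep() stands in for GPU work.
import time
from concurrent.futures import ThreadPoolExecutor

GFX_S, COMPUTE_S = 0.05, 0.05  # made-up pass durations, in seconds

def graphics_pass():
    time.sleep(GFX_S)

def compute_pass():
    time.sleep(COMPUTE_S)

def serialized_frame():
    # One queue: compute cannot start until graphics has finished.
    t0 = time.perf_counter()
    graphics_pass()
    compute_pass()
    return time.perf_counter() - t0

def overlapped_frame():
    # Two "queues": both passes are in flight at the same time.
    t0 = time.perf_counter()
    with ThreadPoolExecutor(max_workers=2) as pool:
        g = pool.submit(graphics_pass)
        c = pool.submit(compute_pass)
        g.result()
        c.result()
    return time.perf_counter() - t0

serial = serialized_frame()
overlap = overlapped_frame()
print(f"serialized: {serial*1000:.0f} ms, overlapped: {overlap*1000:.0f} ms")
```

With equal-length passes, the serialized frame takes roughly the sum of the two, while the overlapped frame takes roughly the longer of the two - which is the benefit async compute is claimed to deliver when the hardware can actually run both concurrently.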

 
Who's getting a Maxwell refund this time?:p

Can we get a poll please?:D

People need to buy like me :P - the 780 GHz edition will last me 'til DX11 is done, and in the meantime I've saved a load of money for when I need a card for DX12. :smug:

(touch wood it doesn't die unexpectedly tomorrow).
 