Oxide Developer: “NVIDIA Was Putting Pressure On Us To Disable Certain Settings In The Benchmark”

100% agreed, but the flip side is that AMD's poor execution, launches, timing etc. mean they're no better. I.e. failing at execution is still letting customers down, albeit in a different way from Nvidia's brutal, arguably unethical business practices. Great products, but they're failing on execution and also on pricing right now.

Sometimes it reads like AMD want us to feel sorry for them because Nvidia plays dirty, and to buy an inferior product at a high price to support them. AMD need to answer with better hardware / support (referring to older AMD articles), not just speak out about unfairness.

So neither AMD nor Nvidia is ideal. Until there is a third player (probably never), just buy whoever has the best hardware.

You do realise that AMD are currently operating on roughly the same budget as you, probably :D So would you, with your finances, be able to execute the Fiji launch better? :D
 
Haha :D

Yeah, but they could have priced more aggressively, or included bundles. You either have the performance edge, or you win on performance vs. price. AMD have neither, and are trying to charge as much as, or even more than, Nvidia.

At the least, they could have held off the launch until stock was plentiful.

AMD make good products, but seem to blunder in execution every time. Which ultimately is just as much a letdown to customers as Nvidia's questionable business practices.
 

But why would they price it better if all available cards are being sold? It doesn't make sense.
When Hynix finally manages to bring more HBM to market, AMD will drop prices; that's a given. But for now they sell every single card they send to retailers.

According to some sources, Fiji chips were already available long ago, but AMD were waiting for HBM to be available.
Carrizo, I think, comes with HDMI 2.0 support, so that means Fiji taped out before Carrizo, and that was ages and ages ago. If HBM had been finalised last year, Fiji would have been launched straight away.
 

True, but launching with limited stock has also hurt the launch, doing more damage than good overall to AMD's market share / mindshare with consumers.

Would it not have been better to wait and have plentiful stock, come in at a more aggressive price point, and get more sales while offering more performance for your money? That would have won back some faith at least.

AMD's Fury X is more expensive than some 980 Ti cards, comes with no bundled game etc., and is slower except at 4K, where not many people are actually gaming anyway.

Definitely think this launch was fudged.

I've read that AMD's finances are secure until at least 2019, even if Zen and the next GPU are a disaster, so AMD have some room to improve next year. Maybe they just want to make a limited run of Fury and sell each card; I just don't think it's done them any favours winning over new customers or restoring faith after being behind for a while.
 

I wouldn't be so sure about their finances being secure ;)
And remember the green team: the internet would have slaughtered AMD if they had released Fiji in, like, September or October.
There is no good or bad here; either way AMD would have been wrong. I am glad they released it when they did. Now I can enjoy absolutely fantastic CrossFired Fijis.
 
Kind of surprised that Nvidia wanted Async Compute disabled. Interesting quote though.



Maybe AMD's superior Async Compute performance will finally start to help them out in DX12. :D

Haha maybe it'll be like AMD's version of gameworks and Nvidia users will be the ones complaining about poor performance in games using asynchronous compute.
 

Except Nvidia can support async compute in their newer architectures, unlike AMD being unable to fully support GameWorks.

It's no different to how older cards don't have the features required to support DirectX 12 etc.
 
More relevant info here,

Oxide dev: Console devs are now getting 30% extra GPU performance via Async Compute

Nvidia's PR has previously put the blame for Ashes of the Singularity's less-than-stellar performance under Microsoft's latest graphics API, DX12, on Oxide Games. The developer, though, assures that there is no dispute between Oxide Games and Nvidia. He believes that the initial confusion between the two was due to Nvidia's demand that the studio disable certain settings in its benchmark, which it declined to do.

Things could get pretty disruptive in a year, when graphics engines built around and optimised for AMD's GCN architecture start making their way to the PC.

http://gearnuke.com/oxide-dev-console-devs-now-getting-30-extra-gpu-performance-via-async-compute/#
 


That is a diplomatic way of saying "we are sick to death of Nvidia trying to bark orders at us".
 
Definitely think this launch was fudged.

Yeah, the launch was fudged, but at the end of the day user experience is what counts most. The only way a fudged launch affects a user is when that user wants to buy one; after they have one it's irrelevant, and even more so in the context of hardware performance and features, because the fudged launch doesn't affect them in that regard, drivers aside.

When it comes to buying, I don't think about how the launch was fudged six months ago; it's irrelevant. I care about how the product is now and will be in the future.
 
This is a developer that sided with AMD. Of course the benchmark runs better on AMD hardware. It's been optimised as such.

Really? Since the only vendor-specific code in the game is nVidia's, which disabled Async Compute? Seems more optimised for nVidia, lol. Just like mmj_uk said: no evidence. Where's the evidence it's more optimised for AMD hardware?

Oxide have come out and made this statement, so if that's not evidence enough for you, go ahead and keep believing the dribble nVidia put out with their PR: "oh, it's buggy code in the game that affected our performance". Just LOL.

I'd believe Oxide over nVidia any day, just because of how nVidia go about their business. They make fantastic cards, but they spoil it with the way they do business, as it harms gamers.
 

There is a difference between just writing code in a way that is optimised for one platform (without necessarily intentionally being less optimal for other platforms) and vendor specific code that conditionally runs different operations.

EDIT: Given the history of both I wouldn't necessarily believe either any more than the other personally.
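
The distinction being drawn here can be sketched in code. This is a hypothetical minimal illustration, not anything from Oxide's actual engine; the function names and structure are invented, though the two vendor IDs are the real PCI-SIG values:

```cpp
#include <cstdint>

// Real PCI-SIG vendor IDs; everything else here is a hypothetical sketch.
constexpr uint32_t kVendorAMD    = 0x1002;
constexpr uint32_t kVendorNvidia = 0x10DE;

// Vendor-specific path: explicitly branches on who made the GPU, so two
// devices reporting identical capabilities can behave differently.
bool useAsyncComputeVendorSpecific(uint32_t vendorId, bool reportsAsync) {
    if (vendorId == kVendorNvidia) return false; // special-cased off
    return reportsAsync;
}

// Capability-based path: one code path for everyone, gated only on what
// the device itself reports. Writing an engine so this path happens to
// favour one architecture is not the same as special-casing a vendor.
bool useAsyncComputeCapabilityBased(bool reportsAsync) {
    return reportsAsync;
}
```

The first function is what "vendor-specific code" means in this thread; the second is plain optimisation.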
 

It is optimised, though: nVidia don't have Async Compute, or rather Maxwell doesn't, which is one of the reasons nVidia asked for it to be disabled, and is where this vendor-specific code comes from. With it enabled it would have hurt Maxwell's performance, like it did when the driver exposed it, so Oxide disabled it at the hardware level. That to me is optimising, as it neither benefited nor harmed AMD's cards in any way. It benefited nVidia.
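
What "disabling it at the hardware level" roughly amounts to is routing the compute work down the same queue as the graphics work instead of a separate one. A hypothetical sketch (the queue names and structure are mine, not from any real engine):

```cpp
#include <string>
#include <vector>

struct DeviceCaps {
    bool asyncCompute; // can graphics and compute queues run concurrently?
};

// Decide where each workload is submitted for one frame. With async
// compute, the compute work goes to its own queue and can overlap the
// graphics work; without it, both are serialised on the graphics queue.
std::vector<std::string> scheduleFrame(const DeviceCaps& caps) {
    if (caps.asyncCompute) {
        return {"graphics:queue0", "compute:queue1"}; // may overlap
    }
    return {"graphics:queue0", "compute:queue0"};     // runs serially
}
```

Either path produces correct images; the fallback just loses the overlap, which is where the performance difference comes from.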
 
About the game and benchmark: it's not fully released, is it? It's just early access, so it's still yet to be finished and polished. If so, why all the fuss about the benchmark results? Would it not have been better for Oxide to release the benchmark when the game is actually fully done and polished, instead of rushing to get the first DX12 benchmark out to show their DX12 game being nearest to release or something?
 
No evidence of course... and people say AMD aren't good at marketing, they've milked this title pretty heavily. ;)

Funny how, whenever it's the other way around, you don't need evidence, all while you lie constantly about AMD in almost every thread yourself.

Nvidia controls 80% of the market they can do anything they want.

They DO NOT control 80% of the market; when will people learn the difference between one quarter's sales and the market? Devs will make a game based on gamers who own GPUs bought both this quarter and four years ago. Also, a huge portion of the GPU market isn't remotely involved in gaming: sticking a low-end GT 730 in some beige box from Dell doesn't mean anyone is going to game on it. Where AMD is weakest is discrete GPUs in your average Dell-type box, which is also the weakest market in terms of real gamers and the highest-volume segment for GPU sales. In the actual enthusiast market Nvidia don't have remotely an 80% lead, even in this quarter, let alone over the past 3-4 years in terms of the GPUs that devs target games at.

I wonder how many times AMD have asked developers to reduce or disable Tessellation.

None. The only games that massively overuse it on purpose are in Nvidia's pockets; AMD simply added a tool in the driver giving the USER the option of capping excessive tessellation with zero image-quality hit. Why would AMD waste time asking, well, Ubisoft or TWIMTBP devs to remove a feature Nvidia have paid for? They know they won't.
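
The driver tool mentioned here is essentially a clamp: the user picks a maximum tessellation factor and the driver caps whatever the game requests. A hypothetical sketch of the idea, not AMD's actual driver code (for reference, D3D11 hull shaders can request factors up to 64):

```cpp
#include <algorithm>

// Clamp the application's requested tessellation factor to a user-set
// maximum, as AMD's driver-level tessellation override option does.
// When the game requests more than the cap, the cap wins; otherwise
// the game's value passes through unchanged.
float applyTessOverride(float requested, float userMax) {
    return std::min(requested, userMax);
}
```

A game demanding factor 64 with a user cap of 16 ends up at 16, while a game requesting 8 is untouched, which is why the override costs nothing in titles that use tessellation sensibly.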
 