
Why 'Watch Dogs' Is Bad News For AMD Users -- And Potentially The Entire PC Gaming Ecosystem

Right, says one of the numerous members with big AMD signature banners... Display your pride in AMD all you like of course, it's no issue, but the hypocrisy is unnecessary.

The graphic is there for people like you. I don't care that I own AMD hardware, unlike others who frequent here... :rolleyes:

My box of redundant cards and my old server machines containing Intel/GeForce parts are hurting me so much. Maybe I just buy whatever is the best value at the time! Really, some people.
 
Where did you get that from...

It's what happens when you start typing a reply in a fast-moving thread, then nip off to do something else for 10 minutes before hitting reply. We've all been there :p

Stupid statement is Stupid. AMD has "e-peen" too

I'd certainly put 290 crossfire up there in "e-peen" territory.

Have to say I love the offensive usage of the word 'e-peen' on an enthusiasts' forum :p
 
Stupid statement is Stupid. AMD has "e-peen" too

I'd certainly put 290 crossfire up there in "e-peen" territory.

Stupid statement is stupid!! - dead on. :)

I don't have CrossFire. I have a single card, so who the hell are you dribbling over now? Get your facts right.
 
Stupid statement is stupid!! - dead on. :)

I don't have CrossFire. I have a single card, so who the hell are you dribbling over now? Get your facts right.


But you were referencing Matt, who does have it.

The graphic is there for people like you. I don't care that I own AMD hardware. Maybe I just buy whatever is the best value at the time! Really, some people.

But when someone buys an Nvidia card which they deem good value, it becomes purely for "e-peen".
 
Where did you get that from...

What do you think the whole point of it is?? Every statement AMD has made has been about improving CPU utilisation, and when AMD as far back as November last year were saying a 2GHz FX8350 (at half its rated clock speed) was matching a faster Core i7 at stock speeds in a benchmark, it was black and white what they were trying to achieve.

It is about making their CPUs more competitive in games. Simples.
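A rough back-of-the-envelope sketch of what that claim amounts to, with made-up illustrative numbers rather than measurements: the CPU cost of a frame is roughly the number of draw calls times the per-call driver overhead, so a thinner API mostly helps whoever was choking on that overhead.

```cpp
#include <cstdio>

int main() {
    // Toy model: CPU time per frame spent just submitting draw calls.
    // All numbers are illustrative assumptions, not measurements.
    const int    drawCallsPerFrame = 5000;  // a busy scene
    const double heavyApiCostUs    = 10.0;  // assumed per-call cost on a high-overhead API
    const double thinApiCostUs     = 2.0;   // assumed per-call cost on a thinner API

    const double heavyMs = drawCallsPerFrame * heavyApiCostUs / 1000.0;
    const double thinMs  = drawCallsPerFrame * thinApiCostUs  / 1000.0;

    std::printf("CPU submission cost per frame: %.1f ms vs %.1f ms\n", heavyMs, thinMs);
    // A slower CPU that blew most of a 16.6 ms frame budget on submission alone
    // may now fit comfortably, while a fast CPU barely noticed either cost.
    return 0;
}
```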

Just look at some benchmarks, for example.

IQqPb1E.png

AQAAFg3.png


Both cards are under £130.

The AMD CPUs show massive gains, and the Core i7 shows none in this case.

The DX11 and Mantle performance on Intel CPUs in the game is virtually the same.

Edit!!

With a higher-end card like an R9 280X, look at how much the framerates on an FX8350 improve.

M4aUL5s.png


Thief uses UE3, which uses between two and four threads, with heavier usage on at least two of them.

The Intel CPUs show a much smaller improvement, and Mantle now makes the AMD CPUs far more competitive in the game.
 
Which in some cases shows that you are now better off investing your money in a better GPU. The tides have turned; 2015 looks interesting.
 
It also provides increases on Intel CPUs, especially on those that don't have higher-end i7s.
If this was solely for AMD's CPUs, they would have done an Nvidia-type lockout :)
 
What do you think the whole point of it is?? Every statement AMD has made has been about improving CPU utilisation...


An extremely long post, with all the facts wrong.
It stands to reason that a high-end i7 CPU sees the lowest benefit from Mantle, because the CPU itself has very little bottleneck, while lower-end CPUs like an i3 or below get a bigger performance boost, just like the FX and APU parts and ****, the same way an FX4300 gets a bigger boost than an FX8350.
So Mantle isn't for AMD CPUs only, and it works just as well for Intel.
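A minimal sketch of that point, using assumed per-frame times rather than real benchmark data: delivered fps is capped by whichever of the CPU or GPU takes longer per frame, so cutting CPU-side overhead only shows up when the CPU was the slower side.

```cpp
#include <algorithm>
#include <cstdio>

// Toy model: delivered fps is limited by whichever side (CPU or GPU) is slower.
// The per-frame times below are assumptions for illustration, not benchmarks.
static double fps(double cpuMs, double gpuMs) {
    return 1000.0 / std::max(cpuMs, gpuMs);
}

int main() {
    const double gpuMs = 12.0;  // assume the GPU needs 12 ms per frame either way

    // Fast CPU: already quicker than the GPU, so halving its API overhead changes nothing.
    std::printf("fast CPU: %.0f fps -> %.0f fps\n", fps(8.0, gpuMs), fps(5.0, gpuMs));

    // Slow CPU: it was the bottleneck, so the same overhead cut shows up directly.
    std::printf("slow CPU: %.0f fps -> %.0f fps\n", fps(20.0, gpuMs), fps(13.0, gpuMs));
    return 0;
}
```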
 
As others have pointed out to you, it's not about the CPU directly but indirectly, and NOT about AMD CPUs, because it helps Intel CPUs JUST as much, meaning an i3/i5 becomes as viable as AMD's chips, so it makes absolutely no difference.

You're also forgetting something: the entire games industry wanted a low-level API, and after Mantle, MS finally went low-level (I should say, is going), which Nvidia (who don't make desktop CPUs) is fully behind, and which game devs are still fully behind.

None of them care which CPU you buy or don't; it's about game making. Low level is better (for higher-end game making). Look at the minimums and steadiness of Mantle frame rates when it's working (yes, DX still gets fixes and new drivers to fix problems... expecting Mantle's alpha to be perfect is laughable for those who would bring it up).
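A small illustration of why minimums and steadiness matter, using entirely hypothetical frame-time traces: two runs can have nearly the same average fps while one of them stutters badly.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Hypothetical frame-time traces in milliseconds, made up purely to show why
// the minimum fps (worst frame) says more about smoothness than the average.
static void report(const char* name, const std::vector<double>& frameMs) {
    double sum = 0.0, worst = 0.0;
    for (double t : frameMs) {
        sum += t;
        worst = std::max(worst, t);
    }
    const double avg = sum / frameMs.size();
    std::printf("%s: avg %.1f fps, min %.1f fps (worst frame %.1f ms)\n",
                name, 1000.0 / avg, 1000.0 / worst, worst);
}

int main() {
    report("steady", {17, 16, 17, 16, 17, 16, 17, 16, 17, 16});
    report("spiky ", {10, 10, 35, 10, 10, 40, 10, 10, 30, 10});
    return 0;
}
```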

When programming with complete transparency, the game devs can quite literally decide how long just about every frame will take: add this effect, which is going to take 2 ms to finish; we're at 13 ms and aiming for sub-16 ms, so we can add that effect.
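A minimal sketch of that budgeting idea, assuming illustrative per-pass costs and a 16.6 ms (60 fps) target rather than anything from a real engine:

```cpp
#include <cstdio>

int main() {
    // Per-pass costs and the target are illustrative assumptions, not engine data.
    const double budgetMs      = 16.6;  // ~60 fps target
    const double extraEffectMs = 2.0;   // an optional effect whose cost is known
    double frameMs             = 13.0;  // what the frame already costs

    // With predictable costs, the engine can decide per frame whether the effect fits.
    if (frameMs + extraEffectMs <= budgetMs) {
        frameMs += extraEffectMs;
        std::printf("effect enabled, projected frame time %.1f ms\n", frameMs);
    } else {
        std::printf("effect skipped, staying at %.1f ms\n", frameMs);
    }
    return 0;
}
```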

The issue with DX, and why frame rates are MUCH less smooth, is that the driver chooses to do things when IT wants to, not when the engine tells it to. Game makers have wanted a low-level API for many, many reasons, and the majority of them are about making games better and smoother for the game devs themselves: more predictable, easier to make smooth, and easier to use to deliver a better experience.

The games industry has said for a LONG time that it wanted a low-level API, and from its end it had nothing at all to do with whose CPUs would sell better as a result. As explained, it boosts Intel CPUs precisely as well as AMD CPUs, so it is no net gain for AMD at all.

If AMD's £150 CPU couldn't compete with Intel's £250 CPU and Mantle makes it competitive, great, but then Intel's £150 CPU will compete just as well now, meaning AMD still has competition, the same competition.
 
Hallock also alleges that "code obfuscation" stemming from GameWorks integration prevents AMD from adequately optimizing its drivers for some games. "[T]he characteristics of the game are hidden behind many layers of circuitous and non-obvious routines," he explains, adding that Nvidia has removed "all public Direct3D code samples from their site in favor of a 'contact us for licensing' page."

Ubisoft's Watch Dogs, which comes out today, is cited as a particularly stark example of unequal optimization. "It's evident that Watch Dogs is optimized for Nvidia hardware," Evangelho writes in his story, "but it's staggering just how un-optimized it is on AMD hardware." Evangelho also links an older article by ExtremeTech. That article made a similar observation about Batman: Arkham Origins, and it similarly pinned the blame on GameWorks.

We've seen in our own testing how AMD graphics cards can underperform in some GameWorks-enabled games, including Batman: Arkham Origins and Assassin's Creed IV: Black Flag, so Evangelho likely isn't wrong there. For whatever reason, some GameWorks titles do seem to run poorly on Radeons.

Source
http://techreport.com/news/26515/amd-lashes-out-at-nvidia-gameworks-program
 
Another Green vs Red huh?

To lighten the mood, if you haven't already, check out Watch Dogs' take on window reflections :D:

y5SJrHOl.jpg.png


Ubisoft's take on DRM:

Favv8sl.png
 

Oh look. All the main Mantle devs engaged in a circlejerk.

Developers who prefer working with X think X is better. Couldn't possibly have guessed.

You know what you'd get when asking devs who are using Nvidia tech? Devs telling you why Nvidia tech is so good and why AMD has lost respect...


Forbes article reprinted. Meaning Hallock's PR spin accompanied with zero proof of anything.


Also a fun article from 2011 back when Nvidia said that they'd be willing to give PhysX to AMD (in the same way as AMD is now saying that they would give Mantle to Nvidia) :

Nvidia's director of public relations, Luciano Alibrandi, told us that the company was ‘committed to an open PhysX platform that encourages innovation and participation,’ adding that Nvidia would be ‘open to talking with any GPU vendor about support for their architecture.’ Why didn't AMD take up Nvidia's offer?

'I don't know much about that, because that whole discussion happened soon after I joined Nvidia,' says Hegde, 'but I can comment on it from an abstract point of view. Firstly, it's very unlikely that Nvidia would offer it to AMD, and secondly it would make engineering a nightmare for us. Having to take somebody else's API, especially a competitor's API, because Nvidia controls the API and there are architectural differences between AMD and Nvidia platforms. It's not like we have a common x86 instruction set, so it's not like AMD and Intel on the CPU side – this is a completely different instruction set.

'AMD would be foolish to license that because it would just be an engineering nightmare. I'm just talking in the abstract here, but to me it doesn't make sense, and I think Nvidia's being disingenuous by making a claim like that. If it was a standard and open system, like Khronos does, then we would have a lobby so we could make changes in the API, but that's not the same with a proprietary API.'

http://www.bit-tech.net/hardware/graphics/2011/02/17/amd-manju-hegde-gaming-physics/1

Oh how the times change.

AMD now has a proprietary API, and they're pushing PR slide after PR slide about Mantle being open and available to all graphics vendors. And yet no other vendor has seen the source, there's no documentation available anywhere, and they're trying to say that others should adopt an API designed for GCN when they themselves say that adopting a competitor's API (in their case it was the PhysX API, which could have been made to run on OpenCL) would be an engineering nightmare.
 
Oh look. All the main Mantle devs engaged in a circlejerk.

Also a fun article from 2011 back when Nvidia said that they'd be willing to give PhysX to AMD...

Two of those are Ubisoft devs, Alatar, lol. Timothy is also the author of TXAA and is ex-Nvidia. Says it all. :D

I wondered how long it would be before the old PhysX links were brought up. I'm surprised it took so long. Nothing wrong with PhysX imo, as it does not negatively affect AMD performance or optimisations in any way.
 