
CDPR on The Witcher 3's HairWorks (Nvidia GameWorks): 'the code of this feature cannot be optimized for AMD'

You are overriding something that was not intended. AMD only have a slider due to their tessellation not being up to par. Nvidia does not need a slider to reduce it if it's over-tessellated; their cards can easily cope with the levels that are applied.

There is no perceivable difference between 8x and 16x tessellation. If you are developing a game, you never do something that has a high performance cost but gives almost imperceptible or no visual difference. The performance cost exists in performance scaling for Nvidia as much as for AMD; it's because AMD's GCN 1.0 and 1.1 cards start from a lower base that their performance ends up lower.
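That scaling claim can be made concrete with a back-of-envelope sketch. Assuming uniform tessellation, where a triangle patch at factor N is split into roughly N² sub-triangles (a simplification of the real D3D11 partitioning modes), going from 8x to 16x quadruples the geometry, and the 64x levels HairWorks reportedly requests cost 16x the triangles of a 16x cap:

```python
# Rough cost model: uniform tessellation of a triangle patch at
# factor N yields about N*N sub-triangles (edge/inside-factor
# details of real D3D11 partitioning are ignored here).
def subtriangles(factor: int) -> int:
    return factor * factor

# 16x vs 8x: four times the triangles for a near-identical look.
print(subtriangles(16) / subtriangles(8))   # 4.0

# 64x vs a 16x driver cap: sixteen times the triangles.
print(subtriangles(64) / subtriangles(16))  # 16.0
```

None of this says anything about absolute performance; it only shows why the cost grows much faster than the visual payoff.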

The difference is AMD users can override it to gain performance with no visual impact, ending up with higher performance than Nvidia.

The tessellation slider has been a part of CCC for many years; it was put there to give users the option of overriding over-tessellated games in the past, including Nvidia-sponsored ones.

AMD's GCN 1.2 (Tonga) and no doubt the incoming cards address the Tessellation issue by increasing Tessellation throughput massively.
 
Tessellation is baked into HairWorks. Developers have no control over it.



Over-tessellation will reduce performance on any card, no matter how much throughput it has.
AMD's performance actually ends up faster than the Nvidia equivalents' because AMD's drivers can override over-tessellation.

Of course developers have control over it, lol. They don't just drop in a library and have Geralt's, the horses' and the monsters' hair all done. They can optimise the code for nVidia cards as much as they want.

Agree totally, but it is something they shouldn't be allowed to ignore, because they have a large customer base with 700-series cards. I would be pretty annoyed if I had bought a Titan or Titan Black right now, as they are performing worse than a 780 Ti and close to a 960.

Absolutely, and if I was still on Kepler and there were performance gains to be had, I would want them.

One thing I have noticed recently though is that nVidia and game devs are no longer accepting certain BS that gets posted out from AMD, and are hitting back. Too many times have AMD pointed the finger at others, and now that others are firing back, AMD are starting to look a little silly. It is all pointing to what Jason said in the post Silent Scone quoted from Forbes, and I couldn't agree more. Maybe some of the more vocal guys at AMD should back away from the blame game for a bit and concentrate on getting games running well for their own users.
 
Of course developers have control over it, lol. They don't just drop in a library and have Geralt's, the horses' and the monsters' hair all done. They can optimise the code for nVidia cards as much as they want.

It's nothing to do with code. Tessellation is baked into the material layer, and the material layer is the library; developers simply apply their own look to it. The structure, the bones of the object, is in Nvidia's GameWorks DLLs.
 

The devs should be criticised more.
Why put out a game that performs badly, then have a patch hours later, then another the next day, then the next day?
I'm sure they must have a few graphics cards! They know how it's going to perform; if gamers are having to fix it themselves by diving into .ini files, then the devs deserve some criticism.
Then they complain sales are down on the PC; it's because AAAs are so damn disappointing most of the time, lol.
 
It was easy to understand, considering all you were doing was relaying what you read happened. It's also the only example to date, because nobody has the kind of resources to proactively work like that unless there is money behind it.

It's the only example needed of how it should be played: above board, with zero obfuscation involved from anyone.
 
The devs should be criticised more.
Why put out a game that performs badly, then have a patch hours later, then another the next day, then the next day?
I'm sure they must have a few graphics cards! They know how it's going to perform; if gamers are having to fix it themselves by diving into .ini files, then the devs deserve some criticism.
Then they complain sales are down on the PC; it's because AAAs are so damn disappointing most of the time, lol.

I think CD Projekt Red themselves got a bit of a shock at just how expensive, in performance terms, some GameWorks libraries actually are. Were Nvidia entirely honest with them?

It seems a lot of this stuff was added in the last few weeks/months with heavy Nvidia involvement; it wasn't tested thoroughly, and they had to scramble for fixes the day it landed in users' hands with complaints flooding in.

Too much reliance on their hardware partner.
 
I think CD Projekt Red themselves got a bit of a shock at just how expensive, in performance terms, some GameWorks libraries actually are. Were Nvidia entirely honest with them?

It seems a lot of this stuff was added in the last few weeks/months with heavy Nvidia involvement; it wasn't tested thoroughly, and they had to scramble for fixes the day it landed in users' hands with complaints flooding in.

Too much reliance on their hardware partner.

Maybe a little early for you, Humbug, but Silent Scone posted this up a little while ago, which just shows how wrong you are with "It seems a lot of this stuff was added in the last few weeks/months".

AMD Is Wrong About 'The Witcher 3' And Nvidia's HairWorks

The increasingly bitter war between GPU manufacturers AMD and Nvidia continues this month with the release of CD Projekt Red's The Witcher 3 and with it, another GameWorks controversy. Except this time it's much easier to see the naked truth.

The story so far: AMD believes that the implementation of an Nvidia-developed graphics feature called HairWorks (part of the company’s GameWorks library) in The Witcher 3 is deliberately crippling performance on AMD Radeon graphics cards. HairWorks — similar in functionality to AMD’s TressFX — taps into DirectX 11 to tessellate tens of thousands of strands of hair, making them move and flow realistically.

Early and exhaustive benchmarks from German site HardwareLuxx indicate that when HairWorks is activated on a higher-end Nvidia card like the GTX 980, framerate performance drops by 30% (which of course it does, because extra eye candy affects performance!) But on a Radeon 290X? Up to a 61% hit to average framerates.
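To see why those percentage drops translate into such different experiences, here is a quick illustrative calculation; the equal 60 fps baselines are invented for the example, and only the 30% and 61% figures come from the benchmark numbers above:

```python
def fps_after_drop(baseline_fps: float, drop_pct: float) -> float:
    """Framerate left after a percentage performance hit."""
    return baseline_fps * (1 - drop_pct / 100)

# Hypothetical equal 60 fps baselines with HairWorks off:
print(fps_after_drop(60, 30))  # GTX 980-style 30% hit -> ~42 fps
print(fps_after_drop(60, 61))  # 290X-style 61% hit    -> ~23 fps
```

Around 42 fps is still playable; around 23 fps is not, which is why the same feature reads as an eye-candy tax on one card and as broken on the other.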


If you’re following this story, you may be aware of CD Projekt Red’s official statement on the matter. They told Overclock3D that yes, HairWorks can run on AMD hardware, but that “unsatisfactory performance may be experienced as the code of this feature cannot be optimized for AMD products.”


The problem with this statement is that it was overly vague and left a lot of possibilities dangling. Possibilities that can be interpreted to support a variety of arguments. Why can't it be optimized? Was it merely an issue of limited resources or man hours? Is it because, as some have argued, AMD's GCN 1.1 DirectX 11 tessellation is sub-par? Or did Nvidia explicitly prevent CD Projekt Red from optimizing their code on AMD hardware or from inserting their own tech?


The answer to the latter question is a decisive no. We saw technologies from both companies in Rockstar’s Grand Theft Auto V, a game that was optimized quite efficiently on a wide range of hardware from both Team Green and Team Red. And Nvidia’s Brian Burke recently reiterated to PCPer.com that “our agreements with developers don’t prevent them from working with other IHVs.”

But let’s get to the heart of this article. AMD’s chief gaming scientist Richard Huddy recently went on the offensive, claiming that Nvidia’s HairWorks code is somehow deliberately sabotaging Witcher 3 performance on AMD hardware. Speaking to ArsTechnica, he said the following:

“We’ve been working with CD Projekt Red from the beginning. We’ve been giving them detailed feedback all the way through. Around two months before release, or thereabouts, the GameWorks code arrived with HairWorks, and it completely sabotaged our performance as far as we’re concerned. We were running well before that… it’s wrecked our performance, almost as if it was put in to achieve that goal.“

That’s funny, since I attended an Nvidia press conference all the way back in June 2013 that showed an early version of Nvidia’s HairWorks — then unnamed — running on multiple wolves in The Witcher 3. Later, in January 2014 — 16 months ago — Nvidia officially christened the technology “HairWorks” and showed it off again, using several examples of the tech implemented into The Witcher 3.

Here’s a video from Gamescom 2014 (August) showing HairWorks running in The Witcher 3.


Let’s assume Huddy’s claim of working with the developer “from the beginning” is true. The Witcher 3 was announced February 2013. Was 2+ years not long enough to approach CD Projekt Red with the possibility of implementing TressFX? Let’s assume AMD somehow wasn’t brought into the loop until as late as Gamescom 2014 in August. Is 9 months not enough time to properly optimize HairWorks for their hardware? (Apparently Reddit user “FriedBongWater” only needed 48 hours after the game’s release to publish a workaround enabling better performance of HairWorks on AMD hardware, so there’s that.)

Hell, let’s even assume that AMD really didn’t get that code until 2 months prior, even though they’ve been working with the developer since day 1. Do you find that hard to swallow?

That’s all irrelevant in my eyes, because the ask never came in time. Via Ars Technica, Huddy claims that when AMD noticed the terrible HairWorks performance on their hardware two months prior to release, that’s when they “specifically asked” CD Projekt Red if they wanted to incorporate TressFX. The developer said “it was too late.”

Well, of course it was too late. Nvidia and CD Projekt Red spent two years optimizing HairWorks for The Witcher 3. But here’s the bottom line: The developer had HairWorks code for nearly two years. The entire world knew this. If AMD had been working with the developer “since the beginning” how on earth could they have been blindsided by this code only 2 months prior to release? None of it adds up, and it points to a larger problem.

Look, I respect AMD and have built many systems for personal use and here at Forbes using their hardware. AMD’s constant pulpit of open source drivers and their desire to prevent a fragmented PC gaming industry is honorable, but is it because they don’t want to do the work?

A PC enthusiast on Reddit did more to solve the HairWorks performance problem than AMD has apparently done. AMD's last Catalyst WHQL driver was 161 days ago, and the company hasn't announced one on the horizon. Next to Nvidia's monthly update cycle and game-ready driver program, this looks lazy.

If you want a huge AAA game release to look great on your hardware, you take the initiative to ensure that it does. What you don’t do is expect your competitor to make it easier for you by opening up the technology they’ve invested millions of dollars into. You innovate using your own technologies. Or you increase your resources. Or you bolster your relationships and face time with developers.

In short, you just find a way to get it done.

If I sound frustrated, it’s because I am. I’ve been an enthusiastic fan of AMD for a good long while (just look at the numerous DIY builds and positive reviews I’ve given them), and last year at this same time I was admittedly on the other side of this argument. But what I’m seeing now is a company who keeps insisting their sole competitor make their job easier “for the good of PC gaming.” And I see said competitor continuing to innovate with graphics technologies that make games more beautiful. And I see promises like the concept of “OpenWorks” laying stagnant a full year after they’re hyped up. And I see AMD’s desktop GPU market share continue to slip and think to myself “maybe this is not a coincidence.”

I’ve reached out to AMD and invited them to issue a follow-up comment or offer any clarity to Huddy’s statement.

Sorry for quoting the whole post from Jason, but it is well worth a read and I am a million times this with AMD.
 
It was just a thought, Greg; something went wrong, because clearly they didn't anticipate the performance problems on console, let alone desktop.
 
All I see from that is AMD being more willing to put the blame on Nvidia than on the developers, because, well, anything else would be very undiplomatic.
I don't have to be diplomatic.
The developers make the decision to put the game out as is.
You could even call it a moral choice,
and they choose not to put gamers first.
It's their pockets first, always!
 
It was just a thought, Greg; something went wrong, because clearly they didn't anticipate the performance problems on console, let alone desktop.

Fair enough. Same with HairWorks: I don't know what they can and can't alter in the library, and I would have assumed that they have full control over the tessellation level being used.
 
Fair enough. Same with HairWorks: I don't know what they can and can't alter in the library, and I would have assumed that they have full control over the tessellation level being used.

Full control, no, simply because it's already baked into the library; it's a set of paintable geometries made by Nvidia. That's why developers use them instead of making their own: it's quick and easy because you don't have to worry about all that.

They could limit the tessellation throughput at a higher level than the engine; that's how AMD do it through the drivers.
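That driver-level override amounts to a simple clamp; a minimal sketch of the idea (names hypothetical — the real Catalyst slider intercepts hull-shader tessellation factors inside the driver):

```python
def clamp_tess_factor(app_factor: float, user_cap: float) -> float:
    """Cap the tessellation factor the game requests at the value the
    user picked on the driver slider; lower requests pass through."""
    return min(app_factor, user_cap)

# A 64x request under a 16x slider setting is quietly reduced:
print(clamp_tess_factor(64, 16))  # 16
# A request already under the cap is left alone:
print(clamp_tess_factor(8, 16))   # 8
```

The engine never knows the clamp happened, which is exactly why it works even on titles where the setting isn't exposed in-game.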
 
lol gameworks added in the last few months.

Well here is a video from last August. (that is 9 months ago for those who cannot be bothered to work it out :))


And here is one from June 2013 ( that's a lot of months because I cannot be bothered to work it out. :))


So that pretty much shuts up any talk that Gameworks was only implemented in the last few months.
 
Fair enough, Bru. As I said to Greg, it was just a thought; it felt like a last-minute act, given they had to patch some of it on day one. And again...
 
Lol, just because there was a build/demo with HairWorks in it a few years ago proves nothing.
Maybe there were big changes made before launch;
the recent patch that improves HairWorks performance suggests that there most likely were.

That is such a trolling article you guys are putting so much faith in ><
 
There is no perceivable difference between 8x and 16x tessellation. If you are developing a game, you never do something that has a high performance cost but gives almost imperceptible or no visual difference. The performance cost exists in performance scaling for Nvidia as much as for AMD; it's because AMD's GCN 1.0 and 1.1 cards start from a lower base that their performance ends up lower.

The difference is AMD users can override it to gain performance with no visual impact, ending up with higher performance than Nvidia.

The tessellation slider has been a part of CCC for many years; it was put there to give users the option of overriding over-tessellated games in the past, including Nvidia-sponsored ones.

AMD's GCN 1.2 (Tonga) and no doubt the incoming cards address the tessellation issue by increasing tessellation throughput massively.

Aha, so the whole thing is actually an AMD plot to gain better performance than NVidia in one of their own games. Cunning. :D:p:D
 
We don't know which iteration of HairWorks/GameWorks was used back in 2013 versus now. They are most likely completely different; I'm pretty sure Nvidia has kept developing HairWorks since.

You are overriding something that was not intended. AMD only have a slider due to their tessellation not being up to par. Nvidia does not need a slider to reduce it if it's over-tessellated; their cards can easily cope with the levels that are applied.

And LampChop, it's pretty easy for you to say we don't need a tessellation slider in the Nvidia panel when you're sitting on the only generation of Nvidia cards that has been optimised for The Witcher 3. There is no performance difference going from the 340 driver and up for Kepler. I would really like more control over the experience I've paid through the nose to get, so when we get these insane cases I actually have the ability to control some of the outcome. If I can control the detail of shadows and textures and whatnot, why shouldn't I also be able to control the level of tessellation? It's the reason I game on a PC rather than a console: options and more control.
 
lol gameworks added in the last few months.

Well here is a video from last August. (that is 9 months ago for those who cannot be bothered to work it out :))


And here is one from June 2013 ( that's a lot of months because I cannot be bothered to work it out. :))


So that pretty much shuts up any talk that Gameworks was only implemented in the last few months.

Yeah, but it was AMD that said it, so you can pretty much ignore it.
 
lol gameworks added in the last few months.

Well here is a video from last August. (that is 9 months ago for those who cannot be bothered to work it out :))


And here is one from June 2013 ( that's a lot of months because I cannot be bothered to work it out. :))


So that pretty much shuts up any talk that Gameworks was only implemented in the last few months.

No it doesn't; that was the optimised Kepler build version, it got scrapped with an updated GameWorks to favour Maxwell...

joke.
:p
 