
CDPR on W3 HairWorks (NVIDIA GameWorks): 'the code of this feature cannot be optimized for AMD'

He also says, and this is kind of important, that shortly after the 2013 demo they scrapped a lot and went with a completely different shader system. And he's right: if you actually care to pause the 2013 footage, it's very flat, has no reflection-based shaders and generally looks dated.
 
I've uploaded a short video to YouTube trying to showcase the weird performance issue I talked about earlier. Please excuse my poor, poor editing skills.

 

Of course they scrapped a lot. It was never going to run on a console looking the way it did, so it had to be bastardised to suit. Let's say you're right about the 2013 footage; the 2014 footage was even better, yet that was thrown out in favour of having it running on a crap console GPU with an average CPU. There are reasons why they have enhanced versions of the Witcher games... to put back what they had to remove. They have previous form. Cyberpunk will be the same: looks great now, just wait till Sony and MS tell them it had better not look better than it does on the consoles.
 

But TBF this is more on CDPR: they decided to make consoles the main platform for the game and then downgraded the PC version, since they thought it was not worth spending more money. It's what Crytek did after Crysis. In retrospect, this is why the derision Crytek got over Crysis meant that more companies now target consoles as the main platform and not PC.
 
Just tried removing the files on an i5 system with a GTX 780 and it made absolutely no difference to the frame rate, so I would love to know who Orangey's confirmed source is.

Knowing him, probably some scrotum on Reddit who's got a brother with a friend who says so. Will be invoicing for time wasting. :D
 

If I'm not mistaken, I'm that source, and I've posted my proof of it; just look above. An important note, though: I have kept saying this doesn't translate to a raw FPS increase throughout the entire game. I only posted about this in an attempt to figure out where some of the performance might be lost, in the hope that it will be fixed in the future. It's not an attempt to point fingers at NVIDIA or anybody, really.
 
I think it's down to me, the user, to be able to control such a setting. I'm playing on a PC, after all, not a console. I don't understand why CDPR haven't made it an option in their game yet, since it's such a performance hog. Give me the darn tessellation slider already.

The tessellation factor will be programmed into the geometry for HairWorks by CDPR (no, you do not need source code to program basic geometry), and then it's handed over to DirectX to go through the tessellation stage, which controls a whole lot more and performs further optimisation. The fact that slower cards are coming out, well, slower is a combination of driver optimisation and the fact that their tessellation pipeline is slower.

The geometry used obviously doesn't perform as well on other hardware as it does on Maxwell. I'm not sure why this is such a huge shock, but it's not really helped by the fact that Kepler performance has been neglected recently. A good question to ask is: would AMD have the slightest quibble if their tessellation pipeline were better than NVIDIA's? Or would it be something they'd be reminding you all of on Twitter?
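For anyone wanting a concrete picture of that hand-off, here is a minimal sketch in D3D11 terms. It only shows the general mechanism (an application feeding a developer-chosen tessellation factor to the hull shader through a constant buffer before the hull/tessellator/domain stages run); HairWorks' actual code path isn't public, and the struct, function name and the value 64 are made up for illustration.

```cpp
#include <d3d11.h>

// Hypothetical layout: one developer-chosen tessellation factor, padded to
// 16 bytes because constant buffers are sized in 16-byte chunks.
struct TessConstants
{
    float tessFactor;
    float padding[3];
};

// Sketch only: create a small constant buffer holding the factor and bind it
// to the hull-shader stage. The hull shader writes this out as the patch tess
// factors; the fixed-function tessellator and the domain shader then do the
// rest entirely on the GPU.
void SetHairTessFactor(ID3D11Device* device, ID3D11DeviceContext* ctx, float factor = 64.0f)
{
    TessConstants data = { factor, { 0.0f, 0.0f, 0.0f } };

    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth = sizeof(TessConstants);
    desc.Usage     = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;

    D3D11_SUBRESOURCE_DATA init = {};
    init.pSysMem = &data;

    ID3D11Buffer* cb = nullptr;
    if (SUCCEEDED(device->CreateBuffer(&desc, &init, &cb)))
    {
        ctx->HSSetConstantBuffers(0, 1, &cb);
        cb->Release(); // the context keeps its own reference
    }
}
```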


Well, you're not the stereotype I was describing ;). I couldn't find any performance loss.
 
Don't get angry with me; this isn't some guy off OCN, it's a friend who has worked first hand with Rebellion and other developers, along with NVIDIA.

I am so very sorry if my sources are too credible for you; I know it's not as much fun when there's just nonsense being posted backwards and forwards. Maybe someone who knows better can come and refute what I'm saying. And yes, I understand fundamentally what the tessellation process does, because I bothered to read up on it. Again, very sorry if I'm making you use a higher brain function in order to come back with something constructive. If I copied and pasted what I was actually being told, you'd just have posted something as dumb as you did before.

The truth of the matter is some of you don't really even care what the reasoning for the performance issues might be; you'd rather live in la-la land and believe it's some corporate conspiracy. Well, thank Christ you aren't running for government.
 
Why would the developer add this awful, bloated code into their game one month from launch? NVIDIA probably struck a deal with the suits to add in something THEY wrote.

It makes each primitive sub-pixel in size and adds 8x MSAA. This is a world-renowned developer creating a graphical showpiece that requires heavy optimization, yet the effect is "meh" at best and adds nothing to the game, if you ask a substantial portion of players. This would have been obvious to anyone.

There's somebody up to no good somewhere.
 
I don't believe for a second this code was added one month prior; just look at all the evidence that refutes that. HairWorks has been present in W3 since the very early days. You're basically suggesting that CDPR willingly sabotaged performance on hardware they'd been optimising for years, just because NVIDIA wanted them to.

8x MSAA is applied only when HairWorks is active (and to HairWorks alone). Is it not plausible that the line tessellation appeared too sharp when coupled with the sharpening filter and other post-processing effects? A tessellation slider should be present in game; that isn't up to NVIDIA. Not to mention you can change this easily from the rendering ini, as I have done with quite a few settings myself (HairWorksAALevel=8).
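For reference, a minimal sketch of the tweak being described, assuming the key sits in the game's rendering config the way the post says (exact file location and section may differ between installs, so back the file up first):

```ini
; assumed location: <game dir>\bin\config\base\rendering.ini
; the shipped value quoted above is HairWorksAALevel=8; lowering it reduces
; the MSAA applied to HairWorks only
HairWorksAALevel=4
```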
(shameless plug)



Also, it's pretty clear to anyone who looks at the comparison shots between tessellation factor levels when overridden on AMD hardware that reducing it has an impact on the way Geralt's hair looks. You've got people moaning about the trees being at a right angle on a summer's day, but they'd be OK with right angles in Geralt's hair.

Nothing I'm seeing here is new. Even as far back as the Fermi architecture, NVIDIA had a superior threading and processing engine. Maxwell's tessellation pipeline is three times faster than Kepler's, and I wouldn't even care to look at how much more efficient it is than Hawaii.
 

That's assuming they were even pushed to optimise it for the older-gen cards.
When they talked about their test machines it was 960s, 980s, etc.
This kind of shows when you see some benchmarks where a 960 matches a 780; when you look at the hardware specs it's hard to understand how they even managed that.

Maybe it's perfectly innocent, but it's worrying for the future to me how easily it is done,
& I don't think that's tinfoil-hat stuff, that's a legitimate worry lol!
 
I think they are possibly being told to focus on Maxwell, yeah. Developers and compilers alike. Which is great for those of us who follow the fold, not so great for people who are waiting it out.

I think Kepler performance will get some 'first child' attention shortly, if only because people are noticing the drop-off.
 
Sorry, you must have missed the part where I made sure he had nothing else to contribute.

I've already pointed out you like to quote the internet, Tommy. No need to spell it out any further. Spare us.


I'll break it down so you can get it past that cynic on your shoulder.

The amount of tessellation is determined by the developer (and may vary depending on target system performance, etc.).

Source geometry is sent to the hardware along with values stating the level of tessellation desired and whether the hardware is to deal with quad, tri or line data.

It is then handed over to DirectX to run through the hardware tessellation process (hull shader, tessellation stage, domain shader), and NVIDIA's pipeline simply scales that much better.

The 8x MSAA applied is easily adjusted through the rendering ini.

The tessellation factor is adjustable through the Catalyst Control Center; the result is quite detrimental at 4 or below (right angles in Geralt's hair).
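As a very rough sketch of what that driver-side override amounts to conceptually: the game (or HairWorks) asks for a factor, the driver clamps it to the user-selected maximum, and the number of segments each hair spline gets scales with whatever survives. The real Catalyst implementation isn't public, so the function and values below are made up purely for illustration.

```cpp
#include <algorithm>
#include <cstdio>
#include <initializer_list>

// Hypothetical: real drivers do this inside tessellator setup, not in app code.
float ApplyDriverTessCap(float requestedFactor, float driverCap)
{
    // The Catalyst Control Center override effectively clamps whatever factor
    // the application asked for to the user-selected maximum.
    return std::min(requestedFactor, driverCap);
}

int main()
{
    const float requested = 64.0f; // e.g. the factor the game asks for
    for (float cap : { 64.0f, 16.0f, 8.0f, 4.0f })
    {
        const float effective = ApplyDriverTessCap(requested, cap);
        // For isolines the factor roughly sets how many segments each hair
        // spline is split into, so low caps mean visibly angular strands.
        std::printf("cap %5.1f -> effective factor %5.1f (~%d segments per strand)\n",
                    cap, effective, static_cast<int>(effective));
    }
    return 0;
}
```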


Summary: "Oh, it's not fair, NVIDIA's pipeline is better, and it's made worse by the fact they don't want us seeing their source code", while glossing over the fact that our tessellation performance has been lagging behind for years.
 