
CPR on W3 HairWorks (Nvidia GameWorks): 'the code of this feature cannot be optimized for AMD'

Kaap stop showing off ^^^ :p

Make yourself useful and post a Titan X TessMark score at 1080p, reference clocks? :)

Don't know what res they used, assuming 1080P

Just checked, it only uses one card. :)

What other settings should I use apart from 1080p ?
 
Kaap, don't know, they don't say what settings they used...

Just run it at 1080P, expecting around 80000.
 
Well, TressFX's first iteration had less of a performance hit than HairWorks.

TressFX 2.0 is used in Tomb Raider: Definitive Edition on PS4, which is even more optimized.

Rise of the Tomb Raider on the Xbox One will have an even more up-to-date TressFX implementation.

I hope they solve the way the hair appeared to cut through the neck; it really affected how good it looked. Other than that TressFX was great and ran fine on the Nvidia card I had at the time I was playing Tomb Raider. In fact, my bench results show my 780 Classified performed as well as the Twin Frozr 290X I have now, with only a 2-frame drop on minimums.
 
Kaap, don't know, they don't say what settings they used...

Just run it at 1080P, expecting around 80000.

I figured out what they have done.

They used default settings for everything except they selected "Extreme X32".

You were pretty close with your prediction; here is what I got:

[TessMark result screenshot]
 
Do synthetic tessellation benchmarks really matter? Unless there are more games like Crysis 2 coming down the pipe, with flat objects getting tessellated to hell and back for zero visual difference, I don't see the problem.
 
I hope they solve the way the hair appeared to cut through the neck; it really affected how good it looked. Other than that TressFX was great and ran fine on the Nvidia card I had at the time I was playing Tomb Raider. In fact, my bench results show my 780 Classified performed as well as the Twin Frozr 290X I have now, with only a 2-frame drop on minimums.


Geometry problem: the hair doesn't recognise the skin as a solid object, instead it's resting on the bone structure. Yes, animated AI have a bone structure, a primitive one of sorts that's programmed to animate the AI.

They could get around it if the dev applies a rigid-body fix of some sort, I think.
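Roughly what that sort of fix amounts to, as a minimal Python sketch (purely illustrative, not how HairWorks or TressFX are actually written): fit a few collision spheres or capsules to the neck and shoulders, and push any simulated hair point that sinks inside them back out to the surface, so the strands rest on an approximation of the skin rather than on the bones.

```
# Minimal sketch (illustrative only, not actual HairWorks/TressFX code):
# push simulated hair points out of collision spheres attached to the bones,
# so strands rest on an approximation of the skin instead of clipping through it.
import math
from dataclasses import dataclass

@dataclass
class Sphere:
    cx: float   # centre, would be updated from the bone transform each frame
    cy: float
    cz: float
    r: float    # radius, fitted so the sphere roughly matches the skin surface

def resolve_collisions(points, spheres):
    """Project any hair point that ends up inside a sphere back onto its surface."""
    for i, (x, y, z) in enumerate(points):
        for s in spheres:
            dx, dy, dz = x - s.cx, y - s.cy, z - s.cz
            d = math.sqrt(dx * dx + dy * dy + dz * dz)
            if 1e-6 < d < s.r:
                scale = s.r / d
                x, y, z = s.cx + dx * scale, s.cy + dy * scale, s.cz + dz * scale
        points[i] = (x, y, z)
    return points

# Example: a strand point that has sunk into a "neck" sphere gets pushed back out.
neck = Sphere(0.0, 1.5, 0.0, 0.10)
print(resolve_collisions([(0.02, 1.52, 0.0)], [neck]))
```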

@ Kaap, that is pretty close :D thanks for testing it.

Well i'm off to bed...
 
I hope they solve the way the hair appeared to cut through the neck; it really affected how good it looked. Other than that TressFX was great and ran fine on the Nvidia card I had at the time I was playing Tomb Raider. In fact, my bench results show my 780 Classified performed as well as the Twin Frozr 290X I have now, with only a 2-frame drop on minimums.

2.0 was meant to improve on that, but only the consoles got it; we will have to see what 3.0 brings.
 
Do synthetic tessellation benchmarks really matter? Unless there are more games like Crysis 2 coming down the pipe, with flat objects getting tessellated to hell and back for zero visual difference, I don't see the problem.

Would you find it acceptable if Nvidia were rendering textures at a lower resolution on the GTX 970 to save VRAM, assuming that the user couldn't notice it?

Like I said, tessellation will be used increasingly in gaming going forward and AMD will probably not be rendering it as intended. Is one case of extreme over-tessellation justification for widespread cheating?
 
Would you find it acceptable if Nvidia were rendering textures at a lower resolution on the GTX 970 to save VRAM, assuming that the user couldn't notice it?

Like I said, tessellation will be used increasingly in gaming going forward and AMD will probably not be rendering it as intended. Is one case of extreme over-tessellation justification for widespread cheating?

Not wanting to start an argument, but your example of Nvidia rendering stuff at a lower resolution wouldn't be the same thing at all. That would involve Nvidia not actually rendering the same scene as AMD. It would be like having a scene with two characters fighting, but one card only renders one of them to make it faster.

Now don't get me wrong, I'm not saying that making a scene massively complex just because it runs better on one set of hardware is a good idea, but at least both sets of hardware are rendering the same scene.
 
Just read this entire thread and I twigged it's about Witcher 3, but CPR, what is that? Someone need resuscitation?

This is common practice for Nvidia, the same with their G-Sync implementation versus AMD's FreeSync.

Quote"
G-Sync monitors require a proprietary Nvdia G-Sync scaler module in order to function, which means all G-Sync monitors have similar on screen menus and options and also have a slight price premium, whereas monitor manufacturers are free to choose scalers from whichever manufacturers produce hardware that supports FreeSync. "

"In order to be G-Sync compatible, the screens need G-Sync specific hardware that's rather expensive, unofficially adding around £75 on to the price of any given monitor. "

"FreeSync, which is an AMD technology, uses the Adaptive Sync standard built into the DisplayPort 1.2a specification. Because it's part of the DisplayPort standard decided upon by the VESA consortium, any monitor with a DisplayPort 1.2a input is potentially compatible. That's not to say that it's a free upgrade; specific scaler hardware is required for FreeSync to work, but the fact that there are multiple third-party scaler manufacturers signed up to make FreeSync compatible hardware (Realtek, Novatek and MStar) should mean that pricing is competitive due to the competition.

While DisplayPort 1.2a is an open standard that can be used by anyone, Nvidia's latest 900-series graphics cards don't use it, with the firm saying it's going to continue focusing on G-Sync instead. Some monitor manufacturers are sticking with Nvidia for now, too. "

So enlighten me, how is it that this is all AMD's fault? Why should they be accused of "whinging" when it's clearly Nvidia using anti-competitive practices?

No matter how much of a fanboy you are, competition is a good thing. I've always been on the red team, and up until the last two CPUs, in AMD's camp.
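For what it's worth, at a high level both G-Sync and FreeSync do the same job: the monitor delays its refresh until the next frame is ready, within the panel's supported range. A rough Python sketch of that timing logic (illustrative only; the 40-144 Hz range is an assumed example, and real implementations handle the below-range case with frame repetition rather than a simple clamp):

```
# Illustrative sketch of variable refresh: the display refreshes when the next
# frame arrives, within the panel's supported range, instead of on a fixed schedule.
# The 40-144 Hz range is just an assumed example, not any particular monitor.
PANEL_MIN_HZ, PANEL_MAX_HZ = 40, 144
MIN_INTERVAL_MS = 1000 / PANEL_MAX_HZ   # fastest the panel can refresh (~6.9 ms)
MAX_INTERVAL_MS = 1000 / PANEL_MIN_HZ   # longest it can wait before it must refresh (25 ms)

def refresh_interval(frame_time_ms: float) -> float:
    """Refresh when the frame is ready, clamped to what the panel supports."""
    return min(max(frame_time_ms, MIN_INTERVAL_MS), MAX_INTERVAL_MS)

for ft in (5.0, 12.0, 20.0, 40.0):   # frame times for 200, ~83, 50 and 25 fps
    print(f"frame {ft:5.1f} ms -> refresh every {refresh_interval(ft):5.1f} ms")
```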

Not sure this is really about FreeSync and G-Sync.
I think long term Nvidia could struggle to keep G-Sync going in its current form, whether it's better or worse than FreeSync.

It's much the same as Mantle with AMD. AMD worked on Mantle, which only worked on some AMD cards, to improve performance, while Nvidia stuck with DirectX (which works on all cards) and included DirectX improvements in their drivers. It doesn't look like we'll be seeing much more of Mantle for the time being, though, possibly because it being "AMD only" and controlled by AMD made people reluctant to adopt it.

G-Sync spawned FreeSync, much like Mantle gave rise to DirectX 12 and Vulkan, so they did do some good for the industry.

Both companies want to sell hardware, and it's unique features that are likely to help with that.
If both companies produced the same hardware and sold it at the same price then things could stagnate very quickly.

Maybe that's what will happen with GameWorks too. Maybe it's just the catalyst to something bigger and better.
 
Did you get out of the wrong side of the bed this morning?

Wasn't aware that was a requirement for being sarcastic ;)

Exactly the reason that slider was introduced: to stop things from being needlessly over-tessellated in order to cripple performance. This includes things on and off screen.
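In effect that slider is a driver-side cap: whatever tessellation factor the application requests gets clamped to the user's chosen maximum before it reaches the hardware. A toy Python illustration of the idea (not AMD's actual driver code; the "use application settings" pass-through is just an assumed mode):

```
# Toy illustration of a tessellation override (not AMD's actual driver code):
# whatever factor the application requests is capped at the user-selected maximum.
from typing import Optional

def apply_override(requested_factor: float, user_cap: Optional[float]) -> float:
    if user_cap is None:                     # "use application settings" - pass through
        return requested_factor
    return min(requested_factor, user_cap)   # otherwise clamp to the user's cap

print(apply_override(64.0, 16.0))   # a 64x request on a flat surface becomes 16x
print(apply_override(8.0, 16.0))    # modest requests pass through unchanged -> 8.0
print(apply_override(64.0, None))   # no override -> 64.0
```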

Remember, you aren't really meant to say things like that without any evidence to back it up any more, Matt (because it's a baseless lie).

That could have quite easily been avoided (along with other tweaks) if there had been a driver released in time for the game launch. Quite a big game launch at that.
 
Also, those tessellation tweaks are available to the user in CCC. I wonder if they can be used to work around the tessellation performance issue Nvidia cites in W3, and if so, why have AMD not done this themselves?

Can anyone tell the difference between 8x and 16x?
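Hard to say by eye, which is rather the point: the geometry produced grows roughly with the square of the tessellation factor, so the cost difference between factors is far bigger than the visual one. A rough Python back-of-envelope (the quadratic growth is an approximation; the patch count is purely illustrative):

```
# Back-of-envelope: triangles produced per patch grow roughly with the square of
# the tessellation factor (ignores culling, fractional and adaptive factors, etc.).
def approx_triangles(base_patches: int, factor: int) -> int:
    return base_patches * factor * factor

BASE_PATCHES = 1_000   # purely illustrative number of patches in a scene
for f in (8, 16, 64):
    print(f"{f:>2}x -> ~{approx_triangles(BASE_PATCHES, f):,} triangles")
# Output:  8x -> ~64,000   16x -> ~256,000 (4x the work)   64x -> ~4,096,000 (64x)
```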

 