NVIDIA Publishes DirectX 12 Tips for Developers

GameWorks doesn't do anything with tessellation; tess is driven by the source code and on into the pipeline. The developer only needs to change tess factors if enough people have performance issues, which they have done in the most recent case. There is literally no way of twisting this information or fact into anything else.

These high factors are only an issue because of inferior performance, not least of all on non-game-critical effects optimised to run on the vendor's own hardware. It has little bearing on tips for good middleware optimisation if you have a basic grasp of the matter.
 
We already have GameWorks using excessive amounts of tessellation in DX11, beyond how DX11's tessellation feature was intended to be used, so you can't really blame people for worrying about foul play when Nvidia is "teaching" developers how to implement DX12 in a specific way, rather than letting developers implement EVERYTHING that DX12 can offer.

I mean, can you imagine how silly it would be if, say, a PSU manufacturer claimed their PSU was rated at 80 Plus, and when they tried to get it certified, asked the certifying bodies to run their tests in a specific manner, providing them with a guide with instructions on what to do?

I mean, yes, it made sense for AMD to do something like that for Mantle, as it is their API, but DX12 is not Nvidia's API. I think developers should just use DX12 the way they want to use it, and if graphics cards cannot deliver, then the GPU manufacturers should play catch-up, rather than asking developers to slow down and wait for them.

The way I see it, Nvidia made promises that their cards can support DX12, but they now realise they are falling short in terms of the scale of what can be supported, so they are trying to get developers to use DX12 in a specific way to hide that and avoid negative publicity.

The Crysis 2 DX11 tessellation thing? Yeah, that was BS, but hardly Nvidia's fault (some conspiracy theorists might say otherwise, but it's mostly hot air plus circumstantial evidence and reasoning). Plus it's well known that AMD's tessellation performance has always been a bit weak, even to this day (though much improved).

Granted, DX12 is not an Nvidia API, but with it being "closer to the metal" it makes sense, as a lot more stuff just can't be handed over to the driver or the API to do.

As for the "falling short" on the DX12 side of things, I presume you're referring to the "async compute" situation. Sure, it's not fully enabled in their drivers, but in the TWO DX12 benchmarks which use async compute Nvidia seem to be doing fine.
 

Unless they want to use high levels of tessellation, I assume? That's bad even when it's the developers using DX11 the way they want?
 
In all these cases it is not down to NVidia or AMD; it is the developer that decides how much of a feature their code uses. If AMD or NVidia suggests they use a certain amount of a feature, all they have to do is say no.
 
What benefit do you get out of using high tess levels? :)

None. 8x is perfectly acceptable; anything over that and it's hard to see the difference. 16x, OK, maybe, but you're being a bit silly; 32x is a massive waste of resources; 64x is madness.

Having said that, if you want your world to look good you might use high levels of vegetation, and that will again translate into high levels of tessellation, so having a GPU with high tessellation throughput is a useful thing.

But just ramping up the levels of tessellation in your geometry is daft.
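To put rough numbers on the 8x-vs-64x point: with uniform integer partitioning, a triangle patch subdivides into roughly factor-squared triangles (exact counts depend on the partitioning mode), so the cost explodes far faster than any visible gain. A quick C++ illustration:

```cpp
#include <cstdio>

int main() {
    // Approximate triangles per patch at a uniform integer tess factor N:
    // a triangle domain subdivides into roughly N * N triangles.
    const int factors[] = {1, 8, 16, 32, 64};
    for (int factor : factors) {
        std::printf("factor %2dx -> ~%4d triangles per patch\n",
                    factor, factor * factor);
    }
    return 0;
}
```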
 
The proper answer to that wouldn't be understood by most here, myself partly included.

The proper answer is Nothing. I'm sure you'd manage to understand it; don't put yourself down :p

About the same as using high levels of Async compute by the looks of the benches. :D:p:D

No bench has used high levels, or even moderate levels, looking at the Ashes and Fable development posts. Fable barely uses any, in fact; I think they said it was used for 2-3 effects. More use of async will mean better performance for cards supporting it; it's not in the same boat as tessellation :D
 
This is precisely why these things fall on deaf ears. For instance, did you know that these factors can be calculated on the fly? Not once has anyone mentioned this when talking about excessive tessellation, because they do not have an understanding of how the pipeline works.

After the control points and various parameters are set in the hull shader, the tessellator requires fixed geometry settings, including the tess factor, in order to operate before the results are passed on to the domain shader. However, if left in a certain state, these are calculated dynamically (LOD) depending on range, or on whether the object is even in view. Typically, the closer you are to the object, the more performance will decrease. This is noticeable with HairWorks in The Witcher 3. Setting the maximum tess factor for line tessellation makes more sense than people realise, as it's not the best method for rendering something like hair anyway, which is another topic entirely; with this method, the more that is used the better. Essentially there is a lot more to it than end users understand or give credit for - myself included.
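To illustrate the "calculated on the fly" point: the factor can be derived per patch from camera distance instead of being a fixed value, and a factor of zero tells the tessellator to cull the patch entirely. A minimal C++ sketch of that kind of falloff (the function name, ranges and linear curve are all illustrative, not anyone's actual implementation; in D3D11/12 this logic would live in the hull shader's patch-constant function):

```cpp
#include <algorithm>

// Hypothetical distance-based LOD for a tessellation factor: full detail
// up close, falling off linearly to the minimum at range.
float TessFactorForDistance(float distToCamera)
{
    const float nearDist  = 5.0f;    // full detail inside this range
    const float farDist   = 100.0f;  // minimum detail beyond this range
    const float maxFactor = 64.0f;   // D3D11's maximum tess factor
    const float minFactor = 1.0f;

    float t = std::clamp((distToCamera - nearDist) / (farDist - nearDist),
                         0.0f, 1.0f);
    return minFactor + (maxFactor - minFactor) * (1.0f - t);
    // An edge factor of 0 would cull the patch altogether, e.g. when the
    // object is out of view.
}
```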
 
I'm more of a results guy. I don't understand how tessellation works beyond breaking things down into smaller triangles. What I do know is that increasing tessellation past a point does nothing for visual quality and decreases performance. I see it like AA: you need some AA, but there are diminishing returns as you up the levels.
We don't need to know how something works to see the effects of using it at differing levels.
 

Then it is the results you should reflect on. For instance, it's when pixels become 'X' amount smaller in size that AMD tessellation falls off a cliff. Even taking the application of such things out of the equation, one hardware pipeline is still superior to the other if it can handle geometry the other cannot. This doesn't just apply to line tessellation - Crysis 2 using excessive tessellation on geometry was another stab at Nvidia when all else failed, at a time when all source code for tess-based effects was implemented by the developer and the developer alone. AMD has been banking on people taking their viewpoint on these things for a few years now.

This has little bearing on the topic at hand anyway, as all the points made are ones any developer should really adhere to. NVIDIA have given these types of guidelines for a long time - a lot longer than the GW project has been running, or than Crysis 3 has been out.

That's not to say your viewpoint is wrong, but criticising one method without having an understanding of it is why these debates are never-ending.
 

Hard to argue with AMD's tess performance not being up to scratch. But I feel you're arguing something different to what I am (or I'm just plain not understanding, which is highly possible).
Myself, and most consumers, would look at it in simple terms.
Does increasing tessellation past 'X' point (where AMD drops off) improve the way the game looks? The answer seems to be a resounding no. Therefore, is it worth doing? Again, no. What's happening in the background, as complex and fantastic as it may be, is of little interest to most consumers.

What you seem to be saying (I think) is that AMD should improve their tessellation performance so devs can use it more freely without the performance hit, i.e. it's AMD's fault for the poor performance in those conditions, which I cannot argue with. My argument is about the value of increasing it.
 
You're not wrong, Silent_Scone; the problem is W3 HairWorks is tessellating at about 4x per pixel.

You do need to tessellate hair if you want to give it 3D geometry, which you do, but you don't need an overabundance of vertices in something that's little more than a pixel wide; that is mad.
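That "4x per pixel" figure is easy to reason about: once a strand's projected size is around one pixel, extra subdivisions cannot show up on screen. A hypothetical clamp, purely to illustrate the idea (this is not how HairWorks actually works):

```cpp
#include <algorithm>

// Illustrative only: cap a strand's subdivision count by its on-screen
// size, so we never spend several vertices on a single pixel of hair.
// 'projectedLengthPx' would come from projecting the strand into screen
// space; the 64.0f cap matches the D3D11 maximum tess factor.
float HairTessFactor(float projectedLengthPx)
{
    return std::clamp(projectedLengthPx, 1.0f, 64.0f);
}
```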

As for the Crysis 2 "unnecessarily tessellating the ocean under land" one, that's a bum argument.

If you have an ocean, or water volumes which depend on a world ocean, it's actually completely normal to have it under the land mass. The ocean does not stop at the rendered land; unlike real life, it carries on beneath the land. The land render actually sits on top of a solid body of water that is the ocean, and this is true for every game which has an ocean.
 

Here.... :)

[screenshots: Crysis 2 wireframe captures showing the tessellated ocean mesh continuing under the land]

Now stop it! :p
 
Wasn't the ocean-under-the-map thing debunked by Crytek themselves? CryEngine uses occlusion culling; stick it in dev mode and you can see all the polys as it stops culling, but in the game most of that doesn't get rendered.

It was more that some random objects were over-tessellated, and to a certain extent that got blamed on EA rushing them to launch, so not all the assets were tidied up.

As much as people love to blame Nvidia, they didn't write the game, and it does seem counterproductive to deliberately make your game run poorly on one set of hardware over another... didn't AMD come out and add the tessellation slider to CCC because of this?
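The dev-mode point boils down to something like this (a generic sketch of CPU-side occlusion culling, not CryEngine's actual code): with culling active the hidden geometry is never submitted, so its tessellation cost largely never materialises, while a debug mode that disables culling makes all of it visible again:

```cpp
#include <vector>

struct Mesh { /* geometry, bounds, ... */ };

// Stub: a real engine would answer this with bounding-volume tests or
// GPU occlusion queries.
bool IsOccluded(const Mesh&) { return false; }
void Draw(const Mesh&)       { /* submit the draw call */ }

void Render(const std::vector<Mesh>& meshes, bool devModeDisableCulling)
{
    for (const Mesh& m : meshes) {
        if (!devModeDisableCulling && IsOccluded(m))
            continue;   // skipped before any tessellation work is issued
        Draw(m);
    }
}
```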
 

Look at the tessellation lines under the land in my screen cap above.
 