for the same input data you'd get the same output data regardless of the brand of hardware. The software controls the LOD used.
I think this is something that should be verified. Your argument in that thread rested on the claim that the people who said performance would drop if the tessellation output increased were wrong:
By gimping the shaders I'm referring to the claims you, Charlie and a few other parties have made about how the method nVidia use for handling tessellation will reduce non-tessellated shader processing output... which I claim shows a lack of understanding of how nVidia are actually going about it and complete ignorance of the load balancing aspect - and I think the Heaven benchmark backs me up on that point.
Load balancing I understand, but I believe you are contradicting yourself here.
If the hardware (and/or drivers) is load balancing the tessellation work, then performance wouldn't decrease, as you said in that quote ^^^. However, that is in direct conflict with your recent statement:
for the same input data you'd get the same output data regardless of the brand of hardware. The software controls the LOD used.
You cannot do both. Either the non-tessellated shader output will decrease, or the tessellated output will. Either way, as you've said yourself, it's scaling something.
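To put the contradiction in concrete terms, here's a toy sketch (purely illustrative; the fixed budget, the numbers and the function names are all invented, not a model of any real GPU). Assuming a unified-shader part with a fixed pool of shader cycles per frame that the load balancer divides between tessellation work and everything else, granting more cycles to tessellation necessarily leaves fewer for non-tessellated shading, unless the LOD is scaled back so the tessellation cost stays constant, in which case it's the tessellated output that drops instead:

```python
# Toy throughput model (illustrative only, not a model of any real GPU).
# Assumption: a unified-shader GPU has a fixed pool of shader cycles per
# frame, which the load balancer splits between tessellation-related work
# and ordinary (non-tessellated) shading.

FRAME_BUDGET = 1_000_000  # total shader cycles per frame (made-up figure)

def shading_cycles_left(tessellation_cycles: int) -> int:
    """Cycles remaining for non-tessellated shading once the load
    balancer has granted `tessellation_cycles` to tessellation work."""
    return FRAME_BUDGET - tessellation_cycles

# As tessellation output rises, the budget for everything else falls;
# the only way to hold shading output constant is to scale the LOD down,
# i.e. to reduce the tessellated output instead. Something has to scale.
for tess in (0, 200_000, 400_000):
    print(f"tessellation={tess:>7} -> shading budget={shading_cycles_left(tess):>7}")
```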