More bad news for nvidia if true.

for the same input data you'd get the same output data regardless of the brand of hardware. The software controls the LOD used.

I think this is something that should be verified. Your argument in that thread was based on the claim that the people who said performance would drop if the tessellation output increased were wrong:

By gimping the shaders I'm referring to the claims you, Charlie and a few other parties have made about how the method nVidia use for handling tessellation will reduce non-tessellated shader processing output... which I claim shows a lack of understanding of how nVidia are actually going about it and complete ignorance of the load balancing aspect - and I think the Heaven benchmark backs me up on that point.

Load balancing I understand, but I believe you are contradicting yourself here.

If the hardware (and/or drivers) is load balancing the tessellation work, then performance wouldn't decrease, as you said in that quote ^^^. However, that is in direct conflict with your recent statement:

for the same input data you'd get the same output data regardless of the brand of hardware. The software controls the LOD used.

You cannot have both. Either the non-tessellated shader output will decrease, or the tessellated output will. Either way, as you've said yourself - it's scaling something.
 
There is going to be a performance decrease somewhere - I never said there wouldn't be. The point was it's not as severe as they were claiming, due to the load balancing and the fact that, tessellated or otherwise, even when the GPU is going "flat out" on one traditional shader workload, not all clusters are actually as tied up as each other.

As for the input/output - ATI and nVidia would have to do the same thing under the hood for the same software input parameters, otherwise the output would be unpredictable and impossible to develop against.
 
It's pretty clear what you said in that first quote, Rroff. If the tessellated output HAS to be the same on both cards, then it's going to affect the non-tessellated output. That's how load balancing works. You can't say the tessellation is only using up unloaded clusters, because that would mean it's the tessellated output that's being scaled.

I can see this discussion becoming a confusing mess of double negatives, so maybe this will help:

There is going to be a performance decrease somewhere - I never said there wouldn't be
By gimping the shaders I'm referring to the claims you, Charlie and a few other parties have made about how the method nVidia use for handling tessellation will reduce non-tessellated shader processing output... which I claim shows a lack of understanding of how nVidia are actually going about it and complete ignorance of the load balancing aspect - and I think the Heaven benchmark backs me up on that point.

You rubbished the notion that non-tessellated output would drop; now you are saying it will :)
 
You need to back up a step and get the context from the original claims.

I rubbished the severity of the claimed performance impact; I did not dispute that there would be a performance decrease. I said it wouldn't be as bad as they were saying, because they had been mistaken in their understanding of how the PolyMorph engine works with the rest of the shader hardware.

The load balancing is not about changing how the tessellation hardware works but about how it fits in with the other, non-tessellated processing on the hardware. It's a bit difficult to explain simply without unintentionally presenting misinformation, but essentially not every cluster is loaded up the same, and even when you're at maximum capacity with traditional shader work, that doesn't mean you are actually utilising 100% of the hardware's capabilities. You can take up the slack with a different type of processing, which means the overall performance impact is reduced somewhat.
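
To put rough numbers on what I mean, here's a quick Python sketch - the cluster count, utilisation figures and tessellation cost are all made up for illustration, this is not how the real scheduler works:

```python
# Toy model of the "slack" argument - hypothetical numbers, not real GF100 behaviour.

# Fraction of each cluster's capacity already busy with traditional shader work.
cluster_utilisation = [0.95, 0.80, 0.70, 0.90]    # four imaginary clusters

total_capacity = float(len(cluster_utilisation))  # 1.0 per cluster
shader_load = sum(cluster_utilisation)            # shader work already running
slack = total_capacity - shader_load              # idle capacity spread across clusters

tessellation_cost = 1.0   # made-up cost of the tessellation workload, in "clusters"

# Whatever fits into idle capacity costs nothing extra; only the overflow
# has to displace traditional shader work.
absorbed = min(tessellation_cost, slack)
displaced = tessellation_cost - absorbed

print(f"slack available       : {slack:.2f}")
print(f"absorbed by slack     : {absorbed:.2f}")
print(f"shader work displaced : {displaced:.2f} (naive claim: {tessellation_cost:.2f})")
```

With those invented figures only 0.35 of a 1.0 tessellation cost actually comes out of shader throughput - a decrease, but nowhere near the full cost being claimed.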
 
I know how it works, Rroff.

Here's the simple version:

You have x amount of work being done on a cluster, with y being the run time left over for tessellation duty (or any other duty). That free run time is then used for tessellation. If any additional run time is needed for tessellation, the next cluster is looked at, and perhaps whole clusters might also be assigned to it.
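
In code, that walk over the clusters looks something like this - a throwaway Python sketch where the function name, the cluster free times and the 150 units of tessellation work are all invented, purely to illustrate the greedy fill just described:

```python
# Toy illustration of the allocation described above - not real driver or GPU code.

def assign_tessellation(free_time_per_cluster, tessellation_needed):
    """Fill each cluster's free run time in turn; completely free clusters end up
    dedicated to tessellation. Returns ({cluster_index: time_used}, shortfall)."""
    assignments = {}
    remaining = tessellation_needed
    for cluster, free_time in enumerate(free_time_per_cluster):
        if remaining <= 0:
            break
        used = min(free_time, remaining)   # soak up the slack on this cluster first
        if used > 0:
            assignments[cluster] = used
            remaining -= used
    # A non-zero 'remaining' here would mean the free time alone wasn't enough,
    # i.e. existing shader work would have to be displaced to finish the job.
    return assignments, remaining

# y = free run time per cluster after the existing shader work x, in arbitrary
# time units (made-up numbers); clusters 2 and 3 are completely free, so they
# can be assigned to tessellation wholesale.
free_time = [5, 20, 100, 100]
assigned, shortfall = assign_tessellation(free_time, tessellation_needed=150)
print(assigned)    # {0: 5, 1: 20, 2: 100, 3: 25}
print(shortfall)   # 0 -> fits entirely into free time, no shader work displaced
```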

simples?

Now, how badly were people (esp. Charlie) saying tessellation would affect performance?
 
Exactly - you seem to have a good grasp of it - so you should be able to see for yourself where Charlie's claims were overboard, and hence why the 400 series won't suffer against the 5 series in the way they were making out.
 
Oh noes, a journalist (not a technician or engineer) doesn't understand the intricacies of graphics card technology.

Let's forget everything else true, or as his site says semi-accurate, that he has ever reported.

/s.
 

Whoa there, horsey! Of course he's not always right, but I would say a good 75% of what he writes is in the right ballpark, and to be honest I think his articles are actually entertaining and not simply press releases and such.

He's like a dog with a bone, and NVidia is his bone. He was pretty much spot-on with Fermi: it is a disappointment, not quite a disaster, though it was bloody close. I will still get a GTX480, but that sucker is getting watercooled.
 
Having followed SA for a long time, I don't see why the guy gets slated.

For months he was saying GF100 would run hot, need a lot of power, not be that much faster than ATI, be expensive, and have very low production numbers, making it unavailable for mass sale and profits. He also said it would be weeks after the reviews before the cards became available, and here we are 3 weeks later!

Getting a detail wrong - x watts of power draw, z number of shaders, etc. - doesn't prove that what he said was wrong, especially when Nvidia was still advertising the GPU with different specs up until a few months ago!

And don't forget, we haven't seen any GTX4x0 production cards on the market or tested yet.

I really wish all the "speculators" and "journalists" were only that far from the truth, instead of as far off as they usually are.

Kudos to him tbh.
 
Oh noes, a journalist (not a technician or engineer) doesn't understand the intricacies of graphics card technology.

Let's forget everything else true, or as his site says semi-accurate, that he has ever reported.

/s.
Indeed, he's a journalist, not an engineer. To be honest he's probably the best journalist in the GPU industry; he only gets the stick he does because some people don't like the scoops he gets.
 
Not everyone cares about bang for buck.
I know what you're saying there, my point was more about brand loyalty. In my eyes it is sheer stupidity. Reading through LOTS of threads on this forum (and others) leads me to believe that team Yellow (so that I don't get shot at dawn for favouritism) could release a rabid dog on us, and most yellow-team followers here would welcome it into the family home, sit it next to the kids and tell everyone it's a great dog really.
Just my tuppence worth on the matter.
 
Well, pure brand loyalty for brand loyalty's sake is stupid.

Personally I don't like ATI's drivers (HINT: I'm not implying here that they are good or bad), and I like the way nVidia have set up application profiles and other driver features, plus the ability to really tweak away with nHancer - amongst several reasons why I stick with nVidia as much as possible.
 
Indeed, he's a journalist, not an engineer. To be honest he's probably the best journalist in the GPU industry; he only gets the stick he does because some people don't like the scoops he gets.

+1, it's such a shame that more mainstream sites don't do any old-fashioned investigative journalism and instead simply post press statements made by corporations.
 
I must admit that after reading the SemiAccurate site I'm left worried and confused about the GPU market. As I've placed a pre-order for a GTX470 card, I'm also debating whether I made the wrong call and should have stuck with ATI and got a 5850, as it is tried and tested hardware.

Or do I keep my pre-order and get a waterblock for it?
 
I must admit that after reading the SemiAccurate site I'm left worried and confused about the GPU market. As I've placed a pre-order for a GTX470 card, I'm also debating whether I made the wrong call and should have stuck with ATI and got a 5850, as it is tried and tested hardware.

Or do I keep my pre-order and get a waterblock for it?

If I were in your situation, I would drop the pre-order and get a 5850.
 