Nvidia Shenanigans again?

The fact that I have no problem jumping from AMD to Nvidia and vice versa shows I am not a fanboy of either company. Did I **** off the 5870 when I got the 480? No, I don't think I did.

You actually did. You acted irrationally about it and went on a rant about how much better GTX 480s are and how much better they perform when you overclock them compared to a 5870, while blatantly ignoring the fact that your 480 had a higher percentage overclock than your 5870. In fact, it took quite a while to get you to realise that a 150MHz overclock on a 5870 can't be compared to 150MHz on a 480, as the 480's stock clock is lower than the 5870's. You went on quite a bit about how this 150MHz overclock on the 480 shamed the 150MHz overclock on the 5870, while being completely blind to how wrong you'd got it.
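To put actual numbers on it (using the reference clocks - 850MHz core on a 5870, 700MHz on a 480 - adjust if either card was factory overclocked):

```python
# Same absolute overclock, very different relative gain:
def oc_percent(stock_mhz, bump_mhz):
    """Overclock expressed as a percentage of the stock clock."""
    return 100.0 * bump_mhz / stock_mhz

print(f"HD 5870: +150MHz on 850MHz = {oc_percent(850, 150):.1f}%")  # ~17.6%
print(f"GTX 480: +150MHz on 700MHz = {oc_percent(700, 150):.1f}%")  # ~21.4%
```

So the 480 got roughly a fifth more clock from its 150MHz, versus about a sixth for the 5870.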
 
What we really need are some images from HAWX 2 showing performance and tessellation so we can see how big a deal it really is.
 
Although I will say that, considering just how many million polys/sec current cards can display, the excessively low-poly background hills in the HAWX 2 screens are done deliberately to advertise tessellation - i.e. if tessellation didn't exist, you'd have something that resembled a real mountain horizon, not something that wouldn't look out of place in a PS1 title.

This.

Seriously, it's just a ridiculous image comparison. If I didn't have tessellation enabled and I saw those background hills in the game, I'd be pretty annoyed - tessellation isn't an excuse to just throw us right back into the dark ages of videogame graphics and pretend that some fancy new technique is necessary for polygons to look remotely realistic.
 
No, of course we can; it's just that the benchmark is irrelevant in the real world.

And as for over-tessellating in games to the point of increasing the workload for no IQ benefit - that is just a complete waste of resources and it kills performance, which means we all then have to suffer lower FPS as a result of Nvidia's BS.

Well, if "very high" tessellation offers no visual benefit to you, then maybe you could set it to low, medium or high instead, and let others have as much as they wish?


I bet even in HAWX you can edit the tessellation amount via an INI, like you can blur, HDR, DOF etc. in other games.
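If it does, something like this would do it (the file name and key here are pure guesses on my part - check the game's config folder for whatever it actually exposes):

```python
# Hypothetical tweak: lower a tessellation setting in a game's INI.
# "graphics.ini" and "TessellationLevel" are invented names - the real
# file/key (if HAWX 2 exposes one at all) will differ.
import configparser

cfg = configparser.ConfigParser()
cfg.read("graphics.ini")  # silently skipped if the file doesn't exist

if not cfg.has_section("Render"):
    cfg.add_section("Render")
cfg.set("Render", "TessellationLevel", "medium")  # e.g. low/medium/high

with open("graphics.ini", "w", encoding="utf-8") as f:
    cfg.write(f)
```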
 
This.

Seriously, it's just a ridiculous image comparison. If I didn't have tessellation enabled and I saw those background hills in the game, I'd be pretty annoyed - tessellation isn't an excuse to just throw us right back into the dark ages of videogame graphics and pretend that some fancy new technique is necessary for polygons to look remotely realistic.

I agree - reminds me of Giants: Citizen Kabuto, and that's getting on for 10 years old.
 
Well, if "very high" tessellation offers no visual benefit to you, then maybe you could set it to low, medium or high instead, and let others have as much as they wish?


I bet even in HAWX you can edit the tessellation amount via an INI, like you can blur, HDR, DOF etc. in other games.

I don't think you quite understand the problem. It's not subjective, as in it doesn't improve the IQ for me personally; it's that it won't improve it for anyone. What Nvidia is obviously pushing for is to de-optimise tessellation for AMD as well as themselves, because that's the only technical advantage they have, but this is to the detriment of both AMD and Nvidia users.
The people defending Nvidia on this are either unaware of the technical reasons for de-optimised tessellation, or are fanboys in a similar category to Raven, because this form of tessellation hurts Nvidia users too, as it diverts resources that could otherwise be put to better use.

“Overall, the polygon size should be around 8-10 pixels in a good gaming environment”, said Huddy. “You also have to allow for the fact that everyone’s hardware works in quads. Both nVidia and AMD use a 2×2 grid of pixels, which are always processed as a group. To be intelligent, a triangle needs to be more than 4 pixels big for tessellation to make sense”.
Interesting enough, but why are we being told this? “With artificial tests like Stone Giant, which was paid for by nVidia, tessellation can be done down to the single pixel level. Even though that pixel can’t be broken away from the 3 other pixels in its quad. Doing additional processing for each pixel in a group of 4 and then throwing 75% of that work away is just sad”.

http://www.kitguru.net/components/graphic-cards/ironlaw/nvidia-offers-the-constant-smell-of-burning-bridges-says-amd/
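For anyone wondering where that 75% figure comes from: the rasteriser always shades pixels in 2×2 quads, so a triangle that ends up covering a single pixel still pays for four shader invocations. Back-of-the-envelope sketch (simplified - real hardware also wastes partial quads along every triangle edge):

```python
# Fraction of pixel-shading work thrown away when each triangle
# covers only `covered_px` pixels of its 2x2 quad.
QUAD_PIXELS = 4  # both nVidia and AMD shade pixels in 2x2 groups

def wasted_fraction(covered_px):
    """(Invocations discarded) / (invocations executed) per quad."""
    return (QUAD_PIXELS - covered_px) / QUAD_PIXELS

print(wasted_fraction(1))  # 0.75 -> single-pixel triangles: 75% wasted
print(wasted_fraction(4))  # 0.0  -> fully covered quad: nothing wasted
```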
 
Not true... nVidia is not pushing for the de-optimisation of tessellation... they are pushing the current stock implementation.

It's AMD that's pushing for an optimised - non-standard - implementation, and granted, it does have some substantial merits.

It would be beyond the scope of a post here to explain what's going on with ultra-high-resolution tessellation and the potential pitfalls of blindly using an adaptive process to avoid situations where you're mostly going sub-pixel. It's not as straightforward as the statement above would indicate: sometimes you can't avoid over-processing some parts to ensure you get proper coverage on other parts, because not doing so either results in IQ issues, or the alternative adaptive algorithm would cost more performance than it would save by not generating sub-pixel content.
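To give a rough idea of the shape of the problem, here's a minimal sketch of a screen-space adaptive scheme (illustrative only, not either vendor's actual algorithm): pick a tessellation factor from the projected edge length, clamped so triangles stay above a target pixel size. The catch is that shared patch edges must end up with identical factors or you get cracks, which is exactly where a naive adaptive approach falls over.

```python
# Minimal sketch: choose a tessellation factor so subdivided edges
# come out around TARGET_EDGE_PX pixels on screen, never sub-pixel.
TARGET_EDGE_PX = 8.0  # roughly the 8-10px triangles Huddy mentions
MAX_FACTOR = 64.0     # DX11 hardware tessellation factor limit

def edge_tess_factor(projected_edge_px):
    """Number of segments to split a patch edge into."""
    return max(1.0, min(MAX_FACTOR, projected_edge_px / TARGET_EDGE_PX))

print(edge_tess_factor(256.0))  # 32.0 -> large on-screen edge, subdivide
print(edge_tess_factor(4.0))    # 1.0  -> tiny edge, don't go sub-pixel
```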


I'd also like to see you expand on this:

The people defending Nvidia on this are either unaware of the technical reasons for de-optimised tessellation, or are fanboys in a similar category to Raven, because this form of tessellation hurts Nvidia users too, as it diverts resources that could otherwise be put to better use.

as I think you're still under the impression that tessellation on nVidia cards is done on the CUDA cores.
 
Not true... nVidia is not pushing for the de-optimisation of tessellation... they are pushing the current stock implementation.

So why does Stone Giant (which Nvidia paid for) tessellate all the way down to single pixels instead of, at the very least, groups of 4?

as I think you're still under the impression that tessellation on nVidia cards is done on the CUDA cores.

No, but are you telling me these "Polymorph" engines couldn't be put to better use than calculating stuff that can't be displayed on the screen?

P.S. If you're opting to continue to debate in this thread, then I would appreciate it if you could concede where it is obvious you are wrong, instead of wriggling, and of course I will do the same...
 
So why does Stone Giant (which Nvidia paid for) tessellate all the way down to single pixels instead of, at the very least, groups of 4?

Because that's how the current standard implementation of tessellation works - don't blame nVidia.


No, but are you telling me these "Polymorph" engines couldn't be put to better use than calculating stuff that can't be displayed on the screen?

Polymorph engines are fairly specialised in terms of functionality... Besides, as above, with the current implementation (not nVidia's choice), to ensure proper coverage on all aspects of the scene you sometimes have no choice but to go over the top. While I applaud AMD for apparently trying to optimise things, they seem more interested in using it to score political points than in actually making changes.

P.S. If you're opting to continue to debate in this thread, then I would appreciate it if you could concede where it is obvious you are wrong, instead of wriggling, and of course I will do the same...

I usually do when I'm actually wrong...
 
Well, that's not what I said, now is it? I didn't talk about fair. Extreme tessellation is pointless if it doesn't improve image quality; that's the point.

Of course nVidia will be pushing for extreme tessellation even if it doesn't improve image quality, because that's all they have going for them at the moment. Turning tessellation up to 11 is pointless if you can't see any differences beyond 2. nVidia don't care about that.

Just like the extreme PhysX calculations, so that it runs poorly on CPUs and on anything but mid-to-high-end GPUs.
 
I don't think you quite understand the problem. It's not subjective, as in it doesn't improve the IQ for me personally; it's that it won't improve it for anyone. What Nvidia is obviously pushing for is to de-optimise tessellation for AMD as well as themselves, because that's the only technical advantage they have, but this is to the detriment of both AMD and Nvidia users.
The people defending Nvidia on this are either unaware of the technical reasons for de-optimised tessellation, or are fanboys in a similar category to Raven, because this form of tessellation hurts Nvidia users too, as it diverts resources that could otherwise be put to better use.

“Overall, the polygon size should be around 8-10 pixels in a good gaming environment”, said Huddy. “You also have to allow for the fact that everyone’s hardware works in quads. Both nVidia and AMD use a 2×2 grid of pixels, which are always processed as a group. To be intelligent, a triangle needs to be more than 4 pixels big for tessellation to make sense”.
Interesting enough, but why are we being told this? “With artificial tests like Stone Giant, which was paid for by nVidia, tessellation can be done down to the single pixel level. Even though that pixel can’t be broken away from the 3 other pixels in its quad. Doing additional processing for each pixel in a group of 4 and then throwing 75% of that work away is just sad”.

http://www.kitguru.net/components/graphic-cards/ironlaw/nvidia-offers-the-constant-smell-of-burning-bridges-says-amd/

The same waste was happening with Batman: AA - most of the AA calculations were done on ATi cards, but the final AA render output was left out.
 
The fact that I have no problem jumping from AMD to Nvidia and vice versa shows I am not a fanboy of either company. Did I **** off the 5870 when I got the 480? No, I don't think I did.

You slated the GTX 480, Raven. Slated it. And now you own one.

You are absolutely chock-a-block full of bile. You are seen as just about the most impressionable, two-faced liar on these forums today. How many people do you think wouldn't agree with this statement?

RavenXXX2 said:
Fastest GPU is the 5970, 480 is marginally better than a 5870 in most cases and that's just sad, if fanbois want to claim it is a win, fine, go play with your 95c at load, 300 watt power draining embarrassment.

That is probably the single most damning post you made prior to actually buying the card. That literally does say it all, and as for you claiming to have changed your ways since buying the 480... well, no. You are very happy to remind everybody that the nVidia cards have that much better performance with tessellation, but you neglect to mention it's tessellation at such a high level that it's virtually indistinguishable. That's a bit like me pushing ATi cards because they are so much better at 128x AA... it's nonsense.

Don't try to claim you've changed, Raven. Everybody here can see straight through you.
 
Yeah, and? I didn't own a 480 when I made those comments months back, and since then I have already said I won't slate a card unless I've had first-hand experience of it "yeah, go through my posts, sure you will find it", so please jog on with your abusive comments.

Do you even own a gaming card? You just lurk around this forum posting crap like the above.
 
Me? lol. Keep trying. I own a 5850, btw.

Do you even own a gaming card? You just lurk around this forum posting crap like the above.
Crap like what? Pointing out the amazing inconsistencies of your posting? The flat-out lies you come out with? If you don't like it, you could, you know... not do it?

If you want an example of somebody who really doesn't take sides, then you are talking to such a person right now. Ask me anything - you might learn something.

and since then I have already said I won't slate a card unless I've had first-hand experience of it

You haven't owned a 6-series Radeon... so why are you slating its tessellation performance? Or are we OK to slate the manufacturer but not the cards now? I guess that's your get-out-of-jail card, is it?
 
You slated the GTX 480, Raven. Slated it. And now you own one.

You are absolutely chock-a-block full of bile. You are seen as just about the most impressionable, two-faced liar on these forums today. How many people do you think wouldn't agree with this statement?



That is probably the single most damning post you made prior to actually buying the card. That literally does say it all, and as for you claiming to have changed your ways since buying the 480... well, no. You are very happy to remind everybody that the nVidia cards have that much better performance with tessellation, but you neglect to mention it's tessellation at such a high level that it's virtually indistinguishable. That's a bit like me pushing ATi cards because they are so much better at 128x AA... it's nonsense.

Don't try to claim you've changed, Raven. Everybody here can see straight through you.

That reads as very bitter and a bit over the top, mate. It sounds like you have personal issues.
 
lol, he's bringing up posts from way back in April, which I have since said I was wrong to make; I guess some just have a beef for whatever reason. Best to ignore "as I am from now on" or RTM if you feel the posts are abusive.
 