Is there any truth that AMD cards are bad at Tessellation

Here's my contribution, with CrossFire enabled: cards at stock, just with a non-throttling fan profile. [email protected]

Tessellation disabled:

[screenshot: Heaven result, tessellation disabled]

Tessellation extreme:

[screenshot: Heaven result, tessellation extreme]

So, taking the average fps, the increase is 34.6%.

That seems to be in line with your 290X results, Kaap, and is lower than the Titan's increase.
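In case anyone wants to check the maths on that percentage, here's a rough Python sketch of how it falls out. The fps values below are just placeholders (the real min/avg/max are in the screenshots above), not my actual results:

```python
# Hypothetical numbers purely to show how the percentage is worked out;
# the real averages come from the Heaven screenshots, not these values.
avg_fps_tess_extreme = 52.0   # assumed average fps, tessellation extreme
avg_fps_tess_off = 70.0       # assumed average fps, tessellation disabled

increase = (avg_fps_tess_off - avg_fps_tess_extreme) / avg_fps_tess_extreme * 100
print(f"fps gained by disabling tessellation: {increase:.1f}%")
# These placeholder values happen to give ~34.6%, the same figure quoted above.
```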
 
AMD excuses for getting their behinds tanned by Nvidia :p

Ummm... I have a GTX 670. I don't care about brand, just about whether something is valid and benefits me as the user or not! :D:p

mmj uk said:
AMD's implementation of optimising for low levels is a short term mentality whereas NVidia are looking to the future.

You hear people banging on about AMD cards being more future proof due to extra VRAM all the time, but when NVidia have GPUs that can do amazing amounts of tessellation, it gets downplayed?

But the difference is that the VRAM can and actually will be used (it is already being utilised in some circumstances), whereas the extra throughput here applies to levels of tessellation that are just not useful, so how is that a short-term mentality? As described earlier in the thread, there is a point beyond which adding tessellation to objects/models stops being useful, because you cannot actually see the additional triangles; they are too small.

If the number of triangles generated by USEFUL tessellation grows far enough at 4K, and AMD struggle at that resolution with useful, visible levels of tessellation, then I will concede that Nvidia have future-proofed whereas AMD have perhaps been short-sighted. But is that actually the case? Is it that they are 4K-proofing the amount of tessellation processing that will be needed, or have they simply thrown in massive amounts of sub-pixel tessellation capability as a way to rig benchmarks rather than to benefit the user? In the latter case the point about a 'joker card' feature still stands. And that holds regardless of whether Nvidia 'persuade' developers to use these levels of tessellation to benefit themselves, because that is no different from a title being optimised/crippled for one party by any other method, once you get down to what it means for the user.

And if it doesn't change the result at 4K, then you have to ask: if this is future-proofing, surely the rest of the card will be too feeble by then to deliver useful framerates anyway, with or without this tessellation throughput. So beyond benchmarks, what is the point of the extra silicon, which at the end of the day indirectly costs us more as consumers through the additional die space and power it requires?
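To put some very rough numbers on the sub-pixel point, here's a back-of-envelope Python sketch. The patch count, the on-screen coverage and the roughly factor-squared triangle scaling are all assumptions for illustration only, not measurements from Heaven or any real game asset:

```python
# Back-of-envelope sketch of the "sub-pixel triangle" argument.
# All inputs are made-up assumptions, not measured data.

def pixels_per_triangle(screen_coverage_px, base_patches, tess_factor):
    """Rough pixels covered by each output triangle.

    A triangle patch tessellated uniformly at factor N is split into
    roughly N*N smaller triangles, so triangle count scales with the
    square of the factor.
    """
    triangles = base_patches * tess_factor ** 2
    return screen_coverage_px / triangles

coverage_1080p = 400 * 400          # assumed on-screen area of a model at 1080p
coverage_4k = coverage_1080p * 4    # same framing at 4K covers ~4x the pixels
patches = 500                       # assumed base mesh patch count

for factor in (8, 16, 32, 64):
    at_1080p = pixels_per_triangle(coverage_1080p, patches, factor)
    at_4k = pixels_per_triangle(coverage_4k, patches, factor)
    print(f"factor {factor:>2}: ~{at_1080p:.2f} px/tri @1080p, ~{at_4k:.2f} px/tri @4K")

# With these assumptions, 4K raises the useful limit a little, but factor 64
# still pushes triangles well below a pixel, i.e. the extra geometry is invisible.
```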
 
I have heard many times that AMD cards are bad at tessellation, but this does not seem to be the case, for the Hawaii GPUs anyway.

But yes, it does beg the question of what is holding the 290s back in Heaven.

You're right Kaap, it's not the case.

It was the case with the Radeon HD 6000 series compared with Fermi, and you know how it goes: like Chinese whispers, that generation gets lost in context until it's just "AMD", and the same thing gets repeated through the ages without people even knowing what they are talking about.
 
Makes me wonder if half of you even read the GameWorks article then. Some of you honestly need to stick to your guns just once.

Never mind the fact I linked TessMark results, but a bench dedicated to testing tessellation is obviously of no interest...

Although the fact AMD is still ropey in OpenGL probably has some elaborate reasoning behind it too, no doubt. It's nowhere near as bad as it used to be, but it's definitely still flagging.

http://www.extremetech.com/extreme/...rps-power-from-developers-end-users-and-amd/2


[chart: TessMark results]
 
Apologies if I have missed something or am just being stupid but ...

How are Kaapstad's 4 Titans and a 6-core only scoring about 80-90 more than my stock i5 2500 and single 290 with tessellation on extreme, and only 5 more avg fps?
 
Apologies if I have missed something or am just being stupid but ...

How are Kaapstad's 4 Titans and a 6-core only scoring about 80-90 more than my stock i5 2500 and single 290 with tessellation on extreme, and only 5 more avg fps?

That is only 1 of his Titans, not all 4.
 
Makes me wonder if half of you even read the GameWorks article then. Some of you honestly need to stick to your guns just once.

Never mind the fact I linked TessMark results, but a bench dedicated to testing tessellation is obviously of no interest...

Although the fact AMD is still ropey in OpenGL probably has some elaborate reasoning behind it too, no doubt. It's nowhere near as bad as it used to be, but it's definitely still flagging.

http://www.extremetech.com/extreme/...rps-power-from-developers-end-users-and-amd/2

Who are you attacking and why?

You're just being randomly aggressive, what is your problem?

It's nothing to do with tessellation, good grief, get over it.
 
That is only 1 of his Titans, not all 4.

Ahhh, so I did miss something, lol. I saw x4 and figured it was running quad SLI. I'm within 10% of his bench in score and FPS, which would be about right for the 290? I remember before I bought it, it was apparently only 10% off the Titan, which was a real selling point.
 
Who are you attacking and why?

You're just being randomly aggressive, what is your problem?

It's nothing to do with tessellation, good grief, get over it.


No reason. Only that Joel's article clearly shows that tessellation performance is crippled by using excessive tessellation factors, and these TessMark results are just being completely overlooked. A tessellation... benchmark :D


[chart: Heaven results]


[chart: TessMark results]
 
No reason. Only that Joel's article clearly shows that tessellation performance is crippled by using excessive tessellation factors, and these TessMark results are just being completely overlooked. A tessellation... benchmark :D


[chart: Heaven results]


[chart: TessMark results]


Kaap just ran a Heaven DirectX tessellation test and found Nvidia loses more performance than AMD with tessellation; in other words, AMD have no problem running tessellation.

TessMark is OpenGL. AMD have done zero optimisation for OpenGL, they have done zero work with it, especially with the drivers that test was run on.

You actually know this. TessMark being OpenGL proves absolutely nothing, so why are you using it to try to ram home something you know by now does not exist? Because you insisted on it in the first place? You're not always right; there's no shame in that. Don't take it so personally.
 
So the ExtremeTech article is in fact way off base in criticising the tessellation performance? Because Joel realised that when it is used in excess, i.e. 64x, AMD cards fall short.

AMD's drivers are crap in OpenGL, yes. But you could argue that both Heaven and TessMark are not valid examples of tessellation performance. The only reason it is so crap in OpenGL is that AMD haven't felt it necessary to adapt their drivers, and as a result only a single pipeline is used to process it. You could argue that's crap in itself, surely. Although, if we were being pedantic, AMD have released plenty of OpenGL modifications in drivers gone by.

[chart: Cape Benchmark, standard tessellation]


Maybe people should also try the PLA benchmark instead of relying on Heaven alone.

Metro is another example, especially considering it's a game.
 
So the ExtremeTech article is in fact way off base in criticising the tessellation performance? Because Joel realised that when it is used in excess, i.e. 64x, AMD cards fall short.

AMD's drivers are crap in OpenGL, yes. But you could argue that both Heaven and TessMark are not valid examples of tessellation performance. The only reason it is so crap in OpenGL is that AMD haven't felt it necessary to adapt their drivers, and as a result only a single pipeline is used to process it. You could argue that's crap in itself, surely. Although, if we were being pedantic, AMD have released plenty of OpenGL modifications in drivers gone by.

[chart: Cape Benchmark, standard tessellation]


Maybe people should also try the PLA benchmark instead of relying on Heaven alone.

Yes, they can't separate tessellation from the game; they are just assuming Nvidia did what they apparently did in Crysis 2, i.e. bump up tessellation to unnecessary levels to hurt AMD.

What they forget is that this was done in the HD 6000 era, and it is true the HD 6000 series suffers with tessellation; but if you go back now and look at Crysis 2 reviews of the HD 7000 vs Kepler, the HD 7000 comes out on top.

You're right Kaap, it's not the case.

It was the case with the Radeon HD 6000 series compared with Fermi, and you know how it goes: like Chinese whispers, that generation gets lost in context until it's just "AMD", and the same thing gets repeated through the ages without people even knowing what they are talking about.
 
Yes, they can't separate tessellation from the game; they are just assuming Nvidia did what they apparently did in Crysis 2, i.e. bump up tessellation to unnecessary levels to hurt AMD.

What they forget is that this was done in the HD 6000 era, and it is true the HD 6000 series suffers with tessellation; but if you go back now and look at Crysis 2 reviews of the HD 7000 vs Kepler, the HD 7000 comes out on top.

I don't think many people care about tessellation on older tech cards, only new ones.
 