Well, like most things, sometimes it does and sometimes it doesn't.
Whatever is faster on the GPU should be done on the GPU, as long as it has the resources free to do it; otherwise it belongs on the CPU.
I'm not sure why people are scared of polygons.
Ever since the first T&L cards, GPUs have been capable of handling millions of polys. A few polys more or less really isn't the issue; they are often 'free', because the overhead of setting up each draw call dominates. A call to draw 1 polygon is no faster than a call to draw 1000 polygons.
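To make that concrete, here's a minimal sketch of the batching point, assuming a D3D11 setup with the vertex buffer and shaders already bound (the function and variable names are mine, not from any particular engine):

```cpp
#include <d3d11.h>

// Hypothetical illustration of draw-call overhead: both paths submit the
// same triangles from an already-bound vertex buffer, but the batched path
// issues one call instead of thousands.
void DrawTriangles(ID3D11DeviceContext* ctx, UINT triCount, bool batched)
{
    if (batched)
    {
        // Fast: a single call covers all triangles; the GPU hardly notices
        // the extra vertices, but the CPU saves triCount - 1 submissions.
        ctx->Draw(triCount * 3, 0);
    }
    else
    {
        // Slow: one draw call per triangle - per-call CPU setup dominates.
        for (UINT i = 0; i < triCount; ++i)
            ctx->Draw(3, i * 3);   // 3 vertices, starting at triangle i
    }
}
```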
3DMark2001 already included a high-polygon test which rendered 1M polygons, and a high-end GeForce2 would churn through it with little trouble. Just because games never pushed the hardware anywhere near its capabilities doesn't mean you should be afraid of it.
In-game there are varying degrees of settings for AA & AF and for tessellation levels, and drivers also offer override options for AA & AF. I see no reason why tessellation shouldn't be a driver override option for people who want it, regardless of AMD's possible motives. It's really no different to AA/AF levels: many people can't use maxed-out AA/AF either, and what levels are playable will vary with the brand and model of card, the game, and the resolution. Yes, there could be side effects with a driver override for tessellation, but I haven't seen any comments from users of the option suffering from any up till now.
Which is where I as a developer would disagree. I design the application, I make the choices of what level of detail is applied where, when and how. There is no need for a driver to override any of my choices, because I give the user plenty of choice already. The only driver features that are left are hacks and cheats, and those should be banned. I don't want AMD, nVidia or anyone else tweaking my applications behind my back. They need to be run as they were designed, not as some random driver hacker thinks they should look.
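For illustration, this is roughly what 'giving the user the choice' looks like on the application side; a minimal sketch, assuming the result gets uploaded to a constant buffer that the hull shader reads (the names and the quality-to-factor mapping are made up):

```cpp
#include <algorithm>

// Hypothetical example: the application, not the driver, maps a user-facing
// quality setting to the maximum tessellation factor it will request.
enum class TessQuality { Low, Medium, High, Ultra };

float MaxTessFactor(TessQuality q)
{
    switch (q)
    {
        case TessQuality::Low:    return 4.0f;   // illustrative values only
        case TessQuality::Medium: return 12.0f;
        case TessQuality::High:   return 32.0f;
        case TessQuality::Ultra:  return 64.0f;  // DX11's upper limit
    }
    return 1.0f; // factor 1 = no subdivision
}

// Distance-based LOD, clamped to the ceiling the user picked in the menu.
float TessFactorForPatch(float distanceToCamera, TessQuality q)
{
    float f = 64.0f / std::max(distanceToCamera, 1.0f); // closer = more detail
    return std::min(std::max(f, 1.0f), MaxTessFactor(q));
}
```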
Also, on my blog I linked to some images that clearly showed rendering issues when enabling AMD's tessellation 'optimization' in the driver:
http://forums.anandtech.com/showpost.php?p=31992121&postcount=89
They limit tessellation in a not-so-clever way, I might add... resulting in things like brick walls turning into weird pyramid-shaped bricks. If I tessellate a brick wall as a developer, I want the user to see bricks! Not weird pointy thingies!
While both techs have impressive possibilities, what's actually put on the table is what counts and is judged on those merits, and I could go without until they are used as they should be. DX11 itself is sometimes used just as badly.
Well, as a developer I obviously have a different view on things. I need proper hardware support so I can use DX11/tessellation/etc. as they were meant to be used. Endless City is pretty much the only example that uses tessellation the way it should be used. And that is exactly the scenario where Radeons still fall apart.
I mean, sure, the 5000 series was the first DX11 hardware. So AMD didn't get it right the first time, fair enough... Then the 6000 series came... and AMD still didn't get it right... hrm...
But then they come out with the 7000 series, an entirely new architecture (Graphics Core Next, yay!), and they STILL have the same lousy tessellator they've been peddling for years. Not acceptable, people! nVidia's tessellator from two generations ago is still much faster than AMD's latest at the tessellation ranges where it matters. There's a reason why DX11 was designed with a range of tessellation factors from 1 to 64: AMD can only handle roughly 1 to 10 properly, then performance drops off exponentially, making everything higher excruciatingly slow and useless. I can't sell that to the public when their super-duper 7970 card performs worse than a cheapo GTX560Ti... but that's what happens.
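To put that factor range in perspective, a quick back-of-the-envelope sketch (the quadratic growth is basic tessellation math; exact counts depend on domain and partitioning mode, so treat the numbers as approximate):

```cpp
#include <cstdio>

// With integer partitioning, a triangle patch at tessellation factor F
// subdivides into roughly F*F triangles, so the workload grows
// quadratically across DX11's factor range of 1..64.
int main()
{
    const int factors[] = { 1, 4, 10, 16, 32, 64 };
    for (int f : factors)
        std::printf("factor %2d -> ~%4d triangles per patch\n", f, f * f);
    // A tessellator that is only fast up to factor ~10 covers only a thin
    // slice of the workload the API lets a developer request.
    return 0;
}
```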