If it works "as advertised" then there really shouldn't be any downside to it. Peak framerates will suffer (and perhaps averages too, as a consequence), but performance should be more consistent, and more significantly, power consumption will be reduced.
The downside will come if it's poorly implemented - e.g. if it downclocks while framerates are still fairly low, or if it fails to spin back up for whatever reason. There are plenty of potential issues with the implementation, so I guess we just need to wait and see how it performs in the real world.
In theory it should act like a "reverse powertune" - reducing power draw when the extra GPU horsepower isn't needed, rather than capping it at a fixed maximum. It'll be interesting to see how well it works... With power draw becoming more of a limiting factor with every process shrink, I think we'll continue to see more advanced power containment systems from both Nvidia and AMD.
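Just to make the idea concrete, a governor like that might look something like this in toy form - purely a sketch, with made-up function names and clock steps, not anything from either vendor's actual driver:

```python
# Toy sketch of a framerate-targeted downclock governor.
# TARGET_FPS, CLOCK_STEPS, and next_clock() are all hypothetical
# stand-ins for whatever the real driver exposes internally.

TARGET_FPS = 60
CLOCK_STEPS = [300, 600, 900, 1200]  # MHz, illustrative values only

def next_clock(current_mhz, measured_fps, target_fps=TARGET_FPS):
    """Drop one clock step when there's clear headroom; climb a step
    as soon as we fall short of target, so the GPU can't get stuck
    downclocked (the 'fails to spin back up' failure mode)."""
    i = CLOCK_STEPS.index(current_mhz)
    if measured_fps > target_fps * 1.1 and i > 0:
        return CLOCK_STEPS[i - 1]   # well above target: downclock
    if measured_fps < target_fps and i < len(CLOCK_STEPS) - 1:
        return CLOCK_STEPS[i + 1]   # falling behind: spin back up
    return current_mhz              # close to target: hold steady
```

The hysteresis band (downclock only above 1.1x the target, upclock as soon as we dip below it) is where a bad implementation would show - too aggressive on the way down, or too slow on the way up, and you get exactly the stutter people worry about.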