I believe those figures are for the whole system.
As said, it's faster than the GTX 480 while maintaining the same power consumption, so it's rather impressive really.
Ish. The overall problem is that fusing off parts of a core very often doesn't actually stop all the power normally drawn by those areas.
If you look at the top-down architecture diagrams of the GF100/GF110 (which are identical, by the way; Nvidia's architecture diagrams are the same for both), you'll see that if you disable one SIMD, there's a lot of shared logic each GPC uses that's still "turned on". One GPC just has one less SIMD working: its raster engine draws the same power while doing 25% less work. Likewise, the stuff you fuse off still suffers from leakage (though to a hugely lower degree).
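To make the shared-logic point concrete, here's a toy power model. All the wattages and unit counts below are made-up illustrative assumptions, not measurements of any real card; the point is just how little disabling one SIMD out of many saves when the per-GPC fixed logic stays fully powered and fused-off silicon still leaks:

```python
# Toy chip power model with purely illustrative numbers.
SHARED_PER_GPC = 20.0   # W, assumed fixed cost per GPC (raster engine, scheduling)
PER_SIMD_ACTIVE = 8.0   # W, assumed draw per active SIMD
PER_SIMD_LEAK = 1.0     # W, assumed residual leakage of a fused-off SIMD

def chip_power(gpcs: int, simds_per_gpc: int, fused_off: int) -> float:
    """Total power: shared GPC logic + active SIMDs + leakage of fused-off SIMDs."""
    total_simds = gpcs * simds_per_gpc
    active = total_simds - fused_off
    return (gpcs * SHARED_PER_GPC
            + active * PER_SIMD_ACTIVE
            + fused_off * PER_SIMD_LEAK)

full = chip_power(4, 4, 0)   # fully enabled GF110-style part
cut = chip_power(4, 4, 1)    # GF100-style part with one SIMD fused off
print(f"fully enabled: {full:.0f} W, one SIMD off: {cut:.0f} W "
      f"({1 - cut / full:.1%} saved)")
```

With these assumed numbers, fusing off one of sixteen SIMDs only shaves a few percent off total draw, which is the shape of the argument above.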
The only reason the GF100 didn't come with 512 shaders was that yields of fully working 512-shader parts were too low to release it. They could probably have raised yields by increasing the voltage massively, which would again have made it unreleasable.
If you take a GF110/GTX 580 at the same clocks, disable one SIMD and keep the same voltages, power draw wouldn't drop off that much.
The biggest difference between the GTX 470 and 480 power-wise is the roughly 100MHz lower clock and the lower voltage, not the one extra disabled SIMD.
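The clock-and-voltage point follows from the standard first-order CMOS dynamic power relation, P ≈ C·f·V². The voltage values below are assumptions picked for illustration (only the ~700 vs ~607 MHz core clocks are the real spec figures), but they show how much a modest clock and voltage drop cuts dynamic power:

```python
# Dynamic CMOS power scales roughly as P ∝ C * f * V^2.
def relative_dynamic_power(freq_mhz: float, volts: float) -> float:
    """Dynamic power up to a constant capacitance factor."""
    return freq_mhz * volts ** 2

p_480 = relative_dynamic_power(700, 1.00)  # GTX 480 clock; voltage is an assumption
p_470 = relative_dynamic_power(607, 0.95)  # GTX 470 clock; voltage is an assumption

# ~100MHz less clock plus a small voltage drop already cuts dynamic
# power by over 20%, dwarfing the effect of one fused-off SIMD.
print(f"470 draws ~{p_470 / p_480:.0%} of the 480's dynamic power")
```

Under these assumed voltages the 470 lands at roughly 78% of the 480's dynamic power before the missing SIMD is even counted.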
Bah, it's hard to explain. Basically what I'm saying is that the GTX 580 isn't some massive step forward in power efficiency; the increased efficiency simply comes from being fully enabled rather than from being designed to be low power.
As Raven said, it makes very little real-world difference for 99% of home users. Those who fold, do near-constant GPGPU work, or game 24/7 (so WoW addicts) might see the difference. Even then, cost-wise it's not much, but room-heat-wise it really can matter.