Well, it's efficiency - performance per transistor - we're talking about here, so if they're increasing the shader count and including these enhancements, we can expect a bigger performance increase than if they were simply adding more shaders. Bear with me here:
Most vector data types in high-level shader languages have 4 elements or fewer (float4 in HLSL and vec4 in GLSL being the widest such types). Ignoring driver optimisations that attempt to work around this (by, for example, packing 5 scalar additions into one 5-way vector addition - which has its own overhead in terms of CPU time), each 5-way group of shaders can therefore only work on up to 4 pieces of data at any one time. That collection of 'fifth' shaders goes to waste most of the time. Say you were to remove each of those 'fifth' shaders from the 5870: you'd be left with 1280 shaders, right? But most of the time (for simplicity, let's ignore that we've just lost the ability to perform transcendental operations, which is what those fifth units handle) there shouldn't be a performance hit. If we then add 4-way groups to bring the total back up to 1600 shaders, we now have the equivalent of 25% more shader throughput than the 5870, since 1600/1280 = 1.25.
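To make that concrete, here's a rough back-of-the-envelope sketch in Python. The shader counts are the real 5870 figures from above; treating the workload as purely vec4-bound (and ignoring transcendentals and driver packing tricks) is the same simplification I just made:

```python
# Back-of-the-envelope sketch of the VLIW5 vs VLIW4 utilisation argument.
# Simplifying assumption: the workload is entirely vec4-or-narrower, so at
# most 4 of the 5 lanes in a 5-way group ever do useful work, and the
# fifth (transcendental) lane sits idle.

VEC_WIDTH = 4  # widest common shader type: float4 / vec4

def useful_shaders(total_shaders: int, lanes_per_group: int) -> int:
    """Shaders doing useful work per cycle on a vec4-bound workload."""
    groups = total_shaders // lanes_per_group
    return groups * min(lanes_per_group, VEC_WIDTH)

# 5870: 1600 shaders in 5-wide groups -> only 4 of every 5 lanes busy
hd5870_useful = useful_shaders(1600, 5)   # 320 groups * 4 = 1280

# Hypothetical 4-wide part with the same 1600 shaders: all lanes busy
vliw4_useful = useful_shaders(1600, 4)    # 400 groups * 4 = 1600

print(hd5870_useful, vliw4_useful)        # 1280 1600
print(vliw4_useful / hd5870_useful)       # 1.25 -> 25% more throughput
```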
Now what about our '6870' with 1920 shaders of the 4-way type? That's 20% more shaders than the 5870, and in most cases each should deliver 25% more throughput. The maths isn't hard: 1.2*1.25 = 1.5, or 50% more shader throughput overall. The same works for the 6770 against the earlier performance claims involving the 5850: 960/800 = 1.2, so 20% more shaders than the 5770, multiplied by our efficiency increase, gives 50% again. That ties in with the claims of the 6770 being about the speed of, or slightly faster than, the 5850 (a quick glance at TechPowerUp's 5770 review shows the 5850 to be about 43% faster than the 5770).
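And the same arithmetic applied to the rumoured parts - again just a sketch, with the rumoured shader counts and the 43% figure taken from above:

```python
# Applying the efficiency argument to the rumoured 4-wide parts.
# Shader counts for the '6870' and '6770' are rumoured, not confirmed.

EFFICIENCY_GAIN = 1600 / 1280  # 1.25, from the sketch above

# '6870' (1920 4-wide shaders) vs 5870 (1600 5-wide shaders)
print((1920 / 1600) * EFFICIENCY_GAIN)   # 1.2 * 1.25 = 1.5

# '6770' (960 4-wide shaders) vs 5770 (800 5-wide shaders)
print((960 / 800) * EFFICIENCY_GAIN)     # 1.2 * 1.25 = 1.5

# TechPowerUp's 5770 review puts the 5850 at ~43% faster than the 5770,
# so a 6770 at +50% would land slightly ahead of the 5850.
print(1.50 / 1.43)                       # ~1.05
```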
I'm unconvinced this update is just about cutting costs - if it were, the 6770 wouldn't have a costlier 256-bit memory interface. In my opinion, what they're trying to do is cement their dominance over Nvidia for the Christmas season.