The original spec was... by the time it was actually working hardware it was barely competitive with the 7800 GTX. Unfortunately, no matter how good the hardware or how much money they throw at it, Intel simply won't succeed in this area without a massive shift in approach and mindset.
Unfortunately, the mindset they adopted was Nvidia's, albeit with vast differences: neither design is parallel enough, and both rely on dies that are too big for the efficiency they deliver. AMD is only held back by software and programmability. If games were all geared to use a 5870 fully, or a 4870 for that matter, Nvidia would be in serious trouble. Even without perfect programming, AMD are massively ahead on profitability, margins, manufacturability, and really everything that makes a GPU competitive in a business sense. If software could use the full shader complement on every clock, it would quite literally wipe the floor with anything Nvidia have, by miles. The problem, of course, is that a hardware design which is almost impossible to drive at full efficiency is a huge risk in itself, while Nvidia's design makes it fairly easy to extract the full power.
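To put some rough numbers on that "full shader complement" point, here's a minimal back-of-the-envelope sketch. It assumes the commonly cited 5870 layout of 1600 ALUs arranged as 320 five-wide VLIW units at 850 MHz; the function name and the 3.5-slot "typical" occupancy figure are illustrative assumptions, not measured data.

```python
# Sketch: effective single-precision throughput of a VLIW5 GPU as a
# function of how many of the 5 slots the compiler fills per clock.
# Assumed figures: 320 VLIW units, 850 MHz, 2 flops per ALU per clock (MAD).

def effective_gflops(vliw_units, clock_ghz, slots_filled):
    """Throughput in GFLOPS at a given average VLIW slot occupancy."""
    return vliw_units * slots_filled * clock_ghz * 2

# Full occupancy (5/5 slots filled): the headline peak number.
peak = effective_gflops(320, 0.85, 5.0)      # 2720 GFLOPS
# A hypothetical 3.5/5 average occupancy on real shader code.
typical = effective_gflops(320, 0.85, 3.5)   # 1904 GFLOPS

print(peak, typical)
```

The gap between those two numbers is exactly the "software and programmability" problem: the hardware is there every clock, but the peak only materialises when the compiler can keep all five slots busy.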
That approach, however, has clearly been the winner for the past 18 months: performance, price, manufacturability, yields, the move to new processes, and so on. I've said for over a year that Nvidia HAVE TO, without question, head for an AMD-style GPU core: hugely smaller, properly manufacturable, designed for lower clock speeds, less leakage, better yields, and higher efficiency. The upside of Nvidia doing that is that all devs would then have a reason to spend more time coding in a way that gets better efficiency from the hardware; the downside is it would be harder for a while to get that extra performance. Nvidia simply can't compete in a business sense, and seemingly not in a performance sense either, without doing so.
Larrabee isn't far off Fermi in reality, except that because of its design it needs far higher clocks with fewer cores, versus Fermi with more cores and lower clocks. Intel thought they could hit those higher clocks on their own superior processes, which was plausible, but it seems to have run into a somewhat NetBurst reality: it just can't hit the clock speeds required to be truly competitive.
The same goes for Fermi though: it needs clock speed X to be competitive, but it has only hit X minus 25-30%.
Frankly, its design will only worsen its problems at 28nm. Yes, Nvidia will add vias to account for TSMC's awful process and for variable transistor size, but both will increase an already 60% larger die, on a process that's likely going to lose out by anywhere from 20-40% in die size against an identical transistor count on GloFo's process.
If AMD can also shed the extra die area it spent working around TSMC's 40nm problems, and if both the GF100 and the 5870 roughly double in transistor count for the next generation, the new Nvidia core could be up to twice the size of AMD's new core, yet offer similar or maybe even less power.
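The "up to twice the size" claim falls straight out of the figures above. Here's the arithmetic as a sketch, taking the post's own numbers at face value (a ~60% larger Fermi die, and a TSMC process assumed to produce 20-40% larger dies than GloFo's for the same transistor count); these are the post's estimates, not measured die data.

```python
# Rough arithmetic behind the die-size claim, using the post's figures.
# Baseline: GF100 die is ~1.6x the size of the 5870's on the same process.
nvidia_relative_die = 1.60

# Assumed TSMC-vs-GloFo penalty: the same transistor count comes out
# 20-40% larger on TSMC. Both chips roughly doubling transistor counts
# cancels out of the ratio, so only the process penalty scales it.
ratios = [nvidia_relative_die * (1 + penalty) for penalty in (0.20, 0.40)]

print(ratios)  # roughly 1.9x at the low end, 2.2x at the high end
```

So even at the kind end of the assumed process gap, the next Nvidia core lands at roughly double AMD's die area, which is where the margin and yield pain comes from.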
Nvidia made the exact mistakes Intel made with Larrabee, which is hilarious given that Dear Leader has laughed at everything Larrabee was doing wrong for the past two years.
Larrabee has instruction-level efficiency, better processes to draw on, more money to invest, and no shrinking market where the R&D will disappear. Intel also have a track record of learning from their mistakes, unlike Nvidia.
Intel also only started buying up small firms and design teams likely to help with future GPU designs last year. Obviously that couldn't feed into a 2009/10 Larrabee mark 2 that quickly, but they are getting the right people and teams to take Larrabee forward into something good in the future. Nvidia, meanwhile, show no signs of adjusting to the industry and manufacturing realities around them, which could ultimately be their end.