480mm2 probably won't be that much larger than Cayman...
The problem is that yields go down roughly exponentially as die size increases. Come in before the curve gets mental and you're fine; come in past it, or even borderline, and you can wave bye bye to your profits.
Profits are on the opposite exponential curve.
If a wafer is still $5k and you get 10 working chips, they cost $500 each; get 20 working and that halves to $250; get 40 working and you're down to $125; get 80 working and it's $62.50 a chip.
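As a very rough sketch of both curves (the $5k wafer price is the figure from above; the usable wafer area and the defect density are made-up numbers purely for illustration), a simple Poisson-style yield model shows why the cost per working chip climbs so fast with die size:

import math

WAFER_COST = 5000.0      # wafer price from the example above
WAFER_AREA = 55000.0     # assumed usable area of a 300mm wafer, in mm2
DEFECT_DENSITY = 0.004   # assumed defects per mm2, purely illustrative

def cost_per_working_chip(die_area_mm2):
    # Poisson yield: the chance of a die having zero defects drops
    # roughly exponentially with its area, which is the "mental" curve above.
    yield_fraction = math.exp(-DEFECT_DENSITY * die_area_mm2)
    dies_per_wafer = WAFER_AREA / die_area_mm2   # ignores edge losses
    good_dies = dies_per_wafer * yield_fraction
    return WAFER_COST / good_dies

for area in (250, 350, 400, 480, 530):
    print(f"{area}mm2 -> ${cost_per_working_chip(area):.2f} per working chip")

With those made-up numbers a ~250mm2 die lands around the $60 mark and a ~480-530mm2 die costs several times that, which is the shape of the problem even if the exact figures are wrong.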
That's the problem. The rumours of sub-10, or maybe 20, working 512sp full Fermis per wafer are likely true, or frankly Nvidia would have released them; a run of several thousand wafers would still net 40-50k chips, and they've released "ultra" type cards with smaller volumes than that before.
So the difference between getting 40 chips and 80 off a wafer is pretty huge in terms of price, where you start making a profit, and how far you can raise the price to make a dent in the billions you've spent on R&D.
Either way, Cayman is supposed to be sub-400mm2. I can't remember what guess I put in for Cayman; I guessed 255mm2 for the 6870/50 though, and could not have been more correct on that. Still, it's a guess, and my Cayman guess could be way off.
From rumours, from general yields of "big" chips in the past, and from the issues Nvidia have had over the last three processes in terms of yields/cost/problems due to size: above 400mm2 isn't great, above 450mm2 is going to be bad, and above 500mm2 is just ridiculous.
If Cayman goes above 400mm2, it won't be by much, or they've gotten some silly pricing from TSMC based on their potential to leave for GloFo very, very soon. This is where the problem lies: on the current 5870 architecture/efficiency, GF100 is about 60% bigger but at best 20% faster. What happens when that size gap shrinks to 20%? If both chips had a similar architecture AMD would have a significant lead, and if AMD have increased their efficiency dramatically Nvidia won't have a hope in hell.
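To put numbers on that efficiency gap (using only the ratios quoted above, nothing measured):

# Rough perf-per-area comparison using the ratios above, nothing more precise.
gf100_area = 1.60   # GF100 roughly 60% bigger than a 5870-class die
gf100_perf = 1.20   # and at best 20% faster

print(f"GF100 perf/mm2 relative to 5870: {gf100_perf / gf100_area:.2f}x")  # ~0.75x

# If the next Nvidia chip ends up only ~20% bigger than Cayman, then just to
# tie on performance its perf/mm2 has to climb to ~0.83x of AMD's, and that's
# before assuming AMD improve their own efficiency at all.
next_area_gap = 1.20
print(f"Perf/mm2 needed just to tie at 20% bigger: {1.0 / next_area_gap:.2f}x")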
Even ignoring that, let's guess Nvidia end up 20% bigger (I think it will be at least 30%, personally). Take any number, it doesn't really matter, but say Nvidia get 40 chips off the wafer and AMD get 48: $5k / 40 = $125, $5k / 48 = $104.
That's COST: zero profit, making nothing at all, not even paying the shipping to get them put onto cards somewhere. We're talking billions in R&D from both companies, and they tend to aim for roughly 100% margin on the chip; at that point it's $250 vs $208, a $42 higher price before you even talk about power, memory and everything else. And in this case AMD would almost certainly have the performance advantage: $42 cheaper, or $42 more profit, and the faster card. It's win-win.
If it's 30%, AMD end up getting 30% more chips per wafer, which brings it down to $96, basically a quarter cheaper, and it's still faster.
That's also the best-case scenario. In the real world a 400mm2 core will have marginally better yields than a 401mm2 core, but significantly higher yields than a borderline 480mm2 core; that could throw another 10% more working chips per wafer to AMD, which means it's $125 vs $89.
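Same sums laid out end to end (the $5k wafer, the 40-die baseline, the 100% margin and the extra ~10% yield edge are all the assumed figures from above, not real numbers):

WAFER_COST = 5000.0     # assumed wafer price, as above
NVIDIA_GOOD_DIES = 40   # assumed baseline for the bigger chip

def chip_cost(good_dies):
    return WAFER_COST / good_dies

cases = (
    ("20% smaller die",               1.2, 1.0),
    ("30% smaller die",               1.3, 1.0),
    ("30% smaller + ~10% yield edge", 1.3, 1.1),
)
for label, area_advantage, yield_bonus in cases:
    amd_dies = NVIDIA_GOOD_DIES * area_advantage * yield_bonus
    nv, amd = chip_cost(NVIDIA_GOOD_DIES), chip_cost(amd_dies)
    # doubling is the rough 100% margin mentioned above
    print(f"{label}: cost ${nv:.0f} vs ${amd:.0f}, at 100% margin ${nv*2:.0f} vs ${amd*2:.0f}")

The last case comes out around $87 a chip rather than $89, but it's the same ballpark; the exact figure depends entirely on how big you assume the yield advantage is.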
That 80mm2 is HUGE, and if the gap's bigger (385-395mm2 vs 500-510mm2) it just gets worse and worse.
An extra £50 is fine for a card that's actually the fastest; that's how the market works, people will pay a premium. If it's actually slower though, very, very few people will go for a more expensive and slower card.