This is one of the most important questions in computing.
Modern processors are largely limited by how fast they can get at their data, so it matters. But the benefits of cache are diminishing: picture a scale running from zero cache up to all data sitting at that maximum speed, and the speed increase along it is not linear.
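If you want to see the non-linearity for yourself, here's a minimal sketch in C: it times strided reads over working sets of growing size, and the time per access typically steps up as the set spills out of each cache level. The sizes, stride, and pass counts are arbitrary assumptions for illustration, not tuned numbers.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void) {
        /* Sweep working sets from 4 KiB (fits in L1) to 64 MiB (beyond L3). */
        for (size_t bytes = 4096; bytes <= 64u * 1024 * 1024; bytes *= 2) {
            size_t n = bytes / sizeof(int);
            int *a = malloc(bytes);
            if (!a) return 1;
            for (size_t i = 0; i < n; i++) a[i] = (int)i;

            size_t touches = n / 16;                       /* one read per 64-byte line */
            size_t passes = (64u * 1024 * 1024) / touches; /* keep total work ~constant */
            volatile int sink = 0;

            clock_t t0 = clock();
            for (size_t p = 0; p < passes; p++)
                for (size_t i = 0; i < n; i += 16)
                    sink += a[i];
            clock_t t1 = clock();

            double ns = (double)(t1 - t0) / CLOCKS_PER_SEC
                        / (double)(passes * touches) * 1e9;
            printf("%8zu KiB: %6.2f ns/access\n", bytes / 1024, ns);
            free(a);
            (void)sink;
        }
        return 0;
    }

On a typical machine the ns/access column stays flat while the set fits in a cache level, then jumps, which is exactly the non-linear curve described above.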
The cache replacement algorithm will try to keep the right data in the fastest memory, so most of the time that policy matters more than the raw size.
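As a toy sketch of one such replacement policy, here is least-recently-used (the Wikipedia page linked at the bottom covers the whole family). Real caches use sets of lines and hardware approximations; the 4-slot fully associative cache and the access trace here are purely illustrative.

    #include <stdio.h>

    #define SLOTS 4

    static int tag[SLOTS];        /* which block each slot holds (-1 = empty) */
    static unsigned stamp[SLOTS]; /* last-used time of each slot */
    static unsigned tick;

    void access_block(int block) {
        int victim = 0;
        for (int i = 0; i < SLOTS; i++) {
            if (tag[i] == block) {            /* hit: just refresh recency */
                stamp[i] = ++tick;
                printf("hit  %d\n", block);
                return;
            }
            if (stamp[i] < stamp[victim])     /* track least recently used */
                victim = i;
        }
        printf("miss %d (evict %d)\n", block, tag[victim]);
        tag[victim] = block;                  /* miss: replace the LRU victim */
        stamp[victim] = ++tick;
    }

    int main(void) {
        for (int i = 0; i < SLOTS; i++) tag[i] = -1;
        /* Re-touching 1 makes 2 the least recently used, so 5 evicts 2. */
        int trace[] = {1, 2, 3, 4, 1, 5, 2};
        for (int i = 0; i < (int)(sizeof trace / sizeof *trace); i++)
            access_block(trace[i]);
        return 0;
    }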
I think the speed increase would look a bit like that, but it's going to depend on the size of the program, since some will fit in cache entirely.
In real-world terms, are we going to notice it in basic Windows navigation and surfing, or is it only recognisable in encoding and gaming etc.?
Not sure. If you look at the AnandTech site, I think they have discussed this at length and measured it by program type, for instance when the first dual-core processors came out.
Repetitive, tight code should benefit most, but a video encode is a lot of data too, so the processor cache is only the critical point once all the other data is already waiting. A video encode with a simple codec will stress the hard drive most, and the cache size won't matter much; anything will do.
A very complex transform will likely put the overall speed back on the equations that make up the MPEG codec or whatever. Then it's the internal cache that counts, and a large cache lets everything be kept within hand's reach, so more variation can occur without halting to get up and fetch.
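To make the "within hand's reach" point concrete, here is a sketch comparing a naive matrix transpose with a tiled one that works on cache-sized blocks before moving on. The 4096x4096 matrix and 32x32 tile are illustrative assumptions, not tuned numbers.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N 4096
    #define TILE 32

    static float *A, *B;

    void transpose_naive(void) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                B[j * N + i] = A[i * N + j];  /* strided writes miss a lot */
    }

    void transpose_tiled(void) {
        /* Finish a TILE x TILE block that fits in cache before moving on. */
        for (int ii = 0; ii < N; ii += TILE)
            for (int jj = 0; jj < N; jj += TILE)
                for (int i = ii; i < ii + TILE; i++)
                    for (int j = jj; j < jj + TILE; j++)
                        B[j * N + i] = A[i * N + j];
    }

    int main(void) {
        A = malloc(sizeof(float) * N * N);
        B = malloc(sizeof(float) * N * N);
        if (!A || !B) return 1;
        for (long i = 0; i < (long)N * N; i++) A[i] = (float)i;

        clock_t t0 = clock();
        transpose_naive();
        clock_t t1 = clock();
        transpose_tiled();
        clock_t t2 = clock();

        printf("naive: %.2f s, tiled: %.2f s\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC);
        free(A); free(B);
        return 0;
    }

Both loops do the same work; the tiled version is usually much faster simply because its working set stays in cache, which is the kind of code where a larger cache pays off.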
Web browsing is mostly about latency over copper wires; that's the weakest link, so the processor has plenty of idle time anyway. The only thing I ever saw arguing otherwise was a proposal for 3D web pages over fibre optic:
http://www.youtube.com/watch?v=LJMsuPfoBk4
http://en.wikipedia.org/wiki/VRML
http://en.wikipedia.org/wiki/Cache_algorithms#Most_Recently_Used