You need to read the white paper; it answers how they go about getting the data, and I have already covered all of that. There is no argument here for me to answer.
You might want to re-read it with what I said in mind: their specially crafted victim process repeatedly stores the data they are after, so that it becomes prominent among the data that normally passes through that buffer:
"Our first experiment performs the attack discussed earlier against a victim running in the same or other hardware thread and repeatedly storing a secret to a fixed memory address."
That isn't normal application behaviour, and it is what they need to do to overcome the various synchronisation and noise issues - nothing I am saying is at odds with the white paper.
Even assuming they somehow got their exploit running despite all the other obstacles posed by the browser, JS, and a normal desktop environment, their attack needs a victim process that is storing the target data over and over, as fast as the system can manage, just to leak data at 1 B/s. In the real world that data is probably in that buffer two or three times over a session, not the thousands of times they need to leak it. (And for those two or three occurrences they would have no way of knowing the data was there, so they couldn't read the evicted data in a timely fashion anyway.)
It is academically interesting because it theoretically shouldn't be possible to leak that data across that boundary at all, but it doesn't represent something that works in the real world. The theory could be put into play in certain environments (namely things like shared hosting) if an attacker found a weakness they could repeatedly invoke from unprivileged code at will - hence the passwd example, but that is another one that is unrealistic in practice for the same reasons.