
Yet another Intel CPU security vulnerability!

Did they find all the NSA hacking tools before they leaked? The ones everyone rushed to patch, including several zero-day exploits?

Hackers are using leaked NSA hacking tools to covertly hijack thousands of computers
https://techcrunch.com/2018/11/28/hackers-nsa-eternalblue-exploit-hijack-computers/

How do you think companies like Akamai in your link know this stuff is going on? If these RIDL exploits, etc. were as easy to use as you imply, we'd be seeing them used in the wild by now.

EDIT: And if someone does find some way to push through and expose a system to these kinds of attacks, who knows - they might even be able to trick or manipulate the OS or BIOS into re-enabling HT, or at least enough functionality for such exploits to work (such BIOS manipulations already exist in other exploits) - but again, if they have got that far you are already owned.
 
How do you think companies like Akamai in your link know this stuff is going on? If these RIDL exploits, etc. were as easy to use as you imply, we'd be seeing them used in the wild by now.
No one knew about the NSA hacking tools until they leaked. This is my last post by the way. The debate is more or less over. I am sure you need to do something else as well.
 
There are lots of zero-day exploits running around at any one time; one I remember could simply let you take over a Windows machine. Why has the world not ended? Why isn't every desktop system being compromised left, right and centre?

I don't disagree, and it's why I have invested in firewalls that run into the £20k+ price points and that run IDS, IPS, DLP and other Unified Threat Management features. It's why I proxy port 80 and 443 traffic, and also why I use MessageLabs. It's also why I have network-based AI (DarkTrace AP) in my security stack, and why I have just invested the best part of £100k in new servers and some new layer 3 switches. You know what, it's also why I put my job on the line every year by paying professionals big stacks of cash for external pen tests. Why would I do all of this if the internet was a safe place? I don't doubt for a second that zero-day vulnerabilities exist, but my security stack is deep and well regarded as some of the best tech out there. Is it foolproof? Hell no, but it's robust.

With all this in mind, why then do I agree with many of the fundamentals of what people in here are saying? It isn't because I think you don't need security - not at all, security is important - but it's equally important to know what the attack footprint is so you can ensure your security is effective. When you put these fundamentals together you should be able to join up the dots and see things from a broader, more reasoned perspective.
 
No one knew about the NSA hacking tools until they leaked. This is my last post by the way. The debate is more or less over. I am sure you need to do something else as well.

People do, however, know about RIDL and the associated side-channel methods, so if attacks were going on we would know by now, and if it were as easy to turn these proofs of concept into viable real-world attacks as you claim, they would certainly exist by now. In reality, as I said, there are reasons they demonstrated these weaknesses in the form they have - that doesn't mean the weaknesses aren't a threat, it just means there is more to understanding their application and where their usefulness lies for an attacker.

PS Don't worry about me - posting on here doesn't take away time from me getting on with anything else.

I don't disagree, and it's why I have invested in firewalls that run into the £20k+ price points and that run IDS, IPS, DLP and other Unified Threat Management features. It's why I proxy port 80 and 443 traffic, and also why I use MessageLabs. It's also why I have network-based AI (DarkTrace AP) in my security stack, and why I have just invested the best part of £100k in new servers and some new layer 3 switches. You know what, it's also why I put my job on the line every year by paying professionals big stacks of cash for external pen tests. Why would I do all of this if the internet was a safe place? I don't doubt for a second that zero-day vulnerabilities exist, but my security stack is deep and well regarded as some of the best tech out there. Is it foolproof? Hell no, but it's robust.

This is a lot of the reason I got out of IT as a job (though for me it was more related to database development, certification, etc.) - it can be so demanding staying on top of the latest developments, and it sucked the life out of a lot of the sides of it I enjoy as a hobby.

Though mind you my brother makes very good money doing a lot of that stuff on a consultancy/contracting basis - way more than I ever made :s
 
People do, however, know about RIDL and the associated side-channel methods, so if attacks were going on we would know by now, and if it were as easy to turn these proofs of concept into viable real-world attacks as you claim, they would certainly exist by now.
Yes, this is a point I also made previously. Anti-virus researchers have been actively searching for malware in the wild based on these types of side-channel attacks. 18 months on from Spectre and still the only occurrence is the original proof of concept code. Now I am sure there are government agencies, with massive resources, which are researching weaponised versions of these exploits. But, due to the way these exploits work, they will be bespoke and tuned to a specific target. Your average home user is not going to stumble across this whilst casually browsing ads on the Internet.

The setup for the ZombieLoad browser snooper does require ideal conditions. First they run a third-party tool on the victim machine to reveal the CPU core IDs and to determine which are physical and which are logical. Then the browser and the snooping software are launched on the same machine, using core affinity to ensure that both are running locked to the same physical core. Finally they recommend that the CPU is configured to run at a constant maximum frequency with no SpeedStep-style variation. They also go on to say that for the best chance of success the machine should have a clean reboot beforehand, as resuming from sleep messes with the timer state.

That’s quite a lot of hard work to snoop on your browsing history. Much easier just to trick the home user into installing some fake browser add-on which will give them complete control over the machine.
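To put the core-pinning part in perspective: it is the sort of thing only code already running locally can do. A rough sketch of what pinning a process to a chosen logical CPU looks like on Linux (illustrative only, not the actual ZombieLoad harness; it assumes the sibling hyperthread IDs have already been discovered by some other means):

```c
/* Sketch: pin the calling process to one logical CPU, the way a PoC harness
 * might pin both the attacker and the victim onto sibling hyperthreads.
 * Assumes Linux; the logical CPU ID is passed on the command line. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

static void pin_to_cpu(int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {  /* 0 = this process */
        perror("sched_setaffinity");
        exit(EXIT_FAILURE);
    }
}

int main(int argc, char **argv)
{
    int cpu = (argc > 1) ? atoi(argv[1]) : 0;
    pin_to_cpu(cpu);
    printf("pinned to logical CPU %d\n", cpu);
    /* ...attacker or victim workload would run here... */
    return 0;
}
```

Nothing exotic, but also not something a web page gets to do to your machine.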
 
The setup for the ZombieLoad browser snooper does require ideal conditions. First they run a third-party tool on the victim machine to reveal the CPU core IDs and to determine which are physical and which are logical. Then the browser and the snooping software are launched on the same machine, using core affinity to ensure that both are running locked to the same physical core. Finally they recommend that the CPU is configured to run at a constant maximum frequency with no SpeedStep-style variation. They also go on to say that for the best chance of success the machine should have a clean reboot beforehand, as resuming from sleep messes with the timer state.

That’s quite a lot of hard work to snoop on your browsing history. Much easier just to trick the home user into installing some fake browser add-on which will give them complete control over the machine.

The problem with RIDL is that it works with very small buffers, it faces severe challenges in actually returning the data it is trying to leak because it often doesn't know precisely when that data will be available, and it requires some way of forcing the victim to put that data into a buffer in the first place - usually repeatedly, so as to overcome those limitations - which isn't normal behaviour for a program. It is further complicated by the fact that different CPUs have different configurations of the buffers used, or might not have them at all, so it is much easier to use these attacks when you know what the target CPU is (i.e. a server you already have some relationship with) and hard in the case of a web browser, where CPUID information generally isn't exposed (although in some cases it might be leaked or possible to infer from the user agent).

Which is why the examples mostly use a specially crafted victim process that constantly writes/stores the data they want to steal, and why that is unrealistic in the real world. There are, however, instances like the passwd example where you can force the victim to repeatedly store the information you want to leak in the relevant buffer, but that isn't something that can really be done through a web browser, and it generally doesn't work with things like passwd in the real world due to the factors I mentioned before, such as security watchdogs designed to prevent brute-force attacks.

"In a typical real-world setting, synchronizing at the exact point when sensitive data is in-flight becomes non-trivial, as we have limited control over the victim process. Instead, by repeatedly leaking the same information and aligning the leaked bytes, we can retrieve the secret with-out requiring a hard synchronization primitive"

One of the key ways it works is by repeatedly forcing a program to store the same data; by comparing multiple "noisy" leaks for commonalities you can eventually align each copy and recover the full data. That obviously presents some pretty significant challenges when trying to leak something via a web browser, where it is very rare for privileged information to be managed like that and you have limited, if any, control over the other side of the boundary to force it to behave in a way that cooperates with your exploit - unlike a shared hosting environment, where you have more opportunities to invoke processes on demand from unprivileged access.
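Just to illustrate the shape of that recovery step, here is a toy sketch (my own illustration, nothing to do with the actual leak primitive, and it assumes the noisy fragments are already aligned to their byte offsets, which is itself a big part of the real difficulty): reconstructing the secret from repeated partial leaks essentially comes down to a per-position vote across the samples.

```c
/* Toy sketch of the "compare multiple noisy leaks" idea: given several
 * partial copies of the same secret (0 = nothing recovered at that position
 * in that round), take a per-position majority vote to rebuild it.
 * Purely illustrative - obtaining the samples at all is the hard part. */
#include <stdio.h>

#define SAMPLES 4
#define LEN     8

int main(void)
{
    /* Pretend these came from repeated leak attempts against the secret "password". */
    const unsigned char leaked[SAMPLES][LEN] = {
        { 'p', 0,   's', 's', 0,   'o', 0,   'd' },
        { 0,   'a', 's', 0,   'w', 0,   'r', 0   },
        { 'p', 'a', 0,   's', 0,   'o', 'r', 'd' },
        { 'p', 0,   's', 0,   'w', 'o', 0,   'd' },
    };
    unsigned char secret[LEN + 1] = { 0 };

    for (int pos = 0; pos < LEN; pos++) {
        int counts[256] = { 0 };
        for (int s = 0; s < SAMPLES; s++)
            counts[leaked[s][pos]]++;
        int best = 1;                   /* index 0 means "nothing leaked", so it never wins */
        for (int b = 2; b < 256; b++)
            if (counts[b] > counts[best])
                best = b;
        secret[pos] = counts[best] ? (unsigned char)best : '?';
    }
    printf("reconstructed: %s\n", secret);  /* prints "password" */
    return 0;
}
```

In a browser you control neither when the interesting data lands in the buffer nor how often, which is exactly why this kind of repetition is so hard to arrange from that side of the boundary.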
 
I don't disagree, and it's why I have invested in firewalls that run into the £20k+ price points and that run IDS, IPS, DLP and other Unified Threat Management features. It's why I proxy port 80 and 443 traffic, and also why I use MessageLabs. It's also why I have network-based AI (DarkTrace AP) in my security stack, and why I have just invested the best part of £100k in new servers and some new layer 3 switches. You know what, it's also why I put my job on the line every year by paying professionals big stacks of cash for external pen tests. Why would I do all of this if the internet was a safe place? I don't doubt for a second that zero-day vulnerabilities exist, but my security stack is deep and well regarded as some of the best tech out there. Is it foolproof? Hell no, but it's robust.

With all this in mind, why then do I agree with many of the fundamentals of what people in here are saying? It isn't because I think you don't need security - not at all, security is important - but it's equally important to know what the attack footprint is so you can ensure your security is effective. When you put these fundamentals together you should be able to join up the dots and see things from a broader, more reasoned perspective.

Defense in depth is the only way to go. Security is not about protecting yourself only against what you know about, but about being secure even against the unknown. So you have to be secure at every level, and the CPU is a very important one. Sometimes a security hole in one place can make your whole security posture insecure. You don't wait around for whatever the attacker may come up with. You don't argue about the probability that it could be hard to exploit, or disagree with the PhD security experts who found the problem. You turn right away to removing the possibility of exploitation, and if you can't, then you mitigate the threat.

The question of turning HT off comes down to whether you are running trusted or untrusted code.

If you run a web browser then you are exposed to code you cannot control, so your posture should be one of removing the possibility of attack or, if you can't, mitigating it. Given that RIDL is a hardware problem made worse by HT, software mitigation (with HT off) or hardware replacement would be the quickest way.

With a server that only runs trusted code, you could argue that leaving HT on is okay. This is a server locked away on a part of the network not accessible to outside sources of attack. Security-wise, you can balance the very low probability of an attack against the performance increase HT provides. This would be a risk-assessment process, I would guess.

Home users can make the same choice with computer games, games being, I would guess, their version of trusted code. Even so, gamers need to be careful: some games like CS:GO will load web pages when joining a server and will connect to servers that cannot be trusted (public servers run by an untrusted third party). That is untrusted code, and it may be prudent in that case to turn off HT. Here HT provides very little performance benefit, making the choice trivial. The issue is that even I, as a gamer, would never turn HT off, and some people will find it hard to turn HT on and off at all.

If you don't understand when to turn HT off or on, then I guess you should just leave it off, as the experts advise.
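For anyone who does want to check or change it without a trip to the BIOS: recent Linux kernels (roughly 4.19 onwards) expose an SMT switch in sysfs, and writing "off" to /sys/devices/system/cpu/smt/control as root disables HT until the next reboot (the BIOS/UEFI setting is the persistent route, and on Windows it is a BIOS/UEFI job anyway). A small sketch that just reads the current state, assuming that interface is present:

```c
/* Sketch: read the kernel's SMT status via sysfs (assumes a Linux kernel
 * new enough to expose /sys/devices/system/cpu/smt/). Writing "off" to the
 * neighbouring "control" file as root disables hyper-threading until reboot. */
#include <stdio.h>

int main(void)
{
    char state[16];
    FILE *f = fopen("/sys/devices/system/cpu/smt/active", "r");

    if (!f) {
        printf("SMT control interface not available on this kernel\n");
        return 1;
    }
    if (fgets(state, sizeof(state), f))
        printf("SMT active: %s", state);   /* "1" = HT on, "0" = HT off */
    fclose(f);
    return 0;
}
```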
 
zx128k, none of those scare me; all of those presentations were done with local access already granted to a machine.

It's a bit like how one assesses server vulns: you have what's known as local exploits and remote exploits. The former requires some kind of existing access to accomplish the exploit, hence the term local, and by nature carries much lower risk levels; the latter would allow an exploit to be carried out without any existing access to the machine, e.g. accessing a web page on a server and then being able to get into the server from that alone, which is obviously much more serious.

If I were to apply the same sort of classification to those examples you posted, they would be the equivalent of local exploits.

Now, if someone manages to put a drive-by on, say, the BBC News website that specifically uses this vulnerability, then you have my attention.
 
That's a pretty fair comment. I don't pretend to understand it, which is why I read this thread much more than I participate in it.

Having said that, I don't trust the agenda of a minority of others in here nearly as much as I trust yours.

I hope I am not in that untrusted category.

The issue I have with the hype is that it misrepresents the facts. Plain and simple, really.

In server land I think EPYC is a clear win; I don't have any brand affiliation with either Intel or AMD.

On the desktop, if you fully patch Intel systems I have no doubt performance is very close between the chips. My issue, really, is that these patches are probably not needed for the vast majority of desktop users. But of course, if you want peace of mind that publicly disclosed vulns are patched, then by all means patch, or just buy AMD. There will always be undisclosed vulns that could apply to any platform, and patches do not necessarily fully mitigate anyway - e.g. the Spectre patches do not fully mitigate; if they did, the performance hit would be unreal.
 
zx128k, none of those scare me; all of those presentations were done with local access already granted to a machine.

It's a bit like how one assesses server vulns: you have what's known as local exploits and remote exploits. The former requires some kind of existing access to accomplish the exploit, hence the term local, and by nature carries much lower risk levels; the latter would allow an exploit to be carried out without any existing access to the machine, e.g. accessing a web page on a server and then being able to get into the server from that alone, which is obviously much more serious.

If I were to apply the same sort of classification to those examples you posted, they would be the equivalent of local exploits.

Now, if someone manages to put a drive-by on, say, the BBC News website that specifically uses this vulnerability, then you have my attention.

Local access does not matter. Any untrusted code in your browser from a website is a problem too. If you only run trusted code it's not an issue. Above, they have moved on to their last smokescreen, which is the noise issue - one the white paper states they had to overcome in their password attack. Ignoring a proof of concept is not a great debating position.

See the synchronization section of the RIDL paper, pages 7 & 8: https://mdsattacks.com/files/ridl.pdf

"For this purpose, we show a noise-resilient mask-sub-rotate attack technique that leaks 8 bytes from a given index at a time.

As shows in Figure7,1suppose we already know part of the bytes we want to leak (either by leaking them first or knowing them through some other means).2In the speculative path we can mask the bytes that we do not know yet.3By subtracting the known value,4and then rotating by 16 bytes, values that are not consistent with previous observations will be out of bounds in our Flush + Reload buffer, meaning we do not leak them. This technique greatly improves the observed signal.We use this technique to develop an exploit on Linux that is able to leak the contents of the/etc/shadow file. Our approach involves repeatedly invoking the privileged passwd program from an unprivileged user. As a result the privileged process opens and reads the/etc/shadow file, that ordinary users cannot access otherwise. Since we cannot modify the victim to introduce a synchronization point, we repeatedly run the program and try to leak the LFB while the program reads the/etc/shadow file. By applying our previously discussed technique, with the additional heuristic that the leaked byte must be a printable ASCII character, we are able to leak the contents of the file even with the induced noise from creating processes."
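For what it's worth, the "repeatedly invoking the privileged passwd program from an unprivileged user" step is the easy half; roughly it amounts to something like this (a sketch of the invocation loop only - the sampling/leaking side, which is the hard part, is deliberately omitted, and the /usr/bin/passwd path is assumed):

```c
/* Sketch of ONLY the victim-invocation step described in the paper: spawn
 * the setuid passwd binary over and over so that it opens and reads
 * /etc/shadow. The actual leak/sampling code is not shown here. */
#include <fcntl.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    for (int i = 0; i < 1000; i++) {
        pid_t pid = fork();
        if (pid == 0) {
            /* Child: give passwd no input and silence it, so it reads
             * /etc/shadow, fails its prompt and exits quickly. */
            int devnull = open("/dev/null", O_RDWR);
            if (devnull >= 0) {
                dup2(devnull, STDIN_FILENO);
                dup2(devnull, STDOUT_FILENO);
                dup2(devnull, STDERR_FILENO);
            }
            execl("/usr/bin/passwd", "passwd", (char *)NULL);
            _exit(127);
        }
        /* Parent: in the paper's attack the sampling loop runs on the
         * sibling hyperthread while the child is in flight. */
        waitpid(pid, NULL, 0);
    }
    return 0;
}
```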
 
Local access does not matter. Any untrusted code in your browser from a website is a problem too.
Yet all the proof-of-concept code is being executed with local access on a specially prepared machine. They haven't been able to demonstrate the exploit in JavaScript delivered via a modern browser. Sure, it is possible to write malicious JavaScript, but these exploits require low-level instructions and precise timing; there are limits to what is possible in script, and there are much simpler attacks that could be used instead.
 
Yet all the proof-of-concept code is being executed with local access on a specially prepared machine. They haven't been able to demonstrate the exploit in JavaScript delivered via a modern browser. Sure, it is possible to write malicious JavaScript, but these exploits require low-level instructions and precise timing; there are limits to what is possible in script, and there are much simpler attacks that could be used instead.

You have no idea what you are talking about. So basically you are stating it's not possible from JavaScript. Here it is in JavaScript. Note: SpiderMonkey is Mozilla's JavaScript engine, written in C and C++. WebAssembly: https://blog.mozilla.org/blog/2017/11/13/webassembly-in-browsers/

RIDL from JavaScript
https://youtu.be/KAgoDQmod1Y

If your argument is right, then explain why someone has done the attack from JavaScript.

I would also like to cite:

@inproceedings{ridl,
title = {{RIDL}: Rogue In-flight Data Load},
booktitle = {S\&{P}},
author = {van Schaik, Stephan and Milburn, Alyssa and Österlund, Sebastian and Frigo, Pietro and Maisuradze, Giorgi and Razavi, Kaveh and Bos, Herbert and Giuffrida, Cristiano},
month = may,
year = {2019},
}

https://mdsattacks.com/

"We show that attackers who can run unprivileged code on machines with recent Intel CPUs - whether using shared cloud computing resources, or using JavaScript on a malicious website or advertisement - can steal data from other programs running on the same machine, across any security boundary: other applications, the operating system kernel, other VMs (e.g., in the cloud), or even secure (SGX) enclaves."

Please prove that this is not possible, as there is reasonable proof you are wrong.
 
Are you trolling? Does that look like a drive-by infection in a web browser to you?

Look at the top of the CLI; it's a locally executed command.

JavaScript is an interpreted language; the browser provides its platform independence through its JavaScript engine. As I posted above, SpiderMonkey is Mozilla's JavaScript engine/interpreter. So if it runs at the CLI, it will run from SpiderMonkey, which is built into Firefox.
 
Fact 1: They used a timer which has been disabled in modern browsers.
Fact 2: They couldn't make it work in Chrome (and they don't mention any other browsers).
Fact 3: They ran the taskset command locally to lock both the victim and the attack processes to a specific CPU core (i.e. a specially prepared environment). They don't show how they determined the core IDs, but I would imagine it was by executing lscpu locally beforehand.
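For reference, the core ID information is trivially available to code already running on the box - lscpu just reads it out of sysfs, and a few lines of C can do the same (a sketch assuming the standard Linux sysfs layout) - but that is still local execution, not something a web page can ask for:

```c
/* Sketch: list which logical CPUs share a physical core, i.e. the
 * hyperthread sibling pairs lscpu reports (assumes the Linux sysfs layout). */
#include <stdio.h>

int main(void)
{
    char path[128], siblings[64];

    for (int cpu = 0; ; cpu++) {
        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu%d/topology/thread_siblings_list",
                 cpu);
        FILE *f = fopen(path, "r");
        if (!f)
            break;                          /* no more logical CPUs */
        if (fgets(siblings, sizeof(siblings), f))
            printf("cpu%d shares a core with: %s", cpu, siblings);
        fclose(f);
    }
    return 0;
}
```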
 
Local access does not matter. Any untrusted code in your browser from a website is a problem too. If you only run trusted code it's not an issue. Above, they have moved on to their last smokescreen, which is the noise issue - one the white paper states they had to overcome in their password attack. Ignoring a proof of concept is not a great debating position.

See the synchronization section of the RIDL paper, pages 7 & 8: https://mdsattacks.com/files/ridl.pdf

"For this purpose, we show a noise-resilient mask-sub-rotate attack technique that leaks 8 bytes from a given index at a time.

As shows in Figure7,1suppose we already know part of the bytes we want to leak (either by leaking them first or knowing them through some other means).2In the speculative path we can mask the bytes that we do not know yet.3By subtracting the known value,4and then rotating by 16 bytes, values that are not consistent with previous observations will be out of bounds in our Flush + Reload buffer, meaning we do not leak them. This technique greatly improves the observed signal.We use this technique to develop an exploit on Linux that is able to leak the contents of the/etc/shadow file. Our approach involves repeatedly invoking the privileged passwd program from an unprivileged user. As a result the privileged process opens and reads the/etc/shadow file, that ordinary users cannot access otherwise. Since we cannot modify the victim to introduce a synchronization point, we repeatedly run the program and try to leak the LFB while the program reads the/etc/shadow file. By applying our previously discussed technique, with the additional heuristic that the leaked byte must be a printable ASCII character, we are able to leak the contents of the file even with the induced noise from creating processes."

You are overlooking two crucial factors in how they overcame the noise issue: they made their victim process repeatedly store the "secret" data they were after, and they controlled the local environment so the victim process would be exactly where their attack needed it to be in order to leak data. They also started with a considerably cleaner environment than a typical desktop system, where a lot more arbitrary data would be coming and going through the buffers. This isn't how privileged information works in the real world, although in some cases it might be possible to force those circumstances when you have one foot in the door, as in a shared server environment.
 
You are overlooking two crucial factors in how they overcame the noise issue: they made their victim process repeatedly store the "secret" data they were after, and they controlled the local environment so the victim process would be exactly where their attack needed it to be in order to leak data. They also started with a considerably cleaner environment than a typical desktop system, where a lot more arbitrary data would be coming and going through the buffers. This isn't how privileged information works in the real world, although in some cases it might be possible to force those circumstances when you have one foot in the door, as in a shared server environment.

You need to read the white paper; it explains how they go about getting the data, and I have already covered all of that. There is no argument here for me to answer.
 
Fact 1: They used a timer which has been disabled in modern browsers.
Fact 2: They couldn't make it work in Chrome (and they don't mention any other browsers).
Fact 3: They ran the taskset command locally to lock both the victim and the attack processes to a specific CPU core (i.e. a specially prepared environment). They don't show how they determined the core IDs, but I would imagine it was by executing lscpu locally beforehand.

"While built-in high-resolution timers have been disabled as part of browser mitigations against side-channel at-tacks [64], [63], prior work has demonstrated a variety of techniques to craft new high-resolution timers [66],[56], [60], such as SharedArrayBuffer [60] and GPU-based counters [56]." "The SharedArrayBuffer feature was recently re-enabled in Google Chrome, after the introduction of SiteIsolation [67], [68]." Straight out of the white paper.

Chromium https://www.chromium.org/Home/chromium-security/mds
The Chromium development team released an advisory stating that they looked into whether they could introduce MDS vulnerability mitigations into the browser, but decided that users should rely on operating system security updates instead.

"The Chrome team investigated various mitigation options Chrome could take independently of the OS, but none were sufficiently complete or performant. Users should rely on operating system level mitigations."


Windows

Windows users should apply updates with MDS mitigations as soon as they are available, and follow any guidance to adjust system settings if appropriate. See: "To be fully protected, customers may also need to disable Hyper-Threading (also known as Simultaneous Multi-Threading (SMT))."

Chrome OS

Chrome OS has disabled Hyper-Threading on Chrome OS 74 and subsequent versions. This provides protection against attacks using MDS. More details on Chrome OS's response.
 