Yet another Intel CPU security vulnerability!

I was talking with a friend who’s kicking himself about not going EPYC. This situation is going to hit hard I think.

Unless you have a process that requires Intel for legacy reasons (some companies still have applications from the 70s or 80s central to their production systems) or very specific performance reasons, you have to be very careful in an enterprise/professional scenario going with Intel at the moment in terms of accountability. I'm certainly glad I don't work in that environment any more.
 
Unless you have a process that requires Intel for legacy reasons (some companies still have applications from the 70s or 80s central to their production systems) or very specific performance reasons, you have to be very careful in an enterprise/professional scenario going with Intel at the moment in terms of accountability. I'm certainly glad I don't work in that environment any more.

Please stop.
 
Unless you have a process that requires Intel for legacy reasons (some companies still have applications from the 70s or 80s central to their production systems) or very specific performance reasons, you have to be very careful in an enterprise/professional scenario going with Intel at the moment in terms of accountability. I'm certainly glad I don't work in that environment any more.

I can think of only limited scenarios these days; there are few code bases that require Intel-specific compilers. One that immediately came to mind was a C compiler I came across a few years back (weirdly, when I was writing an Android app) that just wouldn't compile code on an AMD-based system but would when I went to compile it on an Intel-based system. I don't think there are really many apps at all that you can't get to work with some investment in time or a re-compile. I wouldn't say scenarios don't exist, but I think they are few and far between these days.
 
I don't think there are really many apps at all that you can't get to work with some investment in time or a re-compile.

Problem is that investing some time usually means investing money. For instance, plenty of older systems are still in play in retail and manufacturing environments that don't play nice on anything but Intel (often lots of hand optimisation, and originally created in COBOL, etc.), and over time massive amounts of time and money are spent cobbling them together where they are falling apart. The issue is that no one now exists who understands the systems well enough to do a simple port to a more modern platform, and no one wants to be the person signing off on the resources needed to recreate the system from scratch on a modern platform. We've been slowly pulling ourselves out of this scenario at work over the last few years, and that is with a willingness from management to embrace modern digital approaches and the costs of some of these update projects.

I'd agree it is a diminishing situation these days but it isn't rare yet.

EDIT: My post above wasn't intended to imply it was common, hence using "unless" - as I said at the start of the thread, I'd have severe reservations about using Intel for these kinds of tasks any more, but I do know these situations sometimes exist where it isn't a simple process to change over to AMD. The thrust of my post was more that I don't envy the people making design/purchasing decisions in these situations, with the potential amount of accountability on your shoulders security-wise in the current environment and the often non-trivial task of getting a company to move away from Intel if they've been using it for decades.
 
Problem is that investing some time usually means investing money. For instance, plenty of older systems are still in play in retail and manufacturing environments that don't play nice on anything but Intel (often lots of hand optimisation, and originally created in COBOL, etc.), and over time massive amounts of time and money are spent cobbling them together where they are falling apart. The issue is that no one now exists who understands the systems well enough to do a simple port to a more modern platform, and no one wants to be the person signing off on the resources needed to recreate the system from scratch on a modern platform. We've been slowly pulling ourselves out of this scenario at work over the last few years, and that is with a willingness from management to embrace modern digital approaches and the costs of some of these update projects.

I'd agree it is a diminishing situation these days but it isn't rare yet.

I've got a similar situation with an accounting system that has a custom HTML front end, written by a contractor who sadly took her own life. It works alongside a couple of compiled DLLs that are effectively a makeshift API, and there is no documentation for them or for the front end. The app that it hangs off has had many iterations since and is very different today; the DB schemas are miles apart and barely similar. It also relies heavily on an SQL interface to our case management tool, which feeds the accounting system data overnight. We have gone through 3 devs (apparent experts on the software), probably half a million in spent costs and 3 failed project attempts to try and migrate it. We got close once, or at least we thought we did, until I found a number of glaring oversights that ended up being show stoppers.

We have started a new project now and are going to replace and rewrite everything from the ground up. Really this is the route we should have taken a long time back, but you live and learn.
 
I've got a similar situation with an accounting system that has a custom HTML front end, written by a contractor who sadly took her own life. It works alongside a couple of compiled DLLs that are effectively a makeshift API, and there is no documentation for them or for the front end. The app that it hangs off has had many iterations since and is very different today; the DB schemas are miles apart and barely similar. It also relies heavily on an SQL interface to our case management tool, which feeds the accounting system data overnight. We have gone through 3 devs (apparent experts on the software), probably half a million in spent costs and 3 failed project attempts to try and migrate it. We got close once, or at least we thought we did, until I found a number of glaring oversights that ended up being show stoppers.

We have started a new project now and are going to replace and rewrite everything from the ground up. Really this is the route we should have taken a long time back, but you live and learn.

We (not my current employer) had a similar situation, if I'm reading that right, and the solution for many years was to simply print the information off from one system and have someone (namely me) re-enter it on the other :( I actually wrote the proof of concept that did me out of a job, connecting the two databases together :s

I had the advantage there, though, of having used the system for many, many years and pretty much knowing the pattern of how it worked in my head - very hard for anyone else to come in and understand the system at the same level with no proper documentation and missing source code, etc.
 
We (not my current employer) had a similar situation, if I'm reading that right, and the solution for many years was to simply print the information off from one system and have someone (namely me) re-enter it on the other :( I actually wrote the proof of concept that did me out of a job, connecting the two databases together :s

I had the advantage there, though, of having used the system for many, many years and pretty much knowing the pattern of how it worked in my head - very hard for anyone else to come in and understand the system at the same level with no proper documentation and missing source code, etc.

I think it's less about the way data flows at a database level and more about trying to change the front-end web app and how it leverages the API of the old system vs how the new version (which has a new RESTful API) would leverage the data - alongside how it feeds into custom holding tables and a whole load of other factors, including SAP, Crystal Reports and a whole load of other integration. It's just a lot of work, and hundreds of millions of lines of data that need to be right. It's a messy migration, and a migration just to be in the same place as we are now, which to be honest is much less than ideal.

Sometimes it is just better to start again and build a load of import scripts for the current data; if you do it right you get to choose your position, which to me is a much more appealing place to be. I need to stop, though, as even though this discussion is far more insightful and productive than the last few pages of the thread, we are so far off topic it's silly :)
 
I was talking with a friend who’s kicking himself about not going EPYC. This situation is going to hit hard I think.

I can't fathom why anybody would go the Intel route if they need to make the decision today and they don't have any major compatibility reason, such as legacy Intel-compiled systems as suggested above. Your average SME should be jumping all over this if they are at that point in the product life cycle. I mean, even if Intel cut their current prices in half they would still lose on important metrics like density, scalability, power draw, performance per $, performance per watt, SECURITY. Basically every important metric, they lose.

Any decision maker should be looking very closely at the numbers, roadmaps and relative performance as well as the current security landscape before making these sorts of decisions. It's not rocket science to move over, in fact it's pretty easy.
 
Which is why I don't envy anyone in that position - a lot of companies have always bought Intel, and convincing non-tech people at management/director level to change is, in some structures, extremely hard. Meanwhile you'd have to be conscious of the fact that if security issues do happen through continued use of Intel hardware, the target is going to be on your back.
 
Sticking with what you know, fear of the unknown.

I think there is also a level of mindshare. A friend of mine who does professional productivity work has had nothing but iMacs - 2 years old, £1,800, it's slower than hell and he knows it. He's sent it back twice thinking something was wrong with it, and he gets frustrated with how slow it is.
It's like a cheap laptop on a stand; you tell him that as delicately as possible and he gets quite upset. He genuinely thinks ANY PC would be even worse - there is no telling him, it's like a religion.

Seeing that, you soon realise Intel really don't have to do a lot of work to keep existing customers paying twice as much for inferior products with problems. Especially when you're the guy making the Intel vs AMD choice for your paymasters - you just stick with Intel because no one is ever going to criticise you for that.

That is what AMD are up against, and they know it.

 
The whole point of the bug is that sandboxes and VMs don't protect you from RIDL, because the hardware allows you to bypass them. The white paper states that they were able to complete the attack from SpiderMonkey. So this is just another attack. You should be banned m8, when will your harassment stop?

Web browsers are capable of running JavaScript outside the sandbox, with the privileges necessary to, for example, create and delete files.
Microsoft Windows allows JavaScript source files on a computer's hard drive to be launched as general-purpose, non-sandboxed programs.

Depends how it's set up, but as it's stated in the white paper, it does not matter. You know this because you have read the white paper by now. This is a poorly contrived argument.

Umm the sandbox could stop the code from running on the PC in the first place. Why should I be banned? I am not harassing you, I am taking part in a debate.

I have never said you cannot run JavaScript locally, and neither has anyone else said that.
 
Umm the sandbox could stop the code from running on the PC in the first place. Why should I be banned? I am not harassing you, I am taking part in a debate.

I have never said you cannot run JavaScript locally, and neither has anyone else said that.

That's the point of the security hole: sandbox or VM can't stop you stealing secrets. They were able to run the code from SpiderMonkey within Firefox. JavaScript is an interpreted language, so as long as the code can be run in the sandbox it can use the security hole. They cover methods for running the required commands for the attack within the browser. Remember you can set up the attack in the browser or from the command line. If done right it matters little, and it is a proof of concept. I believe there is a video showing them stealing your browsing history. The code has to run locally, so either you download it within your browser or run it from your hard disk; it matters little. It's not meant to work under every set of circumstances. Also, what they are showing, more importantly, is that they are able to steal the data they require on the target machine. It does not matter what is running on the machine; they will find it just as hard to steal the target data. They outline how in their white paper.
 
I think we'll agree to disagree on this one; it helps no one on here, just us going round in circles. If there is a proof of concept from within a modern web browser I will look at it, but until then my view is what it is.
 
They were able to run the code from SpiderMonkey within Firefox... They cover methods for running the required commands for the attack within the browser... I believe there is a video showing them stealing your browsing history.
Sorry, this is simply not true. There are two proof of concept demos for two different exploits, and neither is run within Firefox, or any browser.

RIDL JavaScript - The exploit recovers a text string which is being repeatedly re-posted. The JavaScript is run locally, outside the browser using the SpiderMonkey JavaScript engine. We know this because the command is taskset -c 7 ./js ridl-shell.js

ZombieLoad Browser Monitoring - The exploit recovers the current URL if it is refreshed a sufficient number of times within a defined time interval. The binary code is executed locally, outside the browser. We know this because the command is taskset -c 1 /lb_look 0

In both cases, the researchers have used local system tools to identify core IDs and lock the victim and attack processes to the same physical core. They do not detail how this could be achieved from within a browser.
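
For anyone wondering what that pinning actually looks like in practice, it is roughly along these lines on Linux (a sketch only - the victim/attacker names below are placeholders, not the researchers' actual binaries):

    # Logical CPUs that share a CORE value are hyperthread siblings on the same physical core.
    lscpu --extended=CPU,CORE

    # Pin the victim process to one sibling and the attack PoC to the other,
    # e.g. if logical CPUs 3 and 7 both report CORE 3 on a 4-core/8-thread part.
    taskset -c 3 ./victim-process &
    taskset -c 7 ./attack-poc

The point being that none of this is something a drive-by web page gets to control; it is local system tooling.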
 
I think we'll agree to disagree on this one; it helps no one on here, just us going round in circles. If there is a proof of concept from within a modern web browser I will look at it, but until then my view is what it is.

Side-channel attacks are very hard to defend against via sandboxing, etc., but by their nature they usually involve "feeling around in the dark" and, to be successful, usually need a fair amount of control over the environment. They are academically interesting because in theory it shouldn't be possible to leak data over these boundaries, theoretical or otherwise, but they are prohibitive to actually put into practice. JavaScript, for instance, has mostly abstracted memory management, making it a lot harder to "inadvertently" access memory locations that might leak useful information; it usually requires tricks with page faults to spill data you shouldn't have access to, and all in all you end up with a very imprecise leak of data that requires thousands or even millions of repetitions to piece together anything useful at all - and that is under conditions where you already have inside knowledge of the environment. The RIDL exploits rely on the target data repeatedly passing through a specific buffer. The buffer is small, which means the chances of the data being there when they flush it, with all the other imprecisions of a browser-type environment, are tiny - and that is ignoring that you need something repeatedly presenting the data to that buffer, which in normal usage doesn't happen. In their example they remove all the sources of noise they could before filtering the rest; in the real world there would be all kinds of arbitrary data adding to the noise they are dealing with.

Where it is more useful is in a shared-hosting type environment, if an attacker can identify a process which they can invoke repeatedly from unprivileged code to leak privileged data without tripping security measures. They use passwd as an example: in a multi-user Linux environment you can easily call it without any special access to the machine (though you still need the ability to execute unprivileged code that can access the relevant buffers, hence why things like remotely accessible machines are problematic, as that gives many vectors for potentially exposing them), and by its nature it loads a certain amount of privileged data every time it is invoked in order to do authentication. The problem is that in a properly set-up environment there would be security measures to lock out an attacker if they repeatedly tried to invoke passwd with invalid credentials, but it serves as an example of what happens if an attacker does find a process they can invoke at will which has access to information worth stealing.

Fortunately they can't easily leak information that is passing through a system in the manner of, say, Heartbleed - such as individual HTTPS transactions - because they need the same data to appear repeatedly so as to piece it together by monitoring for patterns. In their example they leaked the encrypted form of a password in approximately 24 hours by repeatedly invoking passwd.
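
As a toy illustration of that "repeat until the pattern emerges" idea (nothing to do with the actual exploit code, just the statistics of picking a repeated value out of noise - assumes bash for $RANDOM):

    # Pretend the real value turns up in roughly 10% of samples and everything else is junk;
    # simple frequency counting recovers it once you have enough samples.
    for i in $(seq 1 10000); do
      if [ $((RANDOM % 10)) -eq 0 ]; then echo "SECRET"; else echo "noise-$RANDOM"; fi
    done | sort | uniq -c | sort -rn | head -1

Scale the hit rate down and the noise up and you get a feel for why this takes hours of repetition against a cooperative target rather than working as a quick drive-by.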

It isn't very useful at all for the kind of malware that targets consumers in a mostly automated fashion.

They do not detail how this could be achieved from within a browser.

Unfortunately there are a few little details like this that are omitted from their description of their testing procedure, which makes it a little confusing sometimes - such as precisely how hardware threads (amongst other factors) are used in the context of their attacks and what the actual range of possible weaknesses there are with hardware threads, and so on. I'm not even 100% sure what they are actually doing with taskset, etc., without more details of their procedures. (I mean, I get the gist of it, but there are factors they don't elaborate on that leave a bit of a wildcard.)

Unfortunately it tends to make the attacks look far easier to achieve to the lay person than they actually are.
 
...how hardware threads (amongst other factors) are used in the context of their attacks and what the actual range of possible weaknesses there are with hardware threads, and so on. I'm not even 100% sure what they are actually doing with taskset, etc., without more details of their procedures. (I mean, I get the gist of it, but there are factors they don't elaborate on that leave a bit of a wildcard.)
The victim and attack processes must be on different logical HT cores on the same physical core. So they need to determine this and use taskset to lock to core affinity. This nugget is somewhat buried in the readme of the proof of concept code:
  • On Linux, you can run 'lscpu -e'. This gives you a list of logical cores and their corresponding physical core. Cores mapping to the same physical core are hyperthreads.
  • On Windows, you can use the coreinfo tool from Windows Sysinternals.
  • Try to pin the tools to a specific CPU core (e.g. with taskset). Also try different cores and core combinations. Leaking values only works if attacker and victim run on the same physical core.
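
If you want to sanity-check the sibling mapping on your own box, the kernel also exposes it through sysfs, and recent kernels let you switch SMT off entirely, which is the blunt mitigation for this class of attack:

    # Shows which logical CPUs share a physical core with cpu0, e.g. "0,4".
    cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list

    # On recent kernels hyperthreading can be turned off at runtime.
    echo off | sudo tee /sys/devices/system/cpu/smt/control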
 
The victim and attack processes must be on different logical HT cores on the same physical core. So they need to determine this and use taskset to lock to core affinity. This nugget is somewhat buried in the readme of the proof of concept code.
  • On Linux, you can run 'lscpu -e'. This gives you a list of logical cores and their corresponding physical core. Cores mapping to the same physical core are hyperthreads.
  • On Windows, you can use the coreinfo tool from Windows Sysinternals.
  • Try to pin the tools to a specific CPU core (e.g. with taskset). Also try different cores and core combinations. Leaking values only works if attacker and victim run on the same physical core.

That is ZombieLoad, yeah? That kind of information is glossed over a bit with RIDL, which is a bit misleading in terms of how potent it is outside of theory.

Same with some of the other factors - they mention:

Since the process of evicting entries from the L1D cache makes extensive use of the LFBs—due to TLB misses as well as filling cache lines—this adds a significant source of noise. We also need to ensure that the TLB entries for our reload buffer are still present after the eviction process, adding another source of noise to our attack.

But they don't really elaborate on the implications of that, how robust their solution actually is in a real-world environment (it isn't), or the requirement to somehow repeatedly force the process to happen so as to filter the target data out of the noise - or on the contrast between their controlled test environment and a real-world environment with a lot more stuff contending for those buffers and caches.
 
That is ZombieLoad, yeah? That kind of information is glossed over a bit with RIDL, which is a bit misleading in terms of how potent it is outside of theory.
Yeah, that text is from ZombieLoad. But RIDL clearly requires the same setup; they mention the same physical-core requirement in their presentation and are using taskset in the same way to ensure the JavaScript engine runs on a specific core.
 