In other news, early performance metrics for the EPYC 7452 are in, and man, that's some CPU for the money. Very happy with them.
I was talking with a friend who’s kicking himself about not going EPYC. This situation is going to hit hard I think.
Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
Unless you have a process that requires Intel for legacy reasons (some companies still have applications from the 70s or 80s at the heart of their production systems), or very specific performance reasons, you have to be very careful about going with Intel in an enterprise/professional scenario at the moment, in terms of accountability. I'm certainly glad I don't work in that environment any more.
Please stop.
I don't think there are really many apps at all that you can't get to work with some investment in time or a re-compile.
The problem is that investing time usually means investing money. Plenty of older systems are still in play in retail and manufacturing environments that don't play nice on anything but Intel (often with lots of hand optimisations, and originally written in COBOL, etc.). Over time, massive amounts of time and money are spent patching them up where they are falling apart, because no one exists any more who understands the systems well enough to do a simple port to a more modern platform, and no one wants to be the person signing off on the resources needed to recreate the system from scratch on a modern platform. We've been slowly pulling ourselves out of this scenario at work over the last few years, and that is with management willing to embrace modern digital approaches and fund some of these update projects.
I'd agree it is a diminishing situation these days but it isn't rare yet.
I've got a similar situation with an accounting system that has a custom HTML front end, written by a contractor who sadly took her own life. It works alongside a couple of compiled DLLs that are effectively a makeshift API, with no documentation for either the DLLs or the front end. The app it hangs off has had many iterations since and is very different today; the DB schemas are miles apart and barely similar. It also relies heavily on an SQL interface to our case management tool, which feeds the accounting system data overnight. We have gone through three devs (apparent experts on the software), probably half a million in sunk costs, and three failed project attempts trying to migrate it. We got close once, or at least we thought we did, until I found a number of glaring oversights that ended up being show stoppers.
We have started a new project now and are going to replace and rewrite everything from the ground up. Really this is the route we should have taken a long time back, but you live and learn.
We (not my current employer) had a similar situation, if I'm reading that right, and the solution for many years was to simply print the information off from one system and have someone (namely me) re-enter it on the other. I actually wrote the proof of concept that connected the two databases together and did myself out of a job :s
I had the advantage there, though, of having used the system for many years and pretty much knowing the pattern of how it worked in my head. It's very hard for anyone to come in and understand the system at the same level with no proper documentation, missing source code, etc.
The whole point of the bug is that sandboxes and VMs don't protect you from RIDL, because the hardware allows you to bypass them. The white paper states that they were able to complete the attack from SpiderMonkey. So this is just another attack. You should be banned m8, when will your harassment stop?
Web browsers are capable of running JavaScript outside the sandbox, with the privileges necessary to, for example, create and delete files.
Microsoft Windows allows JavaScript source files on a computer's hard drive to be launched as general-purpose, non-sandboxed programs.
Depends how it's set up, but as it's stated in the white paper, it does not matter. You know this because you have read the white paper by now. This is a poorly contrived argument.
Umm, the sandbox could stop the code from running on the PC in the first place. Why should I be banned? I am not harassing you; I am taking part in a debate.
I have never said you cannot run javascript locally, neither has anyone else said that.
Sorry, this is simply not true. There are two proof of concept demos for two different exploits; neither is run within Firefox, or any browser.

They were able to run the code from SpiderMonkey within Firefox... They cover methods for running the required commands for the attack within the browser... I believe there is a video showing them stealing your browsing history.
I think we'll have to agree to disagree on this one; it helps no one for us to keep going round in circles. If there is a proof of concept from within a modern web browser I will look at it, but until then my view is what it is.
They do not detail how this could be achieved from within a browser.
The victim and attack processes must be on different logical HT cores on the same physical core. So they need to determine this and use taskset to lock to core affinity. This nugget is somewhat buried in the readme of the proof of concept code.

...how hardware threads (amongst other factors) are used in the context of their attacks, and what the actual range of possible weaknesses with hardware threads is, and so on. I'm not even 100% sure what they are actually doing with taskset, etc. without more details of their procedures. (I mean, I get the gist of it, but there are factors they don't elaborate on that leave a bit of a wildcard.)
The victim and attack processes must be on different logical HT cores on the same physical core. So they need to determine this and use taskset to lock to core affinity. This nugget is somewhat buried in the readme of the proof of concept code.
- On Linux, you can run 'lscpu -e'. This gives you a list of logical cores and their corresponding physical core. Cores mapping to the same physical core are hyperthreads.
- On Windows, you can use the coreinfo tool from Windows Sysinternals.
- Try to pin the tools to a specific CPU core (e.g. with taskset). Also try different cores and core combinations. Leaking values only works if attacker and victim run on the same physical core.
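The steps above can also be scripted rather than done by hand. A minimal, Linux-only Python sketch (the `/sys` topology path is standard sysfs, but the choice of CPU 0 is an assumption for illustration):

```python
import os

def sibling_threads(cpu: int) -> set[int]:
    """Return the logical CPUs sharing this CPU's physical core (its hyperthreads)."""
    path = f"/sys/devices/system/cpu/cpu{cpu}/topology/thread_siblings_list"
    with open(path) as f:
        text = f.read().strip()  # e.g. "0,4" or "0-1"
    cpus: set[int] = set()
    for part in text.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus

# Pin this process to logical CPU 0, like `taskset -c 0 <cmd>` would.
os.sched_setaffinity(0, {0})
print(os.sched_getaffinity(0), sibling_threads(0))
```

Running the attacker under this affinity is equivalent to launching it with `taskset -c 0`; a victim pinned to another CPU in `sibling_threads(0)` would then sit on the other hyperthread of the same physical core, which is the setup the readme describes.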
Since the process of evicting entries from the L1D cache makes extensive use of the LFBs—due to TLB misses as well as filling cache lines—this adds a significant source of noise. We also need to ensure that the TLB entries for our reload buffer are still present after the eviction process, adding another source of noise to our attack.
That is ZombieLoad, yeah? That kind of information is glossed over a bit with RIDL, which is a bit misleading in terms of how potent it is outside of theory.

Yeah, that text is from ZombieLoad. But RIDL clearly requires the same setup; they mention the same physical-core requirement in their presentation and are using taskset in the same way to ensure the JavaScript engine runs on a specific core.