Global BSOD

Where?

And why would the updated file be full of zeros rather than the erroneous one?
From Crowdstrike (if you can believe them):

Channel File 291

CrowdStrike has corrected the logic error by updating the content in Channel File 291. No additional changes to Channel File 291 beyond the updated logic will be deployed. Falcon is still evaluating and protecting against the abuse of named pipes.

This is not related to null bytes contained within Channel File 291 or any other Channel File.

Also here:

Some people report that the files responsible for the CrowdStrike crashes (Eg. C-00000291-00000000-00000032.sys) are full of zeroes. This is not the case for any of the machines I fixed by hand today. One example is ad492bc8b884f9c9a5ce0c96087e722a2732cdb31612e092cdbf4a9555b44362 (@virustotal).
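
For anyone who wants to check which variant of the channel file they actually have, a quick check is to hash it and see whether it is entirely null bytes. A minimal sketch in Python, assuming the commonly reported default channel file path (not verified here):

```python
# Sketch: hash a channel file and check whether it is entirely null bytes,
# so it can be compared against hashes posted publicly (e.g. on VirusTotal).
# The path below is the commonly reported default install location.
import hashlib
from pathlib import Path

CHANNEL_FILE = Path(r"C:\Windows\System32\drivers\CrowdStrike\C-00000291-00000000-00000032.sys")

def inspect_channel_file(path: Path = CHANNEL_FILE) -> None:
    data = path.read_bytes()
    sha256 = hashlib.sha256(data).hexdigest()
    all_zero = all(b == 0 for b in data)
    print(f"{path.name}: {len(data)} bytes, sha256={sha256}, all-zero={all_zero}")

if __name__ == "__main__":
    inspect_channel_file()
```

The SHA-256 it prints can then be compared against hashes posted publicly, like the one quoted above.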
 
From Crowdstrike (if you can believe them):

Channel File 291

CrowdStrike has corrected the logic error by updating the content in Channel File 291. No additional changes to Channel File 291 beyond the updated logic will be deployed. Falcon is still evaluating and protecting against the abuse of named pipes.

This is not related to null bytes contained within Channel File 291 or any other Channel File.

Thanks, that's odd. I don't think that implies the updated file is full of zeros, though - just that the zero-filled files people posted weren't the cause of the crash, even though deleting them apparently did fix it.
 
Any ransomware developer would be envious of how destructive this was! Not only that, but the victims had to pay for it in the first place, thanks to regulatory requirements!

There is a damn good reason that QA is so vitally important - any competent QA team would have caught that well ahead of time... if they even have a QA team.
 
Crazy how each affected PC has to be booted into safe mode and the offending update removed by hand - not a quick fix in places with more than just a few PCs.
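
For reference, the widely circulated workaround boiled down to exactly that: boot into Safe Mode or the recovery environment and delete the bad channel file(s). A minimal sketch of the deletion step in Python, assuming the default CrowdStrike install path and admin rights - in practice most people just used del from a recovery command prompt, since the machines wouldn't boot far enough to run anything else:

```python
# Rough sketch of the manual workaround: delete the bad channel file(s)
# so Windows can boot normally again. Assumes the default install path
# and must be run as Administrator from Safe Mode (or the recovery environment).
from pathlib import Path

CHANNEL_DIR = Path(r"C:\Windows\System32\drivers\CrowdStrike")

def remove_bad_channel_files(directory: Path = CHANNEL_DIR) -> None:
    # The faulty update shipped as files matching C-00000291*.sys.
    for sys_file in directory.glob("C-00000291*.sys"):
        print(f"deleting {sys_file}")
        sys_file.unlink()

if __name__ == "__main__":
    remove_bad_channel_files()
```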
 
Crazy how each affected PC has to be booted into safe mode and the offending update removed by hand - not a quick fix in places with more than just a few PCs.

I've had it before at work (I don't do IT). They centrally rolled out an update with a bad driver last thing in the evening, and I was the only person left on site with any clue, so I spent my night shift manually rebooting each system 3 times to force it into recovery mode, going through an [authorised] convoluted process to defeat the security measures (LOL), and manually deleting the offending driver so the OS would roll back to the previous one. They wouldn't have been able to get someone out in time to stop it impacting daytime operations, the first bit couldn't be done remotely because the systems were BSODing on start-up, and anyone who could deploy a fixed/newer driver had gone home for the day.

It was a funny phone call - they were panicking because it would have been way beyond someone without IT experience, and I was just like "I've got this".

EDIT: IIRC the problem was that the hardware the driver was for had, for a short period, used a substitute chipset. The update had been tested against the normal version of the hardware but not against the version with the substitute chip.
 
As far as I'm aware, everything was resolved over the weekend where I work - most of the impact was down to service issues with Microsoft 365/SharePoint, which were resolved within hours, plus a couple of external HR and payroll systems that were hit by CrowdStrike issues on the provider's end.
 
Any ransomware developer would be envious of how destructive this was!

Disruptive rather than destructive, but anyone interested in causing mass disruption would be looking at this and thinking: all we need to do is force PCs into recovery mode.

Imagine if this happened to all Windows PCs, not just those with CrowdStrike installed, the disruption would be far far worse :)
 
Dave's Garage on YouTube has a good technical summary of CrowdStrikeGate (Dave is an ex-Microsoft Windows dev).
This one, for those interested.

Watch as companies download data wipers and malware trying to bring their systems back up.
 
How do they not have a first-deploy PC to check any updates before they are sent OUT?!?
Probably the usual... "this feature needs to go in now", and the delivery manager ticks a box to say it's done.

It usually ends up rushed, the usual checks get bypassed, and bam - you've got the perfect storm and the "works my local tho" response.

I've seen it too many times :)
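
The usual guard against this is a staged (canary) rollout: push the update to a small test group first, check those machines are still healthy, and only then release it to the rest of the fleet. A minimal sketch of that gate in Python - the hostnames, deploy and health-check functions are hypothetical placeholders, not anything CrowdStrike actually uses:

```python
# Sketch of a staged (canary) rollout gate. All hostnames and helpers below
# are hypothetical placeholders - the point is just the shape of the check:
# update a small group first, verify it survives, then go wide.
import time

CANARY_HOSTS = ["canary-01", "canary-02"]                 # small internal test group
FLEET_HOSTS = [f"prod-{i:03d}" for i in range(1, 500)]    # everyone else

def deploy_update(host: str, update_id: str) -> None:
    # Placeholder: push the content update to one host.
    print(f"deploying {update_id} to {host}")

def host_is_healthy(host: str) -> bool:
    # Placeholder: e.g. the host still responds and hasn't blue-screened.
    return True

def staged_rollout(update_id: str, soak_minutes: int = 30) -> None:
    # Stage 1: update the canary group only.
    for host in CANARY_HOSTS:
        deploy_update(host, update_id)

    # Let it soak before judging the result.
    time.sleep(soak_minutes * 60)

    if not all(host_is_healthy(h) for h in CANARY_HOSTS):
        raise RuntimeError(f"{update_id} broke the canary group - halting rollout")

    # Stage 2: only reached if every canary survived.
    for host in FLEET_HOSTS:
        deploy_update(host, update_id)

if __name__ == "__main__":
    staged_rollout("C-00000291-00000000-00000032", soak_minutes=0)
```

The point is the ordering: the wide deployment is only reachable if the canary group survives the soak period.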
 