Global BSOD

Yeah, still needs some poor person to go around and do it. If they have hundreds of systems with this issue it's going to take some time.
Network boot with some sort of PXE image with the fix should be possible for some, I'd imagine, but more remote systems without any out-of-band access are going to be a nightmare.
 
I can't believe GD has such a resource pool available to fix this problem (and, not always separately, anything to do with masturbating the little fella off to guns and ammo) and these companies aren't hitting you guys up to fix all of this!

What short-sightedness! What a disaster!
Unlike all the "experts" on here when it comes to missing people, airplane crashes, submarine implosions, etc., I suspect there are actually a lot of IT experts on this forum who have been involved in similar issues. For example, I've spent 37 years in this industry, a lot of which has been scraping dead systems off the floor in major incidents. I certainly won't be alone in that respect.

If my company was affected by this, which thankfully it isn't, then I would be getting hit up to fix it.
 
Small fish, big pool, and no one, NO ONE, on this forum is getting hit up to do anything important other than present opinion as fact because they run some car boot sale's EFTPOS system.
 
It's always funny how companies just assume IT runs itself and that they don't really need IT staff. Then something like this happens and they become desperate, but because they outsourced everything they have to get in line :D
 
Yes. Simply renaming the Crowdstrike folder (or deleting the file as posted) and then rebooting the server fixes it. We have had about 20 of 200 servers affected.
I think the reason Microsoft was mentioned specifically is because it isn't affecting Linux servers with Crowdstrike agents installed, only Windows servers.

Where is the folder located?
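
The widely reported location was C:\Windows\System32\drivers\CrowdStrike, with the faulty channel file(s) matching C-00000291*.sys. Here's a minimal sketch of the deletion step in Python, assuming the affected volume is reachable (booted into Safe Mode, or the disk attached to a working host); the drive letter is a placeholder:

# Minimal sketch: remove the bad CrowdStrike channel file(s).
# Assumes the affected Windows volume is mounted and accessible,
# e.g. from Safe Mode or a rescue host.
import glob
import os

DRIVER_DIR = r"C:\Windows\System32\drivers\CrowdStrike"  # adjust drive letter if mounted elsewhere
PATTERN = "C-00000291*.sys"  # widely reported faulty channel-file pattern

for path in glob.glob(os.path.join(DRIVER_DIR, PATTERN)):
    print(f"removing {path}")
    os.remove(path)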
 
It's always funny how companies just assume IT runs itself and that they don't really need IT staff. Then something like this happens and they become desperate :D

Haha, this is true - I also think it's times like this that firms really notice how poor the service is when they've outsourced their IT support.
Suddenly a 3rd party helpdesk team is having to support multiple clients with the same thing and just gets utterly swamped.
 
I can't believe GD has such a resource pool available to fix this problem (and, not always separately, anything to do with masturbating the little fella off to guns and ammo) and these companies aren't hitting you guys up to fix all of this!

What short-sightedness! What a disaster!

AI took all our jobs, that's why.
 
we've been slowly recovering the boxes manually..

detach disk
attach to another instance that is running
remove file
re-attach
reboot

back online..

it's a Friday special for sure.
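
If anyone wants to script that loop at scale, here's a rough sketch assuming AWS EC2 and boto3; the instance/volume IDs and device names are placeholders, and the actual file deletion still happens by hand on the rescue host:

# Sketch of the volume-shuffle recovery above, assuming AWS EC2 + boto3.
# All IDs and device names below are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2")

BROKEN_INSTANCE = "i-0123456789abcdef0"  # placeholder: the BSOD'd box
RESCUE_INSTANCE = "i-0fedcba9876543210"  # placeholder: a healthy instance
VOLUME = "vol-0123456789abcdef0"         # placeholder: the broken box's root volume

# Stop the broken instance and detach its root volume.
ec2.stop_instances(InstanceIds=[BROKEN_INSTANCE])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[BROKEN_INSTANCE])
ec2.detach_volume(VolumeId=VOLUME)
ec2.get_waiter("volume_available").wait(VolumeIds=[VOLUME])

# Attach it to the rescue instance as a secondary disk; delete the
# C-00000291*.sys channel file(s) on that host, then detach again.
ec2.attach_volume(VolumeId=VOLUME, InstanceId=RESCUE_INSTANCE, Device="/dev/sdf")
input("Remove the channel file on the rescue host, then press Enter...")
ec2.detach_volume(VolumeId=VOLUME)
ec2.get_waiter("volume_available").wait(VolumeIds=[VOLUME])

# Re-attach to the original instance and boot it back up.
ec2.attach_volume(VolumeId=VOLUME, InstanceId=BROKEN_INSTANCE, Device="/dev/sda1")
ec2.get_waiter("volume_in_use").wait(VolumeIds=[VOLUME])
ec2.start_instances(InstanceIds=[BROKEN_INSTANCE])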
 