Man destroys his business with 1 line of code

To be fair, 80% of those smaller hosters are people with a moderate amount of experience doing the best they can on the side - I spent a fair amount of time doing technical support for one of the GSPs, which is a fairly similar environment.

I've seen the big boys make a few cockups too - not housekeeping log files and running out of inodes on HP-UX systems, for example, or hard-coding usernames and passwords...
 
hard-coding usernames and passwords...

I've not gone that far, but I've passed usernames and passwords on the command line before, forgetting that in *nix they'll show up in ps - fortunately in that case every user was completely virtualised and couldn't see other users' processes.

EDIT: Slightly amusing story - I regaled a sysadmin with that one once, after someone here pointed out the mistake I'd made; they very hurriedly left the room when the implications sank in...
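
For illustration (the commands and names here are placeholders, not anything from the post) - arguments show up in the process list for anyone on the box to read, which is why prompting or reading from stdin is the safer habit:

ps -ef | grep mysql
# ...shows e.g. "mysql -u admin -psecret" - the password sits right there in the argument list
mysql -u admin -p
# -p with no value makes the client prompt interactively, so nothing sensitive hits the command line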
 
Yup - my personal home backup system is a bit primitive but:

-QNAP TS-419 NAS
-2 disc mirrored RAID internally (for uptime convenience if one fails) with realtime replication to a regular NTFS USB drive plugged in the back.
-Certain folders sync'd to online cloud storage
-Front USB copy port set up to synchronise critical data to whatever is plugged into it when the button is pressed - I rotate between 3 external USB drives so as to have an offline copy.

You are a good person and should feel good.
 
I just have a single disk in an old Dell server which has compressed backup data on it. No offline copy and no RAID... I'm a bad person and should feel bad. :p
 
Unless he's using a custom version of rm or a version older than 2006, shenanigans.

Alongside the lack of clients complaining.

(--preserve-root is rm's friend.)
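
If memory serves, GNU rm has refused a bare recursive delete of "/" by default since 2006, which is what the "older than 2006" point is getting at. A rough illustration (not from the article):

rm -rf /                      # refused: --preserve-root is the default on modern GNU rm
rm -rf --no-preserve-root /   # the override you'd have to type out deliberately
# Caveat: it only guards "/" itself - "rm -rf /*" or "rm -rf /home" are not protected.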
 
Foo and Bar are traditional names used for generic variables in coding examples. And Fubar (****ed Up Beyond All Recognition) is a military term that long predates that movie (which I've never seen).

Why the hell haven't you seen that film? :eek::eek:
 
I reckon it's a troll, but to be fair, pretty much everyone that's done sysadmin of any type has made a reasonably serious ****-up at some point. I don't believe you'd run something like this on all systems without first trying it out on a dev box and then a bunch of UAT ones, though.

I don't do much unix admin these days but I've always found it much safer to use something like:

find /path/to/"$foo"/"$bar" -exec rm {} \;

Especially as you can check properly for sane variable(s) beforehand, and in testing/debugging you can comment out the exec bit and see exactly what you're going to be deleting. If typing in the console, it saves that accidental hitting of return causing grief, too. :D
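
A minimal sketch of that workflow, with placeholder paths and values (not anyone's real script):

foo="example_customer"    # placeholder values for illustration
bar="tmp"
[ -n "$foo" ] && [ -n "$bar" ] || { echo "empty variable - refusing to run" >&2; exit 1; }
find /path/to/"$foo"/"$bar" -print                # dry run: list exactly what would go
# find /path/to/"$foo"/"$bar" -exec rm {} \;      # re-enable once the listing looks sane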

I think I'd have cloned the backup server disks and tried running a recovery program on those afterwards, seeing as only the directory entries would have been removed rather than the data itself?

I've managed to get a wiped system back once after a similar mishap, albeit with a broken database. A damn right-click (paste) in a root terminal with part of a script in the clipboard. Also pasted some router config in by mistake once, which changed the server hostname and messed up the application running on the server, amongst other things. :o
 
Coding 101: check your vars are valid before passing them into a command or function.
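
In shell terms that check might look something like this - a sketch with a placeholder path, not anyone's actual script:

set -u                                            # abort on any unset variable
foo="${foo:?foo is empty - refusing to continue}"
bar="${bar:?bar is empty - refusing to continue}"
rm -rf -- "/path/to/$foo/$bar"                    # only reached once both checks pass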
 
Typically you'd need to do "sudo rm -rf /". If the two variables are empty then the path passed to rm will be "/". No code reviews?
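
To make that concrete (safe to run - it only echoes the command it would have built):

foo=""
bar=""
echo rm -rf "$foo/$bar"    # prints: rm -rf /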

Another point I was going to make: he would be doing all this as root, which is scary.

Every script we have written that deletes data requires an extra step beforehand, like "touch destroy": the script checks that the file was created in the last 10 seconds, deletes it, and then gives a user prompt to make doubly sure they know what they're doing, so mistakes can't really be made.
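
A sketch of that kind of interlock, assuming GNU stat for the timestamp check - the "destroy" marker and 10-second window are from the post, the rest is illustrative:

marker="destroy"
# Refuse to run unless the marker exists and was touched within the last 10 seconds.
if [ ! -f "$marker" ] || [ $(( $(date +%s) - $(stat -c %Y "$marker") )) -gt 10 ]; then
    echo "Run 'touch $marker' immediately before this script." >&2
    exit 1
fi
rm -f -- "$marker"
# Final confirmation before anything destructive happens.
read -r -p "Really delete the data? Type YES to continue: " answer
[ "$answer" = "YES" ] || exit 1
# ...the destructive steps would go here...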
 
I just deal in SQL - much safer. The worst I can do is drop a live DB, and if there's no backup then that's the other guy's fault :D
 
I just deal in SQL - much safer. The worst I can do is drop a live DB, and if there's no backup then that's the other guy's fault :D

Incorrect. The worst thing you can do is insert invalid/corrupt data and not realise until your backup cycle has expired (day/week/month/year), then find there's no way to tell what is real and what is not, except perhaps three weeks with regular expressions and your Apache logs.

Worried about downtime? Amateurs... ;)
 
Surprised there would be no safeguard in place to stop you doing this. Maybe there will be now :D

It's difficult to safeguard against people being stupid

He should have had versioned off-site backups, or at least a setup that doesn't mount the entire backup on the live servers! :rolleyes:

Sounded like a terrible setup that was destined to die in the end through accident, attack or otherwise.

Even my personal data/media has a better setup :p
 