Quick way to delete 240GB of files?

Hi all,
I need to delete 240GB of data from a web server. I've tried the normal rm -rfv /dir/to/delete way, but it's taken a day to delete 1.5GB. The data is made up of loads of small files (< 40KB).

Is there a quicker way to delete them that doesn't involve formatting the partition?

Cheers,
Matt
 
Yeah, it's ext3. To complicate matters, it's a live web server, so shutting it down isn't really an option. Do I just have to grin and bear it?

Matt
 
They are a few hundred miles away unfortunately ;) Plus, I'd probably get into a modicum of trouble.

I like your thinking though.

Matt
 
Can't think of another way of doing it unless it's on its own fs, in which case you could format. Is it having any adverse effect on the system? If not, just run rm and leave it for a few days!
 
a whole day?

It takes me a few minutes on my fileserver, running ext3; granted, it's not thousands of small files, but still.

As suggested, if it's a completely separate fs you could format it.
 
That's an interesting page.

In his example he populates the "virtual" filesystem with dummy files created by a Perl script. In feenster's case he has real files that would need to go there. How would he get them into the image? It seems like using mv would be just as slow as rm.

Or am I reading you wrong, and not seeing that you're suggesting this for the future, to avoid this sort of headache again?
 

Yeah, that example was for creating new files, not existing ones. Thus the edit.

Hmm, well actually, thinking about it: if you're using mv then it's just moving the link from the source directory to the destination directory, providing they are on the same filesystem (inodes etc. stay in the same place). Not sure how this would work with a loopback fs, though.

mv'ing them to the image, then just removing that image, might actually be faster than recursively looping through a dir and rm'ing them. Though you've got to create a 240GB disk image first, so that's gonna take a while itself, heh.

I'm not 100% sure about this, never tried it myself, might be wrong. I'll put the link back in case: http://www.ledscripts.com/blog/2007/11/09/linux-quickly-remove-millions-of-files/
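To be clear, this isn't the method from that link, just my rough sketch of the mv-into-an-image idea. Untested; the paths are made up, the demo uses a small 1GB image rather than 240GB, and the mkfs/mount/mv steps need root (so they're guarded):

```shell
# Rough sketch of the mv-into-a-disk-image idea (untested; demo uses
# a 1GB image -- the real thing would need ~240GB).

# 1. Create a sparse image: near-instant, since no data blocks are
#    actually written yet.
dd if=/dev/zero of=/tmp/junk.img bs=1 count=0 seek=1G

# 2. mkfs/mount/mv need root, and only make sense if the target dir
#    exists, so guard them:
if [ "$(id -u)" -eq 0 ] && [ -d /dir/to/delete ]; then
    mkfs.ext3 -F -q /tmp/junk.img
    mkdir -p /mnt/junk
    mount -o loop /tmp/junk.img /mnt/junk

    # 3. Move the doomed files into the image...
    mv /dir/to/delete/* /mnt/junk/

    # 4. ...then unmount and unlink the whole lot in one go.
    umount /mnt/junk
fi
rm -f /tmp/junk.img
```

One caveat that matches the doubt above: moving files into the loop-mounted fs is always a cross-filesystem move, which mv does as a copy plus delete, so for existing files this probably ends up slower than plain rm.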

Oh and remove the -v switch from your rm command. I/O to stdout/stderr is an expensive operation. That should speed it up somewhat.
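On the same theme of cutting per-file output, here's a quick sketch using GNU find's -delete, which unlinks as it walks the tree and prints nothing at all. Demo runs on a throwaway directory, not the OP's actual path, obviously:

```shell
# Demo on a throwaway directory; on the server the target would be
# /dir/to/delete instead.
demo=$(mktemp -d)
touch "$demo/one" "$demo/two"
mkdir "$demo/sub" && touch "$demo/sub/three"

# find -delete unlinks each entry as it goes and writes nothing to
# stdout, so there's no per-file output overhead:
find "$demo" -delete
```

-delete implies -depth, so directory contents go before the directories themselves, and the top-level dir is removed last.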
 
Oh and remove the -v switch from your rm command. I/O to stdout/stderr is an expensive operation. That should speed it up somewhat.

Missed that first time around, definitely worth doing! Although I suspect the OP has deleted all the files by now, assuming he left rm running!
 