Hi there.
I have just managed to recover lots of files from a 14TB hard drive that I accidentally wiped, but I have ended up with masses of duplicate files all spread over 4 smaller hard drives.
I am using dupeGuru to get rid of all the duplicates, but I am a bit unsure which options to use.
I have set it to scan by standard content, but there are a few extra options that I am not sure about:
* Use regular expressions when filtering
* Partially hash files bigger than X MB
* Ignore duplicates hardlinking to the same file
All three of these options are unticked by default. What does each of these options mean, and is there any reason at all to enable them?