Not the usual 137GB query! (Can't copy a 180GB file)

I'm having trouble copying a 186GB file (yes, that is the right size!) between two external USB drives (both Lacie).

The copy keeps failing after about 140GB (~151,000,000,000 bytes, to avoid confusion over 1024 vs. 1000); Explorer gives a "device is not ready" message (from memory, so that might not be the exact wording).

I wrote a small C program to do the same copy, so that (a) I could try to resume after a failure and (b) I could get more information about the error. It didn't shed a lot of light, but the problem seems to occur when reading from the source drive rather than when writing to the destination, and perror() reports "Permission denied", which I don't entirely believe, seeing as there were no problems for the first 140GB.
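For anyone curious, a minimal sketch of that sort of copier using the CRT low-level I/O routines is below; the paths, the 1MB buffer and the resume offset are made up for illustration and aren't the actual program.

```c
/* Minimal resumable copy sketch using the CRT low-level I/O routines.
   File names, buffer size and the resume offset are illustrative only. */
#include <fcntl.h>
#include <io.h>
#include <stdio.h>
#include <sys/stat.h>

#define BUF_SIZE (1 << 20)              /* 1MB per _read()/_write() call */

int main(void)
{
    const char *src_path = "F:\\bigfile.dat";   /* hypothetical paths */
    const char *dst_path = "G:\\bigfile.dat";
    __int64 resume_at = 0;                      /* set to the last good offset to resume */

    int src = _open(src_path, _O_RDONLY | _O_BINARY);
    int dst = _open(dst_path, _O_WRONLY | _O_CREAT | _O_BINARY, _S_IREAD | _S_IWRITE);
    if (src == -1 || dst == -1) { perror("open"); return 1; }

    /* 64-bit seeks on both handles so a failed run can be picked up where it stopped */
    if (_lseeki64(src, resume_at, SEEK_SET) == -1 ||
        _lseeki64(dst, resume_at, SEEK_SET) == -1) { perror("lseek"); return 1; }

    static char buf[BUF_SIZE];
    __int64 total = resume_at;
    for (;;) {
        int n = _read(src, buf, BUF_SIZE);
        if (n == 0) break;                      /* end of file */
        if (n < 0) {                            /* this is where the failure shows up */
            perror("read");
            fprintf(stderr, "failed at offset %I64d\n", total);
            return 1;
        }
        if (_write(dst, buf, n) != n) { perror("write"); return 1; }
        total += n;
    }

    _close(src);
    _close(dst);
    printf("copied %I64d bytes\n", total);
    return 0;
}
```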

The drives in question are both LaCie; the source is 250GB and the destination is 500GB, and there is ample space on the destination.

I'm running XP/SP2 on a fairly modern machine (two years old or so), so I don't think it should be the 48-bit LBA addressing issue. But the point at which the problems occur is a little suspicious.

So does anyone know about any known issues with accessing very large files?
 
Try zipping the file at the source end and then copying it over. Two reasons for this: (1) it should hopefully make the file smaller, and (2) when you unzip it, the archive's CRC check on the contents will confirm you've got an exact copy.
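If it's mainly the verification you're after, something like the sketch below would do that part without the archive: hash each copy in chunks and compare the results. It uses a simple FNV-1a hash rather than a real CRC, and the path is just a placeholder.

```c
/* Rough verification sketch: hash a file in chunks so the source and destination
   copies can be compared. Uses FNV-1a (not a real CRC); the path is hypothetical. */
#include <stdio.h>

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "F:\\bigfile.dat";
    FILE *f = fopen(path, "rb");
    if (!f) { perror("fopen"); return 1; }

    unsigned __int64 hash = 14695981039346656037ULL;   /* FNV-1a offset basis */
    static unsigned char buf[1 << 20];                 /* 1MB chunks */
    size_t n, i;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0) {
        for (i = 0; i < n; i++) {
            hash ^= buf[i];
            hash *= 1099511628211ULL;                  /* FNV-1a prime */
        }
    }
    if (ferror(f)) { perror("fread"); return 1; }      /* a read failure would show up here too */
    fclose(f);

    printf("%s: %016I64x\n", path, hash);              /* run on both copies and compare */
    return 0;
}
```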
 
Not sure that zipping it will work. Firstly, it may not zip into the remaining space on the drive, and secondly, all the data still has to be read by the host system, so it will likely fail for the same reason the OP's C program fails.
 
Have you tried using SetFilePointerEx to seek to a point past ~140GB? You can get a problem like this when writing if the NTFS block size is set too small, but I've never seen it happen when reading.
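A quick way to test that would be something like the sketch below: open the file through the Win32 API, seek past the suspect offset with SetFilePointerEx, and attempt a read there. The path and the ~150GB offset are placeholders.

```c
/* Quick Win32 test: seek past the suspect offset and attempt a read there.
   The path and the offset are placeholders, not known-good values. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE h = CreateFileA("F:\\bigfile.dat", GENERIC_READ, FILE_SHARE_READ,
                           NULL, OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFile failed, error %lu\n", GetLastError());
        return 1;
    }

    LARGE_INTEGER pos;
    pos.QuadPart = 150000000000LL;          /* ~150GB, past the point where the copy dies */
    if (!SetFilePointerEx(h, pos, NULL, FILE_BEGIN)) {
        fprintf(stderr, "SetFilePointerEx failed, error %lu\n", GetLastError());
        return 1;
    }

    char buf[65536];
    DWORD got = 0;
    if (!ReadFile(h, buf, sizeof buf, &got, NULL)) {
        fprintf(stderr, "ReadFile failed, error %lu\n", GetLastError());  /* the interesting case */
        return 1;
    }

    printf("read %lu bytes at offset %I64d\n", got, pos.QuadPart);
    CloseHandle(h);
    return 0;
}
```

If the read fails at that offset too, the numeric code from GetLastError() should at least be more specific than Explorer's message.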

Although, saying that, I made a ~30GB file a couple of years ago which BSODs Windows 2003 Server every time I try to copy it - it remains on the RAID array to this day. NTFS isn't exactly sane :rolleyes:
 
matja said:
Have you tried using SetFilePointerEx to seek to a point past ~140GB? You can get a problem like this when writing if the NTFS block size is set too small, but I've never seen it happen when reading.
I've been using the "low-level I/O routines" (_open, _lseeki64, etc.); I haven't specifically tried to "jump past" the problem point, but I could try it and see, I guess.
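If it helps, a throwaway test with those same routines might look something like this (path and offset made up): seek straight past the trouble spot with _lseeki64 and attempt a single _read.

```c
/* Quick seek-and-read test with the CRT routines already in use.
   Path and offset are hypothetical. */
#include <fcntl.h>
#include <io.h>
#include <stdio.h>

int main(void)
{
    int fd = _open("F:\\bigfile.dat", _O_RDONLY | _O_BINARY);
    if (fd == -1) { perror("open"); return 1; }

    __int64 where = _lseeki64(fd, 150000000000LL, SEEK_SET);   /* ~150GB, past the failure point */
    if (where == -1) { perror("lseek"); return 1; }

    static char buf[65536];
    int n = _read(fd, buf, sizeof buf);
    if (n < 0) perror("read at 150GB");                        /* does it fail here too? */
    else printf("read %d bytes at offset %I64d\n", n, where);

    _close(fd);
    return 0;
}
```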

To answer the other posters:

I really don't fancy the ZIP/RAR suggestion, based on how I've seen them perform with files of a few GB. Tying the machine up for days isn't high on the priority list. And I think it is still likely to fail as soon as it reads past the 150GB mark.

There is 250GB free on the destination drive, so that should not be the issue. And as I say, the problem very definitely occurs during a call to _read().

matja said:
Although, saying that, I made a ~30GB file a couple of years ago which BSODs Windows 2003 Server every time I try to copy it - it remains on the RAID array to this day. NTFS isn't exactly sane :rolleyes:
Joy... I think we are going to move away from the "one humungous file" approach; seems a little flaky, and solving problems remotely is a complete nightmare.

Thanks anyhow!
 
I'd split it, mate - every 20 gigs, or 10, or even 50 if you're brave ;) WinRAR or another similar program can do it. That way you can reassemble it on the other PC, if you have the time ;)
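For what it's worth, the splitting itself doesn't need WinRAR; a rough sketch of a splitter, assuming made-up file names and a ~20GB part size, might look like this:

```c
/* Rough sketch of splitting a big file into fixed-size parts without WinRAR.
   File names and the 20GB part size are made up for the example. */
#include <fcntl.h>
#include <io.h>
#include <stdio.h>
#include <sys/stat.h>

#define PART_SIZE   (20LL * 1000 * 1000 * 1000)   /* ~20GB per part */
#define BUF_SIZE    (1 << 20)                     /* 1MB per read */

int main(void)
{
    int src = _open("F:\\bigfile.dat", _O_RDONLY | _O_BINARY);
    if (src == -1) { perror("open source"); return 1; }

    static char buf[BUF_SIZE];
    int part = 0;
    int n = 0;

    do {
        char name[64];
        sprintf(name, "G:\\bigfile.part%03d", part++);
        int dst = _open(name, _O_WRONLY | _O_CREAT | _O_BINARY, _S_IREAD | _S_IWRITE);
        if (dst == -1) { perror("open part"); return 1; }

        __int64 written = 0;
        while (written < PART_SIZE && (n = _read(src, buf, BUF_SIZE)) > 0) {
            if (_write(dst, buf, n) != n) { perror("write"); return 1; }
            written += n;
        }
        _close(dst);
        if (n < 0) { perror("read"); return 1; }
    } while (n > 0);        /* stop when the source runs out; an exact multiple of
                               PART_SIZE may leave a final empty part file */

    _close(src);
    return 0;
}
```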

Shr3k
 