Expected NAS HDD life under 24/7 operation

I got my first NAS, a Synology DS215j, back in Oct 2015 and installed 2x new 3TB WD Reds.

It's worked beautifully for 2.5 years... until this month!

On the 1st, the monthly disk health check reported a bad sector on Disk 1. I performed an RMA on the drive and received a replacement yesterday. I installed it and had the NAS repair the RAID 1 configuration - all was well... or so I thought.

Then last night, the system reported issues with Disk 2, losing connection to the disk several times and then finally failing to recognise it at all. I've tried re-seating the drive and also hooking it up to my desktop - the drive spins up but is not detected by either the NAS or the desktop.

This seems crazy - I've just set up a new RMA for Disk 2. I'm wondering if I've done something wrong here - I was very careful when removing and replacing Disk 1.

Anyway, the plan now is to replace Disk 2, repair the RAID 1 volume, and then I hope that will be it.

However, I was expecting these WD Red drives to be good for around 4-5 years of 24/7 use (the warranty is 3 years), not to fail after 2.5 years! My NAS is well ventilated and dust free, and although it's on 24/7, it's not used that heavily - occasional media streaming and file storage.

Also, annoyingly, I can't wipe the HDD I am sending back as I did with the first, because it isn't being detected - will WD wipe the drive somehow?
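For a drive that is still detected, a zero-fill before RMA is straightforward. A minimal sketch, assuming a Linux box; the real target would be a device node like /dev/sdX (hypothetical), so a scratch file stands in here and the commands are safe to run as-is:

```shell
# Zero-fill a drive before sending it back for RMA.
# NOTE: /tmp/fake_drive.img is a stand-in for the real device node
# (e.g. /dev/sdX -- hypothetical); point "dev" at the drive at your own risk.
dev=/tmp/fake_drive.img
truncate -s 8M "$dev"                                            # create the stand-in
dd if=/dev/zero of="$dev" bs=1M count=8 conv=fsync status=none   # overwrite with zeroes
cmp -n $((8*1024*1024)) "$dev" /dev/zero && echo "wiped"         # confirm all-zero
```

On a real multi-TB drive you would drop `count=` and let `dd` run to the end of the device; a single zero pass is generally considered sufficient for a consumer RMA return.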

What are others' experiences?
 
In my NAS, the usual operating temp of Disk 1 is 27-28°C and Disk 2 29-31°C. I always assumed the notable difference in operating temp was down to the design of the NAS enclosure. However, I understand that 31°C is still perfectly fine for an HDD.

Unfortunately, the failed drive is not recognised by my PC either, despite spinning up. Maybe it's a problem with the controller board on the HDD.

Anyway, it just seemed like too much of a coincidence that Disk 2 would fail on the same day I replaced Disk 1... but Disk 2 is now truly dead. I really can't think it was anything I did, though, as I was very careful.

As you say, aside from the increased operating temp of Disk 2, it would have been subjected to the same reads/writes as Disk 1 in RAID 1. Also, looking at the serial numbers, the original pair were from the same batch - same dates too.

This has shaken my confidence in the NAS somewhat, which is a shame as it's been one piece of technology that I have been truly delighted with.
 
The default setting in Synology DSM is to spin down the drives when not in use, but I found that they would spin up for one process or another several times a day, so I changed the setting to always on for this very reason - they are NAS drives after all. I changed this a few days after setting up the NAS back in 2015, so it should not have been an issue with these disks.
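Whether spin-down is actually adding wear shows up in the SMART counters. A minimal sketch of pulling the relevant attributes out of `smartctl -A`-style output; the sample lines and values inlined here are hypothetical, and on a live system you would pipe `smartctl -A /dev/sdX` in instead:

```shell
# Extract the wear-relevant SMART counters from smartctl-style attribute lines.
# Sample output inlined below (hypothetical values); on a real system use:
#   smartctl -A /dev/sdX | awk '...'
smart_output='  9 Power_On_Hours   0x0032 070 070 000 Old_age Always - 21900
193 Load_Cycle_Count 0x0032 199 199 000 Old_age Always - 4123'
echo "$smart_output" | awk '/Power_On_Hours|Load_Cycle_Count/ {print $2 "=" $NF}'
```

Roughly 21,900 power-on hours corresponds to 2.5 years of 24/7 running, and a Load_Cycle_Count that stays low over that period would confirm the always-on setting took effect.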

Very good observation though.
 
Remember: RAID isn't a backup.

Indeed, I have my NAS backed up to the 'cloud'. At the moment my NAS is running with just the one disk until the RMA replacement arrives; then I will rebuild the RAID 1 volume and hopefully all will be back to normal.
 