NAS drives. Which kind of hard drives do you actually need...

I have a 2-bay Synology NAS unit and I'm just looking to upgrade my two 2TB drives.

I access my NAS daily. I have it split into two volumes: one for important data and one for movies, which wouldn't be the end of the world if it was lost. I use RAID 1 for redundancy and also have an external drive to back up my important data volume.

I'm looking at the prices of 4TB drives, weighing up the Blue and Red variants; Red is supposed to be better suited to a NAS, right? Maybe not.

I've had my Western Digital Green 2TB drives for around 5 years and I was checking the Load Cycle Count. The Green drives, as I understand it, are rated for 300,000 cycles. After 5 years I'm only up to 42,000 cycles.
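Quick back-of-envelope on that, using the numbers above (a rough sketch, nothing more):

```python
# Rough projection: how long until the drive reaches its rated load/unload cycles?
years_in_service = 5
cycles_so_far = 42_000
rated_cycles = 300_000  # the rating mentioned above

cycles_per_year = cycles_so_far / years_in_service
years_to_rating = rated_cycles / cycles_per_year
print(f"{cycles_per_year:.0f} cycles/year -> ~{years_to_rating:.0f} years to reach the rating")
# 8400 cycles/year -> ~36 years to reach the rating
```

At that rate the mechanical rating is never going to be the thing that kills the drive.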

So I don't think I really need a Red drive for my usage and will get two 4TB Blue drives for around £170. Then in a few years it's another cheap upgrade once 8TB drives have come down a bit.

Just sharing my experience of how the budget drives have served my needs well - I'm nowhere near the cycle rating for the drive. With a mirrored RAID and an external backup, I think I should be good, right?
 
Load Cycle Count isn't the primary issue with NAS drives - that's just a byproduct of drives going into power saving and parking their heads (which the Greens used to do). Reds are designed to run 24/7 and shouldn't be parking heads anyway.

The main issue with NAS vs non-NAS drives is how they handle errors when used in a RAID configuration. Non-RAID-specified drives will try to re-read e.g. a bad sector hundreds of times without failing the drive (usually causing the array/NAS to become unresponsive until it succeeds, but also potentially causing further damage to the RAID array - e.g. by mirroring the bad data). A RAID-specified drive will report an error after a short predetermined amount of time, allowing the NAS to decide how to deal with the problem (e.g. by marking the drive as failed and removing it from the array).
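If you want to check whether a given drive exposes that short-timeout behaviour (SCT Error Recovery Control, aka TLER), smartmontools can report it - a minimal sketch, assuming smartctl is installed and /dev/sda is the right device on your box:

```python
# Query a drive's SCT Error Recovery Control (ERC/TLER) timers via smartctl.
# NAS/RAID drives typically report a short timeout (e.g. 7.0 seconds);
# desktop drives often report it as disabled or unsupported.
import subprocess

def read_erc(device):
    """Return smartctl's SCT ERC report for `device` (e.g. '/dev/sda')."""
    result = subprocess.run(
        ["smartctl", "-l", "scterc", device],
        capture_output=True, text=True,
    )
    return result.stdout

print(read_erc("/dev/sda"))  # adjust the device path for your system
```

On drives that support it you can also set the timers (e.g. smartctl -l scterc,70,70 /dev/sda for a 7 second read/write limit), though the setting typically resets on power cycle.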

Whilst you can run non-NAS drives in an array (indeed I have run 8x mixed-brand desktop 1TB drives in RAID 6 on a Synology 1812, and 24x 2.5" SATA laptop drives in RAID 10 on an HP Smart Array plus disk shelf), they only work fine until you encounter an error, and at that point you are putting any data stored on them at risk. (On my Synology this became apparent when I started swapping disks for larger-capacity ones: one drive had partially failed and was audibly spending ages re-seeking, so the rebuild took far longer than expected, yet it had not been failed out of the array. Changing that drive out reduced the rebuild times for the remaining disks from around 4 days to ~8 hours.)
 
Not that I recommend it, but I stuck a bunch of regular Seagate Barracudas I had lying around into my NAS - 5.7 years of power-on time (pretty much constant) and 32K load cycles. One of the drives flagged up 2x reallocated sectors a couple of days ago - the first issue I've had - so I'll be swapping that one out soon.

NAS drives are pretty much guaranteed to have minimal issues, while the behaviour of some non-NAS drives is only more or less compatible, with the potential for reduced lifespan, etc. I can't remember if it was the WD Green or Blue eco model or something, but my brother used some of those and it turned out they behaved in a way that was incompatible with NAS-type use - they were dying after just a few months :s

For the main setup I use 2x drives in RAID internally and do real-time replication to an external USB drive (as well as monthly snapshots to a couple more external drives, round-robined via the copy port). Real-time replication slightly lowers peak transfer rates, but nothing significant, and it potentially makes recovery easier.
 
Good points.

My NAS would only be accessed a handful of times a day, so spin-down would probably be beneficial from a power point of view.

Do Reds spin down?
 
The main issue with NAS vs non-NAS drives is how they handle errors when used in a RAID configuration. Non-RAID-specified drives will try to re-read e.g. a bad sector hundreds of times without failing the drive (usually causing the array/NAS to become unresponsive until it succeeds, but also potentially causing further damage to the RAID array - e.g. by mirroring the bad data). A RAID-specified drive will report an error after a short predetermined amount of time, allowing the NAS to decide how to deal with the problem (e.g. by marking the drive as failed and removing it from the array).

When using a software RAID like Synology's, is this a workaround for what you have said?

Desktop/Enterprise Class Drives

Of note, since DSM 2.2, released in September 2009, Synology have included their own background sector-recovery subroutine, called Dynamic Bad Sector Recovery, to "further enhance the system reliability" with hard drives. This function basically operates in conjunction with desktop-class or enterprise-class drives (DCDs or ECDs) to help maintain availability of a volume when defective sectors are encountered on the hard drives.
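Synology don't publish the internals, but the general idea of mirror-assisted sector recovery can be sketched like this (conceptual only - hypothetical disk objects, not DSM's actual code):

```python
# Conceptual sketch of mirror-assisted bad-sector recovery (NOT Synology's
# actual implementation): on a failed read, fetch the good copy from the
# mirror and rewrite the bad sector, prompting the drive to remap it.

class ReadError(Exception):
    """Raised by our hypothetical disk objects when a sector read fails."""

def read_with_recovery(primary, mirror, sector):
    try:
        return primary.read(sector)
    except ReadError:
        data = mirror.read(sector)    # good copy from the other half of the mirror
        primary.write(sector, data)   # the rewrite triggers sector reallocation
        return data
```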
 
Thinking about NAS drives too; WD Blue are about £30 less than WD Red.
As for the WD Green issue, here's some SMART info from drives in NAS use:

WD Green: 193 Load_Cycle_Count 0x0032 140 140 000 Old_age Always - 181753
Toshiba DT01ABA300: 193 Load_Cycle_Count 0x0012 098 098 000 Old_age Always - 2993

Hitachi: 193 Load_Cycle_Count 100 100 000 1009 OK
Toshiba: 193 Load_Cycle_Count 100 100 000 746 OK

So from that, the WD Greens in NAS use are racking up a massive Load Cycle Count.

What's the limit on WD Green LCC?
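For anyone wanting to pull the same figures off their own drives, those attribute lines come from smartctl -A; a minimal sketch (the device path is an assumption, adjust to suit):

```python
# Extract the raw Load_Cycle_Count from `smartctl -A` output.
import subprocess

def load_cycle_count(device):
    out = subprocess.run(
        ["smartctl", "-A", device],
        capture_output=True, text=True,
    ).stdout
    for line in out.splitlines():
        if "Load_Cycle_Count" in line:
            return int(line.split()[-1])  # raw value is the last column
    return None  # drive doesn't report the attribute

print(load_cycle_count("/dev/sda"))
```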
 
Good points.

My NAS would only be accessed a handful of times a day, so spin-down would probably be beneficial from a power point of view.

Do Reds spin down?
Reds, and any other drives, can still spin down to save power via Synology's HDD hibernation function (so e.g. after 30 minutes of inactivity they spin down). The specific issue with early WD Greens is that they were way too aggressive about parking their heads themselves (roughly every 8 seconds, controlled by the drive) purely to save power and make the spec sheet look better.

A quick Google of "WD Green head parking issue" will turn up plenty on the historical problems with Greens.
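To put that ~8 second timer in perspective, the worst case is easy to work out (assuming a drive that sits idle and re-parks continuously, which real workloads won't quite hit):

```python
# Worst-case head parks for a drive that re-parks every ~8 idle seconds.
park_interval_s = 8
seconds_per_year = 365 * 24 * 3600   # 31,536,000

parks_per_year = seconds_per_year / park_interval_s
print(f"~{parks_per_year:,.0f} parks/year worst case")  # ~3,942,000 parks/year
```

Real numbers come out far lower because any activity resets the timer, but it shows how a Green can rack up a six-figure count that a Red never will.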


When using a software RAID like Synology's, is this a workaround for what you have said?
Not really, as it's the drive itself that tries to automatically re-read the data, not the OS (e.g. Synology DSM).

Some further information on this can be found here:
https://en.wikipedia.org/wiki/Error_recovery_control
 
It's one of those things: if you never have a problem, you just assume it's fine.

I've run a random assortment of desktop drives in RAID for years with no issues... but when I built my unRAID box I put Reds in.
 
On a related note - I was planning to replace the disk in my NAS that was throwing up reallocated sectors today. I'd even got the replacement out and set it to one side to fit later, when *beeeeeeeeeeeep* - the mirror drive of that pair, the healthy one, went. Looks dead as dead. So now I'm having to use the replacement for that drive instead and migrate from the one with the reallocated sectors :| Fortunately I've got the external USB replication and offline backups to verify the data against.

To be fair, they are just run-of-the-mill drives that have been worked pretty hard for just short of 6 years of continuous use - but it reinforces the need for a robust backup solution :s

EDIT: Sigh. "RAID is in degraded mode" and it won't accept the new drive for some reason - it gets halfway through and then just stops with "failed" and no further information. I hate it when developers are that **** with error reporting. Looks like I will have to pull the failing drive, stick in two good ones, and rebuild the whole setup, including the NAS configuration, from backups. First time I've been really disappointed by QNAP.
 