NAS Drives - Media Playback

Hello,

I've had 4x 8TB WD Red drives in my NAS:

1. Purchased Dec 2016 - Died Feb 2023
2. Purchased Feb 2017 - Died Mar 2023
3. Purchased Nov 2017 - Currently Alive
4. Purchased Apr 2019 - Died June 2022

As you can see, I've had three drives die in the last 6-8 months, all out of warranty. I run regular checks on them; in fairness, the drive that died in June did have some bad sectors, so I knew it was on its way out. The two that died this year had no bad sectors but simply crashed instead.

I want to replace them (slowly, given the cost of drives!), but I'm not sure I want to stick with WD Red. I appreciate these things can be a bit of a lottery, but are there any brands people prefer? I was looking at the Seagate IronWolfs. Or are there any that come with better warranties?

I mainly rip 1080p/4K content and run a Plex server for a couple of family members to use remotely, but I wouldn't say the drives take a massive beating...

Cheers,
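For anyone wanting to automate the kind of "regular checks" mentioned above, here's a minimal sketch that parses the attribute table printed by `smartctl -A /dev/sdX` and flags the counters that most often precede a failure. The sample text below is illustrative, not real output from the OP's drives.

```python
# Sample of the attribute table format produced by `smartctl -A` (illustrative).
SAMPLE = """\
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       8
198 Offline_Uncorrectable   0x0030   100   253   000    Old_age   Offline      -       0
"""

# Attributes where any non-zero raw value is worth investigating.
WATCHLIST = {"Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable"}

def flag_attributes(smart_output: str) -> dict:
    """Return watched attributes whose raw value (last column) is non-zero."""
    flagged = {}
    for line in smart_output.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[1] in WATCHLIST:
            raw = int(fields[9])
            if raw > 0:
                flagged[fields[1]] = raw
    return flagged

print(flag_attributes(SAMPLE))  # {'Current_Pending_Sector': 8}
```

In practice you'd feed this the output of `smartctl -A` via `subprocess`, or just use the scheduled SMART tests built into DSM; this is only a sketch of what those checks are looking at.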
 
What NAS, how many drives does it hold, what's its cooling like, etc.? The wear and tear on a drive in a media server using one or two drives is going to be different from four drives... and cooling seems to be a big factor in drive life in my experience.

Were these the WD Red drives that turned out to be SMR? That isn't great for NAS use and could be part of the issue.
 

Synology NAS - 4-bay - DS916+

The drives were around 35°C normally, in a server rack under the stairs. They've been in the same place for two years (we moved two years ago; before that it was a similar setup in another house), so their conditions for the last 4+ years have been the same.

How would I know if they were SMR or not?
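One way to tell is to check the model number (from the drive label, or `smartctl -i /dev/sdX`) against WD's published list of SMR Red models. To the best of my knowledge the SMR Reds were the 2-6TB "EFAX" models, and the 8TB Reds (like the OP's) were CMR - but treat the list below as a starting point to verify against WD's own documentation, not gospel.

```python
# Model numbers WD has identified as SMR WD Reds (assumed list - verify
# against WD's published SMR/CMR device list before relying on it).
KNOWN_SMR_WD_REDS = {"WD20EFAX", "WD30EFAX", "WD40EFAX", "WD60EFAX"}

def is_known_smr(model: str) -> bool:
    """True if the model number matches a WD Red known to use SMR."""
    return model.strip().upper() in KNOWN_SMR_WD_REDS

print(is_known_smr("WD40EFAX"))  # True - a 4TB SMR Red
print(is_known_smr("WD80EFAX"))  # False - the 8TB Reds were CMR
```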
 
As WD didn't start selling SMR drives until mid-2019, this is not an issue for the OP's drives, and he should forget about it as a possible cause.
It's also helpful if people check their basic facts before commenting.
 

Random thought, but has anyone with heavy feet moved in recently, or are there any young kids running up and down the stairs?

My NAS only has 3 WD Reds, but I've had no failures or bad sectors since 2017. I work in a training environment with hundreds of desktop and server drives. The desktops are treated like crap, quite often being moved from site to site and in and out of storage, and yet I see hardly any HDD/SSD failures.
 

Nah - just me and the Mrs. Although it's under the stairs, it's tucked around a corner off the utility room, so there's zero vibration from the actual stairs, and it sits in a raised, enclosed server rack.

Might just be super bad luck I guess, annoying!!
 
As WD didn't start selling SMR drives until mid-2019 this is not an issue for the OP's drives and he should forget about it as a possible issue.
It's also helpful if people check their basic facts before commenting.
2019... you mean the year the last drive was bought....damn my lack of encyclopedic knowledge of the exact dates that hard drive architecture changes fails again...I just need to remember that there is a mid in front of 2019.... I was soooo close.... :rolleyes:
 
With that failure rate there must be something else going on - a faulty batch, or excessive use.

Nothing has changed in the years I've used them. I only have 3-4 Plex users, generally no more than two at a time, and only for a couple of hours a day, if that! I'd hardly say usage is excessive versus what the drives are rated for. I don't know if Plex is doing something to keep the drives spinning, maybe? Historically I used Emby/Kodi and just switched to Plex for external-user ease.
 
I'm pretty sure Plex prevents NAS drives from spinning down. It was the main reason I didn't use it (or anything similar).
 
Not in my experience.

I'm using Plex on a Synology DS918+ NAS - there's an option in the Synology software for whether you'd like the HDDs to spin down after a certain time, and Plex doesn't override that unless, of course, you keep dipping into your media often. Whether it's better to spin down or leave the drives running 24/7 is, of course, an entirely different debate.
 

Is that the HDD Hibernation setting? If so, I've got that enabled too (20 mins).
 
That’s it. I switched that off.

AIUI, most wear on HDDs occurs when spinning up to speed. Modern drives use almost no power when just spinning (<1 W for many), so there's a logic to leaving them on all the time with no power management/hibernation, or to setting a much longer interval before hibernation (6-12 hrs).

Back on thread, I think you've been really unlucky, or something else is happening to affect the reliability. All my HDDs are older than yours, aren't NAS drives (just shucked WD Elements pot luck), and have no errors.
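A rough way to sanity-check the spin-up wear theory on your own drives is to compare the SMART Load_Cycle_Count against the drive's rated load/unload limit. The 600,000-cycle figure below is the commonly quoted WD Red rating, but it's an assumption here - check your own drive's datasheet.

```python
# Assumed load/unload rating - verify against the drive's datasheet.
RATED_LOAD_CYCLES = 600_000

def cycles_used_pct(load_cycle_count: int, rated: int = RATED_LOAD_CYCLES) -> float:
    """Percentage of the rated load/unload cycles already consumed."""
    return 100.0 * load_cycle_count / rated

# Example: a hibernation timeout that triggers ~20 times a day for six years
# is roughly 20 * 365 * 6 = 43,800 cycles:
print(round(cycles_used_pct(43_800), 1))  # 7.3
```

Even under that pessimistic usage pattern the drives would be nowhere near the rated limit, which supports the "really unlucky" reading rather than spin-down wear.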
 
I've got an Asustor Nimbustor 2 (AS5202T) with two 8TB Seagate IronWolfs.
They are horribly noisy when the NAS boots up, and not what I'd call quiet in normal use either.
Running Plex keeps them awake no matter which power setting I use (might be an Asustor thing).
Knowing what I know now, I would have shucked a couple of Toshiba 8TB Enterprise MG Series 3.5" SATA 6Gbit/s 7200RPM drives instead.
I have one as a backup to my backup.
It is almost silent.
 
I think this is something to do with the AS5202T rather than the drives. I have one with two shucked WD Elements 18TB drives and they grumble for ages when powered up, but the six 6TB IronWolfs in my AS6706T are barely noticeable when they spin up or are in use.
 