SSD or HDD for NAS

Depends on your use case, but I’m not sure you’d be making the best use of the hardware. Ignoring Unraid and FreeNAS builds, I’d lean towards 10 drives in RAID 10 and use any SSDs as cache for the array. Although 10 HDDs in RAID 10 would probably be plenty quick enough for most use cases.
 
RAID5 with hot spare is asking for data loss
RAID doesn't protect against data loss; it's there for increased availability. You need a robust and tested backup solution to prevent data loss, which is why we don't bother with RAID 6 unless we have 16 or more drives in the server.

RAID 6: lose a drive and you have to start worrying about replacing it before you restore full protection and performance.
 
RAID doesn't protect against data loss; it's there for increased availability.
Correct

You need a robust and tested backup solution to prevent data loss
Again correct

which is why we don't bother with RAID 6 unless we have 16 or more drives in the server.
Eh?
RAID 6 is a sensible choice with as few as 5 drives. It all depends on your use case.

RAID 6: lose a drive and you have to start worrying about replacing it before you restore full protection and performance.
I think you are getting very confused somewhere.
RAID 6 protects against up to two drive failures. Losing a single drive will have a performance impact, but you are still protected against another drive failure. Even if a second drive fails whilst you are rebuilding the first, there will be no data loss.
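For anyone weighing up the layouts being argued about here, the trade-off can be put in numbers. A minimal Python sketch (textbook figures only; it assumes equal-sized drives, and the worst-case RAID10 tolerance is the pessimistic reading where the second failure hits the same mirror pair):

```python
# Illustrative comparison of usable capacity and worst-case drive-failure
# tolerance for common RAID levels. Assumes equal-sized drives.
def raid_summary(level: str, drives: int, size_tb: float):
    """Return (usable_tb, drives_you_can_lose_in_the_worst_case)."""
    if level == "raid5":
        return (drives - 1) * size_tb, 1
    if level == "raid6":
        return (drives - 2) * size_tb, 2
    if level == "raid10":
        # Worst case: the second failure hits the same mirror pair.
        return (drives // 2) * size_tb, 1
    raise ValueError(f"unknown level: {level}")

# Five 12 TB drives in RAID6: 36 TB usable, survives any two failures.
print(raid_summary("raid6", 5, 12))
```

So at five drives, RAID6 already guarantees surviving any two failures, for the same space cost as RAID5 plus a hot spare.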

RAID5 with hot spare is not the same as RAID6 at all, other than the amount of storage space "wasted".
The risk with a hot spare on RAID5 is that, because it starts rebuilding automatically, you have no control over when you are adding more strain to the array (a RAID 5 or 6 rebuild is a very strenuous activity, and has a statistically high risk of another drive failure, which will cause data loss with RAID5).
Better to choose manually when to rebuild a failed RAID5 array, so that you can take another backup first, take steps to minimise use of the array (e.g. disable file shares on a file server), and then change rebuild priority to maximum.
 
It’s a Lockerstor 10 Gen3 (AS6810T), if I actually bite the bullet on it.
I don’t mind wasting space as long as it has enough capacity for all the backups.
One iMac, one Mac Studio, two Windows PCs.
I won’t be storing music or films.
But many photos which I can’t lose.
Then RAID10 your NVMe SSDs too. That will still give you 16TB. Alternatively, set up a cron job (or something on the PC) to mirror the NVMe drives to the HDDs overnight.
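A sketch of what that overnight mirror could look like, assuming hypothetical /volume_ssd and /volume_hdd mount points (an rsync job or the NAS vendor's own backup app would do the same thing; this is just to show the idea):

```python
# Sketch of a one-way overnight mirror from an SSD volume to an HDD
# volume, meant to be run from cron (e.g. "0 3 * * * python3 mirror.py").
# The paths are placeholders -- adjust for your NAS layout.
import shutil
from pathlib import Path

SRC = Path("/volume_ssd/photos")  # hypothetical NVMe volume
DST = Path("/volume_hdd/photos")  # hypothetical HDD volume

def mirror(src: Path, dst: Path) -> int:
    """Copy files that are new or changed (by mtime/size); return the count."""
    copied = 0
    for item in src.rglob("*"):
        if not item.is_file():
            continue
        target = dst / item.relative_to(src)
        if (not target.exists()
                or target.stat().st_mtime < item.stat().st_mtime
                or target.stat().st_size != item.stat().st_size):
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(item, target)  # copy2 preserves timestamps
            copied += 1
    return copied
```

Note it never deletes from the destination, which is arguably what you want for a backup of irreplaceable photos; `rsync -a` from the NAS shell is the more battle-tested route.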
 
NVMe drives will pull more power and produce more heat than HDDs. The big plus of solid state is response times, but that performance comes at a big price.
You've said this before, and it's still wrong.

At idle, NVMe drives typically use less power than an HDD. And in a home NAS, they'll spend most of their time idle.

[attached images: drive power-consumption datasheet figures]
 
RAID5 with hot spare is not the same as RAID6 at all, other than the amount of storage space "wasted".
The risk with a hot spare on RAID5 is that, because it starts rebuilding automatically, you have no control over when you are adding more strain to the array (a RAID 5 or 6 rebuild is a very strenuous activity, and has a statistically high risk of another drive failure, which will cause data loss with RAID5).
Better to choose manually when to rebuild a failed RAID5 array, so that you can take another backup first, take steps to minimise use of the array (e.g. disable file shares on a file server), and then change rebuild priority to maximum.
From my experience, the biggest cause of data loss is people not replacing a failed drive, sometimes months or even a year after it's happened, which is why I would always recommend a hot spare. I don't think I've ever come across a second drive failure while a rebuild is happening unless there was a firmware or hardware issue with the NAS, not the drives themselves. Plus, with SMART you should hopefully be getting alerts that a drive is starting to play up before there is a complete failure.

If you want to do RAID 6 with a hot spare you can do that. And taking that performance hit is an indicator that something has gone wrong and needs investigation.

That is why I said alerting is key. People rarely look at their NAS once it's set up, it just 'works'. Aside from failed drives, they often overlook doing updates, which are often security related.
 
NVMe drives will pull more power and produce more heat than HDDs. The big plus of solid state is response times, but that performance comes at a big price.
I have 4x SATA SSDs and 4x HDDs in my NAS. The HDDs always sit hotter than the SSDs.
That's when it's spinning. Most NASes have an HDD spin-down feature after 30 minutes or so... 1 W under those circumstances.
No, it's the average power consumption.
 
That's when it's spinning. Most NASes have an HDD spin-down feature after 30 minutes or so... 1 W under those circumstances.
Which:
a) then requires the drive to spin back up, i.e. the startup draw of 2.0 A in the datasheet above = 24 W;
b) doesn't necessarily work depending on the array format / underlying OS (e.g. my Synology randomly spins back up despite my having turned off absolutely everything I can find);
c) puts extra wear and tear on the drives.


I don't think I've ever come across a second drive failure while a rebuild is happening unless there was a firmware or hardware issue with the NAS, not the drives themselves.
Play the lottery, because that's what you are doing. Statistically, given the size of modern drives, the chance of experiencing a URE (Unrecoverable Read Error) means that the chance of the rebuild failing on a RAID5 array is significant.
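The back-of-envelope maths behind that claim, assuming the common consumer-drive spec of 1 URE per 1e14 bits read and independent errors (both simplifications):

```python
import math

# Back-of-envelope risk of hitting a URE during a RAID5 rebuild.
# Assumes the consumer-drive spec of 1 URE per 1e14 bits read and
# independent errors -- both are simplifying assumptions.
def rebuild_ure_risk(surviving_drives: int, drive_tb: float,
                     ure_per_bit: float = 1e-14) -> float:
    bits_read = surviving_drives * drive_tb * 1e12 * 8  # whole array re-read
    return 1 - math.exp(-bits_read * ure_per_bit)  # P(at least one URE)

# Rebuilding a degraded 8-drive RAID5 of 12 TB disks re-reads 7 drives;
# the odds of hitting at least one URE come out above 99%.
print(round(rebuild_ure_risk(7, 12), 4))
```

Drives rated at 1 URE per 1e15 bits (typical enterprise spec) cut that risk substantially, which is part of why the spec sheet matters when building big arrays.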

Honestly read up on it:



From my experience, the biggest cause of data loss is people not replacing a failed drive, sometimes months or even a year after it's happened, which is why I would always recommend a hot spare.
With RAID1/10/6, absolutely I'd have a hot spare available every time - the risk of a rebuild going wrong is far lower, and having the rebuild start as soon as possible to minimise the window with reduced redundancy is absolutely the right thing to do.

With RAID5, by all means alert on a failure, even have a warm spare (i.e. in the disk enclosure but not set to auto-replace), but choose when to initiate a rebuild (or, if you have the means, don't even rebuild: create another array and copy the data over).
 
Play the lottery, because that's what you are doing. Statistically, given the size of modern drives, the chance of experiencing a URE (Unrecoverable Read Error) means that the chance of the rebuild failing on a RAID5 array is significant.
Precisely why my important stuff is backed up on a second NAS, and in two cloud locations.

Everything else I can just re-download.
 
You've said this before, and it's still wrong.

At idle, NVMe drives typically use less power than an HDD. And in a home NAS, they'll spend most of their time idle.

[attached images: drive power-consumption datasheet figures]

You’re still quoting the wrong data / wrong environment. My average HDD power use is 6.5 watts; NVMe drives are 10.4 and require more cooling. Outside of cold boots, rust still beats silicon.
 
Simply because SSDs pull more power. At high loads the situation becomes much worse.
Except home NASes don't tend to run high loads; they spend most of their time idle. Even when not idle, because they are so much faster, they read or write the data and return to an idle state much sooner.
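That "race to idle" argument is easy to put numbers on. A toy duty-cycle calculation with assumed wattages and active fractions (for illustration only, not measured figures for any particular drive):

```python
# Toy "race to idle" average-power comparison. The wattages and duty
# cycles below are assumptions for illustration, not measurements.
def avg_power(idle_w: float, active_w: float, active_fraction: float) -> float:
    """Time-weighted average power over a day."""
    return idle_w * (1 - active_fraction) + active_w * active_fraction

# Same daily workload; the faster drive spends less of the day active.
hdd_avg = avg_power(idle_w=5.0, active_w=8.0, active_fraction=0.05)
nvme_avg = avg_power(idle_w=0.8, active_w=8.5, active_fraction=0.02)
print(f"HDD ~{hdd_avg:.2f} W, NVMe ~{nvme_avg:.2f} W")
```

Under these assumptions the average flips in the SSD's favour even though its peak draw is higher, which is the crux of the disagreement in this thread: peak power versus average power.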
 
Except home NASes don't tend to run high loads; they spend most of their time idle. Even when not idle, because they are so much faster, they read or write the data and return to an idle state much sooner.

Except nothing. One simply requires more power than the other. Those are just the hard facts of the situation.
 
Another option is to set the NAS to individual drives (or perhaps JBOD), with no redundancy. Then use loose hard drives and perform a backup manually, say once a month.

Then back up the specific folders you wouldn't want to lose, say my music collection.

That's fine for personal data.
 
and capacity. Good luck finding a 24TB NVMe drive for sensible money.

Yeah, the capex and opex figures would be horrendous, but transfer speeds would be amazeballs… you’d just need 10GbE NICs on some crazy bus. Then you can sit back and watch the power spikes during transfers.
 