Adding SSD cache drives to a NAS, how much and is it a good or bad idea?

So I'm debating adding some SSD cache drives to my NAS (TerraMaster F4-424).
I guess the first question is, is this a good thing or is it just a gimmick and a waste of time and money? SSDs aren't cheap so is there really much benefit to this?
I'm planning on upgrading my network to 2.5G (once I can decide on the switches to get) but even then it won't be close to the sustained read/write speeds of SSDs, so will it add anything?

Secondly, if the answer to the above question is that it is worth it, how much cache should I add and is there any benefit to using 2 drives vs 1 drive (twice the size)?
Is there a formula or something that says you want X GB of cache per TB of storage or something like that?

I'm not sure if NAS stuff is covered in here or the network sub-forum so sorry if this is in the wrong place...
 
Not a direct answer, but I switched out a fairly high-performance QNAP NAS (with some decent Seagate HDDs in there) for a mini PC with NVMe storage up front, doing real-time replication to externally connected HDDs. The difference in things like opening a folder with a lot of media, or working with lots of mixed files, is massive, but long sequential read/write performance is only mildly better (though I am planning on utilising 10Gb where HDDs would be a bottleneck - currently a mix of 1 and 2.5Gb).

EDIT: As below it also substantially reduces the amount of noise happening through the day.
 
I found the biggest difference is having your first volume on an SSD; this then contains your OS and apps.
Volume 2 should then be your storage.
Cache was nowhere near as useful.
 
I have a spare NVMe drive in my DS920 for hosting virtual machines and the like; it has immensely quietened down the HDD noise through the day.

The question of whether it's worth it depends very much on what you do. For example, with Synology units, having one NVMe (without forcing it to behave like a normal drive) means it caches the most-used files to speed up access to them, and the more storage you give it, the more files it can do that for. Adding a second NVMe allows it to use the drives as a write buffer, which massively speeds up transfers.

If one of the two drives subsequently dies, it reverts back to a read-accelerator configuration until a new second drive gets popped in.

I would say it's worth it if you are trying to quieten down the NAS for background tasks like I did, or if you are trying to make transfers quicker for your other devices (you will still see activity on the NAS for about the same length of time, though, whilst it moves data onto the disks).

You can work out roughly how much quicker than direct-to-HDD it would be, but we need to know whether any RAID is configured (and which RAID type), as that determines the achievable throughput.

Edit: spelling as usual...
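The single-NVMe read-cache behaviour described above can be pictured as a simple least-recently-used cache. This is just a toy model, not Synology's actual implementation; all names here are illustrative:

```python
# Toy model of an SSD read cache: the most recently used files are
# kept on the SSD, so repeat reads skip the HDDs entirely.
from collections import OrderedDict

class ReadCache:
    def __init__(self, capacity: int):
        self.capacity = capacity      # how many files the SSD can hold
        self.files = OrderedDict()    # file name -> cached data

    def read(self, name, hdd):
        """Return (data, hit) - hit is True when the SSD served it."""
        if name in self.files:        # cache hit: served from SSD
            self.files.move_to_end(name)
            return self.files[name], True
        data = hdd[name]              # cache miss: spin up the HDDs
        self.files[name] = data
        if len(self.files) > self.capacity:
            self.files.popitem(last=False)  # evict least recently used
        return data, False

hdd = {"a": "A", "b": "B", "c": "C"}
cache = ReadCache(capacity=2)
cache.read("a", hdd)   # miss: goes to the HDDs
cache.read("a", hdd)   # hit: no HDD access, hence the quieter NAS
```

A bigger cache just means more files stay in the "hit" path, which is why more SSD capacity helps read caching.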
 
I run my NAS in a similar way to @robj20 but my M.2 SSDs are big enough (2 x 4TB in RAID1) to hold everything I'm likely to need on a day-to-day basis. These will saturate a 10Gbps link when writing to a similar SSD in my PC. Anything that's longer-term storage goes onto the HDDs. These also contain a mirror of Volume1 and this is updated overnight when my PC backs up to the NAS. I've been looking at the HDD hibernation log and normally they only spin up a couple of times per day.
 
I'm not entirely sure what I want to use it for. I mostly use the NAS for backup and as a media server, but I'm thinking of maybe also using it as a shared network drive across multiple PCs for downloads and video-editing stuff.
I've got it configured using TerraMaster's TRAID, which I believe uses a combo of RAID5 and RAID1, but I may be wrong on that.

I'm not sure how a TerraMaster NAS handles the cache drives. I think with the newer operating system you can choose to have the OS installed on the SSDs, but I'm not sure what else it allows.
 
So storage pool access speed should behave similarly to RAID 5/6 for read/write, and you can use something like this tool to estimate your transfer rate: https://wintelguy.com/raidperf.pl

Adjusting the read percentage will let you see what the overall speed would be if you are mostly writing or mostly reading from the pool. You can estimate a large write by setting 0% read.

You can then use your network speed divided by 8.62 as a best-case scenario (the 8 converts bits to MB/s and the 0.62 is the best-case jumbo-frame overhead, giving us 116MB/s on gigabit) to start to work out the transfer speed you will get with a cache vs the calculated raw speed for the pool.

Worked example:
4 WD Gold drives in RAID 6 give a per-drive speed of 184MB/s, and at 0% read (so a large write operation) that gives a pool transfer rate of 122.67MB/s.

This means in theory I can saturate my pool just using gigabit Ethernet (116MB/s incl. overhead). If I were to shift to 2.5Gbps and add a write cache, the transfer at my PC would take 40% of the original time (as the link is 2.5 times gigabit speed), and the NAS would complete the write to the pool up to ~7% quicker, as the gigabit link would no longer be the bottleneck.

For read operations the network is the bottleneck in this scenario for gigabit and 2.5gigabit and the SSD cache essentially drops the seek time on the files it happens to have in the "hot" pool.
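The arithmetic in the worked example above can be sketched roughly like this. Assumed numbers come straight from the post (4 drives at 184MB/s, RAID 6, the /8.62 divisor for network overhead); the write-penalty factor of 6 is the usual RAID 6 approximation, which reproduces the 122.67MB/s figure:

```python
# Rough sketch of the NAS throughput arithmetic above.

def network_cap_mbs(link_mbps: float) -> float:
    """Best-case usable throughput in MB/s for a given link speed,
    using the post's divisor: 8 (bits->bytes) + 0.62 overhead."""
    return link_mbps / 8.62

def raid6_write_mbs(drives: int, per_drive_mbs: float) -> float:
    """Approximate large-write pool speed: each logical write in
    RAID 6 costs ~6 disk operations (read/modify data + 2 parities)."""
    return drives * per_drive_mbs / 6

gigabit  = network_cap_mbs(1000)   # wire-speed cap on 1GbE
two_half = network_cap_mbs(2500)   # wire-speed cap on 2.5GbE
pool     = raid6_write_mbs(4, 184) # 4x WD Gold, large write

# On gigabit the wire is the bottleneck (116 < 122.67); on 2.5GbE the
# pool is, which is when a write cache starts to earn its keep.
print(round(gigabit), round(two_half), round(pool, 2))  # → 116 290 122.67
```

The same functions let you plug in your own drive count and per-drive speed to see which side of the link/pool crossover you land on.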
 
I added an NVMe cache to my Synology. In reality it made very little difference outside of some improvement to Emby (or Plex) metadata, as most files are sequential rather than random access.

The better option imo is an SSD storage volume (assuming you have drive bays available), using it to run apps/Dockers/virtual machines from.
 
SSDs aren't cheap so is there really much benefit to this?

You don't necessarily need a big-capacity SSD. A 1TB SSD is pretty cheap these days, and that's possibly more than sufficient for a caching drive. Whether it'll make any difference will depend on your use case, but a cheap 1TB SSD will at least give you some idea of whether it's worth doing.
 
I don't use my NAS for intensive work; I guess if it had constant I/O with multiple users, RAID, and was also being used as an internet-facing server, it would probably be different. My NAS has a fixed 2GB of RAM, non-upgradeable, but I don't think it's using much of that - I think it allocates a fair amount but it's just unused.
 
Write cache increases write speeds because parity isn't being calculated at the same time; how helpful that is depends on how you've set up writes for the array.

Read cache is probably not worth it unless you are using it for hot access to files, like video editing, but on gigabit this is likely always going to be network-limited anyway.

It's worth it for a large ZFS array, especially if you don't have enough RAM for the addressing data.

So it's worth it in some cases with some setups and not in others, and it depends on how you use them.

Just get a 500GB drive and use that; then it's not expensive and doesn't matter if it's only a small improvement.
 
Also worth considering is that if you are using one as a write cache, then you pretty much need a UPS, as otherwise in the event of power loss you can end up with corruption due to data not being written to the disk array.
 
Thanks guys, lots of good info and lots to think about.

For a cache drive I assume you need something with high endurance, such as a WD Red SN700 SSD?
I'll be dropping in an SN700 to switch to RAID1 on my NVMe when I get round to it.

Might as well focus on endurance rather than speed when you are probably limited to PCIe 3.0.

Also worth considering is that if you are using one as a write cache, then you pretty much need a UPS, as otherwise in the event of power loss you can end up with corruption due to data not being written to the disk array.

This is an extremely good point
 
I'm not sure how a TerraMaster NAS handles the cache drives. I think with the newer operating system you can choose to have the OS installed on the SSDs, but I'm not sure what else it allows.
If TOS is like Asustor's ADM, you won't be able to use an SSD for cache if it has the OS on it. You have to choose between storage or cache.
 
I'm not familiar with that system, but the key question is: are you happy with the current performance?

Solid state cache is definitely not a gimmick if implemented correctly, but it is limited in its ability to increase performance.

There is no ideal ratio. In a read scenario, if you have 1TB of data on a volume and 250GB of SSD as cache, you'll have a quarter of the volume accessible at higher speeds, and if the data you're looking for has been mirrored there, you win.

Writing to a volume, the SSD is typically filled first and the data written out to the main volume and flushed later. The downside is that each transfer requires the data to hang around to be written twice and read once before it's safe.

The gains from adding an SSD for cache become noticeable as the speed of your network, the number of users, or file sizes increase.

For a home user or three on a sub-10-gigabit network, a few half-decent HDDs are pretty close to optimal IMO.
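The write-then-flush flow described a couple of posts up can be sketched like this. Purely illustrative names, not any vendor's implementation; it just shows why the data is "written twice and read once", and why staged-but-unflushed data is the part a power cut can take out:

```python
# Minimal sketch of a write-back SSD cache: writes land on the SSD
# first (fast ack), then a later flush copies them to the HDD volume.

class WriteBackCache:
    def __init__(self):
        self.ssd = {}   # staging area (fast, volatile until flushed)
        self.hdd = {}   # main volume (slow, safe)

    def write(self, name: str, data: str) -> None:
        """First write: client gets a fast acknowledgement."""
        self.ssd[name] = data

    def flush(self) -> int:
        """Drain staged writes to the HDD volume; returns files moved.
        Each file is read back off the SSD and written a second time -
        until this completes, a power cut can lose the staged data,
        which is the reason for the UPS advice earlier in the thread."""
        moved = 0
        for name, data in list(self.ssd.items()):
            self.hdd[name] = data   # second write, to the array
            del self.ssd[name]
            moved += 1
        return moved
```

So the total work per file is higher than writing straight to the array; the win is only in how quickly the client's side of the transfer finishes.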
 
Gonna bump my network to 2.5G so it's not the bottleneck first and will take things from there.

Also need to figure out how I'm going to use the NAS to know if I need the performance.
 