RAID1 with solid state drives

Has anyone done this, or been asked to do it, and what's the general consensus?

I'm doing a small server build, nothing fancy, just an off-the-shelf ML110 G6 with some extra RAM and another disk for software RAID1 in 2008 R2. But I've priced up two 80GB Intel 320s, and in the grand scheme of things they're not really that much more expensive, so it seems a bit of a no-brainer to go solid state: the box doesn't need the capacity, and you get all the other goodness that comes with SSDs. Problem is, old habits die hard, and I'm a bit reluctant to have just a single drive in there, even though two looks like overkill on paper.
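For reference, the mirror itself should only be a handful of diskpart commands once the second drive is in - the disk and volume identifiers below are just placeholders for whatever list disk / list volume actually report on the box:

    diskpart
    rem both disks must be dynamic before a mirror can be added
    select disk 0
    convert dynamic
    select disk 1
    convert dynamic
    rem then add the second disk as a mirror of the OS volume
    select volume c
    add disk=1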

The stuff that's on there is important, but it is backed up, and the usage is quite light - only four people accessing the database. For the sake of £100 on a server that's hopefully going to be in place for at least five years, would you add a second drive?

Cheers
 
Yes. Purely because of downtime. Backups will safeguard the data, but it'll take time to get a replacement drive installed and rebuilt, and time = money. In the event of a drive failure, RAID1 will quickly pay for itself by keeping the server up while you source another drive.

Backup = DR
RAID = Resilience, a.k.a. Business Continuity

A good setup has both of the above.
 
Either put two in or none.

I put RAID 1 SSD drives into storage arrays on a regular basis.

As already said, RAID 1 means the data will still be accessible in the event of a failure. Remember that with a single drive you lose data if you lose the drive - sure, you can restore from the last backup, but anything that's been written since then is gone.
 
I've got 2x 30GB SSDs running the OS on my server, but I have them in software RAID1 due to the lack of TRIM support in hardware RAID1.
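If you want to check whether the OS is actually issuing TRIM, fsutil will tell you - 0 means delete notifications (TRIM) are enabled, 1 means they're off:

    fsutil behavior query DisableDeleteNotify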
 
Never run a server on a single disk! I would, however, question the longevity of a consumer SSD in a server, but as long as you have a good warranty you should be set - given you want the server in place for five years!

Don't expect software RAID to be all that graceful if disk 0 fails - depending on your OS, you'll still need to sort out repairing the boot loader etc. before the machine will start from the surviving disk. It's not as resilient as hardware RAID.
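With a 2008 R2 software mirror like the OP is planning, it's also worth confirming after the build that the boot manager has an entry for the second half of the mirror - Windows normally adds a "secondary plex" boot entry when you mirror the system volume, and you can check for it with:

    rem list the boot entries; look for the "... - secondary plex" entry
    bcdedit /enum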
 
Putting SSDs in RAID1 won't help, because they will both fail at the same time - that is, when they reach their write cycle limit, which is around 100,000 writes to each individual cell.

Don't waste your money on two.
 
Just had an idea, bearing in mind it's 10 past 8 in the morning, so it'll probably be a silly one.

RAID 10 - two mechanical drives in the RAID 1, then two SSDs in RAID 0. You may get some slowdown, but it'll be very resilient.
 
You can't have mismatched drive speeds in a RAID array (well, technically you can, but you won't want to once you understand the implications).

Your controller will have kittens trying to figure out what's going on. The best thing to do is RAID 1 on your SSDs with a big RAID 5 volume behind it. Or just get a filer with some arrays :) (Kidding...)
 
I'd run the OS on a 64GB SSD and then get four 10K SAS disks in RAID10 for the database.

It should all just fit in an ML110 G6 if you put the SSD in the CD slot.
 
Putting SSDs in RAID1 won't help, because they will both fail at the same time - that is, when they reach their write cycle limit, which is around 100,000 writes to each individual cell.

Don't waste your money on two.

Errr...no.

This hasn't been the case for a while - newer generations of SSDs last a lot longer than that figure suggests. If it were true, anyone who's bought a SAN unit running 16 SSDs in RAID10 would be in for a shock.

Even so, taking the hypothetical situation that they do fail after 100,000 writes: that's a rounded average, so some cells will last longer and others shorter, and the chances of both drives dying in sync are still very slim - and with wear levelling spreading writes across the whole drive, 100,000 cycles times 80GB of cells is on the order of petabytes of total writes anyway. Plus, for it to save the business money, one drive only needs to last a week longer than the other at most - just long enough to source and replace the failed drive.

If he's that bothered, he could even run one drive on its own for two weeks and THEN add the second in RAID1, so the read/write duty cycles of the two drives are offset.
 
Errr...no.

This hasn't been the case for a while - newer generations of SSDs last a lot longer than that figure suggests. If it were true, anyone who's bought a SAN unit running 16 SSDs in RAID10 would be in for a shock.

Anyone running SSDs in a server environment (certainly in a SAN) would be using SLC-based SSDs, which have an order of magnitude greater write endurance than their cheaper, consumer-grade MLC counterparts.
 