Server RAID

Just interested in this one: for a standard, decent business-level Server 2003 file server or small SBS server, what is the preferred or recommended RAID setup?

The ones I've seen have 2 mirrored disks for the OS and 3 disks in RAID 5 for data. However, the other day I was in a data centre with Dell servers and these appeared to have 4 disks in a RAID 5 (I assume), 4x 145GB 15k; the data drive was 300GB, so I assume a 4-disk RAID 5 with the drive partitioned.

This setup saves a disk, but does it impact real-world performance? The machine had plenty of RAM (4GB), so I guess that's to minimise page file use.
 
These machines will be proper servers with proper hardware RAID cards, so most of the disadvantages you normally see with RAID 5 disappear.

A 4-disk RAID 5 array will have better read performance than a 3-disk one using the same hardware, simply because the data is coming off more disks. Each part of the file comes off each disk at that disk's data rate, but the parts are smaller on the 4-disk array, so the read takes less time. Sharing the array between the OS and data means the pagefile sits on the data array, but as you say, with 4GB of RAM there shouldn't be much pagefile access.
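
A minimal sketch of that striping effect, assuming a simplified model (reads striped evenly across the data disks, identical drives, and throughput numbers picked purely for illustration):

```python
# Crude model (my own numbers): time to read a file from RAID 5, assuming
# the read is striped evenly across the data disks (one disk's worth of
# capacity goes to parity) and each disk streams at the same rate.

def raid5_read_time(file_mb, n_disks, disk_mb_per_s=80.0):
    data_disks = n_disks - 1            # effective data spindles
    per_disk_mb = file_mb / data_disks  # each disk reads a smaller share
    return per_disk_mb / disk_mb_per_s  # the shares are read in parallel

for n in (3, 4, 5):
    print(f"{n}-disk RAID 5: {raid5_read_time(1000, n):.1f} s to read 1000 MB")
# 3 disks: 6.2 s, 4 disks: 4.2 s, 5 disks: 3.1 s under this simplified model
```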

I tend to deal with machines that use RAID 1 for the OS and separate arrays for data storage, but these are big AIX boxes, so the data is kept separate so that we can either cluster or upgrade the CPUs without having to mess with the data.
 
Hot news: LaCie are teaming up with Intel to provide a NAS box of up to 2TB that will act as a DHCP server and has RAID 5, for around 600 to 800 quid. More news and a proper announcement will come shortly; I saw one at Storage Expo last week.
 
Microsoft recommends (or certainly used to) RAID 1 for the system volume, another RAID 1 for the transaction logs and a RAID 5 for the data. Personally, I'm quite happy to have the system and logs on different partitions of a single RAID 1. There's a very good reason for not using just three disks in a RAID 5 - imagine what happens if one fails. There are just two drives left trying to handle data and redundancy, so the whole thing gets seriously slow. With five disks in a RAID 5 you only notice a drive has dropped out if the controller or OS puts out a warning message. Four disks is a reasonable compromise.
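
To put rough numbers on the degraded case (a crude model of my own, not anything from Microsoft or the controller vendors): a read that lands on the dead disk has to be rebuilt by reading every surviving disk and XOR-ing, so a small array is left with far less headroom:

```python
# Crude degraded-mode model (my assumption, not from the thread):
# healthy N-disk RAID 5 serves roughly N * per-disk IOPS of random reads;
# with one disk dead, 1/N of blocks need all N-1 survivors to rebuild them.

def read_capacity(n_disks, per_disk_iops=150, degraded=False):
    if not degraded:
        return n_disks * per_disk_iops
    # expected physical reads per logical read: (N-1)/N hit a live disk,
    # 1/N cost N-1 reads to reconstruct => 2(N-1)/N total, over N-1 disks
    reads_per_logical = 2 * (n_disks - 1) / n_disks
    return (n_disks - 1) * per_disk_iops / reads_per_logical

for n in (3, 5):
    print(f"{n} disks: healthy {read_capacity(n):.0f} IOPS, "
          f"degraded {read_capacity(n, degraded=True):.0f} IOPS")
# 3 disks: 450 -> 225 IOPS; 5 disks: 750 -> 375 IOPS. Both halve, but the
# 5-disk array keeps far more absolute headroom for the same workload.
```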

Jonathan
 
Thank you for the replies. It's quite a complex topic when you start to dig into it, especially when you start optimising for certain applications.

I guess there is always a performance/cost ratio to take into account. Some interesting points too that I never realised.
 
The jump in performance from a 3-disk -> 4-disk array is quite large.
4 disks -> 5 disks is a smaller increase.
5 disks -> 6 disks is smaller again.

All still increase the performance - But a little less each time (rough ratios sketched below).
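
A minimal sketch of why the gain shrinks, assuming read throughput scales with the number of data disks (a simplification, with my own numbers rather than anything measured):

```python
# Assuming RAID 5 read throughput scales with the number of data disks
# (N - 1), the relative gain from adding one more disk keeps shrinking.
for n in (3, 4, 5, 6):
    gain = n / (n - 1)  # data disks go from n-1 to n when a disk is added
    print(f"{n} -> {n + 1} disks: ~{(gain - 1) * 100:.0f}% more read throughput")
# 3 -> 4: ~50%, 4 -> 5: ~33%, 5 -> 6: ~25%, 6 -> 7: ~20%
```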

So I would definitely use 4 or 5 disks for the RAID 5 array.

In work we use single disks for the OS - But RAID 1 would be beneficial.

I would also have a "hot spare" in the array.
The drive will just sit there, powered down, until such time as an HDD fails.
As soon as an HDD fails, the hot spare will spin up and the array rebuilds onto it, replacing the faulty one.
(Still replace the failed disk ASAP - once the spare is in use you're back to having no hot spare, i.e. only 2 more HDDs dying away from downtime.)
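
A back-of-envelope sketch of why the spare (and prompt replacement) matters, using a per-drive failure rate and exposure windows I've assumed purely for illustration:

```python
# Back-of-envelope sketch (assumed numbers, purely illustrative): chance
# that a second drive fails while the array is still degraded, i.e. in the
# window between the first failure and redundancy being restored.

def second_failure_prob(surviving_drives, window_hours, annual_failure_rate=0.03):
    hourly_rate = annual_failure_rate / (365 * 24)
    survives = (1 - hourly_rate) ** window_hours
    return 1 - survives ** surviving_drives

# With a hot spare the rebuild starts immediately (say a 6-hour rebuild);
# without one you might wait days for a replacement drive plus the rebuild.
print(f"hot spare, 6 h exposure:  {second_failure_prob(3, 6):.5f}")
print(f"no spare, 72 h exposure:  {second_failure_prob(3, 72):.5f}")
# The longer the degraded window, the bigger the chance of losing the array.
```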

When you start getting 6+ disks in the array, the chance of any one drive failing increases too (roughly 6x the single-drive chance, 8x the chance, etc.).
So between 4 and 6 disks would be optimal IMHO - But that entirely depends on what it is getting used for!
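
To illustrate that "N times the chance" point (with an assumed per-drive annual failure rate chosen only for the example):

```python
# Probability that at least one drive in an N-drive array fails in a year,
# assuming an illustrative 3% annual failure rate per drive (my assumption).
afr = 0.03
for n in (3, 4, 5, 6, 8):
    p_any = 1 - (1 - afr) ** n  # exact "at least one fails" figure
    print(f"{n} drives: {p_any:.1%} (roughly {n} x {afr:.0%})")
# 3: 8.7%, 4: 11.5%, 5: 14.1%, 6: 16.7%, 8: 21.6% - close to N * 3% for small N
```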
 