
Personally I would go with option 2: use the mirror for the OS and then place the data on the RAID 5. You could then create separate partitions on the RAID 5 for the data and SQL.
 
I'd be tempted to find out what the database workload is before arranging the disks. If it's query oriented then there's possibly some benefit to be gained from the read speed offered by the RAID 5 array, but if there are a lot of updates then you'd be better off looking at option 3 and splitting the database and its transaction logs to reduce I/O contention.
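To put rough numbers on that, here's a sketch of the usual rule-of-thumb sizing (the 120 IOPS per spindle figure is my assumption for 10k SAS; the write penalties are the standard ones):

[code]
# Rule-of-thumb array sizing. Write penalty: RAID 1/10 = 2 (two mirror
# writes), RAID 5 = 4 (read data + read parity + write data + write parity).

def effective_iops(disks, read_fraction, write_penalty, disk_iops=120):
    """Host-visible IOPS the array sustains for a given read/write mix."""
    raw = disks * disk_iops
    return raw / (read_fraction + (1 - read_fraction) * write_penalty)

for reads in (0.9, 0.5):  # query-heavy vs update-heavy workloads
    r5 = effective_iops(6, reads, write_penalty=4)
    r10 = effective_iops(6, reads, write_penalty=2)
    print(f"{reads:.0%} reads: 6-disk RAID5 ~{r5:.0f} IOPS, RAID10 ~{r10:.0f} IOPS")
[/code]

At 90% reads the RAID 5 only trails the same spindles in RAID 10 by about 15%; at 50/50 it's around 40% behind, which is why the workload matters.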
 
Typically for an SBS2K3 server, I'd suggest 2x73GB SAS drives in RAID 1 for the O/S and however many more in RAID 5 as the required capacity dictates.
 
Is it not a possibility to run it in a large RAID 1+0?

That way you get good redundancy (you could lose up to three drives and still run, see the quick check below) and you can then carve up the partitions as you see fit. The main reason I mention this is because having 146GB just for the OS partition is massive overkill IMO. Even with an Exchange store I wouldn't have thought you would use this much.

I would allocate around 10-20GB for the system drive then place all the other separate components on other logical drives. You would end up with a layout like this, I suppose:

c: - system
d: - exchange store
e: - SQL
f: - doc store
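On the "lose three drives" point, a quick brute-force check (my own sketch, assuming a 6-disk 1+0 built as three mirrored pairs; adjust PAIRS to match the real drive count):

[code]
from itertools import combinations

PAIRS = [(0, 1), (2, 3), (4, 5)]   # assumed mirror pairs in a 6-disk 1+0
DRIVES = range(6)

def raid10_survives(failed):
    # The array survives as long as no mirror pair loses both members.
    return all(not (a in failed and b in failed) for a, b in PAIRS)

for k in (1, 2, 3):
    combos = list(combinations(DRIVES, k))
    ok = sum(raid10_survives(set(c)) for c in combos)
    r5 = len(combos) if k == 1 else 0   # RAID 5 only tolerates one failure
    print(f"{k} failure(s): RAID10 survives {ok}/{len(combos)}, RAID5 {r5}/{len(combos)}")
[/code]

With three failures it only survives the 8 of 20 combinations where each mirror loses at most one member, so "up to three" is the best case, not a guarantee.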
 
Personally I'd go for

Raid 1 Pair
C: OS 20GB
D: Exchange logs 10GB
E: SQL logs 10GB
F: Data around 90-100GB

RAID 5 + 1 hot spare (290GB Usable)
G: Exchange
I: SQL Data

Configure the data space split as you think appropriate, but if it's a CRM system then I guess it isn't going to be big; the rough capacity sums below show how the numbers stack up. I'm also guessing that as it's SBS the whole system is going to be relatively lightly loaded.

I'd also chuck the BBWC (battery-backed write cache) module onto the RAID controller as that'll bump performance a bit and doesn't cost a lot.
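Sanity-checking those capacities (assuming 146GB drives throughout; the RAID 5 spindle count is my guess, worked back from the quoted 290GB):

[code]
def raid1_usable(drive_gb):
    return drive_gb                   # mirror: one drive's worth usable

def raid5_usable(n_drives, drive_gb):
    return (n_drives - 1) * drive_gb  # one drive's worth lost to parity

pair = raid1_usable(146)
print(f"RAID1 pair: {pair}GB usable -> 20 OS + 10 Exchange logs "
      f"+ 10 SQL logs leaves {pair - 40}GB for data")

# 3 active drives + 1 hot spare: 292GB raw, i.e. the ~290GB quoted above
print(f"RAID5 (3 drives + hot spare): {raid5_usable(3, 146)}GB usable")
[/code]

Formatted capacity will come in a little under those raw figures, which is presumably where the 90-100GB data estimate comes from.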
 
Just to toss in a thought, what have you spec'd for the back-up requirement?

Sure you have it covered, but it sometimes gets forgotten.
 
TheKnat said:
Is it not a possibility to run it in a large RAID 1+0? ...

I would echo this setup. I benched RAID 10 against all the other options when building a new SQL server recently, and RAID 10 was monstrously fast compared to RAID 1 for the OS with RAID 5 for data, one large RAID 5 array, etc.
 
m_cozzy said:
I would echo this setup. I benched RAID 10 against all the other options when building a new SQL server recently ...
A single RAID10 will be quick for benchmarks but as soon as you get any form of concurrent I/O (database writes etc) then your performance will go through the floor.
 
rpstewart said:
A single RAID10 will be quick for benchmarks but as soon as you get any form of concurrent I/O (database writes etc) then your performance will go through the floor.

I disagree. Using a SQL stress program that mimics multiple users and simultaneous reads/writes, a 6-disk RAID 10 array finished 50% quicker than any other 6-disk configuration I tested, apart from RAID 5, which was only fractionally slower but with the disadvantage of not being able to lose more than one disk, whereas with the RAID 10 I could lose up to three (assuming no more than one from each mirror :) )
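For anyone who wants to repeat that comparison without the SQL tooling, here's a crude stand-in (my own sketch, not the stress tool mentioned above): it throws concurrent 8KB random reads and writes at a file on the array under test and reports the total op rate.

[code]
import os, random, threading, time

PATH = "E:/stress.dat"      # put this on the array under test
FILE_SIZE = 2 * 1024**3     # 2GB working set
BLOCK = 8192                # 8KB blocks, SQL Server's page size
THREADS, SECONDS = 8, 30

def worker(stats):
    fd = os.open(PATH, os.O_RDWR | getattr(os, "O_BINARY", 0))
    buf = os.urandom(BLOCK)
    deadline = time.time() + SECONDS
    ops = 0
    while time.time() < deadline:
        os.lseek(fd, random.randrange(FILE_SIZE // BLOCK) * BLOCK, os.SEEK_SET)
        if random.random() < 0.5:   # 50/50 read/write mix, tweak to taste
            os.read(fd, BLOCK)
        else:
            os.write(fd, buf)
        ops += 1
    os.close(fd)
    stats.append(ops)

with open(PATH, "wb") as f:         # pre-allocate the working set
    f.truncate(FILE_SIZE)
stats = []
threads = [threading.Thread(target=worker, args=(stats,)) for _ in range(THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"{sum(stats) / SECONDS:.0f} ops/sec with {THREADS} concurrent workers")
[/code]

Run it with identical settings against each candidate array; caching will flatter the results, so keep the working set comfortably bigger than both the server's RAM and the controller cache.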
 