Replacing drives in a RAID5

Associate · Joined 27 Sep 2009 · Posts: 1,693
I have a Dell R610 with 6 x 600GB disks. I want to replace these with 6 x 1.2TB disks. Can I just swap one out, wait for it to rebuild, put a second new one in, etc.? Is there a best practice for this kind of thing?
 
Yup. Best practice is to do the spare first, though.

Set the write cache to max during the process as well to speed it up a bit.
 
Best practice would be to backup and restore; though that clearly comes with downtime, it's the best way of safeguarding the data against corruption. 3TB isn't really a lot of data.

Admittedly I am guilty of performing the old disk shifteroo :)
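The 3TB figure above is easy to sanity-check: RAID 5 loses one disk's worth of space to parity, so usable capacity is (n - 1) x disk size. A quick sketch (hypothetical helper, not part of any Dell tool):

```python
def raid5_usable_tb(num_disks: int, disk_tb: float) -> float:
    """Usable capacity of a RAID 5 array in TB (one disk's worth lost to parity)."""
    return (num_disks - 1) * disk_tb

print(raid5_usable_tb(6, 0.6))   # current array: 6 x 600GB -> ~3TB usable
print(raid5_usable_tb(6, 1.2))   # after the swap: 6 x 1.2TB -> ~6TB usable
```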
 
If it's anything like HP kit, once all disks are replaced you will need to go into the RAID controller software and extend the array. Once this is done, the extra space will be available to the OS.
 
Best practice would be to backup and restore; though that clearly comes with downtime, it's the best way of safeguarding the data against corruption. 3TB isn't really a lot of data.

Admittedly I am guilty of performing the old disk shifteroo :)


This. Especially as you're only one drive failure away from losing the data. I had a brand new WD Red fail within the first hour of use.
 
Does that RAID controller support expanding the array size?

It is a PERC H700, btw:

Online Capacity Expansion (OCE) can be done in two ways. If there is a single virtual disk in a disk group and free space is available, the virtual disk's capacity can be expanded within that free space. If a virtual disk is created and it does not use the maximum size of the disk group, free space is available. Free space is also available when a disk group's physical disks are replaced by larger disks using the Replace Member feature. A virtual disk's capacity can also be expanded by performing an OCE operation to add more physical disks.

I think I need to use something called OMSA:

Insert the drive hot (never power down to replace or introduce a hot-swappable drive)
Launch OMSA and make sure it shows under Physical Disks as "ready" (not foreign)
Go to Storage, PERC, Virtual Disks, and choose Reconfigure from the dropdown menu for the RAID 5
The disk will be added to the RAID 5
 
I think the RAID will just reconfigure the 1.2TB as 600GB; it will do that in HP servers. You can use any disk in a RAID as long as it's the same RPM and larger than the other disks in the array. If you add a disk using another slot, it will be able to expand the RAID to include the disk, but I think it will still only use 600GB of the 1.2TB, otherwise the other 600GB disks could not act as backup for the 1.2TB disk if it were to fail.

Even on an HP P2000 I would not recommend expanding the size of a RAID by adding a disk, not if it is production data, as the performance will drop for days. It is always a better idea to create a new one if you have the slots; if you don't have the slots, then build a new server and migrate the data, or use the backup and restore method described earlier. If you have three spare slots, build a new 1.2TB RAID 5 or 6, migrate the data, remove the old disks, and then do a one-time expansion of the 1.2TB RAID with the three new disks.
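The "only uses 600GB of the 1.2TB" point comes down to simple arithmetic: every member of a RAID 5 set contributes the same amount of space, so the array is limited by its smallest disk. A hypothetical illustration (helper name is made up):

```python
def raid5_usable_gb(disk_sizes_gb):
    """Usable GB of a RAID 5 array: (n - 1) stripes, each limited by the smallest disk."""
    return (len(disk_sizes_gb) - 1) * min(disk_sizes_gb)

# One 1.2TB disk dropped into an array of 600GB disks: only 600GB of it is used.
print(raid5_usable_gb([600, 600, 600, 600, 600, 1200]))  # 3000 GB, not 3600
```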
 
Sounds good... I was just checking, as not all controllers support it with ease... but it looks as though that one does.

So yes, I would replace one drive at a time, wait for the rebuild to complete, replace the next drive, etc. until they're all done and rebuilt... then you should be able to expand the array.


You don't have to have the same RPM for any RAID array... really.

It's just that the array's performance will usually slow down to whatever the slowest drive can do.
 
RAID 5 and >1TB disks is generally considered a no-go. Dell certainly do not recommend this in their storage arrays; it may be worth checking that they will support you in this configuration.
 
A couple of years or so, longer according to some people. The amount of I/O the disks have to undergo when rebuilding a RAID 5 array massively increases the chance of a URE occurring, and bang goes the array. Similarly when writing large amounts of data, the disks get hammered.

And add in the performance hit whilst rebuilding (drive failures do happen, especially with consumer-grade disks). RAID 10 these days makes so much more sense for larger drives.

Just because you've seen them doesn't mean they're best practice. We have a WD Sentinel at work, 4 x 4TB disks in RAID 5 (for some reason it won't do 10), and we've had two drive failures recently. WD replaced the disks quickly, but we were bricking it both times it was rebuilding: well over 24 hours each time.
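The URE worry above can be put into rough numbers. Assuming the commonly quoted 1-in-10^14 bits unrecoverable read error rate for consumer SATA disks (enterprise disks are usually rated 1 in 10^15), a back-of-the-envelope sketch:

```python
def rebuild_ure_probability(data_read_tb: float, ure_rate: float = 1e-14) -> float:
    """Probability of hitting at least one URE while reading data_read_tb terabytes."""
    bits_read = data_read_tb * 1e12 * 8          # TB -> bits
    return 1 - (1 - ure_rate) ** bits_read

# Rebuilding a 6 x 1.2TB RAID 5 means reading all 6TB on the 5 surviving disks:
print(f"{rebuild_ure_probability(6.0):.0%}")     # roughly a 38% chance of hitting a URE
```

With a 1e-15 rated disk the same rebuild works out to around a 5% chance, which is part of why the spec-sheet error rate matters so much for large RAID 5 sets.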
 
Since when? I've seen multiple RAID 5 arrays with drives exceeding 1TB...

For about two years plus; similarly, RAID 50 isn't recommended, for obvious reasons. Where I've not been clear is that this was mainly aimed at SATA spindles. I'm personally of the opinion that any RAID design should consider best practice and the impact of disk failure, rebuilds, and their duration. Of course RAID 5 works, and yes, people use it, but it is not recommended for production use. Regardless, Dell should confirm for certain, as per my earlier post, rather than you relying on here for production information (no offence intended).
 
Yes, but I found a problem: if you want to use RAID 0 in ESXi, you will have to reset your RAID and install the new SAS, or ESXi won't detect your drive as a whole.
 