Please spec me HDDs for a DIY server

I have built a server and I am looking for some hard disks to fill it.

I recently built another similar server with 6x 2TB Western Digital WD20EARS, but there must be better HDDs out now in terms of speed, efficiency, reliability and cost?

I was thinking the Seagate 3TB ST3000DM001 (7200rpm, 64MB cache), which I have seen for around £115 a disk.


The server (if this info is helpful):

Supermicro X8SIL-F, Xeon X3450, 32GB ECC RDIMM

The case can fit 9 standard 3.5" HDDs, plus I have installed a Vertex 3 SSD (60GB) to use as cache in ZFS.

RAID controller is an IBM M1015 flashed to an LSI 9240-8i.
It has 8 ports, which I have passed through ESXi 5 to OI + Napp-it for a ZFS RAID10 pool used as an NFS datastore. Unfortunately, I may have to use one port for the cache SSD, leaving 7 for the ZFS RAID; I'd hope to find a way around this to have all 8 ports for storage, but that's another problem. I also hope there are no issues with me exceeding 2TB per disk.

The mainboard ports can be used for datastores but not passed through; I need to look into this.

Thank you for any advice you can offer
 
It's for an architects' practice, so we mainly need:
Fast access to CAD, InDesign, Photoshop and 3D renders
Searching through a lot of project photos
Searching PDFs of OCR'd admin scans and product literature
 
You might want to check out WD's new Red range, which are purpose-designed with small servers and NASes in mind - they're supposed to have optimised firmware, a higher MTBF and are specced for continuous 24/7 operation (unlike "regular" consumer drives).

All of this comes at a (slightly) higher price though, and whether you'd actually see the practical benefits is anyone's guess... there's a thread at Hardforum's data storage section if you're interested: http://hardforum.com/showthread.php?t=1704476 :)
 

For those uses - how many simultaneous connections?

More than 2 and I would recommend paying the extra for faster SAS drives...
 

2TB Reds £100, Green £85
3TB Reds £150, Green £118

So if I go for 8x 3TB then Reds are only about £250 more overall for peace of mind, which is OK.
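For what it's worth, the arithmetic on the Red premium works out like this, using the per-drive prices quoted above:

```shell
# Rough cost check using the prices from the thread (3TB Red £150, Green £118)
red=150; green=118; drives=8
extra=$(( drives * (red - green) ))
echo "Extra for ${drives}x 3TB Reds: £${extra}"   # prints £256
```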

I'll probably go for 6x 3TB in RAID10 (9TB usable) and in the remaining ports have an SSD for ZFS cache and maybe another 3TB drive as a hot spare?

The SSD for ZFS cache is currently a 60GB Vertex 3. I also have a 120GB Vertex 3 which I was going to fit in my PC as a boot drive; I could put this in the server instead, or perhaps I should buy an SSD specifically suited to cache use?
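In ZFS terms that layout would look something like the sketch below. The pool name and device IDs are made up for illustration, not the actual layout, and the cache (L2ARC) device can be added or removed from a live pool at any time:

```shell
# Sketch only: pool name and device IDs are placeholders.
# Three mirrored pairs give the RAID10-style pool described above:
zpool create tank mirror c4t0d0 c4t1d0 mirror c4t2d0 c4t3d0 mirror c4t4d0 c4t5d0

# Add the SSD as L2ARC read cache (safe to add/remove on a live pool):
zpool add tank cache c4t6d0

# And a hot spare, if a port is free for it:
zpool add tank spare c4t7d0
```

These are admin commands that need a live ZFS system with those devices present, so treat them as a template rather than something to paste in verbatim.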

There are 5 of us now, and unlikely to be more than 10 over the next 3 years. We've been working off a WHS v1 with pooled drives, each over 5 years old, so we expect a pretty hefty improvement in all areas.

I know what you're saying, but SAS drives are too expensive for us; the cheapest I could find was double the SATA price:
Seagate Constellation ES.2 2TB £200
Seagate Constellation ES.2 3TB £300
 
Fair enough, definitely go for the Reds then - worth the extra.

I'd recommend RAID 5 over RAID 10... you'll get a bit of a speed bump and won't lose as much capacity, but still have the redundancy. It's very unlikely for more than one drive in a bank of 6 to fail close in time to another.

If you really want to survive two drive failures... go for RAID 6.
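For 6x 3TB drives, the usable-capacity difference between the options works out roughly as follows (ignoring filesystem and formatting overhead):

```shell
# Usable capacity for 6 drives of 3TB each, ignoring overhead
n=6; size=3
raid10=$(( n / 2 * size ))   # mirrored pairs: half the drives hold copies
raid5=$(( (n - 1) * size ))  # single parity: one drive's worth lost
raid6=$(( (n - 2) * size ))  # double parity: two drives' worth lost
echo "RAID10: ${raid10}TB  RAID5: ${raid5}TB  RAID6: ${raid6}TB"
```

So RAID 5 gives 15TB, RAID 6 gives 12TB, and RAID 10 gives 9TB from the same six disks.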
 

I was tempted by RAID 5 and 6, but concerned about the time it takes to rebuild the RAID if a drive fails, whether it can be done live, and what sort of performance hit there would be while it's doing this. I'm not sure how each option compares in speed on my actual controller.
 
Depends on the speed of the drive and how much data it has to copy / what form the data takes (i.e. a few big files will be quicker than lots of small files)... can't really quantify it.

A failed drive in a raid-5 array or two failed drives in a raid-6 array doesn't stop you from being able to access the data... so you can continue working through that day, albeit without the redundancy.

I would then replace the faulty drive and rebuild overnight.
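As a very rough lower bound, a full rebuild is limited by how fast the replacement drive can be written, so for a 3TB drive at an assumed ~100MB/s sustained (both figures are assumptions, not measured):

```shell
# Back-of-envelope: time to write a full 3TB drive at ~100MB/s sustained
tb=3; mb_per_s=100
secs=$(( tb * 1000 * 1000 / mb_per_s ))   # total MB / (MB/s)
echo "~$(( secs / 3600 )) hours minimum"   # prints ~8 hours
```

In practice a ZFS resilver only copies allocated blocks rather than the whole disk, so a part-full pool can rebuild faster than this; a busy pool or lots of small files can make it slower.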

I can see that might not be the best option, so RAID 6 might be the best in that scenario: you can lose one drive and still have redundancy until you rebuild overnight?
 
RAID 6 with Reds sounds like a good option (I assume Napp-it does RAID 6).
I don't know how long rebuilding a 6x 3TB array takes, or how a rebuild works with the future addition of more drives?
So that leaves performance on my controller as a deciding factor. I'm just passing the controller through with hardware RAID disabled, so ZFS is doing all the work. So I'd assume my server's CPU, memory and cache disk are the key factors in performance, and the controller is just I/O?
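If it helps: ZFS's equivalent of RAID 6 is RAIDZ2, which Napp-it exposes, so a 6-drive double-parity pool would look something like this at the command line (pool name and device IDs are placeholders):

```shell
# Sketch: 6-drive double-parity (RAID6-equivalent) pool; names are placeholders
zpool create tank raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0

# After swapping a failed disk, resilver progress is reported here:
zpool status tank
```

Since ZFS is doing the parity work, the CPU/RAM assumption above is right: the flashed M1015 in IT mode is just presenting the disks.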
 