Home Server + 10G Ethernet

Decided I wanted to build a server/NAS to help with backups etc., so I've cobbled together all of my spare parts and collected a few freebies over the last few months. Here's how it looks at present:

Gigabyte GA-MA790FXT-UD5P
AMD Phenom II X4 955
Corsair Dominator 4GB 1600
Nvidia Quadro FX3500
LSI MegaRaid SAS 8888ELP
Delta 850W PSU (WTX form factor, so it may well be swapped for my Corsair TX650 if it'll fit in the Phanteks; otherwise I'll probably have to buy another PSU)
Belkin USB 3.0 card
Prolimatech Megahalems
Coolermaster CM690-II

Hard drive wise, at present I have a load of different disks ranging from 500GB to 1TB (6x 500GB, 1x 640GB and a 1TB). I will probably be looking at getting several WD Se 2TB disks to make up the array. The card doesn't support drives larger than 2TB, but 8 of those would give me more than enough storage space at present. If I find a suitable expander, I should be able to add more in the future if needed (or just get a different HBA).

 
Put most of it together this morning. Trying to keep it reasonably neat, though I imagine cable chaos will ensue once I have all 12 disks fitted.



My old Phenom II X4 955, which I bought back in 2008 and ran at 3.7GHz right up until January this year, has a new lease of life. I will probably drop the clocks back down and see how low I can take the voltage (being an old C2 stepping, it liked lots of volts). I get the feeling it'll be overkill for the intended purpose.


An HP-branded LSI MegaRAID 8888ELP 8-port SAS HBA and an Nvidia Quadro FX3500 (again, probably more powerful than the job needs).


Tried the PSU in the Phanteks and it doesn't fit either (even though the bolt locations are ATX, the physical size is WTX, which is significantly bigger than ATX). Nicely made supply though.


Unfortunately, upon testing this evening I have not been able to get any life from it; it just powers on and sits there with the GPU fan spinning at full tilt. Tried some basic bits like resetting the CMOS, trying one memory module at a time and disconnecting all drives, but no luck as yet. As it stands, the GPU and PSU are the only unknowns. The rest of the system was working when dismantled. Will be looking into it further tomorrow.
 
Spent this morning doing the testing and have so far found that the GPU, RAID card and memory are all still working perfectly. That leaves the PSU, motherboard and processor, though the motherboard, processor and RAM were all working happily together before, which leaves me with major doubts about the PSU. I'll need to pull the PSU out of the other PC to test tonight. If that is where the fault lies, I'll have to buy a new PSU, as I think my only spare is an old Antec TruePower 430, which pre-dates 24-pin ATX and EPS.

I spent some time thinking about disk configuration the other day and I reckon 8x 2TB drives on the HBA in RAID 5 (maybe 6) should suffice for main storage (I may start out with 4 due to cost and expand later). I can then use the 4x 500GB Seagate Constellation ES2 drives I already have in a RAID 10 array on the onboard SATA for the OS, giving 1TB mirrored, which should fit within the non-EFI constraints for booting. I'll need to get two ICYBOX backplanes that fit 3x 3.5" drives into 2x 5.25" bays. I'm hoping to run Ubuntu Server 14.04 on it too, though I'm not sure on the file system type yet.
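Rough capacity sums for those options (simple parity maths, before any filesystem overhead; the figures just follow from the drive counts above):

Code:
RAID 5,  8x 2TB:    (8 - 1) x 2TB   = 14TB usable, tolerates 1 failed disk
RAID 6,  8x 2TB:    (8 - 2) x 2TB   = 12TB usable, tolerates 2 failed disks
RAID 10, 4x 500GB:  (4 / 2) x 500GB = 1TB usable (mirrored), for the OS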
 
PSU swap has confirmed my suspicions: the WTX supply has a different pinout and therefore doesn't play nicely with an ATX board. So I've finally got it to boot up and it all seems to be working, except that the LSI card is not detecting any physical drives even when there are 4 known-good disks connected. I've tried swapping the SAS connector over and I've checked the config as best I can at present, but no luck yet. Either I'm doing something wrong or it's a borked SAS to 4x SATA cable.

I am also going to have to get some decent fans to cool this properly as it runs quite warm.
 
No idea. The model number for the cable is 79576-3007. This is the info from the Molex site.

Code:
Part Detail

General

Status	Planned for Obsolescence
Category	Cable Assemblies
Series	79576
Assembly Configuration	Dual Ended Connectors
Connector to Connector	Serial ATA-to-iPass
Overview	iPass™ Connector System
Product Name	iPass™, Mini Multi-Lane, PCI Express*, SAS, Serial ATA
UPC	822350138345

Physical

Cable Length	1.0m
Circuits (Loaded)	36
Color - Resin	Black, Natural
Gender	Male-Male
Lock to Mating Part	Yes
Material - Metal	Phosphor Bronze
Material - Resin	Low Density Polyethylene, Polyester
Net Weight	106.700/g
Packaging Type	Bag
Pitch - Mating Interface	0.80mm
Single Ended	No
Termination Interface: Style	Crimp or Compression, Surface Mount
Wire Insulation Diameter	N/A
Wire Size AWG	28
Wire/Cable Type	Twin-ax
 
Cheers for the help. Thankfully I didn't buy that cable, it came with the card. Will be buying two proper forward cables next week.

The OS is going to be Ubuntu Server 14.04; not sure what the equivalent of StableBit DrivePool would be.
 
I've now placed an order for most of the outstanding bits. So to fill in the gaps in the specs:

4x 2TB WD Se drives (will expand to 8 further down the road; £800 on disks is a bit much)
4x 500GB Seagate Constellation ES2
2x LSI CBL-SFF8087OCF-06M forward breakout cables
2x ICYBOX 553SK SAS/SATA backplanes
1x Noctua NF-F12 IndustrialPPC 3000RPM PWM
5x Scythe Kama Flow2 1900RPM Fan - 120mm (hoping they are basically the same as the old S-flex series)
1x NMB-MAT 4715KL-04W-B40 120x38mm fan
1x Yate Loon D14BH-12 140x25mm fan
550W Super Flower Golden Green HX PSU
 
I prefer enterprise-class drives. Since I intend to use hardware RAID 5, I thought it wise to use disks designed for parity RAID. Red Pros could do this, but they cost more than the Se's do (at which point it would make more sense to go with the superior Re instead).

No idea what it's going to run yet. It will certainly get used for backups and common storage for all the machines around the house. Anything that I want properly backed up will be archived off onto tertiary media (be that a portable hard drive, Blu-ray or LTO cartridge).
 
True, Reds have TLER, which helps, but their URE rate is <1 in 10^14, which isn't good for parity RAID. The Se is <10 in 10^15, which is massively better (the Re is <10 in 10^16 and the Xe better again at <10 in 10^17). Basically, once every 100,000,000,000,000 (1 in 10^14) bits, the disk will not be able to read back a sector. One hundred trillion bits is 12.5TB (if my whole array is 14TB, that pretty much guarantees its doom). Factor that up to the Se level and it's a more healthy 1250TB, the Re is 12.5PB and the Xe a huge 125PB. I'm not sure how much of a concern this is in soft RAID, but it's a big risk that I'd rather not take. I'm still not sure whether to go with RAID 5 or 6 though.
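For anyone wanting to check the 12.5TB figure, the conversion is just bits to decimal terabytes:

Code:
1 error in 10^14 bits:  10^14 / 8 = 1.25 x 10^13 bytes = 12.5TB read per expected unrecoverable error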
 
Bits from OcUK arrived today.


Finally starting to resemble a server. Not sure whether to have the side panel fan pull air in or blow it out.



I've tried to keep the cables reasonably tidy. No cables are tied in yet, as I'm still waiting on the other parts. I reckon the SAS cables are going to be a nightmare to keep neat.
 
Job done. The second backplane was an extremely tight fit and took some serious persuasion. The fans are a little loud, but not a patch on a real server. It now works correctly and the RAID card is finally seeing the disks connected to it (including those on the backplanes, after I realised I'd plugged the wrong ports in).



The back could possibly be a bit tidier, but it could have been far worse (so it'll do, as it shouldn't impede airflow or affect functionality).
 
Went through the ordeal of trying to install Ubuntu Server onto the RAID 10 array. First off, the installer couldn't find it at all, just the LSI card and the RAID 5 array. Tried setting the onboard SB750 SATA back to AHCI and IDE modes, then gave up and pulled the LSI card out, at which point the drive list showed empty. Gave up on the GUI method of installing and opted to try the CLI. Some moderate success was had there using parted and mdadm, as the devices were present (/dev/sda, sdb, sdc, sdd). Managed to fudge my way through creating a software RAID with mdadm as device /dev/md0. Went back to the installer and hey presto, the device was listed as a 999GB disk ready to partition. Tried to set it going using the partition tool and it failed to write changes to the disk (even after I waited for the synchronisation to finish). Six hours wasted last night.
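For reference, the CLI route was roughly along these lines (a sketch only, assuming the four Constellations come up as sda to sdd):

Code:
# give each disk a single RAID partition (repeat for sdb, sdc and sdd)
parted -s /dev/sda mklabel gpt
parted -s /dev/sda mkpart primary 1MiB 100%
parted -s /dev/sda set 1 raid on

# build the 4-disk RAID 10 array as /dev/md0
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# watch the initial synchronisation
cat /proc/mdstat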

Going to try and run the motherboard "fakeraid" again and see if I can find it using the CLI (something I didn't try). I can't work out how to get Linux software RAID to work. I can't use the LSI as I don't have enough SAS ports to run 12 disks, and an expander is quite an expensive way to get one extra port. Might just give up and resort to Windows Server instead.
 
I've run into some difficulties trying to set it up, as I can't persuade GRUB2 to boot from an mdadm RAID 10 array. I've been trying to sort it out for three days and have finally given up. I can see a few different choices available:

1. Buy an HP SAS expander card and use the LSI to run the RAID 10 array in addition to the RAID 5 array
2. Bin the whole RAID 10 array idea and just use my spare SSD for the OS
3. Run Windows Server instead

The choice that seems most logical to me is #2, but I'll buy the expander anyway, as dumping the 4 OS disks out of the case would let me use all 12 bays for the storage array (or more, should I upgrade to a larger case).
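For completeness, the route I was fighting with on Ubuntu boils down to something like this (a sketch only, and no guarantees given that it never booted for me):

Code:
# record the array so it assembles at boot, then rebuild the initramfs
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u

# install GRUB to every member disk so any one of them can start the boot
for disk in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do grub-install "$disk"; done
update-grub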
 
After some considerable chaos trying to get things to work via Ubuntu, I gave up and moved to openSUSE, and things have been plain sailing from there. I binned the RAID 10 array and used the spare SSD for the OS. This has freed up 4 more drive bays, so I have also gone and picked up an Intel RES2SV240 SAS expander card to increase my port capacity to 16 dual-linked. I can expand that to 20 if I single-link it, and adding a second expander to the other port would give me 40. I can also daisy-chain expanders, but I somewhat doubt I'll ever need that much space. When testing the array it completely saturated the gigabit LAN connection, managing a sustained write speed of 109MB/s for large video files. Smaller files dropped down into the mid 50s. Read speeds seem to be nice and high at ~400-500MB/s (this was using just 4 disks).
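The local sequential figures can be sanity-checked with a crude dd run against the mounted array (the mount point below is just an example, and this is nothing like a proper benchmark):

Code:
# sequential write then read of an 8GB test file, bypassing the page cache
dd if=/dev/zero of=/media/storage/ddtest bs=1M count=8192 oflag=direct
dd if=/media/storage/ddtest of=/dev/null bs=1M iflag=direct
rm /media/storage/ddtest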

I also got remote desktop via VNC working.


I also dropped another 2 disks into the array this morning to expand the storage to a shade under 10TB. Reconstructing the array is taking a very long time though; it's taken 12 hours to get to ~40%. The ICYBOX backplanes look great when running.
 
I'll be running mine 24/7 as it's healthier for the disks not to be spinning up and down frequently (and I'd rather they last well, as they are a huge cost in this project). I've dropped the voltage to the CPU and lowered the peak clock speed as I don't need it, but I reckon power usage will still be quite high. It will need to be fine-tuned to get it to an optimum level.
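However the clocks get dropped, the scaling side can be checked and tweaked from the OS with cpupower (a rough sketch, not a tuned setup):

Code:
# show the scaling driver, governor and frequency limits in use
cpupower frequency-info
# use the ondemand governor so the CPU only ramps up under load
cpupower frequency-set -g ondemand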

Installed Plex, hoping to configure it so that I can use it to stream old recordings lifted off my Humax PVR. It works perfectly to my Panasonic plasma, but I've not got it to talk to VLC via UPnP yet (a non-issue really, as I can play the files directly via the shared storage). Still need to get the FTP server side of things running though.

6 disks finally migrated and fully initialized. Performance is pretty good. Intel expander should be here soon, though I have no need for it yet as I don't have enough disks to require it.
 
Tested out FTP today and that works perfectly too, uploading to it at ~8-10MB/s and downloading from it at ~2-3MB/s. I need to tinker with the settings a bit to make sure it is secure.
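I haven't settled on the final config yet; assuming vsftpd, the obvious lockdown options are along these lines (a sketch, not my actual settings):

Code:
# /etc/vsftpd.conf - sketch of the basic hardening options
anonymous_enable=NO
local_enable=YES
write_enable=YES
chroot_local_user=YES
# only allow explicitly listed users to log in
userlist_enable=YES
userlist_deny=NO
userlist_file=/etc/vsftpd.user_list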

Intel SAS expander arrived today. May well look into fitting it over the weekend.
 
The Intel SAS expander is now fitted and working perfectly. I was expecting to have to rebuild the array from scratch, but the controller picked up the virtual drive as if nothing had changed and it's working exactly as before. A minor headache came when trying to fit the card, as it requires a PCIe x4-sized slot and my Gigabyte board only has two x16 slots and three x1 slots. I've had to use the vertical mount on the case for it until I can come up with something suitable to help neaten the cabling again (considering an x1 to x16 riser card for low-profile cards). I also crimped the connectors for the 38mm NMB-MAT fan to aid cooling, as I had to lose the side panel fan to fit the expander.
 
Added the third breakout cable and adjusted the fan power connections so that they are a tad quieter. Also installed the proper Nvidia drivers, which has finally cured the fan on the Quadro running at full tilt all the time. It's reasonably quiet now. Got 7 disks connected giving me 12TB, with room for another 5 (I've put some spares in for now to simulate a full case so I can monitor temperatures).
 
I did get Ubuntu working, although I'm glad I ditched Server, as I couldn't work entirely from the CLI; I'd much rather have a GUI. openSUSE was a breath of fresh air compared with Ubuntu; the YaST configuration panel really is brilliant for sorting out most network functions. Best of all, it was fairly easy to install the LSI MegaRAID management software so that I can manage things from across the network. I've also been using the server for transcoding MKVs with HandBrake rather than tying up my main PC.
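HandBrakeCLI makes the transcodes easy to script; a minimal example (the quality value is just a placeholder, not a recommendation):

Code:
# constant-quality x264 encode of a single MKV
HandBrakeCLI -i recording.mkv -o recording-small.mkv -e x264 -q 20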

I moved on from the RAID 10/1 ideas and just stuck with a single SSD for the OS and RAID 5 spanned across 7 disks (so far) for the storage array, which is mounted under /media rather than /home. I have been wondering whether RAID 6 with a 64/128k stripe size may have been a more logical choice though (I went for 256k).
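The /media mount is just an fstab entry along these lines (the UUID is a placeholder, and the mount point and xfs are only examples):

Code:
# /etc/fstab - storage array mounted under /media (UUID comes from blkid)
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /media/storage  xfs  defaults,nofail  0  2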
 