Home Server + 10G Ethernet

Kei

Soldato
Joined: 24 Oct 2008
Posts: 2,751
Location: South Wales
Decided I wanted to build a server/NAS to help with backups etc., so I've cobbled together all of my spare parts and collected a few freebies over the last few months. Here's how it looks at present:

Gigabyte GA-MA790FXT-UD5P
AMD Phenom II X4 955
Corsair Dominator 4GB 1600
Nvidia Quadro FX3500
LSI MegaRaid SAS 8888ELP
Delta 850W PSU (WTX form factor, so it may well be swapped for my Corsair TX650 if that fits in the Phanteks; otherwise I'll probably have to buy another PSU)
Belkin USB 3.0 card
Prolimatech Megahalems
Coolermaster CM690-II

Hard drive-wise, at present I have a load of different disks ranging from 500GB to 1TB (6x 500GB, 1x 640GB and a 1TB). I will probably be looking at getting several WD Se 2TB disks to make up the array. The card doesn't support drives larger than 2TB, but 8 of those would give more than enough storage space for me at present. If I find a suitable expander, I should be able to add more in the future if needed (or just get a different HBA).
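
For the sizing, a quick back-of-envelope sketch in Python (my own sums, nothing from the card's documentation; RAID 5 gives up one disk to parity, RAID 6 gives up two):

Code:
# Rough usable-capacity sums for the planned array.
# RAID 5 loses one disk to parity, RAID 6 loses two.

def usable_tb(disks, disk_tb, parity_disks):
    """Usable capacity in TB for a striped parity array."""
    return (disks - parity_disks) * disk_tb

for disks in (4, 8):  # start with 4 drives, expand to 8 later
    r5 = usable_tb(disks, 2.0, 1)
    r6 = usable_tb(disks, 2.0, 2)
    print("%dx 2TB: RAID 5 = %.0fTB, RAID 6 = %.0fTB" % (disks, r5, r6))

# 4x 2TB: RAID 5 = 6TB, RAID 6 = 4TB
# 8x 2TB: RAID 5 = 14TB, RAID 6 = 12TB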

 
Put most of it together this morning. Trying to keep it reasonably neat, though I imagine cable chaos will ensue once I have all 12 disks fitted.



My old Phenom II X4 955, which I bought back in 2008 and ran at 3.7GHz right up until January this year, has a new lease of life. I will probably drop the clocks back down and see how low I can drop the voltage (being an old C2 stepping, it liked lots of volts). I get the feeling it'll be overkill for the intended purpose.


An HP-branded LSI MegaRAID 8888ELP 8-port SAS HBA and an Nvidia Quadro FX3500 (again, probably too powerful for the purpose).


Tried the PSU in the Phanteks and it doesn't fit either. (Even though the bolt locations are ATX, the physical size is WTX, which is significantly bigger than ATX.) Nicely made supply though.


Unfortunately, upon testing this evening, I have not been able to get any life from it; it just powers on and sits there with the GPU fan spinning full tilt. Tried some basic bits like resetting the CMOS, trying one memory module at a time and booting with no drives connected, but no luck as yet. As it stands, the GPU and PSU are the only unknowns; the rest of the system was working when dismantled. Will be looking into it further tomorrow.
 
Spent this morning doing the testing and have so far found that the GPU, RAID card and memory are all still working perfectly. That leaves the PSU, motherboard and processor. Given that the motherboard, processor and RAM were all working happily together before, I have major doubts about the PSU. I'll need to pull the PSU out of the other PC to test tonight. If that is where the fault lies, I'll have to buy a new PSU, as I think my only spare is an old Antec TruePower 430 which pre-dates 24-pin ATX and EPS.

I spent some time thinking about disk configuration the other day and I reckon 8x 2TB drives on the HBA in RAID 5 (maybe 6) should suffice for main storage (may start out with 4 due to cost and expand later). I can then use the 4x 500GB Seagate Constellation ES.2 drives I already have in a RAID 10 array on the onboard SATA for the OS, giving 1TB mirrored, which should fit within the non-EFI constraints for bootup. I'll need to get two ICYBOX backplanes that fit 3x 3.5" drives into 2x 5.25" bays. I'm hoping to use Ubuntu Server 14.04 on it too; not sure on the file system type yet though.
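
To sanity-check that boot constraint, a small sketch (my own figures; the ~2TB number is just the 32-bit LBA count times 512-byte sectors that an MBR partition table can address, which is what matters when booting without (U)EFI):

Code:
# Check the OS array plan against the MBR boot limit.
MBR_LIMIT_TB = 2**32 * 512 / 1e12  # 32-bit LBA x 512-byte sectors ~= 2.2TB

def raid10_usable_tb(disks, disk_tb):
    """RAID 10 stripes across mirrored pairs, so half the raw space is usable."""
    assert disks % 2 == 0, "RAID 10 needs an even number of disks"
    return disks * disk_tb / 2

os_tb = raid10_usable_tb(4, 0.5)
print("OS array: %.1fTB usable, MBR limit ~%.1fTB, fits: %s"
      % (os_tb, MBR_LIMIT_TB, os_tb < MBR_LIMIT_TB))
# OS array: 1.0TB usable, MBR limit ~2.2TB, fits: True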
 
The PSU swap has confirmed my suspicions: the WTX supply has a different pinout and therefore doesn't play nice with an ATX board. So I've finally got it to boot up and it all seems to be working, except that the LSI card is not detecting any physical drives even when there are 4 known-good disks connected. I've tried swapping the SAS connector over and I've checked the config as best I can at present, but no luck yet. Either I'm doing something wrong or it has to be a borked SAS-to-4x SATA cable.

I am also going to have to get some decent fans to cool this properly as it runs quite warm.
 
No idea. The model number for the cable is 79576-3007. This is the info from the Molex site.

Code:
Part Detail

General

Status	Planned for Obsolescence
Category	Cable Assemblies
Series	79576
Assembly Configuration	Dual Ended Connectors
Connector to Connector	Serial ATA-to-iPass
Overview	iPass™ Connector System
Product Name	iPass™, Mini Multi-Lane, PCI Express*, SAS, Serial ATA
UPC	822350138345

Physical

Cable Length	1.0m
Circuits (Loaded)	36
Color - Resin	Black, Natural
Gender	Male-Male
Lock to Mating Part	Yes
Material - Metal	Phosphor Bronze
Material - Resin	Low Density Polyethylene, Polyester
Net Weight	106.700/g
Packaging Type	Bag
Pitch - Mating Interface	0.80mm
Single Ended	No
Termination Interface: Style	Crimp or Compression, Surface Mount
Wire Insulation Diameter	N/A
Wire Size AWG	28
Wire/Cable Type	Twin-ax
 
If you want a flexible storage solution where you can expand/swap drives later, look at something like StableBit DrivePool. I've been using it on my home server for a couple of years.
 
A quick Google suggests that cable is a reverse breakout; you'll need a forward breakout for connecting to drives rather than a backplane.

Plenty of the correct type on eBay if you search for:

sff-8087 to sata forward breakout cable
 
Cheers for the help. Thankfully I didn't buy that cable; it came with the card. Will be buying two proper forward breakout cables next week.

The OS is going to be Ubuntu Server 14.04; not sure what the equivalent of StableBit DrivePool would be.
 
I've now placed an order for most of the outstanding bits. So to fill in the gaps in the specs:

4x 2TB WD Se drives (will expand to 8 further down the road; £800 on disks is a bit much)
4x 500GB Seagate Constellation ES.2
2x LSI CBL-SFF8087OCF-06M forward breakout cables
2x ICYBOX 553SK SAS/SATA backplanes
1x Noctua NF-F12 IndustrialPPC 3000RPM PWM
5x Scythe Kama Flow2 1900RPM Fan - 120mm (hoping they are basically the same as the old S-Flex series)
1x NMB-MAT 4715KL-04W-B40 120x38mm fan
1x Yate Loon D14BH-12 140x25mm fan
550W Super Flower Golden Green HX PSU
 
I prefer enterprise-class drives. Since I intend to use hardware RAID 5, I thought it wise to use disks designed for parity RAID. Red Pros could do this, but they cost more than the Se drives do (it would have made more sense to go with the superior Re instead).

No idea what it's going to run yet. It will certainly get used for backups and common storage for all the machines around the house. Anything that I want properly backed up will be archived off onto tertiary media (be that a portable hard drive, Blu-ray or LTO cartridge).
 
True, Reds have TLER which helps, but their URE rate is <1 in 10^14, which isn't good for parity RAID. The Se are <10 in 10^15, which is massively better (the Re are <10 in 10^16 and the Xe better again at <10 in 10^17). Basically, once every 100,000,000,000,000 bits (1 in 10^14), the disk will not be able to read back a sector. One hundred trillion bits is 12.5TB (if my whole array is 14TB, that pretty much guarantees its doom). Factor that up to the Se level and it's a healthier 1250TB, Re is 12.5PB and Xe a huge 125PB. I'm not sure how much of a concern this is in soft RAID, but it's a big risk that I'd rather not take. I'm still not sure whether to go with RAID 5 or 6 though.
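
For anyone wanting to put numbers on that risk, here's a rough sketch (my own back-of-envelope Poisson approximation; the per-bit rates are just illustrative figures to plug in, not vendor gospel):

Code:
import math

# Odds of hitting at least one unrecoverable read error (URE) while
# reading a whole array end to end, e.g. during a RAID 5 rebuild.

def p_ure_full_read(array_tb, errors_per_bit):
    """Probability of >= 1 URE over one full read (Poisson approximation)."""
    bits = array_tb * 1e12 * 8
    return 1 - math.exp(-errors_per_bit * bits)

for exp in (14, 15, 16):
    p = p_ure_full_read(14.0, 10.0 ** -exp)
    print("1 in 10^%d: %.0f%% chance of a URE reading 14TB" % (exp, p * 100))

# 1 in 10^14: 67% ... 1 in 10^15: 11% ... 1 in 10^16: 1%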
 
Bits from OcUK arrived today.


Finally starting to resemble a server. Not sure whether to have the side panel fan pull air in or blow it out.



I've tried to keep the cables reasonably tidy. No cables are tied in yet, as I need to wait for the remaining parts to arrive. I reckon the SAS cables are going to be a nightmare to keep neat.
 