PC-based NAS problems.

Associate · Joined: 10 Nov 2004 · Posts: 2,237 · Location: Expat in Singapore
I have had a couple of threads concerning this but only for specific issues and under titles that may not directly point to the underlying project... i.e. building a PC based NAS.

I have started this thread to track the build and try to sort out any continuing issues with the help of the knowledgeable people here.

Ok so first off...

Aim

  • To have a 24*7 NAS capable of streaming 4-6 HD (1080p) movies concurrently via shares (i.e. not as a media server transcoding the video).
  • Needs to be able to hold 2+ TB of data plus have a backup and be optimised for read speed over write speed.
  • Prefer it to be based on Linux (Redhat or CentOS as they seem more logical in their set-up to me over the Debian systems).
  • Prefer to stay away from 'application' images as I will probably use the server for a few other things like DHCP, DNS, NZBget, Transmission etc.
  • Needs to be able to play without stutter to both AC Ryan HD Mini and WD HD Live TV media players.
Current hardware available for NAS

  • Intel C2D 8500 (prob 8500 but certainly around that level).
  • MSI socket 775 (1*PCI-e x16, 2+ PCI slots, 4*DDR2, 6*SATA2). Will confirm model when home.
  • PCI-e low end ATI video card (using VGA connection, DVI available).
  • 5* WD Caviar Green 1.5TB drives (one RMA with WD now).
  • 300GB Samsung 7.2k HDD
  • 1TB Seagate Barracuda 7.2K drive
  • 250GB Hitachi drive
  • Dell Perc6 SATAII RAID controller (8 drives, PCI-e x8). Awaiting cable delivery and a motherboard with 2*PCI-e x16 or onboard video.
  • Midi tower case
  • Icydock 3->5 drive hot swap bay
  • 2*Intel 1000Pro network cards (PCI)

Current set-up on the NAS

  • Built minus the PERC6 and one WD CG 1.5TB drive (RMA).
Software on NAS

  • CentOS 5.5
  • SMB for shares
  • mdadm RAID 5 array across the WD CG drives (rough setup commands for the array and share are sketched below).
  • 300GB drive as boot
  • 250GB drive as upload
  • 1TB drive as private backup (photos, documents etc).
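
Roughly, the array and the share look like this (just a sketch - device names and paths are placeholders, and it is shown with four drives while the fifth is away on RMA):

    # RAID 5 across the WD CG drives currently in the box
    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    mkfs.ext3 /dev/md0            # CentOS 5.5 default filesystem
    mount /dev/md0 /srv/media

    # Minimal read-mostly share in /etc/samba/smb.conf:
    #   [media]
    #       path = /srv/media
    #       read only = yes
    service smb start
    chkconfig smb on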

Current Network Hardware

  • Cat5e cables
  • HP Procurve 1810-24G 24 port switch
  • D-Link E3000 N+ dual band Gigabit switch (4 port).
Other network devices

  • D-Link DNS-323 (NZBGet newsgroup reader).
  • Workstation (Intel Pro 1000 network card)
  • Laptop (wireless N)
  • 2*WD HD Live TV media players
  • AC Ryan HD Mini media player
Final network layout (thanks for all the help on this)

Network-5.png


Current issues

  • Network transfers still slower than expected (approx 50MB/s read from and 60MB/s write to the NAS). A quick disk-vs-network isolation test is sketched below.
  • Media players all hang when watching movies (single stream). The AC Ryan hangs sooner than the WD (8GB MKV: AC Ryan = 10 minutes, WD = 30 minutes before the first hang). During a hang, one of the remaining 4 disks in the array has its activity light on solid; after 2-3 seconds the movie resumes and all 4 drives' activity lights flash as normal. This happens repeatedly.
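
To narrow down whether it is the array or the network, something like this should help (a sketch - the IP address and device name are placeholders, and iperf needs installing on both ends):

    # Raw network throughput between workstation and NAS (no disks involved)
    # On the NAS:
    iperf -s
    # On the workstation:
    iperf -c 192.168.1.10

    # Raw sequential read from the array, bypassing the network entirely:
    dd if=/dev/md0 of=/dev/null bs=1M count=4096
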
Attempted fixes

  • Checked the drives using WD's drive health check software. This highlighted the issue with the drive that has now been RMA'd to WD. No error was detected on the drive whose activity light stays solid during the hangs (a smartctl check is sketched below as a Linux-side alternative).
  • Changed the SATA cable.
  • Changed the motherboard SATA socket for that drive.
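
The same sort of health check can also be run from CentOS with smartmontools (a sketch - the device name is a placeholder and the smartmontools package needs to be installed):

    # SMART health, attributes and error log for the suspect drive
    smartctl -a /dev/sdc
    # Kick off a long offline self-test, then read the result later
    smartctl -t long /dev/sdc
    smartctl -l selftest /dev/sdc
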
Known 'not perfect' items

  • Not all the WD CG drives are the same revision; 3 are newer. The RMA'd drive is an older revision; the drive with the solid activity light is a newer revision.
  • The cable to the media player (AC Ryan or WD HD, as I swap them around when testing) is fairly old and is actually two cables joined with a coupler. The copy speeds were taken from my workstation connected to the NAS directly via the Procurve 1810 and new half-metre cables.
  • The media player is connected to the D-Link E3000 along with the D-Link DNS-323, which may be active pulling from the news servers, but only to a max of 1MB/s.
Proposed Next steps

  • Remove the WD CG drives from the Icydock and connect them directly to the motherboard.
  • Just purchased an Intel DP45SG Extreme Series P45 ATX DDR3 1333 2xPCIe 2.0x16 3xPCI 1333MHz FSB LGA775 Desktop Board. Should arrive at the end of the week. 4GB DDR3 ram also ordered for it.
Any suggestions to sort out the array speed would be very welcome as it has me pulling my hair out, and I have lost enough of that already :D.

Thanks
RB
 
Seems Fedora 13 supports the new 4K-sector drives so I may move over to that. CentOS 6 may have support too, but I have not yet found an ETA for its release, so Fedora may be the easiest route for a Red Hat derivative.

Yes, the WD drives are Advanced Format and I have used parted to align the partition boundaries so they start and end on a multiple of 8 sectors. Interestingly, even though most guides state a multiple of 8 for the starting sector, they all tend to use something like 40 rather than 8 or 16 etc. For a non-boot drive does it matter starting at 8 or 16 rather than 40 or 64?
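
For reference, the partitioning I mean is roughly this (a sketch assuming a blank drive at /dev/sdb - the device name and start sector are placeholders; any start that is a multiple of 8 sectors lands on a 4K physical boundary):

    # Single aligned data partition on an Advanced Format drive
    parted --script /dev/sdb mklabel gpt
    parted --script /dev/sdb unit s mkpart primary 64 100%
    # Newer parted versions can confirm the alignment of partition 1:
    parted /dev/sdb align-check optimal 1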

RB
 
Downloaded Fedora 14 last night and backed up everything on my array.

Cables have arrived for the Perc6 RAID card.

WD have called about delivering the RMA'd hard drive tonight or tomorrow.

New motherboard should be with me next Monday along with the RAM.

I will wait for the HDD to be returned and then pull the main board from my workstation (it has 2 PCI-e slots, one of which is being used for a PCI-e SATA HBA card, so the PERC6 should work) and use it for the NAS. I will use the new board for my workstation. This of course means I need to rebuild both the NAS and my workstation though :). If Fedora 14 will do for the meantime, I will wait for the new motherboard before rebuilding the workstation.

Selling a WD Black tonight so won't get home until 8pm+. Late nights for most of this week, so it is hard to get time to work on the machines.

RB
 
I got round to swapping the motherboards, so the NAS now has the Gigabyte board (X38) with two PCIe x16 slots, and it now refuses to stay on. It starts for 2 seconds then turns off. Have tried a different PSU, no RAM, no video or other cards, no HDD etc., reseated the CPU and made sure the thermal compound was reasonable. All no go.....

On reading around, it seems likely that either the motherboard has failed or the CPU has an issue. Knowing my luck with this project, if I replace the CPU it will be the motherboard that is the problem.

The MSI board, with my workstation CPU on it and installed in the workstation case, is working fine.

WD have sent back one of my two RMA'd hard drives (the other was a 2.5" Scorpio Blue). Yep, it was the Scorpio Blue they sent back and not the 1.5TB WD CG I am waiting for to rebuild my array.

The motherboard from the US was dispatched a day or two ago but has not arrived at my US shipping mailbox, so no chance of getting it before next week.

I have also purchased, in my foolishness, a Norco 4U rack-mount server case with 20 hot-swap bays. The case will do everything I need for a fair few years (that is the plan anyway), but it is 640mm deep, meaning it will not fit a wall-mount cabinet, and I would need a floor-standing cabinet, which are around 500 quid over here even second hand. I came across the IKEA Lack server-mounting article on the internet and may take a look at doing that, or at building a frame from metal shelving.

RB
 
That is a very pretty network diagram. But by both the contents and the OCD nature of the diagram, I hope this is a home office/self employed setup else I think you might have a little too much time on your hands. :)
 
That is a very pretty network diagram. But by both the contents and the OCD nature of the diagram, I hope this is a home office/self employed setup else I think you might have a little too much time on your hands. :)

Haha, nah, I do diagrams for presentations / training at work quite a lot, so it really does not take that long to put together. It has been quite slow at work recently, so at least it looks like I am doing something work-related rather than just internet browsing ;).

The rack-mount case has arrived and I am liking it. Not too much space between the backplanes and the centreline fans, but I can make do.

I installed Win7 on a small spare hard drive and booted the machine. Downloaded MegaRAID to try and sort out the Perc 6/ir, but to no avail. I can see the controller but cannot configure it, and it cannot see my drives. Very disappointed at that. Even if it would only do RAID 0 or 1 it would have done.

Now, I am aware that people advise all sorts of RAID configurations for redundancy, minimal downtime and fault tolerance, and RAID 5/6 or 10 tend to come out high on the list.

I have now decided I will just stripe the disks.... Why? I do not need quick rebuild times. If the server is out for a couple of days then it is no big hardship. The nature of the data is primarily static (movies and music), so I will just back up to 1 or 2 drives overnight if the data has changed. I am not in a business environment where every minute of downtime is critical.
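
In practice that means something like this (a rough sketch - device names, mount points and the cron line are placeholders):

    # Stripe the data disks (no redundancy; the backup drives are the safety net)
    mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    mkfs.ext4 /dev/md0
    mount /dev/md0 /srv/media

    # Overnight backup of anything that has changed, e.g. from cron at 03:00:
    # 0 3 * * * rsync -a --delete /srv/media/ /mnt/backup/media/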

I also came across the Backblaze blog with their 67TB-in-one-4U pod design and parts list. Very interesting reading, and the parts list is like gold dust. May get a couple of port multipliers if I end up getting some more drives.

I have pulled the Perc 6/ir and have put back in the Adaptec HBA which will handle 4 drives.

Now I either get a Perc 6/i, which I intended first of all, or I go the Backblaze way, which may be cheaper but gives only 2 SATA ports per card, and I only have 2 PCIe x1 slots on the new motherboard and one PCI slot free. With port multipliers at 4 drives per port, two ports per card gives 8 per slot, so 16 in total plus the PCI card. There are also 4 motherboard ports. Just thinking of the future as I now have 20 hot-swap bays, although I am not likely to fill them anytime soon :D.

RB
 
Well, another NAS building weekend.

Fedora 14 installed
4*1.5TB drives installed and formatted to take into account the 4K sectors (1MB gap at the front of the drives).
Formatted for ext4 (need to confirm the best FS settings for RAID speed - see the mkfs sketch below).
Benched with hdparm -tT and got around 340MB/s. Will do it again tonight if possible and post up the full set of speeds.
Benched with the disk utility in Fedora and got an average of 250MB/s reads (peak 350MB/s).
Installed a 2TB archive drive for backups. Need to get a second one as this one is already full. Question on whether to have them as separate drives or as JBOD / LV-spanned disks.
My 1.5TB Seagate drive is reporting bad sectors. Time to check it and possibly RMA it.
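
On the "best FS settings for RAID speed" point, this is the sort of thing I am looking at (a sketch only - it assumes a 4-disk stripe with a 512KiB chunk, which needs checking with mdadm --detail, and the device name is a placeholder):

    # Align ext4 to the array geometry:
    #   stride       = chunk / block size    = 512KiB / 4KiB = 128
    #   stripe-width = stride * 4 data disks = 512
    mkfs.ext4 -b 4096 -E stride=128,stripe-width=512 /dev/md0

    # Quick read benchmarks:
    hdparm -tT /dev/md0
    dd if=/dev/md0 of=/dev/null bs=1M count=4096 iflag=direct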

I had a big play around with trying to flash the Perc 6/ir. Flashed it with the latest Dell firmware: no change. Tried to flash it with LSI's latest firmware (1068e chipset): failed due to the vendor code :(. I have read that the previous firmware release should work but I cannot find it on the LSI site or anywhere else. The latest LSI firmware for their 1068e cards was on softopia and not their support site :(:(. Have pulled the PERC 6/ir and gone back to the Adaptec 1045 again.

After chatting to a couple of people, I came up with an alternative to the Backblaze setup. I have ordered a couple of LSI 1068e 8-port PCI-e controllers (US$95 each new - these usually go for around S$200 each) and a PCI video card. The LSI controllers will give 16 ports, with the motherboard doing the other 4. All for around US$300. The 1068e's are only HBAs and not RAID cards, so they will only do RAID 0 or 1 (possibly 1+0), so back to software RAID :D

So, awaiting delivery of my motherboard, the LSI 1068e controllers and 3 more sets of cables.

I did look at using the HP SAS expander, which would have given up to 24 ports, but you need a compatible SAS card (the Adaptec 1045 is not compatible), you need the green PCB version as the yellow PCB version is not flashable, you need the 1.52 firmware, and if you need to update it you need an HP SAS controller (US$300+). The expander is also selling for around US$350 second hand.

The rack-mount case I purchased was this one. I need to modify the fan wall for 120mm fans to cut the noise down. An excuse to get a Dremel :D.

RB
 
Well, I got the Dremel and also got told off by my wife for carving up the fan wall in the kitchen with sparks flying :D.

Need to finish cutting the holes outside the apartment, but it gets dark here close to when I get home, and as the renovation has just started on our new place I am kinda busy making sure that doesn't go wrong.

On another note, life has decided to stab me in the eyes with plastic forks again.....

I had another order arrive from [another supplier in the UK who does deliver to Singapore] (Christmas presents a little delayed and a SFF8484 -> 4 sata cable).

Just for the sake of it I tried the new cable in the Perc 6/ir and it worked fine straight away.

The Perc 6/ir is now working like it should.

The two original cables I bought were Adaptec-branded and it seems they do not work, or at least not with the Perc 6/ir. I still have two more 1068e controllers on their way from the US (I hope), so now I have 3 :(.

Looks like I will end up selling one or using it for my Vertex II 60GB 2 disk array in my workstation.

Oh well :D.

RB
 
Well things have moved on.

I have finally got the motherboard, RAM, etc. I finished cutting the holes in the fan wall and have attached two 120mm fans, err, with blue lights ;). Have changed one rear 80mm fan and will get another to match on Monday as they only had one in stock.

The two LSI cards turned up, as did the PCI video card. Still waiting on the LSI card cables though. I have one already but 3 still to come.

Downside is I cannot get the motherboard to boot correctly with the PCI video card in. I get fans and lights but no video and no booting. This means I can only use one LSI controller or the Dell controller, and I only have one SATA fan-out cable at the moment, so only 4 drives can be connected at a time ... Still :(.

I have bought 3 1TB WD Black drives and will get a 4th as I am easily coming to 2TB of data. I am hoping the Blacks will behave a bit better in an array than the Greens did.

I have also got a WD Scorpio Black for the boot drive. There is a separate mount point in the case for it which is not hot-swap, so no chance of the kids pulling it out, and it is significantly cheaper than getting an SSD for a boot drive.

If anyone has any ideas about getting the PCI video card working I would be most grateful.

Cheers
RB
 
Have a mosey in the BIOS. There might be something about legacy VGA or something you need to turn on. Also don't rule out that the PCI card might be DOA. Worth testing in another box to be sure.
I'm having to think hard, it's been a very, very long time since I've done anything with a PCI video card lol
 
Try booting with the PCI graphics card in, but no other cards. You might also need to set your BIOS to display graphics from PCI first.

If you do eventually plan on going up to a 20 drive setup in your norco, I'd look at ZFS on FreeBSD or Solaris. Going for a pool made up of mirrored disk pairs (effectively a RAID10) is the easiest long term for expanding capacity.
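
By a pool of mirrored pairs I mean something like this (a sketch - the pool and disk names are placeholders):

    # Two mirrored pairs striped together (RAID10-style), then grow two disks at a time
    zpool create tank mirror da0 da1 mirror da2 da3
    zpool add tank mirror da4 da5
    zfs create tank/media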
 
Try booting with the PCI graphics card in, but no other cards. You might also need to set your BIOS to display graphics from PCI first.

If you do eventually plan on going up to a 20 drive setup in your norco, I'd look at ZFS on FreeBSD or Solaris. Going for a pool made up of mirrored disk pairs (effectively a RAID10) is the easiest long term for expanding capacity.

Thanks for the suggestions both of you.

Yeah, have tried the standard test of having no other cards in, and there is an option in the BIOS for video (auto, PCI-E, PCI-ext). Set it to PCI-ext but still no luck.

As suggested, may need to try it in my other machine to make sure the card is not DOA :(.

Thanks Zarf, I appreciate the suggestion for ZFS and RAID 10, but..... this is a home setup with a separate backup to cheap WD Green disks, and I can stand a week's downtime or longer as required, so a stripe set should be fine. If I wanted max uptime then I would agree Solaris, as I know it better, would probably be a good move.

Thanks
RB
 