Home SAN

I'm currently speccing out a home SAN solution :) Just an average-power server for backups across fibre. Looking to build a 2U rack server, but I'm not sure how loud they are. Also, I was aiming to get a 6U rack, but none of the ones I found are deep enough to support this case.

This is what I have so far:
  • X-Case RM 206
  • Gigabyte GA-990XA-UD3
  • AMD Bulldozer FX-6100
  • 8GB 1333MHz RAM
  • 128GB Crucial M4
  • WD Black 2TB x 6
  • Antec HCP-750
  • VS1 9U rack
  • Mellanox MHEA28-XT
  • Dell PERC 5i

What do you guys think?

I'm going to follow the excellent guides:
http://forums.overclockers.co.uk/showthread.php?t=18442377
http://davidhunt.ie/wp/?p=232
http://www.thegeekstuff.com/2009/05/dell-tutorial-create-raid-using-perc-6i-integrated-bios-configuration-utility/
http://www.smallnetbuilder.com/nas/...n-fibre-channel-san-for-less-than-1000-part-1
http://greg.porter.name/wiki/HowTo:Openfiler#Using_Openfiler_for_iSCSI
http://www.servethehome.com/lowcost-mellanox-mhea28xtc-10gbps-infiniband-performance-ubuntu-1204/
http://www.linuxforu.com/2011/08/storage-management-using-openfiler-part-1/
http://www.overclock.net/t/359025/perc-5-i-raid-card-tips-and-benchmarks
 
Get a PERC 6/i instead, they're around £50-60 on eBay now and can be a fair bit faster than the 5/i in some scenarios. Having said that, mine isn't happy about detecting drive speeds at the moment and won't budge past around 130MB/s per drive :rolleyes:

If you're going to be shoving a bunch of disks in it, go with a 4U. It also makes component shopping easier (full-height expansion cards, desktop/consumer CPU coolers, ATX PSU, etc.). I got the 400/10 from them recently and it will take 13 drives, which for me is going to be 8 on the RAID controller and 4 on the mainboard. Only JBOD for me at the moment.
 
This sounds like an interesting project and I'm looking forward to seeing the outcome!

As above really, going 4U will give you a good number of disks while staying with "normal" hardware (full-height cards, etc.). Picking up a good hardware RAID card with a BBU would make this whole thing insanely quick and reliable.
 
2U servers can be really loud. Have you considered where you will locate this?

I am intrigued as to what you will do with a home SAN as opposed to using a NAS with CIFS, NFS and iSCSI...
 
2U servers can be really loud. Have you considered where you will locate this?

Agreed, a larger form factor with 120mm fans will be awesome.

I am intrigued as to what you will do with a home SAN as opposed to using a NAS with CIFS, NFS and iSCSI...

Make it more awesome! I'd love a project like this. I am sure my electricity supplier would as well though! ;)
 
If you're going to be shoving a bunch of disks in it, go with a 4U. It also makes component shopping easier (full-height expansion cards, desktop/consumer CPU coolers, ATX PSU, etc.). I got the 400/10 from them recently and it will take 13 drives, which for me is going to be 8 on the RAID controller and 4 on the mainboard. Only JBOD for me at the moment.

Thanks for the recommendations, I'm looking forward to tackling this project too! A few reviewers have said to swap the stock fans for quieter ones on the 400/10. Do you think it's loud? I'll be locating it in the spare room, which functions as an office, so I don't want it to be too loud. My PC atm (see sig) is just right.

2U servers can be really loud. Have you considered where you will locate this?

I am intrigued as to what you will do with a home SAN as opposed to using a NAS with CIFS, NFS and iSCSI...

The NAS boxes I looked at "only" have 1Gb Ethernet, eSATA or USB3, which are too slow :p I'm going to run a fibre cable from this to my main PC to get fast file transfers :) It's also going to be used as a media streaming server. I am worried about the sound level though.
 
Do you think it's loud?

Well it's all relative really ;)

Sat next to a 2U PowerEdge 2950, it's whisper quiet! But next to my workstation, yeah, it's loud. It has good airflow though, and the loud fans are the rear 80mm ones; the front 120mms aren't that bad. You could very easily make it as quiet as a regular desktop machine without sacrificing a huge amount of airflow, IMO.
 
Well it's all relative really ;)

Sat next to a 2U PowerEdge 2950, it's whisper quiet! But next to my workstation, yeah, it's loud. It has good airflow though, and the loud fans are the rear 80mm ones; the front 120mms aren't that bad. You could very easily make it as quiet as a regular desktop machine without sacrificing a huge amount of airflow, IMO.

I just watched the review and noticed you had to remove the front fans to fit the hot-swap caddies. Wouldn't that make the airflow worse?

Onto the software: FreeNAS or Openfiler?
 
Yeah it would drastically reduce the air flow. I wouldn't bother with caddies - it's not going to be in a mission critical environment so if a drive fails, just power down, pop open the lid and replace the drive normally. It comes with quite a useful internal rail mounting setup which is a lot better than I was expecting.
 
In terms of air flow? Well hopefully there would be air conditioning so it would be slightly less of an issue. Trust me I'm all for building extravagant stuff at home, but I'd rather have good air flow over tightly packed disks than the ability to save a few minutes if one failed :)
 
Ah right thanks for the explanation :)

What OS/software are you using to run your setup?

What you said on the other thread about IOMMU is really hard to pin down! There is no documentation saying whether the Gigabyte GA-990XA-UD3 and AMD Bulldozer FX-6100 support it! Plus I forgot about having to add a GPU, bah!

This post is quite interesting:
Well, in the end I gave up on the idea of AMD/Bulldozer. Why?

  • Power hungry (it uses more power than any Sandy Bridge Xeon E3 or Core i5/i7).
  • No IGP, so it becomes even more power hungry because of the need for an additional graphics card. Even the lowest-power card adds at least another 10-15W to power consumption.
  • Hunting down a motherboard which really supports IOMMU is like searching for a needle in a haystack. Sure, the CPU supports it, the chipset supports it, but do the board and BIOS support it? That is not 100% guaranteed in all cases, even with AMD 900-series chipsets.
  • Lower performance.
  • I HATE the AMD cooler mounts.

Sure, I could have built an FX-6100 + SABERTOOTH 990FX combo for ~290 euros compared to ~420 euros for the Xeon E3-1235 + P8B WS combo, but at the cost of lower performance, higher power consumption and no 100% guarantee that IOMMU will actually work.

http://forums.bit-tech.net/showthread.php?p=2965518
 
Server 2008 R2 on both servers at the moment, but I'll switch to ESXi on at least one soon (probably the one that inherits my AMD workstation kit). It's all a bit of a work in progress.

The whole IOMMU/VT-d thing is very annoying for consumer boards. I discovered this list on the Xen wiki via one of their archived email threads that might help you. I've confirmed first hand that the ASUS Crosshair IV Formula board supports IOMMU (once you switch it on in the BIOS), and that uses the 890FX chipset and AM3 Phenom II chips so perhaps you could find the right bits on the 'Bay for similar money to new Bulldozer kit. Not sure what power usage is like though. Some of the other 890FX boards on the list might be cheaper/better on power, but the Crosshair has enough fast PCIe slots to run a couple of RAID controllers and an InfiniBand adapter too.

And just get the cheapest, most basic GPU you can find; even if it's PCI it will do the job!
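
Once you've got a board, a quick sanity check you can do from a Linux live CD is to see whether the kernel actually brought the IOMMU up. Rough sketch below (my own, not from the guides linked above; the sysfs path and log strings are just what I'd expect on a reasonably recent kernel):

```python
#!/usr/bin/env python3
# Rough IOMMU sanity check for a Linux live environment.
# Assumptions: /sys/kernel/iommu_groups is populated when the IOMMU is
# active, and dmesg contains "AMD-Vi" (AMD) or "DMAR" (Intel VT-d) lines.
import os
import subprocess

def iommu_groups_active() -> bool:
    """IOMMU groups only show up in sysfs once the IOMMU is enabled."""
    path = "/sys/kernel/iommu_groups"
    return os.path.isdir(path) and bool(os.listdir(path))

def dmesg_mentions_iommu() -> bool:
    """Scan the kernel log for the usual IOMMU initialisation messages."""
    try:
        log = subprocess.run(["dmesg"], capture_output=True,
                             text=True, check=False).stdout
    except FileNotFoundError:
        return False
    return any(tag in log for tag in ("AMD-Vi", "DMAR", "IOMMU enabled"))

if __name__ == "__main__":
    print("IOMMU groups present:", iommu_groups_active())
    print("IOMMU in kernel log: ", dmesg_mentions_iommu())
```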
 
The problem is I want to buy a motherboard bundle from another popular computing site which has a 5-year warranty, and they only sell 990FX. According to Wikipedia, the 990FX motherboards have IOMMU support: http://en.wikipedia.org/wiki/Comparison_of_AMD_chipsets

And according to the manual, the Gigabyte GA-990XA-UD3 has a setting in the BIOS to enable it. I guess I'll just have to see.

Just reading a review of the Sabertooth 990FX (http://www.bit-tech.net/hardware/motherboards/2011/08/08/asus-sabertooth-990fx-review/5) and noticed the SATA performance outclasses the other motherboards. Might try to get that instead if my budget stretches!
 
An update: I've refined my specs to:

  • Intel Xeon E3-1230 V2 @ 3.40GHz
  • Noctua NH-U12P-SE2
  • Asus P8Z77 WS
  • 16GB (2x8GB) Corsair XMS3
  • 850W Corsair HX Series
  • OCZ Technology 256GB Vertex 4
  • WD 2TB Red WD20EFRX x 8
  • Samsung SSUNG/SH-222BB/BEBE

Just waiting on various quotes for a pre-built system!

I've got my X-Case 400/10, the PERC 6 card, the two Mellanox MHEA28-XT cards, a 30m fibre cable and an Asus N66 router :)

Here is a rough network diagram I drew up to give you a better understanding of my setup:
http://sdrv.ms/Y7x0Ex
 
An update: I've refined my specs to:

  • Intel Xeon E3-1230 V2 @ 3.40GHz
  • Noctua NH-U12P-SE2
  • Asus P8Z77 WS
  • 16GB (2x8GB) Corsair XMS3
  • 850W Corsair HX Series
  • OCZ Technology 256GB Vertex 4
  • WD 2TB Red WD20EFRX x 8
  • Samsung SSUNG/SH-222BB/BEBE

Just waiting on various quotes for a pre-built system!

I've got my X-Case 400/10, the PERC 6 card, the two Mellanox MHEA28-XT cards, a 30m fibre cable and an Asus N66 router :)

Here is a rough network diagram I drew up to give you a better understanding of my setup:
http://sdrv.ms/Y7x0Ex

Ok, just a few comments...

  • Intel Xeon E3-1230 V2 @ 3.40GHz
    Why not an E3-1220 v2 or 1220L v2?
  • Noctua NH-U12P-SE2
    E3s run cool so you could happily stick with the stock HSF.
  • Asus P8Z77 WS
    Why not a server board rather than a workstation board for this server? The Supermicro X9SCM-F is cheaper, supports ECC RAM, and has IPMI & KVM-over-IP so you can control it remotely from power-on -> BIOS -> OS.
  • 16GB (2x8GB) Corsair XMS3
    Step up to a server board and the money saved can go on some ECC RAM.
  • 850W Corsair HX Series
    I have just got a Supermicro SC743 TQ-865B-SQ (SQ for super quiet) and am very happy with it. It comes with an 865W super-quiet Bronze PSU, 8x hot-swap bays, 3x 5.25" bays and two internal super-quiet 80mm fans. Quality is fantastic, head and shoulders above other entry-level chassis, and it can be rack-mounted with the addition of the optional rack-mount kit. That one is around £300 inc. PSU, but they do models with lower-rated PSUs which should be cheaper.
  • OCZ Technology 256GB Vertex 4
    What OS are you looking to use that you feel you need a 256GB SSD, or what else are you looking to do with that SSD?
  • WD 2TB Red WD20EFRX x 8
    Seems like a fairly sensible choice. I use 2TB Seagate Barracudas and have had no issues.
  • Samsung SSUNG/SH-222BB/BEBE

The above suggestions are based on you wanting to use it as a storage SAN and nothing else. They are also good if you wish to virtualise the SAN using vSphere, for example, and run other virtual servers on the same kit at the same time to maximise ROI.

RB
 
The NAS boxes I looked at "only" have 1Gb Ethernet, eSATA or USB3, which are too slow :p I'm going to run a fibre cable from this to my main PC to get fast file transfers :) It's also going to be used as a media streaming server. I am worried about the sound level though.

If you are copying from your SAN to a mechanical drive, gigabit is more than fast enough; you would need SSDs or a high-speed SCSI array to get close to saturating a gig link.

You would also still need a copper connection from the SAN to your switch if you want other devices to use it, not to mention adding in the cost of a fibre network card for your PC.

Not saying don't do it, but it does seem pointless as you will have to use copper at some point.
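
Rough numbers behind that, as a back-of-envelope sketch (the drive and overhead figures are assumptions, not benchmarks):

```python
# Back-of-envelope: usable gigabit Ethernet vs. a single mechanical drive.
# Figures are typical/assumed values, not measurements.
GIGE_LINE_RATE_MBIT = 1000        # 1GbE line rate
PROTOCOL_EFFICIENCY = 0.94        # rough TCP/IP + SMB overhead allowance
HDD_SUSTAINED_MB_S = 120          # assumed average sustained write for a 7200rpm 2TB drive

usable_gige = GIGE_LINE_RATE_MBIT / 8 * PROTOCOL_EFFICIENCY
print(f"Usable 1GbE:             ~{usable_gige:.0f} MB/s")
print(f"Single mechanical drive: ~{HDD_SUSTAINED_MB_S} MB/s sustained")
# ~118 MB/s vs ~120 MB/s: the destination spindle and the gigabit link are
# in the same ballpark, so a faster link gains little for this workload,
# and random I/O will sit well below both numbers anyway.
```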

Kimbie
 
If you are copying from your SAN to a mechanical drive, gigabit is more than fast enough; you would need SSDs or a high-speed SCSI array to get close to saturating a gig link.

You would also still need a copper connection from the SAN to your switch if you want other devices to use it, not to mention adding in the cost of a fibre network card for your PC.

Not saying don't do it, but it does seem pointless as you will have to use copper at some point.

Kimbie

The Mellanox cards are InfiniBand controllers and create a separate iSCSI network for sharing storage. The storage is then shared at block level, as if it were mounted directly on the server. If the disks are configured as an array then they can break the 1GbE limit fairly easily. The server would usually then share out the partitioned and formatted storage, with whatever content is to be shared, to other machines. The server is the one that needs to be connected to the switch for client share access.
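
To put some rough (assumed) numbers on "fairly easily", here's a quick sketch comparing an 8-disk array's sequential throughput with 1GbE and a 10Gb SDR InfiniBand link:

```python
# Rough aggregate throughput of an 8-disk array vs. the links involved.
# All figures are assumptions for illustration only.
PER_DRIVE_MB_S = 130              # conservative sequential figure per disk
DATA_DISKS = 8 - 1                # 8 disks in RAID 5 -> 7 carry data
GIGE_USABLE_MB_S = 118            # usable 1GbE after protocol overhead
IB_SDR_DATA_MB_S = 1000           # 10Gb SDR IB = 8Gb/s data after 8b/10b
IB_EFFICIENCY = 0.8               # allowance for SRP/iSCSI overhead

array = PER_DRIVE_MB_S * DATA_DISKS
ib_usable = IB_SDR_DATA_MB_S * IB_EFFICIENCY
print(f"8-disk RAID 5 (sequential): ~{array} MB/s")
print(f"1GbE usable:                ~{GIGE_USABLE_MB_S} MB/s")
print(f"10Gb SDR IB usable:         ~{ib_usable:.0f} MB/s")
# The array outruns gigabit by roughly 8x, so the IB link is a far better
# match for block-level access to the whole array.
```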

I have been looking at doing the exact same thing as GodAtum but with the newer 40Gb Mellanox controllers, preferably the 'B' versions that are compatible with SMB3.

Need to build a second vSphere server first to take advantage.

Oh, one thing that does jump out is that a number of parts of the diagram could be virtualised onto one set of hardware.

RB

 
Ok, just a few comments...

RB

I've changed the Xeon to an i5-3570 as no Xeons are in stock at the various online retailers.

I need a board with three PCIe x16/x8 slots (RAID card, InfiniBand controller and GPU).

Need a PSU with enough SATA and PCIe power connectors.

Want an SSD for quiet operation; I'll be putting multiple Windows and Linux OSes on it.
 
RB, if you can afford the cards then definitely go with the ConnectX-3 cards. ConnectX-2 and 3 are the only cards confirmed to work with Windows 2012/8 and SMB3, but - and it's a big but - ConnectX-3 is I believe the only part that fully supports RDMA with SMB3, and that's where your speed is going to come from. That info is based on a blog post on Technet by José (Google it, I'm sure you'll find it).

There has been some success on 2012/8 with the OFED packages and older InfiniHost III adapters (such as the ones GodAtum and I are using; see the OpenFabrics Alliance forum), but no working package installer and no RDMA support, just an SRP initiator and IPoIB.

I'm leaning away from InfiniBand at the moment. I'll use what I've got for an SRP target on a Linux VM, but that's it. I would have liked to add a switch and a couple more cards and use it for SMB shares, but the performance is only 10-20MB/s better than 1GbE. With SMB3, NIC teaming becomes massively more useful, as it should utilise all ports even in a single-user scenario, so that's what I'll be pursuing since it's much more cost-effective at the moment.
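
For what it's worth, the sums behind that decision (the IPoIB gain is the figure quoted above; the rest are assumed values):

```python
# Why teamed gigabit looks better value here than old SDR IB via IPoIB.
GIGE_USABLE_MB_S = 118            # usable 1GbE after overheads (assumed)
IPOIB_GAIN_MB_S = 20              # "only 10-20MB/s better than 1GbE"
TEAMED_PORTS = 2                  # e.g. a cheap dual-port gigabit NIC

ipoib = GIGE_USABLE_MB_S + IPOIB_GAIN_MB_S
smb3_multichannel = GIGE_USABLE_MB_S * TEAMED_PORTS
print(f"InfiniHost III via IPoIB: ~{ipoib} MB/s")
print(f"SMB3 over {TEAMED_PORTS}x 1GbE:        ~{smb3_multichannel} MB/s")
# Two teamed gigabit ports already beat the old IB kit over IPoIB, with no
# extra adapters or InfiniBand switch needed beyond a second NIC port.
```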
 