Homemade SAN suggestions

Hi all,

Right, I've got 2 ESXi 5 servers and would like to have shared storage accessible to them and the rest of the network. The ESXi hosts I would like to connect via iSCSI or NFS. The rest I would like to map a drive to via SMB, and also, if possible, stream movies to the PS3, XBMC and TV. If I can't stream I'll use PS3 Media Server for it.

Hardware I've got available is a Core 2 Duo machine and 2 LSI MegaRAID 8308ELP cards, which have 8 ports each and support SAS and SATA. I've got 5x 2TB, 2x 1TB and various other smaller drives. I have a 24-port gigabit managed switch and a 48-port 100Mb managed switch with 2x gigabit ports.

I know if I mix in a smaller drive I'll end up with a smaller array, so I know I've either got to do RAID in software or just have 4 or 5 different arrays.

Can someone suggest some good ways of connecting everything together, and what software to use? I've been looking at Openfiler and FreeNAS. I've racked my brains and am starting to struggle with how to get it all to work together. This is a test environment for testing out and playing with ESXi, but I also want to centralise my storage, so everything is in one place and can be accessed from any machine on the network. I might look at sorting out access from the internet at some point in the future.

Ste
 
I used NAS4Free, which is better in some respects than its successor FreeNAS. FreeNAS supports hardware RAID, which NAS4Free does not, but IMHO NAS4Free is more flexible.

It is simplicity itself to set up a NAS using NAS4Free, and if you have plenty of RAM, ZFS pools are the way forward. If not, then set up a number of arrays using the drives you have (a RAID 5 and a RAID 0 come to mind).
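For illustration only (the pool name and FreeBSD device names below are made up, not your actual disks), a raidz pool from the five 2TB drives is essentially a one-liner under the GUI:

zpool create tank raidz ada1 ada2 ada3 ada4 ada5
zfs create tank/vmstore    # dataset for VM/NFS or general storage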

You can elect to use teaming if you have multiple network cards; you set this up under lagg and it even gives you the choice of protocols to use. However, be aware that not all switches support them.
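As a rough illustration of what that amounts to on a stock FreeBSD base (NAS4Free drives this from its GUI; the interface names and address here are examples), the lagg setup in /etc/rc.conf looks like:

ifconfig_em0="up"
ifconfig_em1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport em0 laggport em1 192.168.1.50 netmask 255.255.255.0"

Swap laggproto lacp for failover or loadbalance if the switch can't do LACP.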

Setting up an iSCSI target is a few button clicks, the same as most of the services within NAS4Free.

The only gotcha, if you can call it that, is that the drive you use for the OS can't be used as a mount point, so I used a 2GB flash drive for the OS, leaving all the hard drive space I had available for storage.
 
I use a 2008 R2 (ML110 G7) box that has vCenter Server installed on it as well.

It's my DC, vCenter and iSCSI (StarWind, I think) server, plus anything else I need to drop on it for my 2 ESXi 5 hosts (ML110 G7s).
 
I have two ESX servers connected to a NAS4Free server. I connect each ESX server to the NAS on a dedicated NIC (point-to-point/crossover cable), dedicated to iSCSI traffic. Both ESX servers and the NAS then have additional NICs for normal network traffic. The two ESX servers are also connected to each other by a third NIC (point-to-point/crossover cable), and between the ESX servers I can run multiple VLANs. This gets me round the problem of my switch not supporting VLAN tagging. Each ESX server has only a minimal boot drive (16GB SSD from OcUK) and no other storage. I then present a shared iSCSI volume to both ESX servers for general VM storage.
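For reference, roughly how that host networking can be set up from the ESXi 5 command line instead of the vSphere Client (the vSwitch names, vmnic numbers, VLAN ID and addresses are examples, not my actual config):

# Dedicated vSwitch and VMkernel port for iSCSI on the point-to-point NIC
esxcli network vswitch standard add --vswitch-name=vSwitch-iSCSI
esxcli network vswitch standard uplink add --vswitch-name=vSwitch-iSCSI --uplink-name=vmnic1
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch-iSCSI --portgroup-name=iSCSI
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.0.0.11 --netmask=255.255.255.0 --type=static

# VLAN-tagged port group on the host-to-host crossover link
esxcli network vswitch standard add --vswitch-name=vSwitch-xlink
esxcli network vswitch standard uplink add --vswitch-name=vSwitch-xlink --uplink-name=vmnic2
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch-xlink --portgroup-name=Lab-VLAN10
esxcli network vswitch standard portgroup set --portgroup-name=Lab-VLAN10 --vlan-id=10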

With this environment I've been able to set up two SQL clusters with several SQL database instances. I present the disks for clustering using the Server 2008 iSCSI initiator. I also have several Server 2008 VMs running at the same time. In total I think I have about 30 iSCSI volumes presented from the NAS at any one time (it can run slow at times, but it's good enough for development).
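The clustered disks can be attached inside each VM with the built-in iscsicli tool (the portal address and IQN below are made up; the iSCSI Initiator control panel does exactly the same job):

rem Illustrative only - substitute your own portal address and target IQN
iscsicli QAddTargetPortal 192.168.10.1
iscsicli ListTargets
iscsicli QLoginTarget iqn.2013-01.local.san:sql-quorum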

On top of that I use NAS4Free to host an SMB network share for all my other devices for general storage. I've also got it set up as a UPnP (DLNA) server and it streams to my PS3, LG TV and other devices extremely well, both HD and SD content.

The best of it all is that my NAS host is an old single-core AMD64 (2800?) HP desktop with 2GB of RAM and 2x 1TB plus 1x 2TB HDDs.
 
CentOS 6.3 minimal
scsi-target-utils

Installs on 2GB (probably less); only a basic understanding of LVM, yum and editing the ifcfg-ethXX files is needed for a base SAN setup.

Basic steps:

  • Install CentOS 6.3 minimal
  • Update the base install (yum update)
  • Configure the network adaptors (IP addresses, networks/subnets, any bonding)
  • Configure the hard drives (fdisk/gparted, then LVM for PVs, VGs and LVs)
  • Install scsi-target-utils
  • Edit the iSCSI target config file, /etc/tgt/targets.conf (lots of examples already in it)
  • Set the iSCSI daemon (tgtd) to start on boot
  • Open the firewall for the iSCSI port (TCP 3260)
  • Restart the network, firewall and iSCSI target
Seems like a lot, but it really is surprisingly quick and easy if you have some basic Linux knowledge; a rough sketch of the commands is below.
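A minimal sketch of those steps with scsi-target-utils/tgtd; the device names, volume group, IQN and subnet are made up, so adjust them to your own drives and network:

yum -y update
yum -y install scsi-target-utils

# Carve a logical volume out of the data drives for the iSCSI LUN (example devices)
pvcreate /dev/sdb1 /dev/sdc1
vgcreate vg_san /dev/sdb1 /dev/sdc1
lvcreate -L 500G -n lv_esx_ds1 vg_san

# Append a minimal target definition to /etc/tgt/targets.conf (illustrative IQN and subnet)
cat >> /etc/tgt/targets.conf <<'EOF'
<target iqn.2013-01.local.san:esx-ds1>
    backing-store /dev/vg_san/lv_esx_ds1
    initiator-address 192.168.10.0/24
</target>
EOF

# Start on boot, open TCP 3260, then restart services
chkconfig tgtd on
iptables -I INPUT -p tcp --dport 3260 -j ACCEPT
service iptables save
service network restart
service tgtd restart
tgtadm --lld iscsi --mode target --op show   # verify the target is exported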

I ended up going this way due to a number of issues on Openfiler with link bonding, devices/drives/partitions not being recognised, and LVM setup.

Speeds are reasonable at 60-90MB/s over a GbE connection. I am sharing to both a vSphere 5.1 server and a Windows SBS 2011 Standard server; both connect fine.
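On the vSphere side, a rough sketch of pointing the ESXi software iSCSI initiator at the target from the shell (the vmhba name and address are examples; check yours with esxcli iscsi adapter list):

esxcli iscsi software set --enabled=true
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.10.1:3260
esxcli storage core adapter rescan --adapter=vmhba33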

Samba is also fairly easy to add for NAS functionality.

Link bonding is also not limited to LACP, which requires switch-side compatibility. There are a number of other bonding modes available in Linux which require no switch-side support.
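For example, balance-alb (mode 6) needs nothing from the switch at all. On CentOS 6 it is just a couple of ifcfg files (interface names and the address here are illustrative):

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BONDING_OPTS="mode=balance-alb miimon=100"
IPADDR=192.168.10.1
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none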

RB
 
Thanks for the input. Now it looks like I just need to sort out some way of connecting all the drives together, as the SAS card has a breakout cable to individual SATA plugs, which won't connect onto the SAS drives but will onto the SATA ones.

Does anyone know of a way of making a storage enclosure that will house 8 or 16 disks? I've got 2 controllers and thought it would be a nice idea to have them all in an external box, so they're completely separate.
 
Cheap way, and the way I plan on doing it: use my old CM Stacker. Expensive way: head off to eBay and buy a rack chassis. Or middle ground: just get an ATX case which can hold a lot of disks and then chuck the rest into the 5.25" bays.
 
RB said:
CentOS 6.3 minimal + scsi-target-utils ... surprisingly quick and easy if you have some basic Linux knowledge.

Or use NAS4Free, which does all this in a GUI? :p

As for the SAS/SATA issue, you have done it the wrong way around lol. People usually buy the PERC cards and then buy the cable you have for connecting to SATA drives. Your options now are to buy adapters (SAS controllers are backwards compatible with SATA and, in fact, if you remove the keying tab on the cable, you can plug it straight into a SAS drive), or buy this, which by the looks of it will install into a tower case in place of three DVD drives. You would need to buy the cable to go from the card to the backplane, and then something like this.

I'm not 100% sure on the cable, but it should work; YMMV.
 
True, but then I prefer CentOS (and Linux in general) to FreeBSD and am much more familiar with it. I also have no issues working from the command line and prefer the control it gives me. NAS4Free looks good, no doubt, but for me, when I can set up a minimal SAN with iSCSI in 15-20 commands, I am happy to do that. Each to their own, though.

SAS drives use SFF-8482 connectors. For connecting drives without a backplane you would use an SFF-8484 (long, thin SAS connector) or SFF-8087 (square mini-SAS connector) to 4x SFF-8482 cable. If you use a backplane, drive cage etc. then you would need to check the connectors. Note that cables with SATA connectors are directional: forward breakout cables usually have the SFF-8087 at the controller end and the 4x SATA at the drive end, whilst reverse breakout cables use the SFF-8087 for a backplane and the 4x SATA for the motherboard. Check before buying, especially as some sellers do not list the direction. 3ware cables work well for me and the product name specifies the direction (e.g. CBL-SFF8087OCF-05M is forward and CBL-SFF8087OCR-05M is reverse).

Beware of server cages as they can be non-standard sizes. I have 2x Intel 4-drive cages at home that do not fit 3.5" or 5.25" drive bays. It is safer to go for cages designed for PCs; just make sure they have SAS-compatible drive connectors. iStarUSA, Supermicro and ICT all do them, but they are not the cheapest thing to get. For the cheap option, get a case which will take multiple drives, get your controller, and get a cable to connect to your SAS drives (controller connector to SFF-8482).

Don't get pigeonholed with Dell PERCs; they are fine, but other controllers are just as good if not better. There are a number of M1015s around for close to 100 GBP (check they come with a full-height bracket as some don't). There is an M5014 going for 70 quid Buy It Now, which is a steal as it is a proper RAID card (3 available). These are the same generation as the Dell H200 and a generation newer than the PERC 6.

RB
 
Well, I've got the hardware already, so am just reusing that. I've found the cable I need for connecting the LSI MegaRAID to SAS drives directly without a backplane: it's an SFF-8087 to 4x SFF-8482. Works out cheaper than buying SAS-to-SATA adaptors.

I've also found out the controller will support 8x directly connected drives, or 16x drives using an expander or two. Does anyone know whether it would be better, performance-wise, to run 2x controllers with 8 drives on each, or one controller with 16x drives?

I've also found an old full-tower ATX case with a load of hard disk bays, so am going to look at filling that with HDDs and seeing how many I can get in.
 
[Darkend]Viper;23257101 said:
Does anyone know whether it would be better, performance-wise, to run 2x controllers with 8 drives on each, or one controller with 16x drives?

Bit of maths...

PCIe v1.1 x4 (4x 250MB/s) = 1,000MB/s at the PCIe bus.
8 SAS ports x 300MB/s = 2,400MB/s on the drive side.

What drives are you putting on it? If the combined burst speed of 16 drives is greater than 1GB/s then you will be better off with 2 cards rather than a single card and an expander. That works out to around 62MB/s burst per drive (all drives being equal) for 16 drives on a single controller, versus around 125MB/s per drive for 8 drives, which is much more reasonable.

Your bottleneck is the PCIe v1.1 x4 slot interface. Going to a PCIe 2.0 card would double that bandwidth and potentially give decent speed for 16 mechanical drives on a single card.
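If you want to confirm what the card actually negotiates, lspci on the NAS box will show the card's link capability against its current link state (0x1000 is the LSI vendor ID; this works on any Linux install on that machine):

lspci -vv -d 1000: | grep -E 'LnkCap|LnkSta'
# LnkCap = what the card supports, LnkSta = the width/speed negotiated in your slot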

RB
 