Home SAN Project "HBA"

RSR
Project: HBA Home SAN

I have been running a file server within a VM for a while now, which I have been using as a central storage server and iSCSI target. As I have been experiencing poor performance with iSCSI, I was led to look at other options that don't have an enterprise cost attached to them. Having read about other people on this forum, I looked at InfiniBand, as I noticed a few members have been using it with some success; however, running IP over it still carries the IP overhead. I then started to look at 10Gb Ethernet adaptors but was quickly put off by the huge price. That led me to FC HBAs, and luckily I managed to pick up two QLogic QLE2460 HBAs for £25 each, which is a bit of a bargain for proof-of-concept testing.

The next thing I was thinking about was which OS to use. Windows by its nature doesn't support FC target mode without third-party tools, so you end up with something like SANsymphony, which again is hugely expensive; I have been running a demo copy for a few days now and am very impressed, but it's too costly for home use. That leaves the Linux and UNIX solutions. As I have used Openfiler in the past, I started testing with it as a proof of concept before rolling out to more advanced OSes like Solaris. This seems to work very well, and I have been seeing reads of up to 350MB/s, which is a massive gain over the 100MB/s I was getting with iSCSI.
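
If anyone wants to reproduce the numbers, a crude sequential read from the initiator side is enough to show the difference; something along these lines on a Linux box, where /dev/sdb is just an example name for whatever the presented LUN comes up as.

Code:
# 4GiB sequential read straight off the FC LUN, bypassing the page cache
dd if=/dev/sdb of=/dev/null bs=1M count=4096 iflag=direct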

HBA.jpg


Why use a Solaris base?

• ZFS file system
• COMSTAR SCSI target system supporting iSCSI, iSER, FCoE and FC
• IP multipathing
• Integrated Layer 3 and Layer 4 load balancer
• Crossbow high-performance network stack
• Solaris-derived OS

What is ZFS:

ZFS is a combined file system and logical volume manager designed by Sun Microsystems. The features of ZFS include protection against data corruption, support for high storage capacities, integration of the concepts of file system and volume management, snapshots and copy-on-write clones, continuous integrity checking and automatic repair, RAID-Z and native NFSv4 ACLs. ZFS is implemented as open-source software, licensed under the Common Development and Distribution License (CDDL).
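
As a flavour of what that means in practice, the day-to-day admin is all short one-liners; the pool and dataset names below are just examples.

Code:
zpool create tank raidz1 c0t0d0 c0t1d0 c0t2d0    # pooled storage with single-parity RAID-Z
zfs create tank/data                             # filesystems are cheap - no partitioning or mkfs
zfs snapshot tank/data@before                    # instant copy-on-write snapshot
zfs clone tank/data@before tank/data-test        # writable clone of that snapshot
zpool scrub tank                                 # verify (and repair) every block against its checksums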


SAN O/S Choices:

ZFS Solutions:

Solaris 11
OpenIndiana
NexentaStor CE 3.1 (v4 has an illumos core and is not Solaris-based)


Other Linux Solutions:

Openfiler
Open-E DSS V7

Windows:

No native support for FC Target HBA
DataCore SANsymphony (very expensive)


Currently Tested and working:

Open-E DSS V7 - this is the easiest to get working
Openfiler - this needs some CLI work to get the HBA targets mapped to LUNs (rough sketch below)
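
For reference, the CLI part on Openfiler goes through the SCST target stack it ships with; very roughly something along these lines - module, tool and path names can differ between builds, so double-check on your own install before copying.

Code:
modprobe qla2x00tgt                # load the QLogic target-mode driver for the HBA
vi /etc/scst.conf                  # define a vdisk device and map it as a LUN under the port WWN
scstadmin -config /etc/scst.conf   # apply the configuration
scstadmin -list_target             # confirm the port now shows up as a target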

Components

Case: Lian-Li PC-A04B
PSU: Be Quiet 430w
MB: Asus P8B-M
CPU: Xeon E3-1230
16GB Cruical ECC RAM
1x Intel 330 120GB SSD - OS Drive

Additional Parts:

Intel Dual Port ET NIC
Qlogic QLE 2462
Qlogic QLE 2460



Current Storage:

1x LSI 9266-4i
4x Seagate 2TB Barracuda 7.2K
1x Intel 320 300GB SSD
1x Samsung 840 250GB SSD

Future Plans

3x Samsung 840 250GB SSD
Icy Dock MB994SP-4SB-1

Updates and more benchmarks to follow.

I'll put together a how-to guide if anyone else is interested.
 
Sounds expensive considering I have some dual-port QLogics that I paid £15/each for.

PCI-E or PCI-X? I didn't think they were too bad as they are 4Gb cards, and 2Gb HBAs have problems with point-to-point mode and target drivers, depending on the model.

I have a QLE2462 on the way, which is also on the Solaris 11 HCL.
 
Sounds like fun, I did the same thing with a pair of QLogic 2460s and Openfiler. I had a hell of a time getting the latest release to set up the HBAs and ended up having to revert to version 2.3, then it worked straight away.

If you could do a write-up and how-to guide for the rest of your findings I would be very interested.
 
I had fun and games with Openfiler: the latest version, 2.99.2, drops the important target driver, which breaks FC, so I had to roll back to 2.99.1, which contains the correct driver.

I am all up and running on Solaris 11 now and I am very impressed with it.

Code:
root@SAN:~# zpool list
NAME        SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool       111G  12.9G  98.1G  11%  1.00x  ONLINE  -
vol_cifs   5.44T  1.14T  4.30T  20%  1.00x  ONLINE  -
vol_intel   278G  41.3G   237G  14%  1.00x  ONLINE  -
vol_ssd     232G  93.7G   138G  40%  1.00x  ONLINE  -

Code:
root@SAN:~# fcinfo hba-port
HBA Port WWN: xxxxxxxxxxxxxxx
        Port Mode: Target
        Port ID: e8
        OS Device Name: Not Applicable
        Manufacturer: QLogic Corp.
        Model: HPAE311
        Firmware Version: 5.2.1
        FCode/BIOS Version: N/A
        Serial Number: not available
        Driver Name: COMSTAR QLT
        Driver Version: 20100505-1.05
        Type: L-port
        State: online
        Supported Speeds: 1Gb 2Gb 4Gb
        Current Speed: 4Gb
        Node WWN: xxxxxxxxxxxxxxxxxx
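
For anyone trying this, the ports only show up in target mode after rebinding the card from the stock initiator driver (qlc) to the COMSTAR target driver (qlt). A minimal sketch of that rebind; the PCI alias below is the usual one for ISP2432-based QLE246x cards, so check what your card reports with prtconf -D before using it.

Code:
update_drv -d -i '"pciex1077,2432"' qlc   # remove the alias from the initiator driver
update_drv -a -i '"pciex1077,2432"' qlt   # hand the card to the COMSTAR target driver
svcadm enable stmf                        # make sure the STMF framework is running
# then reboot (or run devfsadm) so qlt attaches to the port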

Code:
root@SAN:~# stmfadm list-lu
LU Name: 600144F0248288000000510E989A0001
LU Name: 600144F0248288000000510E999D0002
LU Name: 600144F0248288000000510ECF620001
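
Those LUs are just ZFS volumes exported through COMSTAR; the pattern for adding one is along these lines (the zvol name and size are examples, and the GUID is whatever create-lu hands back).

Code:
zfs create -V 200G vol_ssd/vmware_lun_1                  # carve a block volume out of the pool
stmfadm create-lu /dev/zvol/rdsk/vol_ssd/vmware_lun_1    # wrap the zvol as a COMSTAR logical unit
stmfadm add-view 600144F0248288000000510E989A0001        # make the LU visible (no host group yet, so all hosts)
stmfadm list-lu -v                                       # check it is online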

Performance graphs to follow.
 
I now have completely finalized the setup.

Code:
root@san01:~# zpool list
NAME        SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool       296G  12.8G   283G   4%  1.00x  ONLINE  -
vol_data   9.06T  2.96T  6.10T  32%  1.00x  ONLINE  -
vol_ssd_1   444G   246G   198G  55%  1.00x  ONLINE  -
vol_ssd_2   444G   215G   229G  48%  1.00x  ONLINE  -

Code:
root@san01:~# zfs list
NAME                              USED  AVAIL  REFER  MOUNTPOINT
rpool                            13.1G   278G  4.58M  /rpool
rpool/ROOT                       2.82G   278G    31K  legacy
rpool/ROOT/solaris               2.82G   278G  2.33G  /
rpool/ROOT/solaris-backup-1       133K   278G  2.27G  /
rpool/ROOT/solaris-backup-1/var    56K   278G   112M  /var
rpool/ROOT/solaris-backup-2        62K   278G  2.30G  /
rpool/ROOT/solaris-backup-2/var     1K   278G   118M  /var
rpool/ROOT/solaris/var            235M   278G   114M  /var
rpool/VARSHARE                    141K   278G   141K  /var/share
rpool/dump                       8.23G   279G  7.98G  -
rpool/export                       98K   278G    32K  /export
rpool/export/home                  66K   278G    32K  /export/home
rpool/export/home/andy             34K   278G    34K  /export/home/andy
rpool/swap                       2.06G   278G  2.00G  -
vol_data                         3.76T  3.37T   243K  /vol_data
vol_data/data                    1.72T  3.37T  1.72T  /vol_data/data
vol_data/vmware_lun_3             516G  3.87T  48.2M  -
vol_data/vmware_lun_4             516G  3.87T  48.2M  -
vol_data/vmwarelun               1.03T  3.76T   662G  -
vol_ssd_1                         423G  14.1G    31K  /vol_ssd_1
vol_ssd_1/vmware_lun_1            423G   191G   246G  -
vol_ssd_2                         423G  14.1G    31K  /vol_ssd_2
vol_ssd_2/vmware_lun_2            423G   222G   215G  -

Zpool Information:

Code:
root@san01:~# zpool status vol_ssd_1
  pool: vol_ssd_1
 state: ONLINE
  scan: none requested
config:

        NAME                     STATE     READ WRITE CKSUM
        vol_ssd_1                ONLINE       0     0     0
          c0t50015178F369449Ad0  ONLINE       0     0     0
          c0t50015178F369466Ad0  ONLINE       0     0     0

errors: No known data errors

Code:
root@san01:~# zpool status vol_ssd_2
  pool: vol_ssd_2
 state: ONLINE
  scan: none requested
config:

        NAME                     STATE     READ WRITE CKSUM
        vol_ssd_2                ONLINE       0     0     0
          c0t5001517803D75FC1d0  ONLINE       0     0     0
          c0t5001517803DB359Ad0  ONLINE       0     0     0

errors: No known data errors


Code:
root@san01:~# zpool status vol_data
  pool: vol_data
 state: ONLINE
  scan: scrub in progress since Fri Mar 22 10:08:47 2013
    147G scanned out of 2.96T at 226M/s, 3h37m to go
    0 repaired, 4.85% done
config:

        NAME                       STATE     READ WRITE CKSUM
        vol_data                   ONLINE       0     0     0
          raidz1-0                 ONLINE       0     0     0
            c0t5000C5004E31AC95d0  ONLINE       0     0     0
            c0t5000C5005CB67BACd0  ONLINE       0     0     0
            c0t5000C50051D8D402d0  ONLINE       0     0     0
            c0t5000C50051E4E37Ad0  ONLINE       0     0     0
            c0t5000C50051E51DD0d0  ONLINE       0     0     0
        cache
          c0t5001517BB28B45A5d0    ONLINE       0     0     0

errors: No known data errors
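
Building a pool with that layout is a single command; something like the below, with the disk device names swapped for your own.

Code:
zpool create vol_data raidz1 c0tXXXX1d0 c0tXXXX2d0 c0tXXXX3d0 c0tXXXX4d0 c0tXXXX5d0 \
    cache c0tXXXX6d0                      # five-disk RAID-Z1 plus an SSD read cache (L2ARC)
zpool scrub vol_data                      # periodic scrubs are started like this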

COMSTAR Details:

Code:
root@san01:~# fcinfo hba-port
HBA Port WWN: 
        Port Mode: Target
        Port ID: ef
        OS Device Name: Not Applicable
        Manufacturer: QLogic Corp.
        Model: QLE2462
        Firmware Version: 5.2.1
        FCode/BIOS Version: N/A
        Serial Number: not available
        Driver Name: COMSTAR QLT
        Driver Version: 20100505-1.05
        Type: L-port
        State: online
        Supported Speeds: 1Gb 2Gb 4Gb
        Current Speed: 4Gb
        Node WWN: 
HBA Port WWN: 
        Port Mode: Target
        Port ID: ef
        OS Device Name: Not Applicable
        Manufacturer: QLogic Corp.
        Model: QLE2462
        Firmware Version: 5.2.1
        FCode/BIOS Version: N/A
        Serial Number: not available
        Driver Name: COMSTAR QLT
        Driver Version: 20100505-1.05
        Type: L-port
        State: online
        Supported Speeds: 1Gb 2Gb 4Gb
        Current Speed: 4Gb
        Node WWN:

Code:
root@san01:~# stmfadm list-lu
LU Name: 600144F0C44A8F000000514B6CA90005
LU Name: 600144F0C44A8F000000514B6CB30006
LU Name: 600144F0C44A8F000000514B91D70001
LU Name: 600144F0C44A8F000000514B91DD0002

These present to VMware as four volumes, from which I have created two storage groups: High Performance (SSD) and General Performance (SATA). This should then balance the load across the presented drives.
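
Presenting the LUs to particular hosts is handled with COMSTAR views and, if you want to lock them to specific initiators, host groups; roughly the pattern below, where the host-group name and initiator WWN are placeholders and the LU GUID is one from the list above.

Code:
stmfadm create-hg esx-hosts                                       # one host group for the ESXi initiators
stmfadm add-hg-member -g esx-hosts wwn.2100001B32XXXXXX           # add each ESXi HBA port WWN
stmfadm add-view -h esx-hosts 600144F0C44A8F000000514B6CA90005    # present the LU to that group only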


Final Spec List:

5 x Seagate 2TB 7.2K Drives
4 x Intel 335 240 GB SSD
1 x Intel 330 120GB
Intel SAS Expander RES2SV240
LSI HBA 9207-8i
QLogic HBA QLE2462
 