Cheaper SAN storage

Hi,

As you are probably aware from my other posts I'm trying to pop together a solution for my new company.

I've been looking at SAN solutions from Dell and HP (LeftHand Networks specifically). These are coming in at around $25k for a single unit, leaving me with a single point of failure, as I can't afford two units.

I know the word Openfiler is a little dirty when considering SAN storage for production environments. Would I be better off having two Openfiler boxes over one dedicated SAN storage unit? Has anyone used Openfiler in production environments?

We have spare servers here with plenty of grunt to run Openfiler; however, I am concerned that, at the end of the day, it is open-source software. The network has around 60 users and 7 servers, of which 6 will eventually be virtualised.

TIA for any opinions good and bad (I'm sure there will be more bad!!!).
 
Which SANs in particular?
I can't speak for Dell kit, but you do tend to find that the SAN itself is fully redundant and has dual pathed everything.

As long as your server(s) are dual pathed to the SAN there should be no issue, but there is no replacement for daily backups.
 
I was looking at the LHN P4500 (I think that's the model name) with 1.8TB in SAS drives; the units have double everything. I guess this would provide sufficient redundancy and would certainly be my preference. I'm going to dual-path via two managed switches and multiple NICs on the VMware boxes.
 
Traditionally I haven't bothered having a second SAN on site (I'm not convinced it's much use, it's got to be a serious failure to bring it into play) but I have the advantage of multiple sites which could handle the capacity if we needed to take one offline due to a SAN failure...

In your scenario, I'd probably go for a single highly redundant unit and a decent support contract...
 
I agree with brs, a replicated SAN on the same site as the other seems like wasted investment to me (unless you have a lot of money).

Sounds like you've got everything covered to me (Don't forget a UPS).

As long as your VM hosts are dual everything (PSU, HBA, NIC, etc.) I think you have a pretty good setup without shelling out some serious cash.

What version of VMWare are you running?
 
I think I saw 4x APC units, small ones, so I may have to get some more. One redundant SAN it is, I think.

I'm set to spec switches but will go with 2x 24-port gigabit managed HP ProCurve jobbies and a couple of quad-port NICs for the existing servers.

Probably going to have to go with VMware vSphere 4 Essentials Plus, as Enterprise is bloody expensive. It doesn't have VMotion, but I figure HA will just about do the job, whilst saving $9k.

I will have a rough DR site in which I'm going to have a NAS, so I may keep a server over that side with Openfiler in case the SAN dies. Worst-case scenario, I collect the Openfiler box and pop it onto the network.
 
The company I did my last project for wanted a SAN "on the cheap", so we ended up going for StarWind on Server 2008, using a Dell PowerEdge 2950 connected to a Dell PowerVault MD1000. It was used to host the virtual disks for the 6 or 7 servers we virtualised, which sounds pretty much like your planned setup.

I'm not going to try to pretend it was as good as a proper SAN, but for so few users it wasn't really an issue. The physical servers we replaced were P3 1.2GHz, so the performance increase was tremendous even though we were running everything in Hyper-V VMs. It also reduced the backup window by a couple of hours, and achieved the main goal of being able to restore the entire infrastructure within a day in the event of a disaster. It may be worth a look, if you have the time to test it using the free trial.
 
You could look into running DRBD between two hosts. DRBD replicates data at the block level in real time, and you can build an HA storage cluster with it. It does take a bit of work though.
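To give a flavour of what's involved, a DRBD resource definition looks roughly like this (a sketch only; the hostnames, disk paths and IPs are placeholders, not a tested config):

```
resource r0 {
    protocol C;                  # synchronous: a write completes on both nodes

    on filer1 {                  # hostname placeholder
        device    /dev/drbd0;
        disk      /dev/sdb1;     # backing block device (placeholder)
        address   192.168.10.1:7788;
        meta-disk internal;
    }
    on filer2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.10.2:7788;
        meta-disk internal;
    }
}
```

You'd then typically layer something like Heartbeat on top to fail the iSCSI target over between the two nodes, which is where most of the "bit of work" comes in.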
 
Umm, I'd rather not tbh lol. I am aware of the technology but wouldn't feel comfortable running it in production. If I had no money to use then maybe. I'm not a fan of making things harder when they don't have to be. :p
 
I've got a couple of LeftHand DL380s with 6x 300GB SAS drives in both (IIRC) that I'm planning on selling if you're interested. They're the older models so aren't that fast though (compared to the DL320s with 12 drives I'm using now).
 
alternative?

Have you considered LeftHand's VSAs rather than a physical box? It's a virtual iSCSI SAN appliance and has some limited software add-ons that can take application-consistent volume snapshots (it's all scripted atm).

The setup we've just got going is two DL385 G6s, each with dual hex-core Opterons, 16GB RAM and 13x 500GB local SAS drives.

The drives have been divided up so there's one small RAID volume that the ESX OS sits on, plus a very small datastore. The rest is used to create very large RAID volumes, but less than 2TB each.

The VSA is then installed on the small datastore, and ESX presents the rest of the RAID volumes to the VSA as disks. The VSA then divvies up the large RAID volumes into its own volumes and presents them back to the ESX server as iSCSI LUNs!

Each server is set up exactly the same, so you end up with a virtual two-node iSCSI SAN cluster, each node syncing with the other, meaning you could lose an entire server without losing any VMs/data (ignoring OS or application data corruption).

Now I won't lie, it was a long and painful process, as the ESX setup is complex and very important for this all to work (obviously), and we were close to throwing in the towel. It works now, but only after HP sent in one of their top LeftHand men (in the UK) who rebuilt the entire cluster.

Caveats: you HAVE to use a gigabit Ethernet switch for the connectivity (we used a Cisco 2960G-24TC-L), and LeftHand requires a third VSA manager for the cluster to work (to keep quorum). We have this on a standard desktop (2GB of RAM) running the free ESX server. Also, the VSAs need to be outside any HA/VMotion config, and we've yet to get the system to boot up, power on the VSA and then automatically rescan the iSCSI HBA to see all the local VSA's iSCSI LUNs.
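For what it's worth, that boot-time rescan can sometimes be scripted from the ESX service console. This is only a hedged sketch (the adapter name vmhba33 is a placeholder; check `esxcfg-scsidevs -a` for the real one on your host):

```
# Rescan the software iSCSI adapter for LUNs the VSA now presents
esxcfg-rescan vmhba33

# Then rescan for VMFS datastores on the newly visible LUNs
vmkfstools -V
```

Whether kicking this off late enough in the boot sequence (after the VSA has powered on) actually solves it, I can't say; it would need testing on your setup.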

Now if that sounds like too much virtualisation (and sometimes I think it is!) then how about NetApp? We have that at our head office and I love them!! The prices of the 2020s, even with dual controllers, are very good, and you do get full-fat ONTAP (unlike the now-defunct StoreVaults). With the software packages you can get a stack of very good functionality that makes managing SAN data a breeze. These can also grow up to 60-odd TB.
 
Not too sure about the VSA. I have used it before and it isn't a bad product. The main problem is that I already have most of the kit: currently I wouldn't have enough spare HDD slots for the storage, and I may as well spend the money on a SAN rather than two new servers.

I'll take a look at NetApp, although I fear they may be a little pricey.
 
At around £15k. I have just bought a pair of them. Bear in mind that if you buy a 2020 with all 12 bays populated, you will have around 5.8TB of usable storage.

That price is for a clustered NetApp: dual PSU, dual controller & 3yr support.
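That usable figure being well under the raw 12TB surprises people, so here's one plausible back-of-envelope breakdown. All the percentages and drive counts below are illustrative assumptions (spares, RAID-DP parity, right-sizing, WAFL reserve), not NetApp's official sizing:

```python
# Rough usable-capacity estimate for a 12-bay filer with 1 TB SATA drives.
# Every figure here is an assumption for illustration, not official sizing.

total_drives = 12
hot_spares = 2           # assume one spare per controller (dual-controller)
raid_dp_parity = 2       # RAID-DP uses two parity drives per RAID group
data_drives = total_drives - hot_spares - raid_dp_parity  # leaves 8 data drives

right_sized_tb = 0.828   # a "1 TB" drive right-sizes to roughly 0.83 TB
raw_data_tb = data_drives * right_sized_tb

wafl_reserve = 0.10      # filesystem reserve, roughly 10%
usable_tb = raw_data_tb * (1 - wafl_reserve)

print(round(usable_tb, 2))  # roughly 5.96 TB, in the ballpark of the quoted 5.8 TB
```

Snapshot reserves would shave off a bit more, which is how you end up near the 5.8TB quoted above.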
 