What SANs should I consider?

Hi,

I'm busy putting together a spec for a VMware cluster. I know most of the hardware and my preferences for vendors; so far I have 4x HP DL360 G7s, 2x Cisco ASA 5510s (or Juniper equivalents), plus assorted odds and ends. What I'm getting stuck on is shared storage.

We don't need a lot of space, only 3-4TB, but some of our VMs are very read-heavy, so I know we need a fairly quick system (not SSD fast, but I'm thinking SAS disks and >= 4Gb/s interconnects; see the back-of-envelope spindle sums after the list). So far I've looked at:

  • A pair of servers running DataCore's SANmelody software with DAS disks - this isn't the cheapest option
  • A pair of HP's P2000 G3s - these would be perfect, but I'd prefer continuous replication rather than scheduled snapshots
  • NetApp's FAS2020 - I suspect this is above my price range.
  • IBM's DS3400 - this feels like very last-gen technology, as it only has 4Gb/s FC ports, so I'm guessing there might be a model refresh soon.
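
For the speed side, here's the rough spindle-count back-of-envelope I've been working from (a minimal sketch; the ~175 IOPS per 15k SAS disk and the RAID write-penalty figures are just the usual rules of thumb, not vendor numbers):

```python
# Back-of-envelope spindle maths (rule-of-thumb figures, not vendor specs).

def usable_read_iops(spindles, iops_per_spindle=175):
    """Aggregate random-read IOPS across a stripe; reads hit every spindle."""
    return spindles * iops_per_spindle

def usable_write_iops(spindles, iops_per_spindle=175, write_penalty=2):
    """Random-write IOPS after the RAID write penalty (2 for RAID-10, 4 for RAID-5)."""
    return spindles * iops_per_spindle // write_penalty

# e.g. a 12-bay shelf of 300GB 15k SAS in RAID-10: roughly 1.8TB usable,
# ~2100 random-read IOPS and ~1050 random-write IOPS.
print(usable_read_iops(12), usable_write_iops(12))
```

Which is why I'm more worried about spindle count than about whether the interconnect is 4Gb/s or faster.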

Are there any other solutions/vendors I should look at? Any suggestions or advice would be welcome :)

akakjs
 
I've done some searching on the forums; most of the threads I found discuss non-mission-critical systems: development environments, home VMware installs, or people with unrealistic expectations of costs. (I know SAN equipment costs a lot; I just don't think it's usually justified ;) ).

I've just had a look at LeftHand Networks; is that the company HP purchased for the P4000 line of SAN boxes? It looks interesting, and a quick Google search shows the base pricing isn't unreasonable. I shall read up some more.

Thanks!

akakjs
 
Most of the reads will come from our MS Dynamics GP ERP system. It's horrifying in its read requirements (and tempdb usage :P) and will happily read continuously in our current environment. It currently has its own dedicated 5-disk RAID-5 array attached to some shared EMC FC storage at our host (Rackspace), and we see read queues peaking at about 15. However, the main reason for wanting more speed is that the reads directly affect the responsiveness of the client applications, which can be bad enough that our accounts team avoids running certain reports during the day.

I don't have IOPS figures to hand (I started this thread after getting back from the pub :D ); I'll update with those on Monday when I get into the office.
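
In the meantime, this is roughly how I plan to grab a quick sample (a minimal sketch using Python's psutil, which is just my choice of tooling; perfmon's "Disk Reads/sec" and "Avg. Disk Read Queue Length" counters would give the same picture on the SQL box):

```python
# Quick-and-dirty IOPS sampler: take two readings of the per-disk I/O counters
# a fixed interval apart and report the per-second delta.
# Needs the psutil package (pip install psutil); run it on the DB host.
import time
import psutil

INTERVAL = 60  # seconds; long enough to smooth out bursts

before = psutil.disk_io_counters(perdisk=True)
time.sleep(INTERVAL)
after = psutil.disk_io_counters(perdisk=True)

for disk, b in before.items():
    a = after[disk]
    read_iops = (a.read_count - b.read_count) / INTERVAL
    write_iops = (a.write_count - b.write_count) / INTERVAL
    read_mbps = (a.read_bytes - b.read_bytes) / INTERVAL / 2**20
    print(f"{disk}: {read_iops:.0f} read IOPS, {write_iops:.0f} write IOPS, "
          f"{read_mbps:.1f} MB/s read")
```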

I'll add the VNXe to my reading list :)

Thanks for all the help :)

akakjs
 
Whether real-time replication is a good solution depends on what sort of environment you're building and what level of resilience/HA/DR you need.
Eventually we're aiming for proper multi-site DR, so I can spend a little extra to get real-time replication, as I can justify that against the BC/DR budget instead of our "hosting" budget.

bigredshark said:
Good things to say about Hitachi for low- to mid-end, reasonable-performance boxes for virtualisation. The only notable downside is their ridiculous policy about when they will and won't sell you high-density shelves; without them it becomes very rack-space-heavy in comparison to some competitors.
Hmm, could you expand on that a little? This particular SAN box is going into a co-lo rack, so density of storage could be an issue, as we'd have to pay per-watt and per-rack charges.

Nikumba said:
We use bonded gig links from the SAN into our Cisco 3020 switches in the back of our blade centre
How scalable/reliable are bonded links in general for iSCSI? One of the reasons I'm leaning towards FC is that I see bonded gigabit connections as more complexity and more stuff to go wrong. You'd need 4x as many cables/switch ports/NIC ports, so I see that as 4x as much stuff that could break, and I assume most breaks aren't complete link failures but rather intermittent issues like dropped frames.
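
To put a number on the "4x as much stuff that could break" hunch, this is the rough sum I'm doing in my head (assuming each link independently misbehaves with the same small probability, which is obviously a simplification, and the 2% figure is made up for illustration):

```python
# Rough failure maths for a 4-way bond vs a single link.
p = 0.02      # per-link chance of trouble over some period (made-up illustrative figure)
links = 4

# Chance that at least one link in the bond is playing up (dropped frames, flapping...):
any_degraded = 1 - (1 - p) ** links   # roughly links * p for small p, i.e. ~4x a single link
# Chance the whole bond is dead at once (all links down simultaneously):
total_outage = p ** links             # vastly smaller than a single link's p

print(f"single link: {p:.4f}")
print(f"bond, something degraded: {any_degraded:.4f}; bond fully down: {total_outage:.8f}")
```

So a bond should see total outages far less often, but partial, intermittent niggles roughly four times as often, and it's the latter I'm worried about.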

To give some background, I'm actually a lead developer and I've never done full-time sys/net admin. So I try to do everything as simply as possible, in a way that requires the least amount of maintenance (fewer parts are better in my mind...).

I'll start talking to people about the P4000 systems on Monday, as they sound ideal for us (bonded gigabit aside).

Thanks again, everyone; I'm getting an education :)

akakjs
 
Well, you certainly can't fault the SAN market for lack of choice at this price point; loads of options (another good reason to seek advice!).

I've been reading up on the P4000 and it looks very interesting. Their demo video of turning off one VMware hypervisor and having the VMs recover in seconds was very impressive, and their SAN starter bundles look like a good option. How does the performance of the 10Gb model compare to the 1Gb (bonded)? Having the connected servers and the inter-box RAID share the same connections seems like it could make the standard 1Gb limiting (even bonded).
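
For what it's worth, my back-of-envelope on the bandwidth question (assuming roughly 90% of line rate is usable after iSCSI/TCP overheads, and that a single iSCSI session rides one physical link in a bond rather than being striped across all four; both are simplifications):

```python
# Rough usable-throughput comparison (illustrative assumptions, not benchmarks).

def usable_mb_per_s(gbits, efficiency=0.9):
    """Approximate usable MB/s for a nominal link speed in Gbit/s."""
    return gbits * efficiency * 1000 / 8

single_gig = usable_mb_per_s(1)    # ~112 MB/s
bonded_4x  = 4 * single_gig        # ~450 MB/s aggregate, but a single iSCSI
                                   # session still tops out around one link's worth
ten_gig    = usable_mb_per_s(10)   # ~1125 MB/s, available to a single session
fc_4g      = usable_mb_per_s(4)    # ~450 MB/s for a 4Gb FC port, for comparison

print(f"1GbE ~{single_gig:.0f} MB/s, 4x1GbE bond ~{bonded_4x:.0f} MB/s aggregate")
print(f"10GbE ~{ten_gig:.0f} MB/s, 4Gb FC ~{fc_4g:.0f} MB/s")
```

So if the connected servers and the inter-box RAID traffic really do share the same 1Gb ports, I can see the 10Gb model being worth asking about.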

Laser402: I'm not sure I'd want anything to do with Sun/Oracle at the moment. Oracle just can't seem to hold on to Sun engineering talent, so I can see them hitting rough waters there.

Kimbie: We're already looking at a VMware Enterprise Plus accelerator pack; from web prices it works out at only 3k more for 4 dual-socket hypervisors. It seems to make sense to go for the Enterprise Plus pack.

J1nxy: Sorry I didn't answer your comments on the RAID configuration. We currently buy the SAN space by the GB, so we have no say in the RAID setup :(

Thanks again everyone.

akakjs
 