What SANs should I consider?

Hi,

I'm busy putting together a spec for a VMware cluster. I know most of the hardware and my preferences for vendors; so far I have 4x HP DL360 G7s, 2x Cisco ASA5510s (or the Juniper equivalent), plus assorted odds and ends. What I'm getting stuck on is shared storage.

We don't need a lot of space, only 3-4TB, but some of our VMs are very read-heavy, so I know we need a fairly quick system (not SSD fast, but I'm thinking SAS disks and >= 4Gb/s interconnects). So far I've looked at:

  • A pair of servers running DataCore's SANmelody software with DAS disks - this isn't the cheapest option
  • A pair of HP's P2000 G3s - these would be perfect, but I'd prefer continuous replication rather than scheduled snapshots
  • NetApp's FAS2020 - I suspect this is above my price range...
  • IBM's DS3400 - this feels very last-gen technology as it's only got 4Gb/s FC ports, so I'm guessing there might be a model refresh soon.

Are there any other solutions/vendors I should look at? Any suggestions or advice would be welcome :)

akakjs
 
I really like the Dell EqualLogic boxes. Also, LeftHand Networks could be a better choice than the HP P2000.

There have been a few SAN threads in the Servers and Enterprise part of the forum. Have a search and read through. :)
 
I've done some searching on the forums; most of the threads I found discuss non-mission-critical systems: development environments, home VMware installs, or people with unrealistic expectations of costs. (I know SAN equipment costs a lot; I just don't think it's usually justified ;) ).

I've just had a look at LeftHand Networks - is that the company HP purchased for the P4000 line of SAN boxes? It looks interesting, and a quick Google search shows the base pricing isn't unreasonable. I shall read up some more.

Thanks!

akakjs
 
When you say very read-heavy, what do you mean - what sort of IOPS numbers are they generating? Are the VMs on existing DAS-based kit, on other storage, or are you looking to P2V them (either by automated tool or by reinstalling as a VM and transitioning)? 4Gb/s interconnects are pretty hefty in throughput terms, and you're going to have some big IO requirements to touch the edges of that.
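
To put rough numbers on that - a back-of-envelope sketch in Python, where the 8KB and 64KB block sizes are just illustrative assumptions rather than anything measured from your workload:

[CODE]
# Back-of-envelope: how much random-read IO it takes to saturate a 4Gb/s link.
# Block sizes below are assumptions for illustration, not measured values.

LINK_GBPS = 4                                # nominal 4Gb/s FC link
link_bytes_per_sec = LINK_GBPS * 1e9 / 10    # 8b/10b encoding: ~10 line bits per data byte

for block_kb in (8, 64):                     # e.g. SQL random reads vs. larger sequential reads
    block_bytes = block_kb * 1024
    iops_to_saturate = link_bytes_per_sec / block_bytes
    print(f"{block_kb:>3} KB blocks: ~{iops_to_saturate:,.0f} IOPS to fill the link")

# Roughly 48k IOPS at 8KB or ~6k IOPS at 64KB - far more than a handful of
# SAS spindles will deliver, so the disks will be the bottleneck long before
# a 4Gb/s interconnect is.
[/CODE]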
 
Most of the reads will come from our MS Dynamics GP ERP system. It's horrifying in its read requirements (and tempdb usage :P), and will happily read continuously in our current environment. It currently has its own dedicated 5-disk RAID-5 array, attached to some shared EMC FC storage at our host (Rackspace), and we get read queues peaking at about 15. However, the main reason for wanting more speed is that the reads directly affect the responsiveness of the client applications, which can be bad enough that our accounts team avoids running certain reports during the day.

I don't have IOPS to hand (I started this thread after getting back from the pub :D ); I'll update with those on Monday when I get into the office.
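
When I do, I'll probably just summarise a perfmon CSV export with something like this - a rough Python sketch where the filename and counter column names are placeholders I'd need to match to whatever I actually log:

[CODE]
# Rough sketch: summarise a Windows perfmon CSV export into IOPS figures.
# "disk_counters.csv" and the counter column names below are placeholders -
# adjust them to the counters actually logged for the SQL data volume.
import csv
import statistics

READS_COL = r"\PhysicalDisk(_Total)\Disk Reads/sec"
QUEUE_COL = r"\PhysicalDisk(_Total)\Avg. Disk Read Queue Length"

reads, queue = [], []
with open("disk_counters.csv", newline="") as f:
    for row in csv.DictReader(f):
        try:
            r = float(row[READS_COL])
            q = float(row[QUEUE_COL])
        except (KeyError, ValueError):
            continue  # skip blank samples or mismatched columns
        reads.append(r)
        queue.append(q)

if not reads:
    raise SystemExit("no matching counter columns found - check the column names")

print(f"samples: {len(reads)}")
print(f"read IOPS  avg {statistics.mean(reads):.0f}, peak {max(reads):.0f}")
print(f"read queue avg {statistics.mean(queue):.1f}, peak {max(queue):.1f}")
[/CODE]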

I'll add the VNXe to my reading list :)

Thanks for all the help :)

akakjs
 
Good things to say about Hitachi for low-to-mid-end, reasonable-performance boxes for virtualisation. The only notable downside is their ridiculous policy about when they will and won't sell you high-density shelves; without them it becomes very rack-space-heavy in comparison to some competitors.
 
akakjs said:
Most of the reads will come from our MS Dynamics GP ERP system... it currently has its own dedicated 5-disk RAID-5 array, attached to some shared EMC FC storage at our host (Rackspace), and we get read queues peaking at about 15.

The RAID config is not going to be doing you any favours from a performance perspective, regardless of whether it's on an FC array or not. iSCSI will probably deliver more than sufficient performance with a properly specified disk system. The LeftHand or EqualLogic boxes would be a good starting point, but if cost is an issue the latest-gen P2000 series from HP is not bad at all.
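
To put rough numbers on why - a back-of-envelope Python sketch, where the ~180 IOPS per 15k SAS spindle and the 70/30 read/write mix are assumptions for illustration, not figures from your system:

[CODE]
# Back-of-envelope array sizing: rule-of-thumb host-visible IOPS for a RAID set.
# Per-spindle IOPS and the read/write mix are illustrative assumptions only -
# plug in real numbers from perfmon once you have them.

def effective_iops(spindles, iops_per_spindle, read_fraction, write_penalty):
    """Host-visible random IOPS once the RAID write penalty is accounted for."""
    raw = spindles * iops_per_spindle
    write_fraction = 1.0 - read_fraction
    return raw / (read_fraction + write_penalty * write_fraction)

SPINDLE_IOPS = 180      # rough figure for a 15k SAS disk (assumption)
READ_FRACTION = 0.7     # assumed 70/30 read/write mix

for label, spindles, penalty in [
    ("5-disk RAID-5 ", 5, 4),   # each random write costs ~4 back-end IOs
    ("8-disk RAID-10", 8, 2),   # each random write costs ~2 back-end IOs
]:
    iops = effective_iops(spindles, SPINDLE_IOPS, READ_FRACTION, penalty)
    print(f"{label}: ~{iops:.0f} host IOPS")
[/CODE]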

Whether realtime replication is a good solution for you depends on what sort of environment you're building and what level of resilience/HA/DR you need.
 
akakjs said:
IBM's DS3400 - this feels very last-gen technology as it's only got 4Gb/s FC ports, so I'm guessing there might be a model refresh soon. Are there any other solutions/vendors I should look at?

There's an IBM DS3500, which is an LSI product (CTS2600). Dell also sell it as the MD3220i/3220.

I had a meeting with an LSI distributor recently and they said...

Dell take the LSI controllers but have the chassis, PSU, backplane etc. made for them by other OEMs. LSI reckon Dell take a ~5% performance hit because of this.

IBM take the LSI product and make some cosmetic changes, but they also add some extra error checking in the controllers, which costs a ~10% performance drop (compared to the CTS2600).

You have 6Gbps SAS, gigabit iSCSI and 8Gbps FC controller options.
 
We use HP P4000 LeftHands, which do the job for our VM system.

We use bonded gig links from the SAN into our Cisco 3020 switches in the back of our blade centre

Kimbie
 
J1nxy said:
Whether realtime replication is a good solution for you depends on what sort of environment you're building and what level of resilience/HA/DR you need.
Eventually we're aiming for proper multi-site DR, so I can spend a little extra to get realtime replication, as I can justify that against the BC/DR budget instead of our "hosting" budget.

bigredshark said:
The only notable downside is their ridiculous policy about when they will and won't sell you high-density shelves; without them it becomes very rack-space-heavy in comparison to some competitors.
Hmm, could you expand on that a little? This particular SAN box is going into a co-lo rack, so storage density could be an issue, as we'd have to pay a per-watt & per-rack fee.

Nikumba said:
We use bonded gig links from the SAN into our Cisco 3020 switches in the back of our blade centre
How scalable/reliable are bonded links in general for iSCSI? One of the reasons I'm leaning towards FC is that I see bonded gigabit connections as more complexity and more stuff to go wrong: you'd need 4x as many wires/switch ports/network card ports, so I see that as 4x as much stuff that could break, and I assume most breaks are not complete link failures but rather intermittent issues like dropped frames.
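
To put my worry into numbers - a rough Python sketch where the 2% annual per-link fault rate is a completely made-up figure, just to show the shape of the trade-off:

[CODE]
# Rough sketch of the redundancy trade-off: more links mean more things that
# can niggle, but far less chance of losing the whole path. The per-link fault
# probability is a made-up illustrative number and faults are assumed independent.

P_LINK_FAULT = 0.02   # assumed chance any one link has some fault in a year

for n_links in (1, 4):
    # chance that at least one link gives you trouble (niggles to chase)
    p_any_fault = 1 - (1 - P_LINK_FAULT) ** n_links
    # chance that every link is down at once (what actually takes storage offline)
    p_total_loss = P_LINK_FAULT ** n_links
    print(f"{n_links} link(s): any fault {p_any_fault:.1%}, total path loss {p_total_loss:.2e}")
[/CODE]

So four links roughly quadruple the odds of having some niggle to chase, but make a complete path failure vastly less likely - assuming the faults really are independent and the multipathing behaves.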

To give some background, I'm actually a lead developer and I've never done full-time sys/net admin. So I try to do everything as simply as possible, in a way that requires the least amount of maintenance (fewer parts are better in my mind...).

I'll start talking to people about the P4000 systems on Monday, as they sound ideal for us (bonded gigabit aside).

Thanks again everyone, I'm getting an education :)

akakjs
 
We have no problems with the links - we're barely using 15% of them, and that's hosting our central SQL server, our EPOS SQL server, Exchange 2007 with a 150GB mail store and another 25 central servers.

We use the blades as they have a 10Gb Ethernet card onboard, and the P4000s have 10Gb Ethernet, so it's only our 3020s slowing things down. However, should we ever need that capacity we can swap the 3020s out for 10Gb versions and away we go.

As I understand it, FC is limited to about 8Gb/s transfer.

Have you looked at which licence you will get from VMware? You can also save some cash on the servers: if you order them without hard disks, get a USB flash disk/card, put that in the server and install ESXi onto it, it saves you buying 8 HDDs at a few hundred each from HP.

In terms of the licence I would recommend getting Enterprise Plus. It's about £2,500 a socket but gives you everything, including what is called a Distributed vSwitch: this allows you to create a virtual switch and add your hosts to it, so you only have to configure the network in one place rather than on each host. Works a treat when you have 10 VLANs and other bits trunked to the VMs.


Kimbie
 
NetApp are quite affordable, you know - you never pay what's on the website, they knock a lot off. My missus is a project manager with them.

Stelly
 
Stelly said:
NetApp are quite affordable, you know - you never pay what's on the website, they knock a lot off.

+1 Another vote for NetApp. Also, all the Snap tools come in very handy. :D

EDIT: IIRC, is the current 20XX series going EOL soon?
 
Well, you certainly can't fault the SAN market for lack of choice at this price point - loads of options (another good reason to seek advice!).

I've been reading up on the P4000 and it looks very interesting. Their demo video of turning off one VMware hypervisor and having the VMs recover in seconds was very impressive, and their SAN starter bundles look like a good option. How does the performance of the 10Gb compare to the 1Gb (bonded)? Connected servers and inter-box RAID using the same connections seem like they could make the standard 1Gb limiting (even bonded).
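
My back-of-envelope worry in numbers - a rough Python sketch, where the host traffic figures and the assumption that every write is also mirrored to the partner node over the same NICs are mine, not HP's:

[CODE]
# Rough sketch of link contention on a 2-node P4000 cluster using network RAID.
# Host traffic figures and the mirror-every-write assumption are illustrative,
# not vendor numbers.

BOND_GBPS = 2.0                      # 2x 1Gb bonded per node (assumed)
link_mb_s = BOND_GBPS * 1000 / 8     # ~250 MB/s ceiling, ignoring protocol overheads

host_read_mb_s = 120                 # assumed front-end read load hitting one node
host_write_mb_s = 40                 # assumed front-end write load
replication_mb_s = host_write_mb_s   # each write also mirrored to the partner node

total = host_read_mb_s + host_write_mb_s + replication_mb_s
print(f"ceiling ~{link_mb_s:.0f} MB/s, offered load ~{total} MB/s "
      f"({total / link_mb_s:.0%} of the bond)")
[/CODE]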

Laser402: I'm not sure I'd want anything to do with Sun/Oracle at the moment. Oracle just can't seem to hold on to Sun engineering talent, so I can see them hitting rough waters there.

Kimbie: We're already looking at a VMware Enterprise Plus accelerator pack; from web prices it works out at only £3k more for 4 dual-socket hypervisors, so it seems to make sense to go for the Enterprise Plus pack.

J1nxy: sorry, I didn't answer your comments on the RAID configuration. We currently buy the SAN space by the GB, so we have no say in the RAID setup :(

Thanks again everyone.

akakjs
 
We recently bought lots of LeftHand nodes, and I'm very impressed.

Nobody really needs 10Gb on the back of the nodes; the bonded 1Gb NICs work fine. Although if you have lots of nodes, having 10Gb from your blade centre to the iSCSI network is worth it. The SAN/iQ v9 software works really well with ESXi 4.1, as it offloads much of the disk IO from the ESXi host onto the LeftHand nodes.
 