Upgrade path for HP MSA60 (for VMWare)?

Associate
Joined
18 Oct 2002
Posts
1,044
Hi,

I've been looking at moving our servers at work on to VMware, and I've hit a bit of a knowledge gap (I'm a developer who also does the IT) while I've been looking for some shared storage.

First some background: We run a mix of normal network services (AD, Exchange, etc.) for 50 users, a development environment for our website (test servers & source control) and a "data warehouse" (a replicated copy of our production database).

My end-game plan is about 12 virtual servers running on two hosts attached to some shared storage, on a version of VMware (ESXi or vSphere Essentials).

The problem is we've got some existing storage hardware we'd like to reuse, in the form of an HP MSA60 direct-attached shelf loaded with 15k hard drives. However, I've been unable to find a replacement shelf that can be shared between servers and accepts our existing hard disk drives: the MSA60's replacement (the MSA2000) uses a different disk caddy, and I've never been able to find a source of HP caddies for less than £50 each (which is a rip-off for a plastic box...).

So I'm a little stuck as to how to proceed. I don't know HP's product range all that well, so I'm unsure what other products we could use. There is some money available, but I have to justify every penny (grr) and anything over £10k total isn't going to happen. Am I stuck with having to replace all those disks? I'd like to use an HP product, but another make would be OK (as long as the caddies are cheap ;) )

thanks!

akakjs
 
Associate
Joined
8 Dec 2008
Posts
1,391
Location
Basingstoke, Hants
If you go for an iSCSI SAN you could run 15 VMs off of it easily. Even using SATA disks in a RAID50 or something :)
We have done this very successfully at work!
 
Associate
Joined
5 Feb 2009
Posts
424
If you go for an iSCSI SAN you could run 15 VMs off of it easily. Even using SATA disks in a RAID50 or something :)
We have done this very successfully at work!


And more. It really depends on how many spindles you have. We run 16 disks in our iSCSI SAN (Dell EqualLogic). It's been very impressive. We run 20+ VMs and there is no noticeable difference in performance from a user perspective. We could comfortably add more servers (admittedly a lot of them don't run under a high load). One of our servers runs a file server for ~6000 users (no more than 1500 concurrent, though), and an Exchange 2007 server for currently ~7500 mailboxes.
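The "depends on how many spindles" point is easy to put rough numbers on. Here's a back-of-envelope sketch; the per-disk IOPS figures are common rules of thumb (not measured values from this setup), and it deliberately ignores RAID write penalties and controller cache:

```python
# Rule-of-thumb random IOPS per spindle by disk class (assumed figures).
IOPS_PER_DISK = {"7.2k SATA": 80, "10k SAS": 130, "15k SAS": 180}

def estimated_iops(disks: int, disk_type: str) -> int:
    """Naive aggregate random-read IOPS for a striped array.

    Ignores RAID write penalty, controller cache and queue depth,
    so treat it as an upper-bound sizing estimate only.
    """
    return disks * IOPS_PER_DISK[disk_type]

# A 16-spindle SATA array, as in the post above:
print(estimated_iops(16, "7.2k SATA"))  # 1280
```

Spread across 20+ lightly loaded VMs, even ~1300 aggregate IOPS goes a long way, which matches the experience described above.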

Virtualisation is the way to go! [Although we did keep some physical servers for DCs, Oracle, SQL Server, SCCM & MySQL (on Ubuntu)]
 
Associate
Joined
8 Dec 2008
Posts
1,391
Location
Basingstoke, Hants
Virtualisation is awesome! But it is entirely dependent on what you want to do. Some people will swear by it, but sometimes it is really unneeded.
With an iSCSI SAN you can definitely go higher than 20, 40 or even 60 VMs, it's all tried and tested! It just depends what sort of bandwidth you need, I guess.

VMs = Hell yes (for some things)

:)
 
Soldato
Joined
11 Mar 2004
Posts
4,999
Had 200 VMs on iSCSI at one place I was at, on a NetApp 3040. Moved them to NFS while I was there. No loss of performance, and using the A-SIS de-dupe we went from using 6TB to less than 3TB.

Pretty awesome really.
 
Associate
OP
Joined
18 Oct 2002
Posts
1,044
Hi,

Sorry, forgot to check on this thread :o

I was hoping for FC or SAS for the shared storage rather than iSCSI.

One of the applications in the development environment is MS Dynamics GP, which is a HUGE I/O read hog (a constant 60-70MB/s). I understand iSCSI can only do about 70-100MB/s over 1Gbit, which, combined with iSCSI adapter teaming not working well in ESXi, doesn't leave any room for the other virtual servers.
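The 70-100MB/s figure is basically just the 1GbE line rate minus protocol overhead. A quick sketch of the arithmetic (the ~10% overhead factor is an assumed rule of thumb for TCP/IP + iSCSI framing, not a measured value):

```python
# 1 Gbit/s link: decimal units, so 1000 Mbit/s = 125 MB/s raw.
LINK_GBPS = 1.0
raw_mb_s = LINK_GBPS * 1000 / 8           # 125.0 MB/s before overhead

# Assume roughly 10% lost to TCP/IP + iSCSI framing (rule of thumb).
usable_mb_s = raw_mb_s * 0.9              # ~112 MB/s usable

dynamics_read_mb_s = 70                   # steady read load quoted above
headroom = usable_mb_s - dynamics_read_mb_s

print(round(usable_mb_s, 1), round(headroom, 1))  # 112.5 42.5
```

So a single 1GbE path really does leave only ~40MB/s for everything else once Dynamics GP is reading flat out, which is why a second path (multipathing) or a faster fabric helps here.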

As we already run our production website/systems fully virtualised (at Rackspace), I'm already sold on the idea of using VMs (and VMWare). I just have to find a solution that works for us, and doesn't cost ££££s :)

Thanks for the help!

akakjs
 

RSR
Soldato
Joined
17 Aug 2006
Posts
9,145
Had 200 VMs on iSCSI at one place I was at, on a NetApp 3040. Moved them to NFS while I was there. No loss of performance, and using the A-SIS de-dupe we went from using 6TB to less than 3TB.

Pretty awesome really.

Ahhh, speaking of de-dupe, I need to do an ONTAP upgrade on our filers for this. :eek:

We have 20 VMs on a NetApp 3020c.
 

RSR
Soldato
Joined
17 Aug 2006
Posts
9,145
One of the applications in the development environment is MS Dynamics GP, which is a HUGE I/O read hog (a constant 60-70MB/s). I understand iSCSI can only do about 70-100MB/s over 1Gbit, which, combined with iSCSI adapter teaming not working well in ESXi, doesn't leave any room for the other virtual servers.

IIRC you can now get 10Gbit HBAs for iSCSI, which will overcome those limitations. However, I'd rather go with an FC solution if you have a high I/O requirement.

A
 
Associate
Joined
8 Dec 2008
Posts
1,391
Location
Basingstoke, Hants
In our testing at work, FC and iSCSI are pretty similar, obviously depending on the setup. In the end it's all connected to the same HDDs. FC is just more awkward with the cabling and so on.
 