SANs - Where to start?

I have another question...

Do SANs need dedicated switches or would running them on their own VLAN be feasible? Seems HP are pushing the dedicated network hardware, but is this actually necessary considering you would be looking at the best part of £8k for 4 switches?
 
I have another question...

Do SANs need dedicated switches or would running them on their own VLAN be feasible? Seems HP are pushing the dedicated network hardware, but is this actually necessary considering you would be looking at the best part of £8k for 4 switches?

Ideally you need a separate storage Ethernet subsystem; iSCSI can flatten a switch, and that will slow things down a tad.

You could buy HP 6XXX series switches which take 4 or 6 mezzanine cards; use 1 card for normal traffic and 1 for iSCSI...
I'm doing that for our regional offices.
 
Hmm, these SANs are proving more and more expensive the further I look. I have been given a quote of £24,000 for 1 Dell Equallogic PS6000XV with 16 x 450GB 15k SAS drives and dual controllers - and this is for just 1 site! Do I need dual controllers? Assuming I am going to settle for ~3TB of storage for the time being using RAID10 and replicating to another site, this would work out at over £50k with additional network hardware! I can/will be getting the price down by getting a quote for SATA, but it still seems expensive for what you get.
 
That sounds about right in terms of price. You could get one controller - but you could also get one PSU and put the disks in a JBOD array to make it cheaper!
 
Hmmm, had a quote in for the HP P4300 in a similar configuration to the Dell kit with 16 x 450GB drives @ £15,000. Seems a contrast to earlier experiences, as it's £9,000 cheaper per unit than the Dell equivalent.

Just working out capacities & drive redundancy:

If I configure the Dell or HP to use RAID50, I effectively lose 2 disks and can handle 1 disk failure in each RAID5 set of the stripe?

Total available storage space is 6.3TB, or are there any overheads that I have to take off this figure?
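
A back-of-envelope check on those numbers (a sketch only; it assumes the shelf is split into two RAID5 sets, each losing one drive to parity, and ignores array/filesystem overhead):

```python
# Rough RAID50 usable-capacity check (sketch; ignores array/filesystem overhead).
def raid50_usable_gb(drives, drive_gb, raid5_sets=2):
    """Each underlying RAID5 set gives up one drive to parity;
    RAID50 simply stripes across the sets."""
    return (drives - raid5_sets) * drive_gb

print(raid50_usable_gb(16, 450))  # 6300 GB, i.e. ~6.3TB
```

Each RAID5 set can also survive one drive failure, which is where the "one failure per set" figure comes from.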
 
Before you worry about losing disks - consider whether you really need 15k drives. Remember you have 16 spindles in your Dell kit - you can bring the cost down using slower (SATA) disks. We have one with 16 x 1TB SATA drives. We run Exchange (5500 users), a file server, and multiple VMs. We don't run any database servers off it.

Plus, using 1TB drives you are going to get A LOT of usable space, whatever RAID you use.
 
brainchylde - What do you find the SATA disks like speed-wise, and how reliable have the drives been? HP's enterprise SATA drives only seem to have a 1 year warranty; not sure on Dell.

I have already asked for quotes with cheaper disks as well. Still unsure of the RAID setup; it would be interesting to see the real-world difference in speeds between SATA and 10k SAS in various flavours of RAID. Part of me thinks that 16 x 1TB SATA in RAID10 would be good enough and still have plenty of storage available, whereas if we go for the SAS drives they would probably be used in RAID50 to maximise storage available.
 
brainchylde - What do you find the SATA disks like speed-wise, and how reliable have the drives been? HP's enterprise SATA drives only seem to have a 1 year warranty; not sure on Dell.

They have been in place for 10 months - no problems at all so far. Our Equallogic unit hasn't even been rebooted.

We have a virtual file server which has raw disk mapping straight to the SAN. Our Exchange uses VMFS.

5500 Users, probably 200 heavy users, 400 medium and the rest light usage.

We also have ~28 other VMs running. The speed has been fine.

We went through an exercise with Dell prior to purchase to ascertain our required IOPS. I did the performance monitoring on physical hardware and sent the requested information to Dell - they came back to us with a number of options then.

What we don't do is run any of our DB servers on the SAN. They tend to get high IOPS. We may purchase an additional unit with 15k drives in the future and use this specifically for Database storage.

In relation to warranty, I believe ours has 3 years, including drives - POSSIBLY 5 Years, but I'd have to check.


I have already asked for quotes with cheaper disks as well. Still unsure of the RAID setup; it would be interesting to see the real-world difference in speeds between SATA and 10k SAS in various flavours of RAID. Part of me thinks that 16 x 1TB SATA in RAID10 would be good enough and still have plenty of storage available, whereas if we go for the SAS drives they would probably be used in RAID50 to maximise storage available.

We are running 16TB SATA in RAID50 - 10.47TB usable.

Give me a shout if you need to know anything else.
 
Thanks brainchylde - how did you go about contacting Dell to find out about your IOPS etc? I have filled in a form on the Dell site but heard nothing back so far. Did you demo any of the kit, and what sort of price did you pay per PS6000E box full of drives, if you don't mind me asking?
 
Thanks brainchylde - how did you go about contacting Dell to find out about your IOPS etc? I have filled in a form on the Dell site but heard nothing back so far. Did you demo any of the kit, and what sort of price did you pay per PS6000E box full of drives, if you don't mind me asking?

Ours was a PS5000E box -

Components:
1 x 16TB capacity, 16 x 1TB, 7.2K SATA, Dual Controllers
1 x Dell-EQL PS5x00 on-site iSCSI install with 3-4 hosts (1x PSxxxx w/3-4 hosts)
1 x Equallogic Order - United Kingdom
1 x Free Road Freight

Services:
1 x Dell EqualLogic 2Y Return to Factory Hardware Limited Warranty
1 x EqualLogic 1Y Base Software Warranty & Service 5x9 Access
1 x EqualLogic 5Y Complete Care Plus 4 Hr Full Array

Cost - £29k.

We had an account manager with Dell - someone came to see us originally and all the contact was through them - they put us in touch with one of their Enterprise Solution Consultants to do the IOPS stuff. Then once you're set on buying it, they will assign you an Enterprise Deployment Manager who oversees everything. We also took our VMware cluster from them - they outsourced the VMware installation to another firm in London (Systems Group i) - also fantastic. This isn't to say you will have no work to do, but they certainly hold your hand through everything.

We probably spent in the region of 85-90k in the end, including licenses etc.
 
Right...progress...

So I have looked at various incarnations of Dell/HP SANs and am leaning heavily towards the Dell PS4000E with 16 x 1TB SATA.

I recently ran some performance counters on our Exchange server and found that they spiked to 600 IOPS in places but for the most part were below this, the only exception being our B2D, which sustained just over 1000 IOPS for the duration. Our Dell man has stated that the PS4000E can handle up to 1000 IOPS and said it should cope fine, but what is the general opinion on this? If we add any more load during the B2D of Exchange, surely we will hit problems here!

Is it generally a bad idea to run B2D on the same SAN which is used for production data?

I was also looking at HP's P2000 G3 SAN as they have a dedicated iSCSI version coming soon. Any opinions on these? I have only had a demo of a much older P2000 unit which didn't even have the latest user interface loaded. Been put off the HP/LH SANs due to the requirement for a Quorum machine and also needing a dedicated machine for the management interface.
 
Right...progress...

So I have looked at various incarnations of Dell/HP SANs and am leaning heavily towards the Dell PS4000E with 16 x 1TB SATA.

I recently ran some performance counters on our Exchange server and found that they spiked to 600 IOPS in places but for the most part were below this, the only exception being our B2D, which sustained just over 1000 IOPS for the duration. Our Dell man has stated that the PS4000E can handle up to 1000 IOPS and said it should cope fine, but what is the general opinion on this? If we add any more load during the B2D of Exchange, surely we will hit problems here!

I wouldn't virtualise your backup disks. We didn't, but we also use DPM, syncing regularly throughout the day - so not one big hit for backup. But roughly speaking you should get 80-90 IOPS out of a 7.2k SATA drive, so on a 16-spindle unit you should have plenty of IOPS spare.
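
The spindle maths works out roughly like this (a sketch; 85 IOPS per 7.2k SATA drive is an assumed rule of thumb, not a vendor figure):

```python
# Rough aggregate random-read IOPS for a shelf of identical spindles
# (sketch; 85 IOPS per 7.2k SATA drive is an assumed rule of thumb,
# and controller cache will usually add headroom on top).
def aggregate_read_iops(spindles, iops_per_spindle=85):
    return spindles * iops_per_spindle

print(aggregate_read_iops(16))  # 1360 IOPS before cache
```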

Also - not sure about the PS4000 unit, but our PS5000 unit can create volume snapshots, which is a quick way of achieving a backup.

I was also looking at HP's P2000 G3 SAN as they have a dedicated iSCSI version coming soon. Any opinions on these? I have only had a demo of a much older P2000 unit which didn't even have the latest user interface loaded. Been put off the HP/LH SANs due to the requirement for a Quorum machine and also needing a dedicated machine for the management interface.

The only thing I would say with the HP units is that, from my (limited) experience you need to be careful with the licensing. The Dell unit bundles everything in, for example replication etc.. as far as I know you need to pay a license for these with HP.
 
Right...progress...

So I have looked at various incarnations of Dell/HP SANs and am leaning heavily towards the Dell PS4000E with 16 x 1TB SATA.

I recently ran some performance counters on our Exchange server and found that they spiked to 600 IOPS in places but for the most part were below this, the only exception being our B2D, which sustained just over 1000 IOPS for the duration. Our Dell man has stated that the PS4000E can handle up to 1000 IOPS and said it should cope fine, but what is the general opinion on this? If we add any more load during the B2D of Exchange, surely we will hit problems here!

I hate SATA drives for anything random, and depending on your version of Exchange it can be very random. Exchange 2003 is a pig. 2007 is better; 2010 looks OK, but I've yet to see enough of them that are not in beta.

Worst case, each SATA drive can do 75 read IOPS, so yeah, SATA can handle the IOPS spike with a little room for growth - and this is excluding the cache IOPS.

What RAID type will you use? This can heavily affect the backend IOPS.

Don't really care about the B2D IOPS; this is usually out-of-hours work. Again, RAID type can bring the array to its knees during B2D activity due to writes.
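
To illustrate why RAID type matters so much, here is a rough backend-IOPS sketch using the usual write-penalty rules of thumb (2 backend IOs per host write for RAID10, 4 for RAID5/50 - assumed figures, not vendor numbers):

```python
# Backend IOPS estimate per RAID type (sketch; standard write-penalty
# rules of thumb: RAID10 = 2 backend IOs per host write, RAID5/50 = 4).
WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID50": 4}

def backend_iops(read_iops, write_iops, raid_type):
    return read_iops + write_iops * WRITE_PENALTY[raid_type]

# A hypothetical 600 IOPS host load at 70% read:
print(backend_iops(420, 180, "RAID10"))  # 780
print(backend_iops(420, 180, "RAID50"))  # 1140
```

B2D is write-heavy, which is why a parity RAID level feels it so much harder than RAID10.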

Is it generally a bad idea to run B2D on the same SAN which is used for production data?

No, it's a performance hit but usually fine. Running the two at the same time is a bad idea. B2D usually kicks off at 7pm onwards, by which time users should generally be offline.

If you have users working late or Blackberrys, then expect people to notice when the B2D is going on. B2D will suck SAN resources and hit you double hard if you use the same spindles.

I was also looking at HP's P2000 G3 SAN as they have a dedicated iSCSI version coming soon. Any opinions on these? I have only had a demo of a much older P2000 unit which didn't even have the latest user interface loaded. Been put off the HP/LH SANs due to the requirement for a Quorum machine and also needing a dedicated machine for the management interface.

I'd stick with Dell / EMC / Netapp personally.
 
Thanks for the responses guys. Will be using the SAN in a RAID10 configuration for maximum performance from the SATA drives; it means using 2 drives as online spares, but I don't mind that. Supposedly it would give us 6.1TB of raw capacity according to Mr Dell. Still not 100% on putting the B2D on the SAN; might try it out and see how it goes though. What happens if it exceeds the maximum IOPS on the SAN - do I end up with corrupted/failed backups?

I have an MSA50 here as well, so might just get some bigger disks for it and use that for B2D to be on the safe side. Looking at snapshots, I believe they are not entirely useful for Exchange/DBs unless they can quiesce the DB and pause writes to it whilst the snapshot is taken, and I don't think the Equallogic supports this.
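
For reference, the RAID10 arithmetic on that box works out like this (a sketch; the vendor-quoted 6.1TB comes in below the raw mirrored figure because the array reserves space for its own metadata/formatting):

```python
# RAID10 capacity sketch: hot spares come out first, the remainder is mirrored.
def raid10_usable_tb(drives, drive_tb, hot_spares=0):
    return (drives - hot_spares) // 2 * drive_tb

print(raid10_usable_tb(16, 1, hot_spares=2))  # 7 TB before array overhead
```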
 
Thanks for the responses guys. Will be using the SAN in a RAID10 configuration for maximum performance from the SATA drives; it means using 2 drives as online spares, but I don't mind that. Supposedly it would give us 6.1TB of raw capacity according to Mr Dell. Still not 100% on putting the B2D on the SAN; might try it out and see how it goes though. What happens if it exceeds the maximum IOPS on the SAN - do I end up with corrupted/failed backups?

No - you'll just end up with a big queue for IO on the disk, which means very slow VMs.

If you are putting B2D on the SAN, how are you going to take it offsite? Are you backing up to a cloud?

I have an MSA50 here as well, so might just get some bigger disks for it and use that for B2D to be on the safe side. Looking at snapshots, I believe they are not entirely useful for Exchange/DBs unless they can quiesce the DB and pause writes to it whilst the snapshot is taken, and I don't think the Equallogic supports this.

No, you are right - Exchange/SQL etc will utilise VSS for backups, and a snapshot is exactly that and nothing more.

However, if you have a file server which will be going on your SAN and your VM uses raw device mapping, a snapshot could be useful as one layer of backup (you will still obviously need offsite etc).
 
We will be hosting our VMs on DAS with mirroring and hot spare drives, so it shouldn't affect them too much. We currently do B2D2T backups every night and was looking at keeping it the same, maybe changing to weekly tape duplicates instead.
 
HP are way behind at the moment. For iSCSI, Dell Equallogic are far, far superior, and for FC boxes EMC are great. HP are really poor these days, and once you look at their road map you will steer well clear.
 
Doh, wish I'd seen this thread earlier!

Currently looking at getting a SAN as we want to use it for storing Data Protection Manager 2010 backups. Probably going to need about 5TB to back up everything; was looking at the Equallogic PS4000 (the basic one) as 90% of the data it's going to receive is coming through 800k/s broadband lines!

Very price conscious, and I'm still waiting for our reseller to give us a price. I'm betting it's going to be too expensive :(
 