New Office - Server advice - Especially Storage

Virtualisation done right is a fantastic solution and has been a boon for us, so much so that we're now not doing it right (too many eggs in too few baskets :)). Unfortunately, IMO, to do virtualisation right for crazyswede I'd suggest almost doubling the budget.

Still doesn't make it the solution to every problem. It's only the best solution for resiliency in a few very niche situations, but that doesn't stop people thinking that just copying the VM to a remote location and starting it up there is the best DR solution in the world. It's just not.

I'm seriously amazed at the ignorance of other potential solutions and the blind belief in virtualisation that people have. Left alone for a few days, our developers have started snapshotting database VMs and copying them to another site for DR purposes; they then fail completely to produce any good reason (any reason at all, actually) for not choosing the simpler and better options of mirroring or even log shipping. I'm seriously curious how they managed to identify that as the best solution available...
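For anyone wondering why I call log shipping "simpler": at its core it's just copying transaction log backups offsite and restoring them at the other end. A toy sketch of the copying half (Python, made-up paths; in real life you'd use SQL Server's built-in log shipping jobs rather than anything like this):

```python
import shutil
import time
from pathlib import Path

# Hypothetical paths, for illustration only.
LOG_BACKUPS = Path(r"\\primary\sqlbackups\logs")   # primary writes .trn log backups here
DR_SITE = Path(r"\\drsite\sqlbackups\logs")        # share at the DR site

def ship_new_logs(already_shipped):
    """Copy across any transaction log backups we haven't shipped yet."""
    for trn in sorted(LOG_BACKUPS.glob("*.trn")):
        if trn.name not in already_shipped:
            shutil.copy2(trn, DR_SITE / trn.name)
            already_shipped.add(trn.name)

shipped = set()
while True:
    ship_new_logs(shipped)
    time.sleep(60)   # your data loss (RPO) is roughly the backup interval plus this poll
```

No snapshotting, no shuffling whole VM images around, and the DR copy stays minutes behind the primary instead of hours.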

How available are you expected to make the new solution?

What's the business's acceptable data loss, measured in time, on hardware failure (RPO)?
What's the business's acceptable outage, in and out of business hours, on hardware failure (RTO)? Does the hardware support contract, plus the time required to rebuild everything, fit within that?

I very much doubt a business of that size is defining those values; it's more along the lines of 'it should break as little as possible'.
 
I never said virtualisation is the panacea for all your ills, especially on a tight budget. Doing it right costs money; that is what I was getting at.

A decent ESX setup can be incredibly resilient; abstracting the OS and application from the hardware makes managing resiliency and DR easier. Applications tied directly to bare-metal servers are so much more difficult to manage.

thinking that just copying the VM to a remote location and starting it up there is the best DR solution in the world. It's just not.

Eh, if it works, how can that not be the best DR solution in the world? Sounds f'ing simple to me!! :)

FYI our main production ERP system is not virtualised due to its resource requirements, but our DR process is a lot more convoluted and difficult than for any of our VMs.

I'm seriously amazed at the ignorance of other potential solutions and the blind belief in virtualisation that people have. Left alone for a few days, our developers have started snapshotting database VMs and copying them to another site for DR purposes; they then fail completely to produce any good reason (any reason at all, actually) for not choosing the simpler and better options of mirroring or even log shipping. I'm seriously curious how they managed to identify that as the best solution available...

So if I mirror my application-consistent VM volumes/datastores, is that any better? BTW, that would be handled by our storage system and not by our hypervisor, so no, ESX hasn't made our DR scenario any easier, but an IT ecosystem that works all together has.
So in that sense I am in complete agreement with you that virtualisation wasn't the solution (I've not looked into VMware's Site Recovery Manager product).

I'm quite interested to know about your experience of virtualisation, and which flavour, to understand your attitude, as I seem to have had a much more positive experience.
 
A decent ESX setup can be incredibly resilient; abstracting the OS and application from the hardware makes managing resiliency and DR easier. Applications tied directly to bare-metal servers are so much more difficult to manage.

Applications are only tied to bare-metal servers if your architect is no good at his job though; using virtualisation for HA/DR always sounds a lot like an excuse for not doing it the proper way. We had perfectly workable HA and DR before virtualisation became the latest buzzword, and we didn't have to shell out for vCenter licenses for it either...

There are times when virtualisation is good for HA and DR: that's when you're dealing with applications which are so badly written that it's not possible to do proper HA.

I'm quite interested to know about your experience of virtualisation, and which flavour, to understand your attitude, as I seem to have had a much more positive experience.

I've seen extensive use of ESX, and we were also among the first in the country to deploy Hyper-V in a serious way (complete with Microsoft engineers camped out in our office for a month or two). It works, it's great for consolidation, and if you really want to shell out to license vCenter and everything, the HA features are nice. Personally I don't see the point myself though; we don't gain much consolidation benefit, as the majority of our servers are either heavily loaded and not suitable, or we want the headroom for spikes in demand. We have HA that works just fine: if we somehow lose all five UK datacenters, everything will keep working, served out of European sites, and it'll happen automatically, in many cases within a sub-5-second window. You don't need virtualisation to achieve that.

I have our idiot developers coming round every other week suggesting we virtualise something or other, usually, it seems, just because they can. That's what drives my distaste for the constant "it's magic, it'll fix everything" message. It isn't and it won't, not without substantial extra cost anyway.

They say 'Let's put web services on an ESX HA platform at each site so they're always online; with a SAN and licensing it'll cost £20k per site'. I say 'Or let's use IIS, cluster services and DFS-R to do exactly the same thing and save the extra £15k for something worthwhile?'.

I'm still unclear whether their hard-on for virtualisation is driven more by ignorance or laziness.
 
There isn't anything wrong with virtualization, especially when looking at it from a TCO perspective. Most (not all) of our infrastructure is virtualized using ESX, which offers us unparalleled HA for 95% of our services, and as a relatively new and growing company it made far more economic sense to consolidate from the off. If I had used bare metal instead of virtual environments, my budget would only have stretched to about a third of the platforms I have in production.

Sure, in a perfect world, I'd have more services and applications running on clustered bare metal, but this isn't a perfect world, and virtualization tech goes a long way toward making an imperfect world more palatable.

And for the record, we were one of the first non-partners Microsoft had spoken to about putting a full Server 2008 R2/Exchange 2010 environment into production, and they even wanted me to write them a report and article on the deployment.
 
Well, my little comment "virtualise everything" caused this thread to derail significantly lol :D I think you have taken what I said entirely out of context (of the OP's position) and applied it to an entirely different environment. This guy has 1-2 servers! And it's entirely, absolutely appropriate for him to virtualise them to aid in DR. End.

If he didn't virtualise, is there a simpler, faster and more cost-effective way to have DR than using ESXi (free) and a spare box to move the image to? I'm not talking about a SAN or vCenter licenses. I'm talking about the hardware dying and you taking your latest image backup and restoring it to another box.
 
Applications are only tied to bare-metal servers if your architect is no good at his job though; using virtualisation for HA/DR always sounds a lot like an excuse for not doing it the proper way. We had perfectly workable HA and DR before virtualisation became the latest buzzword, and we didn't have to shell out for vCenter licenses for it either...

There are times when virtualisation is good for HA and DR: that's when you're dealing with applications which are so badly written that it's not possible to do proper HA.

Exactly. So how many people have applications that natively support clustering etc., and how many companies have the in-house expertise to set up and support clustering? If you can get most of the way there without most of the knowledge or cost, that sounds good to me!

We have an Oracle RAC ERP system; is it virtualised? No, it needs bare metal for performance, and the application has been designed to be highly available via its own clustering etc. Mirroring is done via our storage system.

We've virtualised over 100 servers; most applications cannot be clustered, and for others it's easier and cheaper not to cluster, most notably Exchange and MS SQL. Now that they're virtualised, we know they're a lot more available and manageable than when they were tied to a single server. Spending 6k on a highly redundant server for an app that would just sit there idly is a waste of money, and installing multiple services on one physical server with one OS, you'd agree, isn't great either.

we were also among the first in the country to deploy Hyper-V in a serious way (complete with Microsoft engineers camped out in our office for a month or two).

thankfully I've not ;)

Personally I don't see the point myself though; we don't gain much consolidation benefit, as the majority of our servers are either heavily loaded and not suitable, or we want the headroom for spikes in demand. We have HA that works just fine: if we somehow lose all five UK datacenters, everything will keep working, served out of European sites, and it'll happen automatically, in many cases within a sub-5-second window. You don't need virtualisation to achieve that.

Fine, but just because it doesn't suit your requirements doesn't mean it won't fit many others. Your requirements seem out of the ordinary; to do what you say sounds like you've got oodles of WAN bandwidth (100+ Mb/s?), whereas we've been talking about a solution for an office of fewer than 100 people. We both should get some perspective here.

Well, my little comment "virtualise everything" caused this thread to derail significantly lol :D I think you have taken what I said entirely out of context (of the OP's position) and applied it to an entirely different environment. This guy has 1-2 servers! And it's entirely, absolutely appropriate for him to virtualise them to aid in DR. End.

amen :D

I have our idiot developers coming round every other week suggesting we virtualise something or other, usually, it seems, just because they can. That's what drives my distaste for the constant "it's magic, it'll fix everything" message. It isn't and it won't, not without substantial extra cost anyway.

They say 'Let's put web services on an ESX HA platform at each site so they're always online; with a SAN and licensing it'll cost £20k per site'. I say 'Or let's use IIS, cluster services and DFS-R to do exactly the same thing and save the extra £15k for something worthwhile?'.

I'm still unclear whether their hard-on for virtualisation is driven more by ignorance or laziness.

Both, but then developers don't understand the nuances of IT infrastructure and the options that are available (at least the ones in our company don't); from your posts it would seem like that's your job, in which case you should be educating them rather than/as well as berating them :)

Zz
 
thankfully I've not ;)

Shame, it's very capable given its relative infancy as a product, better than ESX was at that level of development, and given the upcoming integration of the hooks into the Linux kernel it'll be even better. It integrates well, and the developers writing the control panel for it seem to like it better than VMware for ease of scripting actions. I didn't expect it from Microsoft at all, but it's a seriously good product and VMware are surely on the back foot now.

Your requirements seem out of the ordinary; to do what you say sounds like you've got oodles of WAN bandwidth (100+ Mb/s?), whereas we've been talking about a solution for an office of fewer than 100 people. We both should get some perspective here.

Does 4x10Gbit for the primary core interconnects count? :p There's no good excuse for solutions which don't scale though: if you have 25 people and chuck together a setup quickly, it'll be a headache later. Design it well and it'll scale to a thousand users without time-consuming and disruptive rebuilds.

I never said 'don't use virtualisation' though; I said 'ignore the crowd who advise virtualisation no matter what the question, and use it intelligently'. That was my point, in reply to the at-the-time unexpanded advice to virtualise everything.

Anyway, this is dragging the thread off course in a big way now.
 
What do you guys think of the HP X1600 NAS? It seems to offer good performance and good value.

http://h10010.www1.hp.com/wwpc/uk/e...-3954626-3954626-3954626-3954714-4059229.html

Processor (1) Intel® Xeon® E5520 (2.26GHz); Standard
Memory 6 GB DDR3 Registered (RDIMM) with ECC Capabilities
Storage capacity 12 TB Raw Internal SATA; 96 TB Raw External SATA; 5.4 TB Raw Internal SAS; Maximum configuration
Drive count 2; Included
Storage drive (2) 146 GB 6G 10K SFF Dual-port SAS (internal for O/S mirror); Included
Storage expansion MSA60, MSA70
Network controller HP NC362i Integrated Dual Port Gigabit Server Adapter
Storage controller (1) Smart Array P212/256MB BBWC; Included
Expansion slots At least one open PCIe expansion slot
Compatible operating systems Microsoft® Windows Storage Server 2008 Standard x64 Edition; Pre-installed
Number of simultaneous users (local + network) 25+; Maximum depends on number of drives and RAID configuration
Form factor 2U
Product dimensions (W x D x H) 58.42 x 24.13 x 90.17 cm
What's in the box

X1600 Network Storage System with HP X1000 Automated Storage Manager, (2) x 146 GB SFF SAS drives in rear slots, up to (12) user-selected hard drives in front slots, (2) rack-compatible power cords, documentation kit, Windows Storage Server 2008 EULA, System Restore kit, iLO2 Advanced License, Microsoft Certificate of Authenticity.

So instead of getting a Dell with Exchange and having all the storage connected to the server with DAS boxes, I can get a separate SBS 2008 server and NAS boxes. That should split the load, and everything would not go down if the server goes down.

What do you guys think of the Windows Storage Server 2008 running on these HP boxes?

Was thinking something like this:

2 x HP AW528A X1600 NAS
24 x Seagate Cheetah 600GB 15000rpm SAS
2 x MSA60 (12-drive SAS expansion for the X1600)
24 x Seagate ES 1TB 7200rpm SAS

or

24 x Samsung F3 1TB SATA 7200rpm

That's about £21,000 on storage.
 
Was thinking something like this:

2 x HP AW528A X1600 NAS
24 x Seagate Cheetah 600GB 15000rpm SAS
2 x MSA60 (12-drive SAS expansion for the X1600)
24 x Seagate ES 1TB 7200rpm SAS

or

24 x Samsung F3 1TB SATA 7200rpm

That's about £21,000 on storage.

You need to buy HP's drives really: you'll need to get carriers for bare drives, which will add up, and HP may just laugh at you if you want support and aren't using their drives. Though to my knowledge they aren't actually stopping you using third-party drives yet (Dell, I'm looking at you...).
 
I think spending that kind of money on a NAS isn't something I'd want to do... but I guess 20TB isn't ever going to be cheap...
 
I have not properly checked what you get with the X1600. It would be annoying if the carriers are not included. I know Dell does not include them, but I can source them from somewhere else (£29 each). I guess HP will support everything, including the OS drives, except for the drives I put in. Would it be better using their drives? In theory, self-supporting the drives is the easy part (in a way).
 
I think spending that kind of money on a NAS isn't something I'd want to do... but I guess 20TB isn't ever going to be cheap...

In what way? I guess there is no other way. I am trying to split it into fast-access storage and slower-but-larger storage, so I sort of get the best of both worlds without spending too much money.
 
You need to buy HP's drives really: you'll need to get carriers for bare drives, which will add up, and HP may just laugh at you if you want support and aren't using their drives. Though to my knowledge they aren't actually stopping you using third-party drives yet (Dell, I'm looking at you...).

Someone recommended the H-series Dell RAID cards above, and those are the series that aren't compatible with drives Dell don't supply. Best off with the good old PERC 6/i IMO.

SATA drives in big arrays aren't the best performers, especially in RAID5 or 6, so if you're going to go for SATA I'd suggest RAID10. The rebuild time on a 1TB drive in a RAID5 array is too long these days for an enterprise environment; you're putting all your data at risk every time a drive rebuilds (which in a big, busy array could be once every 12 months).
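To put a rough number on that rebuild risk, here's a back-of-envelope sketch. The URE rate is an assumption (the commonly quoted nearline SATA spec), so check your drive's datasheet:

```python
# Rough odds of a clean RAID5 rebuild: every surviving drive must be
# read end to end, and a single unrecoverable read error (URE) kills it.

URE_PER_BIT = 1e-14      # assumed nearline SATA spec: 1 error per 10^14 bits read
DRIVE_TB = 1.0           # size of each drive
SURVIVORS = 11           # drives read during a 12-drive RAID5 rebuild

bits_read = SURVIVORS * DRIVE_TB * 1e12 * 8
p_clean = (1 - URE_PER_BIT) ** bits_read
print(f"Chance of rebuilding without hitting a URE: {p_clean:.0%}")  # ~41%
```

On those (admittedly pessimistic) numbers the rebuild is more likely to hit an error than not, which is exactly why RAID10 or RAID6 gets recommended for big SATA arrays.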
 
In what way? I guess there is no other way. I am trying to split it into fast-access storage and slower-but-larger storage, so I sort of get the best of both worlds without spending too much money.

I think I posted my recommendation way back in this thread, but in a nutshell I'd want standard Windows Server on there, not the Storage Server variant. It's a given that Storage Server will be better than the standard Linux-type OS most NASes run, but I'd still want to configure it to my exact needs and, if there's a problem, open up an RDP session to diagnose the issue and, if need be, get my data off! Also, you don't know what type of RAID config is being used: is this software RAID in Windows, or hardware RAID? (A bit rhetorical, as I don't know, but if the RAID is configured through Storage Server, that would imply it's software RAID, which is pants.) If you're not using iSCSI, what else does Storage Server do for you? What are your options when you fill this NAS appliance? How long will that take? Is it all proprietary?
 
I've just had a look on the Dell website and you can beat those NAS prices at Dell *retail* prices, which you should be able to improve on by about 30% at least. Surely that's a better option than going NAS?

Dell MD1000 chassis w/ 14 x SATA 750gb 7.2k drives = £5,000
Dell MD1000 chassis w/ 14 x SAS 600gb 15k drives = £10,000

Throw in a couple of grand for an R610 1U chassis and you're good. You can stack three of those MD1000 chassis onto each R610 if you must.
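A quick cost-per-TB sketch of those figures (raw capacity, the retail prices quoted above, before any discount):

```python
# Cost per raw TB for the two MD1000 configs quoted above.
sata_cost_per_tb = 5000 / (14 * 0.75)   # 14 x 750GB SATA    -> ~£476/TB
sas_cost_per_tb = 10000 / (14 * 0.60)   # 14 x 600GB 15k SAS -> ~£1190/TB
print(f"SATA: £{sata_cost_per_tb:.0f}/TB, SAS: £{sas_cost_per_tb:.0f}/TB")
```

So you're paying roughly 2.5x per TB for the 15k spindles; worth it for the hot data, not for bulk storage.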
 
In what way? I guess there is no other way. I am trying to split it into fast-access storage and slower-but-larger storage, so I sort of get the best of both worlds without spending too much money.

Solaris + ZFS does this out of the box.
 
Solaris + ZFS does this out of the box.

But it has an uncertain future these days; I wouldn't be investing in it right now, for all its benefits. I'm a big Solaris fan and I've always been impressed by ZFS, but right now I couldn't recommend it for a new install.

It's also less suitable for SME installs: the command syntax is different enough to cause problems, and you won't find as much help online as you will for the popular Linux flavours.
 
In what way? I guess there is no other way. I am trying to split it into fast-access storage and slower-but-larger storage, so I sort of get the best of both worlds without spending too much money.

Depending on how your Unix skills are, you could, out of left field, consider GlusterFS. It's an open-source brick-storage option which delivers fairly impressive IO at reasonable cost. I've seen one built recently with 10 DL380s which was a nice solution and offered better IO, more cheaply, than any off-the-shelf NAS/SAN solution going.
 
Me again :)

Which is better (performance and data safety)?

RAID5 with 12 x 450GB 15000rpm SAS = 4.95TB

or

RAID10 with 12 x 1TB 7200rpm SAS = 6TB
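For what it's worth, here's the arithmetic behind those two options, with rule-of-thumb per-drive IOPS figures that are assumptions rather than measurements:

```python
# Usable capacity and rough random-IO throughput for the two options above.

def raid5_usable(n_drives, drive_tb):
    return (n_drives - 1) * drive_tb       # one drive's worth of parity

def raid10_usable(n_drives, drive_tb):
    return (n_drives // 2) * drive_tb      # everything mirrored

print(raid5_usable(12, 0.45))              # 4.95 TB of 15k SAS
print(raid10_usable(12, 1.0))              # 6.0 TB of 7.2k SAS

# Assumed rule of thumb: ~175 random IOPS per 15k spindle, ~75 per 7.2k spindle.
# Reads can hit every spindle; RAID5 writes cost ~4 back-end IOs, RAID10 ~2.
print(12 * 175)                            # ~2100 read IOPS for the 15k RAID5 set
print(12 * 75)                             # ~900 read IOPS for the 7.2k RAID10 set
```

So the 15k RAID5 set wins comfortably on raw speed, while the RAID10 set gives more space and tolerates rebuilds far more gracefully.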
 