New Office - Server advice - Especially Storage

We are moving to a new, larger office and I am taking the opportunity to update our IT systems, so I am posting this to get some input and advice from people on this forum. I would be grateful for any advice that can be offered. I am thinking of a budget of about £20,000 (less will always make the boss happier :) )

At the moment we have several NAS devices which have been built in house (a mixture of Windows XP and SUSE server machines). The storage capacity on these ranges from 1TB to 2TB, which is not nearly enough. These NAS devices don't have dual network ports or dual PSUs, and there is no load balancing. All this worked before because the company was quite small, but the company is now growing quickly and we are starting to struggle. We create maps and 3D maps, so a lot of data comes in and a lot of number crunching is done by our processors (users). There are network switches spread about everywhere at the moment, but in the new office there will be only 2 x 48 port switches. I am also gaining a dedicated server room. All CAT6 cables are in, but not terminated (this includes the CAT6 cable for all the IP phones), so we are going to have fun :o .

We can split our users into 3 groups (Simplified):

Group A – These people process 200MB to 1GB files, but the data behaves more like video files, so the workload is mostly reading. At the end they do of course stitch it all together to create a final image, which before it is stitched together involves hundreds of thousands of really small files.

Group B – These people create 2D and 3D images, which involve a lot of number crunching and filtering. These files range from 100MB to 10GB.

Group C – CAD department. These people pull all the data together to create the finished charts.

There are also a lot of other types of data processing and of course report writing. I want to give Group A and Group B at least 12TB of storage each.

We are also going to implement an Exchange Server for our Emails. At the moment we have our email system with BT which is horrible and restrictive.

NB...I know what I am going to do with daily and weekly backups (NAS).


OPTION 1

Dell PowerEdge R710 - SBS 2008 Standard (30 users to start with) -> Intel® Xeon® E5520, 2.26GHz, 12GB RAM. Report folders and final images possibly also stored on this server (they work well where they are now).

2 x Netgear ReadyNAS 3200 – 11TB or 22TB (7200rpm SATA HDDs, RAID 0, 1, 5, 6 and X-RAID2)
http://netgear.com/Products/Storage/ReadyNAS3200.aspx?for=Business+Networking

2x GIGABIT, STACKABLE SMART SWITCHES - GS748TS (48 Ports)
http://www.netgear.co.uk/stackable_smart_switch_gs724ts_gs748ts.php


OPTION 2

Dell PowerEdge R710 - SBS 2008 Standard (30 users to start with) -> Intel® Xeon® E5520, 2.26GHz, 12GB RAM

2x Dell PowerVault MD1200, Direct Attached Storage (Connected to the Dell PowerEdge R710)
Each with 11TB (1TB SAS 7200rpm x 12)

2x GIGABIT, STACKABLE SMART SWITCHES - GS748TS (48 Ports)
http://www.netgear.co.uk/stackable_smart_switch_gs724ts_gs748ts.php

Will there be problems having everything connected to one server (the two MD1200s), other than the obvious single point of failure if it dies? The R710 does have 4 Ethernet ports. Should I be thinking of having another 'simple' server (SBS 2008) running the direct attached storage units?

OPTION 3

Dell PowerEdge R710 - SBS 2008 Standard (30 users to start with) -> Intel® Xeon® E5520, 2.26GHz, 12GB RAM

For Group A - 1x Dell PowerVault MD1200, Direct Attached Storage (Connected to the Dell PowerEdge R710)
With 11TB (1TB SAS 7200rpm x 12)

For Group B - 1 x Netgear ReadyNAS 3200 – 11TB or 22TB (7200rpm SATA HDDs, RAID 0, 1, 5, 6 and X-RAID2), and put fast drives into each workstation for processing the data locally.
http://netgear.com/Products/Storage/ReadyNAS3200.aspx?for=Business+Networking

2x GIGABIT, STACKABLE SMART SWITCHES - GS748TS (48 Ports)
http://www.netgear.co.uk/stackable_smart_switch_gs724ts_gs748ts.php


How much do I have to worry about CPU speeds?

I would be grateful for any pointers you clever people can offer.... ;)

Thanks,

Crazyswede
 
Get some decent switches as a starting point; Netgear have no place in business environments. HP are the best value bet there and have a great warranty.

I don't much like either of those storage options if you really need high performance. I might consider a Nexsan SATAbeast (or SASbeast for speed) but it eats almost all your budget. The number of spindles and raw capacity would suit though.

I wouldn't use 12x1TB disks in RAID5 myself; even with a top-notch controller, performance won't be fantastic, it's a setup more suited to archiving. I wouldn't have anything that big running RAID5 though, RAID6 is a no-brainer in my opinion - 1TB disks can take a while to rebuild.

Server wise I'd buy HP if you have free choice but it's difficult to advise as I don't think you've said how many users.

I'd probably go for DL380s and some of the various HP MSA storage units for direct attached storage. The right unit and disks will give you excellent performance, but no matter how you approach it I think it's a struggle to provide even the 24TB of high-ish performance storage for your budget.
 
Thanks a lot for your reply! I will have a look at what you have said. Why do you recommend HP over Dell?

It's going to be about 30 users, up to 15 of whom will be doing the heavy work.

Does RAID 6 perform better than RAID 5? I know RAID 5 uses 1 HDD's worth of capacity for parity and RAID 6 uses 2.


EDIT.. What REALLY irritates me with HP is that you can’t customise their servers in the UK but you can in the USA :mad: They have lost business from me in the past, regarding workstations, because of that.
 
I'd make a few suggestions...

a) Throw out the ReadyNAS from your plan. If you're wanting to up the game of your infrastructure then you will outgrow these just as quickly as your last set of NAS boxes.

b) Throw out SBS from your plan. If you're planning for growth then separate the logical parts of your network. I'd go for perhaps 1 x AD/DNS + 1 x Exchange + 1 x File Server. This will allow much better growth potential as you continue to provide machines which further "specialise" in those logical areas.

c) Virtualise everything.

d) Don't bother with the 2U R710, get the smaller/cheaper 1U R610 which is just as good for what you need it for. Remember you don't need as much (any) internal HDD expansion if you have an MD DAS array connected to it.

e) Go for the Dell box + a single MD DAS array for your *current* storage plus a bit extra. Don't destroy its performance with a bunch of 1TB+ SATA disks. See if you can get a good quote for SAS @ 10k rpm or faster. We've bought many servers + DAS of the spec you're looking at over the years for less than £6-8k. When management want more space, you now have a great baseline infrastructure and are able to easily upgrade capacity.

f) When you do need more space your new infrastructure can either be upgraded by adding a further MD DAS array to your existing R610, or by buying the same equipment again, rinse, repeat as necessary.

g) I would look to plan your capacity needs over the next 3 years if possible and see how fast you will need to add more DAS arrays etc. A quick calculation may prove that moving to SAN infrastructure would be cheaper in the mid term (5 yrs etc) - see the rough sketch after this list.

h) Rather than go for 2 edge switches, get a single larger switch for your server room which will act as the core switch in your new network. You won't regret it once you exceed the interswitch bandwidth across your 2-switch plan, and you will probably not outgrow it for a good while. An HP 5300 or 4200 modular switch would be great, and not especially expensive.
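As a very rough illustration of the point in (g), here's the kind of back-of-envelope projection I mean. Every figure below is an assumption for illustration only (growth rate, shelf cost, SAN pricing), not a quote - plug in your own numbers.

```python
# Rough sketch only: hypothetical 5-year capacity/cost projection comparing
# "keep adding DAS shelves" vs "buy into a SAN". All figures are assumptions.

initial_tb = 24.0        # assumed usable capacity on day one
annual_growth = 0.40     # assumed 40% data growth per year
das_shelf_cost = 7000    # assumed cost of one extra ~10TB usable DAS shelf (GBP)
das_shelf_tb = 10.0
san_upfront = 35000      # assumed SAN entry cost (GBP)
san_cost_per_tb = 400    # assumed incremental cost per extra usable TB on the SAN

need, das_capacity, das_spend = initial_tb, initial_tb, 0
for year in range(1, 6):
    need *= 1 + annual_growth
    while das_capacity < need:          # buy another DAS shelf whenever we run out
        das_capacity += das_shelf_tb
        das_spend += das_shelf_cost
    san_spend = san_upfront + san_cost_per_tb * (need - initial_tb)
    print(f"Year {year}: need {need:5.1f}TB | DAS route £{das_spend:>6} | SAN route £{san_spend:>8.0f}")
```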
 
SBS has its place and in this instance when money is a little tight it makes sense. You can always upgrade a few years down the line.

I'd rather have SBS and buy a decent server and storage platform to start with.
 
SBS has its place and in this instance when money is a little tight it makes sense. You can always upgrade a few years down the line.

I'd rather have SBS and buy a decent server and storage platform to start with.

No, it makes no sense. He is moving to a new building and can create an infrastructure from scratch. If anything, I'd be looking at projected business growth and trying to negotiate a larger budget.
 
Thanks a lot for your reply! I will have a look at what you have said. Why do you recommend HP over Dell?

It's going to be about 30 users, up to 15 of whom will be doing the heavy work.

Does RAID 6 perform better than RAID 5? I know RAID 5 uses 1 HDD's worth of capacity for parity and RAID 6 uses 2.


EDIT.. What REALLY irritates me with HP is that you can’t customise their servers in the UK but you can in the USA :mad: They have lost business from me in the past, regarding workstations, because of that.

I think HP are more reliable and better quality by a thin margin, and better designed by a large one. Try taking the two apart to fit something like a BBWC; HP kit seems so much better designed in comparison.

No, with a decent controller, performance between RAID 5 and 6 should be pretty much identical. The problem with RAID 5 is that if one disk fails, until it rebuilds the entire array is at risk of complete data loss from any further failure. Given that you might not have a spare handy, might not notice immediately, it'll take a little while to rebuild a 1TB disk, and there are a further 11 disks that could go wrong, that's a bit of a risk for me. With RAID 6 you're fine even if a second disk fails before the rebuild is done...
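A minimal sketch of the capacity side of that trade-off (nominal disk sizes, filesystem overhead ignored): on the 12 x 1TB shelf being discussed, RAID 6 only costs you one extra disk's worth of space compared with RAID 5.

```python
# Minimal sketch: usable capacity for RAID 5 vs RAID 6 on a 12 x 1TB shelf
# (nominal sizes, ignores filesystem and hot-spare overhead).

def usable_tb(disks: int, disk_tb: float, parity_disks: int) -> float:
    """Capacity left once the parity disks' worth of space is taken out."""
    return (disks - parity_disks) * disk_tb

disks, disk_tb = 12, 1.0
print(f"RAID 5: {usable_tb(disks, disk_tb, 1):.1f}TB usable, survives one disk failure")
print(f"RAID 6: {usable_tb(disks, disk_tb, 2):.1f}TB usable, survives two disk failures")
# RAID 5: 11.0TB vs RAID 6: 10.0TB - one terabyte buys you cover for the whole rebuild window.
```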

Yes, HP's lack of configuration options is annoying; they're far more channel orientated than Dell. Then again, because we get very high discount levels from Dell we can't use the online config anyway and have to speak to our account manager, who's useless (or not useless actually, just hopelessly slow to respond).

I'd take HP if I had the choice, Dell are still good kit though. I also think HP's storage kit is far superior to anything Dell make.
 
c) Virtualise everything.

Oh please don't. Use virtualisation intelligently where it makes sense - which isn't at each and every opportunity. I get really tired of people claiming virtualisation is the solution to every ill; it's not. It's a nice tool, but use it when it's appropriate, not as the standard solution to every problem.

d) Don't bother with the 2U R710, get the smaller/cheaper 1U R610 which is just as good for what you need it for. Remember you don't need as much (any) internal HDD expansion if you have an MD DAS array connected to it.

You'll get more PCI slots in a 2U box, which could matter if you want to attach multiple external arrays and maybe use a quad-port NIC to build a high-bandwidth team.

h) Rather than go for 2 edge switches, get a single larger switch for your server room which will act as the core switch in your new network. You won't regret it once you exceed the interswitch bandwidth across your 2-switch plan, and you will probably not outgrow it for a good while. An HP 5300 or 4200 modular switch would be great, and not especially expensive.

By far the best solution is stackable switches with a high-bandwidth backplane (which at the SME level still means Cisco 3750s - though Juniper's EX4200 is a seriously good switch, but it comes at a price). If they're too expensive then just use multi-port trunks on cheaper ToR switches (virtually any HP or low-end Cisco). I'd struggle to recommend anything chassis-based to anyone on a budget though; they're always more expensive than a simple ToR option and only start to make sense when you need to worry about management (which I don't think you do here).

I'd also question the 'core switch' terminology: a 'core switch' would do layer 3 routing and wouldn't have any hosts directly connected to it in the true sense of it. One big unit with everything plugged into it isn't a core switch; it is, however, usually the prelude to a masterclass in how not to design a network.
 
I want to thank you all for all the effort you guys have put in so far and I look forward to hearing more. I am going to discuss everything you guys have said with my colleague tomorrow. I am also going to look more into HP and contact them.

Regarding Dell and DAS there seem to be two models to choose from (in my eyes):


MD1200 (New) 12 SAS HDDs (Includes RAID PERC H800A Controller Card)

450GB 15000rpm HDDs - 4.50TB in RAID 6 (RAID5 4.950TB) - £6,709.00
600GB 15000rpm HDDs - 6.00TB in RAID 6 (RAID5 6.600TB) - £8,749.00
1.00TB 7200rpm HDDs - 10.0TB in RAID 6 (RAID5 11.00TB) - £6,109.00

MD1000 15 SAS HDDs (Includes RAID PERC6E 512MB CONTROLLER)

450GB 15000rpm HDDs - 5.85TB in RAID 6 (RAID5 6.300TB) - £7,900.00
600GB 10000rpm HDDs - 7.80TB in RAID 6 (RAID5 8.400TB) - £8,950.00
600GB 15000rpm HDDs - 7.80TB in RAID 6 (RAID5 8.400TB) - £9,840.00
1.00TB 7200rpm HDDs - 13.0TB in RAID 6 (RAID5 14.00TB) - £7,180.00

As everything will get backed up to a NAS every night, I should get away with RAID5 and gain another 450GB or 600GB in storage.
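To compare those quotes on a like-for-like basis, here's a quick £ per usable TB calculation using the RAID 6 figures listed above (capacities and prices straight from the list; exact pennies aside, the ranking is the point):

```python
# Quick comparison: price per usable TB (RAID 6 capacities) for the Dell quotes above.
quotes = {
    "MD1200 450GB 15k": (4.50, 6709.00),
    "MD1200 600GB 15k": (6.00, 8749.00),
    "MD1200 1TB 7.2k":  (10.0, 6109.00),
    "MD1000 450GB 15k": (5.85, 7900.00),
    "MD1000 600GB 10k": (7.80, 8950.00),
    "MD1000 600GB 15k": (7.80, 9840.00),
    "MD1000 1TB 7.2k":  (13.0, 7180.00),
}
for name, (tb, price) in sorted(quotes.items(), key=lambda kv: kv[1][1] / kv[1][0]):
    print(f"{name:17s}  £{price / tb:6.0f} per usable TB")
```

Unsurprisingly the 7200rpm 1TB options come out far cheaper per usable TB, but per the comments above that's capacity at the expense of spindle speed.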
 
No, it makes no sense. He is moving to a new building and can create an infrastructure from scratch. If anything, I'd be looking at projected business growth and trying to negotiate a larger budget.

Nobody has an unlimited budget. Yes, I agree that if there is room for more spend, put better software in place. I'm in a rather similar situation myself. For 30 users SBS is a good option unless you have something outside of the SBS realm.

Me personally, I'd take SBS with decent kit over average kit and Windows Server X etc., i.e. a greater hardware spend vs software spend.

Bah: I missed the "growing quickly" bit. Need to read threads better. So yes, full blown server is probably a better option.
 
30 users really is pushing the upper boundaries of what I'd want to do with SBS, especially if they're power users (which it sounds like a lot of these are). I wouldn't want to be migrating in 6 months' time, so separate servers (virtual or physical) are the way to go here.
 
Oh please don't. Use virtualisation intelligently where it makes sense - which isn't at each and every opportunity. I get really tired of people claiming virtualisation is the solution to every ill; it's not. It's a nice tool, but use it when it's appropriate, not as the standard solution to every problem.

Agreed, but my meaning in the context of the OP's situation is that he definitely should virtualise everything... e.g. the one or two boxes (AD + Exchange + File Server) he's going to have. If he does this then he will improve his DR odds considerably. I don't advocate doing everything in a larger, more complex environment, but for him I would definitely do it.

In conjunction with this comment was my point about moving away from SBS and splitting up some functions of the network. He could do that in the short term and not have to add any hardware to his budget, and then when more budget becomes available he could very easily (because they're virtualised) split some of the hosts onto separate hardware; this way he doesn't have to rebuild DCs etc. just because he's moving hardware.

You'll get more PCI slots in a 2U box, which could matter if you want to attach multiple external arrays and maybe use a quad-port NIC to build a high-bandwidth team.

I don't think it matters on the R610 as you can attach 2 x DAS units to it. We have one like that, I believe. I'm not sure 3 is even supported, and as I said in the post I'd probably rinse and repeat the whole purchase to add more. I think the R610 has 4 GigE ports too, but I would need to check.

By far the best solution is stackable switches with a high-bandwidth backplane (which at the SME level still means Cisco 3750s - though Juniper's EX4200 is a seriously good switch, but it comes at a price). If they're too expensive then just use multi-port trunks on cheaper ToR switches (virtually any HP or low-end Cisco). I'd struggle to recommend anything chassis-based to anyone on a budget though; they're always more expensive than a simple ToR option and only start to make sense when you need to worry about management (which I don't think you do here).

I'd also question the 'core switch' terminology: a 'core switch' would do layer 3 routing and wouldn't have any hosts directly connected to it in the true sense of it. One big unit with everything plugged into it isn't a core switch; it is, however, usually the prelude to a masterclass in how not to design a network.

You're probably used to building larger networks than me, but I still think if he can consolidate his 30 or so users into a single switch he won't then have to worry about bandwidth between his two switches, what to put on each separate switch, etc. We once had six 24-port switches in our business and the whole bandwidth-between-switches issue got messy (albeit they didn't do trunking well). We replaced them with an HP modular switch and have never had an issue to do with that since. That was 5 years ago.
 
I don't think it matters on the R610 as you can attach 2 x DAS units to it. We have one like that, I believe. I'm not sure 3 is even supported, and as I said in the post I'd probably rinse and repeat the whole purchase to add more. I think the R610 has 4 GigE ports too, but I would need to check.

OK, maybe that's a limitation of the Dell kit; I have in the past put eight MSA enclosures onto a 2U HP box, which simply isn't possible with a 1U machine (or wasn't at least, external SAS connections instead of U320 SCSI may have changed that).

You're probably used to building larger networks than me, but I still think if he can consolidate his 30 or so users into a single switch he won't then have to worry about bandwidth between his two switches, what to put on each separate switch, etc. We once had six 24-port switches in our business and the whole bandwidth-between-switches issue got messy (albeit they didn't do trunking well). We replaced them with an HP modular switch and have never had an issue to do with that since. That was 5 years ago.

Indeed, I've nothing against chassis units but they are almost always more expensive. If you need loads of interswitch bandwidth on the cheap then 3750 StackWise interconnects are 35.6Gbit/s (close to backplane bandwidth) and have the advantage that you can form a single logical unit out of up to six (I think it's six anyway) units.

At £2000 for a 24-port gigabit unit (with powerful L3 features thrown in) it's still one of the best bargains out there, more than five years after it was first released.

Modular switches are nice in some circumstances but stackable units can be really compelling in terms of business case. They require less patching and less rack space, are as easy to manage if they form a single logical switch (which Juniper and Cisco ones do) and are generally cheaper per port.

Or indeed, a 48 port switch would be an obvious option if you can get the required ports down to that number...
 
Oh please don't. Use virtualisation intelligently where it makes sense - which isn't at each and every opportunity. I get really tired of people claiming virtualisation is the solution to every ill; it's not. It's a nice tool, but use it when it's appropriate, not as the standard solution to every problem.

Amen to that.

People jump onto the virtualisation bandwagon and run off trying to implement it, only to realise it is not the holy grail!
 
Far too many people do virtualisation the wrong way.

They get all the gear in, then say: right, now what shall we virtualise?
 
RAID 5/6 over 12 or 15 drives is going to kill performance. RAID 10 would be better, as has been mentioned already. If you really are keen to get as much space as possible then RAID 50 would be a bit better.
 
Sorry, I was not very clear before: I was talking about TWO 48-port stackable switches, giving us a total of 96 ports. I am giving each user access to 2 network sockets in case they need to use a second workstation or their laptop.

Not touching virtualisation... maybe in the future.
 
RAID 5/6 over 12 or 15 drives is going to kill performance. RAID 10 would be better, as has been mentioned already. If you really are keen to get as much space as possible then RAID 50 would be a bit better.
The H800 handles RAID6 with ease. You will max out the interface speed before the controller struggles with parity - in a file serving scenario. ;)
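For a rough sense of that point (throughput figures here are assumptions for illustration, not benchmarks): a single gigabit link tops out well below what a 12-spindle RAID 6 set can stream sequentially.

```python
# Back-of-envelope sketch with assumed figures: where the bottleneck sits when file serving.
gbe_link_mb_s = 1000 / 8 * 0.95    # ~119MB/s practical ceiling for one 1GbE link (assumed 95% efficiency)
per_disk_mb_s = 80                 # assumed sequential throughput per 7200rpm nearline disk
data_disks = 12 - 2                # RAID 6 spends two disks' worth of space on parity
array_mb_s = data_disks * per_disk_mb_s

print(f"One 1GbE client link : ~{gbe_link_mb_s:.0f} MB/s")
print(f"12-disk RAID 6 shelf : ~{array_mb_s:.0f} MB/s sequential (rough)")
# The gigabit link saturates long before the H800 runs out of parity headroom.
```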
 
Oh please don't. Use virtualisation intelligently where it makes sense - which isn't at each and every opportunity. I get really tired of people claiming virtualisation is the solution to every ill; it's not. It's a nice tool, but use it when it's appropriate, not as the standard solution to every problem.

Virtualisation done right is a fantastic solution and has been a boon for us, so much so that we're now not doing it right (too many eggs in too few baskets :) ). Unfortunately, IMO, to do virtualisation right for crazyswede I'd suggest almost doubling the budget.

Also definitely separate out the DC/DNS from Exchange, that setup bit me in the arse a few times in the past.

crazyswede:

How available are you expected to make the new solution?

What's the business's acceptable data loss in the event of hardware failure (RPO)?
What's the business's acceptable outage in the event of hardware failure, in and out of business hours (RTO)? Does the hardware support, added to the time required to rebuild stuff, cut it?

If you're expected to supply something in the region of four nines then shouldn't you be going for two servers at the very least and, if you're lucky, cross-stacked LAN switches as mentioned? That, IMO, starts to lend itself to an HA virtualisation cluster, but then you'll have to move away from DAS and look at NAS or SAN storage... more money :)
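For reference, here's what those availability targets mean in allowed downtime per year (plain arithmetic, nothing assumed beyond the percentages):

```python
# Plain arithmetic: annual downtime allowed at common availability targets.
minutes_per_year = 365.25 * 24 * 60
for label, availability in [("three nines", 0.999), ("four nines", 0.9999), ("five nines", 0.99999)]:
    downtime = minutes_per_year * (1 - availability)
    print(f"{label:11s} ({availability:.5f}): ~{downtime:6.1f} minutes of downtime per year")
```

Four nines works out at roughly 53 minutes a year, which is very hard to hit with a single server and a single switch.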
 