Deploying services to the cloud...

We're currently looking to upgrade to Exchange 2010 and SharePoint 2010 from our existing on-premises implementations. The obvious choice is to install these new apps onto existing VM infrastructure in our data centre.

However, we're interested in how we could perhaps deploy these into a cloud hosting service like Amazon Web Services, which we have some experience with from a development perspective.

So I'm not interested in a debate about why/how/if this should be done. What I'm interested in is how we can still achieve SSO with Active Directory even though these servers will be hosted in the AWS cloud. Anybody know?
 
So nobody has ever done this? :surprised:

Surely someone must have tried one of these hosted Exchange packages you see everywhere; how do you manage authentication to that?
 
There are multiple options here; 'cloud' is a very broad term. If you're just talking about moving some functions to off-site servers (whether they be dedicated, VPS or cloud-style machines) then yes, it'll be fine as long as they can talk to each other (which should be over a VPN...). That's no trouble at all.

If you want to integrate commodity hosted Exchange, then that depends on the provider and their approach.

I'd be very wary of moving any AD services to AWS or similar; it's not a great fit. You'll get a Windows VPS from a decent provider for not a lot more money (maybe less), and that's probably a better fit. AWS (and similar) works best, technically and economically, when you have to scale your resources quickly and unexpectedly; internal services are rarely good candidates.
 
Thanks, thought you might reply to this :)

So basically there's no voodoo solution to this; you basically require a VPN between the hosted boxes and the internal network.

You'd have thought this was a common problem with a million and one ways to solve it, given the cloud hype of recent years. We need some kind of federated AD model or something.
 
There are a variety of ways, to be sure: extending your forest to cover the hosted boxes is one, setting up trusts is another, etc. Choose what works best.

AD traffic is generally not something I'd allow out into the wild unencrypted, though, so a VPN is a given no matter which way you go. AWS supports VPNs in their VPC product... I presume some others do too.
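For what it's worth, the AWS side of that VPN can be scripted in a handful of calls. Here's a minimal sketch using boto3 (the AWS Python SDK) with static routing; every ID, IP and CIDR below is a made-up placeholder, so treat it as an outline rather than a recipe:

Code:
import boto3

# Sketch: IPsec VPN between an office network and an AWS VPC.
# All IDs and addresses are placeholders - substitute your own.
ec2 = boto3.client("ec2", region_name="eu-west-1")

# Your office router/firewall, as AWS sees it (the "customer gateway").
cgw = ec2.create_customer_gateway(
    Type="ipsec.1", PublicIp="203.0.113.12", BgpAsn=65000)["CustomerGateway"]

# The AWS-side endpoint, attached to the VPC hosting the servers.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGatewayId"], VpcId="vpc-0abc1234")

# The tunnel itself; static routes keep the sketch simple (no BGP).
vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
    Options={"StaticRoutesOnly": True})["VpnConnection"]

# Tell AWS which on-premises range (e.g. where your DCs live) sits
# behind the office end of the tunnel.
ec2.create_vpn_connection_route(
    VpnConnectionId=vpn["VpnConnectionId"],
    DestinationCidrBlock="10.0.0.0/16")

The create_vpn_connection response also includes a CustomerGatewayConfiguration blob with the IPsec parameters you'd load into the office router.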
 
I think with cloud-based services that require AD, I'd guess you just sync your AD across the internet to their systems. I know this is how we sync with Mimecast, so I would imagine it uses the same concept.
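If it helps, a directory sync like that generally boils down to periodically reading users out of AD over LDAP and pushing them to the provider's API. A rough sketch of the read side using Python's ldap3 library; the hostname, service account and OU are invented for illustration:

Code:
from ldap3 import ALL, SUBTREE, Connection, Server

# Sketch: pull users from AD over LDAPS for a one-way sync to a hosted
# service. The server, bind account and base DN are placeholders.
server = Server("dc01.corp.example.com", port=636, use_ssl=True, get_info=ALL)
conn = Connection(server, user="CORP\\svc_dirsync", password="********",
                  auto_bind=True)

# Every enabled user with a mail attribute (the userAccountControl
# bitwise filter excludes disabled accounts).
conn.search(
    search_base="OU=Staff,DC=corp,DC=example,DC=com",
    search_filter="(&(objectClass=user)(mail=*)"
                  "(!(userAccountControl:1.2.840.113556.1.4.803:=2)))",
    search_scope=SUBTREE,
    attributes=["sAMAccountName", "mail", "displayName"])

for entry in conn.entries:
    # A real sync would diff these against the provider and push only
    # the changes via their API; here we just print what would be sent.
    print(entry.sAMAccountName, entry.mail, entry.displayName)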

The main problems I have with "going to the cloud":

Data ownership considerations: who owns the data? If you want it all, can you take it all? If the company closes down and goes bust, what happens to your data? Where do the clients of your client stand in terms of data protection? This is a big deal for the likes of law firms.

Then there are the bandwidth considerations. Going to the cloud would essentially mean going from a LAN with, generally, 100Mbit clients and gigabit infrastructure to a LAN going out over the internet, shared with daily internet usage. Then there's the cost of serious bandwidth, and the question of whether you can put everything in the cloud or only some things; if only some, you'll have to maintain a local server farm anyway.

There are some benefits, like backup and redundancy, lack of server costs, lack of support management costs, etc.

I think at the moment I would only use the cloud for some functions and not others, depending on the requirements of the client, the number of users, etc. If there are only 10 users then it may be more economical to put them on a shared cloud Exchange solution than to buy a whole Exchange setup locally.

Generally speaking, though, I'm apprehensive about going to the cloud for the reasons above, and I think there are more opportunities in improving local LANs, like SSDs and gigabit to the clients; internet bandwidth just isn't that fast yet. When we have gigabit to the premises for under £10k a year, then we could see the cloud really becoming viable.
 
Groen said:
Generally speaking, though, I'm apprehensive about going to the cloud for the reasons above, and I think there are more opportunities in improving local LANs, like SSDs and gigabit to the clients; internet bandwidth just isn't that fast yet. When we have gigabit to the premises for under £10k a year, then we could see the cloud really becoming viable.

95% of office use is email, webapps and office files less than 1MB in size; you can run that on wet string. You can certainly run a couple of dozen users on a symmetric 10Mbit circuit (and we have dozens of offices all over Europe provisioned on that basis).
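The back-of-envelope maths bears that out; a quick sanity check (purely illustrative figures, not a sizing guide):

Code:
# How much of a shared symmetric circuit does each user get if everyone
# pulls data at once? Worst-case figures; real office traffic is bursty.
def per_user_kbytes_per_sec(circuit_mbit, users):
    return circuit_mbit * 1_000_000 / users / 8 / 1024

for circuit_mbit, users in [(10, 24), (100, 300), (1000, 500)]:
    print(f"{circuit_mbit:>4} Mbit / {users:>3} users -> "
          f"{per_user_kbytes_per_sec(circuit_mbit, users):6.0f} KB/s each")

Even with 24 users hammering a 10Mbit circuit simultaneously, that's roughly 51 KB/s each, which is ample for email and sub-1MB office files; in practice usage is nowhere near simultaneous.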

And working out who owns the data is generally a matter of reading the contract.

People are scared of the cloud for no good reason. No, it doesn't always work, and we don't outsource any IT function, but we have absolutely no servers in any of our offices; they belong in proper datacenter space and that's where they are. It's made zero difference to the way people work at all. Off-site infrastructure, whether in-house or outsourced, is merely a matter of sensible planning.
 
If you have a point-to-point to a data centre, I don't really see that as using the cloud. To me, using the cloud is when you use your standard internet line to make use of server services, often offered within a shared environment; i.e. the cloud provider can offer a good price on 10 Exchange users because they have 50 clients all running 10 users on one server. But I guess that's just semantics? Trying to run Exchange, DMS, accounts software and more across a 10Mbit line would not equal the kind of performance you get from running it all locally. We make use of a datacentre over a 100Mbit point-to-point for DR, and at some sites half of the servers are off-site over a point-to-point. I do notice that the sites using the off-site datacentre don't have the best performance, and it also creates a single point of failure if the line to the datacentre goes down.

All the benefits of using a datacentre aside, would it not just create an unnecessary bottleneck due to the 10 or 100Mbit point-to-point?

I guess this is going a bit off the topic I thought it was originally about. As I don't consider making use of point-to-point lines and renting rack space to be using the cloud, I have no problem with that, and my original comment about being apprehensive does not apply to making use of a datacentre with a point-to-point. I was specifically referring to what I consider to be cloud services: where you pay an internet solution provider to give you access for 10 Exchange users, or you make use of a SharePoint server whose hardware resources are shared with other companies. I have seen some previously dedicated server and colocation providers offering cloud-based virtual machines, where you just rent a platform that allows you to create virtual machines on shared hardware, like OpenNebula, CloudStack, etc.
 
Nail hit head.

Agree with Groen on this 100%.

Real-world example:
Some councils up North made the same mistake, and one still is (Sunderland City Council). They paid for it dearly; I mean millions of pounds for the bottleneck. People couldn't work, and information was not reaching other departments in the same building in time (it literally took HOURS for mail to reach an office 100 ft away).

http://www.sunderland.gov.uk/index.aspx?articleid=6088

Most of it is still ongoing; they cannot back out quickly. Think of it this way: a 1000Mbit or even 100Mbit pipe is ideal for Exchange and AD. If you artificially limit a pipe to 10Mbit, EVERYTHING slows down unless you enable QoS. But QoS cannot determine which service is more "important" at a specific time, or which email is more "important" than the other important emails. You essentially create an artificial bottleneck on your own services, which is very hard to back out of after implementation.

Even now they're burning hundreds of thousands of pounds on higher-grade lines and equipment, throwing money at the problem to try to get their network back to the level it was at in 2010.

Only if you dramatically increase your connection throughput to the outside world AND ensure redundancy (fully redundant failover over multiple lines) does the cloud make sense for business-critical services. Even then, if you're throwing all that money at the problem, why not negate the entire problem and do it internally anyway?

I am pro-cloud as part of my job, but moving your internal services to the cloud sounds like problem creation (do you work for a council? :p) where there really isn't a problem. The cloud can be used effectively to improve performance and productivity, but this isn't one of those cases. Just my 2p.
 
Stupid implementation isn't a reason not to use the cloud. You can screw up a local implementation just as badly; you don't stop driving BMWs because a couple of owners drove theirs like morons and stuffed them into a wall, do you?

Again, bad implementation: you size your circuits appropriately. You *can* run 10-20 users off 10Mbit just fine; I know this because we do it in dozens of locations worldwide. Not a single one of our offices has even one server in it; they're better in a datacenter.

If you have 500 users then you might need gigabit circuits, but fortunately, if you have 300 users then you can likely afford the £60k that a pair of resilient gig circuits will cost.

Resilient connections are a given for any halfway sane company anyway; if a company of any decent size doesn't have resilient connectivity then their IT department needs shooting.

The other benefits are multiple - it doesn't matter where your office is or where your users are. Our users get the same experience whether they log in in London or Vancouver; it's seamless, and local office IT can't provide that (unless you spend vast amounts on the circuit to the office to turn it into a datacenter - which is pointless). If the office burns down, the staff can work from home or you can move into a new office; there's no invoking DR plans which may or may not work, no restoring to new hardware or whatever. It's business as usual (which is exactly what you need in that situation).

There is a problem: the previous way of doing things doesn't work today. It's not flexible, it can't adapt, and it was uninspired and guided entirely by what was easiest. There are a lot of people who think the cloud is a bad idea because they don't like change, don't understand it, and aren't good enough at their jobs to implement it correctly.

At the end of the day, IT is a service. Cloud services are usually cheaper and present as OPEX rather than CAPEX; they will win, so I'd get used to implementing them right.
 
Sorry bigred, that sounded a bit like a personal dig. Implementation has a lot to do with any IT deployment, cloud or non-cloud, but the cloud is not the solution 100% of the time (as with any solution, for that matter).

The cloud has a lot of advantages over a datacentre or even local hosting, but the standard "mileage may vary" caveat is important. Everyone gets their own use out of their own system or design.

BRS said:
Resilient connections are a given for any halfway sane company anyway; if a company of any decent size doesn't have resilient connectivity then their IT department needs shooting.

An IT department might not have a say in the ongoing operational costs of multiple lines. That decision would normally be made far above the IT department; usually it comes down to a finance team and various meetings involving management who have no idea about IT. Sometimes, on outsourced platforms, the outsourcing company takes over the client's operational costs.

In short, tailored solutions based on real-world tasks work better than a one-size-fits-all approach. End-to-end solutions, properly implemented and scoped against real-world tasks, work a lot better than partial or limited ones. Surely we can agree on that?

Personally I'm not scared of the cloud; heck, I manage systems in our own private and public clouds on a daily basis. But keeping on point and on topic with the OP: the right approach is to weigh up what real-world tasks he needs to perform and then build around that.

My concern, and it's always my concern on every contract I cover, is that the cloud gets suggested as THE FINAL SOLUTION to every problem, regardless of actual implementation, without any foresight or scoping to see whether there was an actual problem to begin with. I'm just trying to spare the OP needless problems in future, and maybe sway him to think twice and be entirely sure before he goes down this route, based on my experience.
 
BRS said:
The other benefits are multiple - it doesn't matter where your office is or where your users are. Our users get the same experience whether they log in in London or Vancouver; it's seamless, and local office IT can't provide that (unless you spend vast amounts on the circuit to the office to turn it into a datacenter - which is pointless). If the office burns down, the staff can work from home or you can move into a new office; there's no invoking DR plans which may or may not work, no restoring to new hardware or whatever. It's business as usual (which is exactly what you need in that situation).
There's no excuse for single-site failure these days, and that includes a single DC.
The recent AWS failures have highlighted this. Being in 'the cloud' offers no more protection than doing things locally unless you're getting a redundant solution.
Many websites were knocked out despite being 'in the cloud' because their region of AWS went down, while other regions were totally unaffected by the outage. Here's a good blog post outlining the issues: http://blog.fitocracy.com/post/26245878403/getting-fitocracy-back-online
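The usual answer to that failure mode is to run in two regions and fail over at the DNS layer. A hedged sketch with boto3 and Route 53 health-checked failover records; the zone ID, domain and addresses here are invented:

Code:
import uuid
import boto3

# Sketch: DNS failover between two regions via Route 53.
r53 = boto3.client("route53")

# Health-check the primary region's endpoint.
hc = r53.create_health_check(
    CallerReference=str(uuid.uuid4()),
    HealthCheckConfig={
        "IPAddress": "198.51.100.10",  # primary region's endpoint
        "Port": 443, "Type": "HTTPS", "ResourcePath": "/health",
        "RequestInterval": 30, "FailureThreshold": 3})["HealthCheck"]

def failover_record(role, ip, health_check_id=None):
    # One A record per region; Route 53 answers with PRIMARY while its
    # health check passes, and flips to SECONDARY when it fails.
    record = {"Name": "app.example.com.", "Type": "A", "TTL": 60,
              "SetIdentifier": role, "Failover": role,
              "ResourceRecords": [{"Value": ip}]}
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    return {"Action": "UPSERT", "ResourceRecordSet": record}

r53.change_resource_record_sets(
    HostedZoneId="Z3EXAMPLE",
    ChangeBatch={"Changes": [
        failover_record("PRIMARY", "198.51.100.10", hc["Id"]),
        failover_record("SECONDARY", "203.0.113.20")]})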

There is also nothing wrong with running circuits to 'local' offices. 100Mbit leased lines are available for £700 pcm, and 1Gbit for marginally more.
This is just a private cloud in a tier 1/2 datacentre. I hate the term 'cloud'; it's a load of fluffy nonsense. :)

BRS said:
There is a problem: the previous way of doing things doesn't work today. It's not flexible, it can't adapt, and it was uninspired and guided entirely by what was easiest. There are a lot of people who think the cloud is a bad idea because they don't like change, don't understand it, and aren't good enough at their jobs to implement it correctly.

At the end of the day, IT is a service. Cloud services are usually cheaper and present as OPEX rather than CAPEX; they will win, so I'd get used to implementing them right.
I'm not sure what previous way you're talking about, but handing over IT projects to a single provider 'at the lowest possible cost' is usually a fast way to get a bad solution. Standard public-sector approach.
The cloud is a good idea, but it does not replace due diligence in understanding your services, how resilient they are, and which parts you can afford to lose in the event of an outage - if any.
The CAPEX/OPEX argument varies from firm to firm, but cloud services do benefit from economies of scale, and being able to spin up an entirely new, hyper-scalable service in under an hour is a boon.
 