Clustering / Fail over + Web servers + SQL 2008 + Server 2008

Izi

Soldato
Joined
9 Dec 2007
Posts
2,718
Hi all,

Really need some help and direction for my next set up.

I run a couple of web servers with simple setups that are not fail-safe: Windows 2003 and SQL 2000. Although they run RAID (and are backed up daily), if one server goes down there is no backup server to take over that role in the meantime.

I have read into Windows 2008 and it looks like some impressive kit. I like the new Hyper-V stuff, essentially meaning less need for different servers for different roles.

Very simply, what I want to know is: how do I set up a fail-safe server that takes over if the main one fails? I want a server which is mirrored on another server, so that if one fails the other automatically takes over. I assume this is possible, but how does it work? Is this clustering? If one server goes down because of a failed HDD and is taken offline, fixed and put back online, how does it re-synchronise with the server which kept working while it was down?

Sorry for the very newbie questions, but I have always struggled with Microsoft jargon and just need someone to speak in layman's terms and point me in the right direction!

A packet of crisps and a pint to anyone who helps :)

Many thanks!
 
Associate
Joined
20 Oct 2002
Posts
1,127
Location
Redcar
You're right that you will need to cluster the web servers so that if web server A is down or off for an upgrade, web server B can take all the traffic.

I've done this with 2x physical SQL servers in 2003 but not yet in 2008. You can add complexity to this as well by virtualising the servers to be clustered on one physical box.

The best thing to do would be to get hold of some spare kit and the software and set up a test rig. It doesn't have to be server-grade hardware. We run all our basic hardware testing on some cheap office-grade PCs we didn't use any more; just make sure the test machines have 64-bit capable CPUs and 4GB of RAM, or testing becomes very slow.
 
Man of Honour
Joined
30 Jun 2005
Posts
9,515
Location
London Town!

Some of us are blessed with racks full of HP kit for lab testing...but regardless it's the way to go. Full clustering on SQL is complex stuff; database mirroring (and consequently manual failover) in the event of a primary server failure is simpler.
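For the database side, the mirroring idea can be sketched in miniature. This is a toy Python model of the concept, not SQL Server's actual mechanism, and all names are made up:

```python
# Toy sketch of database mirroring: every write is shipped to a mirror,
# and failover is a manual decision rather than automatic clustering.
class Database:
    def __init__(self):
        self.rows = []

primary, mirror = Database(), Database()

def write(row):
    primary.rows.append(row)
    mirror.rows.append(row)   # synchronous mirroring keeps the mirror in step

write("order-1")
write("order-2")

# Primary dies: an admin manually promotes the mirror to active.
active = mirror
print(active.rows)   # ['order-1', 'order-2']
```

The promotion step is the manual part referred to above; full clustering automates that decision, which is where the extra complexity comes from.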

It depends what you really want to achieve. Clustering the web servers and SQL is one option, like you've identified. Virtualising the web and SQL servers under ESXi or similar and having two hosts for them to run on is another. That will protect you against hardware failure without the complexity of Windows clustering and give you the opportunity to add more VMs if you need something else. Ultimately it depends exactly what you want...
 

Izi

Soldato
OP
Joined
9 Dec 2007
Posts
2,718

Thanks for the replies both.

I suppose I should explain my situation, so you can perhaps suggest something easier / better that I could implement.

Currently I have 3 web servers and 2 mail servers.

What is good about this is that if one server goes down, it only affects 1/5th of my clients, as I have 5 separate servers.

Now if I upgraded to Windows Server 2008, I could potentially get rid of these servers and run them under Hyper-V. The problem being that if the server went down, it would affect 100% of my clients.
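As a back-of-envelope check of that trade-off (illustrative numbers only; the client count and downtime figures are assumptions, not from this thread):

```python
# Rough impact comparison of the two layouts described above.
clients = 1000
hours_down_per_box_per_year = 8   # assumed annual downtime per physical box

# Layout A: five boxes, clients spread 1/5 per box.
worst_incident_a = clients // 5   # clients hit by any single failure
client_hours_a = 5 * hours_down_per_box_per_year * (clients / 5)

# Layout B: everything consolidated on one Hyper-V box.
worst_incident_b = clients        # one failure hits everyone
client_hours_b = 1 * hours_down_per_box_per_year * clients

print(worst_incident_a, worst_incident_b)   # 200 1000
print(client_hours_a, client_hours_b)       # 8000.0 8000
```

Under these assumptions the expected per-client downtime is the same either way; what changes is the blast radius of any single failure, which is exactly why a consolidated box needs a failover partner.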

The current servers we have set up are nothing special in terms of spec either: dual core, 2GB - 4GB RAM, 160GB RAIDed 15,000rpm SCSI disks, so they are relatively cheap.

I was thinking that I would minimise servers and just have one more powerful setup, i.e. 8 cores, 16GB RAM, RAID 10 with 6 drives. Thus I could dedicate more cores and RAM to the web / SQL VMs than to the mail.

One thing I definitely want to do is upgrade to SQL Server 2008 and Windows 2008, as I want the next setup we do to last for 5 years. Have to move with the times and all that!

Thoughts?

Cheers.
 
Man of Honour
Joined
30 Jun 2005
Posts
9,515
Location
London Town!

Personally, given that setup, I'd probably virtualise it all but run it on two reasonably powerful boxes (at least quad core with 8GB RAM, maybe 2x quad cores). The downside is that to get the ability to move the VMs between the two nodes, you'll need to put them on some kind of back-end storage (a NAS or SAN of some kind). You should be able to do it with some decent HP hardware and get substantial change from £10k easily...depends if that fits your budget...
 

Izi

Soldato
OP
Joined
9 Dec 2007
Posts
2,718

You mean run two boxes, one of which fails over if the other fails?

Not sure of budget yet... been looking at Dell and I can put together an amazing spec for £4k + VAT - so would need 2 of these to 'cluster'.
 
Don
Joined
5 Oct 2005
Posts
11,154
Location
Liverpool

Yup, that's what he means. We use this method at work; it's really good...

Stelly
 

Izi

Soldato
OP
Joined
9 Dec 2007
Posts
2,718
And a back-end "shared" storage device.

Ok..

So a couple of questions: if server A goes down, server B takes over. When server A is brought back online, how is it re-synchronised? Files / databases etc.?

Why would I need a separate storage device? If I had RAID 10 with 4x450GB HDDs, that would give 900GB of storage, which is more than enough for us.
 
Soldato
Joined
8 Nov 2002
Posts
9,128
Location
NW London

The virtual hard drives for the servers are stored on shared storage; otherwise, if one node went down, the data would not be accessible (as the hard drives would be down as well). The memory and CPU resources come from the nodes.
 

Izi

Izi

Soldato
OP
Joined
9 Dec 2007
Posts
2,718

This has confused me, mainly, I think, because I don't understand how nodes work.

I am thinking of the setup like a mirrored pair of disks, which is obviously wrong.

Can you explain to me a little about nodes?

Are you saying there would be three components - two servers and one central storage device?
 
Soldato
Joined
8 Nov 2002
Posts
9,128
Location
NW London

A node is just a server with the VMware OS installed (it's actually usually referred to as a 'host').

The hosts run the VMs (virtual machines), supplying CPU and memory resources.

The "data" is stored on shared storage (e.g. a Fibre Channel or iSCSI SAN unit).

With the full version of ESX you can perform vMotion; this means that a VM can move from host to host with zero downtime.
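The host / shared-storage relationship can be sketched as a toy model. This is a crude illustration of the idea only, with hypothetical names, not how ESX actually works internally:

```python
# Toy model of two hosts plus shared storage.
class SharedStorage:
    """Stands in for the SAN: virtual disks live here, not on the hosts."""
    def __init__(self):
        self.disks = {}   # vm name -> virtual disk

class Host:
    """A node: supplies CPU/RAM and runs VMs whose disks sit on the SAN."""
    def __init__(self, name):
        self.name, self.vms = name, []

def fail_over(dead, spare, storage):
    # Because the disks are on shared storage, the surviving host can
    # simply start the same VMs again - nothing needs copying back.
    for vm in dead.vms:
        assert vm in storage.disks   # disk survived the host failure
        spare.vms.append(vm)
    dead.vms = []

storage = SharedStorage()
a, b = Host("host-a"), Host("host-b")
for vm in ["web1", "web2", "mail1"]:
    storage.disks[vm] = f"{vm}.vhd"
    a.vms.append(vm)

fail_over(a, b, storage)
print(b.vms)   # ['web1', 'web2', 'mail1']
```

This is also why local RAID alone isn't enough: if the disks lived inside host A, host B would have nothing to start when A died.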
 
Associate
Joined
30 Dec 2003
Posts
1,368
Location
BC, Canada
... been looking at Dell and I can put together an amazing spec for £4k + VAT - so would need 2 of these to 'cluster'

What spec is that going to get you? I'd be very interested to know the hardware that Dell are going to sell you for £4k to do the job that you are asking of them.

HA HA HA HA HA HA!!!!

NO! I mean that as in 'no, not Dell kit' (I am biased, by the way), and HA as in High Availability. Meaning 2 physical boxes: even if you virtualise them, you need a SAN (such as the HP MSA 2000 series) or an iSCSI target box such as an AiO.

So you create a cluster with two physical servers and some shared storage (using HP OpenView Storage Mirroring you can do pretty much instantaneous failover), and then you can have replication between the two and failover. If you use virtual machines, then just get the HA functionality, which will do automatic failover and provisioning for you etc. (using either of the above storage methods).

A node is just a server with the VMware OS installed (it's actually usually referred to as a 'host').

The hosts run the VMs (virtual machines), supplying CPU and memory resources.

The "data" is stored on shared storage (e.g. a Fibre Channel or iSCSI SAN unit).

With the full version of ESX you can perform vMotion; this means that a VM can move from host to host with zero downtime.

He speaketh the truth
 
Man of Honour
Joined
30 Jun 2005
Posts
9,515
Location
London Town!
You mean run two boxes, one of which fails over if the other fails?

Not sure of budget yet... been looking at Dell and I can put together an amazing spec for £4k + VAT - so would need 2 of these to 'cluster'.

Actually, given you have 5 or so VMs, I'd run them split between the two under normal operation so you get as much performance as possible, then have them all move to one host if the other has a problem. That way you don't just have a server sitting there doing nothing.
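A quick sanity check for that approach is making sure either host can carry the full load on its own. The RAM figures below are made up for illustration, not taken from the thread:

```python
# Rough N+1 sizing check for "split the VMs, fail everything to one host".
vm_ram_gb = {"web1": 2, "web2": 2, "web3": 2, "mail1": 2, "mail2": 2}
host_ram_gb = 16   # capacity of each of the two hosts

total = sum(vm_ram_gb.values())

# Normal operation: load split roughly in half across the two hosts.
per_host_normal = total / 2

# Failure case: one host must carry everything.
can_absorb_all = total <= host_ram_gb

print(per_host_normal, can_absorb_all)   # 5.0 True
```

If the check fails, you either buy bigger hosts or accept that some VMs stay down during a host outage.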
 
Associate
Joined
30 Dec 2003
Posts
1,368
Location
BC, Canada

Isn't that what HA and automatic provisioning do for you? :confused:
 

Izi

Soldato
OP
Joined
9 Dec 2007
Posts
2,718
Depends if you want to buy those features. ESXi is free.

Does ESXi just do what Hyper-V does in 2008? If so, what's better about ESXi?

edit: just looked at their website. So it's a small program (32MB) which runs on a server and in turn virtualises OSes?

i.e. you install VMware on the server, then the OSes in VMs on top?

Whereas Hyper-V needs Windows first before virtualising?
 
Last edited:
Soldato
Joined
14 Mar 2005
Posts
16,821
Location
Here and There...
This has confused me, mainly, I think, because I don't understand how nodes work.

I am thinking of the setup like a mirrored pair of disks, which is obviously wrong.

Can you explain to me a little about nodes?

Are you saying there would be three components - two servers and one central storage device?

I think you need to take a step back and do some serious reading about clustering over on Microsoft's site, or head to VMware's site and have a read about ESX and its requirements. Don't jump into this lightly or based on a thread on a forum; you are getting into some quite complicated territory, which is fine when it works but can be interesting when it fails. I would want to be very confident in what I was doing if I was deploying this for a customer base.
 