What firewall do I need?

Izi

Soldato
Joined
9 Dec 2007
Posts
2,718
I am currently building a Dell server to host web applications and other online software. (Before now I have rented hardware.)

In all, approximately 100,000 users a day will be using the server to access web applications.

Can someone please recommend a simple-to-use firewall for this estimated traffic level?

I was told a Juniper SSG20 hardware firewall would do what I need. They are still £500 though, which is quite expensive.

Many thanks
 
If you're genuinely getting 100,000 users a day (on one server? That's an 'interesting' bit of design) then a) that sort of cost should be no big deal, and b) you're going to need at least an SSG20.
 
It's not on one server, it's spread over three at the moment.

Just realised that I didn't really explain what I meant by 100,000 users. When I say 100,000 users, I mean that the server is visited 100,000 times over a day. These are not unique users, but rather around 50k unique users all coming back at least once.

As for cost, £500 is a lot when you consider software licensing (SQL Server / Windows / backup etc.) plus the hardware and the time to actually set it all up.

Firewalls are obviously important, so I don't want to skimp on them for the sake of a few hundred quid.

EDIT....

Referring to you saying 'an interesting design', why is it not a good idea to serve that many users on one server? Say I have dual quad-core Xeon procs and plenty of RAM, virtualise three servers and allocate resources appropriately, is that not a good way to go?

It saves on buying three servers, saves money co-locating, saves electricity, and you get the same end result. Am I missing something?
 
to more accurately specify the capacity of firewall you require, you're going to have to break this '100,000 users in a day' down into more specifics...

for example... the number of concurrent sessions you wish to be able to sustain, and the rate of new connections you expect.

as a guideline, an ssg20 will sustain either 8000 or 16000 concurrent sessions in either baseline or extended form, and will sustain 2800 new sessions per second in either form.
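to put rough numbers on those limits, here's a back-of-envelope sketch. only the 100,000 visits/day and the ssg20 figures come from this thread; the connections-per-visit, peak factor, and session lifetime are assumed placeholders, so swap in your own measurements.

```python
# back-of-envelope firewall sizing; constants marked "assumed" are
# illustrative placeholders, not measurements from the site in question
VISITS_PER_DAY = 100_000        # from the thread
CONNS_PER_VISIT = 10            # assumed: one page view opens ~10 tcp sessions
PEAK_FACTOR = 3                 # assumed: peak hour runs at 3x the daily average
AVG_SESSION_SECONDS = 30        # assumed: how long a session sits in the table

avg_rate = VISITS_PER_DAY * CONNS_PER_VISIT / 86_400   # new sessions/sec, average
peak_rate = avg_rate * PEAK_FACTOR                     # new sessions/sec, peak

# Little's law: sessions in flight = arrival rate * average session lifetime
peak_concurrent = peak_rate * AVG_SESSION_SECONDS

print(f"peak new sessions/sec: {peak_rate:.0f} (ssg20 limit: 2800)")
print(f"peak concurrent sessions: {peak_concurrent:.0f} (ssg20 limit: 8000-16000)")
```

under these assumptions the load comes out well inside the ssg20's envelope, which is why the specifics (session length, connections per page) matter more than the raw daily visit count.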

as a bare minimum you ought to have two servers to handle that load, with either some form of rudimentary dns round-robin or, preferably, some form of application-layer load balancer in front.
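for the dns round-robin option, the whole trick is publishing multiple A records for the same name; a minimal BIND-style zone fragment (addresses made up for illustration) would look like:

```text
; hypothetical zone fragment: rudimentary round-robin via multiple A records
; resolvers rotate the order of answers, spreading clients across both hosts
www   300   IN   A   203.0.113.10
www   300   IN   A   203.0.113.11
```

note the short TTL (300s): if one host dies, clients keep hitting its address until the record is pulled and caches expire, which is exactly why an application-layer load balancer with health checks is preferable.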

you should be looking to run two firewalls in high availability too.
 
why is it not a good idea to serve that many users on one server?

what happens when you have a hardware issue with this one server that affects all three running in the virtualised environment... don't put all your eggs in one basket.
 
To be more helpful: it depends on your bandwidth too, but for that kind of environment I'd go for at least one SSG140 (preferably two running in active/passive HA). Unfortunately that's basically double the cost, even for just one...

I'd seriously make sure you have at least two servers though to cover redundancy, and on a site that busy I would want to run the back-end database on a physical box for best performance.
 
Not enough information in the OP, to be honest.

The users, for example: are they internal (i.e. on a corporate LAN and not going through the firewall) or external?



M.
 
It's a web application server, so all users are routed through IIS. The server is serving web pages only. It's not 100,000 people connecting to Exchange, for example.

I get your points regarding redundancy. It is nice with our current setup that should one server go down, it only affects a third of our users. However, the most downtime we have had in 10 years is the time it takes to reboot a server. (OK, and we had to replace a hard drive once, but that was done within an hour)...

However, now you have all said that, I don't know what to do!
 
do you have one physical server running three vm's in your environment? or was that just an example?

if that is the case, then what i was getting at was: what is your contingency plan for when (not if!) you have hardware issues with the physical server, which then affects the virtual machines it is hosting?

sure, you can have hot-spare memory and drives, maybe teamed nics, multiple power supplies etc... but what if a mainboard craps out, or a raid controller? or something a bit more basic, like a power issue to the rack that's going to outlast the supply from the ups?

if it is a virtual environment, do you have more hardware and shared storage so you can simply move the vms?

i know i'm going to the nth degree here, and how mission-critical this system is will largely dictate how far you're going to go, but these are all important things to think about.

to answer your actual question:

will an ssg20 cut it for your existing environment...? probably...

is it the right way to go? well, think about everything that's been said, find out whose nuts are gonna be in a vice if the server becomes unavailable for x amount of time, and then work out if it's worth investing some more time and money to make the environment more scalable and ultimately more available.

edit: if you do go with a netscreen (and you should, it's really great kit! :)) and you need any help getting the unit configured then shout up; there are at least two people on here who work with the kit day in, day out!
 

Thanks for the informative post.

I suppose you are right, it does depend how mission-critical the data is. I am hosting websites which are e-commerce / news and generally business-based. I don't host real finance apps where 100% uptime is needed. I do tell my clients that if they want redundant hosting then it will cost them, and I can get a price.

If I were to purchase a Dell and add the 3-year 4hr mission-critical support, then technically they should be able to fix the server within 4-6 hours at most? This is assuming the worst, such as a motherboard failure or similar... I will be getting hot-swap hard drives and RAM.
 
yeah, i guess... at least you're being upfront with your customers, i suppose... they ought to do some sums to work out how much money they would lose on the e-commerce side with 4-6 hours' downtime though...
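those sums are easy enough to sketch. every figure below is a made-up assumption for illustration (revenue, redundancy premium, outage frequency), so plug in real numbers for a real client:

```python
# illustrative downtime sums only; every constant is an assumed placeholder
daily_revenue = 2_000.0        # assumed: e-commerce turnover per day (GBP)
outage_hours = 6               # worst-case repair window mentioned in the thread
trading_hours_per_day = 24     # assume orders arrive around the clock

lost_revenue = daily_revenue * (outage_hours / trading_hours_per_day)
print(f"revenue lost to a {outage_hours}h outage: ~GBP {lost_revenue:.0f}")

# compare against the extra cost of a redundant setup
extra_monthly_cost = 150.0     # assumed: premium for a second server/firewall
outages_per_year = 0.5         # assumed: one multi-hour outage every two years

annual_redundancy_cost = extra_monthly_cost * 12
expected_annual_loss = lost_revenue * outages_per_year
print(f"redundancy: GBP {annual_redundancy_cost:.0f}/yr "
      f"vs expected loss GBP {expected_annual_loss:.0f}/yr")
```

with these particular assumptions the redundancy premium costs more than the expected loss, which is the tension the thread keeps circling; a higher-revenue client or a flakier outage rate flips the answer.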
 
Yeah, I am always upfront about it, and always give the client the options available to them.

I don't think any one of my clients would be willing to pay more than double for redundancy, bearing in mind that in 10 years we have had one hour without service on one server. The monthly saving on hosting over 10 years will far outweigh what an SME e-commerce site would lose if the server went down for a few hours.

It's still a tough decision though, and I need to spend more time thinking about what I should do.

Out of interest, how often have other admins here had serious problems with hardware? How often have you had a mainboard go, for example?
 
We lose 15-20 mainboards a year, I guess, on an estate of close to 800 servers now, almost always on the older hardware though; G3 and early G4s account for the majority (90%) of serious failures. We lose a lot more disks and PSUs in terms of server hardware. What redundancy you choose is up to you in the end, but rest assured, if you decide against it, it will choose the busiest day of the year to fail and the manufacturer will spectacularly fail to meet the 4-hour response time.
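for what it's worth, those figures work out to roughly a 2% annual mainboard failure rate per server; the extrapolation to a small estate below is my own rough maths from the numbers quoted above, not anything measured:

```python
# back-of-envelope from the figures quoted above: 15-20 mainboards/year
# on an estate of ~800 servers
boards_lost_per_year = 17.5    # midpoint of "15-20"
estate_size = 800

per_server_annual_rate = boards_lost_per_year / estate_size
print(f"~{per_server_annual_rate:.1%} chance of a mainboard failure "
      f"per server per year")

# extrapolated expectation for a small 3-server estate over 10 years
servers = 3
years = 10
expected_failures = per_server_annual_rate * servers * years
print(f"expected failures over {years} years on {servers} servers: "
      f"{expected_failures:.2f}")
```

so even at fleet-average rates, a three-server estate should expect better-than-even odds of at least one mainboard failure in a decade, which puts the "one hour of downtime in 10 years" experience in context as lucky rather than typical.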

We lost the array controller on a primary file server for a FTSE 100 company yesterday morning, the day they're announcing interim results of all days. Thank god for DFS, eh? It wasn't an old server either, a P800 controller on a G5...
 
I bet your customer would change their mind about how much redundancy was worth if they lost a day's worth of commerce.

Don't forget that SLAs are generally average response times (don't know about Dell), so they could take 6 hours to get to you because they took 2 hours to get to the last customer and still be within their SLA terms.

You've been very lucky to have had so little downtime in that many years.
 