For a file server: more RAM, faster HDDs, better CPU?

We have two file servers at work that serve out files ranging from 100 to 1000 MB per file to hundreds of users across the network. At times it slows to a standstill, which I am trying to fix. I am not sure what I should do.

We have two servers with the following config:

Q9650 CPU, 8 GB RAM, 1 TB HDD (non-RAID), 1 GigE connection, running Ubuntu 9.10 Server.

What can I do to increase server capacity so that the machine(s) don't bog down? I was thinking about getting a Xeon 554x CPU. Would that help if that was all I changed? What matters most in a file-server config for increasing the number of connections it can handle?

Cheers.
 
I would be pretty sure it's the hard drive that is the bottleneck. Is it just one HDD per server?
And how much cash can you throw at it?
 
A single SATA drive is not going to be sufficient - I'd recommend going for SAS disks, ideally in RAID 10. Your disk must be getting hammered with hundreds of users. :o

Even better would be using something like ZFS with a hybrid storage pool - but it's a little more complicated to configure. Basically, you can have SSD cache devices that cache the most frequently accessed files.
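
Roughly what that looks like with the zpool tools - a sketch only, with made-up device names (and bear in mind ZFS on Linux in this era means zfs-fuse or a Solaris-based box):

zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde   # main pool on the spinning disks
zpool add tank cache /dev/sdf                                  # SSD as a read cache (L2ARC)
zpool status tank                                              # confirm the layout

The cache device just soaks up the hot reads; the pool still works (only slower) if the SSD dies.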
 
We have more hard drives and a RAID card that we can throw into a machine... would 5 x 1 TB drives in RAID 5 do better?
 
I'd go for RAID 10 if you can.

Normally I'd have said it was CPU (Samba on *nix seems to eat it), but obviously it's disks this time!

Otherwise, check CPU load and RAM usage when it's running slowly and see what it's eating.
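
A few commands that will show where the time is going - iostat comes from the sysstat package, so this assumes you have (or can install) it:

top            # overall CPU and memory
free -m        # RAM usage in MB
iostat -x 5    # per-disk stats every 5 seconds; high %util and await point at the disk
vmstat 5       # the "wa" column is CPU time spent waiting on I/O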
 
Here is the screenshot of "top" in Linux.

From what I can tell the CPU is 97% idle and the load average is low, but this server is struggling right now to accept new connections.
 
As said, the bottleneck here is definitely the disk. For hundreds of users a single SATA drive really will struggle (as you've found).

Get that sorted and you'll find it a lot better; it has more than enough RAM, I'd have thought, and the CPU should be fine for just serving up files.

Also, maybe think about doing something with another network card (I could be talking rubbish as I'm not too clued up on that kind of thing, so apologies - anyone who knows better, please correct me!) to increase capacity: team up the NICs for a better/more resilient connection?
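
For what it's worth, NIC teaming (bonding) on Ubuntu of that vintage is done with the ifenslave package and /etc/network/interfaces. The sketch below uses made-up addresses and interface names, and the exact directive names vary slightly between ifenslave versions, so treat it as a starting point only - balance-alb works with any switch, while 802.3ad needs LACP support on the switch.

# /etc/network/interfaces - bonding sketch, placeholders throughout
auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    slaves eth0 eth1
    bond-mode balance-alb
    bond-miimon 100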
 
I'd go for RAID 10 if you can.

If I add four drives and set up the RAID 10, will I lose the data on the original 1 TB drive, or can it be added safely into the array?
 
607 tasks looks high.

Using 5 x 1 TB is not going to be overly fast, unfortunately. As others have stated, you'll need some faster disks (SCSI, SAS) or look into some form of SAN.
 
Are people accessing lots of the same files? It's possible that if they are accessing the files in a certain way, they need to wait to gain a lock on a file, which will slow down access considerably.

The I/O wait looks very low - I would have expected it to be higher...
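
If it is Samba doing the serving (as mentioned above), you can see current locks and connections directly - assuming the standard Samba tools are installed:

smbstatus -L    # list the files Samba currently holds locks on
smbstatus -b    # brief list of connected users/machines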
 
Creating an array without losing data is dependent on your controller, and even then I wouldn't chance it.

RAID 10 on SATA isn't ideal, but it's still an improvement. You should look at SAS or SCSI disks.

100 users is more than enough to justify it; we've got fewer than 10 users in our office and I'm running RAID-Z across 4 disks. I'd be looking to upgrade that if we had more users.
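
If it ends up being Linux software RAID rather than the hardware card, the safe route for the data question above is: build the array on the four new drives, copy everything across, and only then wipe or reuse the old disk - don't try to fold a populated drive into a fresh array. A rough mdadm sketch with placeholder device names and paths:

mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.ext4 /dev/md0                       # or ext3 on an older install
mount /dev/md0 /mnt/newarray
rsync -a /srv/files/ /mnt/newarray/      # copy the existing data over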
 
As well as performance you should also be considering redundancy, which is the other reason for using RAID. Point out to management how long the system would be out of commission and how much data would be lost if you had to rebuild from your most recent backup.

Our servers are set up with two RAID LUNs: a two-drive RAID 1 to hold the OS and a RAID 5 with between three and twelve drives for data.

Another thing to check: do you have gigabit uplinks to all your switches, or are they 100 Mbit? If you've got nasty old switches this could be causing issues (or ancient fibre converters). If they're unmanaged it's very hard to tell.
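
You can at least confirm what the server's own NIC has negotiated (interface name assumed):

ethtool eth0 | grep -i speed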
 
Hard drive is your problem for sure.

Either get a good RAID volume set up, i.e. RAID 5 or RAID 10 (don't forget a decent PCI-E RAID card), and configure it correctly with the right block and stripe sizes etc., or just invest in SAS :) (I know the latter isn't possible).
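
On the block/stripe-size point: if the data filesystem is ext3/ext4 on top of the array, you can align it to the RAID geometry. A sketch assuming a 5-drive RAID 5 with a 64 KB chunk size, 4 KB filesystem blocks and a placeholder device name:

# stride = chunk / block = 64 KB / 4 KB = 16; stripe-width = stride x 4 data disks = 64
mkfs.ext4 -E stride=16,stripe-width=64 /dev/md0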
 
Aside from all the correct advice already given in the thread, Q9650?

Are these proper servers or desktop machines that you are using as servers?

If it's the latter, I'd be wary of bunging loads of mission-critical stuff for a 100-person company on that!

Time to invest in proper kit for the job, perhaps?
 
Serve out files ranging from 100 to 1000 MB per file to hundreds of users across the network

I would argue you have a problem there - the worst-case scenario is 100 x 1000 MB being accessed at one time through a single GigE connection. I would experiment with adding new NICs.

Search for the Intel Pro/1000 PT quad, then toy with assigning IPs to various departments as an easy fix.
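
The per-department IP idea is basically multi-homing: each extra NIC gets its own address on its own subnet, and a chunk of the users point at that address instead. A sketch with made-up addresses and interface names:

# /etc/network/interfaces - second NIC on its own subnet
auto eth1
iface eth1 inet static
    address 192.168.2.10
    netmask 255.255.255.0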


Oh, and maybe invest in some faster HDDs (a few SSDs in a RAID array may be your answer here).
 
Oh, and maybe invest in some faster HDDs (a few SSDs in a RAID array may be your answer here).
At around £500 (after discount) for a 50 GB SSD from Dell, I don't think that's such a good idea personally...

The OP's suggestion of a five-drive RAID 5 would be fine for this.
He says the traffic is one-way (out), and that would be absolutely fine for RAID 5.
You will need a decent controller though, otherwise you may as well not bother.

RAID 10 is nice but only necessary for high IOPS with (relatively) balanced read/write scenarios.
 
At around £500 (after discount) for a 50 GB SSD from Dell, I don't think that's such a good idea personally...

The SSDs in Dell servers are, I believe, SLC-based models. These are very high-end (if they're not Intel X25-E they're very similar) and are designed for very high IOPS in critical environments. For something like this, he may want to consider using consumer-grade MLC SSDs, which aren't quite as fast.
 
Only on OcUK could someone be seriously suggesting SSD for this application, blind and with only the vaguest of vague hints at a sizing exercise.

For much less than the cost of enterprise-grade SSD kit, he could implement a seriously fast storage back-end that would do the job adequately. Given that the whole lot is running off a SINGLE SATA disk at the moment, even moving to a few 15k SAS drives in an appropriate RAID config is going to give him dramatic speed increases.

I can't even begin to imagine the budget meetings that must result if people seriously suggest that SSD is the only way to go!

EDIT: My question about the environment has still gone unanswered - are you using desktop kit here?
 
Only on OcUK could someone be seriously suggesting SSD for this application, blind and with only the vaguest of vague hints at a sizing exercise.

I take it you missed my first post? I don't think anybody is suggesting he buy 1 TB worth of solid-state storage... I suggested using SSDs as part of a ZFS hybrid storage pool.
 