Home Server Board

Apologies for the double post (this is also in Hardware > Motherboards), but I forgot this section was here :)

Right, I'd go into detail, but it would start as a sob story about my worst fortnight with hardware in my life, and probably descend into a full-blown rant until I got to the part about nVidia and their shoddy chipsets, at which point something noisy and probably messy would happen.
Suffice to say I find myself in need of a motherboard.
Needs:
S775/Core2 support
ICH10R southbridge

Would be nice:
Integrated graphics so I don't have to pranny about finding a new card; I've been looking at the G45 chipset.
£120 max, but this can be adjusted for the perfect bit of hardware.

A server-class motherboard would be best as it is to be used in a 24/7 home server. I've seen things that come close, like the Supermicro C2SEA (but that needs DDR3) and the Asus P5Q-EM (but that isn't a server-class mobo, with its non-solid-state caps and limited expansion).
Any suggestions?

Also, any experience with using the RAID function on an ICH10R, or would the software RAID from Server 2003 do me fine?
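
(For reference, the Server 2003 software RAID I mean is the dynamic-disk RAID5 you build in Disk Management, or roughly like this from diskpart - disk numbers are just examples and this is from memory, so don't take it as gospel:

    diskpart
    list disk
    select disk 1
    convert dynamic
    ...repeat select/convert for each member disk...
    create volume raid disk=1,2,3
    assign letter=E
    exit
    format E: /FS:NTFS

Nothing clever, but it all lives in the OS rather than on a card.)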
 
If you are seriously considering a server-class board for the reliability, then surely you aren't going to rely on the ICH RAID or software RAID.

I'll use it for my games rig, where I just RAID0 some drives, but I wouldn't rely on it for anything I wanted to keep.
 
For the time being, yes. I can afford £100-150 on a reasonable board, but can't spring £300+ for a decent RAID card on top at the mo as I've just bought two other mobos (fortnight of hardware hell, remember?) :D
What I want is a solid core. A mobo that'll stay up and stable for weeks at a time and, aside from the odd driver update I might do, still be reliable in 2-3 years' time. Once I've got that, I'll sort out dedicated RAID later in the year.
Having said that, how reliable is dedicated hardware RAID anyway? At least with software RAID if the board dies I can drop the drives in another S2k3 machine and recover. If a £400 Adaptec card dies and it's out of warranty, what do I do then?
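
(To be clear, the recovery I'm picturing is just importing the foreign disk group on the replacement box: the moved disks show up as "Foreign" in Disk Management and you Import Foreign Disks, or from diskpart something like this - the disk number is an example and this is from memory:

    diskpart
    list disk
    select disk 1
    import
    exit

after which the volume and its data should reappear.)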
 
Blunt answer: you've not really got any need for a server-class board.
In many ways, you'd do better to buy two identical cheap boards and, if one dies, simply swap them over.

For what it's worth, my server has been running basically 24/7 for nearly 4 years now. This is on a cheap and nasty Epox S754 motherboard, and it's never missed a beat.
If you're planning on running a server, I'd also avoid driver updates like the plague; find a set that works and ignore them completely :)

-Leezer-
 
Exactly - server boards are all about multiple CPU sockets, full-size PCI-X expansion, and ECC RAM support (with more memory sockets than the average desktop board).

Depending on what you want to do, you might not even need a "server" at all - if you're just storing files, that's not a server, it's a NAS :)
 
"In many ways, you'd do better to buy two identical cheap boards and if one dies simply swap them over."

That was the plan two years ago. Unfortunately, I picked two nVidia 680i SLI boards; one for the server with a RAID5 array, and one for the gaming rig for mad overclocks. The list of issues with this quite frankly 'beta-at-best' chipset, and the weekends filled with harsh language, multiple reinstalls, lost arrays and RMAs, mean that this is their last gasp. After I've recovered my current array, I'm getting shot of both boards.
I do take the point that reliability is what I need, rather than a fully-fledged server board. I have no need any more for PCI-X slots, and I don't have the cash or the need for dual processors and multiple banks of memory; I just figured that a 'proper' server board would be built to be the most reliable. In that vein, I'm thinking the Supermicro C2SEA is a good halfway house: a board from a company with a good rep for reliability, a very stable mobo according to the reviews, low cost, just with the small penalty of demanding DDR3. Opinions?
 
Kind of in the same situation as you, enigmo; my requirements are for a server to host some VMs.

Keep bouncing back and forward between server-spec hardware and desktop spec.

Think I've settled on the middle ground of an X48 chipset board, as it supports ECC memory, plus a Socket 775 Xeon such as the X3220.

Still can't decide on SAS requirements though!
 
What the hell VMs are you going to be running that need SAS, Xeons and ECC RAM?


OP: Hardware RAID is far better because it's not reliant on anything else, and most adapters can export the RAID config, which can be reloaded onto a replacement card to restore the array nice and quickly. Soft RAID offers the same kind of thing in that the RAID config is stored with the OS; however, if it's the OS that goes you're screwed, as the RAID config is lost, and anything stored in software or reliant on software/hardware interfacing is more prone to breaking.
 
Don't tar every desktop board with the same brush ;)
Yes, there are some which are an utter POS, but the vast majority will run 24/7 day in, day out.
I would never let a gaming/overclocking-oriented board anywhere near a server, as this is just asking for trouble.

-Leezer-
 
Nothing production. Just a dev environment running 4-5 persistent VMs, with potentially 4 more test VMs.

I guess I don't really know the hardware requirements, so I'm erring on the side of extra power as I would hate to fire it all up and have it be dog slow.

I also think that if you're running 24/7, ECC RAM is a negligible extra expense.

You think SAS is overkill then?
 
I regularly run several 2003/XP VMs on my laptop as a demonstration rig. Tends to be for Citrix demonstrations - 2 x server, 2 x client - and it runs just fine. C2D 2.0GHz, 2GB RAM and a 160GB laptop drive... nuff sed.
 
Unless you're developing really disk-intensive databases then yes, SAS is overkill, and will most likely bankrupt you :p CPU and RAM are what VMs crave. 6-8GB RAM and a half-decent C2Q would run 8 VMs no problem, probably with a bit of headroom too.
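
Rough back-of-envelope, using my own assumptions rather than anything official: figure ~512-768MB per 2003/XP guest, so 8 guests is roughly 4-6GB, plus 1-2GB for the host OS, and you land right in that 6-8GB range.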
 
Eriedor:
I have to agree with the others here: at work we play with VMs every now and again and get by fine on some pretty slow C2Ds with less than 4GB RAM. Unless you're running several concurrently, I doubt you'll need the kind of spec you're looking at.

Skidilliplop:
Good points, well made, but for the time being it's the initial set-up cost that means I can't go with hardware RAID. Seems to me that hardware RAID is quicker and more reliable, but it has larger up-front costs (to the tune of £100s extra), and when the hardware fails you need either a spare card on standby, time to wait for an RMA (assuming it's still under warranty), or cash for a replacement card - and if the model you were using is no longer available, what do you do then? Software RAID is more prone to breaking and a lot slower, but at least with a Server 2K3 RAID, if the machine breaks you can drop the drives into another machine and rebuild from there.
Out of interest, anyone know if the ICH10R chipset is proper hardware RAID or this 'hardware-assisted software RAID'?
 
Ta. Probably won't bother with it then. After spending two years fighting with nVidia's RAID, it just seems to add an extra layer to the whole shebang, and an extra layer is an extra thing to break. Entirely software or entirely hardware RAID, or nothing, I think.
Was keen on the Supermicro C2SEA, as all the official reviews are good, but after reading the user reviews at Newegg I'm not too sure...
 
This boils down to the same old end game.
Your decision hinges on how much your time and data are worth to you. If they're valuable, then it's worth the investment to safeguard them. If not, take a chance on a less reliable but cheaper solution.

Thinking outside the box briefly, do you need RAID at all? Does your data change that much throughout the day?
If redundancy is the primary concern and the answer to both of those is no, a scheduled backup to a USB HDD or NAS would suffice (something like the robocopy sketch at the end of this post), and then you could use whatever RAID you wanted, or none at all.

Many ways to skin a cat; some require time and effort, others require money. Either way, decent redundancy WILL require investment.
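
Something along these lines would cover the backup side - the paths and share name are just examples, and robocopy comes from the Server 2003 Resource Kit if you don't already have it:

    robocopy D:\Data \\nas\backup\Data /MIR /R:2 /W:5 /NP /LOG:C:\logs\backup.log

Run it nightly from Scheduled Tasks (or schtasks). One caveat: /MIR mirrors deletions too, so it's a mirror rather than a versioned backup - swap it for /E if you want deleted files to survive on the backup side.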
 
Ordinarily I'd say no, but considering I had/have (unknown until the mobo is replaced) a RAID5 array on my server syncing to a RAID5 array on my NAS, and between hardware failures still managed to lose both at the same time, I'd prefer to have RAID rather than not. If, in spite of the hardware and redundancy I've thrown at my data, I can still lose the lot, then moving from two syncing arrays to two syncing JBODs will only increase the frequency of the losses, so I'll stick with RAID.
 
Live sync != backup, in the same way RAID != backup. If your array degrades, it's possible the errors could be synced. If it only syncs on a schedule, i.e. daily, then I can't see how you'd manage to lose both at once. JBOD will only lose the faulting disk from the data set.
Either way, I was suggesting a backup that's not part of a sync/array and is thus resilient to issues with them. A daily incremental backup would mean you lose a maximum of 24 hours of changes, as opposed to everything.

Out of curiosity, what were you using to sync the arrays?
 