Custom Servers

I have been looking into building custom servers with both non-server and server hardware, utilising the latest technologies. For example, take a VMware infrastructure server:

GIGABYTE GA-X58A-UD7 (2× Gbit LAN, 8× SATA 3Gb/s, 2× SATA 6Gb/s, 7× PCIe 2.0 x16)
Intel Xeon X5650 2.66GHz
Dual-port PCIe 10Gbit LAN
24GB Corsair DDR3-2000
2× OCZ Z-Drive R2 1TB (1GB/s, 135,000 IOPS) in RAID 1
1U or 2U server chassis with redundant PSU

The major limitations in comparison to mainstream server hardware would be the lack of a second CPU and the amount of supported memory, which tops out at 24GB while mainstream servers can go up to 192GB. Gigabyte does offer a server range with support for two CPUs and 192GB of memory, but the memory speed and the number and speed of the PCIe slots are not as good.

For a custom NAS or SAN setup:

GIGABYTE GA-X58A-UD7 (2× Gbit LAN, 8× SATA 3Gb/s, 2× SATA 6Gb/s, 7× PCIe 2.0 x16)
Intel Xeon X5650 2.66GHz
Dual-port PCIe 10Gbit LAN
24GB Corsair DDR3-2000
3U or 4U server chassis with redundant PSU

Using the onboard controllers, for example:
2× 3TB SATA 6Gb/s 7200rpm
8× 3TB 4Gb/s with 64MB cache

SATA 6Gbps-capable 3TB hard drives. (Fibre Channel controllers are only 4Gbps and their drives run at 15,000rpm, but these new drives running at 5,400rpm have fast sustained write speeds and seek times. I would like to see a comparison between the drives themselves rather than within a RAID setup.)

Run FreeNAS and you could use software-based RAID if you wanted, with support for rsync so you could use two identical boxes.

If you used the Gigabyte server motherboard, you could either use the onboard SATA controllers, which support six 6Gbps SATA ports, or, if you went with a single CPU, you could add several PCIe SATA/SAS controllers supporting 16 or more SATA ports with advanced RAID configurations, etc.

Has anyone implemented this in corporate environments, and do you think there is any future in the custom server market? What do you think about the use of standard ATX motherboards in small enterprise environments? And if you put a VM infrastructure on the OCZ Z-Drives (RAID 1), you could use rsync within VMware to back up the drives to another identical machine, essentially giving you a redundant PC setup: mirroring the RAID 1 across the network to another identical PC, also in RAID 1 (a rough sketch of the mirroring follows below).
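To make the mirroring idea concrete, here is a minimal sketch, assuming rsync over SSH between the two identically built boxes. The hostname and datastore path are placeholders, not anything specific, and you would only want to sync VM files that are shut down or snapshotted, since rsync over live disk images will not give a consistent copy.

Code:
# Rough sketch of the rsync mirroring idea: push the local datastore
# (e.g. the VM images sitting on the Z-Drive RAID 1) to the second,
# identically built box over the network.
import subprocess

SOURCE_DIR = "/mnt/datastore/"          # local RAID 1 volume (placeholder mount point)
MIRROR_HOST = "backup-box.example.lan"  # the identical second machine (placeholder)
DEST_DIR = "/mnt/datastore/"            # same layout on the mirror

def mirror_datastore() -> None:
    # One rsync pass: -a preserves attributes, --delete keeps the mirror
    # identical by removing files that no longer exist on the source.
    cmd = [
        "rsync", "-a", "--delete", "--partial",
        SOURCE_DIR,
        f"{MIRROR_HOST}:{DEST_DIR}",
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    mirror_datastore()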

In cost comparisons you can see that you get better benchmark performance from custom server hardware, whereas in the mainstream enterprise server market you pay for the quick support and the guarantee on the package. Plus, corporations like spending too much money on server hardware.
 
Not to mention that proper server hardware is far more reliable and compatible.

In my experience these kinds of builds are great for a home setup, but I'd never suggest anyone put them in a working enterprise environment. Even in a small business I don't think it's worth it.

Also, how would you work a redundant PSU in?
 
For the PSU there is a bit of an issue, because ATX PSUs and rackmount chassis have not really been put together. You can get single-PSU rackmount chassis from Antec, and you can get redundant PSUs that might fit into the chassis. But what we really need is a chassis with two of these connected to a controller that handles the failover, with one set of cables going to the motherboard.
 
No and no.

Don't waste your time.

Seriously, custom servers for anything but home use are not a good move.

You would need to compete with the mainstream vendors on all levels and offer a seriously good warranty, which would be a nightmare. Do you really want to put yourself in a position where you have to try and restore a failed array that the customer failed to back up?
 
I can see where you are coming from. However, we spend £5,000 on an HP 380 G6 with 48GB RAM, 8× 146GB 15k SAS disks and dual Xeon E5540s, not to mention the 2U or 3U case they come in, along with the three-year on-site warranty, the redundant power supplies and the rack-mount fittings.

You would be hard pushed to get all that in a custom build. It would also take several hours of my time to build, whereas I can pick up the phone, make a call and it's all here in a day or so.

Also, you do not broach the subject of blades. Most VM environments use blade servers, and those you cannot build custom.

Also, I believe Google used to, or still do, just use off-the-shelf desktop PCs, though they are probably moving to rack servers; and since you can buy a rack-mounted Google Search Appliance, it would not be a shock to me if they build their own servers at a massively reduced cost.

Also, there is no way a 5,400rpm drive will be able to beat a 15k rpm SAS drive; those drives are designed for the job, which is high-speed I/O (a rough back-of-envelope comparison is sketched below). And two SSDs is not enough for an ESXi server.
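To illustrate why random I/O favours the 15k drive regardless of sequential bandwidth, here is a small back-of-envelope sketch; the average seek times are typical published figures assumed for the example, not measurements from anything in this thread.

Code:
# Back-of-envelope random-I/O comparison between a 5,400rpm SATA drive
# and a 15,000rpm SAS drive. Seek times below are assumed typical values.
def rough_iops(rpm: int, avg_seek_ms: float) -> float:
    # Approximate random IOPS = 1 / (average seek + half-rotation latency)
    rotational_latency_ms = (60_000 / rpm) / 2   # half a revolution, in ms
    service_time_ms = avg_seek_ms + rotational_latency_ms
    return 1000 / service_time_ms

print(f"5,400rpm SATA : ~{rough_iops(5400, 12.0):.0f} IOPS")   # roughly 60 IOPS
print(f"15,000rpm SAS : ~{rough_iops(15000, 3.5):.0f} IOPS")   # roughly 180 IOPS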

Also, you would have to make sure every component is on the VMware HCL, otherwise there is no point.

Kimbie
 
Yeah, I was only putting it out there as a consideration. I think the mainstream server market is very reasonably priced, especially at the £2,000 level. But if you do a cost-by-cost comparison the custom server market does seem very competitive, if only recently. Because I cannot paste competitor URLs I could not post the comparisons I did, but I did a few and I would not have made the post if I did not think it was at least competitive. Of course there are benefits to going with a mainstream server, and those I have pointed out.

My point is that for £5,000 you can probably get more hardware for the money if you went custom, bar the warranty and the fact that it is all put together and ready.

Controller speeds from Wikipedia (usable throughput; a conversion sketch follows the list):
Fibre Channel 2GFC (2.125 GHz): 1,700 Mbit/s = 212.5 MB/s
Serial ATA 2 (SATA-300): 2,400 Mbit/s = 300 MB/s
Serial Attached SCSI (SAS): 2,400 Mbit/s = 300 MB/s
Ultra-320 SCSI (Ultra4 SCSI) (16 bits / 80 MHz DDR): 2,560 Mbit/s = 320 MB/s
Fibre Channel 4GFC (4.25 GHz): 3,400 Mbit/s = 425 MB/s
Serial ATA 3 (SATA-600): 4,800 Mbit/s = 600 MB/s
Serial Attached SCSI (SAS) 2: 4,800 Mbit/s = 600 MB/s
Ultra-640 SCSI (16 bits / 160 MHz DDR): 5,120 Mbit/s = 640 MB/s
Fibre Channel 8GFC (8.50 GHz): 6,800 Mbit/s = 850 MB/s
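Just to show how the Mbit/s and MB/s figures relate for the serial interfaces above: the usable rate is the raw line rate minus the 8b/10b encoding overhead, then divided by eight for bytes (the parallel SCSI rows are worked out differently, so they are left out of this quick sketch).

Code:
# Reproduce the usable-throughput figures for the serial interfaces
# from their raw line rates: strip 8b/10b overhead, then convert to bytes.
SERIAL_LINE_RATES_GBAUD = {
    "Fibre Channel 2GFC": 2.125,
    "SATA-300 / SAS":     3.0,
    "Fibre Channel 4GFC": 4.25,
    "SATA-600 / SAS 2":   6.0,
    "Fibre Channel 8GFC": 8.5,
}

for name, gbaud in SERIAL_LINE_RATES_GBAUD.items():
    usable_mbit = gbaud * 1000 * 8 / 10   # only 8 of every 10 line bits carry data
    usable_mb = usable_mbit / 8           # bits -> bytes
    print(f"{name:<22} {usable_mbit:>7,.0f} Mbit/s  {usable_mb:>6.1f} MB/s")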

[Disk benchmark image]

I think SATA 6Gbps disks would be very competitive in terms of bandwidth, and you would get more capacity for the price; for data that requires fast seek times you could use solid state.

I do not think the hardware compatibility would be an issue.
 
It's not just the warranty.

Ring up VMware to say you have an ESXi problem and wait for them to ask if your hardware is on the HCL...
 
I am not suggesting that there is a custom server market that will take over the enterprise server market. I was only pointing out that the custom servers available are technically competitive. I was not suggesting that you should start trying to sell custom servers to enterprise clients. Like I said, they like spending more money on the full package and the added extras.
 
Who are you trying to sell a VMware 'custom' PC to, then?
To a random client market who will spend thousands on a solution they don't need and don't require support for?


I don't get the point of your entire OP. It seems to be suggesting you can get better hardware than mainstream server companies offer for the same price... We know this, but you don't get the compatibility and support, which is much, if not most, of what you are actually paying for.
 
From experience, something like a Supermicro barebones will offer much better pricing on very high-end / bespoke solutions than a Dell or HP server, whilst still retaining server-grade and thus reliable hardware and, most importantly, compatibility!

'VMware' and 'home-made' are not two words that should be used together. You will struggle hugely trying to build a 'server' with desktop-grade hardware, as the compatibility just won't be there: VMware target the corporate market with their ESX product (which is where driver support is derived from), and that market buys off-the-shelf solutions from HP/Dell/IBM etc.

Lastly, your knowledge (or lack thereof) regarding hard disks further underlines why you're wasting your time, as you do not understand the market at all. Companies do not want to use cheap desktop-grade SATA disks in most server operations due to the relatively low I/O they offer compared to their SAS counterparts, and then of course there is the fact that non-enterprise-class disks are not designed to run 24x7 or under continuous high loads.
 
We did this for a short while.

Big mistake.

The main reason we did was because we wanted a single box to do high performance 3D graphics as well as server level RAID and whatnot.

In-house built boxes were a right pain: all sorts of compatibility issues with chips, drivers, BIOSes etc. Not to mention, try building the same box a few months later; can you get the same parts? Not a chance. At least with IA stuff they usually have long-term availability (i.e. 10 years+).

We do get some stuff custom built, but that's industrial-automation-grade kit which you really do pay for.

Now we buy from Dell/HP. It's much easier.
 
Not to mention that proper server hardware is far more reliable and compatible.

In my experience these kinds of builds are great for a home setup, but I'd never suggest anyone put them in a working enterprise environment. Even in a small business I don't think it's worth it.

Also, how would you work a redundant PSU in?
Ditto.
 
The cost benefits of build-your-own are negligible at the low end, which is the only place you might find it.

As for Google, they have extremely sophisticated automatic node management, load balancing and redundancy software. They know their needs inside out and can design their system around that.
 
From experience, something like a Supermicro barebones will offer much better pricing on very high-end / bespoke solutions than a Dell or HP server, whilst still retaining server-grade and thus reliable hardware and, most importantly, compatibility!

'VMware' and 'home-made' are not two words that should be used together. You will struggle hugely trying to build a 'server' with desktop-grade hardware, as the compatibility just won't be there: VMware target the corporate market with their ESX product (which is where driver support is derived from), and that market buys off-the-shelf solutions from HP/Dell/IBM etc.

Lastly, your knowledge (or lack thereof) regarding hard disks further underlines why you're wasting your time, as you do not understand the market at all. Companies do not want to use cheap desktop-grade SATA disks in most server operations due to the relatively low I/O they offer compared to their SAS counterparts, and then of course there is the fact that non-enterprise-class disks are not designed to run 24x7 or under continuous high loads.

Exactly. It comes down to the cost and supply-chain economics that drive the current server models - OK, apart from Dell, who took hundreds of millions off Intel to fubar AMD when they first released Opterons and better chips.

Google use the most simplistic servers - basically a tea tray with X, Y or Z components depending on the workload in the farm.
They also leave dead hardware in place until X% of a rack and/or aisle has failed, then forklift it out and in with new kit.
 