I have been looking into building custom servers with both non-server and server hardware, utilising the latest technologies. For example, take a VMware infrastructure server:
GIGABYTE GA-X58A-UD7 (2x Gbit LAN, 8x SATA 3Gb/s, 2x SATA 6Gb/s, 7x PCIe 2.0 x16)
Intel Xeon X5650 2.66GHz
Dual PCIe 10Gbit LAN
24GB Corsair DDR3-2000
2x OCZ Z-Drive R2 1TB (1GB/s, 135,000 IOPS) in RAID 1
1U or 2U server chassis with redundant PSU
The major limitation compared with mainstream server hardware would be the lack of a second CPU and the amount of supported memory: this board tops out at 24GB, while mainstream servers can go up to 192GB. Gigabyte does offer a server range supporting two CPUs and 192GB of memory, but the memory speed and the number and speed of PCIe slots are not as good.
For a custom NAS or SAN setup:
GIGABYTE GA-X58A-UD7 (2x Gbit LAN, 8x SATA 3Gb/s, 2x SATA 6Gb/s, 7x PCIe 2.0 x16)
Intel Xeon X5650 2.66GHz
Dual PCIe 10Gbit LAN
24GB Corsair DDR3-2000
3U or 4U server chassis with redundant PSU
Using the onboard controllers, for example:
2x 3TB SATA 6Gb/s 7,200rpm
8x 3TB 4Gb/s, 64MB cache
SATA 6Gb/s 3TB hard drives (Fibre Channel controllers are only 4Gb/s and their drives run at 15,000rpm, but these new drives running at 5,400rpm have fast sustained write speeds and seek times. I would like to see a comparison between the drives themselves, not within a RAID setup.)
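For comparing the drives themselves rather than a RAID set, a per-drive sequential-write test is a starting point. A minimal sketch with dd (the file path is an assumption for illustration; on a real test you would point FILE at a mount on the drive under measurement):

```shell
#!/bin/sh
# Hypothetical single-drive sequential-write test. FILE would normally
# live on the drive being measured; it defaults to /tmp here so the
# sketch is safe to run anywhere.
FILE="${FILE:-/tmp/drive-write-test.bin}"
# Write 64 MiB; conv=fsync flushes the data to disk so the OS page
# cache does not inflate the reported rate. dd reports throughput on
# its final status line (sent to stderr).
dd if=/dev/zero of="$FILE" bs=1M count=64 conv=fsync 2>&1 | tail -n 1
rm -f "$FILE"
```

Running the same test file on each drive in turn gives a rough like-for-like sustained-write comparison, independent of any controller or RAID layer.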
Run FreeNAS and you could use software-based RAID if you wanted, with support for rsync so you could keep two identical boxes in sync.
If you used the Gigabyte server motherboard, you could either use the onboard SATA controllers, which support six 6Gb/s SATA ports, or, if you took the single-CPU option, fit several PCIe SATA/SAS controllers supporting 16 or more drives with advanced RAID configurations.
Has anyone implemented this in corporate environments, and do you think there is any future in the custom server market? What do you think about the use of standard ATX motherboards in small enterprise environments? And if you put a VM infrastructure on the OCZ Z-Drives (RAID 1), you could use rsync within VMware to back up the drives to another identical machine: essentially a redundant PC setup, mirroring the RAID 1 across the network to another identical PC, also in RAID 1.
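To make that cross-network mirroring automatic rather than manual, the rsync run could be scheduled with cron. A sketch of a crontab entry (hostname, paths, and the 02:00 schedule are all assumptions):

```
# Crontab entry (sketch): push the primary datastore to the standby box
# every night at 02:00. Assumes key-based SSH access to "backup01".
0 2 * * * rsync -a --delete /mnt/datastore/ backup01:/mnt/datastore/
```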
In cost comparisons you can see that you get better benchmark performance from custom server hardware, whereas in the mainstream enterprise server market you pay for the quick support and the guarantee on the package. Plus, corporations like spending too much money on server hardware.