Homelab disk performance help and advice

Soldato
Joined
14 Apr 2014
Posts
2,586
Location
East Sussex
Hi all

I've consolidated all of my systems down into a single box for convenience. What I didn't appreciate is that, although I've got enough memory and CPU to do what I need at the moment, I definitely don't have enough disk performance, and I'm not sure which way to go next.

At the moment I'm using Hyper-V with most of my VMs' storage on a 4-drive RAID 10 array of 7200rpm Toshiba X300 HDDs. This is fine for most of my Linux VMs that provide services to the lab (DNS, proxy, Spacewalk/Puppet, LDAP, monitoring etc.), but it's absolutely crap for any decent-sized DB or other disk-intensive VM, so for those machines I'm using a 2-drive striped array of Samsung 850 Evos.

Though the performance of the SSDs is good, I've only got 512GB of space to play with, which is proving quite limiting, and performance could still be better tbh.

Both arrays are using the X399 chipset RAID from the motherboard, and I've got no free SATA ports at the moment. I don't need to worry about redundancy for anything new I get, RAID-wise; I use the big HDD array to back up the SSD stripe and my non-RAID NVMe drive atm.

I've got about £500 to play with, and the easiest upgrade option seems to be to add a single M.2 or U.2 drive (I have one spare port of each). If I go for a Samsung M.2 drive I can get more space and bandwidth, but if I go for the Intel 900p U.2 drive I can have many, many more IOPS, and probably a longer life. I'm not sure how big a benefit that would be to the VMs using that drive vs the higher total bandwidth from the Samsung drive - any thoughts on this?
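For what it's worth, here's the crude way I've been trying to reason about IOPS vs bandwidth - model a workload as a pile of random 4K operations plus some bulk sequential reads and see which drive finishes sooner. The figures below are rough spec-sheet numbers I've plugged in purely for illustration, not benchmarks of real drives:

```
# Crude IOPS vs bandwidth comparison: a workload is modelled as random 4K
# operations (IOPS-bound) plus bulk sequential reads (bandwidth-bound).
# All drive figures are placeholder spec-sheet numbers, not measurements.

def workload_seconds(rand_ops, seq_gb, drive):
    random_time = rand_ops / drive["rand_iops"]            # random part limited by IOPS
    sequential_time = (seq_gb * 1024) / drive["seq_mb_s"]  # bulk part limited by bandwidth
    return random_time + sequential_time

samsung_m2 = {"rand_iops": 330_000, "seq_mb_s": 3_200}  # 960 Pro-ish figures (assumed)
intel_u2 = {"rand_iops": 550_000, "seq_mb_s": 2_500}    # 900p-ish figures (assumed)

# DB-style: millions of small random reads, only a little sequential I/O
print("DB-ish  :", workload_seconds(5_000_000, 5, samsung_m2),
      "vs", workload_seconds(5_000_000, 5, intel_u2))

# Scan-style: mostly big sequential reads
print("Scan-ish:", workload_seconds(100_000, 100, samsung_m2),
      "vs", workload_seconds(100_000, 100, intel_u2))
```

On numbers like those, the Optane-style drive only pulls ahead when the workload is dominated by small random I/O, which is exactly what a busy DB tends to look like; for bulk scans the higher-bandwidth drive wins.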

The other option would be the ASUS Hyper M.2 x16 quad-M.2 PCIe adapter card with 4x 250GB 850 Evos (or similar). The card is about £50, and with the drives the total will be about £500. In theory I could RAID stripe the 4 drives with the X399 chipset to get a better-performing 1TB volume than either of the other two options. The major problem with this option is that I can find no mention of X399 support on any of the ASUS Hyper M.2 x16 cards available in the UK - only Intel VROC on X299 - and no one online seems to know whether the X299 and X399 edition cards are the same, so it would be a big risk to take for something that might not work, unless anyone here knows differently?

The last option would be to RAID stripe £500's worth of Samsung SSDs on a spare 8-port LSI SATA RAID controller. I'm not even sure how I would begin to calculate where this would sit performance-wise - I'm guessing less performance than NVMe unless I can get a very large number of drives.
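The best I can come up with for the SATA stripe is a naive back-of-envelope like this - multiply the per-drive spec-sheet figures by the number of drives and knock a chunk off for controller/software overhead. The per-drive numbers and the 0.8 efficiency factor are assumptions, not measurements:

```
# Naive RAID 0 scaling estimate. Per-drive figures are placeholder spec-sheet
# numbers and the efficiency factor is a guess at controller/software overhead;
# real scaling is never perfectly linear.

def stripe_estimate(drives, seq_mb_s, rand_iops, efficiency=0.8):
    return {"seq_mb_s": drives * seq_mb_s * efficiency,
            "rand_iops": drives * rand_iops * efficiency}

sata_ssd = {"seq_mb_s": 540, "rand_iops": 90_000}     # 850 Evo class drive (assumed)
nvme_ssd = {"seq_mb_s": 3_200, "rand_iops": 330_000}  # 960 Evo/Pro class drive (assumed)

print("6x SATA stripe:", stripe_estimate(6, **sata_ssd))
print("1x NVMe drive :", nvme_ssd)
```

Even on those optimistic numbers it takes roughly half a dozen SATA drives just to get into the same ballpark as a single NVMe drive, which matches my gut feeling.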

The machines I'm working with that have high disk usage requirements are Elasticsearch indexes and MySQL databases that are typically over 100GB, so they can't be placed entirely in memory (pretty sure it won't be cheaper to upgrade the machine to 128GB of RAM, and that might not quite be enough anyway...).

Any help appreciated, open to other options if anyone has any other ideas.

Cheers
 

Deleted member 138126

I recently bought a (very expensive) 1TB 960 Pro NVMe and couldn't be happier. It's in a PCIe to M.2 adapter card (my system doesn't have a slot), so it's not bootable, but I don't need it to be. I would much rather have the single 960 Pro than the 4x 250GB Evo franken-RAID. TBH I only went for the Pro out of perfectionism; the Evo would've been just as good (perception-wise) and is currently over £100 cheaper.

Edit: it looks like 900p pricing is at least TRIPLE the 960 Pro - 100% not worth it unless you've won the EuroMillions and have nothing better to spend the money on.
 
Associate
Joined
25 Jun 2004
Posts
1,276
Location
.sk.dkwop.
I approached this slightly differently, though within ESXi. I had a smallish CentOS VM to which I presented as much local (SSD) storage as possible. I then used http://www.quadstor.com/ to create a pool of disks with global dedupe and compression, while supporting VAAI, and presented this back to the ESXi host as iSCSI.

This meant that I could spin up all machines on SSD storage and enjoy very fast VM provisioning thanks to the inline dedupe and compression. Not sure how this would work with Hyper-V, but it worked great for my lab, where I was spinning up and destroying machines constantly.
 
Soldato
OP
Joined
14 Apr 2014
Posts
2,586
Location
East Sussex
Just to update this thread: in the end I went for the ASUS Hyper M.2 x16 card option, with 2x 250GB 960 Evos.

This has given me more performance than an Intel 900p (480GB) or Samsung 960 Pro (512GB) with similar usable capacity, but at about half the cost of the Intel solution (same-ish price as the single Samsung drive).

On the downside I obviously have a more complex solution that will consequently have many more interesting ways to fail, but I'm not after enterprise-level availability, so I think I've got the best bang for my buck. I also still have space to add two more M.2 drives to the Hyper card for even more performance and capacity further down the line as needed.


Some notes following a bit of testing with some borrowed kit (a rough random-read check of the kind I mean is sketched after the list):

  • Intel 900p drives are just awesome from a performance perspective, but they are not economical at all when compared to the perf of a Samsung 960 Pro/Evo (I mean, yeah, the Intel is better - but not THAT much better!), and the capacity choices are also poor. I tested a drive with a 100GB Elasticsearch index and saw performance increase by about a third over my current 950 Evo for some workloads.

  • Intel 750 U.2 NVMe drives do much better than the 900p on cost per GB, but the performance seems to be poorer than almost any Samsung NVMe. They offer better choices on the capacity front than the 900p, but there seems to be no point picking one of these as long as the Samsung drives are available. When testing this drive I got between 1/2 and 2/3 of the performance of a 950 Evo for nearly all tests.

  • SATA SSDs - it takes a 6-SSD stripe of 850 Evos to match the perf figures of one 950 Evo NVMe, so not a decent choice either. It does offer a good way to get extremely large, well-performing volumes for a reasonable cost, but my aim is perf rather than capacity at the moment, so this choice is uneconomical.
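If anyone wants to run a similar comparison, a quick-and-dirty random 4K read check is below (Python, Linux only). It runs single-threaded at queue depth 1, so it understates what NVMe can do flat out, but it's fine for comparing drives against each other. The path is a placeholder for a pre-created multi-GB file on the drive under test:

```
# Rough random 4K read test (Linux only). Reads 4K blocks from random offsets
# in a large pre-created file, using O_DIRECT to bypass the page cache, which
# is why the buffer comes from mmap (O_DIRECT needs an aligned buffer).

import mmap
import os
import random
import time

PATH = "/mnt/testdrive/testfile.bin"  # placeholder: multi-GB file on the drive under test
BLOCK = 4096
SECONDS = 10

fd = os.open(PATH, os.O_RDONLY | os.O_DIRECT)
size = os.fstat(fd).st_size
buf = mmap.mmap(-1, BLOCK)            # anonymous mapping = page-aligned buffer

ops = 0
deadline = time.time() + SECONDS
while time.time() < deadline:
    offset = random.randrange(size // BLOCK) * BLOCK  # random block-aligned offset
    os.preadv(fd, [buf], offset)                      # one 4K read per loop iteration
    ops += 1

os.close(fd)
print(f"{ops / SECONDS:.0f} random 4K reads/sec at queue depth 1")
```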
 