8 x 64GB SSDs or 4 x 128GB?

Yo there!

I'm putting together an SSD SAN with RAID 5.

What's going to work out best: 8 x 64GB SSDs, or 4 x 128GB SSDs with the option to upgrade to 8 drives in the future?

Should I just get the 64GB ones now and enjoy?
 
Yikes, what are you putting onto it that's sending you down the SSD avenue?

What sort of controller are you intending to use? You're going to have to pay a heck of a lot of money for something that can make use of the available bandwidth.
 
SSDs are only going to get cheaper and faster. I would get what you need now (be that 64 or 128GB drives) and either add to it later or replace the whole lot!
 
It's for a home SAN for about 20 VMs.

I've got 4 Raptors in RAID 5 just now, in an HP tower.

I can't fit 8 of those drives in, as they're too thick, whereas 8 SSDs will fit nicely.

Got a 512MB HP Smart Array controller, so it should be fast enough to handle it.

If I get 8 x 64GB now, that's it long term - no upgrades. However, if I get 4 x 128GB SSDs, a whole 128GB disk's worth of capacity is "wasted" on parity in RAID 5, whereas with 64GB drives only 64GB is set aside for parity, so I'm losing less space.

Don't get me wrong, the setup is insanely fast just now, but any more than 3 VMs going and the drives are a proper noise fest, and it winds me up.

Having lots of SSDs should help the IOPS, won't it? Surely 8 x 64GB SSDs will have more IOPS than 4 x 128GB?
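
Rough numbers, the way I'm working it out (the 5,000 IOPS per drive below is just a made-up placeholder to show the scaling, not a benchmark figure):

    # back-of-envelope RAID 5 comparison; per-drive IOPS is assumed
    def raid5(drives, size_gb, iops_per_drive=5000):
        usable = (drives - 1) * size_gb        # one drive's capacity goes to parity
        read_iops = drives * iops_per_drive    # reads scale with drive count
        return usable, read_iops

    print(raid5(8, 64))    # (448, 40000) - GB usable, aggregate read IOPS
    print(raid5(4, 128))   # (384, 20000)

So 8 x 64GB wins on both usable space and read IOPS. Writes are another story, mind, since RAID 5 turns each random write into a read-modify-write cycle.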


Cheers for any advice... one thing though: I'm using MLC SSDs, as the SLCs are ludicrous money!
 
What kind of home hobby requires 8 SSDs??? I mean, you've got to think about what benefit you're getting for the money you're spending (opportunity cost and all that...).
 
Yes, IOPS will be much better with 8 x 64 than 4 x 128, but either will be better than your current config, with headroom to grow.

You should be able to do dynamic array expansion when you add the extra drives (if you have the battery-backed cache option, IIRC), but I'd only rely on that if I was using HP SSDs, and even then with a backup ;)
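
Something like this through the Array Configuration Utility CLI should do it once the new drives are in (hpacucli syntax from memory, so treat it as a rough guide and check the Smart Array docs - the slot number and drive IDs here are made up):

    hpacucli ctrl slot=0 array A add drives=1I:1:5,1I:1:6,1I:1:7,1I:1:8
    hpacucli ctrl slot=0 ld 1 modify size=max

Then grow the filesystem on top once the expansion finishes - and it can take a long time on a live array.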
 
It's likely the prices of SSDs will tumble within the year, easily to a quarter of what you'd pay at the moment. Maybe it would be worth investing in some hard drive silencers instead? They're supposed to be fairly good but quite hard to find, and you'd need a case with 8 5.25" bays. Or you could get a bigger case, do some DIY silencing, and get some SSDs later. Either way you're likely to save a fair whack.
 
It's a SAN for VMware virtual machines.

I'm gonna have about 20 VMs running, including Server 2008 and SharePoint etc.

Well yes, but why do you need to run 20 VMs? What kind of home application would 20 VMs serve??!

I'm not being sarcastic or negative, just really curious!

I just think it's too much money spent on something that could be unnecessary. Obviously it's your money and you know best how to spend it - I'm not trying to tell you what you're doing is wrong or anything, just trying to get my head round it.
 
If it's a SAS controller, why don't you just use SAS drives? A bunch of 15K SAS drives will annihilate the performance of the Raptors you're using atm. I may be off here, but for the load you're going to be putting on them you'll need commercial SLC-based SSDs, as the MLC ones they sell to the generic PC user won't be up to the task. That means you're looking at roughly £500 per 64GB drive, so £4k for the setup you want. I'd go for SAS personally.
 
MLC SSDs have a limited lifespan: the more you write to them, the shorter their life. That's fine for a home user, who won't be hammering them, but in business settings with high usage, SLC SSDs are preferable, though they cost a lot more. For instance, putting MLC SSDs in a database server would be like signing their death warrant. The OP seems to be going for a RAID array to tackle a high-usage scenario, and I don't think MLC SSDs would be a good choice for that.
 
You're going to be limited throughput-wise by your network, and you might actually find yourself IOPS-limited by your RAID controller, especially if you plan on running RAID 5.
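
For scale (assuming a gigabit link here): 1,000 Mbit/s divided by 8 is about 125 MB/s theoretical, and less again after iSCSI/TCP overhead - that's under the sequential throughput of one decent SSD, let alone eight.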

I'd advise you look at building a ZFS setup, with a pool made up of mirrored 15K SAS drives for the bulk of your storage, a couple of MLC SSDs as L2ARC caching drives, and an SLC SSD or two for the intent log (basically write caching). Combined with the deduplication facility, most of your data should be cached on the SSDs or in RAM most of the time (20 VMs are going to have a lot of duplicated files), but it's kept safe on the mechanical drives, so you aren't wasting money or write cycles keeping parity or mirrored data on the SSDs themselves.
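
As a sketch, something along these lines (device names are made up, and dedup needs a recent zpool version):

    # mirrored 15K SAS pairs for capacity, MLC SSDs as L2ARC read cache,
    # mirrored SLC SSDs as the intent log
    zpool create tank \
        mirror c0t0d0 c0t1d0 \
        mirror c0t2d0 c0t3d0 \
        cache c0t4d0 c0t5d0 \
        log mirror c0t6d0 c0t7d0
    zfs set dedup=on tank

The SSDs then soak up most of the random I/O, while the redundancy lives on the cheap mechanical drives.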
 
Well, it's like this...

The server I'm installing this in can only take 8 of these SSDs.

So that's the limit I'm working with.

15K SAS drives are going to be too noisy, and surely they'll get obliterated by an SSD on IOPS?

In addition, I understood the new MLC SSDs could handle having their entire capacity written to every day for at least 7 years or something?

I probably won't even have the SSDs in 7 years, so surely I'll be OK?!

The 20 VMs are for my home lab: Server 2008 clusters, a couple of Exchange boxes, some Linux servers, etc. etc.
 
Yes, 15K SAS drives have fewer IOPS, but with a good caching setup the SSDs do most of the work, so it doesn't matter too much, and it could save you a lot of money. As for noise, can you not put the server in the loft or somewhere else where it can't be heard?

Lifespan-wise, SSDs with low write amplification, like the Intel drives, are rated at around 20GB of writes per day for 5 years. A lot depends on how much free space is on the drives, however.
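
Rough maths on that (all the figures here are illustrative assumptions, not any drive's actual spec):

    # crude endurance estimate for a 64GB MLC drive
    capacity_gb = 64
    pe_cycles = 5000        # assumed P/E cycles for MLC NAND
    write_amp = 1.5         # assumed write amplification
    daily_writes_gb = 20

    total_writes_gb = capacity_gb * pe_cycles / write_amp
    print(total_writes_gb / daily_writes_gb / 365)   # ~29 years

So the 20GB/day-for-5-years ratings look pretty conservative - write amplification and free space are what really move the number.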
 