Samsung SSD vs HPE SAS drives in HP DL380p Gen8 Server?

Associate
Joined
19 May 2014
Posts
290
I have just purchased an HP DL380p Gen8 server to replace our current Dell T320 server and, having already purchased it, realised I'd bought the model with 8 x 2.5" bays instead of 3.5" bays...doh!

Anyway, rather than returning the server or trying to sell it (as the specs are pretty good for the price paid), I am going to suck it up and use the server as intended. However, looking at new 2.5" enterprise drives, they are incredibly expensive and seem to max out at 1.92TB.

So, I'm setting my sights on a lower overall capacity (our current server only uses about 400GB, but I wanted the new server to do additional tasks like backing up the Windows client PCs), and so I have been looking at the following drives...

Option 1: 8 x 240GB Samsung SM863a SSDs (these are used drives costing £30 each). In RAID6 this would give me 1.4TB, or in RAID10 I would get 960GB. Total cost £240.

Option 2: 4 x 480GB Samsung SM863a SSDs (these are used drives costing £90 each). In RAID6/10 this would give me 960GB but would allow for further expansion down the road. Total cost £360.

Option 3: 8 x HPE 600GB 10k SAS drives (these are brand new and costing £45 each). I wouldn't go for RAID6 on mechanical drives, so RAID10 would give me 2.4TB. Total cost £360. (Quick capacity sums below if anyone wants to check my maths.)
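
For the sake of sanity-checking my own maths, here's a rough back-of-envelope sketch in Python; the drive counts and sizes are just the ones from the options above, and it ignores formatted-capacity overheads:

# Rough usable-capacity check for the options above.
# RAID10 keeps half the raw capacity; RAID6 loses two drives to parity.
def raid10_usable_gb(drives, size_gb):
    return (drives // 2) * size_gb

def raid6_usable_gb(drives, size_gb):
    return (drives - 2) * size_gb

options = [
    ("Option 1: 8 x 240GB SM863a", 8, 240),
    ("Option 2: 4 x 480GB SM863a", 4, 480),
    ("Option 3: 8 x 600GB 10k SAS", 8, 600),
]

for name, count, size in options:
    print(f"{name}: RAID6 ~{raid6_usable_gb(count, size)}GB, RAID10 ~{raid10_usable_gb(count, size)}GB")

That comes out at roughly 1440/960GB, 960/960GB and 3600/2400GB respectively, which is where the figures above come from.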

Usage-wise, our server acts as our Domain Controller and File Server (currently as two VMs on Hyper-V, but I'm considering simplifying it to a single Windows Essentials setup as the server is used purely as a way of accessing/sharing files). We often work on files about 1GB in size directly from the server and sometimes these files are really slow to open, but I suspect that's as much to do with our gigabit network as anything else.

Anyway, I am just looking for opinions on which drive configuration would be best. I have also looked at Samsung EVOs, which I've read of people using in server hardware, but I'd rather stick to enterprise drives if possible as the server is the lifeblood of our business.
 
Man of Honour
Joined
30 Oct 2003
Posts
13,251
Location
Essex
I have just purchased an HP DL380p Gen8 server to replace our current Dell T320 server and, having already purchased it, realised I'd bought the model with 8 x 2.5" bays instead of 3.5" bays...doh!

Anyway, rather than returning the server or trying to sell it (as the specs are pretty good for the price paid), I am going to suck it up and use the server as intended. However, looking at new 2.5" enterprise drives, they are incredibly expensive and seem to max out at 1.92TB.

So, I'm setting my sights on a lower overall capacity (our current server only uses about 400GB, but I wanted the new server to do additional tasks like backing up the Windows client PCs), and so I have been looking at the following drives...

Option 1: 8 x 240GB Samsung SM863a SSDs (these are used drives costing £30 each). In RAID6 this would give me 1.4TB, or in RAID10 I would get 960GB. Total cost £240.

Option 2: 4 x 480GB Samsung SM863a SSDs (these are used drives costing £90 each). In RAID6/10 this would give me 960GB but would allow for further expansion down the road. Total cost £360.

Option 3: 8 x HPE 600GB 10k SAS drives (these are brand new and costing £45 each). I wouldn't go for RAID6 on mechanical drives, so RAID10 would give me 2.4TB. Total cost £360.

Usage-wise, our server acts as our Domain Controller and File Server (currently as two VMs on Hyper-V, but I'm considering simplifying it to a single Windows Essentials setup as the server is used purely as a way of accessing/sharing files). We often work on files about 1GB in size directly from the server and sometimes these files are really slow to open, but I suspect that's as much to do with our gigabit network as anything else.

Anyway, I am just looking for opinions on which drive configuration would be best. I have also looked at Samsung EVOs, which I've read of people using in server hardware, but I'd rather stick to enterprise drives if possible as the server is the lifeblood of our business.

What SAS controller card? P410? P4xx? That would help.
 
Associate
OP
Joined
19 May 2014
Posts
290
What SAS controller card? P410? P4xx? That would help.
It’s a P420i controller card.

I had all but decided on SSDs, but having just looked at a RAID performance calculator, I'm now questioning myself again and reconsidering the HPE 600GB SAS drives.

According to a RAID performance calculator I found, 8 x 600GB 10k SAS drives in RAID10 should give me about 925MB/s, and although the SSDs would give 2150.82MB/s, I'm only running a gigabit network so I max out at 125MB/s anyway... unless I upgrade to 10GbE as well, but I think that's a bit overkill for our needs.
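
Just to sanity-check where the bottleneck actually sits, here's a quick Python sketch; the per-drive sequential speeds are rough assumptions on my part (about 115MB/s for a 10k SAS drive, about 500MB/s for a SATA SSD), not measured figures:

# Rough sequential-read estimate: RAID10 reads can be serviced by every member drive,
# but whatever the array manages, a single client is capped by the network link.
GIGABIT_MB_S = 125  # 1Gbit/s / 8 bits per byte = ~125MB/s, before protocol overheads

def raid10_read_mb_s(drive_count, per_drive_mb_s):
    return drive_count * per_drive_mb_s

arrays = [
    ("8 x 10k SAS in RAID10", raid10_read_mb_s(8, 115)),  # assumed ~115MB/s per drive
    ("8 x SM863a in RAID10", raid10_read_mb_s(8, 500)),   # assumed ~500MB/s per drive
]

for name, array_mb_s in arrays:
    client_mb_s = min(array_mb_s, GIGABIT_MB_S)
    print(f"{name}: array ~{array_mb_s}MB/s, one gigabit client sees ~{client_mb_s}MB/s")

Either way a single client tops out at the same ~125MB/s over gigabit, which is exactly what's making me second-guess the SSDs.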

Argggggggggh
 
Man of Honour
Joined
30 Oct 2003
Posts
13,251
Location
Essex
It’s a P420i controller card.

I had all but decided on SSDs, but having just looked at a RAID performance calculator, I'm now questioning myself again and reconsidering the HPE 600GB SAS drives.

According to a RAID performance calculator I found, 8 x 600GB 10k SAS drives in RAID10 should give me about 925MB/s, and although the SSDs would give 2150.82MB/s, I'm only running a gigabit network so I max out at 125MB/s anyway... unless I upgrade to 10GbE as well, but I think that's a bit overkill for our needs.

Argggggggggh

If you are not IOPS bound, which it doesn't look like you are, go with the 10k SAS drives; they will also be HP supported, which is always handy. The server supports teaming anyway, so you could potentially team all the adapters to increase bandwidth when multiple connections are open to the server. Alternatively, your switch might have some SFPs, so it might be worth making use of them. To be honest, if it were me I'd get cover on the disks; I tend to do that as I have hundreds of them: I'm running 144x 10k 600GB SAS in an EVA as well as some flash and a ton of NAS storage, and I think covering them all on a 4-hour turnaround is only around 2k PA, but they will only offer it on HP disks.
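
Worth noting on the teaming point: with standard teaming a single transfer typically still rides one link, so a team raises the aggregate across several clients rather than speeding up one copy. A toy sketch (Python, purely illustrative numbers):

# Toy model of NIC teaming: one flow is capped at a single link's speed,
# but the team can serve several flows in parallel until the aggregate runs out.
LINK_MB_S = 125  # one gigabit link

def per_client_mb_s(team_links, clients):
    # a single flow is not striped across links, so each client is capped at one link;
    # beyond that, clients share the team's aggregate bandwidth
    aggregate = team_links * LINK_MB_S
    return min(LINK_MB_S, aggregate / max(clients, 1))

for clients in (1, 2, 4, 8):
    print(f"4-link team, {clients} client(s): ~{per_client_mb_s(4, clients):.0f}MB/s each")

So one user still sees ~125MB/s, but four users can each get full speed instead of sharing a single link.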
 
Associate
OP
Joined
19 May 2014
Posts
290
If you are not IOPS bound, which it doesn't look like you are, go with the 10k SAS drives; they will also be HP supported, which is always handy. The server supports teaming anyway, so you could potentially team all the adapters to increase bandwidth when multiple connections are open to the server. Alternatively, your switch might have some SFPs, so it might be worth making use of them. To be honest, if it were me I'd get cover on the disks; I tend to do that as I have hundreds of them: I'm running 144x 10k 600GB SAS in an EVA as well as some flash and a ton of NAS storage, and I think covering them all on a 4-hour turnaround is only around 2k PA, but they will only offer it on HP disks.

Thanks Vince. While I love the high performance of SSDs (I've got SSDs in every client PC we have), we are certainly not IOPS bound. I would really like files to open faster, which I think they will regardless, but that's certainly not my main priority (I'll probably upgrade the storage in the next 12 months anyway).

My thinking was also to look at a disk shelf of some sort in the future and populate that with SAS/enterprise SATA drives, but that's out of my depth at the moment.

I’ll have a look at our switch and see what that offers :)
 
Man of Honour
Joined
30 Oct 2003
Posts
13,251
Location
Essex
Thanks Vince. While I love the high performance of SSDs (I've got SSDs in every client PC we have), we are certainly not IOPS bound. I would really like files to open faster, which I think they will regardless, but that's certainly not my main priority (I'll probably upgrade the storage in the next 12 months anyway).

My thinking was also to look at a disk shelf of some sort in the future and populate that with SAS/enterprise SATA drives, but that's out of my depth at the moment.

I’ll have a look at our switch and see what that offers :)

SSDs are great, but the only properly supported SSD drives are silly money, and it isn't entirely uncommon for drives not to work entirely as intended, especially if you're using something like an EVA where drive firmware etc. comes into play. Honestly, I think a lot depends on how "critical" the data is.

I've just decommissioned 5x DL380 G7s, a G6 and 2 G8s, so now my setup is a bit simpler and looks a bit like:

3x HP DL385 G10 - EPYC Rome, each with 256GB memory.
2x Brocade FC SAN switches
P6500 EVA with dual controllers (one on each switch) so we have multiple routes from servers to disks. Currently with 8x 12-drive trays. Connected to the hosts over the SAN fabric. (Probably the next thing in line for an upgrade.)
2x StoreOnce 4500 24TB (dedicated backup devices with dedupe etc.)

Given the current network config you have over there, you won't see massive differences in speed for simply opening files etc. If you can, you can improve performance over the standard metrics by setting up the RAID cache. If you don't have one I may even have one you can have, but the P420 does allow for a capacitor-backed FBWC module which can supplement the speed of the drives.

In fact I do have one lying around... if your server doesn't already have one, then you could do worse than adding an FBWC card.

They look like this one I just found in my drawer:

 
Associate
OP
Joined
19 May 2014
Posts
290
Wow, your setup sounds amazing!

When I started planning this upgrade, the number one thing on my list was a heap of storage (I initially planned on just building a FreeNAS box with 24 or so drives in it), but common sense kicked in and I thought a) I should do this properly as it's client artwork at stake, and b) I REALLY don't need as much storage as I intended to have. So I'm trying to stay sensible but give myself enough room not to need another storage upgrade for at least a year (December is typically our busy period, when we triple our usual sales, so I try to do any upgrades at the end of the year). The only thing I'm not sure of, because we don't do it at the moment, is how much space Windows backups will require, but if all else fails I can set up our current Dell T320 as a backup server with some NAS drives installed.

Anyway, I've just checked and my server has a 1GB P420i cache module and a P420i capacitor. A quick look on fleabay also brings up the 2GB cache modules for about £35, so that might be a worthwhile upgrade down the line.
 
Don
Joined
19 May 2012
Posts
17,154
Location
Spalding, Lincolnshire
Anyway, I've just checked and my server has a 1GB P420i cache module and a P420i capacitor. A quick look on fleabay also brings up the 2GB cache modules for about £35, so that might be a worthwhile upgrade down the line.

The 2GB modules have a higher failure rate than the 1GB modules for some reason. TBH as long as you have got a capacitor and the 1GB module I wouldn't worry - I doubt there's much real world performance difference for your use case.

With regard to SSDs in a G8 - I've tried standard consumer SATA SSDs in some of our "test" G8s with no issues - they work fine in RAID1, and I think the Crucial ones I tried even listed their health in Smart Array Configuration, but for anything business-critical I'd be going with genuine HP SSDs, even if you buy second-hand ones from eBay or similar.
 
Associate
OP
Joined
19 May 2014
Posts
290
The 2GB modules have a higher failure rate than the 1GB modules for some reason. TBH as long as you have got a capacitor and the 1GB module I wouldn't worry - I doubt there's much real world performance difference for your use case.

With regard to SSDs in a G8 - I've tried standard consumer SATA SSDs in some of our "test" G8s with no issues - they work fine in RAID1, and I think the Crucial ones I tried even listed their health in Smart Array Configuration, but for anything business-critical I'd be going with genuine HP SSDs, even if you buy second-hand ones from eBay or similar.
Thanks, very handy to know about the 2GB modules...glad I mentioned it now haha.

OK, so I will give consumer SSDs a miss as I want the most reliability possible. My options are as follows, although I'm leaning more towards the HPE 600GB 10k SAS drives:

Option 1: 8 x 240GB Samsung SM863a SSDs (these are used drives costing £30 each). In RAID6 this would give me 1.4TB, or in RAID10 I would get 960GB. Total cost £240.
Option 2: 8 x HPE 600GB 10k SAS drives (these are brand new and costing £45 each). I wouldn't go for RAID6 on mechanical drives, so RAID10 would give me 2.4TB. Total cost £360.

I also noticed that the HPE drives come with the caddies, whereas I'd need to buy the caddies in addition to the cost of the SSDs if I went that route.

Also, out of interest, if I populate the 8 x 2.5" bays with the SAS drives, is there any way I can install Windows Server on another drive (preferably an SSD) so that it's completely separate from the storage array?
 
Man of Honour
Joined
30 Oct 2003
Posts
13,251
Location
Essex
Thanks, very handy to know about the 2GB modules...glad I mentioned it now haha.

OK, so I will give consumer SSDs a miss as I want the most reliability possible. My options are as follows, although I'm leaning more towards the HPE 600GB 10k SAS drives:

Option 1: 8 x 240GB Samsung SM863a SSDs (these are used drives costing £30 each). In RAID6 this would give me 1.4TB, or in RAID10 I would get 960GB. Total cost £240.
Option 2: 8 x HPE 600GB 10k SAS drives (these are brand new and costing £45 each). I wouldn't go for RAID6 on mechanical drives, so RAID10 would give me 2.4TB. Total cost £360.

I also noticed that the HPE drives come with the caddies, whereas I'd need to buy the caddies in addition to the cost of the SSDs if I went that route.

Also, out of interest, if I populate the 8 x 2.5" bays with the SAS drives, is there any way I can install Windows Server on another drive (preferably an SSD) so that it's completely separate from the storage array?

Yes, the G8 has some SATA connections inside, if I remember rightly.
 
Don
Joined
19 May 2012
Posts
17,154
Location
Spalding, Lincolnshire
Also, out of interest, if I populate the 8 x 2.5" bays with the SAS drives, is there any way I can install Windows Server on another drive (preferably an SSD) so that it's completely separate from the storage array?

I wouldn't bother separating it, as otherwise you would end up with, say, 2x 146GB in RAID1 for the OS and then the rest for data. Keep it as one big RAID10 array; then you won't ever have to worry about running out of space for the OS (e.g. during updates), and you'll also only have to keep one cold spare drive as opposed to two.

Yes, the G8 has some SATA connections inside, if I remember rightly.

It has at least one for the optical drive (if installed), but you'd need somewhere to mount the drive (e.g. a slimline-optical-to-HDD adapter or a PCIe bracket adapter).

Edit:
The SATA controller is also fairly poor, IIRC.
 
Man of Honour
Joined
30 Oct 2003
Posts
13,251
Location
Essex
I wouldn't bother separating it, as otherwise you would end up with, say, 2x 146GB in RAID1 for the OS and then the rest for data. Keep it as one big RAID10 array; then you won't ever have to worry about running out of space for the OS (e.g. during updates), and you'll also only have to keep one cold spare drive as opposed to two.



It has at least one for the optical drive (if installed), but you'd need somewhere to mount the drive (e.g. a slimline-optical-to-HDD adapter or a PCIe bracket adapter).

With a zip tie and a dream :p
 
Associate
Joined
31 Aug 2017
Posts
2,209
I have the same box and the same issues; the server and card only seem to be happy with compatible stuff, which is crap.
I have had things working, sort of, with another RAID card totally different from the HP one plus some mixtures of drives, but I am probably just going to run a JBOD box attached to the server instead of using its built-in space.

Not done owt with this for a few months... couldn't be bothered lol.
 
Man of Honour
Joined
30 Oct 2003
Posts
13,251
Location
Essex
I have the same box and the same issues; the server and card only seem to be happy with compatible stuff, which is crap.
I have had things working, sort of, with another RAID card totally different from the HP one plus some mixtures of drives, but I am probably just going to run a JBOD box attached to the server instead of using its built-in space.

Not done owt with this for a few months... couldn't be bothered lol.

I've probably got 50 or so 250GB 2.5" drives hanging around somewhere, all HP, pulled when upgrading to 600GB drives.
 
Associate
Joined
31 Aug 2017
Posts
2,209
I did have a source for a load of the smaller HP drives, but when I thought about it I didn't think it was worthwhile using dozens of wee ones, all using shedloads of power and needing a home to sit in.
I was using the 380 with Hyper-V, and before that ESXi, but both threw lots of errors on different drive groups with various RAID configs. However, it worked fine with single SSDs as the main system drive holding the VMs, which was fine as I was going to just add lots of boxes to it for storage, as I mentioned.

Just need to get on with it lol.
 
Man of Honour
Joined
30 Oct 2003
Posts
13,251
Location
Essex
I did have a source for a load of the smaller HP drives, but when I thought about it I didn't think it was worthwhile using dozens of wee ones, all using shedloads of power and needing a home to sit in.
I was using the 380 with Hyper-V, and before that ESXi, but both threw lots of errors on different drive groups with various RAID configs. However, it worked fine with single SSDs as the main system drive holding the VMs, which was fine as I was going to just add lots of boxes to it for storage, as I mentioned.

Just need to get on with it lol.

ESXi, single drives for VMs... Jesus, you are brave. I probably lose 5 or more drives a year on my datastore volumes, more if I'm moving masses of data around or if I need to do a big array rebuild. Leaving stuff on just one would be testing my nerves, I think.

My G10s don't have a single drive in them; they boot ESXi from an internal microSD and that really is the lot.
 
Associate
OP
Joined
19 May 2014
Posts
290
I have the same box and the same issues; the server and card only seem to be happy with compatible stuff, which is crap.
I have had things working, sort of, with another RAID card totally different from the HP one plus some mixtures of drives, but I am probably just going to run a JBOD box attached to the server instead of using its built-in space.

Not done owt with this for a few months... couldn't be bothered lol.
Hmmm, that's worrying to hear. Out of interest, what RAID card did you try?

I guess another option would be to use the internal storage of the DL380 for the OS and then a disk shelf for the rest of the storage, but at the moment I wouldn't know where to start with something like that.
 
Man of Honour
Joined
30 Oct 2003
Posts
13,251
Location
Essex
Hmmm, that's worrying to hear. Out of interest, what RAID card did you try?

I guess another option would be to use the internal storage of the DL380 for the OS and then a disk shelf for the rest of the storage, but at the moment I wouldn't know where to start with something like that.

FWIW, I have had an array internal to a DL380 on a P410 for years, literally 6 years or more. Never had an issue or a drive failure on that array.
 
Don
Joined
19 May 2012
Posts
17,154
Location
Spalding, Lincolnshire
Hmmm, that's worrying to hear. Out of interest, what RAID card did you try?

There is nothing wrong with the P420i - I've currently got around 10 Gen8 DL360/380s running at work - never had an issue.
My database servers are the 25-bay DL380p variant and have 3 separate arrays (RAID1 for the OS, RAID10 for MySQL, and a RAID1 for backup files) - never had an issue.

I've also run G5 and G6, and still run G7, with their associated Smart Array controllers - no issues.



I guess another option would be to use the internal storage of the DL380 for the OS and then a disk shelf for the rest of the storage, but at the moment I wouldn't know where to start with something like that.

An HP D2600 (3.5") or D2700 (2.5") disk shelf, depending on which drives you want, combined with a P421 external SAS Smart Array card (and the required cable).
 
Associate
Joined
31 Aug 2017
Posts
2,209
ESXi, single drives for VMs... Jesus, you are brave.

Not really; this is a home server which is just for me to play with.
It runs all my network via pfSense and... once I have it working right... storage in the form of a NAS (or even a Windows Server build, but I think that's OTT).
If it goes down it won't be hard to fix, or to quickly replace with something to get back up and working, and I will of course back up the VMs and all, so it's not going to be a big deal.

Of course I would never have a production server running with a single point of failure.
 