Dell SSDs, RAID 10 and SQL Server

Associate · Joined 11 Mar 2008 · Posts 460
I'm looking at server options for a client just now and could use some advice.

This will replace an existing live server that is due for retirement. The new server will be running SQL Server with approx. 6 DBs, the largest currently around 50GB (the others range from 1GB to 20GB at present). The databases get hit heavily during peak time (which runs roughly 9am to 8pm).


We are intending to get 4 x "Dell 200GB SSD SATA Value MLC" drives in a RAID 10 configuration (not bothering with the SAS versions of the drives, which come in at almost three times the price!). This will hold the OS and the SQL databases.
Does anyone have any experience with Dell's SSD drives? If so, any information on their reliability and performance?

As for the RAID 10 - does anyone know if TRIM still works through the RAID controller? (We'll be using Windows Server 2012.)
I know I could ask my Dell contact this, but it will take a while to actually get a proper answer (and I'll no doubt have to explain it 6 times as usual!)

We'll also have a couple of 2TB HDDs in to handle backup files and other misc bits and pieces.
 
Associate (OP) · Joined 11 Mar 2008 · Posts 460
I don't have IOPS figures, though a check on the SQL stats just now indicates approx 36 reads/writes per second (currently a quiet time) on the main DB. I'll need to tweak my query to separate out the number of reads and writes and also get the amount read and written. Not sure how these would translate to actual HDD IOPS.
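Roughly the sort of thing I'm working towards (a rough sketch, not our actual query - it assumes sys.dm_io_virtual_file_stats and an arbitrary 60 second sample window): take the cumulative counters twice and diff them to get per-second reads, writes and bytes per database file.

-- Sample the cumulative file stats, wait, then diff to get per-second rates
SELECT DB_NAME(database_id) AS db, file_id,
       num_of_reads, num_of_writes,
       num_of_bytes_read, num_of_bytes_written
INTO #io_before
FROM sys.dm_io_virtual_file_stats(NULL, NULL);

WAITFOR DELAY '00:01:00';   -- 60 second sample window (example value)

SELECT DB_NAME(s.database_id) AS db, s.file_id,
       (s.num_of_reads  - b.num_of_reads)  / 60.0 AS reads_per_sec,
       (s.num_of_writes - b.num_of_writes) / 60.0 AS writes_per_sec,
       (s.num_of_bytes_read    - b.num_of_bytes_read)    / 60.0 / 1024 AS read_kb_per_sec,
       (s.num_of_bytes_written - b.num_of_bytes_written) / 60.0 / 1024 AS write_kb_per_sec
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS s
JOIN #io_before AS b ON b.db = DB_NAME(s.database_id) AND b.file_id = s.file_id;

DROP TABLE #io_before;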

Will try and get a measure during a busy period as well during the week.
 
Soldato · Joined 18 Oct 2002 · Posts 4,034 · Location: Somewhere on the Rainbow
One of my guys recently put in a Dell server with a DAS array - I think it was an MD3200 with SSDs. It's quick: it brought one specific report down to 15 mins, from the few hours it took on the old server. It's been in and running, getting hammered pretty much 24/7 for 6 months, and so far no failures; it's also the most power hungry of the servers we have. The IOPS were good. If you want faster, we're now looking at FusionIO cards internal to the servers, which perform even better than the SSDs in the storage array. Worth a look too.
 
Soldato · Joined 26 Feb 2009 · Posts 14,814 · Location: Exeter
Get some proper stats before doing this

It's HIGHLY unlikely you're going to need SSDs to satisfy the IO requirements. SSDs and server specs in general aren't like desktop PCs - if an application doesn't have a certain IO profile, SSDs won't just make it quicker. Likewise, memory and CPU are all about careful scaling rather than just chucking resources at things. Have you looked at current memory usage, page life expectancy and so on to see how much memory you should get?
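Something along these lines will show the counters I mean (a quick sketch against sys.dm_os_performance_counters; object names vary slightly between versions and named instances):

-- Page life expectancy and related buffer manager counters
SELECT [object_name], counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Page life expectancy', 'Lazy writes/sec')
  AND [object_name] LIKE '%Buffer Manager%';

If PLE stays healthy right through your peak hours, more memory isn't where the money should go.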

So let's assume this app does actually need the thousands of IOPS that a RAID 10 of SSDs can deliver. MLC "value" flash on a SATA interface? Oh hell no. SLC enterprise drives on SAS... maybe, but most SQL applications with such high IO demands also have greater capacity and resilience demands than a single server with local storage can satisfy. I'm sure you've also heard about how SSDs fail after a certain number of writes. What really sucks is that in a RAID array you're subjecting them all to the SAME writes, so when one fails the others are almost certainly going to fail before you can even get the first one swapped out.

There are odd cases where such a config will work, but 99% of the time it's not a good idea. Performance issues with SQL should really be dealt with properly - look at the memory configuration, check that indexes are sensible and actually being used, etc. Most slow reports can often be fixed with configuration.
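As a starting point on the index side, something like this (a sketch using sys.dm_db_index_usage_stats - run it in the database in question, and remember the counters reset on a service restart) shows which indexes are actually being read versus just maintained:

-- Index read vs write activity in the current database since the last restart
SELECT OBJECT_NAME(s.object_id) AS table_name, i.name AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
ORDER BY s.user_updates DESC;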

So, a single server SQL server setup - here's how I'd do it. Arrays of SAS disks for OS/tempdb, databases and transaction logs - 3 arrays appropriately sized in capacity and IO for the application. Another array of SATA disks for plenty of local backups and test/dev environments.

What does worry me is that you say this is for a "client", which implies you're selling your "professional" services but don't know all this. With respect, it's maybe best to consider what services you want to be offering people; you don't want to be the one responsible for data loss at someone's business because you didn't do the job properly.
 
Associate (OP) · Joined 11 Mar 2008 · Posts 460
What does worry me is that you say this is for a "client", which implies you're selling your "professional" services but don't know all this. With respect, it's maybe best to consider what services you want to be offering people; you don't want to be the one responsible for data loss at someone's business because you didn't do the job properly.

We do the software development, and we have to put forward what we'd like to see in the new server.
The existing server has 8GB of memory and the HDDs are under heavy load (as is the CPU). While vastly increased memory should help SQL keep more of the DB in memory instead of hitting the disks, SSDs seemed like a route worth looking at for improving IO. However, I know I don't know a great deal about SSDs for servers, hence why I'm asking about it.

At £6k for a 4 x 200GB setup with the SAS SSD option, I'm pretty sure the client will not want to spend quite that much and would probably just go with 15k SAS HDDs instead. However, if the 'value' options were up to the job for both performance and reliability, then that would be an option we could suggest.

Good point on the end-of-life issue in a RAID array - it does make the idea of mirroring somewhat redundant in that respect. We'll need to consider that if the SSD route proves otherwise suitable.

I'll have to have a look at the FusionIO cards as well and see what they offer.

Of course, the client may just want to stick with SAS HDDs anyway since they are a known quantity.

Thanks for the feedback.
 
Soldato · Joined 26 Feb 2009 · Posts 14,814 · Location: Exeter
How do you know how hard the current disks are getting hit if you don't have the figures? FusionIO cards are incredibly fast and expensive, and designed for very specific purposes. Given the size of the DBs, though, the simplest answer would be to use more RAM. However, you need to look at and understand the PLEs in order to do it properly.
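If you want to see how much of each DB is actually sitting in the buffer pool right now, something like this works (a sketch; it scans every buffered page, so it can take a while on boxes with a lot of RAM):

-- Buffer pool usage per database (8KB pages converted to MB)
SELECT CASE WHEN database_id = 32767 THEN 'ResourceDb' ELSE DB_NAME(database_id) END AS db,
       COUNT(*) * 8 / 1024 AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_mb DESC;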

For a DB of that size to saturate the IOPS of a properly configured 15k SAS array would take some doing!
 
Associate (OP) · Joined 11 Mar 2008 · Posts 460
I don't know the exact spec, but I believe it currently has a single 4-core Xeon (don't know the model, but it would probably have been more or less top spec 4.5 years ago), 8GB RAM and 4 x 74GB 15k SAS HDDs in RAID 5.

The drive load info is just what I've been told; it will be during the week before I can get actual figures on IO performance.
 
Associate · Joined 3 Oct 2008 · Posts 1,890 · Location: South London
I also agree with iaind.
Even if it looks like it will need the IOPS of SSDs, you should make sure it actually does. Also make sure the rest of the setup is up to that kind of punishment.
In addition, I have these points to make:
1. RAM is as important as disk I/O for SQL. You have to get your data into memory before you can do anything with it. If you have hundreds of users, stick as much in as you dare.
2. Don't put everything on one RAID 10. RAID 10 increases IOPS, but it doesn't increase the number of IO queues. SQL performance is as much about keeping queue depth down as it is about anything else. Raw IOPS don't solve this because the latency of each read and write isn't uniform on an SSD, and depending on the controller the variation can be big. You might average 10,000 IO/s, but at times your queue can still spike and degrade performance.
This is especially prevalent when you have your DB writes, transaction log writes, pagefile writes, and OS and SQL binary reads all sharing one hardware IO path. You'd be better off with 3 x RAID 1 mirrors than a shared 6-drive RAID 10 (see the sketch after this list). This is why SQL loves MPIO :)
3. Don't just look at quoted IOPS on drives. Often these are higher than real life because they're based on averages, usually under sequential IO conditions. Don't get me wrong, SSDs still paste spinning-disk IOPS by a factor of 10 or more, but not by as much as often claimed. Also, it's pointless having a RAID 10 with a theoretical 100,000 IOPS if your HBA/RAID controller soils itself at 7,000. Stuff only goes as fast as the slowest component.
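To illustrate the separate-IO-paths point from no. 2, a minimal sketch (MainDb and the D:, L: and T: drive letters are placeholders for separate arrays/volumes, not a recommendation of specific paths):

-- Data and transaction log on their own volumes at creation time
CREATE DATABASE MainDb
ON PRIMARY (NAME = MainDb_data, FILENAME = 'D:\SQLData\MainDb.mdf')
LOG ON     (NAME = MainDb_log,  FILENAME = 'L:\SQLLogs\MainDb.ldf');

-- tempdb on its own volume as well (takes effect after a service restart)
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');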

Deploying SQL is simple. Deploying it well is a real science. Or to software devs, a black art ;)
 
Associate (OP) · Joined 11 Mar 2008 · Posts 460
Still waiting on getting figures through on the IO - hopefully have them tomorrow.

Memory wise, we're going to be recommending going large on that - probably 128GB, which should be enough to allow the complete DBs to be held in memory and leaves plenty of room for growth over the next few years (and of course it can be upgraded later if it becomes necessary).

In light of the feedback provided so far, we'll probably not suggest using the 'value' SSDs anyway, but we'll need to see the IO data to determine whether SAS SSDs would really be necessary or not.
 
Associate (OP) · Joined 11 Mar 2008 · Posts 460
Current total size of live DBs: 75GB
Approx growth over the last 4 years: 25GB
Anticipating 50% greater growth over the coming 4 years -> an additional 37.5GB

Expected DB size in 4 years: 112.5GB

128GB will allow for the live DBs to be held in memory as much as possible for the next 4 years.

If growth is greater than expected, then more memory can be added at a later stage.


Kelkef - ideally they would upgrade to 2012, however I suspect they'll be sticking with 2008.

Lanz, thanks for that - sounds like a very useful tool.
 
Soldato · Joined 26 Feb 2009 · Posts 14,814 · Location: Exeter
Since when do the DBs need to be held in memory entirely? Makes me wonder how I used to run 200GB DBs with 4GB RAM and 36GB 15k U320 SCSI drives, back before cheap RAM and the existence of SSDs... Oh yeah, that's right: proper configuration and sizing.

I don't know why you don't just log in and get the data you need - dpak is a nice tool but it's not needed in this case. Read about and understand how SQL uses memory and how to scale it - DB size = memory size is the lazy/stupid way to do it. What happens when your DBs are into the terabytes? Then understand how to measure disk performance, how to interpret the results and how to configure RAID arrays to properly support SQL.

Finally, learn how the SQL memory settings work and how they are optimally configured, and how to optimise indices etc.
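The key memory setting is 'max server memory'. For example (the 26624MB figure below is purely illustrative - headroom for the OS on a 32GB box - not a recommendation for your server):

-- Cap the SQL Server memory so the OS and other processes keep headroom
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 26624;   -- example value only
RECONFIGURE;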

Sizing servers by guesswork is the act of an amateur, do it properly
 
Soldato · Joined 25 Mar 2004 · Posts 15,746 · Location: Fareham
Since when do the DBs need to be held in memory entirely? Makes me wonder how I used to run 200GB DBs with 4GB RAM and 36GB 15k U320 SCSI drives, back before cheap RAM and the existence of SSDs... Oh yeah, that's right: proper configuration and sizing.

I don't know why you don't just log in and get the data you need - dpak is a nice tool but it's not needed in this case. Read about and understand how SQL uses memory and how to scale it - DB size = memory size is the lazy/stupid way to do it. What happens when your DBs are into the terabytes? Then understand how to measure disk performance, how to interpret the results and how to configure RAID arrays to properly support SQL.

Finally, learn how the SQL memory settings work and how they are optimally configured, and how to optimise indices etc.

Sizing servers by guesswork is the act of an amateur, do it properly

All perfectly valid, but if the OP has the budget then piling RAM onto the server is a fairly cheap way of getting the performance he needs without having to size precisely for RAM usage.

Disks on the other hand, how many you use, what Raid config, and how fast they are could change the cost of the setup very quickly.

Properly sizing servers for anything is a skill that separates the professionals from the amateurs. This holds true for whatever you are designing for, but SQL is a complex beast, so understanding how to size for it is really the question at hand here.
 