Dell R620 and consumer SSDs

They sent me this :)

[Attachment: SQL.png]


Unfortunately it only covers the main box and doesn't include the VMs, but it might give an idea as to what's going on?

The MB/sec looks way too low, even for these drives? And I have no idea what happened at about 7am.

Any thoughts? I don't really know how to read this at all :(
 
That looks very much like a DPACK report. The CPU isn't being tickled and you're barely touching your storage, apart from around 7am when it looks like something happens (index build? backup taken? - bear in mind the timezone might be incorrect on the report).

But if you're seeing horrific performance from this system, I'd wager it's the way the database is designed, because those graphs don't indicate areas that hardware would necessarily fix. The read latency is reasonably peaky, but if every action in this application generates thousands of queries then SSDs won't fix that for you.
 
I'm not overly surprised you've said that, as I think I was coming to a similar conclusion.

The bit I don't get, though, is that peak - it's boosted into 100+ IOPS for longer than the RAID controller cache (512MB) should be able to help with, yet all the calculations I've done say the drives in that config should only manage about 125 IOPS. The 95th percentile is presumably being boosted by the cache, or my calculations are wrong, or these are super drives lol.
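
For what it's worth, this is roughly how I worked the numbers - a back-of-envelope sketch in Python, assuming ~75 IOPS per 7.2k spindle and the usual RAID 5 write penalty of 4 (the per-drive figure and the 70/30 read/write mix are both just rule-of-thumb assumptions):

```python
# Back-of-envelope IOPS for the current array: 3x 7.2k drives in RAID 5.
# RAID 5 turns one logical write into four disk ops (read data, read parity,
# write data, write parity), hence the write penalty of 4.
drives = 3
iops_per_drive = 75         # assumed typical figure for a 7.2k RPM spindle
raid5_write_penalty = 4
read_ratio = 0.7            # assumed 70/30 read/write mix

raw = drives * iops_per_drive
effective = raw / (read_ratio + (1 - read_ratio) * raid5_write_penalty)
print(f"raw spindle IOPS: {raw}, effective mixed IOPS: {effective:.0f}")
# -> raw spindle IOPS: 225, effective mixed IOPS: 118
```

That lands right around my ~125 figure, which is exactly why a sustained 100+ peak doesn't add up to me.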

I'm so confused :(
 
I have a better report now that shows the actual VM the database runs off - I think it tells more of a disk-related bottleneck story?

[Attachment: DPACK.png]


The latency rarely drops below 10ms, and during work hours it has periods of 80ms+ and 100ms+ :(
 
An update on this one - after lots more to-ing and fro-ing with not much help from the supplier, we've ordered new hardware: a Dell R630.

Biggest change is the disk speed and count - we've stumped up the cash for 8x 600GB 15k drives, which will be configured OBR10 (one big RAID 10) - so a theoretical 7x increase in disk performance over the current 3-drive 7.2k RAID 5.
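
Running the same back-of-envelope model over both arrays (the ~175 IOPS per 15k spindle is a rule-of-thumb assumption, and RAID 10's write penalty is 2 because each write hits both halves of a mirror pair) puts the new box in the same ballpark as that 7x figure:

```python
# Rough mixed-workload IOPS for a simple RAID model. Per-spindle figures
# are rule-of-thumb assumptions: ~75 IOPS for 7.2k drives, ~175 for 15k.
def array_iops(drives, per_drive, write_penalty, read_ratio=0.7):
    """Effective IOPS for an assumed 70/30 read/write mix."""
    return drives * per_drive / (read_ratio + (1 - read_ratio) * write_penalty)

old = array_iops(3, 75, 4)     # 3x 7.2k RAID 5  (write penalty 4)
new = array_iops(8, 175, 2)    # 8x 15k RAID 10  (write penalty 2)
print(f"old: {old:.0f} IOPS, new: {new:.0f} IOPS, ~{new / old:.0f}x")
# -> old: 118 IOPS, new: 1077 IOPS, ~9x
```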

RAM has been upped a little, but the processors have been brought down a bit to help offset the cost of the drives. The newer E5-26xx v3 processors look like a reasonable improvement on the existing first-gen ones anyway, so even if proc usage picks up with the improvement in storage we shouldn't have created a bottleneck.

I fear the drives may be overkill now, but without much assistance from the supplier we just had to look at the cost of the hardware and compare it with the time lost when we're all sitting twiddling our thumbs, and that helped justify the gamble. I am bricking it though!
 
No one ever got fired for over-speccing a server :)

Though if you didn't need the space, swapping two of the drives for SSDs to be used as flash cache may have helped a bit more - but that really depends on the DB usage patterns.
Everyone loves a bit of retrospective advice :p
 
Need the space unfortunately - total available GB is down a bit on the current one, which is never a good thing lol, but there were a few bits we could move to other storage, which gave enough buffer to hopefully make the new amount OK for 3 years.

Splurging nearly £10k when we could possibly have just renewed the warranty on the existing one for about £1,500 might get me whinged at a bit if it doesn't work :D
 
Worst case, then, you have at least improved reliability/removed risk by moving away from RAID 5. Even if it is no faster, at least you have eliminated one possibility - next will be to point the finger back at the software.

Yes, £10k is expensive if you are wrong, but so would be getting someone to analyse the software only to find the hardware was at fault.

Not sure what you are doing with the old server, but it could be an ideal time to set it up as a test box, so you can play about and prove any theories without affecting your production server.
 
The old server will replace the current old server (R410 IIRC) - they do use the old one as a test bed currently, so they will end up with a better one. Not sure what the R410 will end up doing. It's not very special.
 
Installed the new server in the rack today, including a 10GbE link between the new and old. R730, not R630 as above. Bit annoyed that the cable management arm that was supposed to come with it never arrived, so I've had to lay the cables out approximately until they send it. Over to the software people now...
 
Whilst nice in principle, I've always found cable management arms actually end up making the cabling worse and trapping heat at the back of our racks.

They're teasing me and won't do the swap until the weekend! Understandable, but I want it now lol!

Hope it goes OK and makes a noticeable difference.
 
We only have 4 servers in the rack - each has its arm and there's plenty of space round them, so they work for us - but they are a pain, so I can easily understand people not using them.

I'm still bricking it even though I'm not the one doing anything!
 
The newer Dell cable arms with the flat tray thing on them are thousands of times better than the old ones as well.
 