SSD Drive health reported as 92% - why?

Hello,

Both HD Sentinel Pro and HD Tune Pro report my Crucial M225's health as 92%, though neither shows anything actually wrong with it!

Why would that be? How can I find out why it's not 100%?
Is this grounds for an RMA? I've been having some problems with the computer during heavy disk usage and I'm starting to think the SSD is to blame - but how do I prove it?

Thanks
Georgios
 
OK, I'm now sure the SSD is at fault.

If I run a VirtualBox VM from the SSD, writing to the SSD makes VBox freeze at some point. Moving the VM to another drive makes everything work fine!

Is this fixable? Detectable? Do I go for the RMA?
 
This is not looking good:

[attached benchmark screenshot: m225bench.png]
 
Not sure, I'm afraid - I don't know how those tools calculate "health".
What I do know is that these SSDs use SMART parameters in a non-standard way, so it's possible those tools are mis-interpreting the data. If you could post a screenshot of the HD Tune health tab for the drive, it would help.
Re the transfer rates: are you using Win7, or alternatively did you manually align your partitions to 4K boundaries? Also, what transfer mode is being used (shown on the HD Tune info tab)?
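In case it's useful, here's a rough sketch (Python, untested on your setup) that lists each partition's starting offset via the stock wmic tool - anything not divisible by 4096 isn't 4K-aligned:

Code:
import subprocess

# List each partition's starting offset using the built-in Windows wmic
# tool. XP-era installs typically start partitions at sector 63 (offset
# 32256), which is NOT aligned to 4 KiB flash pages.
out = subprocess.check_output(
    ["wmic", "partition", "get", "Name,StartingOffset"], text=True)
for line in out.splitlines()[1:]:
    parts = line.rsplit(None, 1)
    if len(parts) == 2 and parts[1].isdigit():
        name, offset = parts[0].strip(), int(parts[1])
        print(name, "aligned" if offset % 4096 == 0 else "MISALIGNED")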
 
Health is OK - I have an SSD at 87%. As said above, it's just the way things are calculated; get worried if it starts dropping.

Re the delay: is yours one of the drives that suffers from a write delay? If so, it's just a design issue, not an actual fault. My Kingston has a very bad write delay.
 
This is like chasing a ghost! And I'm not talking about the health anymore, but the write speed!

I've tried several different configurations (Intel vs Gigabyte controller, Intel vs MS driver), and sometimes it would work - then minutes later, on the same controller, it would slow to a crawl! I mean transfer rates under 5MB/sec!

Just to figure out what the hell is going on, I ended up cloning Windows onto a SATA disk I had lying around, connecting the SSD to the Intel controller (RAID mode) and installing Intel's latest Rapid Storage Technology drivers.

Then I formatted the SSD (default allocation size), ran Wiper on it, and wrote a program that writes 512MiB at a time from memory (so it doesn't depend on another disk).
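It's not the exact code, but the gist is something like this (a minimal Python sketch; the target path and stop condition are placeholders):

Code:
import os, shutil, time

TARGET = r"E:\bench.bin"     # hypothetical path on the SSD under test
CHUNK = 512 * 1024 * 1024    # 512 MiB per write
buf = os.urandom(CHUNK)      # data comes straight from RAM, not a disk
drive = os.path.dirname(TARGET)

with open(TARGET, "wb", buffering=0) as f:
    # Keep appending until the drive is nearly full.
    while shutil.disk_usage(drive).free > 2 * CHUNK:
        start = time.perf_counter()
        f.write(buf)
        os.fsync(f.fileno())  # force the data onto the drive before timing
        secs = time.perf_counter() - start
        free_gib = shutil.disk_usage(drive).free / 2**30
        print(f"Wrote 512MiB at {CHUNK / 2**20 / secs:.2f}MiB/sec - "
              f"{free_gib:.2f}GiB free space left")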

It seems that the drive really slows down once it gets under ~25GB free. See the log below.

That's a defect, right?? I can't quite believe that 20GB of a 128GB drive worth some 300 GBP is unusable??

Code:
Wrote 512MiB at 167.53MiB/sec - 107.52GiB free space left
Wrote 512MiB at 160.74MiB/sec - 107.02GiB free space left
Wrote 512MiB at 164.09MiB/sec - 106.52GiB free space left
Wrote 512MiB at 165.31MiB/sec - 106.02GiB free space left
Wrote 512MiB at 166.22MiB/sec - 105.52GiB free space left
Wrote 512MiB at 164.94MiB/sec - 105.02GiB free space left
Wrote 512MiB at 162.12MiB/sec - 104.52GiB free space left
Wrote 512MiB at 165.42MiB/sec - 104.02GiB free space left
Wrote 512MiB at 160.49MiB/sec - 103.52GiB free space left
Wrote 512MiB at 166.17MiB/sec - 103.02GiB free space left
Wrote 512MiB at 166.22MiB/sec - 102.52GiB free space left
Wrote 512MiB at 166.39MiB/sec - 102.02GiB free space left
Wrote 512MiB at 165.69MiB/sec - 101.52GiB free space left
Wrote 512MiB at 158.46MiB/sec - 101.02GiB free space left
Wrote 512MiB at 166.01MiB/sec - 100.52GiB free space left
Wrote 512MiB at 166.39MiB/sec - 100.02GiB free space left
Wrote 512MiB at 166.06MiB/sec - 99.52GiB free space left
Wrote 512MiB at 166.17MiB/sec - 99.02GiB free space left
Wrote 512MiB at 166.49MiB/sec - 98.52GiB free space left
Wrote 512MiB at 166.82MiB/sec - 98.02GiB free space left
Wrote 512MiB at 166.22MiB/sec - 97.52GiB free space left
Wrote 512MiB at 154.44MiB/sec - 97.02GiB free space left
Wrote 512MiB at 165.21MiB/sec - 96.52GiB free space left
Wrote 512MiB at 166.60MiB/sec - 96.02GiB free space left
Wrote 512MiB at 166.01MiB/sec - 95.52GiB free space left
Wrote 512MiB at 165.74MiB/sec - 95.02GiB free space left
Wrote 512MiB at 165.85MiB/sec - 94.52GiB free space left
Wrote 512MiB at 165.74MiB/sec - 94.02GiB free space left
Wrote 512MiB at 166.33MiB/sec - 93.52GiB free space left
Wrote 512MiB at 166.44MiB/sec - 93.02GiB free space left
Wrote 512MiB at 166.12MiB/sec - 92.52GiB free space left
Wrote 512MiB at 166.01MiB/sec - 92.02GiB free space left
Wrote 512MiB at 149.61MiB/sec - 91.52GiB free space left
Wrote 512MiB at 150.23MiB/sec - 91.02GiB free space left
Wrote 512MiB at 166.22MiB/sec - 90.52GiB free space left
Wrote 512MiB at 166.12MiB/sec - 90.02GiB free space left
Wrote 512MiB at 166.39MiB/sec - 89.52GiB free space left
Wrote 512MiB at 166.39MiB/sec - 89.02GiB free space left
Wrote 512MiB at 167.04MiB/sec - 88.52GiB free space left
Wrote 512MiB at 166.71MiB/sec - 88.02GiB free space left
Wrote 512MiB at 166.49MiB/sec - 87.52GiB free space left
Wrote 512MiB at 166.55MiB/sec - 87.02GiB free space left
Wrote 512MiB at 166.17MiB/sec - 86.52GiB free space left
Wrote 512MiB at 166.71MiB/sec - 86.02GiB free space left
Wrote 512MiB at 165.53MiB/sec - 85.52GiB free space left
Wrote 512MiB at 166.06MiB/sec - 85.02GiB free space left
Wrote 512MiB at 166.22MiB/sec - 84.52GiB free space left
Wrote 512MiB at 165.69MiB/sec - 84.02GiB free space left
Wrote 512MiB at 165.58MiB/sec - 83.52GiB free space left
Wrote 512MiB at 166.06MiB/sec - 83.02GiB free space left
Wrote 512MiB at 165.90MiB/sec - 82.52GiB free space left
Wrote 512MiB at 157.67MiB/sec - 82.02GiB free space left
Wrote 512MiB at 149.09MiB/sec - 81.52GiB free space left
Wrote 512MiB at 143.65MiB/sec - 81.02GiB free space left
Wrote 512MiB at 144.75MiB/sec - 80.52GiB free space left
Wrote 512MiB at 147.29MiB/sec - 80.02GiB free space left
Wrote 512MiB at 166.82MiB/sec - 79.52GiB free space left
Wrote 512MiB at 166.55MiB/sec - 79.02GiB free space left
Wrote 512MiB at 166.71MiB/sec - 78.52GiB free space left
Wrote 512MiB at 166.77MiB/sec - 78.02GiB free space left
Wrote 512MiB at 166.06MiB/sec - 77.52GiB free space left
Wrote 512MiB at 166.12MiB/sec - 77.02GiB free space left
Wrote 512MiB at 166.17MiB/sec - 76.52GiB free space left
Wrote 512MiB at 166.55MiB/sec - 76.02GiB free space left
Wrote 512MiB at 160.34MiB/sec - 75.52GiB free space left
Wrote 512MiB at 166.49MiB/sec - 75.02GiB free space left
Wrote 512MiB at 143.41MiB/sec - 74.52GiB free space left
Wrote 512MiB at 166.49MiB/sec - 74.02GiB free space left
Wrote 512MiB at 165.31MiB/sec - 73.52GiB free space left
Wrote 512MiB at 165.95MiB/sec - 73.02GiB free space left
Wrote 512MiB at 165.95MiB/sec - 72.52GiB free space left
Wrote 512MiB at 165.79MiB/sec - 72.02GiB free space left
Wrote 512MiB at 165.85MiB/sec - 71.52GiB free space left
Wrote 512MiB at 165.47MiB/sec - 71.02GiB free space left
Wrote 512MiB at 165.63MiB/sec - 70.52GiB free space left
Wrote 512MiB at 165.58MiB/sec - 70.02GiB free space left
Wrote 512MiB at 165.74MiB/sec - 69.52GiB free space left
Wrote 512MiB at 134.80MiB/sec - 69.02GiB free space left
Wrote 512MiB at 166.12MiB/sec - 68.52GiB free space left
Wrote 512MiB at 165.90MiB/sec - 68.02GiB free space left
Wrote 512MiB at 124.29MiB/sec - 67.52GiB free space left
Wrote 512MiB at 165.90MiB/sec - 67.02GiB free space left
Wrote 512MiB at 166.01MiB/sec - 66.52GiB free space left
Wrote 512MiB at 165.69MiB/sec - 66.02GiB free space left
Wrote 512MiB at 166.01MiB/sec - 65.52GiB free space left
Wrote 512MiB at 165.95MiB/sec - 65.02GiB free space left
Wrote 512MiB at 165.21MiB/sec - 64.52GiB free space left
Wrote 512MiB at 165.47MiB/sec - 64.02GiB free space left
Wrote 512MiB at 141.23MiB/sec - 63.52GiB free space left
Wrote 512MiB at 166.06MiB/sec - 63.02GiB free space left
Wrote 512MiB at 164.99MiB/sec - 62.52GiB free space left
Wrote 512MiB at 165.10MiB/sec - 62.02GiB free space left
Wrote 512MiB at 140.96MiB/sec - 61.52GiB free space left
Wrote 512MiB at 164.20MiB/sec - 61.02GiB free space left
Wrote 512MiB at 164.36MiB/sec - 60.52GiB free space left
Wrote 512MiB at 165.31MiB/sec - 60.02GiB free space left
Wrote 512MiB at 163.67MiB/sec - 59.52GiB free space left
Wrote 512MiB at 163.73MiB/sec - 59.02GiB free space left
Wrote 512MiB at 164.25MiB/sec - 58.52GiB free space left
Wrote 512MiB at 164.04MiB/sec - 58.02GiB free space left
Wrote 512MiB at 164.67MiB/sec - 57.52GiB free space left
Wrote 512MiB at 165.26MiB/sec - 57.02GiB free space left
Wrote 512MiB at 121.81MiB/sec - 56.52GiB free space left
Wrote 512MiB at 164.78MiB/sec - 56.02GiB free space left
Wrote 512MiB at 165.26MiB/sec - 55.52GiB free space left
Wrote 512MiB at 165.15MiB/sec - 55.02GiB free space left
Wrote 512MiB at 165.10MiB/sec - 54.52GiB free space left
Wrote 512MiB at 164.73MiB/sec - 54.02GiB free space left
Wrote 512MiB at 165.69MiB/sec - 53.52GiB free space left
Wrote 512MiB at 165.10MiB/sec - 53.02GiB free space left
Wrote 512MiB at 165.37MiB/sec - 52.52GiB free space left
Wrote 512MiB at 165.21MiB/sec - 52.02GiB free space left
Wrote 512MiB at 165.63MiB/sec - 51.52GiB free space left
Wrote 512MiB at 165.79MiB/sec - 51.02GiB free space left
Wrote 512MiB at 165.42MiB/sec - 50.52GiB free space left
Wrote 512MiB at 165.15MiB/sec - 50.02GiB free space left
Wrote 512MiB at 165.85MiB/sec - 49.52GiB free space left
Wrote 512MiB at 165.85MiB/sec - 49.02GiB free space left
Wrote 512MiB at 165.10MiB/sec - 48.52GiB free space left
Wrote 512MiB at 164.67MiB/sec - 48.02GiB free space left
Wrote 512MiB at 164.52MiB/sec - 47.52GiB free space left
Wrote 512MiB at 164.62MiB/sec - 47.02GiB free space left
Wrote 512MiB at 164.83MiB/sec - 46.52GiB free space left
Wrote 512MiB at 165.21MiB/sec - 46.02GiB free space left
Wrote 512MiB at 164.46MiB/sec - 45.52GiB free space left
Wrote 512MiB at 65.04MiB/sec - 45.02GiB free space left
Wrote 512MiB at 149.79MiB/sec - 44.52GiB free space left
Wrote 512MiB at 164.46MiB/sec - 44.02GiB free space left
Wrote 512MiB at 150.62MiB/sec - 43.52GiB free space left
Wrote 512MiB at 164.67MiB/sec - 43.02GiB free space left
Wrote 512MiB at 164.83MiB/sec - 42.52GiB free space left
Wrote 512MiB at 164.25MiB/sec - 42.02GiB free space left
Wrote 512MiB at 165.05MiB/sec - 41.52GiB free space left
Wrote 512MiB at 164.41MiB/sec - 41.02GiB free space left
Wrote 512MiB at 148.53MiB/sec - 40.52GiB free space left
Wrote 512MiB at 163.94MiB/sec - 40.02GiB free space left
Wrote 512MiB at 164.83MiB/sec - 39.52GiB free space left
Wrote 512MiB at 148.57MiB/sec - 39.02GiB free space left
Wrote 512MiB at 163.67MiB/sec - 38.52GiB free space left
Wrote 512MiB at 164.30MiB/sec - 38.02GiB free space left
Wrote 512MiB at 163.78MiB/sec - 37.52GiB free space left
Wrote 512MiB at 165.26MiB/sec - 37.02GiB free space left
Wrote 512MiB at 164.30MiB/sec - 36.52GiB free space left
Wrote 512MiB at 164.57MiB/sec - 36.02GiB free space left
Wrote 512MiB at 164.41MiB/sec - 35.52GiB free space left
Wrote 512MiB at 164.36MiB/sec - 35.02GiB free space left
Wrote 512MiB at 164.94MiB/sec - 34.52GiB free space left
Wrote 512MiB at 164.57MiB/sec - 34.02GiB free space left
Wrote 512MiB at 164.04MiB/sec - 33.52GiB free space left
Wrote 512MiB at 165.10MiB/sec - 33.02GiB free space left
Wrote 512MiB at 164.30MiB/sec - 32.52GiB free space left
Wrote 512MiB at 164.83MiB/sec - 32.02GiB free space left
Wrote 512MiB at 164.73MiB/sec - 31.52GiB free space left
Wrote 512MiB at 164.04MiB/sec - 31.02GiB free space left
Wrote 512MiB at 165.15MiB/sec - 30.52GiB free space left
Wrote 512MiB at 150.76MiB/sec - 30.02GiB free space left
Wrote 512MiB at 157.87MiB/sec - 29.52GiB free space left
Wrote 512MiB at 164.78MiB/sec - 29.02GiB free space left
Wrote 512MiB at 165.10MiB/sec - 28.52GiB free space left
Wrote 512MiB at 153.79MiB/sec - 28.02GiB free space left
Wrote 512MiB at 152.37MiB/sec - 27.52GiB free space left
Wrote 512MiB at 154.81MiB/sec - 27.02GiB free space left
Wrote 512MiB at 141.70MiB/sec - 26.52GiB free space left
Wrote 512MiB at 115.91MiB/sec - 26.02GiB free space left
Wrote 512MiB at 107.97MiB/sec - 25.52GiB free space left
Wrote 512MiB at 93.02MiB/sec - 25.02GiB free space left
Wrote 512MiB at 74.45MiB/sec - 24.52GiB free space left
Wrote 512MiB at 63.73MiB/sec - 24.02GiB free space left
Wrote 512MiB at 24.07MiB/sec - 23.52GiB free space left
Wrote 512MiB at 9.71MiB/sec - 23.02GiB free space left
Wrote 512MiB at 1.75MiB/sec - 22.52GiB free space left

I stopped the benchmark here - it was just too slow by this point.
 
Remember that an SSD can write one page at a time, but it has to erase in whole blocks - 512 pages at a time, I think. So rather than actually erasing old data, the drive usually writes to a fresh part of the flash and pretends the old data is gone, and writes happen quickly.

But if there isn't enough fresh area left - once the drive has been through the majority of its blocks - the actual erasing has to happen before more data can be written, and this slows writes down.

And if the data being erased is meant to be kept (i.e. you are running out of space), it has to be read and saved in a cache first before the 512-page erase can occur; then the saved data is combined with the new incoming data and written back to the freshly erased block... so now you have to wait for the reads as well as writing more pages than you actually intended.
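Rough numbers, all made up, just to show the shape of the penalty:

Code:
# Back-of-the-envelope sketch of the read-modify-write penalty described
# above. All figures are illustrative assumptions, not M225 specs.
PAGES_PER_BLOCK = 512   # the "512" figure mentioned above
incoming_pages = 8      # host writes 8 pages into an otherwise-full block
live_pages = PAGES_PER_BLOCK - incoming_pages  # data that must survive

clean_writes = incoming_pages               # fresh block: just program
dirty_reads = live_pages                    # full block: read live data,
dirty_writes = live_pages + incoming_pages  # erase, write it all back

print(f"amplification: {dirty_writes / clean_writes:.0f}x writes, plus "
      f"{dirty_reads} page reads and a block erase")
# -> amplification: 64x writes, plus 504 page reads and a block erase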


Newer drives have a 'TRIM' feature, which is a bit like defragmenting the data when the drive is not in use, to make as many whole 512-page blocks available as possible... but it only works when the drive is idling, and your benchmarking is not giving it time to tidy itself up.

Also, these drives can only take a limited number of erases, so all that writing you're doing for the benchmark has probably shortened its life...
 
And to answer your 92% question: that is probably the percentage of functioning blocks in the drive, including the reserved blocks which only get used when blocks that were actually in use have died... I think it goes down to 75% or something before you start losing capacity.

The drives would be much more expensive if every one of them came with 100% 'health'...
 
Newer drives have a 'TRIM' feature, which is a bit like defragmenting the data when the drive is not in use, to make as many whole 512-page blocks available as possible... but it only works when the drive is idling, and your benchmarking is not giving it time to tidy itself up.

Also, these drives can only take a limited number of erases, so all that writing you're doing for the benchmark has probably shortened its life...

Perhaps it wasn't clear in my original post - before running the benchmark, I ran Wiper on the whole drive, so it was fully *trimmed*! In which case it shouldn't have had any blocks to recover!
 
Have a look at SMART parameter D9 - this will give you an estimate of the percentage of life remaining based on erase count (hopefully a conservative estimate).
How did your program write to the disk? If, for example, it was opening a file and repeatedly appending 512MiB on an NTFS file system, then the MFT may mean the disk isn't starting off as empty as you think, and there may be more writes going on behind the scenes in the MFT, journals, etc.
IIRC, the "garbage collection" in the latest 1916 firmware runs in the background while the SSD is idle, although I'm not sure how this interacts with TRIM (if used). So it would be interesting to resume your benchmark after giving the drive a "rest", to see if the transfer rate picks up again.
Personally, though, I suspect this is down to running the SSD too close to "full", so I'd move some stuff onto another disk and not worry about it.
 
Have a look at SMART parameter D9 - this will give you an estimate of the percentage of life remaining based on erase count (hopefully a conservative estimate).
How did your program write to the disk? If, for example, it was opening a file and repeatedly appending 512MiB on an NTFS file system, then the MFT may mean the disk isn't starting off as empty as you think, and there may be more writes going on behind the scenes in the MFT, journals, etc.
IIRC, the "garbage collection" in the latest 1916 firmware runs in the background while the SSD is idle, although I'm not sure how this interacts with TRIM (if used). So it would be interesting to resume your benchmark after giving the drive a "rest", to see if the transfer rate picks up again.
Personally, though, I suspect this is down to running the SSD too close to "full", so I'd move some stuff onto another disk and not worry about it.

It wrote continuously to one file, to avoid all the trouble you describe.

Remember that I noticed the slowdown in Windows too, when the drive got close to full! On top of that, the drive has now disappeared from the OS a couple of times.

I'm sending it back.

FYI - there's a thread over at Crucial about this that I've just discovered.

http://www.forum.crucial.com/t5/Sol...w-speed-when-M225-128GB-is-70-full/td-p/11433
 
Perhaps it wasn't clear in my original post - before running the benchmark, I ran Wiper on the whole drive, so it was fully *trimmed*! In which case it shouldn't have had any blocks to recover!
It was trimmed before the first few gigs or whatever were written. Every time you rewrite data, the drive moves on to the next part of its storage, and once you've moved across the entire storage, the deleting-for-real has to happen and it starts to slow down.

And as I said, TRIM is like a defragmenting feature: fragmentation builds up and takes time to get rid of... in the case of SSDs, every single write you do is likely to cause more fragmentation, so the drive becomes less and less "trimmed" as your write benchmark goes on.
 
And as I said, TRIM is like a defragmenting feature: fragmentation builds up and takes time to get rid of... in the case of SSDs, every single write you do is likely to cause more fragmentation, so the drive becomes less and less "trimmed" as your write benchmark goes on.

TRIM is nothing like a defragmenting feature. TRIM just erases blocks after files are deleted, so that a future write won't have to erase and then write in sequence, which is what slows the write down.
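If it helps, here's a toy picture of what TRIM changes when the drive recycles a block (numbers made up):

Code:
# Toy model of what TRIM buys you - all numbers are made up:
PAGES_PER_BLOCK = 512        # pages in one erase block, as discussed above
pages_freed_by_delete = 400  # pages belonging to files the OS has deleted

# Without TRIM the drive was never told those pages are garbage, so when
# it recycles the block it has to copy them all forward; with TRIM it
# already knows they're dead and can erase the block almost for free.
copy_without_trim = PAGES_PER_BLOCK
copy_with_trim = PAGES_PER_BLOCK - pages_freed_by_delete

print(f"pages copied when recycling one block: "
      f"{copy_without_trim} without TRIM vs {copy_with_trim} with TRIM")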
 
TRIM is nothing like a defragmenting feature. TRIM just erases blocks after files are deleted, so that a future write won't have to erase and then write in sequence, which is what slows the write down.

Exactly. So after formatting the drive and running Wiper on it, the drive should perform at full speed.
 
It wrote continuously to one file, to avoid all the trouble you describe.

Remember that I noticed the slowdown in Windows too, when the drive got close to full! On top of that, the drive has now disappeared from the OS a couple of times.

I'm sending it back.

FYI - there's a thread over at Crucial about this that I've just discovered.

http://www.forum.crucial.com/t5/Sol...w-speed-when-M225-128GB-is-70-full/td-p/11433
Hi georgiosd, if the drive is disappearing then the rest of the discussion is becoming a bit academic (at least until the replacement starts doing the same thing :D).
Re your test: even writing continuously to a single file, I still suspect there would be things going on behind the scenes in the file system that could be playing a part - e.g. as clusters are allocated there would be many changes to metafiles which, combined with carmatic's point about small writes causing a larger erase-and-rewrite, could be having an effect. If you're still trying to work out what is going on, it might be interesting to look at the number of sectors written since running Wiper, before the drive starts to slow down. SMART parameter C7 looks like it might help, going by this extract from a post on another forum:
Indilinx SSDs

Hex (Dec)  Attribute
---------------------
01  (1)    Read error rate
09  (9)    Power on hours
0C  (12)   Device power cycle count
C7  (199)  Write sectors total count
CD  (205)  Max PE count spec
CE  (206)  Min erase count
CF  (207)  Max erase count
D0  (208)  Erase count average
D1  (209)  Remaining drive life in % by erase count

There is a really useful post that explains the above.
To find out total bytes written: ID# 199 × 512.
To find out the % of write cycles used: ID# 208 × 100 / ID# 205.
To estimate the remaining hours of use: ID# 9 × (100 / % of write cycles used − 1); the 7.26 in the original example is that percentage.
To convert the above to years: divide by 8760.
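And if you want the arithmetic in one place, a quick sketch with made-up attribute values (a real drive's numbers would come from your SMART tool of choice):

Code:
# Worked example of the formulas quoted above. The attribute values are
# hypothetical placeholders.
attrs = {
    9:   4380,         # Power on hours (hypothetical)
    199: 900_000_000,  # Write sectors total count (hypothetical)
    205: 10_000,       # Max PE count spec (hypothetical)
    208: 726,          # Erase count average (hypothetical)
}

total_written_gib = attrs[199] * 512 / 2**30       # ID# 199 x 512 bytes
pct_used = attrs[208] * 100 / attrs[205]           # ID# 208 x 100 / ID# 205
remaining_hours = attrs[9] * (100 / pct_used - 1)  # ID# 9 x (100/% - 1)
remaining_years = remaining_hours / 8760

print(f"{total_written_gib:.0f} GiB written, {pct_used:.2f}% of write "
      f"cycles used, ~{remaining_years:.1f} years left at the same rate")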
Hope this helps.
 
Also, these drives can only take a limited number of erases, so all that writing you're doing for the benchmark has probably shortened its life...

Most SSDs are rated to last around 5 years of normal use - at least the Intels are. So yes, he has shortened its life... by all of 10 minutes.

People are way too hung up on this. You will be using your 2TB SSDs by the time these current generation SSDs stop working!
 