How to measure how much data is written to a drive?

How to measure how much data is written to a drive? How soon will my SSD die?

Is there any way to measure the amount of data that is written during the day?
I would like to know how much data I write to my drive on a daily basis, to check whether I'll kill an SSD too soon by throwing a lot of stuff at it every day...

Is there any software that can help measure that?
 
Code:
:~$ iostat -m -d /dev/sdc
Linux 2.6.35-22-generic  	18/11/10 	_x86_64_	(4 CPU)

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sdc              17.28         0.24         2.12        589       5163
I'm kinda surprised I've written 5GB to the drive since bootup (45 mins ago). Will have to keep an eye on this.
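If it helps, iostat can also report at a fixed interval rather than just the since-boot totals, so you can watch the write volume build up over the day. A rough sketch (sdc is just my drive, and 600 seconds is an arbitrary interval):

Code:
:~$ iostat -m -d /dev/sdc 600
# the first report is the since-boot average;
# each following report covers only the previous 600-second interval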

No idea if there is a similar Windows command.

On the OCZ Vertex 2 Drives there are lifetime read/write attributes in the SMART info.

I don't know if the SMART figures take into account write amplification. Anything you run from the OS won't.
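If you want to read those SMART attributes on Linux, smartmontools can dump them. A sketch only - the attribute names vary by vendor and smartmontools version, so what shows up for a Vertex 2 may be labelled differently or just appear as an unknown attribute:

Code:
:~$ sudo smartctl -A /dev/sdc
# look for something like attribute 241 (e.g. Lifetime_Writes_GiB on SandForce
# drives, or Total_LBAs_Written in 512-byte sectors on some other vendors)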
 


Uuu... 5GB, that is a significant amount I'd say...
See, that's the problem - an SSD's flash has around 5K write cycles, which means that if I write the full drive 5,000 times it will die. So I guess I won't be using an SSD as a database work drive...

I wonder if there is anything like that on Windows?
 
Looking at it more closely, 4GB of that seems to be initialising my swap partition at startup. On Linux we have a separate swap partition rather than the pagefile Windows uses.

I may move the swap partition onto a HDD. I've yet to run out of memory so I won't see any performance drop.

Since 9:45am I've written 500MB to the drive. That looks like a more reasonable amount.

Code:
Partition:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
Boot              0.01         0.00         0.00          0         10
Swap              0.00         0.00         0.23          0       4096
System            1.95         0.03         0.06        556       1094
User Files        1.72         0.00         0.03         76        463
Total             3.71         0.04         0.31        633       5663
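Before moving swap it's worth checking whether it's actually being used at all; a quick sketch of what I'd check:

Code:
:~$ free -m                         # the swap line shows how much is in use right now
:~$ swapon -s                       # lists active swap areas and their current usage
:~$ cat /proc/sys/vm/swappiness     # lower values make the kernel less eager to swap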
 
On reflection I wouldn't trust the write data figures I've posted from iostat.

Having a play around copying, moving and deleting files, it appears deleting a file adds the file's size to the data written.

Obviously deleting a file doesn't physically overwrite the entire file, and iostat doesn't exhibit this behaviour when deleting files from an HDD, so I'm guessing it must be something to do with iostat not interpreting TRIM commands correctly.
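One way to sanity-check a single operation is to snapshot the kernel's raw counters before and after it. A rough sketch using /proc/diskstats (sdc is just my device; field 10 on each line is sectors written, in 512-byte sectors, and I'd expect it to show the same TRIM oddity if discards are being counted as writes):

Code:
:~$ before=$(awk '$3=="sdc" {print $10}' /proc/diskstats)
:~$ # ... copy / move / delete the files you want to measure ...
:~$ after=$(awk '$3=="sdc" {print $10}' /proc/diskstats)
:~$ echo "$(( (after - before) * 512 / 1024 / 1024 )) MB written"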
 

500MB in a couple of hours... that is scary :)
I wonder if running a database on that SSD is still a good idea... Maybe two VelociRaptor drives in RAID 0 would be better...
 
500MB is not really much.

Think about it this way. You have a 10MB Word document you are working on throughout the day. You make an edit at least once every 10 mins (the default autosave frequency). That alone will generate 420MB if you work on it for 7 hours.
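Rough arithmetic for that example, assuming the pessimistic case of a full 10MB rewrite on every autosave:

Code:
:~$ echo "$(( 7 * 60 / 10 * 10 )) MB"   # 42 autosaves x 10MB = 420 MB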

Intel for its X25-M says:
3.5.4
Minimum Useful Life
The drive will have a minimum of 5 years of useful life under typical client workloads
with up to 20 GB of host writes per day.

ftp://download.intel.com/design/flash/NAND/mainstream/mainstream-sata-ssd-datasheet.pdf

OCZ doesn't appear to publish similar information, but the technology is similar, so we shouldn't be talking orders-of-magnitude variation. They would still have a 5-year lifetime at 5GB/day even if they were 4 times worse than Intel.
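Back-of-envelope figures for that comparison:

Code:
:~$ echo "$(( 20 * 365 * 5 / 1024 )) TB of host writes over 5 years at Intel's 20GB/day"   # ~35 TB
:~$ echo "$(( 5 * 365 * 5 )) GB over 5 years even at a quarter of that rate (5GB/day)"     # ~9 TB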

As for whether running the database on an SSD is still a good idea - I guess it depends on your database and usage patterns. There is no denying that if your DB is I/O-limited on reads you'll see a vast improvement in performance.

If you will be writing heavily to the drive on a continual basis, then yes, a rotational drive or an enterprise-class SLC NAND drive may be a better fit.
 
I gave it some thought, and I think I should be OK even with MLC SSD drives.

I work with a few, maybe 3-4, different 2GB files a day. How many times am I going to write the whole 60GB Vertex 2E drive in a day? Twice? Four times? I don't think so. But let's say I will fill up that drive 4 times a day. Working 5 days a week, that is 260 days a year, so 260x4 = 1040 full-drive writes a year. If those MLC drives have around 5000 write cycles, then I'm safe for almost 5 years before the drive dies. And even that is an overestimate, I think - I won't really be writing 240GB a day...

Is that correct? Is my thinking correct?
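For what it's worth, the arithmetic looks right as a rough estimate, ignoring write amplification and taking the ~5000-cycle figure at face value:

Code:
:~$ echo "$(( 260 * 4 )) full-drive writes per year"                                  # 1040
:~$ awk 'BEGIN { printf "%.1f years to reach 5000 write cycles\n", 5000 / 1040 }'     # ~4.8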
 
So with all this in mind, does this mean I should not get an SSD for PostgreSQL database use? Also, is it a fact that a larger drive would last longer, since there is more free space for allocation/TRIM/etc.?
 