Long Term Analysis of Intel Mainstream SSDs

http://www.pcper.com/article.php?aid=669

Very interesting article. To summarise: when used as a main/OS drive, the Intel drives fragment internally, and it can really hurt performance (especially writes). To make matters worse, you can't reliably defragment from within your OS; in some cases you'll need to ghost your drive off, format it, and ghost it back on to get performance back to as-new status.

I'm wondering if this is what's behind OCZ's comments about their Vertex drive not performing as well once it has an OS installed. I'd put money on the new Vertex controller using the same clever write-combining techniques, which seem to be both a blessing and a curse.

Can anyone who's been running an X-25 for a few months run an ATTO benchmark and let us know if the results mirror the article's findings?
 
As far as I can tell it's Intel-only at the moment, but it looks like it's going to be an all-SSD issue: the method of improving small-file write performance inevitably causes rapid fragmentation.
The way I understand it, it isn't fragmentation in the normal sense. When the drive remaps sectors, for wear levelling etc., the previously used cells aren't cleared, and because SSDs (unlike HDDs) need an erased cell before they can write to it, the write process takes longer because there's an extra erase step in there.
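
To make that concrete, here's a toy model of that extra erase step (my own sketch with made-up timings, nothing from the article):

PROGRAM_COST_US = 200   # assumed page-program time, made up for illustration
ERASE_COST_US = 1500    # assumed block-erase time - much slower than a program

def write_cost(cell_is_erased):
    # time (in us) to write one cell in this toy model
    if cell_is_erased:
        return PROGRAM_COST_US               # fresh drive: program only
    return ERASE_COST_US + PROGRAM_COST_US   # used drive: erase first, then program

print(write_cost(True))   # 200  -> as-new performance
print(write_cost(False))  # 1700 -> the degraded case the article describes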

That said, according to the article, Intel are working on a defragging app.
 
Hmm, seems like the defrag tool isn't going to be the cure-all solution. The extra writes for moving stuff about are seriously going to compromise the lifespan of the drive. Especially when you consider:

While this activity normally takes weeks to months to bring the drive to the point of no return, the same can be achieved with less than an hour under IOMeter configured to mimic a typical file server access pattern.

That's put a bit of a downer on SSD technology for me, really. I was getting quite excited about it, but this performance/lifespan problem is a bit of a showstopper in my opinion.
 

The defragger wouldn't affect the lifespan of the drive, since it doesn't matter to SSDs that your files aren't in sequential blocks. All it would do is run the erase cycle on the blocks that aren't in use any more; it wouldn't have to move anything (and thus use up write cycles).

Fragmentation isn't really the right word for what's going on, since it's a different issue to HDD fragmentation, but I can't think of anything better.
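
A rough sketch of the sort of pass I mean (my own illustration, not Intel's actual tool; erase() is a stand-in for the real block-erase command):

free_blocks = {4, 7, 9}   # blocks the drive's mapping table says hold stale data
erased_blocks = set()     # blocks already sitting in the erased state

def erase(block):
    pass  # stand-in for the real flash block-erase operation

def pre_erase_pass():
    # erase stale blocks ahead of time; nothing is copied or moved,
    # so no write cycles are spent on blocks holding live data
    for block in free_blocks - erased_blocks:
        erase(block)
        erased_blocks.add(block)

pre_erase_pass()
print(sorted(erased_blocks))  # [4, 7, 9] -> future writes to these skip the erase step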
 
Ah right, I see. I was looking at it from a traditional standpoint (silly boy, it's not rotational, is it?!). Either way, you'd have thought their production tests would have shown this performance degradation.

Is there any official word from them besides: "we're releasing a tool"?
 
This is a standard flash issue. I've written SPI flash drivers before - this is the next level of fun for SSD & OS vendors to sort out.

I'd expect an SSD-friendly filing system standard to appear at some point too.
 

The defrag is not like a normal defrag. This is copying the 60GB off, erasing the drive using the erase command, then performing 60GB of sequential writes back to the flash drive.
This means the drive does suffer a drive-wide one-write hit for each cell (out of your MLC's 10,000 cycles... got to love SLC!).

It's the mapping system in the drive itself (not the OS filing system) that gets fragmented.
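
Back-of-envelope on what that refresh costs in wear (my own numbers, using the 10,000-cycle MLC figure above and assuming a monthly refresh):

mlc_cycles = 10_000      # quoted MLC endurance, cycles per cell
refreshes_per_year = 12  # assume you refresh the whole drive monthly

# each full refresh costs one erase/program cycle per cell
print(mlc_cycles / refreshes_per_year)  # ~833 years of refreshes alone

So the refresh procedure itself barely dents the endurance; it's the day-to-day writes in between that actually use up the cycles.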
 
I saw that article too, a couple of days ago. Quite disturbing, actually.

Yeah, the fragmentation they're talking about is the physical-level fragmentation that goes on with wear levelling and other 'optimisations' performed by the controller.

The more commonly known file fragmentation occurs at the file system level. I believe that SSDs don't require the 'normal' file system defrag like HDDs do, but benefit from free space consolidation and forcing of sequential writes, like Diskeeper's Hyperfast is supposed to do. (Hyperfast is present in my copy of DK2009, but since I don't own an SSD I haven't purchased/tried it.)

What I take away from all this is that SSDs, despite all the hype, are not ready for primetime yet. If I'm paying good money, sacrificing space and putting up with write-erase cycle limitations, the SSD had better perform damn well throughout. I don't want to deal with performance deterioration after a few weeks.
 
Once the major HD players start selling them in reasonable numbers (1 million+ units/month), wait approximately two years for a true-to-life MTBF figure to be calculated from the units sold - then you know it's time to buy ;)

Seagate has been looking into SSDs for the last six years - and put around $80 million into testing/development last year alone - but due to the problems highlighted above (which they pinpointed about two years ago), they have held back on mass production until the drives become more economically viable and reliable.
 
I must admit, the more I read about SSDs, the more it seems like a technology released to the public a little too early. I was considering one, but now I think I'll wait a while.
 
Early adopters - at the bleeding edge, and so they get the joy of the problems and bugs too.

Having said that, for many the issue isn't a problem and they get their performance benefit. Once the drive is in use they don't notice the slowdown, as it's gradual.

Probably better at the moment to run with additional memory instead.
 
Quality SLC SSDs don't have this problem; my Mtron works as well now as it did six months ago.

It's only MLC SSDs that use write combining which suffer from this specific problem.

Intel failing to recognise or otherwise properly deal with this, OCZ's buggy SSDs and the inferior JMicron controller aren't doing SSDs' image any favours.
 
SLC all the way ;)

Pity - whoever produced a 128GB SLC drive for general consumption would clean up the market in the current mess.
 
Or in a few years' time (if not before) we will have drives based on Phase Change Memory (PRAM) and wonder why we did all this messing about with NAND flash in the first place...


Surely anybody who is really serious gets an ioDrive anyway. ;) But that's still NAND flash technology...
 
I've been reading loads of threads on the web since SSDs came out and came to the conclusion that they're just not ready for general usage yet.

Looks like my SCSI U320 array is staying a bit longer!!!
 
The thing is, from his fresh install of an OS after erasing the drive in DOS, the performance 'hit' still leaves most reads somewhere between 115-250MB/s once you get to file sizes that could 'normally' read at max speed. The average is probably still 180MB/s or more, meaning that while it's not quite the constant 250MB/s you might think you'll be getting with the drive, it's still massively higher than any single platter-based drive can provide.

Essentially the argument here is that the drive is way faster whatever happens, and that's what it was always supposed to be. Writes aren't bad, but they aren't great on SSDs anyway; you don't install stuff all that often, and a slightly slower install won't kill most people. You install a game once and read from its data dozens, if not hundreds or thousands, of times, and in this case the reads never once drop below the read speeds of a normal drive, while access times are still infinitely better.

It's a bit too scaremongery for my liking; it really needs to state quite clearly that SSDs are still uber, just not quite as uber as you thought. Still ludicrously overpriced and ridiculous, though; none of that article changes my mind at all. Let's see: you can get a 64GB Samsung SSD that performs way above its spec - similar writes to the Intel, about 150MB/s on reads, blows a normal HDD away - but it's £100. When you can get 640GB easily for £50, that's still 20x the price per GB, and on a drive that's really too small for an OS and all your apps.
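
The price-per-GB maths, for what it's worth:

ssd_price, ssd_gb = 100.0, 64.0   # the 64GB Samsung quoted above, GBP
hdd_price, hdd_gb = 50.0, 640.0   # the 640GB HDD quoted above, GBP

print(ssd_price / ssd_gb)                           # ~1.56 GBP per GB
print(hdd_price / hdd_gb)                           # ~0.08 GBP per GB
print((ssd_price / ssd_gb) / (hdd_price / hdd_gb))  # 20.0 -> 20x per GB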

However, saying all that, to get a bit of performance back there should be a 'smart' option in OSes that lets you turn off write combining for the OS install, and when installing a game or anything big. It would take a little longer to install, but you'd get a perfect install of sequential files that could then be 'locked' into place. Then take blocks of, say, 4GB (the average swapfile size), enable write combining on that section, and every few days have the drive automatically change which 4GB block it uses for the pagefile to even out the usage.
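
Something like this rotation, say (a toy sketch of the idea only - as far as I know no current drive or OS actually does this):

REGIONS = 4            # assumed: four candidate 4GB regions set aside
ROTATE_EVERY_DAYS = 3  # move the pagefile every few days

def pagefile_region(day):
    # which 4GB region hosts the pagefile on a given day
    return (day // ROTATE_EVERY_DAYS) % REGIONS

print([pagefile_region(d) for d in range(12)])
# [0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3] -> wear spread evenly across regions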

Really, the only time write performance is horrible on the 'cheap' SSDs in a way that constantly affects you is mostly when writing data to the pagefile and the like anyway. Given how long a Windows install lasts, does it matter if it was horribly slow and took an extra 30 minutes? No.

But again, I'll point out that the performance is by no means horrible; it can just be optimised further. I'll jump on the bandwagon when you can get maybe a 200GB drive for £100-150; till then it's too small and too expensive.

What's the point of spunking £100 on a 64GB drive, just to have most of my games load off an HDD anyway, losing almost all the speed benefit?

Remember also that HDDs vary wildly in read speed as well: an Intel with a max read of 250MB/s and a minimum of 110MB/s, compared to an HDD doing 100MB/s max and 50-60MB/s minimum on sequential reads.
 
I was actually thinking about it again earlier. My brother was saying he would consider getting an SSD, and will probably not bother upgrading till he can get one and do a nice full upgrade; he was asking me when they should be 'affordable'.

I was thinking a good compromise would be this: because of how rugged SSDs are - no platters, no moving parts and so on - it 'should' be incredibly easy to make a 3.5" form factor SSD with a detachable part. In the main compartment you have a section for the majority of your long-term files: slightly slow at writing but with uber sequential read performance, made of SLC memory and capable of lasting years even with the odd defrag thrown in. Then have a range of different add-ons, from a bog-standard 4GB MLC pagefile add-on up to a constant-use 128GB industry-standard add-on for video rendering. Have these use write combining and, most importantly, be replaceable.

That way you buy a ridiculously expensive, fantastic-read-performance, low-power and silent 128GB SLC drive, then for the pagefile you have a write-combining, uber-fast-writing, cheapo 4GB add-on that won't last the same length of time, but due to its size is cheap to replace. So the 128GB will last five years easily, and the 4GB might last six months, a year or two years depending on use; but when it starts to fail, a £15 replacement brings you back up to speed.

This is the major downfall at the moment: there's no flexibility, but there should be. It's not like it needs airtight chambers for platters; it's plug and play, as simple as USB flash drives, so there's no reason not to make them completely configurable for a range of different types of use. If you write constantly changing files all the time, buy a cheaper MLC add-on; if you barely write files and need read performance for a database, stick it on an uber-fast SLC add-on.

I can see the industry moving that way in the future - the best of both worlds in the same drive. Why the heck not?
 