WD Green 'Load Cycle Count' Common Failure

Just heard about this common failure and checked my drives. Apparently the rated limit is somewhere between 300,000 and 1,000,000 load cycles.

Mine are going up every second. Why do some drives have low Load Cycle Counts but more hours? The 1TB has far fewer cycles than the 2TB and it's about two years older.

1.5TB
Power on hours: 9431
'Load Cycle Count': 119816

1TB
Power on hours: 14477
'Load Cycle Count': 4398

2TB
Power on hours: 6672
'Load Cycle Count': 33705

500GB
Power on hours: 16345
'Load Cycle Count': 4544

Found this useful post, written by an 11-year-old: http://www.geekstogo.com/forum/topic/285453-whats-the-optimal-time-in-seconds-for-hard-drives-to-park-heads/page__p__1896472#entry1896472

Shouldn't have to do that, but I changed the park time from 8 seconds to 5 minutes. Shouldn't have to worry about it now.
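
For anyone wanting to check their own drives, here's a minimal sketch of how to pull these two values out programmatically. It assumes smartmontools is installed (smartctl also runs on Windows) and uses /dev/sda purely as an example device path - adjust to suit, and note the raw-value column can be formatted differently on some drives:

Code:
import subprocess

def read_smart(device="/dev/sda"):
    # Run smartctl's attribute dump (usually needs admin/root rights).
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=True).stdout
    wanted = ("Power_On_Hours", "Load_Cycle_Count")
    values = {}
    for line in out.splitlines():
        parts = line.split()
        # Attribute rows are: ID# NAME FLAG VALUE WORST THRESH TYPE UPDATED
        # WHEN_FAILED RAW_VALUE - the raw value is the figure tools like
        # CrystalDiskInfo show for these two attributes.
        if len(parts) >= 10 and parts[1] in wanted:
            values[parts[1]] = int(parts[9])
    return values

print(read_smart())  # e.g. {'Power_On_Hours': 9431, 'Load_Cycle_Count': 119816}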
 
Hi Nate,

I'd be interested in any links to actual failures that this has caused - I've got a stack of the WD green drives (22 at last count :eek:) and have been following this topic for a while.

The high LCC count has been an issue that seems to have been blown out of all proportion, and lots of people have worried about it without there being any evidence of it causing a significant reliability problem. WD has said that the drives are rated as good for 1,000,000 LCCs - the original design spec was 300,000, but that doesn't mean the drive will fail when it reaches that figure, or 1,000,000 for that matter.

On your 1.5TB drive something is accessing it more often than the other drives, on average every 283 seconds. Setting the idle3 timer to 5 minutes will mean the drive doesn't unload the heads and do its power-saving thing, so other parts of the drive will probably wear out faster instead. If it has taken 2 years for your 1.5TB LCC count to reach circa 113K then it would take about 17 years to reach 1,000,000 - personally I'd expect any drive to wear out or fail by that point.

The wdidle3 tool is not described as being for use on the Green drives, and although it obviously works there is a question over what it does to your warranty. IIRC there is a post by a WDC rep on their forums saying it was OK to use on the Green drives, but that's all I found.

So in your case, I'd have left well alone :).
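
Just to show where the 283 seconds and 17 years come from, here's the arithmetic using the figures from the first post (nothing assumed beyond the 1,000,000 rating WD quote):

Code:
power_on_hours = 9431      # 1.5TB drive, from the SMART data in the first post
load_cycles = 119816
rated_cycles = 1_000_000   # WD's quoted LCC rating

print(power_on_hours * 3600 / load_cycles)   # ~283 seconds per load cycle
print(rated_cycles / load_cycles * 2)        # ~17 years at the current rate
                                             # (2 = years of use so far)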
 
Hi James,
I'd say that you have either been incredibly unlucky or there is something about the way they have been handled or their environment that is causing them all to fail. (E.g. dropped in transit, run too hot, dodgy power supply, etc.)

Individuals only buy such small numbers of drives that you can't form a statistically significant view of component reliability. It doesn't stop us all having our own favorite (and hated) brands though :)
 
They are green drives, so I'd guess at energy saving :)
Edit - it's actually the unloading of the heads that happens after 8s of inactivity - I'm not too sure about spinning down
 
They are green drives, so I'd guess at energy saving :)
Edit - it's actually the unloading of the heads that happens after 8s of inactivity - I'm not too sure about spinning down

Yeah, it doesn't spin down. The other thread shows normal start/stop counts.
 
Hi James,
I'd say that you have either been incredibly unlucky or there is something about the way they have been handled or their environment that is causing them all to fail. (E.g. dropped in transit, run too hot, dodgy power supply, etc.)

Individuals only buy such small numbers of drives that you can't form a statistically significant view of component reliability. It doesn't stop us all having our own favorite (and hated) brands though :)

I think I handle HDDs well; also, I had a 320GB drive in the same machine for about two years prior.
 
I think I handle HDDs well; also, I had a 320GB drive in the same machine for about two years prior.

I wasn't trying to suggest you mishandled them - just that there could be a hidden common cause contributing to the failures.

Out of interest, how did they fail? Were they DoA or did they start misbehaving after a while?
 
Hi Nate,

I'd be interested in any links to actual failures that this has caused - I've got a stack of the WD green drives (22 at last count :eek:) and have been following this topic for a while.

The high LCC count has been an issue that seems to have been blown out of all proportion, and lots of people have worried about it without there being any evidence of it causing a significant reliability problem. WD has said that the drives are rated as good for 1,000,000 LCCs - the original design spec was 300,000, but that doesn't mean the drive will fail when it reaches that figure, or 1,000,000 for that matter.

On your 1.5TB drive something is accessing it more often than the other drives, on average every 283 seconds. Setting the idle3 timer to 5 minutes will mean the drive doesn't unload the heads and do its power-saving thing, so other parts of the drive will probably wear out faster instead. If it has taken 2 years for your 1.5TB LCC count to reach circa 113K then it would take about 17 years to reach 1,000,000 - personally I'd expect any drive to wear out or fail by that point.

The wdidle3 tool is not described as being for use on the Green drives, and although it obviously works there is a question over what it does to your warranty. IIRC there is a post by a WDC rep on their forums saying it was OK to use on the Green drives, but that's all I found.

So in your case, I'd have left well alone :).

I didn't see any articles or comments blaming this for a failing drive. All I saw was the 300,000 statement and the fact a WD rep said it can cause wear and tear and that these drives aren't designed for desktops.

That 1.5TB LCC count might be because my music is on that drive, but the music was on the 500GB drive for two years prior and that drive's LCC count is very low. Since the LCC is so high on this drive I think five minutes is best, as it gets used a lot because I'm always playing music. As the WD rep said, these drives are made to store data, not to be accessed constantly.

I would rather set it to five minutes just to be on the safe side. The 2TB has movies stored on it but is also my download drive. The 1TB is for my games and gets used the least, so maybe I should set that one back to 8 seconds. Maybe I should set different times for different drives - is that possible?
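
On the different-times-for-different-drives question, it's worth seeing how hard each drive is actually cycling before touching any of them. A quick comparison using the figures from the first post (the 300-second cut-off below is just the 5-minute setting being discussed, not a WD figure):

Code:
drives = {                  # name: (power-on hours, load cycle count)
    "1.5TB": (9431, 119816),
    "1TB":   (14477, 4398),
    "2TB":   (6672, 33705),
    "500GB": (16345, 4544),
}

for name, (hours, cycles) in drives.items():
    interval = hours * 3600 / cycles          # average seconds per load cycle
    verdict = "worth a longer timer" if interval < 300 else "leave alone"
    print(f"{name:>6}: one load cycle every {interval:7.0f}s -> {verdict}")

Only the 1.5TB drive is parking more often than every five minutes; the 1TB and 500GB are averaging a cycle every few hours, so there's little to gain from changing them.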
 
I wasn't trying to suggest you mishandled them - just that there could be a hidden common cause contributing to the failures.

Out of interest, how did they fail? Were they DoA or did they start misbehaving after a while?

They failed after a while. WHS (Windows Home Server) stopped doing backups, so I tested the HDDs with WD Diagnostics and they came back as failed. I have now done that WDIDLE thing on the ones I have now.
 
On your 1.5TB drive something is accessing it more often than the other drives, on average every 283 seconds. Setting the idle3 timer to 5 minutes will mean the drive doesn't unload the heads and do its power-saving thing, so other parts of the drive will probably wear out faster instead. If it has taken 2 years for your 1.5TB LCC count to reach circa 113K then it would take about 17 years to reach 1,000,000 - personally I'd expect any drive to wear out or fail by that point.

Not saying the drives do fail or what the problem is, but a huge proportion of electrical devices die when being turned on or off rather than from being in constant use.

There's far more change in state, movement and current when things start and stop than while they keep running - i.e. a drive will draw 2 amps and a lot more power to start spinning than to keep spinning.

In the same way, I can't remember the last lightbulb that went while it was already on; almost every single one that's ever popped in my house did so when being turned on.

An unnecessary amount of loading and unloading the heads, starting and stopping, is almost certainly going to cause more problems and eventual failures than normal continuous usage.
 
Not saying the drives do fail or what the problem is, but a huge proportion of electrical devices die when being turned on or off rather than from being in constant use.

There's far more change in state, movement and current when things start and stop than while they keep running - i.e. a drive will draw 2 amps and a lot more power to start spinning than to keep spinning.

In the same way, I can't remember the last lightbulb that went while it was already on; almost every single one that's ever popped in my house did so when being turned on.

An unnecessary amount of loading and unloading the heads, starting and stopping, is almost certainly going to cause more problems and eventual failures than normal continuous usage.

In general I agree, but the question is surely whether it matters. If the drive manufacturers are any good at reliability engineering (and I bet WDC and the other major manufacturers are) they will have analysed each failure mode and designed the drive so that there are no weak points. As long as the way the heads have been designed to unload doesn't introduce a dominant cause of failure, I'm happy.

Just to play devil's advocate, unloading the heads reduces friction and the load on the motor. It could be that never unloading the heads increases the motor failure rate by more than it reduces the failure rate from head load cycles.

Now I've no idea whether this is the case or not - all I'm trying to say is that it won't be as simple as "using wdidle3 is a good thing". These drives have been out for a few years, and if the manufacturer thought that the head unloading was a fundamental problem that could cost them a fortune in future warranty claims then don't you think they would have changed it by now?
 
Just used this to stop the annoying clicking, but disabling the timer actually means the heads park instantly whenever possible; setting it to 300 seconds has fixed the clicking.
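
If anyone wants to check that the new timer has actually stuck, a rough way is just to read the Load Cycle Count twice with the machine idle and see whether it's still climbing. A sketch along the same lines as the one earlier in the thread (again assuming smartmontools and an example device path):

Code:
import subprocess, time

def load_cycle_count(device="/dev/sda"):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        parts = line.split()
        if len(parts) >= 10 and parts[1] == "Load_Cycle_Count":
            return int(parts[9])      # raw value (last column)
    raise RuntimeError("Load_Cycle_Count not found")

before = load_cycle_count()
time.sleep(600)                       # leave the drive alone for ten minutes
after = load_cycle_count()
print(f"Load cycles while idle: {after - before}")   # should be low single digits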
 
Since this thread has been revived and since my last WD Green 1.5TB drive has just died I will add my own observations.

I bought 5 WD Green 1.5TB drives (first gen I would guess as it was close to the release date). They were bought in two batches, three months apart, from two different sources.

After putting them in a well-ventilated case I built a RAID 5 array with mdadm (Linux). All 5 drives failed within 3 months with bad sector errors. After the first two drives failed, the remaining 3 were used as separate drives (i.e. no more array) but they still failed. I sold off all but one of the RMA replacements. That replacement has also just failed on me with bad block errors. It has only ever been used as a standalone drive and I have probably had it for a couple of years now.

Note that I have a pretty old 320GB WD Blue drive that just keeps running without an issue, and also a couple of 2TB WD Green drives which are also fine. From my own observations I would suggest it is a problem with that generation of WD Green 1.5TB drives, especially if they are put into RAID arrays.

Personally I now stick to Seagate Barracudas or WD Blacks whenever possible. WD Green 2TB drives I would also consider but I would not personally touch 1.5TB WD Green drives again.

RB
 

Yep, although at the time the Green drives' aggressive power management was not known to react so badly with RAID arrays. It came to light around the time I saw issues :mad:. Short overview here.

This however does not explain why the RMA replacement has now also failed with the same error, despite never having been in an array. All drives were 1.5TB Green EARS models IIRC.

RB
 