I'm done with Seagate.

That is bad, they shouldn't be swapping drive types on RMA.

I have been recommending them in build advice lists a fair bit lately, and have used a couple in builds myself; so far (fingers crossed) no problems.... their cost is what attracts people.

I have always used Seagate in the past, and I've had a few; the only ones that failed on me were 2 inside FreeAgent Extreme external boxes, the notorious 1.5TB models. They both now have 2TB drives in them, one of them a Hitachi I think :lol:
That said, a lot of mine have been enterprise-class drives, which may be different in terms of reliability.

Hitachi are getting better, Toshiba are, I think, still a little way behind, and WD are pretty steady but have had their fair share of issues in the past; it just looks like it's Seagate's turn this time around.


Lately I've been liking WD Blacks. The last drive I sent back was one of those, but I had no hassles: it's all done off the serial number as it should be, no screwing around with proof of purchase etc. They just honour the warranty on the product, period. They do send back refurbished drives though, with a different label.

The latest ones I've bought have been the RE enterprise-class ones; the 2TB models are at a nice price point.


How do you ID the 2 vs 3 platter versions?
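One practical starting point (a sketch, not manufacturer guidance): the platter count usually isn't reported directly by the drive, but different platter revisions often ship under distinct model or firmware strings, which you can read with `smartctl -i` and then look up. A minimal Python sketch that pulls those identity fields out of smartctl-style output; the sample text and model string below are hypothetical.

```python
# Sketch: extract the identity fields from `smartctl -i`-style output.
# Platter count isn't in SMART data, but the exact model/firmware string
# usually distinguishes drive revisions. Sample output is hypothetical.

import re

def parse_smart_info(text: str) -> dict:
    """Extract key identity fields from `smartctl -i` style output."""
    fields = {}
    for key in ("Device Model", "Firmware Version", "User Capacity"):
        m = re.search(rf"^{key}:\s*(.+)$", text, re.MULTILINE)
        if m:
            fields[key] = m.group(1).strip()
    return fields

# Hypothetical sample; real output comes from `smartctl -i /dev/sdX`.
sample = """\
Device Model:     WDC WD1002FAEX-00Z3A0
Firmware Version: 05.01D05
User Capacity:    1,000,204,886,016 bytes [1.00 TB]
"""

info = parse_smart_info(sample)
print(info["Device Model"])
```

From there it's a matter of matching the model/firmware suffix against spec sheets or datasheets for the revision in question.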
 
I have a number of WD/Seagate/Hitachi drives running, I like all three.






Meaningless graph is meaningless; Backblaze are an online backup company that maximize profits by using desktop HDDs in ridiculously abusive environments and running them to death.

All that chart shows is that Seagate desktop drives don't stand up as well as the others to being mistreated in ways no enterprise user would attempt, never mind the target audience.

[Image: Storage_Pod.jpg — Backblaze storage pod]



I disagree. The drives are in temperature-controlled environments and just run constantly, that's all, and the way they are mounted is fine. Yeah, they use desktop drives, but they have proven that desktop drives are not that much different to enterprise drives! :D
 
Just for the record - and to put a few points straight.....;)
Back in the pre-flood days... the hard drives they were kicking out were OK.

Since the new factories... they are abysmal, whereas WD and Hitachi appear to have recovered back to their old standard pretty well.
1.) Seagate's plants were completely UNAFFECTED by the flooding. At no point were the assembly, head, or component/PCB factories underwater. And the "new" Seagate plants which have come online since 2010 have nothing to do with mechanical HDDs.

2.) The Seagate drives failed so badly in the Backblaze study for a number of reasons, as was pointed out above; the most important is that the drives they were using were not designed for data centers. They were bog-standard desktop Barracudas (and not the mission-critical range), as the company explained not long after the report.

3.) The reason more drives are failing now, from both WD and Seagate, and warranties are shrinking, is the number of heads/HGAs and platters in the newer drives. Most of the 4TB/6TB drives now have 4 or 8 heads/HGAs, and with them a greater likelihood of mechanical failure. No other reason.
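The head-count argument in point 3 can be made concrete with a toy model (an illustration under an independence assumption, not Seagate's or WD's actual failure maths): if each head/HGA independently has some probability of failing in a year, the drive fails when any head does, so adding heads raises the drive-level failure rate.

```python
# Toy model: the drive fails if ANY head fails; heads assumed independent.
# p_head is a made-up per-head annual failure probability, not real data.

def drive_failure_prob(p_head: float, n_heads: int) -> float:
    """P(drive fails) = 1 - P(no head fails) = 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p_head) ** n_heads

p = 0.005  # hypothetical 0.5% per head per year
for n in (2, 4, 8):
    print(n, "heads:", round(drive_failure_prob(p, n), 4))
```

Real heads aren't independent (shared actuator, shared environment), so this overstates or understates depending on the failure mode, but it shows why more heads and platters push in the direction of more failures.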
 
After my drives of choice (Samsung F4 2TB) became too small, all still faultless years on might I add, I replaced them with 4TB Seagate NAS drives about a year ago, and (touch wood) all eight have been rock solid, powered 24x7. Against the Backblaze numbers I can say at least mine have been good. It would be nice to think they are getting reliability sorted, because their 3TB-and-under drives seem pants. Here's hoping the Seagate 8TB is reliable; it's a lot of data to lose!

My 8 Samsung 2TB F4s, bought for £50 each 4 years ago, are all fine (touch wood). I had to send back my Samsung 3TB; it's one of only 2 drives to fail on me out of 30 over the last 10 years. I think temperature or on/off cycles must have a significant effect on longevity, because mine almost never fail and they are on 24/7.
 
Just for the record - and to put a few points straight.....;)

1.) Seagate's plants were completely UNAFFECTED by the flooding. At no point were the assembly, head, or component/PCB factories underwater. And the "new" Seagate plants which have come online since 2010 have nothing to do with mechanical HDDs.

2.) The Seagate drives failed so badly in the Backblaze study for a number of reasons, as was pointed out above; the most important is that the drives they were using were not designed for data centers. They were bog-standard desktop Barracudas (and not the mission-critical range), as the company explained not long after the report.

3.) The reason more drives are failing now, from both WD and Seagate, and warranties are shrinking, is the number of heads/HGAs and platters in the newer drives. Most of the 4TB/6TB drives now have 4 or 8 heads/HGAs, and with them a greater likelihood of mechanical failure. No other reason.

Was it not the case that none of the drives were enterprise drives? So like was being tested with like, and even used out of spec, the other manufacturers fared better.

I struggle to see how a disk in a cool datacentre can last less time than the same disk in a home user's computer, even if it is used out of spec. It's not like they were bouncing it on the ground.
 
I struggle to see how a disk in a cool datacentre can last less time than the same disk in a home user's computer, even if it is used out of spec. It's not like they were bouncing it on the ground.

It's down to the usage that data center/enterprise disks normally see: there can be data requests queued 24/7.

Of course a desktop PC could also see this usage, but typically it doesn't.
 
I disagree. The drives are in temperature-controlled environments and just run constantly, that's all, and the way they are mounted is fine. Yeah, they use desktop drives, but they have proven that desktop drives are not that much different to enterprise drives! :D

There is a difference between WD consumer drives and WD RAID drives. On the RAID drives the spindle is held at both ends; on the WD Blue/Green the spindle is only held at one end, and there are other differences such as vibration protection.

http://www.wdc.com/en/products/products.aspx?id=580#Tab1

Think of HDDs like watch movements: there is the movement in a £200 watch and the movement in a £5000 watch. Both are essentially the same, but one will be better.
 
Just for the record - and to put a few points straight.....;)

1.) Seagate's plants were completely UNAFFECTED by the flooding. At no point were the assembly, head, or component/PCB factories underwater. And the "new" Seagate plants which have come online since 2010 have nothing to do with mechanical HDDs.

2.) The Seagate drives failed so badly in the Backblaze study for a number of reasons, as was pointed out above; the most important is that the drives they were using were not designed for data centers. They were bog-standard desktop Barracudas (and not the mission-critical range), as the company explained not long after the report.

3.) The reason more drives are failing now, from both WD and Seagate, and warranties are shrinking, is the number of heads/HGAs and platters in the newer drives. Most of the 4TB/6TB drives now have 4 or 8 heads/HGAs, and with them a greater likelihood of mechanical failure. No other reason.

Regardless, they don't seem to be trying to reclaim their old reputation :( Up until the 7200.11s I went through hundreds of Seagate drives between my own builds, builds for other people and work-related stuff, with, to my knowledge, only one failing before it was retired naturally. Since the 7200.11s I've avoided them like the plague, as the failure rate went through the roof, and my one or two ventures back have seen closer to 1 in 3 drives failing prematurely or having other issues that can't easily be overlooked.
 
WD were also quite dodgy with the WD1003FZEX (1TB Caviar Black), where they basically rebranded WD10EZEX (1TB Caviar Blue) drives and stuck a Caviar Black label on them, resulting in worse access times. I unfortunately ended up with one of the rebranded ones (though the seller I bought it from partially refunded me).

The later 1TB Blacks were single-platter, hence the spindle was only held at one end (there's no benefit to holding both ends on a single-platter drive). Because of this they shared the same casing as the Blues; however, I expect there are differences inside (no ramp technology, dual processor, shared cache).

Like you, I own 2 of the single-platter 1TB Blacks (running 24/7, 365 days a year on a Windows server), and I'm not worried about them.
 
Was it not the case that none of the drives were enterprise drives? So like was being tested with like, and even used out of spec, the other manufacturers fared better.

I struggle to see how a disk in a cool datacentre can last less time than the same disk in a home user's computer, even if it is used out of spec. It's not like they were bouncing it on the ground.

Hit the nail on the head!

None of the drives are enterprise, but only Seagate has this failure rate.

Even if the environment is harsh, ALL the drives are subject to the same conditions. Maybe the Seagate 3TB has a particular weakness that only shows up in this usage, but the other manufacturers don't seem to suffer from it. :(
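For context on what that failure rate actually measures: Backblaze-style reports quote an annualized failure rate, roughly failures divided by accumulated drive-years. A minimal sketch of the calculation (the drive counts below are made up for illustration, not Backblaze's figures):

```python
# Annualized failure rate (AFR): failures per drive-year, as a percentage.
# drive_days accumulates one day per drive per day of operation, so a
# fleet of 100 drives running a full year contributes 36,500 drive-days.

def annualized_failure_rate(failures: int, drive_days: float) -> float:
    """AFR (%) = failures / (drive_days / 365) * 100."""
    drive_years = drive_days / 365.0
    return failures / drive_years * 100.0

# Hypothetical pod: 100 drives running a full year, 9 failures.
afr = annualized_failure_rate(9, 100 * 365)
print(f"{afr:.1f}% AFR")
```

The same AFR can come from many short-lived drives or a few long-lived ones, which is one reason comparing a small home fleet against a datacentre fleet is shaky.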
 
It's down to Seagate's business model of "piling 'em high and selling 'em cheap". Quality suffers, but that's okay because they've shortened the warranty, so it doesn't affect them too much.
 
Well... Seagate took a turn for the worse in reliability after the flood... odd timing ;)

Regardless of their expected usage scenario, the drives from the other manufacturers had the same expected usage scenario and fared so much better that the comparison looks like a joke.

The comparison is of course valid: if they are more likely to fail in that environment, then they are also more likely to fail in a home environment, as I know from personal experience.
 
Was it not the case that none of the drives were enterprise drives? So like was being tested with like, and even used out of spec, the other manufacturers fared better.

I struggle to see how a disk in a cool datacentre can last less time than the same disk in a home user's computer, even if it is used out of spec. It's not like they were bouncing it on the ground.

If the picture above is an accurate example of how they are being used, the vibration of 35 other drives plus large (probably cheap) fans is highly likely to shorten the lifespan of a drive, particularly one made for home/office usage.
 