Oh geez... Backblaze again... any discussion of Backblaze in relation to consumer drives is simply irrelevant. When a "source" takes consumer drives and puts them into service contrary to the manufacturer's recommendations, the data is irrelevant. When a server farm is a series of PC cases on flimsy shelving with the drives held in place by rubber bands, the data is irrelevant. When you place consumer drives, which have features such as "head parking" that the manufacturer explicitly advises against using in a server environment, into exactly that environment, the data is irrelevant. How many people would bother to read an article titled "Here's the latest scoop on the reliability of devices installed in direct conflict with the manufacturer's specifications"?
Many consumer drives include a feature called head parking. When the HD is not in use, the head is moved off the platters and "parked". The feature serves well in consumer and office environments... for example, a colleague bumps ya desk while carrying a box of copy paper, or plops it down while loading the machine... or ya dog, napping under ya desk, jumps up when the doorbell rings. When the heads are parked, no damage occurs from the vibration. Consumer HDs are rated for between 250k and 500k parking cycles. When the HD is idle, or while writes are held in RAM or the HD cache, the head moves to the parked position. A typical consumer drive might see 25,000 - 50,000 parking cycles per year... maybe as much as 100k for an enthusiast box... in which case you hopefully didn't cheap out in your HD selection.
Now take that same exact physical drive and use it in a server environment: it will have different firmware, and it will not have the head parking "feature". This is because server drives get many times more data access requests and can therefore burn through those rated parking cycles in a matter of months. Because of economies of scale, that same drive might be sold as a consumer device for $70, while the server version of the drive costs much more. What Backblaze does, with no worries about data protection given its redundancy, is buy consumer drives instead of server drives because they are cheaper. And because drives are replaced so often, they were secured in place only by rubber bands... tho hopefully they've moved away from that silliness by now. Backblaze sells its service based upon price, so proper server room design just isn't there: no building designed with thick concrete floors, no racks firmly secured in place to prevent vibration.
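To put rough numbers on why that matters, here's a minimal back-of-the-envelope sketch in Python. The rated cycle count and the consumer figures come from the numbers above; the server-side parking rate is purely a hypothetical assumption (one park roughly every minute of intermittent idle), used for illustration, not a measured value.

```python
# Back-of-the-envelope: how long a drive's rated head-parking cycles
# last in a consumer box vs. a busy server.

RATED_CYCLES = 300_000        # within the 250k-500k range quoted above

# Consumer usage: the worst-case figure of ~50k parks per year.
consumer_cycles_per_year = 50_000

# Hypothetical server usage (an assumption, not a measured value):
# constant intermittent I/O plus an aggressive idle timer, say one
# park per minute around the clock.
server_cycles_per_day = 24 * 60   # 1,440 parks/day

consumer_years = RATED_CYCLES / consumer_cycles_per_year
server_months = RATED_CYCLES / server_cycles_per_day / 30

print(f"Consumer box: rated cycles last ~{consumer_years:.0f} years")
print(f"Server (assumed rate): rated cycles last ~{server_months:.0f} months")
```

Under those assumptions the rated cycles last about 6 years on a desktop but only around 7 months in the server scenario, which is the "matter of months" point above.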
So what happens is... the very feature which extends the life of a consumer drive is what's actually killing these drives when they're inappropriately placed in a server environment. Alternatively, we do have actual published RMA data readily available telling us what % of consumer drives are actually being RMA'd. The data is collected and published every 6 months, covering drives that failed between 6 and 12 months of operation. While this data doesn't tell us what % of drives might fail during their full warranty periods, it is statistically relevant because all mechanical drives should follow the same failure / time curve. Besides, what value is lifetime data? By the time ya get it, it's irrelevant, as those drives are no longer on the market. And it also excludes DOAs, which can result from issues outside the manufacturer's control, such as user error and mishandling.
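That "same failure/time curve" argument can be made concrete. A minimal sketch, assuming drive lifetimes follow Weibull distributions with a shared shape parameter (an illustrative assumption, not something the RMA reports state): if two models share the curve's shape, the one with more failures in the early window also has more failures at every later point, so the early window preserves the ranking.

```python
import math

# Weibull CDF: P(failure by time t) with shape k and scale lam (years).
def weibull_cdf(t, k, lam):
    return 1.0 - math.exp(-((t / lam) ** k))

K = 1.2  # assumed shared shape parameter (illustrative, not measured)

# Two hypothetical models differing only in characteristic life (years).
good_lam = 40.0
dud_lam = 15.0

for t in (1.0, 3.0, 5.0):
    pg = weibull_cdf(t, K, good_lam)
    pd = weibull_cdf(t, K, dud_lam)
    print(f"t = {t:.0f}y   good model: {pg:.2%}   dud: {pd:.2%}")
# With a shared shape, the ranking at 1 year matches the ranking at
# 5 years, which is why a 6-12 month RMA window is a usable proxy.
```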
To avoid statistical anomalies, I always look at the data for the last two periods... and ya know what... there's not a lot of difference between manufacturers, but there are huge differences between models. If ya look at storagereview.com's historical database, you will see that Seagate has the honor of delivering both the most reliable and the least reliable drives. Anyway, here's the combined data for the last 2 reporting periods (12 months):
- HGST 0.975%
- Seagate 0.825%
- Toshiba 0.93%
- Western Digital 1.15%
Not exactly a Secretariat-like win here... So it's not so much a matter of which brand but which model. Just avoid the duds and you're OK. Among the individual winners in the dud (> 2% failures) category are:
- 10.00% Seagate Desktop HDD 6 TB
- 6.78% Seagate Enterprise NAS HDD 6 TB
- 5.08% WD Black 3 TB
- 4.70% Toshiba DT01ACA300 3 TB
- 3.48% Seagate Archive HDD 8 TB
- 3.48% Hitachi Travelstar 5K1000 1 TB
- 3.42% Toshiba X300 5 TB
- 3.37% WD Red WD60EFRX 6 TB
- 3.06% WD Red Pro WD4001FFSX 4 TB
- 3.04% WD Black WD3003FZEX
- 2.95% WD Red 4 TB SATA 6Gb/s
- 2.89% Toshiba DT01ACA300
- 2.81% Seagate IronWolf 4 TB
- 2.67% WD Green WD60EZRX
- 2.49% WD Purple Video Surveillance 4 TB
- 2.39% Toshiba DT01ACA200
- 2.37% WD Purple WD40PURX
- 2.29% Seagate Enterprise NAS HDD ST3000VN0001
- 2.23% WD Red Pro WD3001FFSX
- 2.18% WD Green WD30EZRX
- 2.02% WD Red WD40EFRX
If ya counting (each model once, even where it shows up in both reporting periods), that's 5 for Seagate, 10 for WD, 3 for Toshiba and just 1 for Hitachi.
WD has about 40% market share and produced 10 duds, or 2.5 duds per 10% of market share.
Seagate has about 37% market share and produced 5 duds, or about 1.4 duds per 10% of market share.
Toshiba has about 23% market share and produced 3 duds, or about 1.3 duds per 10% of market share.
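That normalization is worth making explicit. A quick sketch, taking the market-share figures quoted above as given (I haven't verified them independently):

```python
# Duds per 10% of market share: scale each maker's dud count by its
# market presence, so a big seller isn't penalized just for selling more.
makers = {
    "WD":      {"share": 0.40, "duds": 10},
    "Seagate": {"share": 0.37, "duds": 5},
    "Toshiba": {"share": 0.23, "duds": 3},
}

for name, m in makers.items():
    per_10pct = m["duds"] / (m["share"] * 10)   # duds per 10% of share
    print(f"{name}: {per_10pct:.1f} duds per 10% market share")
```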
Does that have any significance? ... well, if ya avoid the duds, then no. The fact is, if ya avoid the duds, your chances are only about 1 in 100 of experiencing a drive failure between 6 and 12 months. Over the last 8 reporting periods (4 years), manufacturers of consumer drives have broken the 1.00% failure rate ceiling in 17 of 32 instances:
Seagate = 0 (0.60 - 0.95%)
HGST = 5 (0.60 - 1.13%)
Toshiba = 6 (0.80 - 1.54%)
WD = 6 (0.90 - 1.26%)
Now let's not look at this as a big win for Seagate; the range of numbers over those 8 periods is indicated in parentheses. So, yet again, with regard to consumer drives used in a consumer environment, there is no evidence which justifies any vast measure of superiority of one HD brand over another. While an argument, not a conclusive one mind you, could be made that over the last 4 years Seagate has fared better overall, from best to worst over the last year we are talking about 8 versus 11 failures per 1,000 drives per year, and that is not a big enough difference to lie outside the realm of normal statistical variation.
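Whether 8 vs. 11 per 1,000 is real signal or just noise depends on how many drives sit behind each number, which the RMA reports don't disclose. A minimal sketch of the standard two-proportion z-test, with the sample sizes as pure assumptions:

```python
import math

def two_proportion_z(f1, n1, f2, n2):
    """z-statistic for the difference between two failure proportions."""
    p1, p2 = f1 / n1, f2 / n2
    p = (f1 + f2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# 8 vs. 11 failures per 1,000 drives; per-brand sample sizes assumed.
for n in (1_000, 10_000, 100_000):
    z = two_proportion_z(8 * n // 1000, n, 11 * n // 1000, n)
    print(f"n = {n:>7,} drives per brand: z = {z:.2f}")
# |z| < 1.96 means the gap is within normal sampling variation at the
# usual 95% level: at n = 1,000 per brand the brands are statistically
# indistinguishable; the gap only tests as "significant" once each
# brand's sample runs into the tens of thousands.
```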