Friday, February 2nd 2018
Backblaze Releases Hard Drive Stats for 2017, HGST Most Reliable
Overview
At the end of 2017 we had 93,240 spinning hard drives. Of that number, there were 1,935 boot drives and 91,305 data drives. This post looks at the hard drive statistics of the data drives we monitor. We'll review the stats for Q4 2017, all of 2017, and the lifetime statistics for all of the drives Backblaze has used in our cloud storage data centers since we started keeping track.
Source: Backblaze
Hard Drive Reliability Statistics for Q4 2017
At the end of Q4 2017 Backblaze was monitoring 91,305 hard drives used to store data. For our evaluation we remove from consideration those drives which were used for testing purposes and those drive models for which we did not have at least 45 drives (read why after the chart). This leaves us with 91,243 hard drives. The table below is for the period of Q4 2017. A few things to remember when viewing this chart:
- The failure rate listed is for just Q4 2017. If a drive model has a failure rate of 0%, it means there were no drive failures of that model during Q4 2017.
- There were 62 drives (91,305 minus 91,243) that were not included in the list above because we did not have at least 45 of a given drive model. The most common reason we would have fewer than 45 drives of one model is that we needed to replace a failed drive and we had to purchase a different model as a replacement because the original model was no longer available. We use 45 drives of the same model as the minimum number to qualify for reporting quarterly, yearly, and lifetime drive statistics.
- Quarterly failure rates can be volatile, especially for models that have a small number of drives and/or a small number of drive days. For example, the Seagate 4 TB drive, model ST4000DM005, has an annualized failure rate of 29.08%, but that is based on only 1,255 drive days and 1 (one) drive failure.
- AFR stands for Annualized Failure Rate, which is the projected failure rate for a year based on the data from this quarter only (a worked example follows this list).
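For readers who want to reproduce the quarterly numbers: AFR is simply failures per drive-year. A minimal Python sketch using the ST4000DM005 figures quoted above; the helper function is mine, not Backblaze's published code.

```python
def annualized_failure_rate(failures: int, drive_days: float) -> float:
    """AFR (%) = failures / (drive_days / 365) * 100, i.e. failures per drive-year."""
    drive_years = drive_days / 365.0
    return failures / drive_years * 100.0

# Seagate ST4000DM005 in Q4 2017: 1 failure over 1,255 drive days
print(round(annualized_failure_rate(1, 1255), 2))  # 29.08
```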
Looking back over 2017, we not only added new drives, we "bulked up" by swapping out functional and smaller 2, 3, and 4TB drives with larger 8, 10, and 12TB drives. The changes in drive quantity by quarter are shown in the chart below. For 2017 we added 25,746 new drives, and lost 6,442 drives to retirement for a net of 19,304 drives. When you look at storage space, we added 230 petabytes and retired 19 petabytes, netting us an additional 211 petabytes of storage in our data center in 2017.
2017 Hard Drive Failure Stats
Below are the lifetime hard drive failure statistics for the hard drive models that were operational at the end of Q4 2017. As with the quarterly results above, we have removed any non-production drives and any models that had fewer than 45 drives. The chart above gives us the lifetime view of the various drive models in our data center. The Q4 2017 chart at the beginning of the post gives us a snapshot of the most recent quarter of the same models.
Let's take a look at the same models over time, in our case over the past 3 years (2015 through 2017), by looking at the annual failure rates for each of those years. The failure rate for each year is calculated for just that year. Looking at the results, the following observations can be made (a short sketch for reproducing the per-year numbers follows the observations):
The failure rates for both of the 6 TB models, Seagate and WDC, have decreased over the years while the number of drives has stayed fairly consistent from year to year.
While it looks like the failure rates for the 3 TB WDC drives have also decreased, you'll notice that we migrated out nearly 1,000 of these WDC drives in 2017. While the remaining 180 WDC 3 TB drives are performing very well, decreasing the data set that dramatically makes trend analysis suspect.
The Toshiba 5 TB model and the HGST 8 TB model had zero failures over the last year. That's impressive, but with only 45 drives in use for each model, not statistically useful.
The HGST/Hitachi 4 TB models delivered sub-1.0% failure rates for each of the three years. Amazing.
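For anyone who wants to recompute these per-year failure rates, Backblaze publishes daily drive-snapshot CSVs. A rough pandas sketch, assuming the familiar date, model, and failure columns and a placeholder directory path; treat it as an outline rather than a verified pipeline.

```python
import glob
import pandas as pd

# Placeholder path to the published daily snapshot CSVs (one row per drive per day)
frames = [
    pd.read_csv(path, usecols=["date", "model", "failure"], parse_dates=["date"])
    for path in glob.glob("drive_stats/*.csv")
]
df = pd.concat(frames, ignore_index=True)

df["year"] = df["date"].dt.year
per_year = df.groupby(["model", "year"]).agg(
    drive_days=("failure", "size"),   # row count = drive days observed that year
    failures=("failure", "sum"),      # failure column is 1 on the day a drive fails
)
per_year["afr_pct"] = per_year["failures"] / (per_year["drive_days"] / 365) * 100
print(per_year.sort_values("afr_pct", ascending=False).head(20))
```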
A Few More Numbers
To save you countless hours of looking, we've culled through the data to uncover the following tidbits regarding our ever-changing hard drive farm.
- 116,833 - The number of hard drives for which we have data from April 2013 through the end of December 2017. Currently there are 91,305 drives (data drives) in operation. This means 25,528 drives have either failed or been removed from service for some other reason - typically migration.
- 29,844 - The number of hard drives that were installed in 2017. This includes new drives, migrations, and failure replacements.
- 81.76 - The average number of hard drives installed per day in 2017. This includes new drives, migrations, and failure replacements (a quick check of these per-day figures follows the list).
- 95,638 - The number of drives installed since we started keeping records in April 2013 through the end of December 2017.
- 55.41 - The average number of hard drives installed per day from April 2013 to the end of December 2017. The installations can be new drives, migration replacements, or failure replacements.
- 1,508 - The number of hard drives that were replaced as failed in 2017.
- 4.13 - The average number of hard drives that have failed each day in 2017.
- 6,795 - The number of hard drives that have failed from April 2013 until the end of December 2017.
- 3.94 - The average number of hard drives that have failed each day from April 2013 until the end of December 2017.
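The per-day figures above are plain averages over the relevant spans. A quick check; the exact early-April 2013 start date used below is my assumption, chosen only to illustrate the roughly 1,726-day lifetime window implied by the numbers.

```python
from datetime import date

installed_2017, failed_2017 = 29_844, 1_508
print(round(installed_2017 / 365, 2))  # 81.76 drives installed per day in 2017
print(round(failed_2017 / 365, 2))     # 4.13 drives failed per day in 2017

# Lifetime window: assumed to run from early April 2013 through the end of December 2017
span_days = (date(2017, 12, 31) - date(2013, 4, 10)).days  # 1,726 days
print(round(95_638 / span_days, 2))    # 55.41 drives installed per day since April 2013
print(round(6_795 / span_days, 2))     # 3.94 drives failed per day since April 2013
```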
68 Comments on Backblaze Releases Hard Drive Stats for 2017, HGST Most Reliable
Part of the reason why we should move faster towards all solid state drives.
Backblaze bought a bunch of enterprise drives a few years back and the enterprise drives had a higher failure rate, so any "MUH CONSUMER DRIVES IN A DATACENTER" cries are invalid.
www.backblaze.com/blog/enterprise-drive-reliability/
And it's not just about reliability, but functions like TLER, SAS options etc.
Have an old 40GB Maxtor Fireball 3, you know, the slim one (the hotplate series), still going in a friend's old PC.
The thing runs at like 60°C when it is operating and has been like that for the past 12 years...
Many consumer drives include a feature called head parking. What this means is that when the HD is not in use, the head is moved "off platter" and "parked". The feature serves well in consumer and office environments, for example when a colleague bumps ya desk while carrying a box of copy paper or plops it on your desk while loading the machine ... or when ya dog, napping under ya desk, jumps up when the doorbell rings. When the heads are parked, no damage will occur from the vibration. Consumer HDs are rated for between 250k and 500k parking cycles. When the HD is idle, or while writes sit in RAM or in the HD cache, the head will move to the parked position. A typical consumer drive might see as much as 25,000 - 50,000 parking cycles per year ... maybe as much as 100k for an enthusiast box ... in which case you hopefully didn't cheap out in your HD selection.
Now if ya take that same exact physical drive and use it in a server environment, it will have different firmware and it will not have the head parking "feature". This is because server drives get many times more data access requests; they can therefore use up those rated parking cycles in a matter of months. Because of economies of scale, that same drive might be sold as a consumer device for $70 ... that same drive sold as a server drive is much more expensive. What Backblaze does, with no worries about data protection given redundancies, is buy consumer drives instead of server drives because they are cheaper. Because they are replaced so often, they were secured in place only by rubber bands ... tho hopefully they have moved away from this silliness by now. Backblaze sells their service based upon price, so proper server room design is just something that isn't there: no building designed with thick concrete floors, no racks firmly secured in place to prevent vibration.
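To put the commenter's cycle numbers in rough perspective, here's a back-of-the-envelope sketch. The rated-cycle and consumer usage figures come from the comment above; the datacenter rate is a purely hypothetical illustration of "many times more access requests", not a measured value.

```python
RATED_CYCLES = 300_000  # consumer drives: roughly 250k-500k rated head-parking cycles (per the comment)

def years_until_limit(cycles_per_year: int, rated: int = RATED_CYCLES) -> float:
    """Years before a drive exhausts its rated load/unload cycle budget."""
    return rated / cycles_per_year

print(years_until_limit(50_000))     # typical desktop use: ~6 years of headroom
print(years_until_limit(100_000))    # heavy enthusiast box: ~3 years
print(years_until_limit(1_200_000))  # hypothetical busy server workload: ~0.25 years, i.e. a few months
```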
So what happens is ... the very feature which extends the life of a consumer drive is what's actually killing these drives when inappropriately placed in a server environment. Alternately, we do have actual published RMA data readily available telling us what % of consumer drives are actually being RMA'd. The data is collected and published every 6 months and it covers drives that failed during 6 to 12 months of operation. While this data doesn't tell us what % of drives might fail during their warranty periods, it is statistically relevant, as all mechanical drives should follow the same failure-over-time curve. In addition, what value is lifetime data? By the time ya get it, it's irrelevant, as those drives are not on the market anymore. And the RMA data also excludes DOAs, which can result from issues outside the manufacturer's control such as user error and mishandling.
To avoid statistical anomalies, I always look at the data for the last two periods ... and ya know what ... there's not a lot of difference between manufacturers, but there are huge differences between models. If ya look at storagereview.com's historical database, you will see that Seagate has the honor of delivering both the most reliable and the least reliable drives. Anyway, here's the combined data for the last 2 reporting periods (12 months):
- HGST 0.975%
- Seagate 0.825%
- Toshiba 0.93%
- Western Digital 1.15%
Not exactly a Secretariat-like win here ... So it's not so much a matter of which brand but which model. Just avoid the duds and you're OK. Among the individual winners in the dud (> 2% failures) category are:
- 10.00% Seagate Desktop HDD 6 TB
- 6.78% Seagate Enterprise NAS HDD 6 TB
- 5.08% WD Black 3 TB
- 4.70% Toshiba DT01ACA300 3 TB
- 3.48% Seagate Archive HDD 8 TB
- 3.48% Hitachi Travelstar 5K1000 1 TB
- 3.42% Toshiba X300 5 TB
- 3.37% WD Red WD60EFRX 6 TB
- 3.04% WD Black WD3003FZEX
- 3.06% WD Red Pro WD4001FFSX 4 TB
- 2.95% WD Red 4 TB SATA 6Gb/s
- 2.81% Seagate IronWolf 4 TB
- 2.67% WD Green WD60EZRX
- 2.49% WD Purple Videosurveillance 4 TB
- 2.39% Toshiba DT01ACA200
- 2.89% Toshiba DT01ACA300
- 2.37% WD Purple WD40PURX
- 2.29% Seagate Enterprise NAS HDD ST3000VN0001
- 2.23% WD Red Pro WD3001FFSX
- 2.18% WD Green WD30EZRX
- 2.02% WD Red WD40EFRX
If you're counting, that's 5 for Seagate, 10 for WD, 3 for Toshiba and just 1 for Hitachi.
WD has about 40% market share and it produced 10 duds, or 2.5 duds per 10% market share
SG has about 37% market share and it produced 5 duds, or 1.3 duds per 10% market share
TS has about 23% market share and it produced 3 duds or 1.3 duds per 10% market share
Does that have any significance? ... Well, if ya avoid the duds, then no. The fact is, if ya avoid the duds, your chances are just about 1 in 100 that you will experience a drive failure between 6 and 12 months of operation. Over the last 8 reporting periods (4 years), manufacturers of consumer drives have broken the 1.00% failure rate ceiling in only 17 out of 32 instances:
- Seagate = 0 (0.60 - 0.95%)
- HGST = 5 (0.60 - 1.13%)
- Toshiba = 6 (0.80 - 1.54%)
- WD = 6 (0.90 - 1.26%)
Now let's not look at this as a big win for Seagate; the range of numbers over those 8 periods is indicated in parentheses. So, yet again, with regard to consumer drives used in a consumer environment, there is no evidence which justifies any vast measure of superiority of any HD brand over another. While an argument, not a conclusive one mind you, could be made that over the last 4 years Seagate has fared better overall, from best to worst over the last year we are talking 8 failures versus 11 failures per 1,000 drives, and that is not a big enough difference to lie outside the realm of normal statistical variation.
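That closing claim is easy to sanity-check with a quick two-proportion z-test; the test choice is mine, and it assumes roughly 1,000 drives per brand, as the quoted per-1,000 rates imply.

```python
from math import sqrt, erfc

def two_proportion_z(fail_a, n_a, fail_b, n_b):
    """Normal-approximation test for a difference between two failure rates."""
    p_a, p_b = fail_a / n_a, fail_b / n_b
    pooled = (fail_a + fail_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value
    return z, p_value

print(two_proportion_z(8, 1000, 11, 1000))  # z ~ 0.69, p ~ 0.49: not a significant difference
```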