
Backblaze's 2016 HDD Failure Stats Revealed: HGST the Most Reliable

Then whoever runs their Excel document sucks at formulas...
 
Here's some food for thought: if you stack 15 HDDs on top of each other and then surround them with more stacks of 15 HDDs, not only do the drives nearer the centre of each stack run much hotter, but the stack in the middle also runs much hotter than the outer ones.

Considering that operating temps vary so massively from drive to drive, it basically invalidates any attempt to log the failure rates: although all the drives are being abused, many are being abused more than others, with no common methodology at work at all.

For example, the two drives on the list with the highest failure rates are Seagate DX drives (the newest ones on the list). If they bought a load in bulk and started using them to replace failed drives, they would be putting most of them in the areas where drive failure is most likely to occur, thus artificially inflating their failure rates.

Another point is that if you have 1,000 of DriveX and 100 of DriveY and break 50 of each, the failure percentages will be vastly different (5% vs. 50%) even though you killed the same number of drives.
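
To put rough numbers on that, here's a quick Python sketch of the arithmetic. The drive counts and failure counts below are made up purely for illustration, not taken from the report:

# Hypothetical fleet sizes and failure counts, only to illustrate the arithmetic above.
fleet = {
    "DriveX": {"drives": 1000, "failures": 50},
    "DriveY": {"drives": 100,  "failures": 50},
}

for model, d in fleet.items():
    rate = d["failures"] / d["drives"] * 100  # failure rate as a percentage of the fleet
    print(f"{model}: {d['failures']} failures out of {d['drives']} drives = {rate:.1f}%")

# Output:
# DriveX: 50 failures out of 1000 drives = 5.0%
# DriveY: 50 failures out of 100 drives = 50.0%

Same body count, a 10x difference in the reported rate, which is why the raw failure counts on their own don't tell you much. Backblaze also normalizes by drive-days rather than by drive count, so drives added or retired partway through the year are accounted for too.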

In enterprise and/or mid-size company environments, there's no such thing as 'stacking' hard drives without proper cooling. Especially in server environments there are fans running 24/7 at top speed, air-conditioned rooms, and the best possible temperatures a server cabinet can get.

I do believe that the failure rate for Seagate is much higher, since I used to fix HDDs (data recovery) for over 2 years. The majority of disks I had were Seagates which either had corrupted firmware, a busted PCB, or toasted heads.

Personally I had many Seagates in the past, until I had a severe crash which caused data loss. Never Seagate again. I have 2 Samsung disks which are over 6 years old now, STILL running properly and holding my data 24/7.

There are better brands than Seagate, and Google's data confirms this as well. Seagate has a much higher failure rate than any other brand.
 
How curious. I do quite a bit of data recovery myself, and almost every single drive in my hands right now is a WD Blue.
 
Those either die of a firmware/PCB failure or a head crash. But I had more Seagates coming in over those 2 years than WDs. I always reject HDDs that accidentally fell on the ground; those couldn't be fixed without clean-room expertise.
 
How curious. I do quite a bit of data recovery myself, and almost every single drive in my hands right now is a WD Blue.

Agreed, I don't do data recovery, just break fix. So I'm replacing a lot of bad drives, and I see more WD Blue drives than anything else.
 
I do believe that the failure rate for Seagate is much higher, since I used to fix HDDs (data recovery) for over 2 years.
I believe it was the Barracuda 7200.13 drives in the 1-3 TB range that had firmware issues and higher-than-usual failure rates. A lot of the bad ones should already be either dead or past infant mortality, so the survivors should be good for another 5+ years.

Like I said before, it's not really the brand that matters, it's the specific model. Some are exceptionally reliable (<1% failure rate), some break a lot (>5% failure rate), and most are just normal (1-5%).
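
For anyone who wants to check the per-model numbers themselves, Backblaze publishes its raw daily drive-stats data as CSV files (one row per drive per day). A minimal sketch of computing an annualized failure rate per model could look something like this; the file name is a placeholder and I'm assuming the usual date / serial_number / model / failure columns:

import pandas as pd

# Placeholder file name: one of Backblaze's daily drive-stats CSVs (one row per drive per day).
df = pd.read_csv("drive_stats_2016.csv", usecols=["date", "serial_number", "model", "failure"])

per_model = df.groupby("model").agg(
    drive_days=("serial_number", "size"),  # each row is one drive-day
    failures=("failure", "sum"),           # 'failure' is 1 on the day a drive dies, else 0
)
# Annualized failure rate: failures per drive-year of operation, as a percentage.
per_model["afr_percent"] = per_model["failures"] / (per_model["drive_days"] / 365) * 100
print(per_model.sort_values("afr_percent"))

The per-model drive-day counts also make it obvious which models have so little data behind them that their failure rates aren't worth quoting.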
 
Agreed, I don't do data recovery, just break fix. So I'm replacing a lot of bad drives, and I see more WD Blue drives than anything else.

I actually just finished a recovery on an external drive using a 5400 RPM Blue, and have another 3 in docks to the left of me.
 
I think people here would be more interested in SSD failure rates, by size and brand.
After all, it's 2017 and spinning rust is on its way out!!! :)

I'm not. I keep 3 TB of data (important to me) on mechanical HDDs. The SSD only holds my OS, which is expendable. Think about it: 99% of users won't spend the thousands of dollars required to buy that much solid-state storage, so "spinning rust" is here to stay for at least the next 5 years, if not more.

As for the thread: I did notice that HGST drives are very reliable (and pretty speedy). I've had both Seagate and WD drives fail on me (less than 2 years old), but never an HGST drive so far. Samsung drives used to be very good as well; I still have a working 4 GB drive that came in my K6 PC back in the day, and it still runs fine.
 
In enterprise and/or mid-size company environments, there's no such thing as 'stacking' hard drives without proper cooling. Especially in server environments...

Actually, that's exactly the way Backblaze handled all their drives. About a year and a half or so ago they began replacing their boxes full of unventilated drives with better server storage cabinets. This mishandling is one of the reasons I and many others have always considered their reliability studies to be suspect.
 
Actually, that's exactly the way Backblaze handled all their drives. About a year and a half or so ago they began replacing their boxes full of unventilated drives with better server storage cabinets. This mishandling is one of the reasons I and many others have always considered their reliability studies to be suspect.
Do you have a source for this? According to them, they actually reduced the number of fans from 6 to 3 between Storage Pod v4 and v5 (they did add more vents, though) and increased drive density to 60 disks per chassis, all because temperatures were fine...

https://www.backblaze.com/blog/cloud-storage-hardware/
https://www.backblaze.com/blog/open-source-data-storage-server/
https://www.backblaze.com/blog/hard-drive-temperature-does-it-matter/

Incidentally, according to Google's published study of their own datacenters, drives actually like to run warmer than they do in BB's pods...
http://research.google.com/archive/disk_failures.pdf
http://storagemojo.com/2007/02/19/googles-disk-failure-experience/
 
Vibration is as important as heat, if not more so.
 
Have you seen a server rack's drive carriages? Storage Pods are on par with or better than most...

http://www.45drives.com/ - Check out that solution and their client list...
https://www.google.com/search?q=sto...lxvfRAhVJ3IMKHQk-D3AQ_AUICCgC&biw=667&bih=331

Given that their annual failure rates aren't too dissimilar to Google's, I never understood where the doubt comes from...

Don't get me wrong - I also think drives deserve more protection, but you might hate me if I recommended this ;): http://www.silverstonetek.com/product.php?pid=665&area=en

P.S. Preventing vibration is worse for drives than allowing it... Rubber bands and the like, which allow the disk to vibrate, are probably better than the sleds most racks and blades use. Then again, I haven't found any data, so who knows? :eek:
 
Do you have a source for this? According to them, they actually reduced the number of fans from 6 to 3 between Storage Pod v4 and v5 (they did add more vents, though) and increased drive density to 60 disks per chassis, all because temperatures were fine...

Incidentally, according to Google's published study of their own datacenters, drives actually like to run warmer than they do in BB's pods...

Source on BB? Do a search on TPU. It's been covered and debated fifteen ways to Sunday on here for several years now.

https://www.techpowerup.com/forums/search/16381259/?page=4&q=BackBlaze&o=date


The Google info is more credible, and actually useful, especially regarding when drives normally die and the fact that they do not like running too cool.
 
Having been a part of those debates exhaustively on other forums (and on this forum to a lesser extent), and having provided evidence directly from BB that indicates your claims are inaccurate, I apologize for my lack of interest in searching for information that seems to directly contradict the horse's mouth, so to speak (alternative facts?)...

The google "info" seems to indicate that unknown (enterprise?) drives in Google's unknown chassis (? maybe they suspend their disks across a room?) in a top secret datacenter fail about as often as consumer drives in BackBlaze's open source (and, in some's opinion, vibration prone) storage pod datacenter... Telling me to search for a source that I literally just pointed to isn't going to change the correlation, and I wholly doubt it will debunk any of the claims I've made thus far, let alone indicate that organizations like NASA, Lockheed, Netflix, MIT, livestream, and Intel don't use racks that look oddly similar to BackBlaze's (hint: they were designed by the same company)...
 
Having been a part of those debates exhaustively on other forums (and on this forum to a lesser extent), and having provided evidence directly from BB that indicates your claims are inaccurate, I apologize for my lack of interest in searching for information that seems to directly contradict the horse's mouth, so to speak (alternative facts?)...

Sources? None? Thanks. Move along, then. If you want to live by BB's info, go ahead. Most knowledgeable and competent folks give them only halfhearted credit for their data. Google's long-term data is more credible.
 
Stupid test; the more drives you have, the lower the failure-rate percentage is going to be. That being said, Seagates are just bad, lol, and I've replaced more HGST drives than any other in the past 4 years.
 
Sources? None? Thanks. Move along, then. If you want to live by BB's info, go ahead. Most knowledgeable and competent folks give them only halfhearted credit for their data. Google's long-term data is more credible.
Sorry, but I have no idea what you mean.

You said "source on BB," I provided BB sources, and now those sources don't matter? Sorry, but this logic doesn't make any sense.

I've only made claims based on widely available public data. I've been a datacenter tech for over a decade, have done several petabyte installs, managed hundreds of RAIDs, and have even built a few storage pods for fun.

Conveniently, I don't need any of that life experience to be able to recognize cogency. It doesn't take a genius to look at the annual failure rates from both data sets and be mesmerized by how BB manages failure rates as low as Google's with their supposedly bad chassis, or to see some authority in actual data centers (are Intel and MIT incompetent and unknowledgeable?) using older versions of BB's Storage Pods for their servers... You haven't provided a single warrant for your claims other than some article BB published that you can't point to, which supposedly (I guess we'll never know) contradicts the BB articles I linked to in support of my claims, which were, you know, published by BB. All this says to me is that your claim has no warrant at all (or the dog ate it).

You can hide behind patronizing arrogance as much as you'd like, but stupid is as stupid does. I take these data sets with several grains of salt, but most of the arguments against the data are either based on erroneous information or bad assumptions about how disks work (and how much work it is to build a datacenter).
 
I believe it was the Barracuda 7200.13 drives in the 1-3 TB range that had firmware issues and higher-than-usual failure rates. A lot of the bad ones should already be either dead or past infant mortality, so the survivors should be good for another 5+ years.


I think it was the 7200.10/11 drives that had firmware issues. I got hit with three 1.5 TB drives that were affected. By the 7200.12, the bugs were ironed out.
 