That doesn't work for hard drives, especially in this use case. They know the drives are going to fail; it's just a matter of when. The method you mentioned is a less accurate way of measuring reliability because it ignores how long the drive has been in service. Using your method, one could easily manipulate the numbers so that Seagate looks good. For example, take a 6-year-old array of WD drives and pit them against brand new Seagate drives. Which drives do you think are going to start to fail first? That's not a fair comparison, yet it's the method you're advocating for.
Read it again. This is why you measure from the beginning of the part's life, from when it is first put in service to when it dies or when you decide to end your evaluation period. It doesn't matter if it is hard drives or a water pump on a car. Come on, this is standardized failure rate testing procedure here...
Your concern applies equally to their testing and stat-reporting method. We don't know how long these drives have been in service. We only know that the pool of drives provided a certain number of drive-days of work during the short 4-month period these numbers cover. We don't know how old those drives actually are, or how long they'd been in service before the stat period started. In their method, if they started using PartA a year before the stat recording period and started using PartB only a month before, then I guarantee you PartB is going to look like it has a lower failure rate. It is for this exact reason that failure rate testing is not done this way. It is measured from the moment a part is put in service to the moment it dies or reaches the decided "EOL".
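To put numbers on that bias, here's a toy simulation (every figure is made up; the point is the mechanism, not the exact rates):

```
# Toy simulation of the windowing bias. All numbers are invented;
# this just shows why fleet age at window start skews a windowed stat.
import random

random.seed(1)
WINDOW_MONTHS = 4  # like the 4-month stat period

def window_afr(fleet_size, age_at_window_start):
    """Annualized failure rate as seen inside the stat window only."""
    failures = drive_months = 0.0
    for _ in range(fleet_size):
        # Wear-out lifetime: Weibull with increasing hazard (shape > 1),
        # mean life around 3 years. Purely illustrative.
        life = random.weibullvariate(40, 3)  # lifetime in months
        if life <= age_at_window_start:
            continue  # died before the window opened; never counted
        in_window = min(life - age_at_window_start, WINDOW_MONTHS)
        drive_months += in_window
        if life - age_at_window_start <= WINDOW_MONTHS:
            failures += 1
    return failures / (drive_months / 12) * 100

print(window_afr(100_000, 12))  # "PartA": 1 year old at window start
print(window_afr(100_000, 1))   # "PartB": 1 month old at window start
```

Same drives, same wear-out curve; the only difference is how old each fleet was when the stat window opened, and the older fleet posts a far higher "failure rate".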
Useful information would be how many drives failed in, say, the first 3 years they're in service (rough sketch below). That'd be a useful statistic and a proper failure rate number. Not this bullshit "we had 1 drive fail and we're going to call it a 30% failure rate".
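Something like this is what a real cohort number looks like (dates are hypothetical):

```
# Sketch of a proper cohort stat: of all drives old enough to have been
# observed for 3 full years, what fraction died within their first 3 years?
from datetime import date

CUTOFF_DAYS = 3 * 365
AS_OF = date(2023, 1, 1)

# (entered_service, failed_on or None) -- made-up service records
records = [
    (date(2019, 1, 10), date(2020, 6, 2)),   # died at ~1.4 yr
    (date(2019, 3, 5),  None),               # still running
    (date(2018, 7, 20), date(2022, 1, 15)),  # died, but after 3 yr
    (date(2020, 2, 1),  None),               # too young: excluded below
]

eligible = failed_early = 0
for start, died in records:
    # Drives that haven't had a chance to reach 3 years yet would make
    # the fleet look artificially reliable, so they stay out of the cohort.
    if (AS_OF - start).days < CUTOFF_DAYS:
        continue
    eligible += 1
    if died is not None and (died - start).days <= CUTOFF_DAYS:
        failed_early += 1

print(f"3-year failure rate: {failed_early}/{eligible} = {failed_early/eligible:.0%}")
```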
Of course, even if they did provide a proper failure rate, Backblaze's numbers would still be completely meaningless to the average consumer anyway...
Jfc, Seagate is junk, just deal with it. I can make a 99% accurate guess before I open a computer which drive inside has failed. If it's not a Seagate, then it's a worn-out WD Green (likely killed by head parking). I'll take a Green alllll day over SG. At least it'll outlast the warranty, which is useless anyway. You'll never get a usable drive back from those SG clowns (probably not from WD, either).
I'd take a Seagate desktop/Barracuda drive over a WD Green/Blue any day. The WD Green/Blue drives are garbage. But I'd take a WD Black/Purple/Red/Gold over every Seagate except the ES drives, and I'd take a Seagate ES drive over pretty much any other drive on the market. Those drives are damn near bulletproof.
It is about the model of drive, not the brand.
You have to ask yourself: if Seagate was junk, why does Backblaze, a data storage company, use far more Seagate drives than any other brand? 74% of their drives are Seagate. That's roughly three times as many as the next manufacturer, HGST, which only accounts for 25% of their drives. WD accounts for a whole 1% of the drives they use, and they don't even have 1,000 WD drives in service, so it really isn't a big enough sample to judge accurately how those drives perform.
Makes me wonder what went wrong with that one particular Seagate model... a 30% failure rate with so few drives? Jeez.
It is because they had 60 drives and one failed, so the annualized math spits out a 30% failure rate... It's not a real 30%, but that's the number they report.
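For anyone wondering how one failure becomes "30%": Backblaze annualizes, i.e. failures divided by drive-years of service inside the window. A single failure over very few drive-days blows up once it's scaled to a year. Sketch below, with guessed drive-day counts since we don't know the real ones:

```
# Annualized failure rate, Backblaze-style (as I understand their
# published formula): AFR = failures / (drive_days / 365) * 100
def afr(failures, drive_days):
    return failures / (drive_days / 365) * 100

# Hypothetical: 60 drives averaging only ~20 days each inside the
# window (e.g. deployed late in the quarter).
print(afr(1, 60 * 20))    # -> ~30.4%
# The same single failure over a full year of service per drive:
print(afr(1, 60 * 365))   # -> ~1.7%
```

So the fewer drive-days behind a model, the more one unlucky failure distorts the headline number.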