Friday, May 7th 2021

SSDs More Reliable than HDDs: Backblaze Study

In the early days of SSDs, some 11-odd years ago, SSDs were considered unreliable. They'd randomly fail on you, causing irrecoverable data loss. Gaming desktop users usually installed an HDD alongside the SSD in their builds, so they could take regular whole-disk images of the SSD onto the HDD; Microsoft even added a disk-imaging feature with Windows 7. Since then, SSDs have come a long way in both reliability and endurance, and are now backed by longer warranties than HDDs. Notebook vendors are increasingly opting for SSDs as the sole storage device in their thin-and-light products. A Backblaze study reveals an interesting finding: going by its Q1 2021 failure rates, SSDs are roughly 18 times more reliable than HDDs.

Backblaze is well known for conducting regular, actionable studies on storage device reliability in the enterprise segment, particularly dissecting how each brand of HDD and SSD fares in terms of drive failures, expressed as an annualized failure rate (AFR). In a study covering Q1 2021 (January 1 to March 31), Backblaze finds that the AFR of HDDs across brands stands at 10.56%. In the same period, SSDs across brands lodged a stunningly low AFR of 0.58%. In other words, roughly 1 in 10 HDDs failed on an annualized basis, compared to roughly 1 in 170 SSDs. Things get interesting when Backblaze looks all the way back to 2013, when it started studying drive reliability.
With annualized failure rates studied between April 2013 and April 1, 2021, Backblaze finds that SSDs total a 0.65% AFR, while HDDs sit at 6.04%. The relatively higher HDD failure rates are attributable to several factors: HDDs have moving parts; the HDDs in the fleet pull a higher average age before they need replacement; SSDs, before they reach their manufacturer-rated endurance, are highly tolerant to electrical faults thanks to on-board capacitor banks; and HDD manufacturers have generally reduced the warranties on their drives after the factory flooding incidents of 2011.
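
For context, Backblaze calculates AFR from cumulative drive-days rather than a simple drive count, so drives that join or leave the fleet mid-period still contribute the correct amount of exposure time. A minimal Python sketch of that calculation (the failure and drive-day figures below are made-up placeholders, not Backblaze's actual fleet data):

    def annualized_failure_rate(failures: int, drive_days: int) -> float:
        """AFR as a percentage: failures scaled to a full year of operation."""
        drive_years = drive_days / 365.0
        return failures / drive_years * 100.0

    # Hypothetical example: 1,000 drives running for a 90-day quarter,
    # with 26 failures over that period (placeholder numbers).
    print(f"AFR: {annualized_failure_rate(26, 1000 * 90):.2f}%")  # AFR: 10.54%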
Source: Tom's Hardware

41 Comments on SSDs More Reliable than HDDs: Backblaze Study

#26
bug
WirkoFor some additional fun: you want to salvage data as Windows admin, but don't have access to some user's folder. Well, you're the admin, you can add privileges to yourself ... but no, you can't because privileges are stored as NTFS metadata on the drive, and the drive is read-only.
I'm not even sure you can mount an NTFS partition as read-only. You have to work some magic first, and I don't know whether that works on a read-only drive.
#27
AsRock
TPU addict
NaterSkimmed the article and the thread. Reactions:

Duh.

Unsalvageable? When SSDs "die" from wear, don't they go into a read-only mode? You can get your data, you just can't write to it anymore.

And old? I think I could go out and fire up my OCZ Vertex 3 240GB SSD-based system (that's been off for a good 2 years) and it would work fine. That machine is a solid 10 years old and was on nearly 24/7 for a good 8 years.
Well, don't count on them being readable; none of the three that failed on me have been readable afterwards.
anachronI think we have installed around 500-600 SSDs at work, and we've had a single SSD failure in the last 5 years, even though we use very low-priced ones for workstations, as the data is on the servers anyway. In the same time we've had a lot of HDDs with bad sectors inducing low performance, or just failing. Indeed, the chances of recovering the data on an HDD are higher, but the difference in reliability is quite noticeable in my experience.

Edit: for the record, we have around 1000 computers, so it's half HDD, half SSD in the inventory. The HDDs are on average older than the SSDs, since we've only been buying SSDs for the last 2-3 years, but we still had more HDD failures than SSD failures over the same lifespan.
Glad for ya, just not the case for me. Kinda like HDD brands: people seem to have a go-to brand and one they'd keep away from.
#28
Wirko
SSDs, before they reach their manufacturer-rated endurance, are highly tolerant to electrical faults thanks to on-board capacitor banks
Is this really a requirement for datacenter SSDs? If a major electrical failure/blackout/brownout occurs in a datacenter, I guess there would be much more to worry about than corrupt filesystems on some SSDs.
#29
WhitetailAni
I guess my 11-year-old Crucial M4 128GB, still at 89% life after serving as a Windows boot drive all that time, is unusual? I've never had any problems with data loss on it.
bugI'm not even sure you can mount an NTFS partition as read-only. You have to work some magic first, and I don't know whether that works on a read-only drive.
It's VERY easy. Just mount an NTFS drive in macOS. You can't write to an NTFS drive in macOS unless you pay Paragon Software $20 for NTFS for Mac.
#30
Minus Infinity
For my precious 4TB-and-growing photo collection, I'm using HDDs. I have two sets of backups, one via NAS and one on a portable drive, as well as two separate computers that each hold a copy of the collection. I would gladly swap to an affordable 8TB SSD, but I'm not willing to pay more than about a 25% premium. That's not going to happen for a long time, so HDDs are still important to me.
#31
DragonFox
WirkoFor some additional fun: you want to salvage data as Windows admin, but don't have access to some user's folder. Well, you're the admin, you can add privileges to yourself ... but no, you can't because privileges are stored as NTFS metadata on the drive, and the drive is read-only.
Actually, clone the disk first, then manipulate the clone as necessary.
#32
arbiter
Backblaze finds that the AFR of HDDs across brands stands at 10.56%. In the same period, SSDs across brands lodged a stunningly low AFR of 0.58%.
This study is complete trash. Just look at the average age column: the SSDs were only 1 year old, versus 4-year-old HDDs. That's like taking a new car off the lot and a 4-year-old car, comparing their maintenance costs, and claiming the 1-year-old car is cheaper.
#33
claes
RealKGBYou can't write to an NTFS drive in macOS unless you pay Paragon Software $20 for NTFS for Mac.
github.com/osxfuse/osxfuse/wiki/NTFS-3G

Unfortunately both options are sketchy since NTFS is closed, but at least this one is free
#34
noel_fs
Well, it would be pretty fucking weird if they started failing within 1 year; give it 2 at least.
#35
ymbaja
AusWolfWait a minute... what about the column "Average age"? Does this data mean that 0.65% of SSDs failed after a year, while 6.04% of HDDs failed after 4 years of operation? That isn't really a fair comparison.
Yeah, I noticed that too... you can kinda extrapolate from the data given that the SSDs are still more reliable, but it's sure not apples to apples.
#36
AusWolf
ymbajaYeah, I noticed that too... you can kinda extrapolate from the data given that the SSDs are still more reliable, but it's sure not apples to apples.
Not exactly. Just because 0.65% of SSDs failed after a year, you can't say that 0.65 x 4 = 2.6% will fail after 4 years. Might be more, might be less.
#37
ExcuseMeWtf
AusWolfNot exactly. Just because 0.65% of SSDs failed after a year, you can't say that 0.65 x 4 = 2.6% will fail after 4 years. Might be more, might be less.
If you keep the failure rate constant over the years (a bit of a bold assumption), the formula for the cumulative failure rate would be 1 - (1 - p)^n, where p is the annual failure rate and n is the number of years passed. So a little under 2.6%, actually.
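
A quick sanity check of that arithmetic in Python (a minimal sketch; p = 0.0065 is the SSD AFR from the article, and the constant-rate assumption is the bold one noted above):

    # Cumulative failure probability after n years, assuming survival
    # each year is independent with a constant annual failure rate p.
    def cumulative_failure(p: float, n: int) -> float:
        return 1 - (1 - p) ** n

    print(f"{cumulative_failure(0.0065, 4):.4%}")  # 2.5748%, a little under 2.6%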
#38
80251
NaterSkimmed the article and the thread. Reactions:

Duh.

Unsalvageable? When SSDs "die" from wear, don't they go into a read-only mode? You can get your data, you just can't write to it anymore.

And old? I think I could go out and fire up my OCZ Vertex 3 240GB SSD-based system (that's been off for a good 2 years) and it would work fine. That machine is a solid 10 years old and was on nearly 24/7 for a good 8 years.
I had an NVMe Crucial 250GiB SSD once; it was unfortunately installed right under my 980 Ti, and when it died, it died suddenly, without warning, and completely. The data was inaccessible.
#39
AusWolf
ExcuseMeWtfIf you keep the failure rate constant over the years (a bit of a bold assumption), the formula for the cumulative failure rate would be 1 - (1 - p)^n, where p is the annual failure rate and n is the number of years passed. So a little under 2.6%, actually.
That's the problem. The failure rate isn't constant over time, so the data presented in the article is an unreliable basis for that kind of extrapolation.
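
To put numbers on that point, here's an illustrative Python sketch (the Weibull shape parameters are made up for illustration, not fitted to Backblaze's data). Each model is anchored to the same 0.65% first-year failure probability, yet the four-year totals diverge wildly:

    import math

    def weibull_cumulative_failure(t: float, scale: float, shape: float) -> float:
        # Cumulative failure probability by time t under a Weibull model.
        # shape < 1 models infant mortality (failure rate falls over time);
        # shape > 1 models wear-out (failure rate rises over time).
        return 1 - math.exp(-((t / scale) ** shape))

    for shape in (0.7, 1.0, 3.0):  # made-up shapes for illustration
        # Choose the scale so each model matches a 0.65% first-year failure rate.
        scale = 1 / (-math.log(1 - 0.0065)) ** (1 / shape)
        print(f"shape={shape}: year 1 = {weibull_cumulative_failure(1, scale, shape):.2%}, "
              f"year 4 = {weibull_cumulative_failure(4, scale, shape):.2%}")

With these placeholder shapes, the same 0.65% first year extrapolates to anywhere from about 1.7% to about 34% by year four, which is why a single-year AFR on its own can't predict long-term reliability.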
#40
Tardian
VannyWhat are hard drives?

I waited but everyone missed the opportunity. What are hard drives? They contain the material in hidden files that gets you there. ;-)
#41
Wirko
arbiterThis study is complete trash. Just look at the average age column: the SSDs were only 1 year old, versus 4-year-old HDDs. That's like taking a new car off the lot and a 4-year-old car, comparing their maintenance costs, and claiming the 1-year-old car is cheaper.
The study comes with all the details and warnings. You only get trash when you try to make a TL;DR from it.