Wednesday, September 14th 2022

Backblaze Data Shows SSDs May In Fact be More Reliable Than HDDs

Cloud storage provider Backblaze is one of the industry players that regularly publishes insightful reports on the health and reliability of the storage media it buys to support its business. In its most recent report, the company shared data that may finally back up the general perception (and one of the SSD's claims to fame upon its introduction): that SSDs offer higher reliability and lower failure rates than HDDs.

The company's latest report shows that its SSDs have entered their fifth year of operation without an escalation in failure rates: something that plagues HDDs pretty heavily starting from year four. The idea is simple: SSDs should be more reliable because they have no moving parts (no platters and no read/write heads that can fail). SSDs do have other points of failure, however, such as the NAND itself (the reason drives carry TBW endurance ratings) or the controller. Backblaze's data suggests those concerns may be overrated. Of course, there is still a chance that the SSDs Backblaze employs will eventually hit a "reliability wall" of the sort HDDs seem to run into around year four, where failure rates climb sharply. More data over a longer span of time will be welcome, but for now, SSDs look like the best way for users to keep their data available.
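To put the TBW concern into perspective, here is a rough back-of-the-envelope estimate; the endurance rating and daily write volume below are illustrative assumptions, not Backblaze figures:

  # Rough SSD lifetime estimate from a TBW (terabytes written) endurance rating.
  # All numbers are illustrative assumptions, not Backblaze data.
  tbw_rating_tb = 600      # assumed rating for a 1 TB consumer drive
  daily_writes_gb = 50     # assumed average host writes per day

  years_of_writes = (tbw_rating_tb * 1000) / daily_writes_gb / 365
  print(f"Estimated write endurance: {years_of_writes:.1f} years")  # ~32.9 years

At a typical desktop write rate, the rated endurance would take decades to exhaust, which is consistent with Backblaze's suggestion that NAND wear concerns may be overrated.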
In other news, a recent study called into question the environmental friendliness of SSDs compared to their HDD counterparts, claiming that SSDs actually carry a steeper environmental cost than HDDs. But not everything may be quite what it seems on that front.
Source: via TechSpot

20 Comments on Backblaze Data Shows SSDs May In Fact be More Reliable Than HDDs

#1
Denver
I want to see them recover something from a corrupted SSD.
Posted on Reply
#2
SOAREVERSOR
Denver: I want to see them recover something from a corrupted SSD.
You won't and this also leaves out the fact that for critical data you're running SAS RAID arrays and swapping the suckers in and out.
Posted on Reply
#3
John Shepard
Denver: I want to see them recover something from a corrupted SSD.
SSDs can use RAID too.
Posted on Reply
#4
TheDeeGee
How long does an SSD hold its data without power, though? 2 years? 5 years? 10 years?
Posted on Reply
#5
Solid State Brain
TheDeeGee: How long does an SSD hold its data without power, though? 2 years? 5 years? 10 years?
Supposedly:
  • >10 years with fresh NAND memory.
  • 1 year (consumer drives) or 3 months (enterprise drives) at 100% wear.
Posted on Reply
#6
Shou Miko
That's because when Linus from LTT drops an SSD, the media inside ain't as fragile as in an HDD

:roll:
Posted on Reply
#7
ExcuseMeWtf
Denver: I want to see them recover something from a corrupted SSD.
Doesn't matter, you're supposed to perform backups anyway.

In fact, from a data security standpoint, that's a plus.
Posted on Reply
#8
Ferrum Master
Those SSDs made 8 years ago are not on the same tech node as current ones, and the early nodes were more robust in R/W cycle counts.
Posted on Reply
#9
Shou Miko
Ferrum Master: Those SSDs made 8 years ago are not on the same tech node as current ones, and the early nodes were more robust in R/W cycle counts.
True, last year I had earlier SSDs around 128-180 GB, even Intel 5100 series / Samsung SSDs, dying in laptops; they got replaced with either Gigabyte or Teamgroup SSDs that were even faster.
Posted on Reply
#10
ExcuseMeWtf
Ferrum Master: Those SSDs made 8 years ago are not on the same tech node as current ones, and the early nodes were more robust in R/W cycle counts.
The failures captured on that chart are clearly not down to that.

On the other hand, issues like controller quirks were not quite as ironed out as they are now.

Lower per-cell write cycle counts are also compensated for by higher capacity, since the controller has more free space to perform wear-levelling across.
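
A quick sketch of that point, with purely illustrative P/E cycle counts and an assumed write amplification factor:

  # Total endurance scales with capacity even as per-cell P/E cycles drop.
  # All figures are illustrative assumptions.
  def endurance_tbw(capacity_gb, pe_cycles, write_amplification=2.0):
      """Approximate terabytes the host can write before NAND wear-out."""
      return capacity_gb * pe_cycles / write_amplification / 1000

  old_drive = endurance_tbw(capacity_gb=256, pe_cycles=3000)   # older, smaller drive
  new_drive = endurance_tbw(capacity_gb=2000, pe_cycles=1000)  # newer, larger drive

  print(f"256 GB @ 3000 P/E cycles: ~{old_drive:.0f} TBW")     # ~384 TBW
  print(f"2 TB   @ 1000 P/E cycles: ~{new_drive:.0f} TBW")     # ~1000 TBW

The larger drive ends up with roughly 2.6x the total endurance despite each cell tolerating only a third as many program/erase cycles.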
Posted on Reply
#12
trparky
Denver: I want to see them recover something from a corrupted SSD.
According to BuildZoid, if you're using ZFS and you've got the file system spread across enough SSDs, you'll never lose a single byte of data due to hardware failure or plain data corruption; ZFS will automatically detect any errors during reads and fix them for you on the fly.
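
For anyone who hasn't touched ZFS, the setup is minimal; a sketch wrapped in Python, where the pool name and device paths are hypothetical and ZFS is assumed to be installed:

  # Minimal sketch: a two-way ZFS mirror plus a scrub, driven from Python.
  # Pool name and device paths are hypothetical; requires ZFS and root privileges.
  import subprocess

  def run(cmd):
      print("+", " ".join(cmd))
      subprocess.run(cmd, check=True)

  # Every block is checksummed and stored on both SSDs, so a corrupted copy
  # read from one drive can be repaired from the other on the fly.
  run(["zpool", "create", "tank", "mirror", "/dev/sdb", "/dev/sdc"])

  # Periodically scrub the pool to walk all data and repair latent corruption.
  run(["zpool", "scrub", "tank"])
  run(["zpool", "status", "tank"])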
Posted on Reply
#13
Denver
trparky: According to BuildZoid, if you're using ZFS and you've got the file system spread across enough SSDs, you'll never lose a single byte of data due to hardware failure or plain data corruption; ZFS will automatically detect any errors during reads and fix them for you on the fly.
A few years ago, I lost all my files when one of my SATA SSDs in RAID 0 died; the SSD just died outright, rather than degrading the way an HDD would. Probably doesn't apply to newer SSDs, though...

Correct me if I'm wrong, but current SSDs have many more parts susceptible to failure, such as ARM chips, controllers, RAM, etc...
Posted on Reply
#14
defaultluser
Just stay away from quad-level cell drives from questionable brand names, and the drive will outlast its capacity becoming too small!

I prefer drive makers with their own flash, but you can still get decent lifetimes from larger rebranders like Seagate, Inland and Kingston.
Posted on Reply
#15
trparky
Denver: A few years ago, I lost all my files when one of my SATA SSDs in RAID 0 died; the SSD just died outright, rather than degrading the way an HDD would. Probably doesn't apply to newer SSDs, though...

Correct me if I'm wrong, but current SSDs have many more parts susceptible to failure, such as ARM chips, controllers, RAM, etc...
True, but again... if you have enough drives all mirroring the data in a ZFS-based setup, the likelihood of every drive dying at the same time is so small that you're (probably, most likely) never going to lose your data.
Posted on Reply
#16
lexluthermiester
This data from Backblaze shows what many of us already know. However, failure rates of HDDs are still very low on average, and many (myself included) will continue to use a blended array of storage in their PCs: SSDs for a boot/OS drive + HDDs for mass storage.
Posted on Reply
#17
Pepamami
Denver: I want to see them recover something from a corrupted SSD.
If you have that data in one form, in one place, with zero backups, that data isn't worth anything, not even paying someone to recover it.
Posted on Reply
#18
sLowEnd
TheDeeGee: How long does an SSD hold its data without power, though? 2 years? 5 years? 10 years?
Theoretically, some last as little as one year without power before having issues.

Anecdotally, I have powered on an old 240GB Kingston SSD that was idle for 4 years and it had no issues with the data on it.

Edit: shfs37a240g if you care about the exact model I had
Posted on Reply
#19
Ferrum Master
Well, my Crucial M4 turned 10 years old; I had two of them and both are still alive without SMART errors. That's an MLC drive, and it is still powered on occasionally as it hosts a Linux boot environment for maintenance work on my dedicated NAS. So well done, Crucial.

I have had SSD failures myself, and I see them all day long as I work in the service business. Most of the deaths come from laptops, where a wild zoo of drives is used, but the cause of the damage is mechanical failure, as laptops are carried around, bent and abused, plus shit thermals... as everyone wants thin lappies... thin, melting-hot lappies that throttle even watching YouTube.

So, all things considered, we can agree that this chart isn't comparing apples and oranges... it's comparing apples and shoelaces. There are vastly different factors at play: consumer drives vs. enterprise ones (i.e. how much space is reserved for reallocation and how well cooled they are), MLC/SLC versus new-gen multi-bit cells, and controller failures are a different topic altogether. They can't really be compared just to claim SSDs may be more reliable. In my view they are the same; it just depends on how you use them, and you can manage to kill either of them.

In the end... the topic of RAID came up. I'm not sure how well ZFS is tailored towards NAND, whether it supports TRIM and manages data rot; it is more tailored towards spinners. I haven't looked into it, but in general RAIDing SSDs is a bad idea for consumers, especially RAID 0 (leaving enterprise aside). You make them perform worse, as you hurt access times and 4K and 4K multithreaded performance. Linear writes do not matter, so quit the e-peen stuff about them.

A good idea is to do hybrid RAID. BTRFS actually natively supports RAID1 with an SSD and an HDD; it will do the work on the SSD and keep a mirror synced up on the spinner. Doing snapshots is a decent and mature plan B without any RAID magic. But BTRFS RAID is still in a beta stage as such, so you have to read up and experiment.
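
A minimal sketch of such a hybrid setup plus a snapshot, wrapped in Python; the device paths, mount point and subvolume names are hypothetical, and it needs root:

  # Minimal sketch: BTRFS RAID1 across an SSD and an HDD, plus a read-only snapshot.
  # Device paths, mount point and subvolume names are hypothetical; requires root.
  import subprocess

  def run(cmd):
      print("+", " ".join(cmd))
      subprocess.run(cmd, check=True)

  # Mirror both data and metadata across the two devices.
  run(["mkfs.btrfs", "-d", "raid1", "-m", "raid1", "/dev/ssd1", "/dev/hdd1"])
  run(["mkdir", "-p", "/mnt/pool"])
  run(["mount", "/dev/ssd1", "/mnt/pool"])

  # Snapshots as the "plan B": a read-only, point-in-time copy of a subvolume.
  run(["btrfs", "subvolume", "create", "/mnt/pool/data"])
  run(["btrfs", "subvolume", "snapshot", "-r", "/mnt/pool/data", "/mnt/pool/data-snap"])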

I will say it again.

BACKUP IS YOUR ONLY FRIEND. Better still, a backup of your backup is your next best friend. I have files on my workstation, I back them up quickly to the NAS SSDs, and they are later mirrored once more to my two spinners in RAID1 as well; then I kinda feel safe about my data.
Posted on Reply
#20
JAB Creations
1. RAID 1 1TB SSDs since 2015 zero issues.
2. Occasional clones to cold storage drives that are only hot during the actual cloning.

There is no point to running an SSD in RAID 0 unless you have some very very specific justifiable reasons to do so.

There is no reason to not run your SSD in RAID 1 unless you really don't care about the data or just occasionally clone to an external drive.

By external I don't mean a "commercial external" drive; I mean an internal drive that you don't keep physically hooked up to anything, used as a backup drive.

I've been using Paragon Hard Disk Manager for years now. I clone both full disk-to-disk and individual partitions (e.g. 1TB C:\ and 1TB D:\) to an external 4TB. It's worth the money to buy a program like Paragon Hard Disk Manager; the built-in drive tools for Windows just suck. It takes days to set up my computer if I do it from scratch, less than an hour via a clone. Even better, I can clone from a smaller drive to a larger one and from a larger drive to a smaller one. Obviously, when cloning a larger drive to a smaller one, the data must not exceed what the smaller drive can hold. The program intelligently readjusts the relative sizes of the partitions so that the data partition always uses up the available space (by default), though you can adjust it before making the commit. I have zero complaints, and having good software has saved me countless hours of aggravation in addition to the multiple RAID 1s I run.
Posted on Reply