# Btrfs RAID 5/6 Code Found To Be Very Unsafe & Will Likely Require A Rewrite



## P4-630 (Aug 7, 2016)

It turns out the RAID5 and RAID6 code for the Btrfs file-system's built-in RAID support is faulty, and users who care about their data should not be making use of it.

A mailing list thread running since the end of July reports Btrfs scrub recalculating the wrong parity in RAID5. The wrong parity and the unrecoverable errors have been confirmed by multiple parties, and the Btrfs RAID 5/6 code has been described as fatally flawed: "more or less fatally flawed, and a full scrap and rewrite to an entirely different raid56 mode on-disk format may be necessary to fix it. And what's even clearer is that people /really/ shouldn't be using raid56 mode for anything but testing with throw-away data, at this point. Anything else is simply irresponsible." 
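For context on why a scrub writing the wrong parity is so damaging: RAID5 parity is the byte-wise XOR of the data blocks in a stripe, and that parity is the only thing that lets a lost block be rebuilt. The sketch below is purely illustrative Python (it is not Btrfs code; all function names are made up) showing what a correct scrub-style check has to do, and why parity that doesn't match the data makes recovery impossible.

```python
from functools import reduce

def xor_parity(blocks):
    """RAID5-style parity: byte-wise XOR across equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

def scrub_stripe(data_blocks, stored_parity):
    """Recompute parity from the current data and compare with what is on disk.

    A scrub must derive parity from the data blocks as they actually are;
    the reported Btrfs bug was scrub writing back parity that did not
    match the data, which silently destroys the redundancy.
    """
    expected = xor_parity(data_blocks)
    return expected == stored_parity, expected

def rebuild_missing(surviving_blocks_and_parity):
    """Any single missing block is the XOR of everything that survives."""
    return xor_parity(surviving_blocks_and_parity)

# Two data blocks and their parity:
d1, d2 = b"\x0f\x0f", b"\xf0\x01"
parity = xor_parity([d1, d2])

# Losing d1 is fine while the parity is correct...
assert rebuild_missing([d2, parity]) == d1

# ...but with corrupted parity, "recovery" reconstructs garbage.
bad_parity = b"\x00\x00"
assert rebuild_missing([d2, bad_parity]) != d1
```

The last two lines are the whole story of the bug: once the on-disk parity no longer matches the data, a later disk failure turns into unrecoverable errors instead of a routine rebuild.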

So hopefully you aren't making use of any Btrfs RAID 5/6 support, as it turns out to be in very bad shape and may even be ifdef'ed out of the mkfs code. Unfortunately it could take some time to fix, especially with the potential for an on-disk format change being necessary to address the problem. The RAID56 wiki page has already been updated so users don't accidentally try one of these Btrfs RAID levels. 

http://phoronix.com/scan.php?page=news_item&px=Btrfs-RAID-56-Is-Bad


----------



## newtekie1 (Aug 7, 2016)

A couple things:

1.) It bugs me when people say "Btrfs File System." Like "ATM machine," where the M already stands for Machine, it's redundant: the fs in Btrfs stands for File System. I know this isn't the OP's fault, because the article is the one that does this; the OP just copied their mistake.

2.) This is why you don't use Software RAID. Stuff like this happens far too often. It is just like ZFS' issues. There were a lot of people that jumped to it, especially with FreeNAS, because it was cheap and you could apparently run it on cheap hardware. Then they came out and said, "oh, you really need to be using ECC memory with ZFS, and make sure you have really good power redundancy. Because of the way ZFS stores data in memory, there are times when data chunks and parity chunks are only in memory and not written to disk. So a memory error, or a power outage before the data/parity is written, will result in an unreadable volume...oops." I'm not saying hardware RAID is infallible, but major issues like this seem a lot rarer there.

3.) Full disclosure, I'm being kind of a hypocrite on number 2, as I'm currently using Windows Storage Spaces with a bunch of mismatched drives to create a large redundant storage space. But that space is only used to back up a proper hardware RAID5 array.


----------

