# NAS Backups



## Russ64 (Jun 9, 2019)

Hi all,

Do you have a NAS?  If so, do you back it up?

Let's get some thoughts.

Today is my NAS backup day.  Backing up my 4TB RAID 0 NAS to a 3TB external USB 3.0 HDD.


----------



## Solaris17 (Jun 9, 2019)

Backblaze B2


----------



## Ahhzz (Jun 10, 2019)

I let my NAS back itself up with a mirrored Raid.


----------



## phill (Jun 10, 2019)

Eventually I'll have two backups from my NAS. I've a set of standalone drives in a caddy that backs it up via a program called SyncToy, and when I've securely blanked some hard drives I'll have a server set up with a RAID 5 to have another backup copied to. I do need to upgrade the drive sizes I back up to, since I've three 4TB RAID 1 arrays in my NAS (Synology) and only a 3TB and a 2TB drive to back up to... oopsie!!

If I do get time to set up my LTO5 tape drive, I'll have that to back up the more important data to as well, just in case. Otherwise I can just burn a Blu-ray disc if needed. I like to make sure I don't lose anything I would like to keep hold of.


----------



## mashie (Jul 11, 2019)

Really important stuff I put on Google drive, the rest will have to survive on the RAID6 array. 

I don't have enough spare drives to back up 40TB+ of data.


----------



## Jetster (Jul 11, 2019)

My FreeNAS (RAID6) is backed up to a Synology NAS (RAID1)


----------



## silentbogo (Jul 11, 2019)

RAID-1. 
I've also considered cloud storage, like blockchain-based Sia, but it's too localized (not too many nodes in my region). Price is good, though: currently averages at $0.50 per TB/mo, and was around $2 at its peak when crypto was at its highest. May be good for personal projects, but I don't wanna risk using it for work. Alternatives are way too expensive... cheaper to build another NAS, which will pay for itself in less than a year.


----------



## Solaris17 (Jul 11, 2019)

silentbogo said:


> RAID-1.
> I've also considered cloud storage, like blockchain-based Sia, but it's too localized (not too many nodes in my region). Price is good, though: currently averages at $0.50 per TB/mo, and was around $2 at its peak when crypto was at its highest. May be good for personal projects, but I don't wanna risk using it for work. Alternatives are way too expensive... cheaper to build another NAS, which will pay for itself in less than a year.


I hope you do look into something else because RAID1 is not a backup.


----------



## silentbogo (Jul 11, 2019)

Solaris17 said:


> I hope you do look into something else because RAID1 is not a backup.


My NAS is used for cloning local backups, so if I look into something else, it'll technically be the "backup for the backup of the backup".
Plus my boss is very paranoid about cloud storage, so that'll never happen (I've tried to convince him on several occasions).


----------



## Solaris17 (Jul 11, 2019)

silentbogo said:


> My NAS is used for cloning local backups, so if I look into something else, it'll technically be the "backup for the backup of the backup".
> Plus my boss is very paranoid about cloud storage, so that'll never happen (I've tried to convince him on several occasions).



No, that's cool, as long as you have more than one. I also couldn't have made that sound any more like I was a dick, so sorry; I just didn't know if you actually thought it was a backup. I understand the allure of RAID 1 for people not in the know, but I get nervous when people say things like that if they truly do not understand the consequences, because it's easy for people to think it's totally fine. I should have elaborated in my previous post, and not aimed it at you specifically, but in case someone without the insight stumbles across the thread, please allow me.

RAID 1 protects data against:
- Disk failure

RAID 1 does not protect data against:
- RAID controller failure
- Virus/malware (ransomware)
- Filesystem corruption
- Accidental deletion
- Acts of god

Since it's bit-for-bit in real time, anything bad that happens to the data on one disk will happen to the other, even if you don't notice, thus rendering the data effectively useless.
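The deletion case can be sketched in a few lines of Python (toy dictionaries standing in for drives, not any real RAID API; filenames are made up):

```python
# A mirror replicates every change instantly, including the bad ones.
# A point-in-time backup still holds yesterday's copy.
drive_a = {"photos.zip": b"data"}
drive_b = dict(drive_a)            # RAID 1 mirror of drive_a
backup  = dict(drive_a)            # last night's backup copy

del drive_a["photos.zip"]          # accidental deletion (or ransomware)
drive_b = dict(drive_a)            # the mirror faithfully replicates it

print("on mirror:", "photos.zip" in drive_b)   # False - gone from both disks
print("on backup:", "photos.zip" in backup)    # True  - still recoverable
```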


----------



## Deleted member 158293 (Jul 11, 2019)

Backup off-site, saved me many times.  Simple as that.


----------



## phill (Jul 12, 2019)

I always find it's best to have different types of backups as well. Not just on drives; maybe tape or, as @yakk mentions, off-site. I've an extra 2 or 3 backups if I choose to use them. For me, pictures of my daughters are paramount, since we rarely print anything off any more... Couldn't take losing any of that.


----------



## newtekie1 (Jul 12, 2019)

Not a NAS, but a file server.  I have a RAID5 array that is backed up to another RAID5 array.


----------



## Ahhzz (Jul 12, 2019)

phill said:


> I always find it's best to have different types of backups as well. Not just on drives; maybe tape or, as @yakk mentions, off-site. I've an extra 2 or 3 backups if I choose to use them. For me, pictures of my daughters are paramount, since we rarely print anything off any more... Couldn't take losing any of that.


I hear you, man. I've got photos stored on my file server, backed up on an external, and on my NAS, which is RAID 1. And many have also been pushed to one cloud or another...

Speaking of my file server, I can't encourage people enough to use something like HD Sentinel for monitoring critical drives. I've had at least 10 different instances in the last 3 years of user machines doing "weird things", with Windows reporting no issues with a drive, but then I throw HD Sentinel at it and it tells me I've got a pending drive failure. I start imaging the drive, and get "There was a problem copying XXXX sectors to the image, continue anyway?" from my software. Use _something_ to watch your drives, people ....


----------



## Deleted member 158293 (Jul 12, 2019)

Ahhzz said:


> I hear you, man. I've got Photos stored on my file server, backed up on an external, and on my NAS, which is Raid 1. And many have also been pushed to one cloud or another...
> 
> Speaking of my file server, I can't encourage people enough to use something like HD Sentinel for monitoring critical drives. I've had at least 10 different instances in the last 3 years of user machines doing "weird things", and Windows reporting no issues with a drive, but then I throw HD Sentinel at it, and it tells me I've got a pending drive failure. Start imaging the drive, and I get "There was a problem copying XXXX sectors to the image, continue anyway?" from my software. Use _something _to watch your drives, people ....



Yes!  I like Seagate for their extra health checks (never thought I'd write that), which you can use to trigger a stand-by hot-swap drive to kick in and pre-emptively start transferring the RAID array away from the pending drive failure.

And yes pictures & videos are now a very big deal for everyone.   Off-site storage of those helped me save them a couple times from unfortunate events.


----------



## silkstone (Jul 12, 2019)

Ahhzz said:


> I let my NAS back itself up with a mirrored Raid.



RAID is not a backup, and it's not a good idea with drives over 2TB.


----------



## TheLostSwede (Jul 12, 2019)

I moved to SnapRAID as I mostly have media files on my NAS. It should survive at least one drive failing, and I have very little data on there that's really important.
It's not as fast as RAID 5, but it should have better redundancy and should be easier to both recover data from and upgrade. Upgrading RAID arrays is a PITA imho, and you risk losing drives doing it, as rebuilding a RAID array is not kind to the drives.

All my important stuff is on Google drive and/or Dropbox.


----------



## Jetster (Jul 12, 2019)

silkstone said:


> Raid is not a backup and it's not a good idea with drives over 2Tb



A RAID is a legit backup for a RAID as long as it's not the same RAID. But a RAID by itself, correct, is not a backup. And drives over 2TB are fine for a RAID; I have 6TB drives. Now, there was some debate about using RAID 5 on drives larger than 2TB, as by the time you rebuild a drive the RAID might fail, so it's best to have at least two redundant copies (RAID 6, 10, 50, 60). But most servers now have large drives in a RAID.

I will say that for years I did not back up my data; I just could not afford it. So I would use a large USB drive, replace it every 3 or 4 years, and hope it didn't fail. And a few DVD copies. I was lucky.


----------



## mashie (Jul 12, 2019)

silkstone said:


> Raid is not a backup and it's not a good idea with drives over 2Tb


I have 7x 10TB drives in RAID 6, working just fine, and it can handle two drives dying at once.


----------



## Jetster (Jul 12, 2019)

mashie said:


> I have 7x 10TB drives in RAID6, working just fine and can handle two drives die at once.



You still should have a backup. That's some $ for those drives; surely you can afford a few more. Do you have a spare 10TB drive if one decides to go south?

I have a third, cold copy of all my stuff: 25GB Blu-ray discs, but it's a few years old.


----------



## mashie (Jul 12, 2019)

Jetster said:


> You still should have a backup. That's some $ for those drives; surely you can afford a few more. Do you have a spare 10TB drive if one decides to go south?
> 
> I have a third, cold copy of all my stuff: 25GB Blu-ray discs, but it's a few years old.


If a 10TB drive dies I will just order a new one, so it's only 24h with RAID 5-level redundancy should that happen. In the past 15 years I have had 2 drives go bad. One was in RAID 5, and it died as I migrated to the first iteration of the current server, back when it simply did drive merging with no resiliency using MHDDFS. The second drive was part of said MHDDFS array, but thankfully it was only half dead: it could still be read but not written to. Data recovered, and the RAID 6 with a new set of drives was born. The first drive to die was a 2TB Samsung and the second was a 4TB WD Blue.

As long as I can survive a dead drive or two I'm happy.


----------



## silkstone (Jul 13, 2019)

mashie said:


> I have 7x 10TB drives in RAID6, working just fine and can handle two drives die at once.



If one of the drives fails and you have to rebuild, be prepared for an extremely long wait and the possibility that the rebuild fails, thus losing all of your data.
I recently built a NAS and was initially looking at some form of RAID; everything I found said that RAID is a bad idea for larger drives, especially if you don't have backups!
There are better methods of having reliable parity than RAID.


----------



## Ahhzz (Jul 13, 2019)

silkstone said:


> If one of the drives fails and you have to rebuild, be prepared for an extremely long wait and the possibility that the rebuild fails thus losing all of your data.
> I recently built a NAS and was initially looking at some form of RAID, everything I found said that RAID is a bad idea for larger drives. especially if you don't have backups!
> There are better methods of having reliable parity, other than raid


In rebuilding a RAID, especially a RAID 6, all the system does is start copying the data that _should_ be on the new drive to the new drive. I don't see how that rebuild can result in losing data.


----------



## mashie (Jul 13, 2019)

My guess is a full drive rebuild will take ~18h. That is based on the fact that an array expansion with either one or two drives took 36h, but during those expansions every block on every drive was read and re-written. To replace a drive is just a case of reading from one set and only writing to the replacement.

So unless I have another TWO drives break during that 18h window things will be just fine.
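Back-of-envelope, that estimate is consistent with the rebuild being limited by the replacement drive's sustained write speed. A quick sketch (the ~160 MB/s figure is an assumption for a 10TB 7200rpm drive, not a measurement):

```python
# Rough rebuild-time estimate for replacing one failed drive: the array
# reads from the survivors and writes the reconstructed data to the new
# disk, so the new disk's sustained write speed is the likely bottleneck.
capacity_bytes = 10e12    # one 10 TB drive
write_speed = 160e6       # ~160 MB/s sustained write (assumed)

hours = capacity_bytes / write_speed / 3600
print(f"~{hours:.0f} h to fill one 10 TB replacement drive")  # ~17 h
```

In line with the ~18h guess; real rebuilds run slower if the array stays busy serving other I/O.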

The only thing better than dual parity is triple parity but then you have to go for ZFS RAIDZ3 at which stage you will have fun and games trying to expand the array.

GlusterFS was tempting but that would only offer single parity striping or data duplication/triplication. Not very efficient to use 15x 10TB drives for 50TB of storage.

Now if someone could reverse engineer the Backblaze software which is like GlusterFS but with triple parity striping I would consider it for the next storage solution.


----------



## silkstone (Jul 13, 2019)

Ahhzz said:


> In rebuilding a raid, esp a raid 6, all the system does is start copying the data that _should_ be on the new drive, to the new drive. I don't see how that rebuild can result in losing data.











Two reads on the subject:
- "RAID5 and Large Disks: Dealing with Rebuild Times" - www.enterprisestorageguide.com
- "RAID 6's Dirty Little Secret" - community.spiceworks.com


You can just search around for RAID 6 rebuild failures.

There's a higher probability of another disk failing during a rebuild, especially as the rebuild time is so long on RAID 6 with large hard drives. Also, if the controller fails, you are SoL.

Filesystems like ZFS or unRAID work better for larger disks. They will generally use fewer disks and allow you to keep your backups on the spare disks you have left over.



mashie said:


> My guess is a full drive rebuild will take ~18h. That is based on the fact that an array expansion with either one or two drives took 36h, but during those expansions every block on every drive was read and re-written. To replace a drive is just a case of reading from one set and only writing to the replacement.
> 
> So unless I have another TWO drives break during that 18h window things will be just fine.
> 
> ...



10TB drives? More like 2 weeks: https://www.memset.com/support/resources/raid-calculator/

How about a product like unRAID for your next project? You have enough disks for it to be worth the licence fee.


----------



## mashie (Jul 13, 2019)

Good thing people stopped doing hardware RAID a decade ago; with software RAID in Linux I can move all the disks to a different computer altogether if required.


----------



## newtekie1 (Jul 13, 2019)

silkstone said:


> If one of the drives fails and you have to rebuild, be prepared for an extremely long wait and the possibility that the rebuild fails thus losing all of your data.
> I recently built a NAS and was initially looking at some form of RAID, everything I found said that RAID is a bad idea for larger drives. especially if you don't have backups!



I've rebuilt arrays with 6TB disks before; it really doesn't take that long, and a rebuild failing doesn't result in all the data on the array being lost.  The array just goes back to a degraded state.

I always see people making statements like that who have never actually done it.



silkstone said:


> There are better methods of having reliable parity, other than raid



At the end of the day, parity is parity.  ZFS or unRAID is not any better than a dedicated hardware RAID controller.  A drive failure still results in a rebuild, pretty much the same as with RAID 5/6.
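For what it's worth, the single-parity idea both camps rely on is the same XOR trick; here's a toy sketch (made-up block contents, no real RAID or ZFS code):

```python
# Single-parity reconstruction, the idea behind RAID 5 and RAID-Z1 alike:
# parity is the XOR of the data blocks, so any one lost block can be
# recomputed as the XOR of the surviving blocks and the parity.
data = [b"\x01\x02", b"\x0a\x0b", b"\xf0\x0f"]   # three "drives"
parity = bytes(a ^ b ^ c for a, b, c in zip(*data))

# "Lose" drive 1, then rebuild it from the other drives plus parity.
rebuilt = bytes(a ^ c ^ p for a, c, p in zip(data[0], data[2], parity))
print("rebuilt block matches:", rebuilt == data[1])  # True
```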



silkstone said:


> Filesystems like ZFS or unRAID work better for larger disks. They will, generally, use less disks and allow you to keep your backups on the spare disks you have left over.



This statement literally makes no sense.  ZFS's parity is either 1 disk or 2 disks, just like RAID 5/6.  It uses the same number of drives and gives the same amount of storage space.



mashie said:


> Good thing people stopped doing hardware RAID a decade ago, with software RAID in Linux I can move all the disks to a different computer all together if required.



I can do that with my hardware RAID.  Hell, I can even move it from a Windows computer to a Linux computer if I wanted...


----------



## silkstone (Jul 13, 2019)

mashie said:


> Good thing people stopped doing hardware RAID a decade ago, with software RAID in Linux I can move all the disks to a different computer all together if required.



And how often do you scrub that array?



newtekie1 said:


> I've rebuilt arrays with 6TB disks before, it really doesn't take that long, and a rebuild failing doesn't result in all the data on the array being lost.  The array just goes back to a degraded state.
> 
> I always see people making statement like that that have never actually done it.
> 
> ...



I am by no means an expert, but literally everything I have read says to stay the hell away from RAID when working with large disks, and that RAID is not a backup solution. I guess it not being a backup isn't important for data that isn't critical, but if it is, then by having your disks in RAID rather than as a backup you might be lulling yourself into a false sense of security regarding the redundancy of that data.

Here's some info on RAID 6 rebuild times: https://serverfault.com/questions/967930/raid-5-6-rebuild-time-calculation

Whilst it's pretty reliable, the rebuild times when looking at arrays over 50TB are pretty insane. If you don't mind putting your array out of commission while it rebuilds, I guess it's a fine solution. I've just read about lots of people moving away from all forms of RAID.


----------



## newtekie1 (Jul 13, 2019)

silkstone said:


> I am by no means an expert, but literally everything I have read says to stay the hell away from raid when working with large disks and that RAID is not a backup solution. I guess it not being backup isn't important for data that isn't critical, but if it is, by having your disks in RAID rather than as a backup, you might be lulling yourself into a false sense of security regarding the redundancy of that data.



If anything you read didn't tell you to keep backups of anything on the RAID (because RAID isn't a backup), then it was full of crap.  But nothing I've read from any reputable source says to avoid RAID.



silkstone said:


> Here's some info on Raid 6 rebuild times: https://serverfault.com/questions/967930/raid-5-6-rebuild-time-calculation
> 
> Whilst it's pretty reliable, the rebuild times, when looking at arrays over 50Tb, are pretty insane. If you don't mind putting your array out of commission while it rebuilds, I guess it's a fine solution. I just read about lots of people moving away from all forms of RAID.



Random forum posts aren't reliable sources.  Yes, I recognize the irony in saying this.

The rebuild times with ZFS or unRAID parity are just as bad as with a hardware RAID.  I've used them all; there really isn't a huge difference between RAID 5/6 and ZFS parity, ZFS is just a software form of it.  And note that ZFS is not redundant like RAID 5/6 by default.  You have to specifically enable parity (it's actually called RAID-Z) before the storage pool will become redundant.

Also, I have no idea what you mean by "if you don't mind putting your array out of commission while it rebuilds".  The array is completely usable during a rebuild, or an OCE or ORLM.  And most NAS devices still use RAID; it's the industry standard for redundancy.


----------



## silkstone (Jul 13, 2019)

newtekie1 said:


> If anything you read didn't tell you to keep backups for anything on the RAID, because RAID isn't a backup, then they were full of crap.  But nothing I've read from any reputable source says to avoid RAID.
> 
> 
> 
> ...



When the disk is rebuilding, there is meant to be a huge amount of overhead slowing all access to your system, or putting it out of commission for the duration of the rebuild.
Here's a (non-forum) article about the death of RAID 6 in enterprises: https://www.zdnet.com/article/why-raid-6-stops-working-in-2019/
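The core of that argument is a probability estimate: at the commonly quoted consumer URE rate of one unrecoverable read error per 10^14 bits, a full read of several large drives is likely to hit at least one error. A rough sketch (the URE rate and the six-surviving-drive scenario are assumptions in the spirit of such articles, not measurements):

```python
import math

def p_at_least_one_ure(bytes_read, ure_per_bit=1e-14):
    """Probability of >= 1 unrecoverable read error while reading
    bytes_read, assuming independent bit errors at the quoted rate."""
    bits = bytes_read * 8
    # (1 - p)^n underflows for huge n; use exp(n * log1p(-p)) instead.
    return 1.0 - math.exp(bits * math.log1p(-ure_per_bit))

# RAID 5-style rebuild reading six surviving 10 TB drives in full.
p = p_at_least_one_ure(6 * 10e12)
print(f"P(at least one URE during rebuild) ~ {p:.3f}")  # ~0.992
```

Real drives often beat the quoted spec, which is one reason rebuilds succeed more often than this math suggests.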


----------



## mashie (Jul 15, 2019)

silkstone said:


> And how often do you scrub that array?


Slightly more often than my previous arrays, which was 0 times in 10 years. 

Out of curiosity to see how long a scrub would take I did one the other day, it took 14h. If I'm bored enough I might do it again in 2020.


----------



## newtekie1 (Jul 15, 2019)

silkstone said:


> When the disk is rebuilding there is meant to be a huge amount of overhead slowing all access to your system or putting it out of commission for the duration of the rebuild.
> Here's a (non-forum) article about the death of raid 6 in enterprises https://www.zdnet.com/article/why-raid-6-stops-working-in-2019/



The person writing that article doesn't seem to understand how RAID works or has never used one...

And a URE doesn't stop the rebuild.  In fact, modern controllers (and a lot of not-so-modern controllers) have options specifically to address errors on rebuild.  All of mine by default just continue to rebuild.  The URE will cause whatever data the read error happened on to be corrupt, but the entire array is not lost.  And that's why you have backups: any corrupt data can be restored from the backup.

As for huge overhead while rebuilding: yes, there is.  And yes, the array is slower, but it's still usable and definitely not out of commission during the rebuild.


----------



## Hellfire (Jul 16, 2019)

Ahhzz said:


> I let my NAS back itself up with a mirrored Raid.



If it is still within the NAS, that's not a backup; that's redundancy. 

My backup system for my main data

NAS Drive for on site storage & back up of PC documents
- Google Drive for Backup1
- AWS for Backup2

90 day data retention on all files.


----------



## Ahhzz (Jul 16, 2019)

Hellfire said:


> If it is still within the NAS that's not a backup, that's a redundancy.
> 
> ...


True, but considering that at home all I'm worried about is hard drive failure, it works as "backup" for me. I don't back up my main PC, as all I use it for is gaming, photo editing, and browsing; nothing critical. The critical data (taxes, a very few docs) on my server is all double-backed on the cloud. The photos are my main concern for "data retention", and the worst that could happen to me there is a drive failure. Hence, redundancy equals one more level of backup for me.


----------



## Hellfire (Jul 16, 2019)

Ahhzz said:


> True, but considering at home, all I'm worried about is hard drive failure, it works as "backup" for me . I don't backup my main PC, as all I use it for is gaming, photo editing, browsing, nothing critical. The critical data (taxes, a very few docs) on my server is all double backed on the cloud. The photos are my main concern for "data retention", and the worst that could happen to me there is a drive failure. hence, redundancy equals one more level of backup for me



House could burn down... that's worse than drive failure, but I get what you're saying bro ;-)


----------

