# A raidy question....  What would you use??



## phill (Jan 2, 2019)

I'm planning on changing around my raid 1 on my Synology system as I'm running low on space and I'm looking to buy bigger drives. 

Currently I have six 4TB drives, set up in pairs as RAID 1, giving me 12TB of usable space (ish).

I'm looking to buy 12TB drives for extra storage, but I was wondering how anyone might suggest setting them up...  With RAID 1 I think I'm losing a bit of storage for how I've set up the volumes (3 separate ones, not just 1 big volume), so I was wondering if a RAID 5 or even 6 might be a better option?

I just thought I'd ask people here and see what their preference might be?  I've done a little poll as well just for info really, so thanks in advance if you vote


----------



## qubit (Jan 2, 2019)

You always lose half your storage when you're configuring for data redundancy and it can't be helped. Just RAID 1 the 12TB drives and you'll be good to go.

EDIT: there seems to be some misunderstanding on what I explained above. What I'm saying is that even though there's 24TB of storage between the two drives, you'll only be able to see 12TB since the drives mirror each other for data redundancy. That's what I mean by "losing storage space" and I would have thought you guys would have got that.


----------



## FordGT90Concept (Jan 2, 2019)

How many 12 TB drives are you thinking about buying?

RAID1 is ideal with two drives.
RAID5 is ideal with three-four drives.
RAID6 is ideal with more than four drives. Beware that RAID6 controller cards aren't cheap.
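
For a rough feel for the capacity math behind those recommendations, here's a minimal sketch (the function name and figures are just illustrative; it assumes equal-sized drives and ignores filesystem overhead):

```python
def usable_tb(level, n, size_tb):
    """Rough usable capacity for n equal-sized drives (ignores filesystem overhead)."""
    if level == "raid1":
        return size_tb              # every drive holds the same mirrored copy
    if level == "raid5":
        return (n - 1) * size_tb    # one drive's worth of capacity goes to parity
    if level == "raid6":
        return (n - 2) * size_tb    # two drives' worth goes to double parity
    raise ValueError(f"unknown level: {level}")

# e.g. four 12 TB drives: RAID1 -> 12, RAID5 -> 36, RAID6 -> 24
```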


----------



## Vayra86 (Jan 2, 2019)

I stopped raiding, only playing solo now

Wait...


----------



## phill (Jan 2, 2019)

qubit said:


> You always lose half your storage when you're configuring for data redundancy and can't be helped. Just raid 1 the 12TB drives and you'll be good to go.





FordGT90Concept said:


> How many 12 TB drives are you thinking about buying?
> 
> RAID1 is ideal with two drives.
> RAID5 is ideal with three-four drives.
> RAID6 is ideal with more than four drives. Beware that RAID6 controller cards aren't cheap.



Thanks for the replies guys, appreciated 

I was currently thinking about doing another RAID 1 volume (so it would be a total of 4 volumes and 8 drives), but I wondered if I could get say 4 drives, RAID 5 them, have about 36TB of space ish, and use the 4TB drives as backups for the RAID 5 array?  
I think the single extra RAID 1 volume would be the easiest, but I'd still end up needing 3 drives - 2 for the RAID 1 and then another for a backup drive that would sit outside the NAS.  

As for how many I'd be buying, @FordGT90Concept , I'd say a minimum of at least 3, and then depending on whether I'd like an extra or two for being able to back up the data elsewhere.  RAID's only good for so much, so I have to make sure I have another copy of it elsewhere   I'm a little OCD to be honest at that point  

As for the Raid 6, I believe I could use one of the server cards I have currently downstairs in the rack of servers I've had from work.  I'm unsure though if the Synology system I use would be able to handle that level of raid?  Currently I just use the software model from Synology under Raid 1.  I'll see if I can add in another drive and just see what levels of Raid it will work with 



Vayra86 said:


> I stopped raiding, only playing solo now
> 
> Wait...



I laughed, I liked


----------



## Athlonite (Jan 2, 2019)

Why not RAID 0+1? Speed and redundancy


----------



## Aquinus (Jan 2, 2019)

qubit said:


> You always lose half your storage when you're configuring for data redundancy and can't be helped. Just raid 1 the 12TB drives and you'll be good to go.


Wrong. You always lose at least half of your storage when you use RAID-1, which literally makes it the least effective option, and that's assuming you only have one copy of any given drive at once. You could have RAID-1 where the same drive is duplicated 3 times, in which case 3x6TB drives in a single RAID-1 still only gets you 6TB.

Personally, I run RAID-5 because when I started I only had 3 disks in my RAID. The benefit of RAID-5 (or 6 even,) is that you're still striping data across several disks so read speed will be pretty damn good (pretty close to RAID-0 performance,) but you lose performance when writing because you need to calculate and write the extra parity data. With RAID-5 you only lose 1 drive worth of storage and RAID-6 only loses you 2 drives worth of storage. RAID-5 can tolerate a single drive failure while still operating where RAID-6 can tolerate 2 drive failures.
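
To illustrate the parity being calculated on write and used on rebuild, a minimal single-parity sketch (the block values are made up):

```python
# Minimal RAID-5-style single parity: the parity block is the XOR of the
# data blocks, so any one lost block can be rebuilt from the survivors.
data = [0b1011, 0b0110, 0b1100]    # blocks striped across three data drives (made-up)

parity = 0
for block in data:                 # every write must also update this parity,
    parity ^= block                # which is the RAID-5 write penalty

lost = 1                           # pretend drive 1 failed
rebuilt = parity
for i, block in enumerate(data):
    if i != lost:
        rebuilt ^= block           # survivors XOR parity recovers the lost block
print(rebuilt == data[lost])       # True
```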

Performance-wise, my 4x1TB WD Blacks will read at ~250-350MB/s which isn't too shabby.



Athlonite said:


> Why not RAID0+1 speed and redundancy


In select cases RAID 0+1 can get you better write performance, but that's really the only benefit. You're also forced into using an even number of disks. For 4 disks, you get as many disks worth of storage as RAID-6, except you don't get nearly as much redundancy: a RAID-0 of two RAID-1s could fail after a second drive failure (depending on which drives fail), whereas any two drives in RAID-6 can fail without an issue.

The only down side of RAID-6 is not all RAID devices support it, but it's undoubtedly the best option for > 4 disks.

tl;dr: Unless you absolutely must have better write speeds, you should always go with RAID-5 or RAID-6 because it's the best balance of storage, performance, and redundancy. Even in that case, I would suggest RAID'ing SSDs if you need better performance but still need capacity and redundancy.


----------



## Athlonite (Jan 2, 2019)

NVM, misread ya post


----------



## phill (Jan 2, 2019)

Aquinus said:


> Wrong. You always lose at least half of your storage when you use RAID-1, which literally makes it the least effective option, and that's assuming you only have one copy of any given drive at once. You could have RAID-1 where the same drive is duplicated 3 times, in which case 3x6TB drives in a single RAID-1 still only gets you 6TB.
> 
> Personally, I run RAID-5 because when I started I only had 3 disks in my RAID. The benefit of RAID-5 (or 6 even,) is that you're still striping data across several disks so read speed will be pretty damn good (pretty close to RAID-0 performance,) but you lose performance when writing because you need to calculate and write the extra parity data. With RAID-5 you only lose 1 drive worth of storage and RAID-6 only loses you 2 drives worth of storage. RAID-5 can tolerate a single drive failure while still operating where RAID-6 can tolerate 2 drive failures.
> 
> ...



So it seems I might be needing to buy 4 12Tb drives and raid them in Raid 5 (as I believe the Synology supports that without any issues and no need for extra cards etc in the system) and then just work out what I need to do with backups  

I'm not sure what the 12Tb drives I'm looking at will do read/write but I have a feeling a benching session for the drives will be done when I buy them  

Are there any other options I could or should consider?


----------



## qubit (Jan 2, 2019)

Aquinus said:


> Wrong. You always lose at least half of your storage when you use RAID-1, which literally makes it the least effective option, and that's assuming you only have one copy of any given drive at once. You could have RAID-1 where the same drive is duplicated 3 times, in which case 3x6TB drives in a single RAID-1 still only gets you 6TB.


Sure, but that's what I've said if you read my comment again - that he's gonna lose half his storage. He had the previous drives in raid 1, so I'm just suggesting to have the new drives in raid 1 too.
There are more options, sure, but this is the simplest and cheapest for data redundancy. There's no absolute right or wrong here.


----------



## Aquinus (Jan 2, 2019)

qubit said:


> Sure, but that's what I've said if you read my comment again - that he's gonna lose half his storage. He had the previous drives in raid 1, so I'm just suggesting to have the new drives in raid 1 too.


That's not a suggestion though because all he gains is more redundancy. No more storage and no more performance. Just because that's the way he did it, doesn't mean sticking with it is a good idea.

Some options are better than others though, and RAID-1 is literally the worst option unless you only have two drives.


phill said:


> So it seems I might be needing to buy 4 12Tb drives and raid them in Raid 5 (as I believe the Synology supports that without any issues and no need for extra cards etc in the system) and then just work out what I need to do with backups
> 
> I'm not sure what the 12Tb drives I'm looking at will do read/write but I have a feeling a benching session for the drives will be done when I buy them
> 
> Are there any other options I could or should consider?


If you want more storage at the cost of redundancy, go RAID-5. If you want redundancy at the cost of storage, go RAID-6 if the device supports it. If 24TB is enough storage, I would suggest RAID-6 if you're able because that's a lot of data to backup and you don't want to lose the entire array.


----------



## newtekie1 (Jan 2, 2019)

For drives 6TB and under, I say RAID5 is fine.  However, for huge drives like 12TB, I'd definitely go with RAID6 if possible.  The rebuild times on 12TB drives when they are pretty full are too long for my tastes, and the fear of another disk failure during a rebuild worries me too much.
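
As a back-of-envelope feel for those rebuild times (the 150 MB/s sustained rate here is an assumption, not a measured figure):

```python
def rebuild_hours(size_tb, mb_per_s=150.0):
    """Lower-bound rebuild time: the whole replacement drive must be rewritten."""
    return size_tb * 1e6 / mb_per_s / 3600   # TB -> MB, then seconds -> hours

# A 12 TB drive at ~150 MB/s sustained works out to roughly a day of
# rebuilding, before any other load on the array slows things further.
```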



qubit said:


> Sure, but that's what I've said if you read my comment again - that he's gonna lose half his storage. He had the previous drives in raid 1, so I'm just suggesting to have the new drives in raid 1 too.
> There are more options, sure, but this is the simplest and cheapest for data redundancy. There's no absolute right or wrong here.



There are some absolute right and wrong facts here though.  And you are presenting wrong facts.  You do not lose half your space when configured for redundancy.  RAID5 and RAID6 are both redundant.  RAID5 can survive 1 drive failure and RAID6 can survive 2 drive failures.  With RAID5 you only lose the space of a single drive, with RAID6 you only lose the space of two drives.  Not half of the storage space.

If you have more than 2 drives, RAID1 never makes sense.


----------



## qubit (Jan 2, 2019)

@Aquinus @newtekie1 I've put a clarification on my post (post 2) so I hope that clarifies it.


----------



## Aquinus (Jan 2, 2019)

qubit said:


> @Aquinus @newtekie1 I've put a clarification on my post (post 2) so I hope that clarifies it.


That isn't an explanation for bad advice. Your edit doesn't invalidate anything @newtekie1 or I have said and your answer is still just as wrong.


qubit said:


> Just raid 1 the 12TB drives and you'll be good to go.



Using RAID-6 with 6 drives literally recovers an entire disk worth of capacity and makes the array more tolerant of failure. There is no justifiable reason for using RAID-1 with 6 disks.


----------



## phill (Jan 2, 2019)

Aquinus said:


> That's not a suggestion though because all he gains is more redundancy. No more storage and no more performance. Just because that's the way he did it, doesn't mean sticking with it is a good idea.
> 
> There are better options than others though and RAID-1 is literally the worst option unless you only have two drives.
> 
> If you want more storage at the cost of redundancy, go RAID-5. If you want redundancy at the cost of storage, go RAID-6 if the device supports it. If 24TB is enough storage, I would suggest RAID-6 if you're able because that's a lot of data to backup and you don't want to lose the entire array.



I'd be adding either a RAID 1 volume with two drives and one for backup, or I'd consider 3 or 4 new drives for a completely new volume and start afresh using another RAID model.   

I believe the only reason I set up RAID 1 over any other RAID was that I never had enough disk space to back everything up to, so that's why I'm now considering a completely new RAID setup on the NAS   I'll probably still end up having to buy more drives again to make sure I have enough backup space, but that's another thing extra that I'd do personally.



newtekie1 said:


> For drives 6TB and under, I say RAID5 is fine.  However, for huge drives like 12TB, I'd definitely go with RAID6 if possible.  The rebuild times on 12TB drives when they are pretty full is too long for my tastes, and the fear of another disk failure during a rebuild worries me too much.
> 
> There are some absolute right and wrong facts here though.  And you are presenting wrong facts.  You do not lose half your space when configured for redundancy.  RAID5 and RAID6 are both redundant.  RAID5 can survive 1 drive failure and RAID6 can survive 2 drive failures.  With RAID5 you only lose the space of a single drive, with RAID6 you only lose the space of two drives.  Not half of the storage space.
> 
> If you have more than 2 drives, RAID1 never makes sense.



I think I might have made a mistake or misjudgement in that I should have considered a different RAID model, but at the time, and still now, I just about have enough storage space to make sure I have everything completely backed up, as I mentioned just above 

I believe @qubit isn't wrong as such, he's just agreeing that another Raid 1 would be the easiest thing to do.  
With that said if I'm going to completely take out the 3 Raid 1 volumes I have altogether, I'll consider using a new model of raid as I have so many drives and with the added fact I'm going to replace those 6 drives with possibly 4 new ones of bigger size, I will be in a better position.  That said, I just need to make sure that I have physically enough storage space to be able to backup everything I have, otherwise if anything happens to any thing, I'd have lost it if the worst happens.  I can't have that with my personal data  

That said, now being in IT I'd hope I wasn't that silly to make a daft error and lose it all as well!


----------



## qubit (Jan 2, 2019)

@Aquinus I don't see where I went wrong to be honest and suspect a comms error somewhere, lol. Anyway, at work now, so I'll have a look properly at both your comments and @newtekie1 later and reply then to try and clear this up.


----------



## phill (Jan 2, 2019)

qubit said:


> @Aquinus I don't see where I went wrong to be honest and suspect a comms error somewhere, lol. Anyway, at work now, so I'll have a look properly at both your comments and @newtekie1 later and reply then to try and clear this up.



Hopefully also my reply above will help clear up some of it as well.   No wrong answers here, just more educated answers lol


----------



## newtekie1 (Jan 2, 2019)

qubit said:


> @Aquinus I don't see where I went wrong to be honest and suspect a comms error somewhere, lol. Anyway, at work now, so I'll have a look properly at both your comments and @newtekie1 later and reply then to try and clear this up.



You're only right if you assume he is only using 2 drives.  I read the original post as he was going to be using 6x12TB drives.


----------



## Deleted member 67555 (Jan 2, 2019)

I guess it really depends on the importance of the data but maybe try just doing a weekly backup of new files.
Unless it's for business do you really need daily redundancy?


----------



## Ahhzz (Jan 2, 2019)

I'm a RAID 5 +1 myself. Raid 5 for the security of a single drive failure not losing data, and then the +1 being a hot spare that gets pulled in ASAP if something goes wrong. Best tradeoff for redundancy with minimal loss of space, imo


----------



## phill (Jan 2, 2019)

jmcslob said:


> I guess it really depends on the importance of the data but maybe try just doing a weekly backup of new files.
> Unless it's for business do you really need daily redundancy?



I only tend to run a backup process every week or two, but I just like to make sure if I have changed a lot in a few days or gone through it and deleted all duplicates, that I'd do it right away.  

The raid 5 idea sounds like a good way of maximising space but the plus 1 also helps just in case there's a failure.   Is that simply called Raid 5+1 or is it Raid 6?

How would you set that up?  I'll have to have a Google and see if Synology would even support it..  Either way, I'm thinking 4 or 5 new drives, and then matching the space to make sure I can back up the complete load of data


----------



## Ahhzz (Jan 2, 2019)

phill said:


> I only tend to run a backup process every week or two, but I just like to make sure if I have changed a lot in a few days or gone through it and deleted all duplicates, that I'd do it right away.
> 
> The raid 5 idea sounds like a good way of maximising space but the plus 1 also helps just in case there's a failure.   Is that simply called Raid 5+1 or is it Raid 6?
> 
> How would you set that up?  I'll have to have a Google and see if Synology would even support it..  Either way, I'm thinking 4 or 5 new drives and then equal the space to make sure I can backup the complete load of data


Not sure about Synology support, but a Raid 6 is not a 5+1. Raid 6 just adds an additional drive as parity, so you lose 2 drives worth of space instead of just 1. In reality, it pretty much does the same thing, but in my head, 5+1 gives me a drive sitting there, not generating read/write cycles and additional usage for failure, whereas a Raid 6 is using all those drives all the time, so more chance for failure.


----------



## newtekie1 (Jan 2, 2019)

RAID is complicated, LOL.  RAID5+1 is different from RAID5 + Hot Spare which is different than RAID6.  I'll explain.

RAID5+1 is a nested RAID where you have two RAID5 arrays, that are then mirrored in a RAID1 array.  It's never really used.

RAID5+Hot Spare is a standard RAID5 array, with an extra hard drive connected to the RAID controller designated as a Hot Spare.  So in the event of a hard drive failure in the RAID5 array, the Hot Spare is immediately used to rebuild the array.  This is not RAID6, because during the rebuild, if you have another drive failure, your data is lost.

RAID6 is double parity.  Two drives are used to calculate parity, if one fails the array is still redundant. Even if a second drive fails, your data is still intact.

If you just look at the differences between RAID5+Hot Spare and RAID6, ignoring RAID5+1 because no one really uses that, the easiest way to explain it is what happens when a drive fails in each.  With RAID5+Hot Spare when a drive fails, the array effectively becomes a RAID0 array until the rebuild process is completed.  With a RAID6 array when a drive fails, the array effectively becomes a RAID5 array.
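
One way to summarise that degraded-state behaviour (the labels are informal shorthand, not controller terminology):

```python
# What each layout effectively becomes after a single drive failure,
# per the explanation above.
after_one_failure = {
    "raid5":             "raid0 until the rebuild completes",
    "raid5 + hot spare": "raid0, but the rebuild onto the spare starts immediately",
    "raid6":             "raid5 (one more failure is still survivable)",
}

def failures_survivable(level):
    """Simultaneous drive losses each level tolerates with data intact."""
    return {"raid0": 0, "raid1 (pair)": 1, "raid5": 1, "raid6": 2}[level]
```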


----------



## theonek (Jan 2, 2019)

Well, if you only need speed, and have another more secure place to hold the files, you can easily make a RAID 0 and enjoy hitting the limit of the SATA interface like this....


----------



## phanbuey (Jan 2, 2019)

RAID 1+0 or RAID 6 seem like the best options.  We always set our SANs up with RAID 10 w/hot spares, but the SAN has like 48 disks in it and it's tuned for maximum IOPS.  Not sure if it would be economical with 6 drives, but the performance is great.  And if a drive fails, there is still likely a working mirror drive in the RAID 0 array to rebuild the hot spare.


----------



## hat (Jan 2, 2019)

Well you definitely don't want raid5. With drives over a certain size you run a serious risk of having a URE, which is... not good. Raid6 can at least survive one URE and keep chugging.


----------



## phill (Jan 2, 2019)

It kinda sounds like my single RAID 1s, however inefficient they might be, seem to be the better way of going, as unless both drives die I'll still have a working array with one drive down.  If something mad happens and two drives go, I'd have to be massively unlucky for them to be part of the same volume..

That said, in fairness like most of the setups we've mentioned Raid anything can only protect against so much, Raid 6 might help against 2 drives failing but for that I'd possibly prefer to have 6 working drives and then 2 sat in a spare capacity just in case.  Thankfully with the WD Reds I've had for going on 4 years, I don't believe any of them have hit 20000 hours and there's little to no signs of any performance degradation..  If I could I'd build another Synology system and have everything copied over to it, then make it my main NAS and make the original one I have now a purely backup model, so just throw all drives in for storage so I could maximise the capacity or failing that just buy enough drives to have another Raid 5 array in it.  Either way, it's going to be expensive just to keep my data safe but it's what I personally feel I need to do.

I know @blindfitter has the very similar setup and I believe he uses Raid 5 with his, just with smaller drives.  
Having said that, Raid 5 might be fine as they'd be brand new drives and hopefully the chances of them failing would be very minimal .............  I'd like to hope and think!!



phanbuey said:


> Raid 1+0 or raid 6 seem like the best options.  We always set our SANs up with raid 10 w/hot spares. but the san has like 48 disks in it and it's tuned for maximum IOPS.  Not sure if it would be economical with 6 drives but the performance is great.  And if a drive fails, there is still likely a working mirror drive in the raid 0 array to rebuild the hotspare.



We have something similar at work as well, but it's covered by two SANs, massively overkill but it's worked fine for 5/6 years straight   I think since I've started in IT, there's been one or two of these drives fail, but for being constantly on and never turned off, I can't say that's bad going, especially with all the work they are doing as well   My NAS box isn't nearly as busy lol  Not even close!!


----------



## Hockster (Jan 3, 2019)

You could avoid RAID altogether and use each drive individually and have your backup software write to multiple drives.


----------



## hat (Jan 4, 2019)

phill said:


> Having said that, Raid 5 might be fine as they'd be brand new drives and hopefully the chances of them failing would be very minimal .............  I'd like to hope and think!!



RAID is first and foremost useful as a "not a backup" redundancy tool. If you go put very large drives in a RAID 5 just to hope one doesn't fail (because if one does, chances are high it's going to be *bad*) you are better off not using RAID at all. I recommend you use RAID 5 as much as I recommend you take swift kicks to the nether regions for $1 each. A URE during a rebuild, which is more and more likely to happen the larger your drives are and the more of them you have, destroys the *entire array*. And the whole point of RAID is to be able to save your data should a drive fail. You're better off using no RAID at all than using RAID 5, at this point.

RAID 6 can handle two failures. A likely scenario would be losing a disc because it just plain failed, *and* encountering a URE during the rebuild process, and your data is still okay. However, if you get two UREs, good night sweet prince. 

In fact, when dealing with 12TB drives, I'm not sure I'd feel good about RAID 6, either.  How much would it suck to get two or more UREs during a rebuild, and lose all that data? With drives this large, it's really best to just stick with good old RAID 1, or 10, if you got that many drives.

RAID 1 doesn't care about UREs. If you lose a disc, replace it and remirror, a URE can happen, and it might make that specific file that sector was on unusable (maybe), but at least your entire array won't be rekt.
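
For what it's worth, the scary figures usually come from a naive calculation like this one, which assumes the spec-sheet rate of one URE per 1e14 bits read and uniformly distributed, independent errors (both pessimistic assumptions):

```python
def p_at_least_one_ure(read_tb, bits_per_ure=1e14):
    """Chance of at least one unrecoverable read error while reading read_tb
    terabytes, assuming independent errors at the spec-sheet rate."""
    bits = read_tb * 1e12 * 8                  # TB -> bits
    return 1 - (1 - 1 / bits_per_ure) ** bits

# Rebuilding a 4-drive RAID-5 of 12 TB disks means reading ~36 TB from the
# three survivors; under these assumptions the odds come out uncomfortably high.
print(f"{p_at_least_one_ure(36):.0%}")
```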


----------



## phill (Jan 4, 2019)

As fun as getting kicks in the nether regions for $1/£1 sounds, I think as @qubit mentioned before, whilst RAID 1 might not be the most efficient or highest performance, I believe as you've pointed out it might be the safest.  

It would just be my luck that I'd setup Raid 5 and then something would happen or I'd get a dodgy batch of drives and boom, like you say, it's all dead Dave....

I'm unsure if Synology supports RAID 10, but I will do some digging on it   Right now, I'm just doing an extended SMART scan on each drive, just to make sure everything is alright with the drives I currently have; the last thing I want is to lose all of that data before I even get a chance to back it up.  The films and such I'm not worried about, but all my daughter's photos, house pictures/documents and so on are very important to me, so I want to make sure it's completely and fully backed up 

I do need to do another backup very soon, so I think when my daughter has to go back to her Mum's, I'll be making sure everything is backed up.  Then with some luck, I'll be able to buy this KillDisk program and make sure all of the spare/server drives I have are also OK and properly wiped before I put them into a server setup


----------



## hat (Jan 4, 2019)

If you can't do RAID 10, RAID 1 is a fair alternative. You'll get a boost on read speed, but no write speed boost. More importantly, your data is mirrored either way.


----------



## newtekie1 (Jan 4, 2019)

hat said:


> RAID is first and foremost useful as a "not a backup" redundancy tool. If you go put very large drives in a RAID 5 just to hope one doesn't fail (because if one does, chances are high it's going to be *bad*) you are better off not using RAID at all. I recommend you use RAID 5 as much as I recommend you take swift kicks to the nether regions for $1 each. A URE during a rebuild, which is more and more likely to happen the larger your drives are and the more of them you have, destroys the *entire array*. And the whole point of RAID is to be able to save your data should a drive fail. You're better off using no RAID at all than using RAID 5, at this point.



I don't agree with this one bit.  A URE is bad for sure, but I've had them on RAID5 rebuilds, and it definitely doesn't result in a "lose all your data" scenario.  Modern controllers, even the cheap Highpoint ones I tend to use, continue to rebuild after a URE.  The data related to that URE is just corrupt.  The same as it would be if the URE happened on a RAID1 or just a single disk that can't read a sector.  Most controllers even have an option to enable or disable continuing to rebuild after an error.

Also, verifying the array at least every 6 months, and ideally every 3 months, should be done to catch drives with bad sectors early.  This is so there is less of a chance of a surprise when you go to do a rebuild.

The corrupt data, and possibility of complete array loss, is specifically why we have backups.  Any data you put anywhere should be recoverable from another source if the original source either completely fails, or the data becomes partially corrupt.

RAID5 is a hell of a lot better than no RAID at all, to suggest otherwise just doesn't make any sense.  At least with RAID5, a single drive failure results in no data loss.  The alternative would always result in data being lost if a drive fails.


----------



## Aquinus (Jan 4, 2019)

hat said:


> RAID is first and foremost useful as a "not a backup" redundancy tool.


I would like to reiterate this. I've lost my RAID-5 before due to two disk failures, one shortly after the other, during the rebuild for the first failure. Whatever the OP decides to do, I'd make damn sure that there is a DR strategy in place. There is a reason why I have 4TB and 8TB external drives in addition to my RAID.


----------



## newtekie1 (Jan 4, 2019)

Aquinus said:


> I would like to reiterate this. I've lost my RAID-5 before due to two disk failures, one shortly after the other, during the rebuild for the first failure. Whatever the OP decides to do, I'd make damn sure that there is a DR strategy in place. There is a reason why I have 4TB and 8TB external drives in addition to my RAID.



Yes, it can't be said enough that RAID is not a backup.  It's why every RAID array that I have is backed up nightly to an identically sized backup.  Besides the chance of the entire array failing, there is also just human error.  Like Shift+Deleting your entire media folder containing movies, TV series, and music by accident...yep, I did that once.


----------



## hat (Jan 4, 2019)

newtekie1 said:


> I don't agree with this one bit.  A URE is bad for sure, but I've had them on RAID5 rebuilds, and it definitely doesn't result in a "lose all your data" scenario.  Modern controllers, even the cheap Highpoint ones I tend to use, continue to rebuild after a URE.  The data related to that URE is just corrupt.  The same as it would be if the URE happened on a RAID1 or just a single disk that can't read a sector.  Most controllers even have an option to enable or disable continuing to rebuild after an error.
> 
> Also, verifying the array at least every 6 months, and ideally every 3 months, should be done to catch drives with bad sectors early.  This is so there is less of a chance of a surprise when you go to do a rebuild.
> 
> ...


Your experience differs from what I was reading, then:

http://raidtips.com/raid5-ure.aspx

So I guess it's down to the RAID controller? RAID 5 is certainly attractive from a cost and reliability standpoint then, if you don't have to worry about a URE destroying everything. If I were OP, I might just consider RAID 5 *after making sure the controller can handle a URE without exploding*.


----------



## newtekie1 (Jan 4, 2019)

hat said:


> Your experience shows differently than what I was reading, then:
> 
> http://raidtips.com/raid5-ure.aspx
> 
> So I guess it's down to the RAID controller? RAID 5 is certainly attractive from a cost and reliability standpoint then, if you don't have to worry about a URE destroying everything. If I were OP, I might just consider RAID 5 *after making sure the controller can handle a URE without exploding*.



It talks about the speculation that chances are you can't even read the entire array without a URE right after you put the data on it.  But that's just totally bogus.  UREs are extremely rare, as the article points out, and to believe that you can't read the data right after writing it would assume that these hard drives are so unreliable that they can't even do what they are designed to do.  Which is to store data.

Even so, the article itself says that it isn't true that a single URE kills the whole array.  That may have been the case on some very old controllers, but it certainly isn't the case on any modern controller I've experienced.



> These calculations are based on somewhat naive assumptions, making the problem look worse than it actually is. The silent assumptions behind these calculations are that:
> 
> - read errors are distributed uniformly over hard drives and over time,
> - the single read error during the rebuild kills the entire array.
> *Both of these are not true*, making the result useless.


----------



## hat (Jan 5, 2019)

Hmm, I specifically remember seeing that somewhere back when I was thinking about raid myself... maybe I was reading old information or parroted misinformation.


----------



## Jetster (Jan 5, 2019)

Still running FreeNAS with five 6TB drives in RAID6. It's been going one year now with no issues, 24/7. For me, if you are going to RAID, do not use a motherboard controller. Use a controller card or a decent software program like FreeNAS or another ZFS file system. Or even UnRAID, which uses a parity-protected array but isn't a RAID


----------



## Solaris17 (Jan 5, 2019)

Jetster said:


> Still running FreeNAS with five 6TB drives in RAID6. It's been going one year now with no issues, 24/7. For me, if you are going to RAID, do not use a motherboard controller. Use a controller card or a decent software program like FreeNAS or another ZFS file system. Or even UnRAID, which uses a parity-protected array but isn't a RAID



We gotta get you a longer board, so we can fit a 10Gb network card in it. But this comment's off topic, so don't pay attention. (for real though, do it)


----------



## Jetster (Jan 5, 2019)

I had been looking, and wanted a board with ECC as well, but it works so well I just forgot.


----------



## Aquinus (Jan 5, 2019)

Jetster said:


> For me, if you're going to RAID, don't use a motherboard controller. Use a controller card or a decent software solution like FreeNAS or another ZFS file system.


I've used RSTe on my X79 board since I bought it almost 7 years ago and I've had no issues with it. I respect your tooling, but a lot of on-board RAID implementations work alright. With that said, I've used AMD, nVidia, and Intel chipset RAID, and Intel is definitely the best. nVidia actually wasn't too bad, which is what I was using prior to my current Intel board, but AMD was brutal (I ended up just using mdadm).

So with that said, it depends on the onboard controller. I think RSTe is probably one of the better ones, since that's what they use on server boards. I don't have any experience with plain ol' RST, so I can't speak to that.
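For anyone curious, a basic software mirror under Linux mdadm is only a couple of commands. A sketch; the device names `/dev/sdb` and `/dev/sdc` are placeholders for your actual disks:

```shell
# Create a two-disk RAID 1 mirror (placeholder device names).
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Watch the initial resync progress.
cat /proc/mdstat

# Record the array so it assembles automatically on boot.
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
```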


Jetster said:


> I had been looking, and wanted a board with ECC as well, but it works so well I just forgot.


If you use ZFS, you'll really want ECC memory too.


----------



## phill (Jan 5, 2019)

Well, I'm only using a cheaper Z97 ASRock board; it works and has been fine since it went in, coming up on 2 years ago now.  The WD Reds I have are 4 years old now, so whilst I'm not so worried about them as such, I would like to put something newer in there..

Here's the setup at the moment, just something that was set up and literally forgotten, same as @Jetster's
I do have some server PERC cards, so I might look into using them if it's going to be worth it. First I want to get the data off the drives, then do a load of testing with the new drives I eventually get. It's going to be a little while: at £400 a drive they aren't cheap, and getting 4 or more will cost a pretty penny..  I have two Ryzen setups I need to get done before I buy drives!

Here's a pic of the Synology setup...


Currently running DSM 6.1.7, which has been amazingly stable; I couldn't be happier with it.  So on we go with, I think, either a newer build or just replacing the drives.
For the moment throughput isn't a massive concern because of the 1Gb network around my home; once I move and upgrade to a 10Gb network, I'll worry a little more about drive performance.
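To put numbers on that: gigabit really is the wall here. Rough figures, and the ~200 MB/s sequential read for a modern large HDD is an assumption:

```python
# Gigabit Ethernet ceiling vs. a single modern HDD (rough figures).
GBE_MBPS = 1e9 / 8 / 1e6          # 125 MB/s theoretical line rate
PRACTICAL_MBPS = GBE_MBPS * 0.94  # ~117 MB/s after TCP/SMB overhead (rough)
HDD_SEQ_MBPS = 200                # assumed sequential read for a big HDD

print(f"1GbE practical ceiling = {PRACTICAL_MBPS:.0f} MB/s")
print(f"A single drive at ~{HDD_SEQ_MBPS} MB/s already saturates the link")
```

So until the 10Gb upgrade, even one drive outruns the network and array throughput genuinely doesn't matter.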

Thanks for everyone's replies, I didn't think it would be such a hot topic


----------



## newtekie1 (Jan 5, 2019)

If you are staying with DSM, I would actually recommend against using a dedicated controller card; plug all the drives into the motherboard instead.  From what I've read, DSM has a hard time seeing drives plugged into anything other than the Intel SATA ports.  And if you are setting up the drives in RAID under the DSM software, you'll be using software RAID anyway, so there really isn't any point to a hardware RAID controller.


----------



## Owen1982 (Jan 5, 2019)

Did this turn into an Xpenology thread?!

+1 for RAID 1. I'm all for more capacity, but long RAID 5/6 rebuild times on large-capacity drives are not a good thing - you are tempting fate, I think. If the hard drives are of a similar age/batch, it's not unheard of for the rebuild to kill another drive, since it puts a massive strain on the array, and then you've lost everything (if you have no backup!). I would say keep it simple: mirror, and add new hard drives when you need them, like you are doing now. Just my two cents (and yes, I have more than one old RAID 5 array ).
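The rebuild window is easy to estimate. A sketch; the 150 MB/s sustained rate is an assumption, and real rebuilds under load are often slower:

```python
def rebuild_hours(drive_tb: float, mb_per_s: float = 150) -> float:
    """Hours to read/write one full drive at a sustained rate."""
    return drive_tb * 1e12 / (mb_per_s * 1e6) / 3600

for size in (4, 12):
    print(f"{size}TB drive: ~{rebuild_hours(size):.0f}h rebuild window")
```

Going from 4TB to 12TB drives roughly triples the window (about 7 hours to about 22 at that rate), which is the "tempting fate" part: that's how long every remaining drive is under full load with no redundancy to spare.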


----------



## phill (Jan 5, 2019)

newtekie1 said:


> If you are staying with DSM, I would actually recommend against using a dedicated controller card; plug all the drives into the motherboard instead.  From what I've read, DSM has a hard time seeing drives plugged into anything other than the Intel SATA ports.  And if you are setting up the drives in RAID under the DSM software, you'll be using software RAID anyway, so there really isn't any point to a hardware RAID controller.



I've had no need to even think of changing it, really.  I suppose the only other thing I could try would be a very basic Windows Server, or better still a Linux server, as that would give me much greater hardware support (you'd like to think so with Windows, anyway) and open up a massive choice of RAID cards etc.   I've not tried to make my NAS/home server a complicated thing; I thought the simpler it is, the better, to be honest?

Software RAID seems to be perfectly fine for this setup for the moment, does anyone else here use it? 



Owen1982 said:


> Did this turn into an Xpenology thread?!
> 
> +1 for RAID 1. I'm all for more capacity, but long RAID 5/6 rebuild times on large-capacity drives are not a good thing - you are tempting fate, I think. If the hard drives are of a similar age/batch, it's not unheard of for the rebuild to kill another drive, since it puts a massive strain on the array, and then you've lost everything (if you have no backup!). I would say keep it simple: mirror, and add new hard drives when you need them, like you are doing now. Just my two cents (and yes, I have more than one old RAID 5 array ).



It always was an Xpenology thread   

Do you have a similar setup @Owen1982 ?


----------



## Aquinus (Jan 5, 2019)

Owen1982 said:


> (if you have no backup!)


The takeaway isn't to avoid RAID 5 and 6, it's to always have a backup. A second drive in a 2-disk RAID 1 can fail too, and you're hitting the drives just as hard, because you still have to copy the entire contents of the drive, just as you do with RAID 5 or 6. Calculating parity isn't extra wear on the drive; it's extra work for the CPU or RAID controller.
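That point can be sketched numerically: during a rebuild, every surviving member is read in full regardless of level, so the per-drive load is identical. A simplified model with a hypothetical helper:

```python
def rebuild_read_per_drive_tb(level: int, drive_tb: float) -> float:
    """TB each surviving drive must be read during a rebuild.

    Simplified model: RAID 1 reads the whole mirror partner; RAID 5/6
    read every surviving member in full to recompute the lost data.
    Either way, each surviving drive supplies one full drive's worth.
    """
    assert level in (1, 5, 6)
    return drive_tb  # identical per-drive load for all three levels

print(rebuild_read_per_drive_tb(1, 12), rebuild_read_per_drive_tb(5, 12))
```

The difference between levels is total data moved and parity math, not how hard any individual surviving drive gets hit.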


----------



## newtekie1 (Jan 5, 2019)

phill said:


> I've had no need to even think of changing it, really. I suppose the only other thing I could try would be a very basic Windows Server, or better still a Linux server, as that would give me much greater hardware support (you'd like to think so with Windows, anyway) and open up a massive choice of RAID cards etc.  I've not tried to make my NAS/home server a complicated thing; I thought the simpler it is, the better, to be honest?
> 
> Software RAID seems to be perfectly fine for this setup for the moment, does anyone else here use it?



Stick with DSM, it's good software.  Synology has run software RAID on all their NAS products and it works very well.


----------



## Aquinus (Jan 5, 2019)

newtekie1 said:


> Stick with DSM, it's good software.  Synology has run software RAID on all their NAS products and it works very well.


Software RAID these days really isn't bad. Modern CPUs have more than enough capability to calculate parity without a dedicated controller. At least on Linux, you can get creative and use dm-cache to do read or write-back caching against other logical volumes, so you could accelerate a RAID array with an NVMe drive or one (or many) SATA SSDs.
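The dm-cache setup looks roughly like this via the lvmcache front end. A sketch, assuming a hypothetical volume group `vg0` with an HDD-backed LV `data` and a spare NVMe drive; all names are placeholders:

```shell
# Add the fast device to the volume group (placeholder names throughout).
sudo pvcreate /dev/nvme0n1
sudo vgextend vg0 /dev/nvme0n1

# Attach a 100G cache to vg0/data in writeback mode.
sudo lvcreate --type cache --cachemode writeback \
     --size 100G --name datacache vg0/data /dev/nvme0n1
```

Writeback caching accelerates writes but means dirty data lives on the SSD until flushed, so a mirrored cache (or writethrough mode) is the safer choice if the SSD isn't redundant.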


----------



## newtekie1 (Jan 5, 2019)

Aquinus said:


> Software RAID these days really isn't bad. Modern CPUs have more than enough capability to calculate parity without a dedicated controller. At least on Linux, you can get creative and use dm-cache to do read or write-back caching against other logical volumes, so you could accelerate a RAID array with an NVMe drive or one (or many) SATA SSDs.



And the DSM software has SSD caching built in.  I use an SSD cache on my Windows-based file server to accelerate writes to my primary RAID5 array.


----------



## phill (Jan 5, 2019)

I think I'm in good hands. I was considering a bit of a change in the hardware I use, maybe a Xeon instead of my G3258, but as I mentioned there's little point at the moment given the limiting 1Gb network at home.  Sadly no 10Gb switch here, yet 

I think I'm going to have a good bit of fun when I get some new drives. If I can manage to order at least 4, I'll be on to a winner: that way I'll have two new RAID 1 volumes and can retire a pair of 4TB drives, or I can do something else with them   Either way, testing will be needed!!
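For four 12TB drives, the usable-capacity trade-off pencils out like this. A quick sketch, ignoring filesystem overhead and TB/TiB rounding:

```python
def usable_tb(level: int, n_drives: int, drive_tb: float) -> float:
    """Usable capacity for n equal drives at a given RAID level."""
    if level == 1:
        return drive_tb * n_drives / 2    # mirrored pairs
    if level == 5:
        return drive_tb * (n_drives - 1)  # one drive's worth of parity
    if level == 6:
        return drive_tb * (n_drives - 2)  # two drives' worth of parity
    raise ValueError(f"unsupported level: {level}")

for lvl in (1, 5, 6):
    print(f"RAID {lvl}: {usable_tb(lvl, 4, 12):.0f}TB usable from 4x12TB")
```

With four drives, RAID 5 buys 12TB extra over mirrored pairs at the cost of the long-rebuild exposure discussed above, while RAID 6 gives the same 24TB as mirrors but survives any two drive failures.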


----------

