# Need specifics on RAID 10



## vawrvawerawe (Mar 11, 2013)

Thinking of doing RAID 10, because out of all the RAID levels it seems to be the best choice based on my research.

I need to know:

- Specific hardware to get (besides the drives, obviously)? Names or links appreciated.
- I want to have as many as 10 drives; currently I have 6.
- I have 5TB of space used up. Does this mean I need a spare 5TB that I can't use in the array, in order to create the array (since I don't want to lose my data)?
- Is there any such thing as dynamic RAID, as in, can I add more hard drives to the array without creating a new array?
- Do I need a full backup of my most important and sensitive data, separate from the RAID, if I have RAID 10?


----------



## Wrigleyvillain (Mar 11, 2013)

While all this info is easily found with a couple Google searches (and some of your other recent threads were a bit annoying) I will go ahead and answer these to the best of my ability...maybe it's the nice formatting and all that got me. 

RAID 10 is a combo of RAID 1 (mirroring one drive to another--an exact "real time" copy) and RAID 0, which stripes various drives together into one larger volume by working in parallel. So it's a nice way to get some of the speed boost of RAID 0 with the redundancy of RAID 1. That said, *redundant RAID is not a backup* and is really, in general, just further protection against drive failure, mainly as relates to time to get back up and running (more important in a business etc).

Yes, I believe it is generally "dynamic", though to what degree depends on the level. It is designed that way in a sense--a drive dies, you physically replace it, and the array rebuilds. But setting up the array initially reinitializes all drives, so you can't set up a RAID array with data you want to keep already on the disks. You would have to move your 5TB off and then copy it back.

Lastly, it changes absolutely nothing re. backing up important data. In fact, with striping it's even more important to do so.
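The usable-capacity math behind the common levels is simple enough to sketch. A rough illustration (the function and the ten-drive example are hypothetical, and real arrays lose a bit more to filesystem overhead):

```python
def usable_tb(level: str, drives: int, size_tb: float) -> float:
    """Rough usable capacity for common RAID levels (ignores filesystem overhead)."""
    if level == "0":                 # striping: all space usable, zero redundancy
        return drives * size_tb
    if level == "1":                 # mirroring: capacity of a single drive
        return size_tb
    if level == "5":                 # one drive's worth of parity
        return (drives - 1) * size_tb
    if level == "6":                 # two drives' worth of parity
        return (drives - 2) * size_tb
    if level == "10":                # striped mirrors: half the raw space
        return (drives // 2) * size_tb
    raise ValueError(f"unknown RAID level: {level}")

# e.g. ten 2TB drives:
for lvl in ("0", "5", "6", "10"):
    print(f"RAID {lvl}: {usable_tb(lvl, 10, 2)} TB usable")
```

The halving for RAID 10 is why it's the most space-expensive of the redundant levels.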


----------



## vawrvawerawe (Mar 12, 2013)

Wrigleyvillain said:


> While all this info is easily found with a couple Google searches (and some of your other recent threads were a bit annoying) I will go ahead and answer these to the best of my ability...maybe it's the nice formatting and all that got me.
> 
> RAID 10 is a combo of RAID 1 (mirroring one drive to another--an exact "real time" copy) and RAID 0, which stripes various drives together into one larger volume by working in parallel. So it's a nice way to get some of the speed boost of RAID 0 with the redundancy of RAID 1. That said, *redundant RAID is not a backup* and is really, in general, just further protection against drive failure, mainly as relates to time to get back up and running (more important in a business etc).
> 
> ...



I already know what RAID 10 is; I Googled it long ago. 1+0, yeah yeah, that was not the question. So much for your smarta** answer, "search Google."
Useless answer that didn't even answer the question.


----------



## Aquinus (Mar 12, 2013)

vawrvawerawe said:


> Specific hardware to get? Besides the drives obviously. Names or links appreciated



I use WD Blacks in my RAID-5. Blues or Reds would be adequate. Most drives can be used in RAID, but drives with TLER are ideal, though not required.



vawrvawerawe said:


> I want to have as many as 10 drives. Currently I have 6.



8-port RAID controllers tend to get pricey (>$300 USD). Even more ports could get very expensive.



vawrvawerawe said:


> I have 5TB of space used up. So does this mean that I need to have a spare 5TB of space that I can't use in the array, in order to create the array (since I don't want to lose my data)?



Correct. You would need to back up all the data on the drives so you can initialize the RAID, then copy your data back. You should have a backup anyway, just in case.


vawrvawerawe said:


> Is there any such thing as Dynamic RAID, as in, can add more hard drives to the array without resetting a new array?


Yes. Most RAID levels will let you add more drives after the array has been created, but will not let you remove them, only replace them. You also can't change what kind of array you're running or the stripe size, as those choices must be made when the array is created.


vawrvawerawe said:


> Do I need a full backup of my most important and sensitive data, separate from the RAID, if I have RAID 10?


Yes. RAID is not intended to be a backup, but rather something to prevent headache and to maximize up-time. Even though RAID does mitigate the chance of data loss due to hardware failure, it is still possible for multiple drives to fail at the same time so a backup is always wise to have handy.



vawrvawerawe said:


> I already know what RAID 10 is, I googled it long ago. 1+0 yea yea that was not the question. So much for your smarta** answer "search google".
> Useless answer and didn't even answer the question.



Well, if you Googled it you would have found out that RAID isn't a replacement for a backup and most of these questions are answered in the first few results when you search RAID on Google. I think that is the point he is trying to make. All of the information I provided is easily accessible from the internet and Google makes it pretty easy to find it all.


----------



## lilhasselhoffer (Mar 12, 2013)

1) You asked a broad question.  Ten minutes on Google would answer 90% of it, and bring us all to the same page.  Being angry that this was suggested isn't reasonable.

2) JBOD exists (just a bunch of drives).  It spans multiple physical drives so they appear as one logical drive.  No redundancy, no performance enhancement.  I currently see no reason for a home user to run a JBOD array.

3) RAID has to rewrite each drive.  You cannot just put an extra two drives in and then RAID them.  Back up your data, and be prepared for clean drives.

4) Sweet jebus, you're looking at an expensive setup.  This is one of the cheapest 8-port SATA II RAID controllers I could find: http://www.newegg.com/Product/Product.aspx?Item=N82E16816115026  If you really want that, it's your prerogative.

5) RAID =/= backup.  It's been touched on already, but RAID arrays aren't a backup.  They might survive a number of drives failing catastrophically, but they cannot address slow degradation.  Also, remember that you're building an array out of the same type of drive.  If the model has a design flaw, more than one is likely to fail (in rapid succession).


6) Why in Hades do you want 10?  You lose half of the drive space and get a bit of a performance boost.  If you're looking at that kind of investment the solution is clear: buy an SSD for the primary applications, run a 4-drive RAID 5 array for storage, and bite the bullet on 3TB drives.  You wind up with the OS drive and 9TB of storage.  A ten-drive RAID 10 array only gains you an additional 6TB, while costing at least 3 times as much (assuming you find a RAID controller that only costs as much as two drives).
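The arithmetic in point 6 checks out; spelled out with hypothetical prices (the $130-per-drive figure is assumed, with the controller priced at two drives as he suggests):

```python
# Hypothetical prices: 3TB drives, controller priced like two drives
drive_tb = 3
drive_cost = 130                      # assumed USD per 3TB drive
controller_cost = 2 * drive_cost      # "only costs as much as two drives"

# Option A: 4-drive RAID 5 on the onboard controller
raid5_usable = (4 - 1) * drive_tb     # parity costs one drive -> 9 TB
raid5_cost = 4 * drive_cost

# Option B: 10-drive RAID 10 (mirrored stripes -> half the raw space)
raid10_usable = (10 // 2) * drive_tb  # 15 TB
raid10_cost = 10 * drive_cost + controller_cost

print(raid10_usable - raid5_usable)   # extra TB gained by RAID 10
print(raid10_cost / raid5_cost)       # cost multiplier
```

With these assumptions RAID 10 yields 6TB more usable space at exactly 3x the cost, matching the post.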


TL;DR:
Like the 3930K, if you have to ask about this topic you likely don't need it. RAID 10 is unlikely to be useful given that you want data backup and not continuous-operation reliability. You can increase your storage capacity, while decreasing cost, by using RAID 5.


----------



## Aquinus (Mar 12, 2013)

lilhasselhoffer said:


> Like the 3930K, if you have to ask about this topic you likely don't need it. RAID 10 is unlikely to be useful given that you want data backup and not continuous-operation reliability. You can increase your storage capacity, while decreasing cost, by using RAID 5.



+1: If you want anything, it's RAID-5, for redundancy while maintaining space. Unless you're running a high-volume database server or doing HD video editing, RAID-10 isn't going to benefit you and you'll only be wasting space. More than 8 drives also isn't realistic for RAID on a consumer machine. Most servers I work on don't even have more than 6 drives in them (but do have 8-port controllers).

First things first, though: back up all your stuff, then you can worry about RAID. Otherwise you'll have a RAID with nothing to put in it.


----------



## tokyoduong (Mar 12, 2013)

Why is a RAID question in graphics card forum?


----------



## Wrigleyvillain (Mar 12, 2013)

Aquinus said:


> +1: If you want anything, it's RAID-5 for redundancy while maintaining space. Unless you're running a high volume database server or HD video editing, RAID-10 isn't going to benefit you and you'll only be wasting space. More than 8 drives also isn't realistic for RAID on a consumer machine though. Most servers I work on don't even have more than 6 drives in them (but have 8-port controllers).
> 
> First thing is first though, backup all your stuff then you can worry about RAID otherwise you'll have a RAID with nothing to put in it.



You probably know more about this than me but I would not run RAID 5 on the built-in controller (or RAID 5 at all in lower-budget home scenarios). Every time you sneeze it needs a rebuild, in my (admittedly limited) experience.



tokyoduong said:


> Why is a RAID question in graphics card forum?



LOL. Figures with this, uh, guy. I didn't even notice...


----------



## MxPhenom 216 (Mar 12, 2013)

lilhasselhoffer said:


> *1) You asked a broad question.  Ten minutes on google would answer 90% of your question, and bring us all to the same page.  Being angry that this was suggested isn't reasonable.*
> 
> 2) JBOD exist (just a bunch of drives).  It functionally spans itself so that multiple physical drives appear as only one logical drive.  No redundancy, no performance enhancement.   There is no reason that I can currently see to have a JBOD array for a home user.
> 
> ...



^^This.


----------



## Easy Rhino (Mar 12, 2013)

step 1) purchase a RAID card that does RAID 10 and has 4x mini-SAS. you will want at least 1024 MB of RAM on that card.
step 2) purchase sixteen 2 TB enterprise-class Constellation drives
step 3) purchase another whole rig to do the same thing
step 4) set up rsync to copy all of the changed data from server 1 to server 2
step 5) buy two 800-watt BBUs (one for each)

there.
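Step 4 is the only cheap item on that list. A minimal sketch of the rsync invocation (the mount point and hostname are made up; the actual run is left commented out):

```python
import subprocess

# Hypothetical mount point and destination host
SRC = "/mnt/array/"                  # trailing slash: copy contents, not the dir itself
DEST = "backup@server2:/mnt/array/"

# -a preserves permissions/ownership/times; --delete mirrors removals too
cmd = ["rsync", "-a", "--delete", SRC, DEST]

# subprocess.run(cmd, check=True)    # uncomment to actually sync
print(" ".join(cmd))
```

Run from cron on server 1, this keeps server 2 a mirror of the array, which is the "backup rig" half of the (sarcastic) recipe above.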


----------



## Wrigleyvillain (Mar 12, 2013)

I think I detect some sarcasm...


----------



## Aquinus (Mar 12, 2013)

Easy Rhino said:


> step 1) purchase a raid card that does raid 10 and has 4x mini sas. you will want at least 1024 ram on that chip.
> step 2) purchase 16 2 TB enterprise class constellation drives
> step 3) purchase another whole rig to do the same thing
> step 4) setup rsync to copy all of the changed data from server 1 to server 2
> ...



Oh yeah, what he is asking for is going to blow a gaping hole in his wallet.
You think graphics cards are expensive? Getting a 16-port RAID card makes the price on nVidia's Titan look reasonable.
LSI MegaRAID SAS LSI00210 (9280-16i4e) SATA/SAS 6G...



Wrigleyvillain said:


> I think I detect some sarcasm...



Haha! Very funny, friend. Talking about sarcasm, the OP says something about 10-disk RAID? Must be a common theme.


----------



## Easy Rhino (Mar 12, 2013)

Aquinus said:


> Oh yeah, for what he is asking for it is going blow a gaping hole in his wallet.
> You think graphics cards are expensive? Getting a 16-port RAID card makes nVidia's Titan look tempting.
> LSI MegaRAID SAS LSI00210 (9280-16i4e) SATA/SAS 6G...



Not to mention RAID 10 is the most expensive form of RAID. So while he gets sixteen 2 TB drives, he only gets to use half of that space. Then, because he needs a backup solution, he needs to build an exact duplicate.


----------



## Aquinus (Mar 12, 2013)

Easy Rhino said:


> then because he needs a backup solution he needs to build an exact duplicate



Hah! Yeah, I forgot that part. Even with RAID 6 (which is what I would do with that many drives) you're looking at 16TB to back up once you reach 10 drives. Where the heck are you going to back that up to? You don't need that much space if you're not filling it, but your backup should scale with your RAID.

I thought I smelled something amiss.

...Shenanigans!


----------



## Mindweaver (Mar 12, 2013)

Just remember: the more drives you add to an array, the bigger the headache, and you'll take a performance loss.

Oh, and Easy Rhino, I think he needs to add a few UPSes to back up his BBUs as well, to add to your list.. What do you think?  - Bad joke, nobody got it.. hehehe


----------



## Aquinus (Mar 12, 2013)

Mindweaver said:


> Just remember the more drives you add to an array the bigger the headache and you'll take a performance loss.



Eventually, but RAID levels that stripe data across multiple disks tend to scale pretty well; you won't really run into this problem until you start saturating the RAID card. It's rare that you'll add a drive and get worse performance than you did before; usually it's the same or better. This really depends on the controller, but a good controller shouldn't do this with reasonably sized data. If he is filling that much space, most of it must be video, which typically loves sequential reads.

However, a very real concern is the amount of time it will take to rebuild your RAID if anything were to happen. It would take a long time, but normally you can keep running the machine while it rebuilds; performance is just poor until the rebuild finishes.

With all of this said, it's going to cost well over $1,000 USD for the controller plus the 4 extra drives to put him at 10, not even considering backing that puppy up.

I would rather get a Titan at that price point, and when all is said and done between backups, drives, and controllers, you might be able to get two Titans.


----------



## Geofrancis (Mar 13, 2013)

Try unRAID or FlexRAID:

- Easy to expand without reformatting
- No need to buy expensive controllers
- Everything is done in software, so it's easy to change hardware

Data is not striped, so if you lose 1 drive you can recover via the parity drive, and if you lose 2 you only lose the data on those disks, not the entire array.


----------



## Aquinus (Mar 13, 2013)

Geofrancis said:


> Try unraid or flex raid
> Easy to expand without formats
> No need to buy expensive controllers
> Everything is done in software so it's easy to change hardware
> ...



You still need to have the ports, and if you cheap out on controllers (which will be slower) and rely on software (which uses CPU resources and is also slower), your performance will be terrible and you could easily eat a good chunk of your CPU power just doing I/O.

Somehow with that many drives I trust LSI a lot more than any bit of software RAID.


----------



## Steevo (Mar 13, 2013)

I have used hardware based RAID 5 and never had an issue, on cards that were only a hundred bucks or so, no cache obviously or battery backup, but still never had an issue. 



My honest suggestion is to get someone involved and prepare to pay at least a few hundred bucks for help configuring a system, or buy one configured from an OEM. If you are using that much storage, get a NAS with RAID 5 and move your data to that. Keep a set of working folders for the media you're going to work on, set up MS SyncToy to sync the folders, and once done copy (or cut and paste) back to the storage folders. 3TB drives will give you enough storage with 4 or 5 drives; if you need more than this you may need a full file server, and that will cost $$$$


----------



## newtekie1 (Mar 13, 2013)

Why does everyone think you need an expensive controller for lots of drives?  I've got a $45 Highpoint RAID card that supports 10 drives...  My entire server hard drive setup that I have now supports 10 drives and cost about $310 when it was all said and done.


----------



## Geofrancis (Mar 13, 2013)

Aquinus said:


> You still need to have the ports and if you cheap out on controllers (which will be slower,) and rely on software (which uses CPU resources and is also slower,) your performance will be terrible and you could easily use a good chunk of your CPU power just to do I/O.
> 
> Somehow with that many drives I trust LSI a lot more than any bit of software RAID.



I have a pair of LSI controllers, but I still run software RAID because:

- If something happens to the controller, I just swap it out for any other SATA/SAS card.
- Something happens to the motherboard? Swap it out.
- Need to replace a drive? Swap them out and keep using the array while it's being rebuilt, with only a little downtime.
- Need to add a drive to the array? Just power down, plug it in, then add it to the array.

I know there is a lot of overhead with software RAID, but that's why I have a 1TB cache drive: I can dump files to the server and it will move them onto the array at its own pace.


----------



## Aquinus (Mar 13, 2013)

Geofrancis said:


> if something happened to the controller i just swap it out for any other sata/sas card.
> 
> something happens to the motherboard? swap it out



You can do the same thing with hardware RAID controllers and fake RAID; you just have to stick with the same kind of controller. Let's say my P9X79 Deluxe died and I got another X79 or C600-series chipset board: RSTe should detect the RAID configuration from the drives, and it should be pretty forgiving if you put everything in the same order. Just keep like hardware with like, the same way you're keeping like software with like.



Geofrancis said:


> need to replace a drive? swap them out and still use the array while its being rebuilt with only a little downtime
> 
> need to add a drive to the array? just power down, plug it in, then add to array.



Sounds like normal RAID. 



Geofrancis said:


> i know with software raid there is alot of overhead but thats why i have a 1tb cache drive so that i can dump files to the server and it will move them onto the array at its own pace.



Ah! Cool idea. Great for static files but not good for dynamically changing content. Are you just caching with a 7200 RPM drive or a raptor?


----------



## Geofrancis (Mar 13, 2013)

I'm using a 1TB WD Black for cache. I can pretty much saturate my gigabit network writing to it.

I prefer software RAID for long-term storage, but if I were running an operating system from the array I would do hardware RAID.

I went from a combination of 2x SiI, 2x JMB, and AMD onboard SATA ports to my IBM M1015 and Dell 6i; all I had to do was update unRAID to handle the new controllers and away it went.


----------



## Aquinus (Mar 13, 2013)

How is the speed writing from the cache to the RAID, though? You can't cache all file transfers to a RAID, especially if you're storing something that changes a lot, like a VM drive.


----------



## Mindweaver (Mar 13, 2013)

Aquinus said:


> *Eventually, but RAID levels that stripe data across multiple disks tend to scale pretty well and not until you start saturating the RAID card will you really run into this problem.* It's rare that you'll add a drive and get worse performance than you did before. Usually it's the same or better in most cases. This really depends on the controller, but a good controller shouldn't do this with reasonable sized data. If he is filling that much space most of it must be video which typically loves sequential reads.
> 
> However a very real concern is the amount of time it will take to rebuild your RAID if anything were to happen. It would take a long time, but normally you can run the machine with it rebuilding, it just offers poor performance while it rebuilds.
> 
> ...



I should have noted I meant software RAID or a crappy hardware controller. Headache-wise I still say the fewer drives the easier/better... and sure, with a higher-end hardware controller card he wouldn't have any issues, but that will cost him a lot of money, like you said. Plus, I wouldn't set up a RAID 5 or 10 without a BBU, and that's even more money.

I've got around $800 in one of my hardware RAID arrays. It's an LSI 3ware 9650SE 4-port (about $650) and I bought a BBU (around $110), and that's not including the hard drives, which were around $220 each using enterprise drives. So sinking money into just a 4-port RAID array can get costly quick. At the moment I'm running 9 hardware RAID arrays.


----------



## Geofrancis (Mar 13, 2013)

Aquinus said:


> How is the speed writing from the cache to the RAID though? You can't cache all file transfers to a RAID, even more so if you're storing something that changes a lot like a VM drive.



I get maybe 40MB/s from cache to array; that's limited by the parity calculations. But that's not usually a problem, as the 1TB drive is more than large enough to dump any data I want to it. The array is not used for modifying large data files, but for long-term storage with some redundancy.

ESXi and the operating systems run from a separate hard drive, but I have run them on the array without issue.


----------



## de.das.dude (Mar 13, 2013)

tokyoduong said:


> Why is a RAID question in graphics card forum?



because this guy is a troll.


----------



## Aquinus (Mar 13, 2013)

Geofrancis said:


> I get maybe 40MB/s from cache to array; that's limited by the parity calculations. But that's not usually a problem, as the 1TB drive is more than large enough to dump any data I want to it. The array is not used for modifying large data files, but for long-term storage with some redundancy.
> 
> ESXi and the operating systems run from a separate hard drive, but I have run them on the array without issue.



Ah, okay. 40MB/s is a little slow for my taste. My RAID-5 off the X79 PCH gives me something like 140-160MB/s. It also rebuilds pretty fast.


----------



## brandonwh64 (Mar 13, 2013)

Aquinus said:


> Ah okay. 40MB/s is a little slow for my taste. My RAID-5 off of the X79 PCH gives me something like 140-160MB/s. It also rebuilds pretty fast.



With RAID 0 on two first-gen SATA HDDs I get around the low 100s, but this is on an onboard Marvell chip.


----------



## Geofrancis (Mar 13, 2013)

It's not as fast because the data is not striped.

With RAID 5, if you lose more than 1 disk you lose everything.
If I lose 2 disks, I only lose what's on those 2 disks; all the rest of the files on the other 8 drives are intact, because it keeps each file whole on a single drive.


----------



## newtekie1 (Mar 13, 2013)

Mindweaver said:


> I should have noted using software RAID or a crappy hardware controller, but headache wise I still say the less drives the easier/better... and sure, using a higher end hardware controller card he wouldn't have any issues, but that will cost him a lot of money like you said. Plus, I wouldn't setup a RAID 5 or 10 with out using a BBU, and that's even more money.
> 
> I've got around $800 bucks in one of my hardware RAID Arrays. It's a LSI 3ware 9650SE 4-port (about $650) and I bought a BBU (around $110) and that's not including the Harddrives which were around $220 ea using enterprise drives. So, sinking money into just a 4-port RAID Array can get costly quick. At the moment I'm running 9 Hardware RAID Arrays.



If a RAID-5 array is set up properly, a BBU is not necessary.  Though for a server, an expensive UPS is never a bad idea.


----------



## Easy Rhino (Mar 13, 2013)

newtekie1 said:


> Why does everyone think you need an expensive controller for lots of drives?  I've got a $45 Highpoint RAID card that supports 10 drives...  My entire server hard drive setup that I have now supports 10 drives and cost about $310 when it was all said and done.



I know that you know what you are talking about, but it needs to be clarified: a lot of people confuse a hardware RAID controller card, a software RAID controller card, and a simple host adapter.

There are massive speed and performance benefits to a RAID-on-chip card over a simple software RAID controller. So if you want RAID 5, and you want it very fast, and you want it to do fancy things like write-through caching calculations, then you need a controller that has RAID 5 on it, with RAM, etc.


----------



## newtekie1 (Mar 13, 2013)

Easy Rhino said:


> i know that you know what you are talking about but it needs to be clarified. a lot of people confuse a raid controller card and a software raid controller card and simply a host card.
> 
> there are massive speed and performance benefits of a raid on chip card over simply a software raid controller. so if you want raid 5 and you want it very fast and you want it to do fancy things like write-through caching calculations then you need a controller that has raid 5 on it with ram, etc.



You are right to an extent, but also wrong.  Nowadays even the software cards can do write-through and give very decent speeds with RAID-5.  The $45 card I have, for example, does write-through without a problem and easily sustains 110-120MB/s transfers to and from the array, which is enough to pretty much saturate a gigabit connection and hence enough for any server.  Yes, it is still using the CPU to do the work, unlike a true hardware card, but even in my dual-core Celeron server the CPU is barely bothered by the extra load.  In fact, when reading/writing, the CPU load barely breaks 5%, and the i7 in my main rig just laughs it off like the load isn't even there.  Yeah, if I wanted it faster for an array directly in the machine I was using, or had a 10 Gigabit network, I'd want a hardware RAID controller.  You are right, there are massive performance gains with a hardware card.  But for a home server storing data that doesn't need to be accessed at extremely fast speeds, there is no point in a super expensive hardware RAID card; in fact, even most businesses don't need one unless they are doing something like video editing directly off the file server.
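The "saturate a gigabit connection" claim is easy to sanity-check: gigabit Ethernet moves at most 125 MB/s before protocol overhead, so a card sustaining 110-120MB/s is effectively network-bound. A quick check:

```python
# Gigabit Ethernet ceiling: 1,000,000,000 bits/s -> bytes/s -> MB/s (decimal MB)
gigabit_MBps = 1_000_000_000 / 8 / 1_000_000
print(gigabit_MBps)

# A card sustaining 110-120 MB/s uses most of the theoretical link rate
print(110 / gigabit_MBps, 120 / gigabit_MBps)
```

In practice TCP/IP and SMB overhead shave that ceiling further, which is why ~110-120MB/s is about as good as a gigabit file server gets.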


----------



## Mindweaver (Mar 13, 2013)

newtekie1 said:


> If a RAID-5 array is setup properly a BBU is not necessary.  Though for a server an expensive UPS is never a bad idea.



There again, I would not set up a RAID-5 without a BBU, and I would still have a UPS as well. I wouldn't consider a RAID-5 as being set up properly without a BBU. I'm not saying you are wrong, newtekie1, I'm just stating how I would handle it.

If you can't afford a BBU for your *hardware* RAID-5 controller card, then you shouldn't be using a *hardware* RAID-5 array; just go software.


----------



## newtekie1 (Mar 13, 2013)

Mindweaver said:


> There again, I would not set up a RAID-5 with out a BBU, and I would still have a UPS as well. I wouldn't consider a RAID-5 as being setup properly with out a BBU. I'm not saying you are wrong newtekie1, I'm just stating how I would handle it.
> 
> If you can't afford a BBU for your *Hardware* RAID-5 Controller card then you shouldn't be using a RAID-5 array.



A BBU is only required if you are using write-back cache, so no, a BBU isn't necessary.  And even if you are using write-back, a properly configured inexpensive UPS is enough to keep you safe; you don't need a BBU on the RAID controller itself.

However, yes, I would say if you can afford a true hardware RAID-5 controller and don't get the BBU, you're doing it wrong.


----------



## Mindweaver (Mar 13, 2013)

newtekie1 said:


> A BBU is only required if you are using write-back cache, so no, a BBU isn't necessary.  And even if you are using write-back, a properly configured inexpensive UPS is enough to keep you safe; you don't need a BBU on the RAID controller itself.



There again, I stated what I would do, and I never said it was necessary, buddy. Why would anyone not want to use that cache? The performance is great; I think the gains outweigh the small extra cost.




newtekie1 said:


> However, yes, I would say if you can afford a true hardware RAID-5 controller and don't get the BBU, you're doing it wrong.


----------



## newtekie1 (Mar 13, 2013)

Mindweaver said:


> There again, I stated what I would do, and I never said it was necessary buddy. Why would anyone not want to use that cache? The performance is great. I think the gains out weigh the small extra cost.



Well, like I said, if you are putting it in a server on a gigabit network the performance improvement is useless, as even software cards using write-through are enough to saturate the network.  So using the cache just adds a chance of data loss for no gain at all.


----------



## Easy Rhino (Mar 13, 2013)

newtekie1 said:


> You are right to an extent, but also wrong.  Nowadays even the software cards can do write-through and give very decent speeds with RAID-5.  The $45 card I have, for example, does write-through without a problem and easily sustains 110-120MB/s transfers to and from the array, which is enough to pretty much saturate a gigabit connection and hence enough for any server.  Yes, it is still using the CPU to do the work, unlike a true hardware card, but even in my dual-core Celeron server the CPU is barely bothered by the extra load.  In fact, when reading/writing, the CPU load barely breaks 5%, and the i7 in my main rig just laughs it off like the load isn't even there.  Yeah, if I wanted it faster for an array directly in the machine I was using, or had a 10 Gigabit network, I'd want a hardware RAID controller.  You are right, there are massive performance gains with a hardware card.  But for a home server storing data that doesn't need to be accessed at extremely fast speeds, there is no point in a super expensive hardware RAID card; in fact, even most businesses don't need one unless they are doing something like video editing directly off the file server.



The problem with software RAID comes when running more than one virtual machine; performance comes to a crawl with all of that cross-traffic. The best I could get with software RAID running several VMs was 50 MB/s transfers, with caching enabled.


----------



## newtekie1 (Mar 13, 2013)

Easy Rhino said:


> the problem with software raid comes when running more than one virtual machine. performance comes to a crawl with all of that cross traffic. the best i could get with software raid running several VMs was 50 MB/s transfers with caching enabled.



You're right, but that can bring even hardware controllers down to a crawl; it can be a problem with any hard drive setup, simply because hard drives just aren't good with heavy random-access loads.  A good hardware controller with a large amount of RAM for cache definitely helps, since the RAM cache makes up for the shortcomings of the hard drives.  But even some of the software controllers that let you use an SSD as a cache drive handle this type of load a lot better than they used to.


----------



## andrewsmc (Mar 13, 2013)

20 posts later and the OP has said NOTHING. rofl.


----------



## brandonwh64 (Mar 13, 2013)

andrewsmc said:


> 20 post later and OP has said NOTHING. rofl.



If you look at his thread record you would know why.


----------



## Easy Rhino (Mar 13, 2013)

newtekie1 said:


> You're right, but that can bring even hardware controllers down to a crawl, it can be a problem with any hard drive setup simply because hard drives just aren't good with heavy random access loads.  A good hardware controller with a large amount of RAM for cache definitely helps the situation though, since the RAM cache helps makes up for the shortcomings of the hard drives.  But even some of the software controllers that allow you to use a SSD as a cache drive are a lot better with this type of load than they used to be.



The cards now come with NAND flash on them, so you can ditch the BBU and don't have to bother with any sort of hybrid RAID setup. Those get expensive, though.



andrewsmc said:


> 20 post later and OP has said NOTHING. rofl.





brandonwh64 said:


> If you look at his thread record you would know why.



Still provides for good conversation, though.


----------



## Mindweaver (Mar 13, 2013)

I've got one comment to share with you.. Edit your posts, don't double post. I'm sure you have been told this multiple times.


----------



## Aquinus (Mar 13, 2013)

brandonwh64 said:


> With raid 0 on two first gen sata HDD's I get around low 100's but this is on a onboard marvel chip.



...and I would find that barely acceptable. When you're working on the RAID locally for things like VMs, you really want it to be quick.


----------



## brandonwh64 (Mar 13, 2013)

Aquinus said:


> ...and I would find that barely acceptable. When you're working on the RAID locally for things like VMs, you really want it to be quick.



But for an everyday PC it makes opening stuff like Word, Chrome, and other small apps that much better to deal with.


----------



## vawrvawerawe (Mar 13, 2013)

lilhasselhoffer said:


> 1) You asked a broad question.  Ten minutes on google would answer 90% of your question, and bring us all to the same page.  Being angry that this was suggested isn't reasonable.
> 
> 2) JBOD exist (just a bunch of drives).  It functionally spans itself so that multiple physical drives appear as only one logical drive.  No redundancy, no performance enhancement.   There is no reason that I can currently see to have a JBOD array for a home user.
> 
> ...




Ok, what you're saying makes a lot of sense. I wasn't sure; I thought RAID 10 meant more stability, i.e. less of a chance of losing data. Data security is very important to me. We're talking years of data. A performance increase is not very important to me at all. Currently my main drive with all primary applications is a 256GB Agility 4 SSD. Everything else is just storage. Yes, I access these files, but normal hard drive speed is sufficient for all my massive data. My SSD as primary drive is sufficient to store all my applications and is very fast; I love it.

Given that a performance increase is not important, but risk of data loss is the most important, do you still recommend RAID 5? Because the point of RAID 10 is data redundancy, so as to significantly minimize the risk of catastrophic data loss due to hard drive failure.

Cost is of course also important; I would of course look for the best deals. Currently my setup of just using multiple hard drives is working out fine for me, especially since I use Shell Link Extension to make virtual folders located on other drives to give a "virtual storage increase" - "virtual" not as a technical term but simply meaning "you don't have to click into the other drives to access the data".

Let me give an example of what I currently do:

For my video collection (it's not really private so I can show you all a screenshot of the file structure), I put all the folders on a different drive, but added Junctions within the one drive, giving the illusion they are all in the same location. Hope you grasp what it is I'm showing you:







As you see here, almost all the folders are Junction links; basically it means that if I move a file into the folder, it ends up on the other drive. (A couple of the folders I haven't moved yet; that's why you don't see the chain-link overlay on the folder icons for Alias and Motive.)

So it's working fine for me to have JBOD; the only reason to have RAID is that it would simplify things a bit by having it all as one huge drive instead of several drives. Is it even worth it to do RAID for my situation?



Geofrancis said:


> Try unraid or flex raid
> Easy to expand without formats
> No need to buy expensive controllers
> Everything is done in software so it's easy to change hardware
> ...



Whoa, Flex-RAID from their website looks great, anyone have any comments or experience with this?



andrewsmc said:


> 20 posts later and OP has said NOTHING. rofl.



Sorry for the delay in response; I was asleep last night and only just checked the thread again this morning. Happy that there are so many useful responses.


----------



## Wrigleyvillain (Mar 13, 2013)

Ok cool though next time try to put it in the right section, man.


----------



## vawrvawerawe (Mar 13, 2013)

Wrigleyvillain said:


> Ok cool though next time try to put it in the right section, man.



hmmm, Storage forum is not the right place???







****24 hours in between bumping threads  -staff****


----------



## Aquinus (Mar 13, 2013)

vawrvawerawe said:


> hmmm, Storage forum is not the right place???
> 
> http://s8.postimage.org/6geypr5et/Clipboard02.jpg



That's because a mod moved it. :shadedshu


----------



## vawrvawerawe (Mar 17, 2013)

bump


----------



## newtekie1 (Mar 17, 2013)

vawrvawerawe said:


> So it's working fine for me to have JBOD, only reason to have RAID is because it would simplify things a bit by having it all one huge drive instead of several drives. Is it even worth it to do RAID for my situation?



Yes, RAID would be worth it for your situation.  RAID-10 is only slightly more redundant than RAID-5.  With RAID-5 you can have one drive fail and still keep all your data, but with a second drive failure all is lost.  With RAID-10 you have two RAID-1 arrays nested in a RAID-0.  So it is possible to lose multiple drives and still be fine, but it is also possible that losing two drives will kill all your data.  It comes down to which drives die.

IMO, for your setup a RAID-5 would be sufficient; there isn't much point in going RAID-10, as the space you lose isn't worth the minor improvement in redundancy.  If anything, do a RAID-5 with a hot spare instead.
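To make the "which drives die" point concrete, here's a quick sketch (hypothetical 4-drive layout; the drive names are made up) that enumerates which two-drive failures kill a RAID-10, versus a RAID-5 where any two-drive failure is fatal:

```python
from itertools import combinations

# Hypothetical 4-drive RAID-10: two RAID-1 mirror pairs striped together.
mirrors = [("A1", "A2"), ("B1", "B2")]
drives = [d for pair in mirrors for d in pair]

def raid10_survives(failed):
    # The array survives as long as no mirror pair loses both of its members.
    return all(not set(pair) <= set(failed) for pair in mirrors)

two_drive_failures = list(combinations(drives, 2))
fatal = [f for f in two_drive_failures if not raid10_survives(f)]

print(len(two_drive_failures))  # 6 possible two-drive failures
print(len(fatal))               # 2 of them (both halves of one mirror) are fatal
# With RAID-5, all 6 of the two-drive failures would be fatal.
```

So with 4 drives, only 2 of the 6 possible two-drive failures destroy a RAID-10, but that nonzero chance is why it's only "slightly" more redundant than RAID-5.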


----------



## Aquinus (Mar 17, 2013)

Mindweaver said:


> I've got around $800 bucks in one of my hardware RAID Arrays. It's a LSI 3ware 9650SE 4-port (about $650) and I bought a BBU (around $110)



You can get an LSI 4-port card for 320 USD and an 8-port for a bit less than double that, not including the BBU. RAID cards aren't as expensive as they used to be. Not to say they're cheap, but any RAID with 10 drives is going to be costly no matter how you do it, and with that many drives that big I'm super skeptical about software RAID.


newtekie1 said:


> If anything do a RAID-5 with a hot spare instead.



Or RAID-6 if the device supports it; that way you would need to lose 3 drives to lose data, as opposed to two. (RAID-5 has one drive's worth of active fault tolerance and RAID-6 has two, for those who may not have known.)


----------



## Geofrancis (Mar 17, 2013)

Is the storage for manipulating large files or large data stores, or is it for media storage and playback?

If you're using large files, like for virtual machines or a massive database, then go with RAID on a very expensive controller; this is what it was designed for: redundancy and performance.

If it's just for media storage and playback then use unraid or flexraid:

You can add drives without breaking the array.

The array only powers up the drive you want the data from, so you don't need 10 drives spinning just to give you a single file.

If you lose 2 drives, all your data on the rest of the drives is fine, but with most RAID it's totally unrecoverable.

Sure, the performance is not as fast as RAID on a very expensive controller, but if it's going over gigabit that's not an issue. 

If you spend $$$ on an 8-port SATA card for RAID and then decide you want more than 8 drives, you'd better hope you can find an identical card to expand your array with. With unraid I have mixed controllers from SIL, JMB, VIA, AMD, and LSI.

I have run unraid with 10 drives for over a year, adding a drive every few months. It rebuilds the parity in around 24 hours and the array is accessible during this time.


----------



## Aquinus (Mar 17, 2013)

I just read up on Unraid and Flexraid and it's all file-level redundancy. It's by no means RAID, and I think even for pictures, video, and music I would be skeptical of running software that lets you schedule when the parity gets calculated. That alone is dangerous and is keeping me far from it. In all seriousness, if you're going to run a RAID, use something that at least does hardware-level striping rather than file-level redundancy, because your performance is going to be dismal, and the integrity of your data upon being written cannot be guaranteed if the parity info isn't written at the same time.

I would say that if you're going to run Windows then use Windows software RAID, and if you're going to use some distro of Linux I would use Linux software RAID, if 10 disks is going to be your end-all. Anything not done at the driver level is concerning, and redundancy that doesn't need to be calculated when the data is written is scary as hell; I would trust none of my data to a system that allows that insanity.


----------



## Geofrancis (Mar 17, 2013)

Aquinus said:


> I just read up on Unraid and Flexraid and it's all file-level redundancy. It's by no means RAID, and I think even for pictures, video, and music I would be skeptical of running software that lets you schedule when the parity gets calculated. That alone is dangerous and is keeping me far from it. In all seriousness, if you're going to run a RAID, use something that at least does hardware-level striping rather than file-level redundancy, because your performance is going to be dismal, and the integrity of your data upon being written cannot be guaranteed if the parity info isn't written at the same time.
> 
> I would say that if you're going to run Windows then use Windows software RAID, and if you're going to use some distro of Linux I would use Linux software RAID, if 10 disks is going to be your end-all. Anything not done at the driver level is concerning, and redundancy that doesn't need to be calculated when the data is written is scary as hell; I would trust none of my data to a system that allows that insanity.





It's not file-level redundancy; unraid calculates the parity in real time just like a hardware RAID controller, but it does not stripe the data across the drives, so the read performance is the same as a single drive, like a parity-protected JBOD array. This means that in a worst case where I lose 2 data disks, or a parity and a data disk, I can remove the rest of my drives, plug them into any Linux box, and just copy the files off. 

I know I'm limited to 40 MB/s writing directly to the array, but that's why I have the cache drive to dump files to. And yes, I know it's not protected while it's on that cache drive, but that's why I write my backups direct to the array.

Flexraid works the same, but you can have it with network drives or USB or hard drives or any data store you want. 

It has 2 options: real-time parity just like normal RAID, or scheduled parity, so you can restore to the last parity run.

You're missing my point that RAID is not for long-term storage; it was designed for performance and uptime, not redundancy and capacity. Do you need to read and write your movies at 200 MB/s when you are limited to ~100 MB/s over gigabit Ethernet?

Think of it more like a NAS than a server.
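For anyone unfamiliar with how a single dedicated parity disk can rebuild one failed drive, here's a toy sketch (the byte strings standing in for disks are made up, and this assumes simple XOR parity in the style Geofrancis describes, not unraid's actual implementation):

```python
from functools import reduce

# Toy "disks" of equal length; the parity disk holds, at each offset,
# the XOR of the bytes at that offset on every data disk.
data_disks = [b"movie one", b"photos!!!", b"documents"]
parity = bytes(reduce(lambda a, b: a ^ b, block) for block in zip(*data_disks))

# Lose any single data disk; XOR the parity with the survivors rebuilds it,
# because x ^ x = 0 cancels every surviving disk out of the parity.
lost_index = 1
survivors = [d for i, d in enumerate(data_disks) if i != lost_index]
rebuilt = bytes(reduce(lambda a, b: a ^ b, block) for block in zip(parity, *survivors))

print(rebuilt)  # b'photos!!!'
```

Note this only covers a single failure, which is exactly why losing two data disks in such a scheme loses those two disks' contents but nothing else.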


----------



## Aquinus (Mar 17, 2013)

Geofrancis said:


> You're missing my point that RAID is not for long-term storage; it was designed for performance and uptime, not redundancy and capacity. Do you need to read and write your movies at 200 MB/s when you are limited to ~100 MB/s over gigabit Ethernet?



Actually, RAID was designed for long-term reliable storage. There is no point in running a RAID if it's not going to last, because downtime is going to cost you, and uptime is usually the reason you're running RAID in the first place. Not to mention uptime is directly related to the reliability of your storage devices.

Even software RAID offers a lot of the protection of hardware RAID, and is usually just as reliable with a performance penalty, but not one nearly as steep as 40 MB/s. I don't know about you, but when I go to back up my RAID (even unraid isn't a replacement for offsite backup), I get pretty annoyed at how slow my USB 2.0 external drive is; even running it on a USB 3.0 port in Turbo mode, I get about 42 MB/s and it sucks. I don't care what kind of data I have, I don't want those kinds of speeds locally.

Just when you thought a WD Caviar Green on SATA was slow. 

So yeah, for strictly storage I'm sure it's fast enough for regular usage, like watching a video, but if you're going to copy an HD video to it or do any transcoding you're going to wish you had a real RAID that writes faster. So if one person is using it, strictly for storage and occasionally watching an HD video, it might be enough, but I wouldn't try doing anything with it while you're backing up, syncing, or moving any data to or from it. 40 MB/s is fairly easy to saturate. I understand you have a caching drive for writes, but not all data can be handled that way.


----------



## Geofrancis (Mar 17, 2013)

Aquinus said:


> Actually, RAID was designed for long-term reliable storage. There is no point in running a RAID if it's not going to last, because downtime is going to cost you, and uptime is usually the reason you're running RAID in the first place. Not to mention uptime is directly related to the reliability of your storage devices.
> 
> Even software RAID offers a lot of the protection of hardware RAID, and is usually just as reliable with a performance penalty, but not one nearly as steep as 40 MB/s. I don't know about you, but when I go to back up my RAID (even unraid isn't a replacement for offsite backup), I get pretty annoyed at how slow my USB 2.0 external drive is; even running it on a USB 3.0 port in Turbo mode, I get about 42 MB/s and it sucks. I don't care what kind of data I have, I don't want those kinds of speeds locally.
> 
> ...



Yeah, 40 MB/s is not the fastest, but the 1TB cache is more than adequate for dumping large amounts of data to the server. It's set up so that it's transparent; you just use the array as normal and don't notice whether a file is on the cache or the array. I have it set up to write out the cache once a day, so it's never unprotected for long. 

The only time I write direct to the array is for backing up my PC and laptop, but I know this is no substitute for offsite backup. 

The reasons I went with unraid over hardware RAID or a Windows Home Server array:

1. Totally hardware independent. I just move my flash drive and hard drives to any PC and it boots and works.

2. Drives power up independently when data is accessed, so you don't have to power up 10 drives for a single file. It saves about 50 W when idle.

3. I can add drives without having to back up and rebuild the array.

If you're just a home user wanting a large storage pool with enough redundancy to survive a single drive failure, then it's ideal.


----------



## Deleted member 3 (Mar 17, 2013)

Why is RAID 10 the most suitable? I'd go for RAID 6 instead: similar redundancy and more space. Performance shouldn't be a huge issue with the right hardware.

Also, apart from the controller and drives, you need to think about housing. Putting 10 drives in a normal case without decent drive bays means you get messy cabling, and replacing a drive isn't as straightforward. 

I personally use an Axus YI-16SAEU4; you should be able to find similar hardware around the price of a decent hardware RAID controller, and you'll have a reliable device with all the features you could wish for.
A quick look on eBay gives me this, for example.


----------



## Aquinus (Mar 17, 2013)

Geofrancis said:


> 1. Totally hardware independent I just move my flash drive and hard drives to any PC and it boots and works



So is any other software raid solution such as Windows and Linux software RAID.


Geofrancis said:


> 2. Drives power up independently when data is accessed so you don't have to power up 10 drives for a single file So it saves about 50w when it's idle


It's also why your disk access speeds are incredibly slow. Also spinning up and spinning down hard drives on a regular basis puts more stress on the motor than keeping it running. Granted I don't know how often your drives wake up and go to sleep.



Geofrancis said:


> 3. I can add drives without having to backup and rebuild the array.


Got me there, though with RAID you don't need to back up to add a drive (even though you should). But in defense of this, my RAID can be degraded and rebuilding and it will still perform better than unraid; that's the point. 


Geofrancis said:


> If you're just a home user wanting a large storage pool with enough redundancy to survive a single drive failure, then it's ideal.


I agree, but software RAID (real striping RAID) can offer better performance and equal if not better reliability.


----------



## Geofrancis (Mar 17, 2013)

Aquinus said:


> So is any other software raid solution such as Windows and Linux software RAID.



Let me clarify: I don't have to install unraid; it runs live from the flash drive, and all it ever writes is the config file when you set it up. So I don't have to set it up when I migrate hardware; I just plug and play.



Aquinus said:


> It's also why your disk access speeds are incredibly slow. Also spinning up and spinning down hard drives on a regular basis puts more stress on the motor than keeping it running. Granted I don't know how often your drives wake up and go to sleep.



They are set to power down after 1 hour, so for most of the day they are off. I have the filenames buffered so it doesn't have to power on drives when I'm browsing for something; only if I open a file will it power on the drive. So most of my drives are off apart from an hour or 2 a day when a few might come on. The cache drive also keeps drives powered down by buffering all writes to the array and moving everything at once.



Aquinus said:


> Got me there, but RAID you don't need to backup to add a drive (even though you should,) but in defense of this, my RAID can be degraded and rebuilding and it will still perform better than unraid, that's the point.



Only as long as your controller supports online capacity expansion, which no software RAID does.



Aquinus said:


> I agree but software RAID (real striping RAID,) can offer better performance and equal if not better reliability.



Here is my problem with striping your storage array: if you lose too many drives, you lose everything, with zero chance of getting anything from the remaining drives. 

I can take any drive from my array and plug it into any Linux computer and copy the files from it, even the cache. So I can recover my data far easier than with any form of striped RAID setup.


----------



## Deleted member 3 (Mar 18, 2013)

Geofrancis said:


> Only as long as your controller supports online capacity expansion, which no software RAID does.



mdadm does, as did the raidcore stack. As for the latter, mentioning it made me wonder what happened with it, seems it's offered now under the name assuredVRA. I should play with it.


----------



## Aquinus (Mar 18, 2013)

DanTheBanjoman said:


> mdadm does, as did the raidcore stack. As for the latter, mentioning it made me wonder what happened with it, seems it's offered now under the name assuredVRA. I should play with it.



mdadm also stores RAID information on the RAID drives rather than in a config. So if you move the drives to another *nix machine you can migrate your RAID. It's also pretty quick for software RAID too.



Geofrancis said:


> Let me clarify: I don't have to install unraid; it runs live from the flash drive, and all it ever writes is the config file when you set it up. So I don't have to set it up when I migrate hardware; I just plug and play.



I hope you don't migrate 10 drives to different hardware often. 
Software is like hardware, though: like works with like. If you're running a RAID off a particular LSI card, any LSI card like it will take it in. Same with Intel's RST, mdadm, you name it. It breaks when you change the controller or software you're using. So if you didn't want to use unraid anymore, or if the project died and stopped getting updated, it wouldn't be easy to migrate to another system. Just like any other method of redundancy.


----------



## Geofrancis (Mar 18, 2013)

Aquinus said:


> I hope you don't migrate 10 drives to different hardware often.
> Software is like hardware, though: like works with like. If you're running a RAID off a particular LSI card, any LSI card like it will take it in. Same with Intel's RST, mdadm, you name it. It breaks when you change the controller or software you're using. So if you didn't want to use unraid anymore, or if the project died and stopped getting updated, it wouldn't be easy to migrate to another system. Just like any other method of redundancy.



You're missing the point I am trying to make: I can take any single drive from my array and copy everything off it, just by copying and pasting, while it's plugged into any Linux computer.

I don't need the unraid OS to recover files, as each file is all in one piece on one drive; the files are just spread over the drives. So if I pulled one of my hard drives and looked at the folder structure, it would look just like the array on one disk, but the folders would only contain the files designated to that particular drive. 

So when I rebuild the array after adding a disk, I am not rebuilding the array but the parity disk, so even if I pushed the reset button halfway through a rebuild it would just reboot and start rebuilding the parity again with no loss.

I don't know of any RAID controller that can survive a total power loss during an online capacity expansion process.


----------



## Easy Rhino (Mar 18, 2013)

Geofrancis said:


> so even if I pushed the reset button half way through a rebuild it would just reboot and start rebuilding the parity again with no loss.
> 
> I don't know of any raid controller that can survive a total power loss during online capacity expansion process.



Why would you hit the reset button? If you are hitting the reset button on an enterprise server running RAID then you should be fired.


----------



## Geofrancis (Mar 18, 2013)

Easy Rhino said:


> why would you hit the reset button ? if you are hitting the reset button on an enterprise server running RAID then you should be fired.



Obviously I wouldn't do that my point is my data would be fine if something happened during the process like a power cut.


----------



## Easy Rhino (Mar 18, 2013)

Geofrancis said:


> Obviously I wouldn't do that my point is my data would be fine if something happened during the process like a power cut.



OK. But that seems like a pointless thing to say.


----------



## vawrvawerawe (Mar 18, 2013)

Geofrancis said:


> Is the storage for manipulating large files or large data stores, or is it for media storage and playback?
> 
> If you're using large files, like for virtual machines or a massive database, then go with RAID on a very expensive controller; this is what it was designed for: redundancy and performance.
> 
> ...



The drives are primarily for large data stores. Two of the drives play media frequently (like video) but most are mainly for storage and accessed relatively infrequently (maybe a few times per day but not continuously like media files).


----------



## xvi (Mar 18, 2013)

Going to try to steer this back on topic.

While it _is_ a lot of data, it's not quite irreplaceable data. I think RAID 5 is more than enough redundancy.

Is there any reason why no one has suggested a NAS? Pair that with four inexpensive 4TB drives and it comes out to 16TB at $0.06 USD per gigabyte or 12TB of redundant storage at $0.08 USD per gigabyte ...*including* the NAS. The total cost comes out to $999.95 (free shipping) on the Egg and probably cheaper elsewhere. To save a bit, 3TB drives can be used for 12TB or 9TB redundant of storage.
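Rough sketch of the cost-per-gigabyte arithmetic above (the prices are this post's figures, not current ones, and the $999.95 total covering a 4-bay NAS plus four 4TB drives is taken from the post):

```python
# Total quoted for a NAS plus four 4 TB drives, per the post above.
nas_plus_drives_usd = 999.95
raw_tb, redundant_tb = 16, 12  # raw capacity vs usable capacity after parity

# Using 1 TB = 1000 GB, as drive makers do.
cost_per_gb_raw = nas_plus_drives_usd / (raw_tb * 1000)
cost_per_gb_redundant = nas_plus_drives_usd / (redundant_tb * 1000)

print(round(cost_per_gb_raw, 2))        # 0.06 USD per GB raw
print(round(cost_per_gb_redundant, 2))  # 0.08 USD per GB redundant
```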


----------



## Geofrancis (Mar 18, 2013)

vawrvawerawe said:


> The drives are primarily for large data stores. Two of the drives play media frequently (like video) but most are mainly for storage and accessed relatively infrequently (maybe a few times per day but not continuously like media files).



If you have big data stores then go for a hardware RAID card running RAID 0+1 if you only have a few disks, but if you have more than 4 I would go with RAID 5, or 6 if you can, as you won't lose as much space, due to using parity rather than 1:1 replication.


----------



## newtekie1 (Mar 18, 2013)

Geofrancis said:


> You're missing my point that RAID is not for long-term storage; it was designed for performance and uptime, not redundancy and capacity.



*R*edundant *A*rray of *I*ndependent *D*isks.

You sure RAID wasn't designed for redundancy?  Because it's right in the name.

RAID was not designed with performance in mind.  It was designed for redundancy originally.  And it wasn't designed for uptime either.  The only version of RAID that puts performance ahead of redundancy is RAID-0, but RAID-0 wasn't one of the original RAID levels, it was added later.  And originally, when a drive failed the array had to be taken out of service to replace the drive, and was out of service the entire time during the rebuild.  Uptime wasn't a concern either initially, it was all about redundancy.  Uptime, capacity, and speed all came later, pretty much in that order.



Geofrancis said:


> If you have big data stores then go for a hardware RAID card running RAID 0+1 if you only have a few disks, but if you have more than 4 I would go with RAID 5, or 6 if you can, as you won't lose as much space, due to using parity rather than 1:1 replication.



Personally, once you have more than 2 drives I think any level of mirroring is pointless.  So I'd say with 3 or more drives go with RAID-5 at least.



xvi said:


> Going to try to steer this back on topic.
> 
> While it _is_ a lot of data, it's not quite irreplaceable data. I think RAID 5 is more than enough redundancy.
> 
> Is there any reason why no one has suggested a NAS? Pair that with four inexpensive 4TB drives and it comes out to 16TB at $0.06 USD per gigabyte or 12TB of redundant storage at $0.08 USD per gigabyte ...*including* the NAS. The total cost comes out to $999.95 (free shipping) on the Egg and probably cheaper elsewhere. To save a bit, 3TB drives can be used for 12TB or 9TB redundant of storage.



I'd grab one of these before going with a NAS.  Though I think the reason most didn't suggest a NAS is that he said he wants to use up to 10 drives, and 10-drive NAS devices are expensive.

But with the external RAID box, you can have up to 10 drives connected to one card, with two enclosures.  And all 10 drives can be set up in one big RAID-5 or 6 array.


----------



## Geofrancis (Mar 18, 2013)

newtekie1 said:


> *R*edundant *A*rray of *I*ndependent *D*isks.
> 
> You sure RAID wasn't designed for redundancy?  Because its right in the name.
> 
> RAID was not designed with performance in mind.  It was designed for redundancy originally.  And it wasn't designed for uptime either.  The only version of RAID that puts performance ahead of redundancy is RAID-0, but RAID-0 wasn't one of the original RAID levels, it was added later.  And originally, when a drive failed the array had to be taken out of service to replace the drive, and was out of service the entire time during the rebuild.  Uptime wasn't a concern either initially, it was all about redundancy.  Uptime, capacity, and speed all came later, pretty much in that order.



Sorry, it probably was not the best choice of words on my part. All I mean is that with no version of RAID (apart from RAID 1) can you recover data from an individual drive. If you lose 2 drives you cannot recover data from the rest of the drives either, as everything is striped between them.

Maximum chance of recovery and lowest power consumption were what I was looking for with my setup; it suits my use of dumping media to it and playing it back. Each 1TB drive can saturate my gigabit home network individually, so I did not need the extra performance that RAID 5 provides. So it's protected with a parity drive, and I can use the 1TB cache as a hot spare if I lose a drive. I just flush it, unmount the array, change it from cache to data, and hit rebuild.

If you're doing a business setup and have the cash for 12-port SAS controllers and a proper server, I would recommend doing that instead.


----------



## vawrvawerawe (Mar 29, 2013)

xvi said:


> Going to try to steer this back on topic.
> 
> While it _is_ a lot of data, it's not quite irreplaceable data. I think RAID 5 is more than enough redundancy.
> 
> Is there any reason why no one has suggested a NAS? Pair that with four inexpensive 4TB drives and it comes out to 16TB at $0.06 USD per gigabyte or 12TB of redundant storage at $0.08 USD per gigabyte ...*including* the NAS. The total cost comes out to $999.95 (free shipping) on the Egg and probably cheaper elsewhere. To save a bit, 3TB drives can be used for 12TB or 9TB redundant of storage.



Well, actually, those 4TB drives might be relatively inexpensive, but you can get 2TB for only $75, which is 4TB for $150, so multiple 2TB drives are still the cheapest option.

However, if 4TB eventually drops to the price of 2 x 2TB then certainly it will be better to get 4TB drives.

Personally I see no reason to buy more HDDs now, when I still have a couple TB left and HDD prices continue to fall regularly. I figure, by the time I need more HDD space, which might be a couple to a few months, prices will have gone even lower.

I don't need to buy a dedicated NAS, because that's one of the main reasons I built my desktop PC: to house more HDD space.



newtekie1 said:


> *R*edundant *A*rray of *I*ndependent *D*isks.
> 
> You sure RAID wasn't designed for redundancy?  Because its right in the name.
> 
> ...



Hmm, I'd be open to considering an external dedicated RAID box...


----------



## newtekie1 (Mar 29, 2013)

vawrvawerawe said:


> Hmm, I'd be open to considering an external dedicated RAID box...



That is what I would do, and in this case the RAID box comes with everything you need, including the card. And if you buy two you get a spare card; you can either keep it in case something happens to the first one, or sell it on eBay or something.


----------



## Geekoid (Mar 29, 2013)

Yes - a RAID box is the way to go. Personally, I use a dual-channel fibre connection and RAID 6 with hot spares. It takes 4 simultaneous disk failures before I'm in trouble. A rebuild takes less than 12 hours, so I'd have to have the other 3 failures within that time. Even a second disk failure has never happened during a rebuild window. Full backups only happen each weekend, as it takes quite a while to copy the whole 16TB. As you can tell, I'm all for keeping my data over disk performance.


----------



## Geofrancis (Mar 29, 2013)

I would love to have a little ITX box with an external hard drive rack, but it was not going to be cheap to build.


----------

