# +80TB NAS



## remi (Oct 3, 2017)

Hi everyone,

I would like to build a NAS for my documentary collection.

It's a long-term project, so I will be adding 10TB WD Red drives every two months or so.

I was thinking of going with Storage Spaces rather than RAID5/6 or FreeNAS.

What hardware would you recommend?

For the case I was thinking of the *Nanoxia Deep Silence 6 Anthracite (Rev. B)* tower - it has 16 HDD bays.

But I'm not sure what motherboard, CPU, power supply or RAM to use. Is ECC RAM necessary?


----------



## FreedomEclipse (Oct 3, 2017)

What is your budget? That's the most important question here.


----------



## remi (Oct 3, 2017)

Without the HDDs, I'm hoping to spend less than $770.
I don't have the money to build it all at once; that's why I said it's a long-term project.


----------



## FreedomEclipse (Oct 3, 2017)

In theory, even the lowest-tier AMD or Intel system can run a NAS, so long as you're not hugely demanding. The problem is that a RAID card to plug in loads of hard drives can cost in excess of $200, and go as far as $700 if not more.

So that's where I suggest you look. Grab a dirt-cheap mATX or ITX board, the cheapest AMD or Intel dual or quad core for that platform (or something with a little more grunt if you want), and a RAID card depending on the number of drives you wish to hook up. It's only a SOHO NAS; you don't really need to expect enterprise levels of performance from it.

If you don't want to bother with any of this, a QNAP TS-453A-4G-US is a good option for $549, and you can tack on a QNAP Storage Expansion Enclosure UX-500P at a later date if you need more storage space.

Alternatively, search for old enterprise rackmount servers on eBay. Some businesses sell their old units for a decent price, and they come with plenty of hard drive bays to expand into.

I do believe the QNAP has many great features, though. You can plug it into your TV via HDMI and watch your documentaries from it without having to stream them across the network. Might be worth a thought.


----------



## remi (Oct 3, 2017)

I'm inclined to go with Storage Spaces, but do I need a fast CPU? Because I read that "all storage-related operations are offloaded to the CPU."

And for the mobo, which ones have 16 SATA ports? Also, how much RAM do I need?

This would be used only to back up my files and to upload and download from it a few hours a day via USB 3.0.


----------



## Toothless (Oct 3, 2017)

You're gonna want to triple that budget for anything decent sir.


----------



## FreedomEclipse (Oct 3, 2017)

remi said:


> I'm inclined to go with Storage Spaces, but do I need a fast CPU? Because I read that "all storage-related operations are offloaded to the CPU."



Just upgrade the CPU to a higher tier based on your budget, then. With Storage Spaces, the hardware recommendation seems to be just that your system can run "Windows Server 2012 R2, Windows Server 2012, Windows 8.1, or Windows 8". Go for a mid-tier i3 if you wish, but it will cost more out of your budget. 4GB of RAM is fine.



remi said:


> And for the mobo, which ones have 16 SATA ports?



There are no motherboards with 16 SATA ports on the consumer market, AFAIK. The only ways to get there are an add-in RAID card, or rackmount enterprise storage, whose prices can vary on the pre-owned market. Add-in cards cost around $900 brand new or $300 pre-owned.


----------



## remi (Oct 3, 2017)

Toothless said:


> You're gonna want to triple that budget for anything decent sir.


Case - $280 (*Nanoxia Deep Silence 6*)
Mobo - ? not sure what to go with
i3 CPU - $140
4GB RAM - $50
850W power supply - $120

*FreedomEclipse*, the Nanoxia case has 16 HDD bays, so isn't there a cable to expand 10 SATA ports on a mobo to 16? $900 is way too expensive, plus I don't want a RAID controller.


----------



## Toothless (Oct 3, 2017)

I wouldn't run just 4GB of RAM and a chip without four physical cores. I've seen Explorer eat 3GB+ doing transfers, along with some decent CPU usage. I'd say at least an i5 with 2x4GB, but then again the board might be an issue with that many drives.


----------



## remi (Oct 3, 2017)

I'm saying 16, but 10 will be fine too. I'm just being overly optimistic about the future. Right now there will only be 3 drives, plus 1 more each month.


----------



## FreedomEclipse (Oct 3, 2017)

remi said:


> *FreedomEclipse*, the Nanoxia case has 16 HDD bays, so isn't there a cable to expand 10 SATA ports on a mobo to 16? $900 is way too expensive, plus I don't want a RAID controller.



That's not how SATA works, sadly. It's not like IDE back in the old days, when you could plug two hard drives onto the same cable.

Either way you look at it, what you want is going to cost money. I've given you various options and choices, and even pointed you towards eBay for pre-owned rackmount units that can give you what you want in a nutshell, but it appears that you are unwilling to compromise on any of the options I have given you to fit your budget.

I wish you the best in finding what's right for you, but I will be retiring from this thread.

Maybe someone else will step in and take over.

The only alternative that will work, though it will significantly increase your budget, is an Asus workstation motherboard that comes with 6x SATA ports.


----------



## ERazer (Oct 3, 2017)

I recommend checking out Unraid. You can start with a cheap build and slowly upgrade the system, or go all out.

I started with old computer parts, then eventually replaced the mobo, CPU and RAM, and even added a RAID card, without recreating my whole data set.

Buy a cheap mobo/CPU/RAM combo, then grab two 10TB drives, one for parity and one for storage. Then keep adding one more 10TB drive when you get the money, or upgrade the CPU/mobo/RAM.


----------



## BaRRoS (Oct 3, 2017)

The ASRock X99 Extreme series comes with 10 SATA ports, with the top model Extreme11 having 18 (10 SATA + 8 SAS)!


----------



## FreedomEclipse (Oct 3, 2017)

BaRRoS said:


> The ASRock X99 Extreme series comes with 10 SATA ports, with the top model Extreme11 having 18 (10 SATA + 8 SAS)!



And that's half the OP's budget gone, and he doesn't have RAM or a CPU yet.


----------



## remi (Oct 3, 2017)

Case - $280 (*Nanoxia Deep Silence 6*)
Mobo - $256 ASRock X99 - Thanks, *BaRRoS*!!
i3 CPU - $140
4GB RAM - $50
850W power supply - $120

$846 - a bit over budget, but I can make it work.

*ERazer* - that's exactly what I was thinking! Thanks!

*FreedomEclipse* - it's not that I didn't take your advice into account, but I don't want a used rackmount unit whose parts can die on me at any time. I prefer to do a DIY build with a great case and a few low-tier parts that I can upgrade in the future.


----------



## Athlon2K15 (Oct 3, 2017)

I'd budget for 12TB drives; they are right around the corner.


----------



## remi (Oct 3, 2017)

> one for parity and one for storage



In Storage Spaces, don't I need at least 3 drives? 1 for parity and 2 for storage?



Athlon2K15 said:


> I'd budget for 12TB drives; they are right around the corner.


Do you think they will be more cost-effective than 10TB? At launch I don't think they will...


----------



## jagjitnatt (Oct 3, 2017)

If you are technically competent, use something like this.
It's buggy at times, and it uses port multiplication, so you wouldn't get super-fast transfer speeds (around 40-60 MB/s). Since macOS doesn't support port multiplication, you can use it with Linux and Windows only.

The drivers are buggy too, but Windows 10 works great with it, and it gets the job done. You can buy a cheap mobo like this with an i3.

I wouldn't spend too much on NAS hardware except for the hard drives.


----------



## remi (Oct 3, 2017)

jagjitnatt, I think your link is broken.


----------



## ERazer (Oct 3, 2017)

> In Storage Spaces, don't I need at least 3 drives? 1 for parity and 2 for storage?



In simple terms:

With Unraid you can have one or two parity drives; it's up to you. It's not hardware-based RAID, but you can set it up that way if you want.

A 10TB parity drive covers any storage drive no larger than 10TB, so you can have a bunch of smaller drives: one 10TB parity + 2TB + 6TB + 8TB + 10TB = 26TB of storage. Later on you can replace the 2TB with a 10TB; you just need to rebuild the data.

It's an option for you to consider, and it might not be the best one for your needs, but I like Unraid for its flexibility: you can keep it simple at first, then go all out later with server-grade parts without much hassle.

Try the 30-day trial.
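The capacity math above is simple to sanity-check. A minimal sketch (plain arithmetic for illustration, not Unraid's actual logic):

```python
def unraid_capacity(parity_tb, data_tb):
    """Usable capacity of an Unraid-style array: the single parity
    drive must be at least as large as every data drive; usable
    space is simply the sum of the data drives."""
    if any(d > parity_tb for d in data_tb):
        raise ValueError("parity drive must be >= largest data drive")
    return sum(data_tb)

# The example above: one 10TB parity drive protecting 2+6+8+10 TB
print(unraid_capacity(10, [2, 6, 8, 10]))  # -> 26
```

Swapping the 2TB drive for a 10TB one later just changes the sum; the parity drive stays valid as long as it remains the largest.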


----------



## jagjitnatt (Oct 3, 2017)

remi said:


> jagjitnatt, I think your link is broken.


For some reason, I can open them just fine.
Here try these: 
https://www.newegg.com/Product/Product.aspx?Item=9SIA3914PS7761
https://www.newegg.com/Product/Product.aspx?Item=N82E16813157748


----------



## R00kie (Oct 3, 2017)

remi said:


> Mobo - $256 ASRock X99 - Thanks, *BaRRoS*!!
> i3 CPU - $140


That won't work, that CPU is for a different platform.


----------



## BaRRoS (Oct 3, 2017)

remi said:


> Case - $280 (*Nanoxia Deep Silence 6*)
> Mobo - $256 ASRock X99 - Thanks, *BaRRoS*!!
> i3 CPU - $140





gdallsk said:


> That won't work, that CPU is for a different platform.



gdallsk is right. For LGA1151, look at these:

https://www.asrock.com/mb/Intel/Z270 Taichi/index.pt.asp
https://www.asrock.com/mb/Intel/Z170 OC Formula/index.pt.asp
https://www.asrock.com/mb/Intel/Z170 Extreme7+/index.pt.asp


----------



## thebluebumblebee (Oct 3, 2017)

remi said:


> Case - $280 (*Nanoxia Deep Silence 6*)


You might want to consider a Fractal Design Define XL R2. How much would that be for you? 8 available 3.5" drive bays, with 4 more 5.25" bays that could be converted.


remi said:


> This would be used only to back up my files and to upload and download from it a few hours a day via USB 3.0.


USB?

OP: the requirements of a file server are quite different from those of a desktop computer. IMHO, most of the advice that you have gotten in this thread applies to desktop PCs, not a FS. I do not feel qualified to answer your question, and am myself struggling with setting up my own FS with just a couple of drives. Also, Intel made Thunderbolt royalty-free, so I'm expecting an explosion of TB external devices, such as: https://www.techpowerup.com/237539/qnap-rolls-out-quad-core-4-bay-ts-453bt3-thunderbolt-3-nas


----------



## ERazer (Oct 3, 2017)

Personally, I think the OP should do more research on what software his FS is gonna run.

For example, FreeNAS uses ZFS, which relies on tons of memory (a common rule of thumb is 1GB of RAM per 1TB of storage), and ECC is recommended.

Hardware will fail, no doubt; the question is how well your software copes with it. Is it easy to rebuild the data? If the RAID card fails, what happens to your data? It's easy to slap hardware together without a second thought about what you're going to run on it.
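To put that rule of thumb into numbers (the 1GB-per-TB figure is community guidance for ZFS, not a hard requirement, and the 8GB baseline is an assumption based on FreeNAS's commonly cited minimum):

```python
def zfs_ram_estimate_gb(storage_tb, base_gb=8):
    """Rough ZFS RAM sizing per the common community rule of thumb:
    a baseline for the OS and ARC, plus ~1GB of RAM per TB of pool
    storage. Guidance only, not a hard requirement."""
    return base_gb + storage_tb

# For the OP's eventual 80TB pool, this rule suggests on the
# order of 88GB of RAM - far beyond a 4GB budget build.
print(zfs_ram_estimate_gb(80))  # -> 88
```

That gap between the rule-of-thumb number and a $770 budget is exactly why a lighter solution may fit this build better.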


----------



## AhokZYashA (Oct 3, 2017)

For a NAS, you can buy used Xeon E3 v4/v5 processors that support ECC RAM, a supported motherboard, and some ECC RAM.

For SATA ports, I can suggest you get some of these:
https://www.newegg.com/Product/Prod...re=sata_expansion_card-_-15-158-365-_-Product

It's an expansion card with 4 SATA ports that plugs into a PCI-e slot.

Or you can find used LSI HBAs on eBay for quite cheap.


----------



## remi (Oct 3, 2017)

ERazer said:


> Personally, I think the OP should do more research on what software his FS is gonna run.



I looked at:
FreeNAS
unRAID
Storage Spaces

And the best one for my case is Storage Spaces, because:
- there is no need for RAID hardware
- it's easy to increase the size of the array
- you can add different types of drives

About the ECC RAM, I'm not sure I need it, and it would be too expensive, right?

And if I buy a mobo with 10 SATA ports, then I don't need a 4-port PCI Express 2.0 SATA III 6Gbps RAID controller card, which I think isn't recommended with Storage Spaces anyway.





AhokZYashA said:


> For a NAS,
> for SATA ports, I can suggest you get some of these:
> https://www.newegg.com/Product/Prod...re=sata_expansion_card-_-15-158-365-_-Product
> 
> It's an expansion card with 4 SATA ports that plugs into a PCI-e slot.


It says it's a RAID controller; would Storage Spaces work if I connect my HDDs through it?


----------



## ERazer (Oct 3, 2017)

ECC is always recommended if you have the budget for it, and it is a tad expensive.

It depends on the OS. For example, on unRAID/FreeNAS I use a RAID card that has been flashed to IT mode so it runs as a plain SATA expansion. I would check the Storage Spaces forums.


----------



## Static~Charge (Oct 3, 2017)

remi said:


> About the ECC RAM, I'm not sure I need it, and it would be too expensive, right?



If you get a parity error in RAM, it could corrupt your data and/or your storage configuration. So, whether you choose to get ECC RAM depends on how highly you value your data on the file server.



remi said:


> It says it's a RAID controller; would Storage Spaces work if I connect my HDDs through it?



If you look at the card's specifications, you'll see that it also supports JBOD (Just a Bunch Of Disks, meaning no RAID).


----------



## remi (Oct 3, 2017)

Of course I value my data, so can I use ECC RAM on the ASRock X99 Extreme4?
And what CPU do I need for that? A Xeon is out of my budget.

What power supply should I get for 10 or more drives?


----------



## Steevo (Oct 3, 2017)

My 640 card supports port multipliers up to 20 devices out of the 4 ports.


----------



## Static~Charge (Oct 3, 2017)

remi said:


> Of course I value my data, so can I use ECC RAM on the ASRock X99 Extreme4?
> 
> And what CPU do I need for that? A Xeon is out of my budget.



Did you look at the specifications for the ASRock X99 Extreme4?

*Memory*
- Supports DDR4 ECC, un-buffered memory/RDIMM with Intel Xeon processors E5 series in the LGA 2011-3 Socket

Most Intel desktop processors don't support ECC memory. You'll have to go with a Xeon to get that feature (i.e., no Celeron, Pentium, or Core i3 options).

Also, please note that the ASRock X99 Extreme4 is a Socket LGA 2011-3 board. This will limit your choice of processors. The least expensive one that I found in the current processor line-up was a Xeon E5-2603 V4, 1.7 GHz, 6 cores/6 threads, for $220. If you look through the motherboard's supported CPU list, you should be able to find a compatible, older processor for less money.


----------



## Toothless (Oct 3, 2017)

I don't think OP realizes that this project is going to cost a good amount of money yet.


----------



## remi (Oct 4, 2017)

Toothless said:


> I don't think OP realizes that this project is going to cost a good amount of money yet.


I agree.

But hey, a guy can dream, right?

*Static~Charge*, thanks so much for that info. $220 for a CPU is a bit more than I wanted to spend, but at least it's not $1000.

What if my RAM fails while it's creating the parity for a newly added drive? Do I lose all my data? Or with Storage Spaces do you only lose one drive?

BTW, this database is what I need the NAS for:


----------



## FordGT90Concept (Oct 4, 2017)

I'd recommend the RocketRAID 840A, but beware that it is not UEFI bootable. Put the OS on an M.2 SSD. You'll need to buy 4x miniSAS-to-4x-SATA breakout cables for it. It retails for $300 and the breakout cables can run $30 each, so you're looking at $420 just for RAID + cables. You're really not going to find a cheaper card than that to handle 16 drives.


Edit: The Nanoxia Deep Silence 6 only has room for 10 drives, 13 if some bays are converted (I definitely don't recommend doing that, because converted drives are more likely to fail from heat).

Does this CPU need to transcode?


----------



## remi (Oct 4, 2017)

FordGT90Concept said:


> I'd recommend the RocketRAID 840A, but beware that it is not UEFI bootable. Put the OS on an M.2 SSD. You'll need to buy 4x miniSAS-to-4x-SATA breakout cables for it. It retails for $300 and the breakout cables can run $30 each, so you're looking at $420 just for RAID + cables. You're really not going to find a cheaper card than that to handle 16 drives.



I don't want RAID; if the controller fails, I lose ALL my data.
I will use Storage Spaces.


FordGT90Concept said:


> Edit: The Nanoxia Deep Silence 6 only has room for 10 drives, 13 if some bays are converted (I definitely don't recommend doing that, because converted drives are more likely to fail from heat).



The case can hold up to 21 HDDs, and it has tons of fans, so overheating won't be a problem.


----------



## FordGT90Concept (Oct 4, 2017)

You don't *have* to use RAID. You can format the 16 drives individually if you so choose. That said, RAID cards tend to outlive their usefulness (I've got a 10-year-old RocketRAID 3400 in my server). Should one fail, just buy another Highpoint card and it should detect the existing configuration.

Storage Spaces is software RAID. RAID cards (like the 840A) have a dedicated processor and RAM to perform all RAID parity processing in real time without talking to the CPU. The RAID card monitors and recovers from ECC issues and automatically ejects bad drives from the array.


My server has two Rexus Panaflo fans blowing directly across the HDDs, and they're oriented perpendicular to the case, so it intakes from one side and exhausts out the other (not adding that heat to the CPU compartment).


----------



## remi (Oct 4, 2017)

You're saying that if the RAID controller fails, I won't lose the data?


----------



## FordGT90Concept (Oct 4, 2017)

The data is still on the drives. You just need another Highpoint card to read that data, and it will be like it never happened. The same thing applies to Intel software RAID: as long as the drives are plugged into another Intel chipset, it will detect the array like it was always there.

When you create a RAID, the first thing it writes to the drives is the RAID type, the ID of the RAID, and the serial numbers of its members. So long as the RAID card knows how to read that data, it will use it. A Highpoint RAID won't be recognized on an Adaptec card or Intel chipset, but it will be recognized by other Highpoint cards.

Highpoint is cheap for what you get (Adaptec is twice as expensive, for example). And remember, it's an investment (almost 1M-hour MTBF). You can easily move the PCIe card from one computer to another and, once the drivers are installed, it's all still there. You could even switch from Windows to Linux. You can't do that with Storage Spaces.


Are you intending to transcode on this system? That will dictate the rest of the system requirements. If not, you could get away with an Intel Atom, as long as the motherboard has an x8 slot for the RocketRAID card.

But yeah, it's not gonna happen for <$800. I think the minimum (assuming no transcoding) is going to be about $1000.


----------



## remi (Oct 4, 2017)

For now


FordGT90Concept said:


> Does this CPU need to transcode?



No, I just need to have all my data in one place; nothing complicated. That's why I want to keep things as simple as possible with Storage Spaces.


----------



## FordGT90Concept (Oct 4, 2017)

Here ya go:
https://www.newegg.com/Product/ProductList.aspx?Description=intel atom&Submit=ENE&IsNodeId=1&N=100007629 600456442

Big slot for the RocketRAID, 4x 10/100/1000 (82574L GbE), integrated graphics (ASPEED AST2400/AST2500), and an Atom processor (just be careful not to overload it on boot). The more expensive one is an octo-core. All you need is some DDR3 sticks and a SATA SSD for the OS, and you're golden.

Edit: Crap, the PCIe is Gen 2, not Gen 3. A bit of a bottleneck, but if you're plugging HDDs into the 840A, it should be fine.

No Linux drivers, though. It even supports ECC DDR3, because it's a server-grade Atom.


----------



## remi (Oct 4, 2017)

What I don't get is, why do I need ECC RAM? Can't I use 2 sticks of 4GB DDR4 non-ECC, and if one fails while parity is running, I have the other one?

NOTE that this is not going to be a true NAS; it's more like local data storage with 1- or 2-drive parity.

For example, this guy isn't using ECC or a server-grade CPU:

Plus, the mobo - the ASRock X99 Extreme4 - doesn't even support ECC RAM.


----------



## FordGT90Concept (Oct 4, 2017)

You really don't need ECC memory. I'm just saying you can use it if you want to.

ECC is basically an insurance policy against radical electrons.

If you're getting a processor that supports it and it only costs a few bucks more to get ECC, why not?


Edit: Did some more digging on that SuperMicro Atom...
https://ark.intel.com/products/77983/Intel-Atom-Processor-C2558-2M-Cache-2_40-GHz

The 4x GbE is integrated into the CPU... it can also function as 2x 2.5GbE.

They actually make 16-core Atoms with 4x 10GbE ports.
https://ark.intel.com/products/97927/Intel-Atom-Processor-C3958-16M-Cache-up-to-2_0-GHz

Review of that little beasty (no x8 slot, so it won't work for the 840A):
https://www.servethehome.com/superm...ore-intel-atom-c3955-mitx-motherboard-review/

iKVM looks interesting.


----------



## remi (Oct 4, 2017)

That's a relief, cause this media server will definitely not be operational 24/7. More like a few hours a day.

But in the worst-case scenario, can I lose 80TB of data just because I didn't opt for ECC RAM?

*FordGT90Concept*, the SuperMicro Atom seems OK (too few SATA 3 ports, though), but I'm not in the US and I can't find it in Romania.


----------



## FordGT90Concept (Oct 4, 2017)

Just look for motherboards with Atom C-series processors and you should be on the right path.

In all honesty, any motherboard with enough network bandwidth and a PCIe x8 slot will work if you have an 840A. Remember, 1 GbE ~= 110 MB/s, while your drives will each support 200+ MB/s. For that reason, dual and even quad 1 GbE looks very attractive to me.
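That arithmetic is easy to sanity-check. A quick sketch using those rough figures (both are approximations, not measurements):

```python
# Rough comparison of network vs. disk throughput, using the
# approximate figures above: ~110 MB/s usable per 1GbE link and
# ~200 MB/s sequential per modern high-capacity HDD.
GBE_LINK_MBPS = 110
DRIVE_MBPS = 200

def network_is_bottleneck(n_drives, n_links):
    """True if the drives' combined sequential throughput exceeds
    the aggregate bandwidth of the 1GbE links."""
    return n_drives * DRIVE_MBPS > n_links * GBE_LINK_MBPS

print(network_is_bottleneck(1, 1))  # -> True: one drive saturates a single 1GbE link
print(network_is_bottleneck(2, 4))  # -> False: quad-GbE keeps up with two drives
```

In other words, even a single modern drive outruns one gigabit link, which is why multiple aggregated links (or faster networking) matter long before CPU speed does.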



remi said:


> That's a relief, cause this media server will definitely not be operational 24/7. More like a few hours a day.
> 
> But in the worst-case scenario, can I lose 80TB of data just because I didn't opt for ECC RAM?


I seriously wouldn't worry about it. If you want to learn more about ECC, here you go:
https://www.pugetsystems.com/labs/articles/Advantages-of-ECC-Memory-520/

The best way to prevent ECC-like errors is to use RAID with write-through caching enabled. Data is verified as it is being written and, through parity information, confirmed (and fixed if necessary) as it is being read. Too many errors on a specific drive and it will be thrown out of the RAID. The 840A supports this, and if you have their RAID management software running, it can even send you an email saying that the drive with a specific serial # has been removed and needs to be replaced.

For the record, you can use Storage Spaces on top of hardware RAID. I'm sure Microsoft Azure does this.


----------



## londiste (Oct 4, 2017)

You need to figure out the solution you are going with, as well as any additional requirements.

Just some thoughts:
- Whether it's RAID, unRAID, FreeNAS, Storage Spaces or something else, the idea behind making sure data is not corrupted or lost remains the same: there must be some extra space for clone and/or parity information. Which particular implementation you choose is fairly irrelevant here. The recovery process may become equally tricky in each of them if something crucial is lost.
- Hardware(-assisted) RAID vs. software-based solutions usually becomes a question of performance. Software-based solutions will be heavier on the CPU, and possibly memory, when you need good throughput/IOPS or whichever performance measure. In your case - archiving - you probably will not have to worry about that aspect.
- RAM does not run in pairs; parity is exactly what ECC is for. ECC is an insurance policy against any memory errors. ZFS is claimed to be particularly vulnerable due to its heavy reliance on RAM, but most software solutions will have some level of RAM caching.
- Have you looked into the requirements for Storage Spaces - CPU, RAM, operating system (licensing)? For what you need, and with what you have in mind for a budget, a free solution with less complexity and lower hardware requirements is probably more than sufficient.
- For simple storage and archiving, the hardware does not have to be particularly fast when it comes to CPU, RAM, etc. Power, cooling and reliability are what to look for.
- Do you have any needs or wishes for the case itself - pretty, small, large, quiet, convenient, rackable, resistant to dust/temperature?

With what you have written about your needs and budget, I would probably go for FreeNAS, or one of its analogues, on an Atom-based server/NAS-oriented board. Something like these:
http://www.supermicro.com/products/motherboard/atom/A2SDi-2C-HLN4F.cfm
http://www.supermicro.com/products/motherboard/atom/A2SDi-8C-HLN4F.cfm
(Not 100% sure on their prices, but the lower end of the range should be within budget.)

I have a weakness for cases that I could stash somewhere in my living space and that would look good enough. For this kind of purpose, I particularly like this one: http://www.lian-li.com/en/dt_portfolio/pc-q26/
But that is a very subjective suggestion on my part, and not a cheap one.

Larger NAS appliances should also be within your budget. A couple of Synology's 8-disk NASes should be $800-900 if I remember correctly, and I am sure their competitors have something as well. These are extremely worry-free compared to a custom-built NAS, unless you enjoy tinkering and learning about the technical aspects of a NAS.


----------



## FordGT90Concept (Oct 4, 2017)

Keep in mind that 10+ TB drives will be using at least the 512e format. You may want to consider 4Kn:
https://en.wikipedia.org/wiki/Advanced_Format

http://www.seagate.com/enterprise-storage/hard-disk-drives/enterprise-capacity-3-5-hdd-helium/
http://www.hgst.com/products/hard-drives/ultrastar-he10 (4th-from-last digit in the model number: N = 4Kn, E = 512e)

That's going to limit your options in terms of hardware support, and also operating system (e.g., Windows 7 doesn't support 4Kn).


----------



## remi (Oct 4, 2017)

I didn't know that; ATM I'm on Win 7.

Honestly, every person I talk to recommends something different.
RAID, unRAID, FreeNAS or Storage Spaces?

I found out that there are many cases where Storage Spaces hit a bug and people lost all their data.
unRAID is Linux-based, and I do prefer Windows.
RAID is the most complicated/expensive, so pass...
FreeNAS requires tons of RAM and you can't add different-size drives, so pass...

*londiste* - for the case, I do love the Nanoxia Deep Silence 6; it can hold 21 HDDs!!!


----------



## FordGT90Concept (Oct 4, 2017)

RAID is not complicated. Seriously, it only has five settings when creating the array, and it's transparent after that.

*1) RAID type:*
RAID5 = minimum 3 drives, 1 can fail
RAID50 = RAID5 + RAID5 = minimum 6 drives, one can fail in each sub-array simultaneously without data loss
RAID6 = minimum 4 drives, 2 can fail

With RAID6 or RAID50 you would need 10 x 10 TB to get 80 TB.
With RAID5 you would need 9 x 10 TB to get 80 TB.

*2) Volume name:* e.g. "Storage"

*3) Sector size:* 512 (512e drives) or 4096 (4Kn drives).
Native 512 ends at 6TB drives, hence the change to 512e (e as in emulated; 4K underneath).

*4) Cache method:*
Write-through: confirms the write after the data hits the platters.
Write-back: confirms the write when the data is in the RAID card's RAM (not yet written, but queued to write).

*5) Creation method:*
Foreground: creates the RAID array right now. It can't be used until it is done.
Background: creates the RAID array in the background. Performance is slow until it is done, but it will accept read/write requests.


Once you get into the operating system, install the drivers for the card and you'll see the volume as unformatted space. Create a GPT partition on it, assign it a letter, and away you go.


FreeNAS, if I understand it correctly, is like Active Directory on Windows Server.
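The drive counts quoted for each RAID type follow from simple parity arithmetic. A rough sketch, assuming equal-size drives and ignoring formatted-capacity loss:

```python
def raid_usable_tb(level, n_drives, drive_tb):
    """Usable capacity for a few common RAID levels, assuming
    equal-size drives (parity overhead only)."""
    if level == "RAID5":
        if n_drives < 3:
            raise ValueError("RAID5 needs at least 3 drives")
        return (n_drives - 1) * drive_tb       # one drive's worth of parity
    if level == "RAID6":
        if n_drives < 4:
            raise ValueError("RAID6 needs at least 4 drives")
        return (n_drives - 2) * drive_tb       # two drives' worth of parity
    if level == "RAID50":
        if n_drives < 6 or n_drives % 2:
            raise ValueError("RAID50 needs an even count, minimum 6")
        return (n_drives - 2) * drive_tb       # one parity drive per sub-array
    raise ValueError("unknown RAID level")

# The figures above: 80 TB usable from 10TB drives
print(raid_usable_tb("RAID5", 9, 10))    # -> 80
print(raid_usable_tb("RAID6", 10, 10))   # -> 80
print(raid_usable_tb("RAID50", 10, 10))  # -> 80
```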


----------



## londiste (Oct 4, 2017)

remi, you definitely need to figure out what the end result should be.
Adding a new hard drive every month does not seem like something you would want to deal with when it comes to any kind of storage array.

How large is the amount of data you intend to store eventually?
How do you intend to access the data, how often?
What do you need in terms of backup/redundancy? 
Is this the only copy of data or do you intend to have a backup of entire thing?

Do you even need it to be NAS? You have mentioned only using it occasionally and over USB3, NAS by definition is accessed over network and normally always online.
Do you even need a storage array? Perhaps an archive? As the simplest example - a library of hard disks and USB dock?



FordGT90Concept said:


> FreeNAS, if I understand it correctly, is like Active Directory on Windows Server.


I believe the part of Windows Server with functionality similar to FreeNAS is called File and Storage Services.
Linux can be made to do a lot of things, including an LDAP directory, but that's not really built into FreeNAS, nor is it the purpose of it.


----------



## FordGT90Concept (Oct 4, 2017)

Still not sure where the 21 HDDs figure is coming from. 21 2.5" HDDs via 10 3.5"-to-2x2.5" adapters? If so, that won't work: 10+ TB drives are all 3.5" right now (and helium-filled).


Do you have access to a USB-C w/ Thunderbolt port? If so:
http://www.highpoint-tech.com/USA_new/series-rs6618a-overview.htm


----------



## remi (Oct 4, 2017)

RAID vs Storage Spaces:

In RAID you can't use different sizes of drives; in Storage Spaces you can.
In RAID you can't easily increase the array size; in Storage Spaces you can.

Sure, RAID is faster, but I don't need performance, just reliability.

If I'm wrong, let me know.

21-HDD case:


----------



## FordGT90Concept (Oct 4, 2017)

I'm not going to watch a 7-minute video. I've skimmed it repeatedly and did not see 21 3.5" drives. Please provide a time to look at.



remi said:


> In RAID you can't use different sizes of drives; in Storage Spaces you can.


That's JBOD: Just a Bunch Of Drives.



remi said:


> In RAID you can't easily increase the array size; in Storage Spaces you can.


Indeed, adding/removing drives means rebuilding the array.


----------



## remi (Oct 4, 2017)

londiste said:


> remi, you definitely need to figure out what the end result should be.


I know what I want, but I don't think I'm expressing it very well.

Right now I have tens of thousands of documentaries on 22 HDDs that sit on a shelf. I access the drives via a USB 2.0 dock from Vantec.

It's a nightmare to find anything on them. I would also like to browse the files, but since they are on 22 separate drives it would take days.

I just want to have all the data in one place. I access it a few times a day to upload or download from it. I don't want to watch the videos or stream them from this NAS/server/box/whatever.



> Adding a new hard drive every month does not seem like something you would want to deal with when it comes to any kind of storage array.


Really? Why? I thought that's what their purpose was.


> How large is the amount of data you intend to store eventually?



I hope it's gonna be 100TB+, but it all depends on how much money I can put into it.
It's basically a library of documentaries and ebooks.



> How do you intend to access the data, how often?


Once a day or once every 2 days, for a few hours, via USB 3.0.



> What do you need in terms of backup/redundancy?
> Is this the only copy of data or do you intend to have a backup of entire thing?



Yes, because I can't afford to make an 80+TB backup.



> Do you even need it to be NAS? You have mentioned only using it occasionally and over USB3, NAS by definition is accessed over network and normally always online.


No, just a local storage array; no need for a network.


> As the simplest example - a library of hard disks and USB dock?


Like the system I have ATM? It's incredibly impractical for so many files.



FordGT90Concept said:


> I'm not going to watch a 7-minute video. I've skimmed it repeatedly and did not see 21 3.5" drives. Please provide a time to look at.



Start at 3:23.

I couldn't link the video starting from that point; it's the forum's fault, not mine.


----------



## FordGT90Concept (Oct 4, 2017)

remi said:


> start at 3:23


Bad idea.  Stacking hard drives without dampening/bracing?  The vibrations will make them so error-prone they can't be used.

There are Lian Li cases that can house a lot of drives...sanely.

Edit: Holds 20 3.5" drives: http://www.lian-li.com/en/dt_portfolio/pc-d8000/


I suppose you could fill the motherboard with a lot of cheap SATA cards to supplement what is on the motherboard itself.  It will be cheaper than buying a $300 card that can handle 16 but it will also be messier.

Again, with the 840A (and similar RAID cards), you don't have to configure a RAID at all.  Drives not in a RAID appear as-is to the operating system, and you can plug and unplug them freely.  They still have the advantage of being able to notify you of hardware faults.


----------



## remi (Oct 4, 2017)

The Nanoxia also has vibration-dampening rubber on every HDD cage, and it's German made, so it should be built to high standards.

I can add the 840A SATA card after the 10 SATA 3 ports are full, right? That saves me some budget now.


----------



## Solaris17 (Oct 4, 2017)

I think there are two problems here.

The first is that the OP seems to think he knows what he wants but doesn't have the experience to back it up.

What he is asking for is 80TB on a storage array, which is attracting the attention of storage people who are used to architecting this properly, i.e. not with consumer-end equipment.

I think we need to understand that the OP is going to need to make some modifications and budgetary sacrifices to make this work the way he thinks it will. And the rest of us need to understand that the OP isn't attempting to run $400 LSI cards.

Personally, I think Storage Spaces isn't a good idea, but I architect my arrays ahead of time, so I understand why it might be the best bet here. I'm also not confident that the OP can configure a ZFS volume and sharing permissions in FreeNAS. At the same time, I understand the OP has a budget, but I think he needs to understand this isn't a desktop; it's not as simple as he thinks. With that many drives you're going to need to drill down into the PSU and make sure your rails (the 12V rail especially) can carry enough amperage to cold-start those drives, or that the BIOS at least supports staggered spin-up.
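To make the cold-start concern concrete, here is a rough back-of-the-envelope sketch. The per-drive current figures are illustrative assumptions, not specs; the real numbers come from the drive's datasheet (drive makers publish peak spin-up current per model):

```python
# Rough cold-start load estimate for a multi-drive array.
# SPINUP_12V_A and LOGIC_5V_A are assumed typical values for a 3.5"
# drive, NOT datasheet figures.
SPINUP_12V_A = 2.0   # assumed peak 12V draw per drive during spin-up
LOGIC_5V_A = 0.6     # assumed 5V logic draw per drive

def cold_start_amps(n_drives):
    """Amps needed on the 12V and 5V rails to spin up n_drives at once."""
    return n_drives * SPINUP_12V_A, n_drives * LOGIC_5V_A

amps_12v, amps_5v = cold_start_amps(16)
print(f"16 drives: ~{amps_12v:.0f}A on 12V and ~{amps_5v:.1f}A on 5V at cold start")
```

Under these assumptions a full 16-bay chassis asks for startup current a budget PSU's 12V rail may not deliver all at once, which is why staggered spin-up (starting drives a few at a time) is the usual alternative to a beefier supply.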

We need to think beyond raw SATA ports. There are people who get paid a lot of $$$$ to do this right, and if it were as easy as a $120 Seasonic and a sweet super-1337 ASRock board, everyone would already be doing it.

There needs to be leeway on both sides of the coin.



> The Nanoxia also has vibration dampening rubber on every hdd cage, it's german made, so it should be of the highest standards.



Assumptions make an ASS of U and ME


----------



## FordGT90Concept (Oct 4, 2017)

I edited this above but want to make sure it gets seen:
Holds 20 3.5" drives: http://www.lian-li.com/en/dt_portfolio/pc-d8000/


And to clarify: if you go this whole X99 motherboard + HDDs in case route, you're making a brand new computer.  You can't just plug some other computer into one of its USB3 ports and expect it to be able to transfer files.  Your other computer is a USB3 host, and so is this new computer you're building.  Two hosts means no clients, which is a problem.  The relationship naturally drifts to networking, which drifts into Network Attached Storage (NAS).


What you're really asking for is a 10+ drive enclosure with USB3.

Edit: I can't find any with 10 but StarTech offers a USB3/eSATA enclosure with 8 drives:
https://www.startech.com/HDD/Enclosures/8-bay-removable-hard-drive-enclosure~S358BU33ERM

Reviews are...concerning:
https://www.newegg.com/product/product.aspx?Item=N82E16817707367


----------



## remi (Oct 4, 2017)

Solaris17 said:


> I'm also not confident that OP can configure a ZFS volume and sharing permissions in FreeNAS.



Yeah, because you have to be a genius to do that.
FreeNAS is out of the question, because why should I buy 128GB of ECC RAM and an expensive CPU if I don't absolutely need it??

Let's say my budget is $1000 without any HDDs. Even then I still can't find a consensus about what would be the best option for me, since I don't need the most performance, but the most reliability.



FordGT90Concept said:


> What you're really asking for is a 10+ drive enclosure with USB3.


Nope, that's not what I want, because it's very limited in space and ventilation.
I'm OK with another PC. Regarding the networking, it's not something I had thought about, because yeah, what if one day I decide to transfer the data to another PC...


----------



## Solaris17 (Oct 4, 2017)

remi said:


> yeah cause you have to be a genius to do that.
> FreeNAS is out of the question cause why should i buy 128GB of ECC RAM and an expensive CPU if i dont absolutely need it??
> 
> Let's say that my budget is 1000$ without any HDDs, even then i still cant find a consensus about what would be the best option for me, since i dont need the most performance, but the most reliability.



You don't appear to be taking this well. I think I'll back out too. If you were worried about reliability, you would have a budget over $1k for an 80TB config; that's coming from a professional and a few other professionals.

Everyone is trying to work with you, but you seem convinced you can do this your way.

Your array WILL fail with the components you've picked, and I would be surprised if you could even keep the unit on under load given the power draw. I cannot wait for the thread asking why you shouldn't buy $130 10TB Seagate IronWolfs instead of $300 Hitachi or WD datacenter drives.

“To save money”


----------



## Jetster (Oct 4, 2017)

Most modern boards have 6 SATA ports. With one port for the system drive and 12TB data drives, you're talking 60TB. I can't imagine needing any more. If so, then buy a RAID card.


----------



## remi (Oct 4, 2017)

Solaris17 said:


> You dont appear to be taking this well. I think I’ll back out too. If your worried about reliability you would have a budget over $1k for an 80TB config, that’s coming from a professional and a few other professionals.
> 
> Everyone is trying to work with you but you seem conviced you can do this your way.
> 
> ...



I'm only gonna buy WD Red NAS drives, FYI.

If you're such a professional, how come you have no idea what I should use?
Ok, so what budget would you recommend, $50,000?


----------



## Jetster (Oct 4, 2017)

Oh, that's the wrong attitude


----------



## FordGT90Concept (Oct 4, 2017)

remi said:


> Nope, that's not what i want cause it's very limited in space and ventilation.
> I'm ok with another PC. Regarding the networking, it's not something i thought about, cause yeah what if one day i decide to transfer the data to another PC...


Limited space...because enclosures are semi-portable.
Limited ventilation...you said you'd only be using it a few hours at a time.  That StarTech does have active cooling too so...

You could always buy several of those enclosures and plug them into the same computer.

Advantage of this path is that you can use those 22 drives as-is.  Say you bought two of these enclosures, that covers 16 of those drives.  The last 6 you could install into your existing computer, maybe, or keep in your external dock.

There's no redundancy with this approach but, assuming you go with it, you're looking at about <$700 which matches your budget.


----------



## Solaris17 (Oct 4, 2017)

remi said:


> if you're such a profesional how come you have no idea what i should use ?



Because when I have almost 100TB of cold data I rarely access, I spin up my AWS S3 instance or I log into my Fibre Channel SAN, because I don't specialize in running priceless data on equipment I can buy at RadioShack.


----------



## remi (Oct 4, 2017)

Jetster said:


> Oh, that's the wrong attitude



Hey he started it. 



FordGT90Concept said:


> Limited space...because enclosures are semi-portable.
> Limited ventilation...you said you'd only be using it a few hours at a time.  That StarTech does have active cooling too so...
> 
> You could always buy several of those enclosures and plug them into the same computer.
> ...



If I'm gonna power up many drives at once, I need protection against HDD failure, so redundancy is a must. The initial budget can vary. I don't know what I would do if I lost all that data.

Plus, 80TB is a future prediction. Right now I will have ~30TB.


----------



## FordGT90Concept (Oct 4, 2017)

remi said:


> i need protection against hdd failure, so redundancy is a must.


RAID = Redundant Array of Independent (originally Inexpensive) Disks

RAID is usually the first line of defense against HDD failure.  Backup is the second line of defense.


----------



## thebluebumblebee (Oct 4, 2017)

remi said:


> NOTE that this is not going to be a true NAS, more like local data storage with 1 or 2 drive Parity.


Guys, doesn't the OP just need external storage?  Like this: http://www.stardom.com.tw/STARDOM2016/product.php?id=353


----------



## Fx (Oct 4, 2017)

Remi,

I have a solution for you. It is reliable, cheap, easy to expand and easy to recover in case of failure. However it doesn't use Storage Spaces; I would never trust that. This is a fine line that you are trying to walk and one that I walk myself. I have over 150TB of data and have never lost a byte.

You are going to need to use a mixture of technologies to pull this off, but these are stupid simple and don't require fancy sauce.

Stablebit is what you will use to create a pool. It basically adds disks together regardless of size. You need to create 2 pools. The second pool is what you will sync your first pool to so that you have a complete backup which doesn't require any parity. You could also opt to use a backup service which you would backup that entire machine for a low monthly cost for unlimited data.

No, you do not need ECC RAM for what you are trying to do so that will save some money. I need a little bit of time to come up with the list of hardware and to see what it totals to.

For starters though, you would begin with a Norco 4224 case (24 bays).


----------



## newtekie1 (Oct 4, 2017)

remi said:


> I dont want RAID, if the controler fails i lose ALL my data.



No you don't, at least not with most of the popular RAID manufacturers, including Highpoint.  I think you need to educate yourself a lot more before taking on this project.


----------



## lilhasselhoffer (Oct 4, 2017)

Maybe we should start from the beginning.

ECC is RAM related.  It stands for Error-Correcting Code, and it makes sure that the values stored in RAM are checked and corrected.  Files pass through RAM on their way through the CPU, so if those values change during a write you can corrupt data.  Because modern CPUs have integrated the memory controller, you need to select a CPU that supports ECC, buy ECC RAM, and be willing to pay for it.  It costs more and is generally slower, but it is the way to go when money is no object.  If all you are storing is media, then you likely don't need the expense.

RAID is how you hook up a hard drive array.  Here's the short of it: you need to choose what type of RAID array you want.  There are a bunch of types, each with ups and downs.  Here's the real issue: adding a new drive every month, as funds become available, isn't really supported (in anything but JBOD).  If I build a RAID 5 array with 4 disks and want to add a 5th, I have to create a new array.  That means exporting all data somewhere, destroying the old array, building a new one, and then importing the data back.  This isn't going to be a 20-minute drive pop-in and boot-up.

As far as selecting RAID, you're going to have to figure that out yourself.  The common types are:
0 - Striping - Data is striped to drives.  This is fastest, but any failure borks all data. - 2 disk minimum
1 - Mirroring - Data is written to each drive.  This is very costly on storage, because you functionally lose half of your storage space. - 2 disk minimum, must be even number of disks
5 - Stripe+Parity - One drive can be lost, and recreated.  You lose the storage of 1 drive. - 3 disk minimum
6 - Stripe+double parity - Two drives can be lost, and recreated.  You lose the storage of 2 drives. - 4 disk minimum

What you don't see is that all drives must be the same size.  If they aren't, every drive is treated as though it were the size of the smallest one.  Additionally, rebuilding arrays is a royal pain.  My 12 TB array (16 TB raw, RAID 5) took nearly a day to rebuild.  That was with a dedicated RAID card, not the Intel rebuild (I tried that back in the SATA 2 days with a 6 TB build, and it took more than 2 days).
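The capacity rules above can be sketched as a small helper (equal-size drives assumed, per the caveat about mixed sizes):

```python
# Usable capacity for the common RAID levels described above,
# given n identical drives of size_tb each.
def usable_tb(level, n, size_tb):
    if level == 0:                      # striping: full capacity, no redundancy
        return n * size_tb
    if level == 1:                      # mirroring: half the drives hold copies
        return (n // 2) * size_tb
    if level == 5:                      # single parity: lose one drive's worth
        if n < 3:
            raise ValueError("RAID 5 needs at least 3 drives")
        return (n - 1) * size_tb
    if level == 6:                      # double parity: lose two drives' worth
        if n < 4:
            raise ValueError("RAID 6 needs at least 4 drives")
        return (n - 2) * size_tb
    raise ValueError("unsupported RAID level")

print(usable_tb(5, 8, 10))  # eight 10TB drives in RAID 5 -> 70 usable
```

That RAID 5 figure matches the "70TB with a single drive used for redundancy" claim quoted elsewhere in the thread for an 8-bay unit.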



So, maybe you need to evaluate a few things.  Is absolute security a concern?  Because if it is, your budget is not in the right area code.  Is your goal a media server that might be better served by one of the NAS solutions on the market?  Maybe this is all still a little bit new, and you should do some reading and decide on something a bit more permanent.

I'd suggest that a NAS that can do what you are looking for is in the $2419 range on Newegg right now. 
https://www.newegg.com/Product/Prod...TMATCH&Description=nas&ignorear=0&N=100158125 601286743 601299072 601299171 601299369 600418376&isNodeId=1
Your budget is less than a third of that.  You'll have to learn about Linux, or buy an OS.  You'll have to figure out how to share over your network.  This isn't exactly something you slap together in an hour and start flying with.  I think an evaluation of priorities and skills is in order.  From personal experience, Linux is a mess the first time a person tries to get it working.  A NAS wraps everything up in a bow.  By the time you buy everything (the cards, the OS, etc.) you'll probably spiral up to the $2000 mark.  You pay an extra $420 for a real warranty, developed software, and convenience.  In my experience, that is worth it when you want something that just works.


----------



## newtekie1 (Oct 4, 2017)

remi said:


> in RAID you cant use different sizes of drives, in Storage Spaces you can



I'm going to tell you right now: while you can use different size drives in Storage Spaces, you shouldn't.  If you have resiliency enabled with Storage Spaces and use different size drives, the efficiency goes downhill real fast.

The other problem is that it is really bad at telling you how much space you actually have after it takes resiliency into account.  To give you an example, I set up an array with a 5TB, 4TB, 3TB, and 2TB drive.  The total usable space after one-drive resiliency was only 8TB.



remi said:


> in RAID you cant easily increase the array size, in Storage Spaces you can



It is pretty easy to increase the array size with the Highpoint cards (and most other RAID cards).  You just connect the new drive to the RAID card, go into HighPoint RAID Management inside Windows, select the OCE (Online Capacity Expansion) option, select the drive you want to add to the array, and hit finish.

At this point, you have two options: you can access the new space immediately, but the array runs degraded (no redundancy) until it is rebuilt onto the new drive, or you can wait until the array is rebuilt onto the new drive before accessing the space (which maintains redundancy at all times).  Storage Spaces gives you the same options when you add a drive to the array.



FordGT90Concept said:


> Indeed, adding/removing drives means rebuilding the array.



It does on Storage Spaces as well though.


----------



## remi (Oct 4, 2017)

lilhasselhoffer said:


> adding a new drive every month, as funds become available, isn't really supported (in anything but JBOD).  If I build a RAID 5 array with 4 disks, and want to add a 5th, I have to create a new array.  That means exporting all data somewhere, destroying the old array, building a new one, and then importing the data back.


Mate, google Storage Spaces: you can add any new drive easily, and rebuilding the array isn't done from zero. It's a million times simpler.

*newtekie1 *are you referring to parity resiliency? I did not know that performance drops if the drives are different sizes, but I guess I could use HDDs that are all the same size.
Honestly, I still don't see a drawback to this software RAID (Storage Spaces). Although I do agree that I need to do more research.
Any advice is very appreciated.

*Fx* I'm very curious about your solution, but please don't be offended if I don't like the Norco 4224 case, because I really love the Nanoxia case design.


----------



## ERazer (Oct 4, 2017)

I agree the OP should do more research; there are so many things to consider.

Does the system support staggered HDD spin-up? Does it spin down the HDDs when not in use to minimize wear? Cache storage for fast file transfers? How many failed drives can your parity setup tolerate, i.e. how do you want to set up your RAID? How easily can you upgrade the server with minimal downtime? Heck, I could have my CPU/mobo/RAM die on me and have my system back up in 2-3 hrs.

Personally, if I'm building another PC/server running 24/7, it had better be doing something else besides NAS duty, like VMs, torrents, VPN, media server, game server, etc.


----------



## Fx (Oct 4, 2017)

remi said:


> *newtekie1 *are you referring to Parity resiliency ? i did not know that performance drops if the drives are different size, but i guess i could use all of the hdds in the same size.
> Honestly i still dont see a drawback of this software RAID (storage spaces). Although i do agree that i need to do more research.
> Any advice is very appreciated.
> 
> *Fx* i'm very curious of your solution, but please dont be offended if i dont like the Norco 4224 case, cause i really love the Nanoxia case design



I am not offended at all. I'm merely suggesting hardware which has a lot of value. If you want fewer bays and subjectively better looks, that is on you. I just look at rack cases when I'm thinking of storage, because I like lots of room for expansion. If you had a bigger budget, I would actually suggest a 24-bay Supermicro case. That's what you use when you're giving storage serious consideration without cutting any corners.


----------



## remi (Oct 4, 2017)

Cache storage, adding a second redundancy drive, SATA cards for more than 10 SATA ports, etc. are all things that will be future upgrades. I want to start with the bare bones.

But please bear in mind, this is not for commercial applications or networking; it's just for my personal video library. It really shouldn't be that complicated.


----------



## Fx (Oct 4, 2017)

I hear ya, my first foray into storage also began with a tower, because that was all I knew and I wanted it to look good and be quiet. I used a Fractal Design Define R4. It worked great until I quickly ran out of room for expansion. That's when I moved on to a Norco 4224, and then eventually to the Supermicro that I have now:

https://www.amazon.com/gp/product/B00C8H17LY/?tag=tec06d-20

You eventually get to a point where you have to design strictly for storage requirements and appearance comes second.


----------



## newtekie1 (Oct 4, 2017)

remi said:


> *newtekie1 *are you referring to Parity resiliency ? i did not know that performance drops if the drives are different size, but i guess i could use all of the hdds in the same size.
> Honestly i still dont see a drawback of this software RAID (storage spaces). Although i do agree that i need to do more research.
> Any advice is very appreciated.



Yes, I'm talking about parity resiliency.

Storage Spaces is poorly implemented. It is a pain in the ass to set up properly, and a lot of the settings aren't clear.



Fx said:


> Supermicro that I have now



I have the 12 bay version of that!  I agree, once you get into storage, you realize cases should be about function, not form.


----------



## remi (Oct 4, 2017)

Fx said:


> I hear ya, my first foray into storage also began with a tower because that was all I knew and I wanted it to look good and be quiet. I used a Fractal Design Define R4. It worked great until I quickly ran out room for expansion. This is when I moved onto a Norco 4224, and then eventually the Supermicro that I have now:
> 
> https://www.amazon.com/gp/product/B00C8H17LY/?tag=tec06d-20
> 
> You eventually get to a point where you have to design strictly for storage requirements and appearance comes second.



I hope one day I'll get to where you are. 150TB! Wow! What do you store on that?



newtekie1 said:


> It is a pain in the ass to set up properly.



From what I've seen, setting it up is super easy, but I don't know what properly setting it up entails.


----------



## thebluebumblebee (Oct 4, 2017)

@remi , you want something as simple as an external drive (your comment about connecting via USB), but the capacity that you need pushes you into another stratum.  If you want to stay with external storage, and not a NAS (if you build a PC like you've talked about, you're building a NAS), then look into something like the SANS DIGITAL TowerRAID TR8UM6G, an 8-bay SATA to eSATA and USB 3.0 JBOD enclosure.  Moving upscale a little, and one that I would recommend, the HighPoint RocketStor 6418TS 8-bay tower enclosure would increase speed, and I think a RAID controller is built in.  (Both of those say they have a 64 TB capacity, but I think that is just because the description was written before the 10 TB and now 12 TB drives were out.)

The problem, though, is your stated 80 TB capacity.  If you use RAID 5 (really, you should, trust me) with 10 TB HDDs and add drives in groups of three, you could only use 6 bays and you would be limited to 40 TB.  (I saw that Stardom said: "and under the RAID 5 mode, a total of *70TB* in volume for storage with a *single drive used for redundancy*"; maybe someone can enlighten me on that.)

Don't be afraid of RAID.  Really, it's not that big of a deal.



----------



## newtekie1 (Oct 4, 2017)

remi said:


> from what i've seen, setting it up is super easy, but i dont know what properly setting it up entails.



They do some really stupid stuff that can put your data at risk.

When you set it up, you pick the storage pool size.  Except you can pick a size bigger than your actual storage allows.  So, for example, if you only have 3 10TB drives and set up Storage Spaces, you can select a pool size of 80TB right off the bat.  It will add the 3 drives to the pool, and an 80TB drive will show up in Windows!  However, with resiliency you only actually have 20TB of space available.  Here comes the problem: you start filling the new Storage Spaces drive with data.  You'd think that when you hit 20TB it would not allow you to add more data.  Wrong.  It actually lets you keep adding data, but your pool is no longer resilient.  It does warn you that resilience is compromised, but not obviously.  It doesn't pop a message up on the screen, and it doesn't put anything in the system tray.  Nope, none of that: it puts the warning in the Storage Spaces control panel, which you'll never see unless you go into Storage Spaces...
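The trap described here comes down to the provisioned (virtual) size being independent of the pool's real capacity. A toy illustration of those numbers, assuming single parity on equal-size drives:

```python
# Thin-provisioning trap: the virtual disk size Windows shows is not
# tied to what the drives can actually hold with resiliency.
# Numbers are the 3 x 10TB example from the post above.
drives_tb = [10, 10, 10]
provisioned_tb = 80                       # size you can pick for the pool anyway
usable = sum(drives_tb) - max(drives_tb)  # single parity: one drive's worth lost

print(usable)                    # 20
print(provisioned_tb > usable)   # True: writes past 20TB silently lose resiliency
```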

I used Storage Spaces for a year; it isn't worth using unless you have no other option.

Yes, it has some advantages, being free obviously being one.  Another is that if the computer dies, you can move the drives to another Windows computer and the storage pool _should_ be recognized.  I say "should" because there have been people who tried this and it didn't work.

However, an actual RAID controller has its advantages as well.  I'm speaking for the Highpoint cards here, because that is what I've used for my personal storage setups for years.  The first is that you can move the array to any computer, running any operating system, and the array will be recognized.  If the RAID controller fails, you DO NOT lose the array.  The array information is stored on the drives.  You can connect the drives to any Highpoint RAID controller and the array will be recognized and usable, assuming the controller supports the RAID mode the array is using (so you can't take a RAID6 array and connect it to a card that doesn't support RAID6).  But really, RAID6 is the only mode that not every Highpoint card supports.  If you use RAID5 or 1 or 0, every RAID card they make will work.


----------



## remi (Oct 4, 2017)

newtekie1 said:


> They do some really stupid stuff that can put your data as risk.
> 
> When you set it up, you pick the storage pool size.  Except, you can pick a size bigger than your actual storage allows.  So, or example, if you only have 3 10TB drives and set up storage spaces, you can select a pool size of 80TB right off that bat.  And it will add the 3 drives to the pool, and a 80TB drive will show up in windows!  However, with resiliency you only actually have 20TB of space available.  Here comes the problem.  You start filling the new Storage Spaces drive with data.  You'd think when you hit 20TB it would not allow you to add more data.  Wrong.  It actually lets you keep adding data, but you pool no longer is resilient.  It does warn you that resilience is compromised, but not obviously.  It doesn't pop a message up on the screen, it doesn't put something in the system tray warning you.  Nope, none of that, it put the warning in the Storage Spaces control panel, which you'll never see unless you go into Storage Spaces...
> 
> I used Storage Spaces for a year, it isn't worth using unless you have no other option.



That sounds like a huge bug, which I can't believe they haven't fixed by now. So you recommend making the pool size only as big as the drives added? And then increasing the pool size when you want to add another drive?
Did you use it in 2016?



thebluebumblebee said:


> @remi if you build a PC like you've talked about, you're building a NAS


that's the main plan atm


thebluebumblebee said:


> @remi
> Don't be afraid of RAID.  Really, it's not that big of a deal.


I'm not afraid, but I really don't like the disadvantages of hardware RAID, like increasing the size of the array and adding new drives. Which, like I said, will be a monthly thing.


----------



## newtekie1 (Oct 4, 2017)

remi said:


> That sounds like a huge bug, which i cant believe they haven't fixed by now.



It isn't a bug; it is actually how they have designed the system to work.  Resiliency was a second thought when they designed Storage Spaces, and because of that, it is extremely poorly implemented.



remi said:


> So you recommend making the pool size only as big as the drives added ?



That is the problem: even if you add 3 drives and make the pool 30TB, you can still get to the point where you don't have resiliency.  When you create the pool you have to account for the size available after resiliency, and they don't even make that size very clear.



remi said:


> And when you want to add another drive, then increase the pool size ?



Yes, and even when changing the pool size, they don't make the size after resiliency that clear.



remi said:


> Did you use it in 2016 ?



Yes, I've been using it in one form or another for years (Storage Spaces originally started as Drive Extender on Windows Home Server back in 2007).  I used Storage Spaces on Windows 10 up until about 2 months ago, when I finally ditched the last storage pool for a real RAID setup.



remi said:


> I'm not afraid, but i really dont like the disadvantages of a hardware RAID, like increasing the size of the array and adding new drives. Which like i said, will be a monthly thing.



Like I said before, it isn't that hard.  You don't even have to turn the machine off if you have hot-swap bays.  And thanks to OCE, you can still access and work with the data while the new drive is being added to the array.


----------



## remi (Oct 4, 2017)

Ok, so for hardware RAID 5, if I make a pool of 30TB and in 3 months it's full, can I increase the size of the pool and add a new 10TB drive?
Or do I need to save all the ~20TB of data offsite, delete the pool, make a new pool of 40TB, and recopy all ~20TB of data onto the new pool?


----------



## Fx (Oct 4, 2017)

Remi, this isn't cut and dried, but it gives you a lot of room for flexibility:

Case, your choice

PSU   $110
   -EVGA SuperNOVA 850 G3

Mobo, your choice. I recommend Socket 1151
   -Make sure it has plenty of PCIe connectivity and good ratings for reliability
   -Ensure it has an Intel NIC

HBA (HDD controller) $70
  - LSI SAS9211-8i (flashed to IT mode). This will present the drives at a low level to the operating system

Memory, your choice

CPU, your choice. I recommend a quad core with hyper-threading (HT)
   -Ensure it has high frequency around 3.7-4.0GHz

Stablebit DrivePool Software $30
   You can run this on Windows 10, Windows 7, Windows Server 2012 R2, Windows Server 2016

Add the drives. Create two pools. Use software to sync the data from one pool to the other. I use FreeFileSync and create a script which runs via Task Scheduler automatically each night. This is easy to create and documented by the software.

*Therefore, no complicated RAID information is stored and no parity calculation is performed. If a drive goes bad, you simply remove the drive, replace it, and restore the data from the other pool. Data on all other drives remains unaffected.*
I would like to add that I would highly recommend choosing Xeons and ECC memory. That way you can also repurpose the CPU, mobo, and memory into a higher-capacity case later. Again, it isn't necessary, but it is recommended.
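For what it's worth, the nightly pool-to-pool sync Fx describes doesn't have to use FreeFileSync; the same one-way mirror idea can be sketched in a few lines of Python. This is a simplified sketch, not his actual setup, and the pool paths are hypothetical placeholders:

```python
# One-way mirror sync from a primary pool to a backup pool, the same
# idea as the nightly FreeFileSync job described above.
import shutil
from pathlib import Path

def mirror(src: Path, dst: Path) -> None:
    """Copy anything new or newer from src into dst (never deletes)."""
    for item in src.rglob("*"):
        target = dst / item.relative_to(src)
        if item.is_dir():
            target.mkdir(parents=True, exist_ok=True)
        elif not target.exists() or item.stat().st_mtime > target.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(item, target)  # copy2 preserves timestamps

# e.g. mirror(Path("D:/PoolA"), Path("E:/PoolB")), run nightly via Task Scheduler
```

Like the FreeFileSync job, this never deletes from the backup pool, so an accidental deletion on the primary doesn't propagate on the next run.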


----------



## remi (Oct 4, 2017)

Fx said:


> Remi, this isn't cut n' dry, but it gives you a lot of room for flexibility
> 
> 
> Add the drives. Create two pools. Use software to sync the data from one pool to the other. Therefore no complicated raid information stored and no parity calculation is performed. If a drive goes bad, you simply remove the drive, replace it and restore the data from the other pool. *Data on all other drives remains unaffected.*


So 50% redundancy? Like a RAID 1? I might as well use RAID 0 and use the other 50% of drives as backup. 10-15% redundancy is something I'm ok with, but 50%?


----------



## Fx (Oct 4, 2017)

It is 100% redundancy, but not using RAID. It is simply another copy contained on the same system.

You don't have to have your own onsite redundancy, you could pay Backblaze $5/month for unlimited data. It is $4/month if you buy 2 years in advance.

You have to choose how simple, how cheap and how expensive you want it to be. If you want partial redundancy, the software allows you to create duplicate folders for critical stuff.

If you don't want the additional risk, more complicated maintenance, and working knowledge needed to manage, expand, and recover with RAID, then you will go with other alternatives. Someone mentioned unRAID, which performs this same kind of setup but uses its own OS. What I am showing you is a way to do it with Windows, which is what you are comfortable with and know. It also doesn't require the cost of a Server 2012/2016 license.


----------



## remi (Oct 4, 2017)

Sorry, yeah, I meant 100% redundancy. It sounds absurd to me, especially since I can't afford it. Otherwise, of course, it would be the best idea.

Cloud storage is even more absurd: at $5/TB/month, 70TB over 10 years would cost me $42,000, and after that I'm left with absolutely nothing.


----------



## newtekie1 (Oct 4, 2017)

remi said:


> ok, so for hardware RAID 5, if i make a pool of 30tb, and in 3 months it's full, can i increase the size of the pool and add a new 10tb drive ?



Yep.  I've done it multiple times with my Highpoint cards.  You can do it right inside Windows using the Highpoint RAID Management utility and OCE allows you to continue to work normally while the drive is added to the array.


----------



## remi (Oct 4, 2017)

newtekie1 said:


> Yep.  I've done it multiple times with my Highpoint cards.  You can do it right inside Windows using the Highpoint RAID Management utility and OCE allows you to continue to work normally while the drive is added to the array.


So you're absolutely sure I don't have to save all the ~20TB of data offsite, delete the pool, make a new pool of 40TB, and then recopy all the ~20TB of data onto the new pool?

Because more than one person said that this is the way it's done in RAID.


----------



## Fx (Oct 4, 2017)

remi said:


> Sorry, yeah, I meant 100% redundancy. Sounds absurd to me, especially since I can't afford it. Otherwise, of course, it would be the best idea.
> 
> Cloud storage is even more absurd: at $5/TB/month, for 70TB, in 10 years I would pay $42,000, and after that I'm left with absolutely nothing.



You misread. It's $5/month for *unlimited* data for a single computer, or $96 for 2 years.


----------



## thebluebumblebee (Oct 4, 2017)

You can design, build, configure and maintain your own system:
Or, use something like:
BTW, he started with one of those SansDigital external storage units I suggested above.


----------



## newtekie1 (Oct 4, 2017)

remi said:


> So you're absolutely sure I don't have to save all the ~20TB of data offsite, delete the pool, make a new pool of 40TB, and then recopy all the ~20TB of data onto the new pool?
> 
> Because more than one person said that this is the way it's done in RAID.



Yes, I'm 100% sure.  Look up OCE (Online Capacity Expansion).  I don't know who told you that, but they don't know what they're talking about, so stop listening to anything they say.


----------



## Iciclebear (Oct 4, 2017)

Hey Remi, here's my 2 cents,

I'm happily using Storage Spaces at home in a two-way mirror with a TerraMaster DAS, but that's not what I want to focus on.  When it comes to buying that much storage, you shouldn't be focusing on the cost of the enclosure but on the total cost of ownership, because the disks are the primary investment.  If you build the NAS yourself, once you add up the total cost of the drives + the hardware, you are only going to save 10-15% of the total purchase price at best.

All prices are from Amazon.

IronWolf 8TB drive: $260; IronWolf 10TB drive: $360

Let's take 3 high-bay-count NAS devices from Synology, Asustor, and QNAP:
Synology DS2415+ $1400 - 12 bays, 2.4GHz Atom quad-core
QNAP TS-1635 $1276 - 16 bays (4 for SSD caching), 1.7GHz ARM processor
Asustor AS6210T $1100 - 10 bays, 2.4GHz Atom quad-core

For the sake of argument we are gonna set up RAID 6 (unless you have a good fast backup I wouldn't RAID 5 drives that large; the rebuilds will be terrifying), so 2 drives are "wasted" on parity.

With 12 bays you can use 8TB drives instead of 10TB drives, at the expense of being able to upgrade further without an extender.

12x 8TB drives = $3,120
10x 10TB drives = $3,600
12x 10TB drives = $4,320

80TB RAID 6 cost:
(8TB) Synology: $1,400 + $3,120 = $4,520
(8TB) QNAP: $1,276 + $3,120 = $4,396
(10TB) Asustor: $1,100 + $3,600 = $4,700
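For anyone sanity-checking the totals, the arithmetic is simply enclosure + drives. This is a rough sketch using the Amazon prices quoted above; taxes, shipping, and price drift are ignored:

```python
# Total cost = enclosure + drives, using the prices quoted above.
options = {
    "Synology DS2415+ with 12x 8TB": 1400 + 12 * 260,
    "QNAP TS-1635 with 12x 8TB":     1276 + 12 * 260,
    "Asustor AS6210T with 10x 10TB": 1100 + 10 * 360,
}
for name, total in options.items():
    print(f"{name}: ${total}")
```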

You can build a machine that can do most of what these can do for less money, but to be honest, when it comes to expanding the RAID and constantly growing it like you are describing, I would look at the systems that are purpose-built for that kind of load and the warranties that come with them.

*The cheapest 8TB drives I've seen are the 8TB WD Reds you can get out of the WD MyBooks when Best Buy puts them on sale for $160-170.  Shucking drives out of enclosures is a bit of a lottery, though.  FreedomEclipse's earlier recommendation of the QNAP 4-bay + 5-bay expansion would leave you needing larger-than-10TB disks to make an 80TB RAID 6 work, but something like an 8-bay + an expansion could probably be done in the same cost range as the Asustor if you needed to save some $$ now.


----------



## Fx (Oct 4, 2017)

One thing that no one has mentioned is that if you ever have to rebuild a RAID 5/6 array (especially an old one) with high-capacity, low-RPM drives, the chance of failure is orders of magnitude higher than when doing it with RAID 10. You could lose the whole array.

What you also might not know is that *RAID is not a backup!* This is why I am a huge advocate of RAID 10, duplicated pools/arrays, and offsite/online backup.


----------



## Iciclebear (Oct 4, 2017)

Fx said:


> One thing that no one has mentioned is that if you ever have to rebuild a RAID 5/6 array (especially an old one) with high-capacity, low-RPM drives, the chance of failure is orders of magnitude higher than when doing it with RAID 10. You could lose the whole array.
> 
> What you also might not know is that *RAID is not a backup!* This is why I am a huge advocate of duplicated pools/arrays and RAID 10.



I mentioned the rebuild times a second ago and completely agree.  I could run parity in my Storage Spaces setup, but the parity performance is terrible.  Two-way, two-column is essentially a RAID 10 in performance, and I have a pair of 8TB MyBooks that I use for backups.  

Also, to be fair, you can lose the whole array with RAID 10 as well if the other drive in that mirror fails, but the rebuilds are generally a lot faster.  My first RAID 5 rebuild on consumer hardware was measured in days, not hours.


----------



## newtekie1 (Oct 4, 2017)

Fx said:


> What you also might not know is that *RAID is not a backup!*



So much THIS!  I personally have 2 RAID 5 arrays.  The main array backs up to the second array every night.

But having two 80TB arrays might be a little harder to do that with...


----------



## Fx (Oct 4, 2017)

newtekie1 said:


> So much THIS!  I personally have 2 RAID 5 arrays.  The main array backs up to the second array every night.
> 
> But having two 80TB arrays might be a little harder to do that with...



Good. At least you are doing due diligence. I have seen more people than not have single RAID 5/6 arrays and feel completely confident in the redundancy via parity.

This is why online services like Backblaze are feasible. $48/year is not a lot of money compared to spending 2-3k more for a duplicate system or additional set of drives. I myself do not use any form of online backup because I am fortunate to make enough money to have multiple servers.

It has its trade-offs though. I have a higher electric bill and much more heat/noise to manage.



Iciclebear said:


> I mentioned the rebuild times a second ago and completely agree.  I could run parity in my Storage Space setup but the parity performance is terrible.  Two way 2 column is essentially a raid 10 in performance and I have a pair of 8tb mybooks that I use for backups.
> 
> Also to be fair you can lose the whole array with raid 10 as well if the other drive on that side of the array fails but the rebuilds are generally a lot faster.  My first raid 5 rebuild on consumer hardware was measured in days, not hours.



Yes, you can lose the whole array. What makes RAID 10 so awesome, besides the performance, is the rebuild times being way faster, which significantly reduces the amount of risk you are exposed to during recovery.

To reduce the risk of multiple disk failures, one should always buy drives from different vendors and in different months if possible. This of course is in addition to properly managing the temperatures and vibration the drives are exposed to. I like to keep my drives around 31-35°C.

I should add that I *never* spin my drives down; they run 24/7. You should also have an active fan over RAID controllers, since many of them do not even ship with one; they are designed for enterprise environments.


----------



## Static~Charge (Oct 5, 2017)

One item that no one has mentioned yet: an uninterruptible power supply (UPS). You *need* a good battery backup with a storage server like this, one that's big enough to handle the computer and the drives. A monitored UPS would be best: it plugs into the computer, software monitors the batteries' power level and shuts down the system cleanly when the power level gets too low.


----------



## FordGT90Concept (Oct 5, 2017)

Fx said:


> Yes, you can lose the whole array. What makes RAID 10 so awesome besides the performance is the rebuild times being way faster which significantly reduce the amount of risk you are at during recovery.


Not by much, especially with a lot of small files on an HDD.  Build speed is mostly a function of write speed.  RAID 6 should theoretically be about the same speed as RAID 10 (with a good controller), but RAID 6 has the advantage of being able to lose any two drives, whereas all the data is lost if both RAID 1 drives die on one side of the RAID 0.  RAID 10 is also terrible for going beyond four drives.  With the capacities he's talking about, going beyond four drives is inevitable, where RAID 10 becomes very inefficient in performance and capacity.
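The "lose any two drives" distinction is easy to quantify. This sketch (with hypothetical helper names, assuming drives are paired into adjacent mirrors) counts which two-drive failure combinations a RAID 10 survives, whereas RAID 6 survives all of them by construction:

```python
from itertools import combinations

def raid6_survives_two_failures(n_drives: int) -> float:
    # RAID 6 tolerates ANY two simultaneous drive failures.
    return 1.0

def raid10_two_failure_survival(n_drives: int) -> float:
    """Fraction of two-drive failure combinations a RAID 10 survives.
    Assumes an even drive count paired into mirrors (0,1), (2,3), ...;
    the array dies only when both failures land in the same pair."""
    fatal = sum(1 for a, b in combinations(range(n_drives), 2)
                if a // 2 == b // 2)
    total = n_drives * (n_drives - 1) // 2
    return 1 - fatal / total

print(raid10_two_failure_survival(8))  # 4 fatal combos out of 28 -> ~0.857
```

So an 8-drive RAID 10 survives most, but not all, double failures; RAID 6 survives every one of them.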


----------



## Steevo (Oct 5, 2017)

RAID 5 of the largest disks. There is no way to use the disks you have without some possibility of data loss; shit happens.

The chances are extremely slim, and my first instinct is to agree with Ford: get a RAID card installed into a machine (any tower with enough storage mounts would do the trick), check the health of the drives, and then start to build an array. With an online live build there is no data destruction: add a disk to the RAID 5 array, let it build, add another disk, and so on.

Really, any machine with a PCIe slot and fast enough (100/1000Mbps) Ethernet will saturate the network before the card would reach capacity.

Unless you are planning on multiple high-bandwidth streams, off-the-shelf hardware and an old computer would work fine.


----------



## FordGT90Concept (Oct 5, 2017)

4K60 is about 60 Mbps.

1 GbE can reasonably handle 900 Mbps of throughput (10% overhead), or 15 4K60 streams.  It's big file transfers that quickly drag 1 GbE down.
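The stream math above works out like this (rough figures, not a benchmark):

```python
link_mbps = 1000
usable_mbps = link_mbps * 0.9   # assume ~10% protocol overhead on 1 GbE
stream_mbps = 60                # rough 4K60 bitrate
streams = int(usable_mbps // stream_mbps)
print(streams)  # 15 concurrent 4K60 streams
```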


----------



## remi (Oct 5, 2017)

Let's make things simple. I made a Pros and Cons list (anyone can edit):

https://docs.google.com/spreadsheets/d/1DoPYoniYND37ITfcHC8-1K7G9tmHm9xdg0htJ3D4WiA/edit?usp=sharing


----------



## brandonwh64 (Oct 5, 2017)

ZFS is no novice setup. Today I saw how unforgiving ZFS can be. Our Plex servers ran off a ZFS network share on FreeNAS: 20+ TB of data set up in a pool. During a weekly scrub the machine rebooted (still do not know why) and now the ZFS share kernel panics when it is imported. I went down the rabbit hole today trying over and over to get it to import so we could start data recovery. After countless Google searches (almost 50 browser tabs open, HAHAH) I was able to force-import the zpool in read-only mode. This gave me access to the data and I started copying it off to a 150TB QNAP. We originally ran an i3 with 8GB of RAM but quickly saw that it was being overworked daily and chose to ignore it. Once we recover the data, we are going to move it to a 100TB server with dual 24-core Xeons and 128GB of RAM, and probably not go with FreeNAS (ZFS) this time around.


----------



## remi (Oct 5, 2017)

@FordGT90Concept

A review on the *HighPoint RocketRAID 840A*



> WARNINGS!!!!
> DO NOT REBOOT WINDOWS OR INTERRUPT OCE IN PROGRESS!!!!! YOU WILL LOSE ALL DATA
> Do not use OCE (Online Capacity Expansion) feature. This feature has not been implemented properly by HighPoint and is extremely risky. There is very high probability that partition will be lost and data will be lost while using OCE feature. The OCE feature is dependent on Windows OS. If you interrupt the processes or reboot Windows OS during processing all data will be lost.
> 
> ...



Do you also lose all the data in Storage Spaces if Windows crashes while it's rebuilding the array after a new HDD is added?


----------



## FordGT90Concept (Oct 5, 2017)

I don't recommend using OCE in any case.  Even if it works perfectly fine, I wouldn't try it without a full backup first.

The 840A supports RAID 0, 1, 10, 5, and 50 in BIOS.  It supports RAID 6 through drivers.  RAID 6 is not bootable at all.  As I said previously, booting is limited to 2TB partitions via legacy BIOS.  I do not recommend booting from the 840A at all.  Install an SSD with the OS on it to boot.  Install the card driver on there, and then all the features of the card will be available through software, including RAID 6 with GPT (which supports partition sizes measured in zettabytes).  If you use RAID 6, don't mess with the Highpoint boot options at all.

I've contacted Highpoint several times in the past about several cards.  I always get the information/software I need out of them.


Adding a drive using OCE, it literally has to read -> write *everything*.  Say you have 4 drives with 512-byte segments and you add a 5th drive.  It has to read the data off of those 4 drives, confirm the validity of the data, build the new parity information, then write the new data back, including *over* the existing data on the four drives. Moreover, the data has to spread out.  Let's say this is RAID 6 we're talking about: that means there's 1024 bytes of data to redistribute on top of 1024 bytes of parity data.  Adding a 5th drive increases that to 1536 bytes of data + 1024 bytes of parity.  The data has to be compacted in this process, so the 2048 bytes that were read still end up as 2048 bytes with the fifth drive, but instead of each drive holding 512 bytes, it only holds ~410 bytes.  It keeps progressing through each 512-byte segment, adding the data in those ~410-byte layers, until the rebuild is completed.  As this process is happening, though, the data is in limbo.
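A toy model of the stripe accounting above (the `raid6_layout` helper is hypothetical; real controllers use much larger stripe sizes and far more bookkeeping) shows why every existing stripe must be rewritten:

```python
def raid6_layout(total_drives: int, segment_bytes: int = 512) -> dict:
    """Per-stripe accounting for RAID 6: each drive holds one segment
    per stripe, and two segments per stripe are parity (P and Q)."""
    data_segments = total_drives - 2
    return {
        "data_per_stripe": data_segments * segment_bytes,
        "parity_per_stripe": 2 * segment_bytes,
    }

before = raid6_layout(4)  # 2 data + 2 parity segments per stripe
after = raid6_layout(5)   # 3 data + 2 parity segments per stripe
print(before["data_per_stripe"], after["data_per_stripe"])  # 1024 1536

# Every old stripe held 1024 bytes of data; the new geometry holds 1536,
# so the expansion must read, re-compute parity for, and rewrite every
# stripe in place -- which is why an interrupted OCE is so dangerous.
```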

If something will go wrong, it is statistically most likely to happen during rebuild.  As Murphy's law states, "anything that can go wrong will go wrong."


----------



## Aquinus (Oct 5, 2017)

For the budget, I might even suggest that a real RAID card is overkill. mdadm is more than capable for the situation the OP has laid out. A motherboard with enough SATA ports would likely be cheaper than a dedicated RAID card, and an mdadm array will be recognized on practically any Linux installation, which makes it easy to move in case everything goes to crap; it isn't tied to any particular hardware vendor. I ran a 3-disk RAID 5 with it for several years and was impressed at how well it performed; it used to do streaming video without a hitch. Cutting out the RAID card can free up a good chunk of money if it's not required and should be considered.

Also, if this is for long-term storage, booting from it should be considered a non-issue, because you shouldn't be booting from an array intended for storage (in my opinion).


----------



## remi (Oct 5, 2017)

I won't have 5 drives from the start, so I can only do RAID 5.
In the future, when I have 10 drives, almost full, I can't add another HDD for redundancy, right? So if I want to change to RAID 6, I will need to take the data off the array, format it all, make a new RAID 6 array, and copy the data back onto it?

So every month when I add a new 10TB HDD, the pool is being rebuilt? Approximately how much time would that take for, say, 70TB of data?

And does this rebuild time vary between RAID vs unRAID vs Storage Spaces?

Of course I won't be booting from the array in any case. I'll use a cheap SSD.



> anything that can go wrong will go wrong.

That's one of the things I always try to take into account, whatever I do.
That and the KISS rule (Keep It Simple, Stupid).


----------



## Aquinus (Oct 5, 2017)

remi said:


> And does this rebuild time vary in RAID vs unRAID vs Storage Spaces ?


Parity-based rebuilds always take a long time regardless of the type of RAID being used, but it can vary. Either way, the amount of time scales with the size of the array, so 70TB will take a very long time to rebuild. You're likely looking at over 24 hours to do a rebuild with any form of RAID.


----------



## remi (Oct 5, 2017)

And it has to rebuild every time I add a new drive? So ideally I should add 2 drives once every 2 months? Would that help?


----------



## FordGT90Concept (Oct 5, 2017)

You could do 4 drives at a time, RAID 5 each.  You'll end up with several drive letters instead of one, but I assume your software can take care of that.

The 840A supports up to four simultaneous RAIDs across 16 drives, so 4x4 works.  To some degree, the data is actually safer this way because you could handle up to four failures (one in each array) versus one (RAID 5) or two (RAID 6).  That said, you'd also be losing 4 drives' worth of usable space.


Hard drives are ~200 MB/s.  The time to foreground rebuild an array is approximately capacity of one drive / 200 MB/s.  Background rebuild takes days.  On a large volume it could take weeks.

10 TB = 10,000,000 MB / 200 MB/s = 50,000 s = 13.89 hr
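That estimate as a tiny calculator (an optimistic best case: full sequential speed sustained the whole time, no competing I/O, decimal units):

```python
def rebuild_hours(drive_tb: float, mb_per_s: float = 200.0) -> float:
    """Optimistic foreground rebuild time: one drive's capacity
    streamed at full sequential speed (1 TB = 1,000,000 MB)."""
    seconds = drive_tb * 1_000_000 / mb_per_s
    return seconds / 3600

print(round(rebuild_hours(10), 2))  # 13.89 hours for a 10 TB drive
```

A background rebuild, throttled so the array stays usable, can easily take several times longer.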


----------



## remi (Oct 5, 2017)

I don't understand. So a total pool of 10TB takes 14h? Or adding a new 10TB drive to (let's say) a 70TB pool takes 14h?

Many people have said that RAID 5 isn't safe when talking about such large sizes.
Ford, I get the 4x RAID 5 idea, but that would complicate things, because every time I start a new RAID 5 I would need 3 new drives off the bat.


----------



## FordGT90Concept (Oct 5, 2017)

All drives are reading/writing simultaneously.  If there's 8 drives in a RAID, it's effectively 1600 MB/s.  Even though the parity information isn't directly usable, it still figures into the read/write performance of it (those operations are being carried out quietly).

4x10TB = 30 TB usable space.  With 4 of them, you'd end up with 120 TB usable space.  Certainly you can wait until you need another block of 30 TB before adding more capacity.


----------



## remi (Oct 5, 2017)

FordGT90Concept said:


> Certainly you can wait until you need another block of 30 TB before adding more capacity.


I'm saying that now I can afford 3 drives, OK, I make a RAID 5. And when those are full, I have to add 3 new empty drives in order to make another RAID 5? Very impractical for me!


Wouldn't Storage Spaces with three-way mirroring be the best solution? I can use different-sized drives, maybe 12 or even 20TB in the future.
The downside is that I need 5 drives to start, but I can use 2 or 3 of my old 2TB or 3TB drives!

I really think this would be the best solution!

The downside is that I lose a lot of TB to redundancy, but I get 2-drive failure protection!

https://docs.microsoft.com/en-us/wi...storage-spaces/storage-spaces-fault-tolerance

Three-way mirroring is only 33% storage efficiency, so if I add 3x 10TB drives I only get 10TB!! That is brutal!

Not sure what mixed resiliency entails in Storage Spaces; that also gives 2-drive failure tolerance, but it can have 80% storage efficiency?!
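To put numbers on the efficiency comparison, here is a back-of-envelope sketch (the `usable_tb` helper is hypothetical, and real Storage Spaces capacity also depends on column count and slab allocation):

```python
def usable_tb(drives: int, drive_tb: float, scheme: str) -> float:
    """Rough usable capacity under common resiliency layouts.
    Ignores column/slab overheads; back-of-envelope only."""
    if scheme == "two-way-mirror":
        return drives * drive_tb / 2       # 50% efficiency
    if scheme == "three-way-mirror":
        return drives * drive_tb / 3       # ~33% efficiency
    if scheme == "dual-parity":
        return (drives - 2) * drive_tb     # (n-2)/n: 50% at 4 drives, 80% at 10
    raise ValueError(f"unknown scheme: {scheme}")

print(usable_tb(3, 10, "three-way-mirror"))  # 10.0 TB from 30 TB raw
print(usable_tb(10, 10, "dual-parity"))      # 80.0 TB from 100 TB raw
```

The (n-2)/n line is why dual parity is quoted as "50.0%-80.0%": efficiency climbs toward 80%+ as the drive count grows.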


----------



## newtekie1 (Oct 5, 2017)

remi said:


> @FordGT90Concept
> 
> A review on the *HighPoint RocketRAID 840A*
> 
> ...



I don't know about the 840A, but on my 2720 I've rebooted during an OCE and it was fine.  Once the machine booted, the OCE process picked up right where it left off.


----------



## remi (Oct 5, 2017)

I would really need an answer for this:
do you also lose all the data in Storage Spaces if Windows crashes while it's rebuilding the array after a new HDD is added?


----------



## Iciclebear (Oct 5, 2017)

Remi,

You had the same limitation in Storage Spaces until recently.  In order to expand a pool you had to add the same number of disks as you had columns, but in Win10/Server 2016 you can now use the optimize command after the disk is added.

For "mixed resiliency":
In Storage Spaces, the storage pool is just the contiguous area of all the disks in the array, so 4x 4TB = a 16TB storage pool.  After that you create a virtual disk, and that is where you tell it what kind of resiliency you want for your data: two-way mirror, three-way mirror, parity or dual parity, or simple.  If you wanted a mixed environment you could make a disk that uses two-way mirror and takes up 14TB of your 16TB pool, but the actual available size of that disk to you is 7TB because it's a mirror.  The remaining 2TB could just be "simple", essentially RAID 0'd, if it was something like a scratch drive for video editing.

If you go the Storage Spaces route, be sure to read up on virtual disk creation, and if you are using Win10, please create your virtual disks via PowerShell, as many of the options that ensure performance (such as the number of columns) are hidden from the Win10 GUI.

Storage Spaces' performance in a mirrored setup is pretty good, but parity isn't as refined yet.
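The 16TB example above works out like this (a simple arithmetic sketch; actual Storage Spaces allocation happens in slabs, so real numbers differ slightly):

```python
# Pool = raw sum of disks; each virtual disk picks its own resiliency.
pool_tb = 4 * 4.0                  # four 4 TB disks -> 16 TB pool

mirror_raw = 14.0                  # two-way mirror slice of the pool
mirror_usable = mirror_raw / 2     # mirror stores every byte twice -> 7 TB

simple_raw = pool_tb - mirror_raw  # leftover 2 TB
simple_usable = simple_raw         # "simple" (striped) keeps all of it

print(mirror_usable, simple_usable)  # 7.0 2.0
```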



remi said:


> i would really need an answer for this :
> do you also lose all the data on Storage Spaces if windows crashes while it's creating the array after a new hdd is added ?



Best thing I could find was this:
https://answers.microsoft.com/en-us...8/bde12a9b-d54f-4932-beb0-022300196793?auth=1

The entry from that site reads as follows:
I was curious as well, so I tested it and YES, even with a Windows 8 System drive crash your Storage Spaces drives are intact.

What I did.

Installed Windows 8.1 Pro in Drive 1
Setup 3 drives in Parity Mode.  (drive 2, 3, 4)
Copied about 1TB worth of data.

Removed Drive 1 with the Windows 8.1 image.
Added another Drive (5) and installed Windows 8.1 Pro again.

Went to Manage Storage Spaces, and all the pool was there.
Under "This PC" all the drives were present: Local Disk (C:) and Data Parity (D:)

I was able to play test a bunch of movie files in the D: drive without any problem.

So YES, the Storage Spaces do maintain their integrity even if the C: drive crashes and you have to reimage from scratch.

Of course, this is simply describing the storage pool being moved to a new PC in the event of an OS failure.  It's important to remember that nothing is 100% safe.  It's more important to have a good backup than it is to have a system that you "think" is bulletproof, because none of them truly are.  If you are asking whether it can survive ANY OS-level failure ever and never corrupt data, then the answer is no, but RAID cards, SSDs, and even single hard drives have the same issue.

In fact, Toshiba has drives with NAND flash caches on them to try and prevent this.  They use the inertia of the spinning disks to power the NAND flash and dump the drive cache to NAND before the drive loses power.  They just released a 10TB model:
http://www.storagereview.com/toshiba_10tb_mg06_series_enterprise_capacity_hdd_line_announced

Cool tech, but probably cheaper to get a monitored UPS.  Enterprise SSDs have the same feature, with capacitors built into the drives to make sure the DRAM cache is flushed before power-down so data is not lost, but even then it's not 100%.



----------



## remi (Oct 5, 2017)

Iciclebear, that's not the scenario I was referring to.
I'm asking what happens if Windows crashes *while* Storage Spaces is adding a new HDD to the pool. Like Ford said, 10TB can take 14h+.
Do I lose everything then?


----------



## Iciclebear (Oct 5, 2017)

remi said:


> Iciclebear that's not the scenario i was referring to,
> i'm asking if windows crashes *while* Storage Spaces is adding a new HDD to the pool ? like Ford said, 10TB can take 14h +



Remi, Storage Spaces doesn't do that specifically.  Ford is referring to a RAID rebuild if you wanted to add additional disks.  In a RAID 5 setup you would add the disk and need to rebuild the array to get access to the extra space (correct me if I'm wrong, I'm not a RAID 5 expert).

In Storage Spaces (I haven't tested this, just from reading) you can just expand the disk to use them.  Adding disks to the pool and then extending the volume is nearly instantaneous.  Drive optimization is the closest thing to the RAID 5 rebuild task that was mentioned earlier.  Optimization takes the data on the disks and rebalances it across the new drives, so 3 drives at 80% full become 4 drives at 60% full, and it defrags the slabs so that you don't get bottlenecked by 1 drive having all of the extra copies, and so on.
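The rebalance example is just conservation of data across more spindles (a sketch that ignores slab granularity):

```python
# Rebalance: the same data spread over more disks.
drives_before, fill_before = 3, 0.80
data = drives_before * fill_before   # 2.4 "drive-units" of data

drives_after = 4
fill_after = data / drives_after     # each drive ends up 60% full
print(round(fill_after, 2))  # 0.6
```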

I've read this article talking about having windows crash during a drive optimization and how they got their array back.
https://social.technet.microsoft.co...optimization-problems?forum=win10itprogeneral

But it should be noted that the user in that article is running a parity space, which was specifically noted to be unsupported by the drive optimization feature (and the user ran it anyway).


I've only optimized my array twice, and both times it was after it was freshly made with about a hundred gigs of data, and it took a matter of minutes.  You could also create new virtual disks when you add new drives, or even new storage pools if you didn't care about all the data being accessible from the same drive letter.  

If this was a write-once, read-many archive and you had a copy of Server, you could even run DFS-N and slap all the pools into a single directory and never have to optimize anything, but it really depends on what you are doing with your data.


----------



## remi (Oct 5, 2017)

Yes, by adding a new HDD to the pool I meant drive optimization. So you mean dual parity doesn't support the drive optimization feature? Only two-way and three-way mirroring?

Because now I'm considering dual parity (failure tolerance of 2 and storage efficiency of 50.0%-80.0%, vs. 33% with three-way mirroring).

The downside would be very slow write speeds and drive optimization?


----------



## Fx (Oct 5, 2017)

FordGT90Concept said:


> Not by much, especially with a lot of small files on an HDD.  Build speed is mostly a function of write speed.  RAID 6 should theoretically be about the same speed as RAID 10 (with a good controller), but RAID 6 has the advantage of being able to lose any two drives, whereas all the data is lost if both RAID 1 drives die on one side of the RAID 0.  RAID 10 is also terrible for going beyond four drives.  With the capacities he's talking about, going beyond four drives is inevitable, where RAID 10 becomes very inefficient in performance and capacity.



Absolutely false. RAID 5 and 6 take much longer on writes, due largely to the write penalties of their design. Reads are actually where they are decent and can improve. RAID 10 is only inefficient in cost compared to the others, not in the grand scheme of data retention. It scales extremely well in read, write, and ease of expansion. The classic write-penalty figures make this clear.
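For reference, the commonly cited per-level write penalties (physical I/Os per logical random write) can be turned into a rough throughput estimate. These are generic textbook factors, not measurements, and controller/drive caches will mask some of the penalty in practice:

```python
# Classic write-penalty factors: physical I/Os per logical random write.
WRITE_PENALTY = {"RAID0": 1, "RAID1": 2, "RAID10": 2, "RAID5": 4, "RAID6": 6}

def write_iops(iops_per_drive: float, drives: int, level: str) -> float:
    """Back-of-envelope random-write IOPS for an array; caches ignored."""
    return iops_per_drive * drives / WRITE_PENALTY[level]

# Eight drives at ~150 random IOPS each:
for level in ("RAID10", "RAID5", "RAID6"):
    print(level, write_iops(150, 8, level))  # 600.0, 300.0, 200.0
```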


----------



## Iciclebear (Oct 5, 2017)

remi said:


> yes by adding a new HDD to the pool i meant Drive optimization, so you mean Dual parity doesn't support the drive optimization feature ? only Two-way Mirror and Three-way Mirroring ?
> 
> Cause now i'm considering Dual parity (Failure tolerance 2 and Storage efficiency 50.0% - 80.0% vs. 33% with Three-way Mirroring
> 
> The downside would be very slow write speeds  and drive optimization ?



I could be wrong on that, remi; the official documentation looks like it was from Tech Preview 4 of Server 2016, where it wasn't supported.  I've seen some commenters say it's now supported, but I haven't seen any official documentation.


----------



## ERazer (Oct 5, 2017)

Updated the sheet for unRAID; seems not a whole lot of people are familiar with it.


----------



## Steevo (Oct 5, 2017)

Fx said:


> Absolutely false. RAID 5 and 6 take much longer on writes, due largely to the write penalties of their design. Reads are actually where they are decent and can improve. RAID 10 is only inefficient in cost compared to the others, not in the grand scheme of data retention. It scales extremely well in read, write, and ease of expansion.




Considering most of the penalty will be masked by on-drive cache, and having run RAID 5 arrays in the past, performance is not as bad as the write penalty would suggest.

Setting up an array and caching correctly also makes access very fast. And considering the OP is looking for stability and maximum storage space, RAID 5 is the best option.


----------



## thebluebumblebee (Oct 5, 2017)

remi said:


> Without the HDDs, i'm hoping to spend less then 770$.


@Fx , correct me if I'm wrong...
@remi , if you were to build a system like the conversation is talking about, you need to realize how much money is involved.  If you're going to be getting 1 GB of RAM for every TB of storage, then you are going to need a motherboard that supports 128 GB of RAM.  Then that registered ECC RAM will cost upwards of $350 for every 32 GB DIMM...

There are 2 columns missing in that spreadsheet: one for building a system (including the cost) and one for a prebuilt NAS.  If you go with a prebuilt, I'm thinking you could start with the $600 5-bay Synology DS1517 (the 8-bay is $850) and then later add up to 2 of the 5-bay DX517 for $520 each.  Let them deal with all of the configuration.  I also believe that this is the easiest way to add capacity the way that you want.


----------



## remi (Oct 5, 2017)

OK, final round: unRAID vs Storage Spaces.

We need more bullet points for Pros and Cons.

A huge pro for unRAID: you only lose data on the failed drive (depending on how the file structure is set up).
A huge pro for Storage Spaces: Windows-based (like everything else I own) and very easy to set up, maintain, etc.

Please only add pros and cons if you have worked with either of them and know something for a fact.


----------



## Jetster (Oct 5, 2017)

thebluebumblebee said:


> @Fx , correct me if I'm wrong...
> @remi , if you were to build a system like the conversation is talking about, you need to realize how much money is involved.  If you're going to be getting 1 GB RAM for every TB of storage, then you are going to need a motherboard that supports 128 GB of RAM.  Then that registered ECC RAM will cost upwards of $350 for every 32 GB DIMM......
> 
> There are 2 columns missing in that spreadsheet.  One for building a system (including the cost) and one for a prebuilt NAS.  If you go with a prebuilt, I'm thinking you could start with the $600 5 bay Synology DS1517 (the 8 bay is $850) and then latter add up to 2 of the 5 bay Dx517 for $520.  Let them deal with all of the configuration.  I also believe that this is the easiest way to add capacity they way that you want.


The Synology NAS will also have features that the build would not: a media server and many apps.


----------



## ERazer (Oct 5, 2017)

remi said:


> ok Final Round : unRAID vs Storage Spaces
> 
> we need more bullet points for Pros and Cons
> 
> ...



To go into detail about the file structure: I set up my Media folder to use all 5 of my drives. If drive A fails, only the movies/music on drive A will be lost, not my whole Media folder, and that's only if my parity drive fails as well.

Initial cost is completely up to you; you can use an old PC as a starting point and upgrade from there, or go all out with server grade.

To replace/upgrade the whole system (CPU/mobo/RAM) you only need the thumb drive and the drive assignments.

I'm no expert, but it works for my needs. My recommendation is to do the 30-day trial with an old computer, just to get the hang of it.

EDIT:

One more thing: there's an app called preclear. What it does, in simple terms, is stress-test a new HDD to make sure there are no errors before you add it to your drive pool.


----------



## thebluebumblebee (Oct 5, 2017)

Jetster said:


> The Synology NAS will also have features that the build would not: a media server and many apps.


And, maybe the most important feature, someone to call when you have a question:


> +1 425 296 3177 (Support)


----------



## ERazer (Oct 5, 2017)

Jetster said:


> The Synology NAS will also have features that the build would not: a media server and many apps.



What I don't like with those NAS units is the limited hardware upgrades. Though they are compact.


----------



## azdesign (Oct 5, 2017)

Ram, Raid and Disk aside,
You should decide what kind of nas you will be using first.
Building your own custom nas seems cool and spot-on in term of performance/price. Just remember that *you're* the one who will maintain it. You will need to study, experiments, and prepare for the worst.

also, DO NOT go for ZFS unless you're tech-savvy and have lot of time to learn it. ZFS have *huge *learning curve and you need to do a lot of experiments to build a reliable, stable, and safe zfs nas.

I suggest you save more money for an expandable Synology/QNAP system.
I ran FreeNAS for 5+ years and was lucky enough that my data is still intact (consumer-grade hardware, non-ECC RAM, scrubs only 2 or 3 times a year, etc.).

Now I'm happy with my Synology; why?

1. Far easier to maintain; you don't need to concern yourself with hardware (except RAM, drives, and SSD cache) or complicated technical stuff.
2. You can easily expand it however you wish. With ZFS, on the other hand, you need to plan your expansion and RAID structure beforehand to be scalable in the future, as you cannot simply "expand" an existing pool.
3. When something goes wrong is when the nightmare starts with ZFS. Before you can get help on the FreeNAS forum, they will ask for your specs; if you don't have their recommended specs, well... good luck, especially if you're on a budget.

Make your life easier and buy a prebuilt NAS. They have an easy-to-use UI, tons of apps, and professional support.
SHR2 + hot spares + scheduled short/long SMART tests, scrubs, and updates + a good UPS + strict user permissions + some dust filters (this little box is surprisingly quite a dust magnet).
Then you don't have to worry about anything else.


----------



## Jetster (Oct 5, 2017)

ERazer said:


> What I don't like about those NAS boxes is the limited hardware upgradability. They are compact, though.


I've been going back and forth. But I just ordered one so we'll see how it works out


----------



## ERazer (Oct 5, 2017)

I'm subbed to this guy; he makes good how-to vids for unRAID.


----------



## Sasqui (Oct 5, 2017)

Yea, I'm really late to this party, but what about a desktop NAS?

https://www.newegg.com/Product/Product.aspx?Item=N82E16822107342

It lists a max capacity of 64 TB, but then goes on with this:



> Scalable design with 2 QNAP expansion enclosures UX-800P for total storage capacity of up to 24 drives


----------



## ERazer (Oct 5, 2017)

Jetster said:


> I've been going back and forth. But I just ordered one so we'll see how it works out



Eventually I'll have my main rig as my server. I'm amazed a lot of people have never heard of unRAID, especially here. A lot of users have a second rig, even a third, that they hardly use; why not use it as a NAS/media server?


----------



## FordGT90Concept (Oct 6, 2017)

So, I just tried @newtekie1's advice on my own HighPoint RAID card, which is approximately a decade old.  I had it configured as a 3-drive RAID5 + 1 hot spare that whole time, but it was down to less than 20% free space remaining.  I used OCE to add the hot spare to the RAID.  It estimated less than six hours to complete; I went to bed and it was done by the time I got up.  No issues.  HighPoint's software showed the drive was incorporated, but Disk Management didn't reflect the change.  The following night, I rebooted so the extra space showed up in Disk Management and extended the existing volume to fill the newly added space.  Now my 640 GB RAID5 is 960 GB.

Mind you, I did all of this because I had just done my monthly backup, so if something went wrong, I was prepared to deal with it.  Nothing went wrong, and the only inconvenience is that it required rebooting once.


----------



## newtekie1 (Oct 6, 2017)

FordGT90Concept said:


> Nothing went wrong and the only inconvenience is that it required rebooting once.



Strange, what OS are you running on the system?  I swear the last time I did an OCE on my Win10 machine, it just showed up as a larger drive in Disk Management; all I had to do was extend the partition, no reboot required.  But my card is newer.


----------



## FordGT90Concept (Oct 6, 2017)

Server 2012 R2.  It's a RocketRAID 2300 so pretty dang old.

I'm not running the latest BIOS on it because my previous motherboard literally couldn't boot with it; it ran out of memory.  When I changed motherboards, I decided to leave well enough alone.  Maybe that update fixed the issue.


----------



## newtekie1 (Oct 6, 2017)

FordGT90Concept said:


> Server 2012 R2.  It's a RocketRAID 2300 so pretty dang old.
> 
> I'm not running the latest BIOS on it because my previous motherboard literally couldn't boot with it; it ran out of memory.  When I changed motherboards, I decided to leave well enough alone.  Maybe that update fixed the issue.



Yeah, the 2300 is pretty dang old.  I still have 2 though. LOL  One is in service, and another that I was using but retired I kept as a spare/backup for the one I still use.  I had boot issues on old motherboards with the 2300 too.  You can flash the firmware, and there is an option in the flash utility that changes a setting in the firmware that lets it boot on more motherboards.  It's been so long since I've done it, though, that I can't remember the setting.

The cool thing is the RAID 5 I'm running now started on a RR 2300 card.  Then I moved it to a 622, then it went to a 642L, and finally to a 2722.  It also started with 3 drives and now has 5.  It was the same array through all the changes, I never had to re-create it.


----------



## Steevo (Oct 6, 2017)

newtekie1 said:


> Yeah, the 2300 is pretty dang old.  I still have 2 though. LOL  One is in service, and another that I was using but retired I kept as a spare/backup for the one I still use.  I had boot issues on old motherboards with the 2300 too.  You can flash the firmware, and there is an option in the flash utility that changes a setting in the firmware that lets it boot on more motherboards.  It's been so long since I've done it, though, that I can't remember the setting.
> 
> The cool thing is the RAID 5 I'm running now started on a RR 2300 card.  Then I moved it to a 622, then it went to a 642L, and finally to a 2722.  It also started with 3 drives and now has 5.  It was the same array through all the changes, I never had to re-create it.




This is why I bought my 640 card and put one in a production machine for backups and thin clients. There's something about having 20TB and being able to saturate the PCIe link with mechanical disks during sequential reads off a RAID5 array.

With simple striped arrays, the array was detected by the AMD chipset even though it was created by the HighPoint card.


----------



## remi (Oct 8, 2017)

Ok it's settled *Storage Spaces* it is.

Let's talk *hardware :*

Will ECC RAM  help prevent copying errors or something if the system is just used for a *personal video library* ? No networking, just copying to and from the HDDs. 

Will a good CPU improve the performance of Storage Spaces ?

For the motherboard I was thinking of the
ASRock X99 TAICHI or
ASRock X99 Extreme4.

Both have *10* SATA 3 ports, and for 21 total HDDs (in the future) I will need to add SAS expanders.

The problem is that those mobos have the LGA 2011-3 socket, so the cheapest CPU I can add is a $450 Intel Broadwell-E Core i7-6800K 3.4GHz *???*

If possible I would like to spend half that on the CPU (if it's not necessary for Storage Spaces).


----------



## Jetster (Oct 8, 2017)

I would just get the Synology DS1517 or the ds513


----------



## newtekie1 (Oct 8, 2017)

remi said:


> Ok it's settled *Storage Spaces* it is.
> 
> Let's talk *hardware :*
> 
> ...




Ok.  First, SAS expanders only work with SAS ports, not SATA.  Hence the name *SAS* expander.

However, you don't need an expensive motherboard (and hence an expensive CPU) if you are using Storage Spaces.  Storage Spaces can use any drive Windows sees; they don't all have to be connected to the motherboard.  So buy a cheaper socket 1151 motherboard and just add cheap PCI-E SATA cards to get more SATA ports to connect more drives.


----------



## remi (Oct 8, 2017)

*newtekie1* Something like this ? 
https://www.amazon.com/dp/B00ESFEI2E/?tag=tec06d-20

Cause I might as well buy a 10-port SATA 3 mobo; the price difference isn't that much.

What do you think about ECC RAM? Synology and Drobo don't use it; should I? Could I lose a lot of data without it?


----------



## FordGT90Concept (Oct 8, 2017)

You need a server processor to use ECC memory at all, and considering how cheap you're aiming, that's out of the question.  Just buy some non-ECC Kingston sticks and be happy.


----------



## remi (Oct 8, 2017)

All Ryzen CPUs support ECC RAM, and some are quite cheap. ECC RAM isn't much more expensive than non-ECC.


----------



## FordGT90Concept (Oct 8, 2017)

Ryzen doesn't work in X99.


----------



## newtekie1 (Oct 8, 2017)

remi said:


> *newtekie1* Something like this ?
> https://www.amazon.com/dp/B00ESFEI2E/?tag=tec06d-20
> 
> Cause I might as well buy a 10-port SATA 3 mobo; the price difference isn't that much.
> ...



No need for ECC, IMO.  You only really need ECC if you were using ZFS with FreeNAS.  FreeNAS needs ECC memory because it caches the parity data in RAM.


----------



## Jetster (Oct 8, 2017)

remi said:


> All Ryzen CPUs support ECC RAM, and some are quite cheap. ECC RAM isn't much more expensive than non-ECC.


Not entirely true. While they don't block the use of ECC, they offer no official support for it either. So depending on the board, it may or may not work.


----------



## remi (Oct 8, 2017)

ASRock X370 Taichi + Ryzen + ECC RAM

http://www.hardwarecanucks.com/foru...ws/75030-ecc-memory-amds-ryzen-deep-dive.html


----------



## Jetster (Oct 8, 2017)

Ok, support yes, validated no. So good luck with that.


----------



## remi (Oct 8, 2017)

not sure what that means


----------



## Jetster (Oct 8, 2017)

remi said:


> not sure what that means


It means that it's not officially tested. So it might work, it might not; you might have to try a few sets.


----------



## remi (Oct 8, 2017)

In three different places I've read that Ryzen and ASRock support ECC:
https://www.reddit.com/r/Amd/comments/6f7s28/what_is_the_state_of_ryzen_and_ecc/
http://www.overclock.net/t/1629642/ryzen-ecc-motherboards/10
http://www.hardwarecanucks.com/foru...ws/75030-ecc-memory-amds-ryzen-deep-dive.html

If I'm wrong, let me know. I don't want an overkill system, but I would like one that won't give me headaches.


----------



## Jetster (Oct 8, 2017)

I'm not sure. Really, you're overthinking this thing. I don't think you need ECC RAM, and you definitely don't need an expensive system. A dual core will run a NAS just fine.


----------



## remi (Oct 8, 2017)

Adding ECC RAM will only cost an extra $50, so why not?


----------



## Jetster (Oct 8, 2017)

Most NAS manufacturers don't use it.


----------



## newtekie1 (Oct 8, 2017)

remi said:


> Adding ECC RAM will only cost an extra $50, so why not?



Because it overcomplicates things and might not work in the future.  AMD could very well release a microcode update down the road that blocks ECC memory, and then you'd suddenly have a system that one day, without warning, just doesn't work.

Besides, it is not necessary.  Even when moving large amounts of data around, ECC isn't going to help much.  There are already checks in place during file write operations to make sure the file isn't corrupted by a bit-flip error during the copy process.  When dealing with files and data, a CRC is used to make sure there aren't errors in the data written to the storage device.


----------



## remi (Oct 8, 2017)

I guess I can keep the ASRock X370 Taichi + Ryzen 3 and use non-ECC RAM.


----------



## remi (Oct 10, 2017)

+ a 120GB ADATA SSD and the Nanoxia Deep Silence 6 case
+ a Windows 10 license for 26 euros from kinguin.net (to get all the updates, especially for Storage Spaces)
and, to start, 3 x WD 10TB Red NAS HDDs.

I would buy the 10TB Red Pro with its 5-year warranty instead of 2 years, but I can't seem to find them in Romania, and if I buy them from abroad, how can I send them back for warranty service?

If I don't use these HDDs 24/7 (like they are intended), will they give me any problems in the future (10-15 years from now)?


----------



## MxPhenom 216 (Oct 10, 2017)

IMO I would run RAID5 using an LSI RAID controller, though I don't know anything about Storage Spaces. I just have a lot of experience with LSI RAID controllers from a past job, and I liked them. And I prefer hardware RAID over software RAID.


----------



## remi (Oct 11, 2017)

How many hard drives will this system handle ?

Motherboard ASRock X370 Taichi = 10 SATA + 2 SATA controller cards (each with another 8 SATA III ports) = *26* SATA
PSU Seasonic PRIME SSR-850GD, 850W 80+ Gold = 10 SATA + 5 Molex connectors (each can split into another 4 SATA) = *30* SATA
Case Nanoxia Deep Silence 6 = *21 or 24* drive capacity
CPU Ryzen 3 1300X
UPS APC BACK-UPS, 1400VA, 700W

Am I correct about the PSU and motherboard SATA ports?
Am I missing something?
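The connector math above can be sanity-checked in a couple of lines. The figures are copied straight from the list (so verify them against the actual spec sheets before buying); the point is that the usable drive count is the smallest of the three limits.

```python
# Sanity-check the connector math (numbers taken from the post above).
mobo_ports = 10 + 2 * 8       # onboard SATA + two hypothetical 8-port cards
psu_connectors = 10 + 5 * 4   # native SATA plugs + Molex-to-4x-SATA splitters
case_bays = 24                # upper bound quoted for the Deep Silence 6

print(mobo_ports, psu_connectors)  # 26 and 30, matching the post

# The usable drive count is the smallest of the three limits.
max_drives = min(mobo_ports, psu_connectors, case_bays)
print("max drives:", max_drives)   # 24, capped by the case
```

So the PSU and motherboard numbers check out; in this build the case is actually the binding constraint.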


----------



## thebluebumblebee (Oct 11, 2017)

https://portland.craigslist.org/mlt/sop/d/norco-rpc-4224-intel-raid/6342167237.html


----------



## Kursah (Oct 11, 2017)

Please keep suggestions and use of keys purchased from illegitimate marketplaces off of TPU. In most, if not all, cases they are not legal, and as such they are not authorized to be discussed, as per the TPU Forum Guidelines. Link in sig if you have any questions.

If you have questions feel free to PM me or any other TPU mod, or even better, go do some of your own research on what you're actually doing and whom you're affecting when buying from these marketplaces.

Thanks!


----------



## remi (Oct 12, 2017)

Sorry Kursah, I didn't know these are illegal; I understood they were keys from Asia that cost much less than in Europe or the US. I specifically wanted to buy one and not pirate it.

@thebluebumblebee
That looks like a huge HDD rack, and for $500 it's definitely worth it, but I already bought a few components (case included), and I'm guessing shipping from the US would cost more than $500.

*Edit*
I'm thinking of not going with any RAID system (Storage Spaces), but with JBOD (just a bunch of disks).


With RAID (Storage Spaces) there is the risk of losing 100% of the data (100+ TB).
With JBOD there is the risk of losing only 10-20% of the data (10-20 TB).

Both solutions have advantages and disadvantages...
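To put rough numbers on that trade-off, here is a toy expected-loss model. Everything in it is an assumption for illustration: the per-drive failure probability is invented, drives are assumed to fail independently, and the "lose everything" case applies to a simple/striped pool with no parity; a parity Storage Space tolerates a drive failure, so its numbers would look very different.

```python
# Toy expected-loss comparison: 10 x 10TB drives, no backups.
p = 0.05          # illustrative per-drive failure probability over some period
drives = 10
tb_per_drive = 10

# JBOD / independent disks: each failure costs only that drive's data.
jbod_expected_loss_tb = drives * p * tb_per_drive

# A striped pool with no parity: any single failure loses the whole pool.
pool_fail_prob = 1 - (1 - p) ** drives
stripe_expected_loss_tb = pool_fail_prob * drives * tb_per_drive

print(f"JBOD:   {jbod_expected_loss_tb:.1f} TB expected loss")
print(f"Stripe: {stripe_expected_loss_tb:.1f} TB expected loss")
```

Under these made-up numbers the no-parity pool's expected loss is several times higher than JBOD's, which is the intuition behind the 10-20% vs 100% figures above; adding parity or backups changes the picture entirely.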

Btw, the Nanoxia case has arrived. It's huge! Well worth the investment.


----------

