
+80TB NAS

Last edited:
RAID vs Storage Spaces:

In RAID you can't use different sizes of drives; in Storage Spaces you can.
In RAID you can't easily increase the array size; in Storage Spaces you can.

Sure, RAID is faster, but I don't need performance, just reliability.

If I'm wrong, let me know.

21-HDD case:
 
I'm not going to watch a 7-minute video. I've skimmed it repeatedly and did not see 21 3.5" drives. Please provide a timestamp to look at.

In RAID you can't use different sizes of drives; in Storage Spaces you can.
That's JBOD: Just a Bunch Of Drives.

In RAID you can't easily increase the array size; in Storage Spaces you can.
Indeed, adding/removing drives means rebuilding the array.
 
remi, you definitely need to figure out what the end result should be.
I know what I want, but I don't think I'm expressing it very well.

Right now I have tens of thousands of documentaries on 22 HDDs that sit on a shelf. I access the drives via a USB 2 dock from Vanteck.

It's a nightmare to find anything on them. I would also like to browse the files, but since they are on 22 separate drives it would take days.

I just want to have all the data in one place. I access it a few times a day to upload or download from it. I don't want to watch the videos or stream them from this NAS/server/box/whatever.

Adding a new hard drive every month does not seem like something you would want to deal with when it comes to any kind of storage array.
Really? Why? I thought that's what their purpose was.
How large is the amount of data you intend to store eventually?

I hope it's going to be 100TB+, but it all depends on how much money I can put into it.
It's basically a library for documentaries and ebooks.

How do you intend to access the data, how often?
Once a day or once every two days, for a few hours, via USB 3.0.

What do you need in terms of backup/redundancy?
Is this the only copy of the data, or do you intend to have a backup of the entire thing?

Yes, because I can't afford to make an 80+TB backup.

Do you even need it to be a NAS? You have mentioned only using it occasionally and over USB 3; a NAS by definition is accessed over a network and is normally always online.
No, just a local storage array; no need for a network.
As the simplest example: a library of hard disks and a USB dock?
Like the system I have atm? It's incredibly impractical for so many files.

I'm not going to watch a 7-minute video. I've skimmed it repeatedly and did not see 21 3.5" drives. Please provide a timestamp to look at.

start at 3:23

I couldn't link the video starting from that point; it's the forum's fault, not mine :)
 
start at 3:23
Bad idea. Stacking hard drives without dampening/bracing? The vibrations will make them so error-prone they can't be used.

There are Lian Li cases that can house a lot of drives... sanely.

Edit: Holds 20 3.5" drives: http://www.lian-li.com/en/dt_portfolio/pc-d8000/


I suppose you could fill the motherboard with a lot of cheap SATA cards to supplement what is on the motherboard itself. It will be cheaper than buying a $300 card that can handle 16 drives, but it will also be messier.

Again, with the 840A (and similar RAID cards), you don't have to configure a RAID at all. Drives not in a RAID appear as-is to the operating system, and you can plug and unplug them freely as well. They still have the advantage of being able to notify you of hardware faults.
 
The Nanoxia also has vibration-dampening rubber on every HDD cage, and it's German-made, so it should be of the highest standards.

I can add the 840A SATA card after the 10 SATA 3 ports are full, right? That saves me some budget now.
 
I think there are two problems here.

The first is that the OP seems to think he knows what he wants but doesn't have the experience to back it up.

What he is asking for is 80TB on a storage array, which is attracting the attention of storage people who are used to architecting and doing this properly, i.e. not with consumer-end equipment.

I think we need to understand that the OP is going to need to make some modifications and budgetary sacrifices to make this work the way he thinks it will. And the rest of us need to understand that the OP isn't attempting to run $400 LSI cards.

Personally, I think Storage Spaces isn't a good idea, but I architect my arrays ahead of time, so I understand why it might be the best bet here. I'm also not confident that the OP can configure a ZFS volume and sharing permissions in FreeNAS. At the same time, I understand the OP has a budget, but I think he needs to understand this isn't a desktop; it's not as simple as he thinks. With that many drives you're going to need to drill down into the PSU and make sure your 3.3V/5V rails can carry enough amperage to cold-start those drives, or that the BIOS at least supports staggered spin-up.

We need to think beyond raw SATA ports. There are people who get paid a lot of $$$$ to do this right, and if it were as easy as getting a $120 Seasonic and a sweet super-1337 ASRock board, everyone would already be doing it.


There needs to be leeway on both sides of the coin.

The Nanoxia also has vibration-dampening rubber on every HDD cage, and it's German-made, so it should be of the highest standards.

Assumptions make an ASS of U and ME
 
I edited this above but want to make sure it gets seen:
Holds 20 3.5" drives: http://www.lian-li.com/en/dt_portfolio/pc-d8000/


And to clarify: if you go this whole X99 motherboard + HDDs-in-a-case route, you're building a brand new computer. You can't just plug some other computer into one of its USB 3 ports and expect to transfer files. Your other computer is a USB 3 host, and so is this new computer you're building. Two hosts means no clients, which is a problem. The solution naturally drifts to networking, which drifts into Network Attached Storage (NAS).


What you're really asking for is a 10+ drive enclosure with USB3.

Edit: I can't find any with 10 bays, but StarTech offers a USB3/eSATA enclosure with 8:
https://www.startech.com/HDD/Enclosures/8-bay-removable-hard-drive-enclosure~S358BU33ERM

Reviews are...concerning:
https://www.newegg.com/product/product.aspx?Item=N82E16817707367
 
I'm also not confident that OP can configure a ZFS volume and sharing permissions in FreeNAS.

Yeah, because you have to be a genius to do that.
FreeNAS is out of the question, because why should I buy 128GB of ECC RAM and an expensive CPU if I don't absolutely need it?

Let's say my budget is $1000 without any HDDs. Even then I still can't find a consensus about what the best option for me would be, since I don't need the most performance, but the most reliability.

What you're really asking for is a 10+ drive enclosure with USB3.
Nope, that's not what I want, because it's very limited in space and ventilation.
I'm OK with another PC. Regarding the networking, it's not something I'd thought about, because yeah, what if one day I decide to transfer the data to another PC...
 
Yeah, because you have to be a genius to do that.
FreeNAS is out of the question, because why should I buy 128GB of ECC RAM and an expensive CPU if I don't absolutely need it?

Let's say my budget is $1000 without any HDDs. Even then I still can't find a consensus about what the best option for me would be, since I don't need the most performance, but the most reliability.

You don't appear to be taking this well. I think I'll back out too. If you were worried about reliability you would have a budget over $1k for an 80TB config; that's coming from a professional, and a few other professionals.

Everyone is trying to work with you, but you seem convinced you can do this your way.

Your array WILL fail with the shit components you've picked, and I would be surprised if you could even keep the unit on under load with that power draw. I cannot wait for the thread asking why you shouldn't buy $130 10TB Seagate IronWolfs instead of $300 Hitachi or WD datacenter drives.

“To save money”
 
Most modern boards have 6 SATA ports. With 12TB drives and one port reserved for the system drive, you're talking 60TB. I can't imagine needing any more; if so, then buy a RAID card.
 
You don't appear to be taking this well. I think I'll back out too. If you were worried about reliability you would have a budget over $1k for an 80TB config; that's coming from a professional, and a few other professionals.

Everyone is trying to work with you, but you seem convinced you can do this your way.

Your array WILL fail with the shit components you've picked, and I would be surprised if you could even keep the unit on under load with that power draw. I cannot wait for the thread asking why you shouldn't buy $130 10TB Seagate IronWolfs instead of $300 Hitachi or WD datacenter drives.

“To save money”

I'm only gonna buy WD Red NAS drives, FYI.

If you're such a professional, how come you have no idea what I should use?
OK, so what budget would you recommend, $50,000?
 
Oh, that's the wrong attitude
 
Nope, that's not what I want, because it's very limited in space and ventilation.
I'm OK with another PC. Regarding the networking, it's not something I'd thought about, because yeah, what if one day I decide to transfer the data to another PC...
Limited space... because enclosures are semi-portable.
Limited ventilation... you said you'd only be using it a few hours at a time. That StarTech has active cooling too, so...

You could always buy several of those enclosures and plug them into the same computer.

The advantage of this path is that you can use those 22 drives as-is. Say you bought two of these enclosures; that covers 16 of those drives. The last 6 you could install into your existing computer, maybe, or keep in your external dock.

There's no redundancy with this approach but, assuming you go with it, you're looking at under $700, which matches your budget.
 
If you're such a professional, how come you have no idea what I should use?

Because when I have almost 100TB of cold data that I rarely access, I spin up my Amazon S3 instance and access it, or I log into my Fibre Channel SAN, because I don't specialize in running priceless data on equipment I can buy at RadioShack.
 
Oh, that's the wrong attitude

Hey he started it.

Limited space... because enclosures are semi-portable.
Limited ventilation... you said you'd only be using it a few hours at a time. That StarTech has active cooling too, so...

You could always buy several of those enclosures and plug them into the same computer.

The advantage of this path is that you can use those 22 drives as-is. Say you bought two of these enclosures; that covers 16 of those drives. The last 6 you could install into your existing computer, maybe, or keep in your external dock.

There's no redundancy with this approach but, assuming you go with it, you're looking at under $700, which matches your budget.

If I'm going to power up many drives at once, I need protection against HDD failure, so redundancy is a must. The initial budget can vary. I don't know what I would do if I lost all that data.

Plus, 80TB is a future prediction. Right now I will have ~30TB.
 
I need protection against HDD failure, so redundancy is a must.
RAID = Redundant Array of Inexpensive Disks

RAID is usually the first line of defense against HDD failure. Backup is the second line of defense.
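The parity idea is easy to demo in a few lines. Here is a toy Python sketch (purely illustrative, not any controller's real code) of the XOR parity behind RAID 5: the parity block is the XOR of the data blocks, so any single lost block, data or parity, can be rebuilt from the survivors.

```python
# Toy illustration of RAID 5's XOR parity (not real RAID code).
# Parity block = XOR of all data blocks; losing any ONE block
# (data or parity) leaves enough information to rebuild it.

def xor_blocks(blocks):
    """XOR equal-length byte strings together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Three "drives" worth of data plus one parity "drive".
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Simulate losing drive 1: rebuild it from the survivors + parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]  # the lost block is recovered exactly
```

That is the whole trick; a real controller just does this per stripe and rotates which drive holds the parity.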
 
Remi,

I have a solution for you. It is reliable, cheap, easy to expand, and easy to recover in case of failure. However, it doesn't use Storage Spaces; I would never trust that. This is a fine line that you are trying to walk, and one that I walk myself. I have over 150TB of data and have never lost a byte.

You are going to need to use a mixture of technologies to pull this off, but they are stupid simple and don't require fancy sauce.

StableBit is what you will use to create a pool. It basically adds disks together regardless of size. You need to create two pools; the second pool is what you will sync the first pool to, so that you have a complete backup that doesn't require any parity. You could also opt for a backup service and back up the entire machine for a low monthly cost with unlimited data.
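For the pool-to-pool sync, any one-way mirror tool works (a scheduled robocopy job, for instance). As a minimal sketch of the idea, here is a hand-rolled Python stand-in (my own illustration, not part of StableBit) that copies files which are missing from the backup or newer on the primary:

```python
import os
import shutil

def sync_one_way(src, dst):
    """Copy files from src to dst when missing or newer (one-way mirror).
    Returns the number of files copied."""
    copied = 0
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target_dir, name)
            # Copy when the backup copy is missing or older than the source.
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)  # copy2 also preserves timestamps
                copied += 1
    return copied
```

Because `copy2` preserves timestamps, a second run copies nothing, which is exactly the incremental behavior you want from a nightly scheduled sync.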

No, you do not need ECC RAM for what you are trying to do so that will save some money. I need a little bit of time to come up with the list of hardware and to see what it totals to.

For starters though, you would begin with a Norco 4224 case (24 bays).
 
I don't want RAID; if the controller fails I lose ALL my data.

No you don't, at least not with most of the popular RAID manufacturers, including HighPoint. I think you need to educate yourself a lot more before taking on this project.
 
Maybe we should start from the beginning.

ECC is RAM-related; it stands for Error-Correcting Code. It makes sure that the values stored in RAM are checked and corrected. Data passes through RAM on its way through the CPU during file transfers, so if those values change while writing, you can corrupt data. Because modern CPUs have the memory controller integrated onto the CPU, you need to select a CPU that supports ECC, RAM that is ECC, and be willing to pay for it. It costs more and is generally slower, but it is the way to go when money is no object. If all you are storing is media, you're likely not going to need the expense.

RAID is how you hook up a hard drive array. Here's the short of it: you need to choose what type of RAID array you want. There are a bunch of types, which all have their ups and downs. Here's the real issue: adding a new drive every month, as funds become available, isn't really supported (in anything but JBOD). If I build a RAID 5 array with 4 disks and want to add a 5th, I have to create a new array. That means exporting all data somewhere, destroying the old array, building a new one, and then importing the data back. This isn't going to be a 20-minute drive pop-in and boot-up.

As far as selecting RAID, you're going to have to figure that out yourself. The common types are:
0 - Striping - Data is striped across drives. This is the fastest, but any failure borks all data. - 2-disk minimum
1 - Mirroring - Data is written to each drive. This is very costly on storage, because you functionally lose half of your storage space. - 2-disk minimum, must be an even number of disks
5 - Stripe + parity - One drive can be lost and recreated. You lose the storage of 1 drive. - 3-disk minimum
6 - Stripe + double parity - Two drives can be lost and recreated. You lose the storage of 2 drives. - 4-disk minimum

What you don't see is that all drives must be the same size. If they aren't, the best case is that every drive is treated as the size of the smallest one. Additionally, rebuilding arrays is a royal pain. My 12 TB array (16 TB raw, RAID 5) took nearly a day to rebuild, and that was with a dedicated RAID card, not the Intel rebuild (I tried that back in the SATA 2 days with a 6 TB build, and it took more than 2 days).
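The usable-capacity rules above are just arithmetic. A small Python sketch (the helper name `raid_usable_tb` is made up for illustration) that encodes them, including the smallest-drive rule for mixed sizes:

```python
def raid_usable_tb(level, drive_sizes_tb):
    """Usable capacity for the classic RAID levels. Mixed sizes are
    truncated to the smallest drive, as hardware RAID typically does."""
    n = len(drive_sizes_tb)
    size = min(drive_sizes_tb)   # every drive counts as the smallest
    if level == 0:
        return n * size          # striping: no redundancy at all
    if level == 1:
        return (n // 2) * size   # mirrored pairs: half the space
    if level == 5:
        if n < 3:
            raise ValueError("RAID 5 needs at least 3 disks")
        return (n - 1) * size    # one drive's worth goes to parity
    if level == 6:
        if n < 4:
            raise ValueError("RAID 6 needs at least 4 disks")
        return (n - 2) * size    # two drives' worth goes to parity
    raise ValueError("unsupported RAID level")

# The example above: four 4TB drives in RAID 5 = 16TB raw, 12TB usable.
print(raid_usable_tb(5, [4, 4, 4, 4]))
```

It also makes the mixed-size penalty concrete: a 4TB drive in an array with a 2TB drive contributes only 2TB.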



So, maybe you need to evaluate a few things. Is absolute security a concern? Because if it is, your budget is not in the right area code. Is your goal a media server that might be better served by one of the available NAS solutions on the market? Maybe this is all still a little new, and you should do some reading and decide on something a bit more permanent.

I'd suggest that a NAS that can do what you are looking for is in the $2419 range on Newegg right now.
https://www.newegg.com/Product/Prod...TMATCH&Description=nas&ignorear=0&N=100158125 601286743 601299072 601299171 601299369 600418376&isNodeId=1
Your budget is less than a third of that. You'll have to learn about Linux, or buy an OS. You'll have to figure out how to share over your network. This isn't exactly something you slap together in an hour and start flying with. I think an evaluation of priorities and skills is in order. From personal experience, Linux is a mess the first time a person tries to get it working. A NAS wraps everything up in a bow. By the time you buy everything (the cards, the OS, etc.) you'll probably spiral up to the $2000 mark. You pay an extra $420 for a real warranty, developed software, and convenience. In my experience, that is worth it when you want something that just works.
 
In RAID you can't use different sizes of drives; in Storage Spaces you can.

I'm going to tell you right now: while you can use different-size drives in Storage Spaces, you shouldn't. If you have resiliency enabled in Storage Spaces and use different-size drives, the efficiency goes downhill real fast.

The other problem is that it is really bad at telling you how much space you actually have after it takes resiliency into account. To give you an example, I set up an array with a 5TB, a 4TB, a 3TB, and a 2TB drive. The total usable space with one-drive resiliency was only 8TB.
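Those odd numbers fall out of how parity spaces spread fixed-width slabs across unequal drives. Below is a rough idealized model (my own back-of-envelope, not Storage Spaces' actual allocator): with a 3-column single-parity layout, each slab touches a drive at most once, so no drive can hold more than a third of the total raw allocation. Iterating that constraint to a fixed point gives an upper bound on usable space; for the 5+4+3+2TB set it predicts about 9TB, and the real allocator being less efficient than the ideal (8TB) fits the complaint above.

```python
def parity_usable_tb(sizes_tb, columns=3):
    """Idealized usable capacity for single-parity slabs spread
    'columns' wide across mixed-size drives. Each slab touches a
    drive at most once, so no drive can hold more than A/columns
    of the total raw allocation A. Iterate A = sum(min(s, A/columns))
    to its fixed point, then subtract the parity column."""
    if len(sizes_tb) < columns:
        raise ValueError("need at least as many drives as columns")
    a = sum(sizes_tb)                      # start from the raw total
    for _ in range(100):                   # contraction, converges fast
        a = sum(min(s, a / columns) for s in sizes_tb)
    return a * (columns - 1) / columns     # one column of each slab is parity

# The 5+4+3+2 TB example: ~9 TB usable in this ideal model.
print(parity_usable_tb([5, 4, 3, 2]))
```

With four equal 4TB drives the same model gives 8TB usable, i.e. the clean two-thirds you would expect, which is why equal-size drives are the safe choice.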

In RAID you can't easily increase the array size; in Storage Spaces you can.

It is pretty easy to increase the array size with the HighPoint cards (and most other RAID cards). You just connect the new drive to the RAID card, go into HighPoint RAID Management inside Windows, select the OCE (Online Capacity Expansion) option, select the drive you want to add to the array, and hit Finish.

At this point you have two options: you can access the new space immediately, but the array becomes degraded (no redundancy) until it is rebuilt onto the new drive; or you can have it wait until the array is rebuilt onto the new drive before accessing the space, which maintains redundancy at all times. Storage Spaces gives you the same options when you add a drive to the array.

Indeed, adding/removing drives means rebuilding the array.

It does on Storage Spaces as well, though.
 
adding a new drive every month, as funds become available, isn't really supported (in anything but JBOD). If I build a RAID 5 array with 4 disks, and want to add a 5th, I have to create a new array. That means exporting all data somewhere, destroying the old array, building a new one, and then importing the data back.
Mate, google Storage Spaces. You can add any new drive easily, and rebuilding the array isn't done from zero. It's a million times simpler.

newtekie1, are you referring to Parity resiliency? I did not know that performance drops if the drives are different sizes, but I guess I could use HDDs that are all the same size.
Honestly, I still don't see a drawback to this software RAID (Storage Spaces), although I do agree that I need to do more research.
Any advice is very much appreciated.

Fx, I'm very curious about your solution, but please don't be offended if I don't like the Norco 4224 case, because I really love the Nanoxia case design.
 
I agree the OP should do more research; there are so many things to consider.

Does the system support staggered HDD spin-up? Does it spin down the HDDs when not in use to minimize wear? Is there cache storage for fast file transfers? How many failed drives can your parity setup tolerate, and how do you want to set up your RAID? How easily can you upgrade the server with minimal downtime? Heck, I can have my CPU/mobo/RAM die on me and have my system back up in 2-3 hours.

Personally, if I'm building another PC/server running 24/7, it had better be doing something other than NAS, like VMs, torrents, VPN, media server, game server, etc.
 
newtekie1, are you referring to Parity resiliency? I did not know that performance drops if the drives are different sizes, but I guess I could use HDDs that are all the same size.
Honestly, I still don't see a drawback to this software RAID (Storage Spaces), although I do agree that I need to do more research.
Any advice is very much appreciated.

Fx, I'm very curious about your solution, but please don't be offended if I don't like the Norco 4224 case, because I really love the Nanoxia case design.

I am not offended at all. I'm merely suggesting hardware that has a lot of value. If you want fewer bays and subjectively better looks, that is on you. I just look at rack cases when I'm thinking about storage because I like lots of room for expansion. If you had a bigger budget, I would actually suggest a 24-bay Supermicro case. That's what you would use when you're giving storage serious consideration without cutting any corners.
 