
+80TB NAS

Joined
Nov 4, 2005
Messages
12,094 (1.72/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs, 24TB Enterprise drives
Display(s) 55" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
RAID 5 of the largest disks. There is no way to use the disks you have without some possibility of data loss; shit happens.

The chances are extremely slim, and my first instinct is to agree with Ford: get a RAID card installed in a machine (any tower with enough storage mounts would do the trick), check the health of the drives, and then start to build an array. An online live build with no data destruction: add a disk to the RAID 5 array, let it build, add another disk, and so on.

Really, any machine with a PCIe slot and fast enough (100/1000 Mbps) Ethernet will saturate the network before the card reaches capacity.

Unless you are planning on multiple high-bandwidth streams, off-the-shelf hardware and an old computer would work fine.
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,263 (4.41/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
4K60 is ~60 Mbps.

1 GbE can reasonably handle 900 Mbps of throughput (10% overhead), or 15 4K60 streams. It's big file transfers that quickly drag 1 GbE down.
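Ford's arithmetic, as a quick sanity check (the ~60 Mbps per 4K60 stream and the 10% overhead figure are from his post, not measured values):

```python
# How many 4K60 streams fit on gigabit Ethernet, per the post's assumptions?
link_mbps = 1000
usable_mbps = link_mbps * 0.9   # ~10% protocol overhead leaves ~900 Mbps
stream_mbps = 60                # assumed bitrate of one 4K60 stream
streams = usable_mbps // stream_mbps
print(int(streams))             # → 15
```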
 

brandonwh64

Addicted to Bacon and StarCrunches!!!
Joined
Sep 6, 2009
Messages
19,542 (3.47/day)
ZFS is no novice setup. Today I saw how unforgiving ZFS can be. Our Plex servers ran off a ZFS network share on FreeNAS, with 20+ TB of data set up in a pool. During a weekly scrub the machine rebooted (we still do not know why), and now the ZFS share kernel panics when it is imported. I went down the rabbit hole today, trying over and over to get it to import so we could start data recovery. After countless Google searches (almost 50 browser tabs open, HAHAH) I was able to force-import the zpool in read-only mode. This gave me access to the data, and I started copying it off to a 150TB QNAP. We originally ran an i3 with 8GB of RAM; we quickly saw that it was being overworked daily but chose to ignore it. Once we recover the data, we are going to move it to a 100TB server with dual 24-core Xeons and 128GB of RAM, and probably not go with FreeNAS (ZFS) this time around.
 
Joined
Oct 2, 2017
Messages
53 (0.02/day)
@FordGT90Concept

A review on the HighPoint RocketRAID 840A

WARNINGS!
DO NOT REBOOT WINDOWS OR INTERRUPT AN OCE IN PROGRESS! YOU WILL LOSE ALL DATA.
Do not use the OCE (Online Capacity Expansion) feature. It has not been implemented properly by HighPoint and is extremely risky: there is a very high probability that the partition and its data will be lost while using it. The OCE feature is dependent on the Windows OS; if you interrupt the process or reboot Windows while it is running, all data will be lost.

DO NOT USE FOR RAID 6 - As other users have mentioned, this card does not support true hardware RAID 6. RAID 6 depends on the operating system to be configured and to function.

I have contacted support and their communication is poor and unclear - broken English. They take no responsibility and do not respond quickly - only one short response per day, with no helpful information.

This controller requires a hybrid of software and hardware to implement its RAID features.

Do you also lose all the data on Storage Spaces if Windows crashes while it's creating the array after a new HDD is added?
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,263 (4.41/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
I don't recommend using OCE in any case. Even if it works perfectly fine, I wouldn't try it without a full backup first.

The 840A supports RAID 0, 1, 10, 5, and 50 in BIOS; it supports RAID 6 through drivers. RAID 6 is not bootable at all. As I said previously, booting is limited to 2TB partitions via legacy BIOS, and I do not recommend booting from the 840A at all. Install the OS on an SSD and boot from that. Install the card driver there, and all of the card's features will be available through software, including RAID 6 with GPT (which supports partition sizes measured in zettabytes). If you use RAID 6, don't mess with the HighPoint boot options at all.

I've contacted Highpoint several times in the past about several cards. I always get the information/software I need out of them.


Adding a drive using OCE, it literally has to read -> write *everything*. Say you have 4 drives with 512-byte segments and you add a 5th drive. It has to read the data off of those 4 drives, confirm the validity of the data, build the new parity information, then write the new data back, including *over* the existing data on the four drives. Moreover, the data has to spread out. Let's say this is RAID 6 we're talking about: that means there are 1024 bytes of data to redistribute on top of 1024 bytes of parity data per stripe. Adding a 5th drive increases that to 1536 bytes of data + 1024 bytes of parity data. The data has to be compacted in this process, so the 2048 bytes that were read still end up as 2048 bytes with the fifth drive, but instead of each drive holding 512 bytes, each only holds ~410 bytes. It keeps progressing through each 512-byte segment, adding the data in those ~410-byte layers, until the rebuild is completed. As this process is happening, though, the data is in limbo.
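The per-drive numbers in Ford's example can be checked in a couple of lines (the 512-byte segment size and the 4-to-5-drive expansion are his hypothetical scenario):

```python
# Same stripe of data spread over more drives: fewer bytes per drive.
stripe_bytes = 4 * 512                 # 2048 bytes read from the old 4-drive layout
per_drive_before = stripe_bytes / 4    # 512 bytes per drive before expansion
per_drive_after = stripe_bytes / 5     # 409.6 bytes per drive, i.e. "~410 bytes each"
print(per_drive_before, per_drive_after)
```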

If something is going to go wrong, it is statistically most likely to happen during a rebuild. As Murphy's law states, "anything that can go wrong will go wrong."
 

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,174 (2.77/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
For the budget, I might even suggest that a real RAID card is overkill. mdadm is more than capable for the situation the OP has laid out. A motherboard with enough SATA ports would likely be cheaper than a dedicated RAID card, and an mdadm array will be recognized on practically any Linux installation, which makes it easy to move in case everything goes to crap, and it isn't tied to any particular hardware vendor. I ran a 3-disk RAID 5 with it for several years; I was impressed at how well it performed, and it used to do streaming video without a hitch. Cutting out the RAID card can free up a good chunk of money if it's not required and should be considered.

Also, if this is for long-term storage, booting from it should be considered a non-issue, because you shouldn't be booting from an array intended for storage (in my opinion).
 
Last edited:
Joined
Oct 2, 2017
Messages
53 (0.02/day)
I won't have 5 drives from the start, so I can only do RAID 5.
In the future, when I have 10 drives, almost full, I can't add another HDD for redundancy, right? So if I want to change to RAID 6, I will need to take the data off the array, format it all, make a new RAID 6 array, and copy the data back onto it?

So every month when I add a new 10TB HDD, the pool is rebuilt? Approximately how much time would that take for, say, 70TB of data?

And does this rebuild time vary between RAID, unRAID, and Storage Spaces?

Of course I won't be booting from the array in any case. I'll use a cheap SSD.

anything that can go wrong will go wrong.
That's one of the things I always try to take into account, whatever I do.
That and the KISS rule. (Keep It Simple, Stupid)
 
Last edited:

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,174 (2.77/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
And does this rebuild time vary between RAID, unRAID, and Storage Spaces?
Parity-based rebuilds always take a long time regardless of the type of RAID being used, but it can vary. Either way, the amount of time scales with the size of the array, so 70TB will take a very long time to rebuild. You're likely looking at over 24 hours to do a rebuild with any form of RAID.
 
Joined
Oct 2, 2017
Messages
53 (0.02/day)
And it has to rebuild every time I add a new drive? So ideally I should add 2 drives once every 2 months? Would that help?
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,263 (4.41/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
You could do 4 drives at a time, RAID 5 each. You'll end up with several drive letters instead of one, but I assume your software can take care of that.

The 840A supports up to four simultaneous RAIDs across 16 drives, so 4x4 works. To some degree, the data is actually safer this way, because you could handle up to four failures (one in each array) versus one (RAID 5) or two (RAID 6). That said, you'd also be losing 4 drives' worth of usable space.


Hard drives are ~200 MB/s. The time to foreground-rebuild an array is approximately the capacity of one drive / 200 MB/s. A background rebuild takes days; on a large volume it could take weeks.

10 TB = 10,000,000 MB / 200 MB/s = 50,000 s = 13.89 hr
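That estimate as a script (the ~200 MB/s sustained rate is the post's assumption, and is optimistic for a drive's slower inner tracks):

```python
# Foreground rebuild time ≈ capacity of one drive / sequential speed.
capacity_mb = 10 * 1_000_000   # 10 TB expressed in MB (decimal units)
speed_mb_s = 200               # assumed sustained throughput
seconds = capacity_mb / speed_mb_s
hours = seconds / 3600
print(round(hours, 2))         # → 13.89
```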
 
Last edited:
Joined
Oct 2, 2017
Messages
53 (0.02/day)
I don't understand: so a total pool of 10TB takes 14h? Or adding a new 10TB drive to (let's say) a 70TB pool takes 14h?

Many people have said that RAID 5 isn't safe when talking about such large sizes.
Ford, I get the 4x RAID 5 idea, but that would complicate things, because every time I start a new RAID 5 I would need 3 new drives off the bat.
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,263 (4.41/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
All drives read/write simultaneously, so if there are 8 drives in a RAID, it's effectively 1600 MB/s. Even though the parity information isn't directly usable, it still figures into the array's read/write performance (those operations are carried out quietly in the background).

4x 10TB in RAID 5 = 30 TB of usable space. With 4 such arrays, you'd end up with 120 TB of usable space. Certainly you can wait until you need another block of 30 TB before adding more capacity.
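The capacity math, spelled out (hypothetical 10 TB drives, four 4-drive RAID 5 arrays as the post suggests):

```python
# RAID 5 sacrifices one drive per array to parity.
drive_tb = 10
drives_per_array = 4
usable_per_array = (drives_per_array - 1) * drive_tb   # 30 TB per array
arrays = 4
total_usable = arrays * usable_per_array               # 120 TB across four arrays
print(usable_per_array, total_usable)
```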
 
Joined
Oct 2, 2017
Messages
53 (0.02/day)
Certainly you can wait until you need another block of 30 TB before adding more capacity.
I'm saying that right now I can afford 3 drives, OK, so I make a RAID 5. And when those are full, I have to add 3 new empty drives in order to make another RAID 5? Very impractical for me!


Wouldn't Storage Spaces with three-way mirroring be the best solution? I can use different-sized drives, maybe 12 or even 20TB in the future.
The downside is that I need 5 drives to start, but I can use 2 or 3 of my old 2 or 3 TB drives!

I really think this would be the best solution!

The downside is that I lose a lot of TB to redundancy, but I get 2-drive failure protection!

https://docs.microsoft.com/en-us/wi...storage-spaces/storage-spaces-fault-tolerance

Three-way mirroring is only 33% storage efficiency, so if I add 3x 10TB drives I only get 10TB!! That is brutal!

Not sure what Mixed Resiliency entails in Storage Spaces; that also gives 2-drive failure security, but it can have 80% storage efficiency?!
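Those efficiency numbers follow directly from how each layout stores data. A minimal sketch (the column counts are illustrative; see the linked Microsoft doc for the exact supported ranges):

```python
# Storage Spaces resiliency efficiency as simple fractions.
def mirror_efficiency(copies):
    # Each slab is stored `copies` times, so usable fraction is 1/copies.
    return 1 / copies

def dual_parity_efficiency(columns):
    # Two parity columns per stripe, so usable fraction is (columns - 2) / columns.
    return (columns - 2) / columns

print(mirror_efficiency(3))        # three-way mirror: ~33%, so 3x 10TB yields ~10TB
print(dual_parity_efficiency(4))   # dual parity at the 4-column minimum: 50%
print(dual_parity_efficiency(10))  # with more columns efficiency climbs toward 80%
```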
 
Last edited:

newtekie1

Semi-Retired Folder
Joined
Nov 22, 2005
Messages
28,473 (4.06/day)
Location
Indiana, USA
Processor Intel Core i7 10850K@5.2GHz
Motherboard AsRock Z470 Taichi
Cooling Corsair H115i Pro w/ Noctua NF-A14 Fans
Memory 32GB DDR4-3600
Video Card(s) RTX 2070 Super
Storage 500GB SX8200 Pro + 8TB with 1TB SSD Cache
Display(s) Acer Nitro VG280K 4K 28"
Case Fractal Design Define S
Audio Device(s) Onboard is good enough for me
Power Supply eVGA SuperNOVA 1000w G3
Software Windows 10 Pro x64
@FordGT90Concept

A review on the HighPoint RocketRAID 840A



Do you also lose all the data on Storage Spaces if Windows crashes while it's creating the array after a new HDD is added?

I don't know about the 840A, but on my 2720 I've rebooted during an OCE and was fine. Once the machine booted, the OCE process picked up right where it left off.
 
Joined
Oct 2, 2017
Messages
53 (0.02/day)
I would really need an answer for this:
Do you also lose all the data on Storage Spaces if Windows crashes while it's creating the array after a new HDD is added?
 
Joined
Jul 26, 2017
Messages
19 (0.01/day)
Processor i7-4790k
Motherboard Z97-A/USB 3.1
Cooling Noctua NH-C14S
Memory Gskill TridentX DDR3 2400 16GB
Video Card(s) Nvidia GTX 1080
Storage Samsung 960 Evo, WD40E31X 4TB
Display(s) Dell S2716DG
Case Fractal Define R5
Audio Device(s) Sennheiser GSX1000
Power Supply Corsair RM750x
Remi,

Storage Spaces had the same limitation until recently. In order to expand a pool you had to add the same number of disks as you had columns, but in Win10/Server 2016 you can now use the optimize command after the disk is added.

For "mixed resiliency":
In Storage Spaces, the storage pool is just the contiguous area of all the disks in the array, so 4x 4TB = 16TB of storage pool. After that, you create a virtual disk, and that is where you tell it what kind of resiliency you want for your data: two-way mirror, three-way mirror, parity, dual parity, or simple. If you wanted a mixed environment, you could make a disk that uses a two-way mirror and takes up 14TB of your 16TB pool, but the actual available size of that disk is 7TB to you because it's a mirror. The remaining 2TB could just be "simple" and essentially RAID 0'd, if it were, say, a scratch drive for video editing.
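That pool/virtual-disk split works out as follows (a sketch using the same hypothetical 4x 4TB numbers):

```python
# One pool, two virtual disks with different resiliency settings.
pool_tb = 4 * 4                              # 4x 4TB disks -> 16TB pool
mirror_footprint_tb = 14                     # two-way mirror vdisk consumes 14TB of pool
mirror_usable_tb = mirror_footprint_tb / 2   # mirroring halves usable space -> 7TB
simple_tb = pool_tb - mirror_footprint_tb    # remaining 2TB as a "simple" (RAID 0-like) vdisk
print(pool_tb, mirror_usable_tb, simple_tb)
```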

If you go the Storage Spaces route, be sure to read up on virtual disk creation, and if you are using Win10, please create your virtual disks via PowerShell, as many of the options that ensure performance (such as the number of columns) are hidden from the Win10 GUI.

Storage Spaces' performance in a mirrored setup is pretty good, but parity isn't as refined yet.

I would really need an answer for this:
Do you also lose all the data on Storage Spaces if Windows crashes while it's creating the array after a new HDD is added?

Best thing I could find was this:
https://answers.microsoft.com/en-us...8/bde12a9b-d54f-4932-beb0-022300196793?auth=1

The entry from that site reads as follows:
I was curious as well, so I tested it, and YES, even with a Windows 8 system drive crash, your Storage Spaces drives are intact.

What I did.

Installed Windows 8.1 Pro in Drive 1
Setup 3 drives in Parity Mode. (drive 2, 3, 4)
Copied about 1TB worth of data.

Removed Drive 1 with the Windows 8.1 image.
Added another Drive (5) and installed Windows 8.1 Pro again.

Went to Manage Storage Spaces, and all the pool was there.
Under "This PC" all the drives were present: Local Disk (C:) and Data Parity (D:)

I was able to play test a bunch of movie files in the D: drive without any problem.

So YES, Storage Spaces does maintain its integrity even if the C: drive crashes and you have to reimage from scratch.
Of course, this is simply describing the storage pool being moved to a new PC in the event of an OS failure. It's important to remember that nothing is 100% safe. It's more important to have a good backup than it is to have a system that you "think" is bulletproof, because none of them truly are. If you are asking whether it can survive any OS-level failure ever and never corrupt data, then the answer is no, but RAID cards, SSDs, and even single hard drives have the same issue.

In fact, Toshiba has drives with NAND flash caches on them to try to prevent this: the drive uses the inertia of the spinning disk to power the NAND and dumps the drive cache to it before the drive loses power. They just released a 10TB model:
http://www.storagereview.com/toshiba_10tb_mg06_series_enterprise_capacity_hdd_line_announced

Cool tech, but probably cheaper to get a monitored UPS. Enterprise SSDs have the same feature, with capacitors built into the drives to make sure the DRAM cache is flushed before powerdown so data is not lost, but even then it's not 100%.

 
Last edited by a moderator:
Joined
Oct 2, 2017
Messages
53 (0.02/day)
Iciclebear, that's not the scenario I was referring to.
I'm asking what happens if Windows crashes while Storage Spaces is adding a new HDD to the pool. Like Ford said, 10TB can take 14h+.
Do I lose everything then?
 
Last edited:
Joined
Jul 26, 2017
Messages
19 (0.01/day)
Processor i7-4790k
Motherboard Z97-A/USB 3.1
Cooling Noctua NH-C14S
Memory Gskill TridentX DDR3 2400 16GB
Video Card(s) Nvidia GTX 1080
Storage Samsung 960 Evo, WD40E31X 4TB
Display(s) Dell S2716DG
Case Fractal Define R5
Audio Device(s) Sennheiser GSX1000
Power Supply Corsair RM750x
Iciclebear, that's not the scenario I was referring to.
I'm asking what happens if Windows crashes while Storage Spaces is adding a new HDD to the pool. Like Ford said, 10TB can take 14h+.

Remi, Storage Spaces doesn't do that specifically. Ford is referring to a RAID rebuild when you add additional disks: in a RAID 5 setup you would add the disk and need to rebuild the array to get access to the extra space. (Correct me if I'm wrong; I'm not a RAID 5 expert.)

In Storage Spaces (I haven't tested this, just from reading), you can just expand the virtual disk to use them. Adding disks to the pool and then extending the volume is nearly instantaneous. Drive optimization is the closest thing to the RAID 5 rebuild task that was mentioned earlier: optimization takes the data on the disks and rebalances it across the new drives, so 3 drives at 80% full become 4 drives at 60% full, and it defrags the slabs so that you don't get bottlenecked by one drive holding all of the extra copies, and so on.
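That rebalance arithmetic checks out: the total data is constant, so the fill level drops as it spreads across more drives (assuming equal-sized drives):

```python
# Rebalancing after adding a fourth drive to three drives at 80% full.
data = 3 * 0.80       # three drives at 80% hold 2.4 "drives" worth of data
fill_after = data / 4 # spread over four drives -> 60% full each
print(round(fill_after, 2))   # → 0.6
```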

I've read this article about having Windows crash during a drive optimization and how they got their array back:
https://social.technet.microsoft.co...optimization-problems?forum=win10itprogeneral

But it should be noted that the user in that article is running a parity space, which was specifically noted as not supported by the drive-optimization feature (and the user ran it anyway).


I've only optimized my array twice; both times it was freshly made, with about a hundred gigs of data, and it took a matter of minutes. You could also create new virtual disks when you add new drives, or even new storage pools, if you don't care about all the data being accessible from the same drive letter.

If this were a write-once, read-many archive and you had a copy of Server, you could even run DFS-N and slap all the pools into a single directory and never have to optimize anything, but it really depends on what you are doing with your data.
 
Joined
Oct 2, 2017
Messages
53 (0.02/day)
Yes, by adding a new HDD to the pool I meant drive optimization. So you mean dual parity doesn't support the drive-optimization feature? Only two-way and three-way mirroring?

Because now I'm considering dual parity (failure tolerance 2 and storage efficiency 50.0%-80.0%, vs. 33% with three-way mirroring).

The downsides would be very slow write speeds and drive optimization?
 

Fx

Joined
Oct 31, 2008
Messages
1,332 (0.22/day)
Location
Portland, OR
Processor Ryzen 2600x
Motherboard ASUS ROG Strix X470-F Gaming
Cooling Noctua
Memory G.SKILL Flare X Series 16GB DDR4 3466
Video Card(s) EVGA 980ti FTW
Storage (OS)Samsung 950 Pro (512GB), (Data) WD Reds
Display(s) 24" Dell UltraSharp U2412M
Case Fractal Design Define R5
Audio Device(s) Sennheiser GAME ONE
Power Supply EVGA SuperNOVA 650 P2
Mouse Mionix Castor
Keyboard Deck Hassium Pro
Software Windows 10 Pro x64
Not by much, especially with a lot of small files on an HDD. Build speed is mostly a function of write speed. RAID 6 should theoretically be about the same speed as RAID 10 (with a good controller), but RAID 6 has the advantage of being able to lose any two drives, whereas all the data is lost if both of the RAID 1 drives die on one side of the RAID 0. RAID 10 is also terrible for going beyond four drives. With the capacities he's talking about, going beyond four drives is inevitable, and that's where RAID 10 becomes very inefficient in performance and capacity.

Absolutely false. RAID 5 and 6 will take much longer on writes, due to the write penalties of their design; reads are actually where they are decent and can improve. RAID 10 is only inefficient in cost compared to the others, not in the grand scheme of data retention, and it scales extremely well in read, write, and ease of expansion. Below is a simple graph showing write penalties.
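For reference, the commonly cited write-penalty figures behind that claim can be expressed as a quick calculation (the IOPS numbers are hypothetical examples, not benchmarks):

```python
# IOs generated per logical random write for common RAID levels:
# RAID 5 = 4 (read data, read parity, write data, write parity),
# RAID 6 = 6 (two parity blocks to read and rewrite).
WRITE_PENALTY = {"RAID0": 1, "RAID1": 2, "RAID10": 2, "RAID5": 4, "RAID6": 6}

def effective_write_iops(raw_iops, level):
    # Usable random-write IOPS after per-write amplification.
    return raw_iops / WRITE_PENALTY[level]

# e.g. 8 drives x 100 IOPS each = 800 raw IOPS
print(effective_write_iops(800, "RAID10"))  # → 400.0
print(round(effective_write_iops(800, "RAID6")))  # → 133
```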

 
Last edited:
Joined
Jul 26, 2017
Messages
19 (0.01/day)
Processor i7-4790k
Motherboard Z97-A/USB 3.1
Cooling Noctua NH-C14S
Memory Gskill TridentX DDR3 2400 16GB
Video Card(s) Nvidia GTX 1080
Storage Samsung 960 Evo, WD40E31X 4TB
Display(s) Dell S2716DG
Case Fractal Define R5
Audio Device(s) Sennheiser GSX1000
Power Supply Corsair RM750x
Yes, by adding a new HDD to the pool I meant drive optimization. So you mean dual parity doesn't support the drive-optimization feature? Only two-way and three-way mirroring?

Because now I'm considering dual parity (failure tolerance 2 and storage efficiency 50.0%-80.0%, vs. 33% with three-way mirroring).

The downsides would be very slow write speeds and drive optimization?

I could be wrong on that, Remi; the official documentation looks like it's from Tech Preview 4 of Server 2016, where it wasn't supported. I've seen some commenters say it's now supported, but I haven't seen any official documentation.
 
Joined
Oct 8, 2009
Messages
2,047 (0.37/day)
Location
Republic of Texas
Processor R9 5950x
Motherboard Asus x570 Crosshair VIII Formula
Cooling EK 360mm AIO D-RGB
Memory G.Skill Trident Z Neo 2x16gb (CL16@3800MHz)
Video Card(s) PNY GeForce RTX 3090 24GB
Storage Samsung 970 EVO Plus 1TB NVMe | Intel 660p 2TB NVMe
Display(s) Acer Predator XB323QK 4K 144Hz
Case Corsair 5000D Airflow
Audio Device(s) Objective2 Amp/DAC | GoXLR | AKG K612PRO | Beyerdynamic DT880| Rode Pod Mic
Power Supply Corsair AX 850w
Mouse Razer DeathAdder Elite V2
Keyboard Corsair K95 Platinum RGB "Cherry MX Brown"
VR HMD Oculus Rift
Software Window 11 Pro
Updated the sheet for unRAID; it seems not many people are familiar with it.
 
Joined
Nov 4, 2005
Messages
12,094 (1.72/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs, 24TB Enterprise drives
Display(s) 55" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
Absolutely false. RAID 5 and 6 will take much longer on writes, due to the write penalties of their design; reads are actually where they are decent and can improve. RAID 10 is only inefficient in cost compared to the others, not in the grand scheme of data retention, and it scales extremely well in read, write, and ease of expansion. Below is a simple graph showing write penalties.



Considering most of the penalty will be masked by on-drive cache, and having run RAID 5 arrays in the past, performance is not as bad as the write penalty would suggest.

Setting up an array and caching correctly also makes access very fast. And considering the OP is looking for stability and maximum storage space, RAID 5 is the best option.
 
Joined
Jul 2, 2008
Messages
8,100 (1.34/day)
Location
Hillsboro, OR
System Name Main/DC
Processor i7-3770K/i7-2600K
Motherboard MSI Z77A-GD55/GA-P67A-UD4-B3
Cooling Phanteks PH-TC14CS/H80
Memory Crucial Ballistix Sport 16GB (2 x 8GB) LP /4GB Kingston DDR3 1600
Video Card(s) Asus GTX 660 Ti/MSI HD7770
Storage Crucial MX100 256GB/120GB Samsung 830 & Seagate 2TB(died)
Display(s) Asus 24' LED/Samsung SyncMaster B1940
Case P100/Antec P280 It's huge!
Audio Device(s) on board
Power Supply SeaSonic SS-660XP2/Seasonic SS-760XP2
Software Win 7 Home Premiun 64 Bit
Without the HDDs, I'm hoping to spend less than $770.
@Fx , correct me if I'm wrong...
@remi , if you were to build a system like the conversation is talking about, you need to realize how much money is involved. If you're going to get 1 GB of RAM for every TB of storage, then you are going to need a motherboard that supports 128 GB of RAM, and that registered ECC RAM will cost upwards of $350 for every 32 GB DIMM......

There are 2 columns missing in that spreadsheet: one for building a system (including the cost) and one for a prebuilt NAS. If you go with a prebuilt, I'm thinking you could start with the $600 5-bay Synology DS1517 (the 8-bay is $850) and later add up to 2 of the 5-bay DX517 units for $520 each. Let them deal with all of the configuration. I also believe that this is the easiest way to add capacity the way that you want.
 
Last edited: