
Which 20 TB drive to get?

Which is the best drive?

  • Seagate Exos X20 / X24 20 TB

    Votes: 11 34.4%
  • Seagate IronWolf Pro 20 TB

    Votes: 8 25.0%
  • Toshiba MG10 20 TB

    Votes: 13 40.6%

  • Total voters
    32
Joined
Jan 14, 2019
Messages
12,567 (5.80/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
So motherboard RAID is bad, I see. But why, though? If the motherboard dies, can't you just move your drives into a new machine and use them as normal? I thought that was the whole theory behind RAID 1.

I was thinking that motherboard RAID would be a good option in case I decide to ditch Windows and go over to Linux, as the drives would keep working in RAID, but if you still need a driver, then I guess I was wrong and there's no point.

So then, is software RAID in Windows better? Obviously I'd have to reconfigure everything on Linux, but I imagine that's easier than looking for motherboard RAID drivers.
 
Joined
Feb 1, 2019
Messages
3,666 (1.71/day)
Location
UK, Midlands
System Name Main PC
Processor 13700k
Motherboard Asrock Z690 Steel Legend D4 - Bios 13.02
Cooling Noctua NH-D15S
Memory 32 Gig 3200CL14
Video Card(s) 4080 RTX SUPER FE 16G
Storage 1TB 980 PRO, 2TB SN850X, 2TB DC P4600, 1TB 860 EVO, 2x 3TB WD Red, 2x 4TB WD Red
Display(s) LG 27GL850
Case Fractal Define R4
Audio Device(s) Soundblaster AE-9
Power Supply Antec HCG 750 Gold
Software Windows 10 21H2 LTSC
So motherboard RAID is bad, I see. But why, though? If the motherboard dies, can't you just move your drives into a new machine and use them as normal? I thought that was the whole theory behind RAID 1.

I was thinking that motherboard RAID would be a good option in case I decide to ditch Windows and go over to Linux, as the drives would keep working in RAID, but if you still need a driver, then I guess I was wrong and there's no point.

So then, is software RAID in Windows better? Obviously I'd have to reconfigure everything on Linux, but I imagine that's easier than looking for motherboard RAID drivers.
I don't know about the new Storage Spaces system, never used it. But I would stay away from the legacy Windows software RAID. I used it for a while, but it requires something called dynamic disks, which is a proprietary Microsoft disk structure, and unlike other software RAID I have used it was very clunky and slow. Like many old, obsolete RAID designs it was dumb, with no intelligence at all: unsafe shutdown? Let's do a complete RAID rebuild without knowing if the data is bad or which drive has the good data. Interrupt that rebuild with a reboot? Let's start it again.

I first became aware of the proprietary problem when I booted a Macrium rescue disk, and even though it is Windows PE, it could not access any of the dynamic disks (even standalone ones), and the backups were on a dynamic disk. I read Macrium's help page on it and ended up reverting all the disks back to basic.

To be fair to Microsoft they were trying to overcome MBR limitations, as this was designed prior to GPT being a thing.

Storage spaces might be decent though, I just have no experience of using it.


Also, there's an old thread here comparing the two Microsoft options; Wiz seems to like dynamic disks.


Remember, this is just my opinion; I expect there are many happy users of dynamic disks. Ultimately it did do its basic job and I didn't lose any data.

In terms of software vs hardware/fake RAID, I prefer to be tied to an open-source software solution rather than to a hardware solution. The best software RAID solutions are just flat-out better as well: they're more flexible and have better features. You can still use onboard ports and add-on cards/bays; you just need to present the drives as standalone drives to the software solution.
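Rough sketch of what that portability looks like on Linux with mdadm, if anyone is curious. This is a minimal example under a few assumptions of mine (mdadm installed, run as root, the moved drives already contain an md array), not something from this thread:

Code:
#!/usr/bin/env python3
# Sketch: re-assembling an existing Linux software RAID (mdadm) array after
# moving the member drives to a different machine. Needs root and mdadm.
import subprocess

# Scan all attached drives for md superblocks and assemble any arrays found.
subprocess.run(["mdadm", "--assemble", "--scan"], check=False)

# Show the state of the md arrays - the moved array should appear here.
print(open("/proc/mdstat").read())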
 
Joined
Oct 30, 2022
Messages
239 (0.30/day)
Location
Australia
System Name Blytzen
Processor Ryzen 7 7800X3D
Motherboard ASRock B650E Taichi Lite
Cooling Deepcool LS520 (240mm)
Memory G.Skill Trident Z5 Neo RGB 64 GB (2 x 32 GB) DDR5-6000 CL30
Video Card(s) Powercolor 6800XT Red Dragon (16 gig)
Storage 2TB Crucial P5 Plus SSD, 80TB spinning rust in a NAS
Display(s) MSI MPG321URX QD-OLED (32", 4k, 240hz), Samsung 32" 4k
Case Coolermaster HAF 500
Audio Device(s) Logitech G733 and a Z5500 running in a 2.1 config (I yeeted the mid and 2 satellites)
Power Supply Corsair HX850
Mouse Logitech G502X lightspeed
Keyboard Logitech G915 TKL tactile
Benchmark Scores Squats and calf raises
So motherboard RAID is bad, I see. But why, though? If the motherboard dies, can't you just move your drives into a new machine and use them as normal? I thought that was the whole theory behind RAID 1.

I was thinking that motherboard RAID would be a good option in case I decide to ditch Windows and go over to Linux, as the drives would keep working in RAID, but if you still need a driver, then I guess I was wrong and there's no point.

So then, is software RAID in Windows better? Obviously I'd have to reconfigure everything on Linux, but I imagine that's easier than looking for motherboard RAID drivers.
Motherboard RAID is bad mostly because it relies on you being able to get a new motherboard with the same drive controller chip (not an impossible feat, but a potential crapshooot that can turn into "they phased that chip out, so I need to get a second-hand board").

Motherboard RAID can also be like BIOS RAID: transparent to the operating system (so instead of, say, 4x 8 TB in RAID 5 it would see a single 24 TB drive), which could present issues migrating between operating systems.
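Quick back-of-the-envelope sketch of that capacity math, purely as an illustration (drive counts and sizes are just examples):

Code:
# Usable capacity for common RAID levels, given n identical drives of size s (TB).
def usable_tb(n, s, level):
    if level == 0:   # striping: all capacity, no redundancy
        return n * s
    if level == 1:   # mirroring: capacity of a single drive
        return s
    if level == 5:   # single parity: one drive's worth lost to parity
        return (n - 1) * s
    if level == 6:   # double parity: two drives' worth lost to parity
        return (n - 2) * s
    raise ValueError("unsupported RAID level")

print(usable_tb(4, 8, 5))    # 4x 8 TB in RAID 5  -> 24 TB usable
print(usable_tb(2, 20, 1))   # 2x 20 TB in RAID 1 -> 20 TB usable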

Software RAID in Windows is an average solution. I ran two arrays in a single machine (4x 200 GB and 4x 300 GB, many moons back) and it was constantly resyncing/resilvering (i.e. rebuilding) the arrays as if they'd experienced a power failure almost every time I rebooted the server, so I'm not a fan of Windows RAID.

My poor man's experience says either get a well-backed proprietary system (Synology/QNAP), which isn't cheap, or go Linux now; array migration between variants of Linux is a lot easier to manage. A Raspberry Pi might be a cheap way to get into Linux running your 2-bay unit for now.
 
Joined
Jul 30, 2019
Messages
3,338 (1.69/day)
System Name Still not a thread ripper but pretty good.
Processor Ryzen 9 7950x, Thermal Grizzly AM5 Offset Mounting Kit, Thermal Grizzly Extreme Paste
Motherboard ASRock B650 LiveMixer (BIOS/UEFI version P3.08, AGESA 1.2.0.2)
Cooling EK-Quantum Velocity, EK-Quantum Reflection PC-O11, D5 PWM, EK-CoolStream PE 360, XSPC TX360
Memory Micron DDR5-5600 ECC Unbuffered Memory (2 sticks, 64GB, MTC20C2085S1EC56BD1) + JONSBO NF-1
Video Card(s) XFX Radeon RX 5700 & EK-Quantum Vector Radeon RX 5700 +XT & Backplate
Storage Samsung 4TB 980 PRO, 2 x Optane 905p 1.5TB (striped), AMD Radeon RAMDisk
Display(s) 2 x 4K LG 27UL600-W (and HUANUO Dual Monitor Mount)
Case Lian Li PC-O11 Dynamic Black (original model)
Audio Device(s) Corsair Commander Pro for Fans, RGB, & Temp Sensors (x4)
Power Supply Corsair RM750x
Mouse Logitech M575
Keyboard Corsair Strafe RGB MK.2
Software Windows 10 Professional (64bit)
Benchmark Scores RIP Ryzen 9 5950x, ASRock X570 Taichi (v1.06), 128GB Micron DDR4-3200 ECC UDIMM (18ASF4G72AZ-3G2F1)
So motherboard RAID is bad, I see. But why, though? If the motherboard dies, can't you just move your drives into a new machine and use them as normal? I thought that was the whole theory behind RAID 1.
RAID 1 is about redundancy - surviving a disk failure. Whatever you set up your RAID system on typically limits where and how you can move those disks if the controller fails.

If you use simple Windows mirroring (RAID 1) or striping (RAID 0) in Windows Disk Management (assuming you're connecting the drives to normal SATA ports), you can plug those disks into different Windows machines and import them if necessary, making your array easy to recover, or to move to a new PC if you want to upgrade. Of course, striped disks require all the disks to recover data, but mirrored disks can be imported independently if necessary for data recovery.

Systems like Synology I believe are also software RAID (Linux under the hood). When you want to upgrade units you can move your entire array to a new unit.

The trend is that software-based RAID is more portable, but more CPU-intensive, since a separate controller isn't offloading the work.

Once you start using RAID, you also want to think about safe shutdown and/or power-loss management strategies. Put your RAID on a UPS to help ensure you can perform a graceful shutdown and minimize the chance of incomplete writes damaging your array. Of course, now that you're on a UPS, you need to test the battery every now and then and replace it when necessary, otherwise you won't get the protection.

Keep in mind, of course, that this won't provide any protection for your data if your machine BSODs in the middle of writing something, so you want to host your RAID on a stable machine or device.

Another thing: once you set up a RAID solution, you want to be able to correctly identify the drive that failed for replacement, and/or the one that didn't fail for recovery.
 
Joined
Jul 5, 2013
Messages
28,238 (6.74/day)
These days, it's pretty much NAS or bust
Not really. In-system RAID is less common these days, but not unheard of. I build them fairly regularly. RAID with SSDs can be very effective if done right. Is it essential? No, but then again, RAID in consumer systems never was. It's hella nice when done right. There's a catch, though: one needs to understand RAID, how it works, why it works well for some things and not others, and what RAID setup a particular system needs for a given use case.

The last time RAID was used as a bootable array in my personal daily driver system was with 4x 480 GB (IIRC) in a hardware-driven RAID 5. They were SATA and it was very kickin'. Once NVMe hit its stride, I no longer needed it. However, I've been looking at NVMe RAID lately and thinking I want to do it, not because I need it, but because it would be kick-a$$. 4x 512 GB NVMe in RAID 5? Oh yeah, speed demon city!
 
Joined
Mar 18, 2023
Messages
931 (1.45/day)
System Name Never trust a socket with less than 2000 pins
So motherboard RAID is bad, I see. But why, though? If the motherboard dies, can't you just move your drives into a new machine and use them as normal? I thought that was the whole theory behind RAID 1.

I was thinking that motherboard RAID would be a good option in case I decide to ditch Windows and go over to Linux, as the drives would keep working in RAID, but if you still need a driver, then I guess I was wrong and there's no point.

So then, is software RAID in Windows better? Obviously I'd have to reconfigure everything on Linux, but I imagine that's easier than looking for motherboard RAID drivers.

I don't know anything about Windows software RAID. But in Linux and FreeBSD software RAID is very good.

For your situation you want ZFS, because (apart from being good) it is available on FreeBSD, Linux, macOS and Windows. So you can drag your array around with you. Just make sure that you
- don't activate any ZFS options that might not be available on all platforms. Test on all OSes you need before deploying
- practice disk replacement so that you are not green when you have to do it in anger
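For the "drag your array around with you" part, the key operations are exporting the pool on the old system and importing it on the new one. A minimal sketch, assuming a pool named "tank" (the name, the Python wrapper, and running it as root are all my assumptions):

Code:
#!/usr/bin/env python3
# Sketch: cleanly moving a ZFS pool between machines or operating systems.
# Assumes OpenZFS is installed on both ends and the pool is called "tank".
import subprocess

# On the old system, before pulling the drives:
subprocess.run(["zpool", "export", "tank"], check=True)

# On the new system, after attaching the drives:
subprocess.run(["zpool", "import", "tank"], check=True)
subprocess.run(["zpool", "status", "tank"], check=True)  # verify pool health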
 
Joined
Feb 1, 2019
Messages
3,666 (1.71/day)
Location
UK, Midlands
System Name Main PC
Processor 13700k
Motherboard Asrock Z690 Steel Legend D4 - Bios 13.02
Cooling Noctua NH-D15S
Memory 32 Gig 3200CL14
Video Card(s) 4080 RTX SUPER FE 16G
Storage 1TB 980 PRO, 2TB SN850X, 2TB DC P4600, 1TB 860 EVO, 2x 3TB WD Red, 2x 4TB WD Red
Display(s) LG 27GL850
Case Fractal Define R4
Audio Device(s) Soundblaster AE-9
Power Supply Antec HCG 750 Gold
Software Windows 10 21H2 LTSC
Yeah, although I moaned about the old Windows RAID system, I still think it is better than using some kind of hardware/board-based RAID. So if you're not keen on using Linux/BSD for whatever reason, perhaps because you're not comfortable with the OS or you want it all running on the same machine you have Windows on, then I feel Windows is still a better choice than hardware-based RAID.

Also interesting to learn ZFS is now usable on Windows. Unwind's advice about not updating feature flags and testing swapping disks in the pool is very sound.
 
Joined
Jan 14, 2019
Messages
12,567 (5.80/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
I don't know anything about Windows software RAID. But in Linux and FreeBSD software RAID is very good.

For your situation you want ZFS, because (apart from being good) it is available on FreeBSD, Linux, macOS and Windows. So you can drag your array around with you. Just make sure that you
- don't activate any ZFS options that might not be available on all platforms. Test on all OSes you need before deploying
- practice disk replacement so that you are not green when you have to do it in anger
I just did some reading on ZFS, and it seems to be a Linux thing (no wonder I'd never heard of it until recently). How does it work on Windows?
 
Joined
Jul 30, 2019
Messages
3,338 (1.69/day)
System Name Still not a thread ripper but pretty good.
Processor Ryzen 9 7950x, Thermal Grizzly AM5 Offset Mounting Kit, Thermal Grizzly Extreme Paste
Motherboard ASRock B650 LiveMixer (BIOS/UEFI version P3.08, AGESA 1.2.0.2)
Cooling EK-Quantum Velocity, EK-Quantum Reflection PC-O11, D5 PWM, EK-CoolStream PE 360, XSPC TX360
Memory Micron DDR5-5600 ECC Unbuffered Memory (2 sticks, 64GB, MTC20C2085S1EC56BD1) + JONSBO NF-1
Video Card(s) XFX Radeon RX 5700 & EK-Quantum Vector Radeon RX 5700 +XT & Backplate
Storage Samsung 4TB 980 PRO, 2 x Optane 905p 1.5TB (striped), AMD Radeon RAMDisk
Display(s) 2 x 4K LG 27UL600-W (and HUANUO Dual Monitor Mount)
Case Lian Li PC-O11 Dynamic Black (original model)
Audio Device(s) Corsair Commander Pro for Fans, RGB, & Temp Sensors (x4)
Power Supply Corsair RM750x
Mouse Logitech M575
Keyboard Corsair Strafe RGB MK.2
Software Windows 10 Professional (64bit)
Benchmark Scores RIP Ryzen 9 5950x, ASRock X570 Taichi (v1.06), 128GB Micron DDR4-3200 ECC UDIMM (18ASF4G72AZ-3G2F1)
Yeah although I moaned about the old Windows raid system, I still think that is better than using some kind of hardware/board based raid. So if you not keen on using linux/bsd for whatever reason perhaps because you not comfortable using the OS or want it all running on same machine as you have Windows on, then Windows I feel is still a better choice than using a hardware based raid.

Also interesting to learn ZFS is now usable on Windows. Unwind's advice about not updating feature flags and testing swapping disks in the pool is very sound.
There are some decent RAID cards you can get on eBay for about $50 if you want to smooth out drive performance. I picked up an ARC-1882i SATA/SAS PCIe 3.0 card with 1 GB of ECC cache; it does nicely to buffer reads and writes. Maybe one day I'll try it with one of those large enterprise drives the OP has, but right now it hosts some old and slow 2 TB 5400 RPM WD Reds. Hardware-based RAID still has some niche uses, and you can get older generations dirt cheap now. Back in 2009 I had the older sibling, the ARC-1120 with 256 MB of ECC cache, and it helped immensely with system responsiveness on my Core 2 Quad Q6600. It's kind of amazing that you can get what was a $700 card (ARC-1882i) for $50 today, with up-to-date Windows drivers.
 
Joined
Mar 18, 2023
Messages
931 (1.45/day)
System Name Never trust a socket with less than 2000 pins
I just did some reading on ZFS, and it seems to be a Linux thing (no wonder I'd never heard of it until recently).

Technically it is not native to Linux (because of licensing messes). It is a Sun thing and then FreeBSD, where it is native.

How does it work on Windows?

You tell us. It is a pretty new port. The overwhelming majority of the code is the same as on the Unix platforms, so I am not too concerned about it just eating your data when it is bored.
 
Joined
Oct 5, 2024
Messages
127 (1.65/day)
Location
United States of America
The golden rule. If you don't have a backup of your backup, you don't have a backup.
The way I think about this is that your data needs to be in two different systems (separate devices) at the least and preferably in different geographic locations as well.

So if your data is on a USB backup drive but not on your PC or anywhere else, that is not a backup, that's just a storage method. You lose that USB drive, you lost your data.

If your data is on a secondary hard drive in your PC as well as on your main drive in that same PC, that is also not a backup. You fry your PC, you (probably) lost your data. The same logic applies to RAID 1 setups: neither drive is a backup, because you (can) lose the data when you lose your PC.

If your data is on a USB backup drive and on your PC, that USB drive is now a proper backup.

If your data is on a USB backup drive and on your NAS, then either the USB drive or the NAS drive is the backup (but not both of them). This is my preferred way to store casual or consumer data: one is an easy-to-access storage location for my data, and the other is an offline backup.
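A tiny sketch of the "is it really in two places?" check: compare a file against its copy on the other device by hash. The paths here are hypothetical, just to illustrate the idea:

Code:
# Verify that the copy on the backup device matches the original.
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

original = Path("D:/Photos/wedding_001.jpg")         # copy on the PC
backup = Path("E:/Backup/Photos/wedding_001.jpg")    # copy on the USB drive / NAS

print("match" if sha256(original) == sha256(backup) else "MISMATCH - re-copy this file")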
 
Joined
Jan 14, 2019
Messages
12,567 (5.80/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
You know what, guys? I'm game. :)

I have two 1 TB HDDs laying around (albeit one of them is a 3.5", the other one is a laptop drive) that I'm not using, so I'm gonna do a little experiment. I'm gonna pop them into my PC, into my RAID box, try hardware RAID, software (Windows) RAID, and all sorts, see how it goes. :)

Technically it is not native to Linux (because of licensing messes). It is a Sun thing and then FreeBSD, where it is native.

You tell us. It is a pretty new port. The overwhelming majority of the code is the same as on the Unix platforms, so I am not too concerned about it just eating your data when it is bored.
Sounds good. It's just that I couldn't find any resources on Windows ZFS.
 
Joined
Mar 18, 2023
Messages
931 (1.45/day)
System Name Never trust a socket with less than 2000 pins
Sounds good. It's just that I couldn't find any resources on Windows ZFS.

Installers are here:

Actually using it should be mostly the same as the Unix and macOS versions. You will learn a lot about zpools and vdevs :)
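For a first taste of zpools and vdevs, here's a minimal sketch of creating a two-disk mirror and scrubbing it. The pool name and device paths are placeholders (Linux-style; the Windows port uses its own device naming, so check its docs):

Code:
#!/usr/bin/env python3
# Sketch: create a simple mirrored ZFS pool from two standalone drives,
# then start a scrub to verify checksums. Needs root and OpenZFS installed.
import subprocess

subprocess.run(["zpool", "create", "tank", "mirror", "/dev/sdb", "/dev/sdc"], check=True)
subprocess.run(["zpool", "status", "tank"], check=True)  # show layout and health
subprocess.run(["zpool", "scrub", "tank"], check=True)   # read everything, verify checksums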
 
Joined
Jul 30, 2019
Messages
3,338 (1.69/day)
System Name Still not a thread ripper but pretty good.
Processor Ryzen 9 7950x, Thermal Grizzly AM5 Offset Mounting Kit, Thermal Grizzly Extreme Paste
Motherboard ASRock B650 LiveMixer (BIOS/UEFI version P3.08, AGESA 1.2.0.2)
Cooling EK-Quantum Velocity, EK-Quantum Reflection PC-O11, D5 PWM, EK-CoolStream PE 360, XSPC TX360
Memory Micron DDR5-5600 ECC Unbuffered Memory (2 sticks, 64GB, MTC20C2085S1EC56BD1) + JONSBO NF-1
Video Card(s) XFX Radeon RX 5700 & EK-Quantum Vector Radeon RX 5700 +XT & Backplate
Storage Samsung 4TB 980 PRO, 2 x Optane 905p 1.5TB (striped), AMD Radeon RAMDisk
Display(s) 2 x 4K LG 27UL600-W (and HUANUO Dual Monitor Mount)
Case Lian Li PC-O11 Dynamic Black (original model)
Audio Device(s) Corsair Commander Pro for Fans, RGB, & Temp Sensors (x4)
Power Supply Corsair RM750x
Mouse Logitech M575
Keyboard Corsair Strafe RGB MK.2
Software Windows 10 Professional (64bit)
Benchmark Scores RIP Ryzen 9 5950x, ASRock X570 Taichi (v1.06), 128GB Micron DDR4-3200 ECC UDIMM (18ASF4G72AZ-3G2F1)
You know what, guys? I'm game. :)

I have two 1 TB HDDs laying around (albeit one of them is a 3.5", the other one is a laptop drive) that I'm not using, so I'm gonna do a little experiment. I'm gonna pop them into my PC, into my RAID box, try hardware RAID, software (Windows) RAID, and all sorts, see how it goes. :)


Sounds good. It's just that I couldn't find any resources on Windows ZFS.
Just be aware that putting disks of significantly different types into the same array may result in poor performance.
Don't forget to have fun.

(edit) Also, some disks lack the TLER (time-limited error recovery) needed to work well in an array, so they get dropped from the array quickly when they hit errors. NAS and enterprise drives shouldn't have this issue, as they are intended for arrays and will have their TLER set up accordingly.
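If you want to check a specific drive, smartmontools can usually read (and on many drives set) the SCT Error Recovery Control timers that TLER refers to. A rough sketch; the device path is a placeholder and not every drive supports this:

Code:
#!/usr/bin/env python3
# Sketch: query a drive's SCT Error Recovery Control (TLER-style) timers
# with smartctl. Requires smartmontools and root/admin rights.
import subprocess

subprocess.run(["smartctl", "-l", "scterc", "/dev/sda"], check=False)

# On drives that support it, the timers can also be set. Values are in
# 0.1 s units, so 70 = 7 seconds, a common setting for drives in arrays:
# subprocess.run(["smartctl", "-l", "scterc,70,70", "/dev/sda"], check=False)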
 
Joined
Mar 21, 2021
Messages
5,148 (3.75/day)
Location
Colorado, U.S.A.
System Name CyberPowerPC ET8070
Processor Intel Core i5-10400F
Motherboard Gigabyte B460M DS3H AC-Y1
Memory 2 x Crucial Ballistix 8GB DDR4-3000
Video Card(s) MSI Nvidia GeForce GTX 1660 Super
Storage Boot: Intel OPTANE SSD P1600X Series 118GB M.2 PCIE
Display(s) Dell P2416D (2560 x 1440)
Power Supply EVGA 500W1 (modified to have two bridge rectifiers)
Software Windows 11 Home
Joined
Mar 18, 2023
Messages
931 (1.45/day)
System Name Never trust a socket with less than 2000 pins
That's because it's not widely supported as a consumer level format. I would advise sticking with the tried and true NTFS. Keep it simple and easy.

ZFS has many advantages, starting from the data integrity checking.

The only reason why there isn't much about its port to Windows is that it is brand new. You get to be the first penguin.
 
Joined
Jul 5, 2013
Messages
28,238 (6.74/day)
ZFS has many advantages, starting from the data integrity checking.
Not really, NTFS has similar features built into its journaling system. Lots of people have been pushing ZFS as the new hotness, but it's really not. It's good and solid, but no more or less so than NTFS. ZFS's main advantage is its volume and file size limits, which far exceed NTFS and most other file systems. Only Ext4 is comparable in size limits. Sun envisioned a future where truly ginormous data arrays would be a thing and designed ZFS accordingly. That's it. All its other features are very much on par with other file systems. For Windows-based systems, NTFS has the native support advantage, and that fact cannot be overstated.

The only reason why there isn't much about its port to Windows is that it is brand new.
It's not new. It was first used on Sun Solaris workstations in 2005. That's nearly 2 decades ago. https://en.wikipedia.org/wiki/ZFS
 

bug

Joined
May 22, 2015
Messages
13,842 (3.95/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
Joined
Jan 18, 2021
Messages
194 (0.14/day)
Processor Core i7-12700
Motherboard MSI B660 MAG Mortar
Cooling Noctua NH-D15
Memory G.Skill Ripjaws V 64GB (4x16) DDR4-3600 CL16 @ 3466 MT/s
Video Card(s) AMD RX 6800
Storage Too many to list, lol
Display(s) Gigabyte M27Q
Case Fractal Design Define R5
Power Supply Corsair RM750x
Mouse Too many to list, lol
Keyboard Keychron low profile
Software Fedora, Mint
Depends on what kind of RAID.

Soft RAID?

RAID controller with backplane?

PCIe RAID or VROC?

A lot of pretty general comments steering you in not-so-great directions. You should really make sure you are looking into a lot of this yourself: from disk recommendations based on Backblaze data presented in an ambiguous fashion, to people recommending ZFS and soft RAID, to blanket statements about "BIOS" RAID, and finally to enclosure and OS recommendations.

For someone that barely knew what RAID 1 was and just wanted to know which of three drives he should buy, you are asking questions, but you don't know what you don't know.

This is not as simple as

"this drive is the best"

"this raid is the best"

"this is how you set it up"

This is an entire concentration in higher computing. Unless you have a proper thread that could maintain the conversation (big doubt), there is A LOT to absorb. A lot to learn, and A LOT of nuance. Some of the suggestions, recommendations, or otherwise general advice are just that: general. That's like asking us to "pick the best cornflakes" because you "want to make a bowl of cereal" - it doesn't mean much, doesn't make a ton of sense, and there is no clear winner at the end of the tunnel.

You have an external enclosure and plan to keep data in more than 1 place. You did it. You are safer than 90% of the people probably even in this thread.

From here on out though, if you TRULY want an answer and you're serious, I would make sure you do some reading, and I would warn you about taking advice from one-off opinions or posts. You can even count this among them.

Data and data safety should not be a dick-waving contest, and you should REALLY understand the buttons you are pushing before you entrust your wedding, birthday, and vacation pictures to a random internet post that "told you to".
This is a great post.

We briefly talked about @AusWolf 's options a few weeks ago. I'm glad he's got a proper thread now to hash this stuff out in detail. He's a good guy and I want him to find something that fits.

I will, however, reiterate what I said then: forget RAID. Just buy a big disk, connect it to a motherboard SATA port, run an Extended SMART test, and put it into service as a single-disk volume. If you plan to stay on Windows, use NTFS. If you switch to Linux, just use Ext4. Keep things simple and native. Then arrange regular backups, which are far more important than any RAID scheme. Anything more elaborate than that, if you want to do it properly, will demand time and research.
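A minimal sketch of the "run an Extended SMART test" step with smartmontools, in case it helps; the device path is a placeholder, and on Windows smartctl uses its own device naming:

Code:
#!/usr/bin/env python3
# Sketch: start a long (extended) SMART self-test on a new drive, then read
# back the results later. Requires smartmontools and root/admin rights.
import subprocess

# Kick off the extended self-test. It runs on the drive itself and can take
# many hours on a 20 TB disk.
subprocess.run(["smartctl", "-t", "long", "/dev/sdb"], check=False)

# Once it has had time to finish, dump the SMART attributes and self-test log.
subprocess.run(["smartctl", "-a", "/dev/sdb"], check=False)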

Not really, NTFS has similar features built into its journaling system. Lots of people have been pushing ZFS as the new hotness, but it's really not. It's good and solid, but no more or less so than NTFS. ZFS's main advantage is its volume and file size limits, which far exceed NTFS and most other file systems. Only Ext4 is comparable in size limits. Sun envisioned a future where truly ginormous data arrays would be a thing and designed ZFS accordingly. That's it. All its other features are very much on par with other file systems. For Windows-based systems, NTFS has the native support advantage, and that fact cannot be overstated.

I agree that NTFS is probably underrated, but ZFS does have advantages, most notably scrubs and snapshots. Then you could talk about more exotic features like zfs-send/receive, filesystem-level encryption, and de-duplication (not that de-duplication is remotely worthwhile for any home use case). But as above, ZFS is deep-end-of-the-pool stuff from the average home user's perspective. It has a laundry list of disadvantages in that context, most notably its reliance on CLI and the inflexibility of the vdev structure.

And I think you're right to caution against using non-native filesystems, particularly on Windows. Keep it simple. There's a temptation to go elaborate in the name of increasing data integrity, but added complexity often translates to added points of failure. Any system you put in place is only as good as your willingness/ability to maintain it.
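To make the scrub point concrete, here's a toy sketch of the idea: every block is stored with a checksum taken at write time, and a scrub re-reads everything and compares. This is only an illustration of the concept, not how ZFS actually lays data out:

Code:
import hashlib

# Toy "pool": each block is stored alongside its checksum from write time.
blocks = [b"family photos...", b"tax documents...", b"old game saves..."]
stored = [(data, hashlib.sha256(data).hexdigest()) for data in blocks]

# Simulate silent corruption (bit rot) in one block.
stored[1] = (b"tax documents..!", stored[1][1])

# "Scrub": re-read every block and compare against its stored checksum.
for i, (data, checksum) in enumerate(stored):
    ok = hashlib.sha256(data).hexdigest() == checksum
    print(f"block {i}: {'ok' if ok else 'MISMATCH - repair from the mirror copy'}")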
 
Joined
Jan 14, 2019
Messages
12,567 (5.80/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
Okay, so those who said putting two different drives into RAID is a bad idea were right.

My 1 TB 7200 RPM desktop drive can do about 160-200 MB/s.
My 1 TB 5400 RPM laptop drive can do about 90-130 MB/s.
The two drives together in software (Windows) RAID 1 can do around 90 MB/s for small writes, as expected, but the speed soon drops to 30 MB/s during continuous writes for seemingly no reason.

More experimenting is needed. :)
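If you want to put numbers on that drop, here's a rough sketch of a sustained-write test that writes in chunks and prints throughput as it goes. The file path and sizes are just examples; point it at the test array, and note it writes a few GB:

Code:
import os, time

path = "E:/raid_test/bigfile.bin"       # somewhere on the array under test
chunk = os.urandom(64 * 1024 * 1024)    # 64 MB of incompressible data

with open(path, "wb") as f:
    start = time.time()
    for i in range(64):                 # ~4 GB total
        f.write(chunk)
        f.flush()
        os.fsync(f.fileno())            # push it to the disks, past OS caching
        mb = (i + 1) * 64
        print(f"{mb} MB written, average {mb / (time.time() - start):.0f} MB/s")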
 
Joined
Jul 30, 2019
Messages
3,338 (1.69/day)
System Name Still not a thread ripper but pretty good.
Processor Ryzen 9 7950x, Thermal Grizzly AM5 Offset Mounting Kit, Thermal Grizzly Extreme Paste
Motherboard ASRock B650 LiveMixer (BIOS/UEFI version P3.08, AGESA 1.2.0.2)
Cooling EK-Quantum Velocity, EK-Quantum Reflection PC-O11, D5 PWM, EK-CoolStream PE 360, XSPC TX360
Memory Micron DDR5-5600 ECC Unbuffered Memory (2 sticks, 64GB, MTC20C2085S1EC56BD1) + JONSBO NF-1
Video Card(s) XFX Radeon RX 5700 & EK-Quantum Vector Radeon RX 5700 +XT & Backplate
Storage Samsung 4TB 980 PRO, 2 x Optane 905p 1.5TB (striped), AMD Radeon RAMDisk
Display(s) 2 x 4K LG 27UL600-W (and HUANUO Dual Monitor Mount)
Case Lian Li PC-O11 Dynamic Black (original model)
Audio Device(s) Corsair Commander Pro for Fans, RGB, & Temp Sensors (x4)
Power Supply Corsair RM750x
Mouse Logitech M575
Keyboard Corsair Strafe RGB MK.2
Software Windows 10 Professional (64bit)
Benchmark Scores RIP Ryzen 9 5950x, ASRock X570 Taichi (v1.06), 128GB Micron DDR4-3200 ECC UDIMM (18ASF4G72AZ-3G2F1)
Okay, so those who said putting two different drives into RAID is a bad idea were right.

My 1 TB 7200 RPM desktop drive can do about 160-200 MB/s.
My 1 TB 5400 RPM laptop drive can do about 90-130 MB/s.
The two drives together in software (Windows) RAID 1 can do around 90 MB/s for small writes, as expected, but the speed soon drops to 30 MB/s during continuous writes for seemingly no reason.

More experimenting is needed. :)
I'd say that's about right. I have 320 GB/500 GB laptop drives (Seagate and WD) from many moons ago, and 30 MB/s is about what they do after the cache runs out during a sustained file write, so by mixing very different drives you're getting the poor performance of the least capable drive in the array.

Have you decided how you're going to arrange your storage yet?
 

bug

Joined
May 22, 2015
Messages
13,842 (3.95/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
I'd say that's about right. I have 320 GB/500 GB laptop drives (Seagate and WD) from many moons ago, and 30 MB/s is about what they do after the cache runs out during a sustained file write, so by mixing very different drives you're getting the poor performance of the least capable drive in the array.
That should only be true for RAID 1. RAID 0 or RAID 5 would be able to give you better performance than the weakest drive's.
 
Joined
Mar 18, 2023
Messages
931 (1.45/day)
System Name Never trust a socket with less than 2000 pins
I don't think I have seen a laptop HDD go from 90 down to 30 MB/s.

Are you sure this is not the fault of Windows software RAID, or a general Windows problem?
 