
Which 20 TB drive to get?

Which is the best drive?

  • Seagate Exos X20 / X24 20 TB

    Votes: 11 34.4%
  • Seagate IronWolf Pro 20 TB

    Votes: 8 25.0%
  • Toshiba MG10 20 TB

    Votes: 13 40.6%

  • Total voters
    32
How desperate do you have to be to use HDDs in a primary workstation? I went all SSD with mine in 2018 and dumped all spinners to servers.
Each to their own. Some people have only one desktop computer, which has to perform the duties of workstation, server, and gaming/office PC all rolled into one. They might not have a dedicated server or NAS to store all the files that won't fit on a solid-state drive.

Although I have four TrueNAS Core RAID-Z2 servers, they don't remain switched on 24/7. With electricity costing the equivalent of $0.33 per kWh, I don't feel inclined to leave them (plus the associated 10GbE switches) running all the time. Instead, I fit spinning-rust disks in my workstations and move files to/from M.2 NVMe as required. I don't have any motherboards with four or five M.2 slots, so I chuck a few 8TB WD drives into the builds instead. When I need to back up data, I switch on the servers.

"cable connected disks are for servers."
This presupposes everyone has a desktop computer with M.2 slots. Spare a thought for people who don't have that option and must use SATA cables to run all their drives. They may not have enough spare PCIe slots either for an NVMe card.

I only have four motherboards that will take NVMe drives (two boards have one M.2 slot each, one board has two M.2 slots and the last has three M.2 slots). Most of my machines still run a combination of SATA SSDs and hard disks, i.e. "cable connected disks". Come to think of it, the hard disks in my HP servers plug directly into hot-swap bays, so there are no SAS/SATA cable connections directly to the hard disks (or SSDs).

I still use optical drives, from SCSI CD writers up to Blu-ray writers (for burning the occasional 4K H.265 holiday video). I prefer audio CDs in the car over MP3, finding it easier to select and pop in a new disc instead of wading through menus on my phone.
https://www.theguardian.com/music/2...player-source-of-weirdly-deep-musical-fandoms

Some of us still live in the computer Dark Ages, have Luddite tendencies, or don't have much spare cash.
 
Thanks everyone for the replies. I've learned a lot about RAID, the difference between hardware and data redundancy, and the disks I asked the question about in the first place. :)

A little update: my JBOD / RAID 0/1 capable two-disk enclosure arrived yesterday. Big thanks to @lexluthermiester for the recommendation! For now, I've put my 4 and 8 TB Seagate Barracudas in it in normal mode. I'm keeping some data on one of them, some other data on the other, and I'm duplicating the most important stuff between the two, as well as on my main PC and some other disconnected drives.

Later, I'll be looking at those 20 TB disks to keep all of my data safe and backed up, not just the most critical parts of it. Whether I'll do a RAID 1 setup, or just manually synchronise them, I'm not sure, yet.

Basically goes like "cable connected disks are for servers." Now that's a line that goes hard.
I like that idea, except that you can get a 20 TB spinner for the price of a 4 TB SSD.
If you don't have a lot of data, fair enough, but if you do, you don't have much of a choice. HDDs are still the most cost-efficient way to store data.
 
What era is this? I get that I'm the odd one out for slumping anywhere from 20-40TB of HDDs into an eMachines, and it's hilarious, but if you're encountering a dead Tosh or more in SERVERS... they were either DOA or have 100K hours on them. Personal computer? What are you doing? How desperate do you have to be to use HDDs in a primary workstation? I went all SSD with mine in 2018 and dumped all spinners to servers.
They are old, yes. Anywhere from 500GB to 3-4TB. Backstory on this one: after years of monthly server issues caused by dying Toshiba drives, the company decided it wasn't worth keeping a warranty subscription after it expired. They went with another company who had similar problems, but with Seagate. Imagine their surprise when I repurposed the 16-drive server into JBOD with just some old WD Golds I found in a box destined for e-waste. It's still going 8+ years later. The official contracted server, on the other hand, costs 3k a month (which covers all the warranty stuff) and constantly fails.

I don't have any experience with Toshiba's newer drives, but I wouldn't recommend them after seeing the old ones fail so often.

But once again, no data is safe, even in a RAID 1. It's just that the hassle of copying 20 TB to another drive will be annoying and time-consuming. I value my time; those Toshibas are half the price, but you can get a WD Gold on sale if you wait.
 
I don't care if my car doesn't have a cassette player. Or a CD player, for that matter. But is it actually true that new cars forgo any kind of music player, including MP3 from USB/SD?
I find that hard to believe; however, smartphones and Bluetooth are a thing, so it wouldn't surprise me. I used to do the USB thing but finally acquiesced to using playlist downloads with Plex on mobile, and it's been great. OK, I'll veer back on topic now, sorry.
 
Eh. If you're like me and always come across dead Toshiba drives in servers and personal computers, you tend to avoid them.
I didn't and still don't. Of course, I retired from admin duties earlier this year; I can't imagine things have changed much in a few months. Most of the drives we found dead or dying were Seagates. My shop sells Toshiba drives as one of its main staples, alongside WD and HGST, and they rarely come back. My experience is echoed by the Backblaze numbers to a very reasonable degree.
I just stick to WD Golds now. Don't waste my money on the others.
At least this we can agree on. However, Auswolf doesn't seem to have that option from the seller they're buying from (WTH?!?).

How desperate do you have to be to use HDDs in a primary workstation?
Are you kidding? Ok big boy, let's see you find, ANYWHERE, an SSD with 20TB of space for less than $200. None of us will be holding our breath on that one...
I went all SSD with mine in 2018 and dumped all spinners to servers.
Not everyone has a server in their house. That kind of thing is very rare. However, if you had actually READ the OP, you would know the OP is putting the drives in question into a NAS/DAS box. So there is that...

A little update: my JBOD / RAID 0/1 capable two-disk enclosure arrived yesterday. Big thanks to @lexluthermiester for the recommendation!
Cool! I suggested a few different models, which one did you end up with?
 
Cool! I suggested a few different models, which one did you end up with?
This one:
[screenshot of the enclosure]

So far, it's great. :)
 
If you do use ZFS host-side, it's software RAID, so you would present the drives as "normal" to the operating system; the RAID 1 feature on that bay would be an internal RAID system they implemented.
 
Another idea... What if I use the 20 TB drives internally in hardware RAID through the BIOS, and keep the externals I've got now (4+8 TB in the enclosure, and another 8 TB external HDD) as manual backups? That way, I'd have hardware redundancy in case of failure, and have the backup problem sorted. Feel free to correct me or suggest something better if I'm wrong. ;)

Also, can someone send me a guide on ZFS? I've read a bit about what it is, and it looks interesting. :)
 
In what way? Sorry for the obvious question, I'm inexperienced with RAID. :ohwell:

Depends on what kind of RAID.

Software RAID?

RAID controller with backplane?

PCIe RAID or VROC?

A lot of pretty general comments are steering you in not-so-great directions. You should really make sure you are looking into a lot of this yourself: from disk recommendations based on Backblaze data presented in an ambiguous fashion, to people recommending ZFS and soft RAID, to blanket statements about "BIOS" RAID, and finally enclosure and OS recommendations.

For someone that barely knew what RAID 1 was and just wanted to know which of three drives he should buy, you are asking questions, but you don't know what you don't know.

This is not as simple as

"this drive is the best"

"this raid is the best"

"this is how you set it up"

This is an entire concentration in higher computing. Unless you have a proper thread that could maintain the conversation (big doubt), there is A LOT to absorb, a lot to learn, and A LOT of nuance. Some of the suggestions, recommendations, or otherwise general advice are just that: general. That's like asking us to "pick the best cornflakes" because you "want to make a bowl of cereal"; it doesn't mean much, doesn't make a ton of sense, and there is no clear winner at the end of the tunnel.

You have an external enclosure and plan to keep data in more than one place. You did it. You are probably safer than 90% of people, even in this thread.

From here on out though, if you TRULY want an answer, I would make sure you do some reading if you're serious, and I would warn you about taking advice from one-off opinions or posts. You can even count this among them.

Data and data safety should not be a dick-waving contest, and you should REALLY understand the buttons you are pushing before you cast your wedding, birthday, and vacation pictures to a random internet post that "told you to".
 
With respect to Solaris17's point:

The reason I mentioned ZFS more than once is that when you're setting up a storage array, it's ideal not to have to change it again after you set it up because you regret what you did the first time.

Fundamentally there are three different ways of implementing RAID: hardware, software, and what is known as fake RAID. Fake RAID, aside from being very limited, also has the problem that if you can't get hold of the same type of hardware when the host board fails, you risk losing your RAID; hardware RAID has a similar issue in that respect. I did try to find what internal system the bay you bought uses, but couldn't find much information. I can see from screenshots that it's running Linux, so at a guess it's either mdadm or ZFS; I don't think it is a proprietary setup.

The advantages of software RAID are typically configuration flexibility, ongoing improvements from active development, controller flexibility, features, and nowadays also the size of the user base, which helps with support, questions and the like.

ZFS specifically has extra features which I consider extremely important for this type of use, such as checksums (which are how it implements its bit-rot protection) and snapshots, a very handy and convenient way of rolling back files. There's also compression, flexibility in the file system configuration, external cache device support and more. I think it is good to read up on it, so I have tried to find some starting points for you.

Natively it's supported on FreeBSD, which includes TrueNAS Core; on Linux, due to licensing concerns, official support isn't there as a whole, but certain distros have adopted it out of the box, such as TrueNAS Scale and Proxmox.

I found this link as a kind of feature list for you; it may not be up to date.


You can do replacement of disks: if a disk is removed from the pool, the pool will still be accessible but in a degraded state. You can then replace the device in the pool and do something called a resilver, which rebuilds the pool back to a normal state with the new disk, so in that respect it's not hugely different to how RAID normally works.

The bit-rot protection is handled automatically, and by default a ZFS mirror setup (RAID 1) will have it configured out of the box: whenever data is accessed, it will be auto-repaired if bit rot is detected. You can also run a scan on demand, something called a scrub, so data that hasn't been accessed for a while is checked (and repaired if required).
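To make that a bit more concrete, here's a rough sketch of what those operations look like if you drive the standard zpool/zfs command-line tools from a script. The pool name "tank" and the /dev/sd* device names are placeholders, and on something like TrueNAS you'd be doing all of this from the GUI anyway:

```python
# Rough sketch only: standard zpool/zfs CLI calls driven from Python.
# "tank" and the /dev/sd* names are placeholders -- adjust for your system.
import subprocess

def zfs_cmd(*args: str) -> str:
    """Run a zpool/zfs command and return its text output."""
    result = subprocess.run(args, check=True, capture_output=True, text=True)
    return result.stdout

# Create a two-disk mirror pool (the ZFS equivalent of RAID 1).
zfs_cmd("zpool", "create", "tank", "mirror", "/dev/sdb", "/dev/sdc")

# Take a snapshot you can roll back to later with "zfs rollback".
zfs_cmd("zfs", "snapshot", "tank@before-reorganise")

# Start a scrub so data that hasn't been read in a while gets checked
# (and repaired from the other half of the mirror if bit rot is found).
zfs_cmd("zpool", "scrub", "tank")

# After physically swapping a failed disk, resilver onto the new one.
zfs_cmd("zpool", "replace", "tank", "/dev/sdb", "/dev/sdd")

# Check pool health and scrub/resilver progress.
print(zfs_cmd("zpool", "status", "tank"))
```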

In terms of using ZFS, the main and most user-friendly way that I know of, where its basic features can be handled from a GUI, is TrueNAS; Proxmox works too, although TrueNAS is specifically designed as a software RAID package.

So here is some TrueNAS documentation.


There are alternatives like Unraid, which supports both ZFS and BTRFS, but I have no experience of using it, so I haven't much to say about it other than that it's a ZFS-capable alternative to TrueNAS.

If you have specific questions, rather than a general "give me something to read", I can try to help.
 
Another idea... What if I use the 20 TB drives internally in hardware RAID through the BIOS, and keep the externals I've got now (4+8 TB in the enclosure, and another 8 TB external HDD) as manual backups? That way, I'd have hardware redundancy in case of failure, and have the backup problem sorted. Feel free to correct me or suggest something better if I'm wrong. ;)
That sounds like a decent plan. I would go with RAID 1 in the enclosure personally, but the internal setup would have faster access and more bandwidth.
 
Although I have four TrueNAS Core RAID-Z2 servers, they don't remain switched on 24/7. With electricity costing the equivalent of $0.33 per kWh, I don't feel inclined to leave them (plus the associated 10GbE switches) running all the time. Instead, I fit spinning-rust disks in my workstations and move files to/from M.2 NVMe as required. I don't have any motherboards with four or five M.2 slots, so I chuck a few 8TB WD drives into the builds instead. When I need to back up data, I switch on the servers.
On-demand, application-specific data server... kinda based. That electrical rate is flat-out robbery, though. Residential here is a $0.96/day basic charge plus $0.068/kWh, which is still a lot, but if I suddenly want to start running lots of powerful equipment, it's not so troublesome.
Each to their own. Some people have only one desktop computer, which has to perform the duties of workstation, server, and gaming/office PC all rolled into one. They might not have a dedicated server or NAS to store all the files that won't fit on a solid-state drive.
Aus isn't "some people". I'm not sure you've noticed, but when you start considering volume sizes like this, you're no longer in normie territory. I've said before I don't know where the threshold is exactly, but it's somewhere in there. These drives look expensive, and for a while they were. It took ~20 years to start making them into high performers over 2.5GbE and similar.
This presupposes everyone has a desktop computer with M.2 slots. Spare a thought for people who don't have that option and must use SATA cables to run all their drives.
Nah, deal with it. I specifically went to Ryzen for this reason, because I wasn't about to buy a PCI-E adapter or a 990FX board.
You missed the part where I mentioned it's a radical idea. HDDs and mainstream SSDs have had the SATA hookup for years.
If you've been around long enough to consider 14-20TB, you're somebody that understands the behaviors and complexities of more than one computer, period.
I don't make the rules, I just recognize the patterns. We're all mentally ill data hoarders at some point.
They may not have enough spare PCIe slots either for an NVMe card.
[attached screenshots]

Problem? :sleep:
Everyone has a spare x1 somewhere. It's just running out of PCI-E bandwidth or predisposed lane splitting that makes it a non-starter on consumer platforms.
Ok big boy, let's see you find, ANYWHERE, an SSD with 20TB of space for less than $200.
Best I can do right now is a 3.2TB WarpDrive and it's a FATBOI.
You're not going to want that for much other than scratch or a data jail though.
>90PB write cycle life tells you everything. LSI prices go kerchunk in a few years though so I'd hold out for better.
Also you'll run out of lanes anyway. Those x8 cards don't have a lot of options in consumer and literally nobody is here to spend the $$$.
Another idea... What if I use the 20 TB drives internally in hardware RAID through the BIOS, and keep the externals I've got now (4+8 TB in the enclosure, and another 8 TB external HDD) as manual backups?
Sort out your data first. If something happens to that RAID, you won't have recovery options for those specific drives, which is a bad hangup. Choose carefully.

I arrange things like this:
-Start of the partition-
Static Game Library 1 > Frequently used, extremely LARGE file and data size, prestaged for zero growth, max performance, replaceable fair burden
-ISCSI PADDING-
Static Game Library 2 > Frequently used, extremely LARGE file and data size, prestaged for zero growth, max performance, replaceable heavy burden
-ISCSI PADDING-
TV Shows > Rarely used, VERY LARGE data set, slow growth pattern, max performance, replaceable heavy burden
ISO > Rarely used, LARGE data set, slow growth pattern, max performance, replaceable heavy burden
VMs > Rarely used, LARGE data set, slow growth pattern, max performance, replaceable fair burden
EMU > Rarely used, LARGE collection, stable growth pattern, max performance, semi-replaceable fair burden
Installers > Frequently used, normal file and data size, slow growth pattern, max performance, non-replaceable
Photostock > Rarely used, normal file and data size, no growth pattern, max performance, replaceable fair burden
Galleries > Frequently used, LARGE collection, fast growth pattern, max performance, replaceable? unknown burden
Game backups > Rarely used, normal file and data size, slow growth pattern, high performance, non-replaceable
Distribution shares > Always used, normal file and data size, steady growth, high performance, replaceable fair burden
Artbook/Manga/Other > Always used, normal file and data size, steady growth, high performance, replaceable fair burden


I keep as few non-replaceable collections as possible on a new disk during the first 90-180 days. I figured out the importance of keeping the cluster size low and putting anything LARGE with zero growth at the front of the disk for maximum performance. You want to set and forget. Go down the size chart until you encounter a growth pattern and make a choice to mirror non-replaceable collections. Remember to copy, not move. Collections with tons of small files, like Unity/Unreal projects, should probably exist as compressed archives with the uncompressed copies on SSDs (because they're a work set). I'm in the middle of figuring out a similar situation for myself, so this is good motivation to get on that. I have a lot to sort.

When the smaller disks start to look barren, you can consolidate: copy stuff over to a faster disk with fewer hours and keep the smaller disks around as offline backups or something. It sounds dumb but has worked out for me and others just fine. Also, pay attention to drive health reports once in a while. That's about all. :pimp:
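On the drive-health point, here's a hedged little sketch of what automating that check could look like, assuming smartmontools (smartctl) is installed; the device names are placeholders:

```python
# Quick health-check sketch using smartmontools' smartctl (must be installed).
# Device names are placeholders for your own disks.
import subprocess

for device in ("/dev/sda", "/dev/sdb"):
    result = subprocess.run(
        ["smartctl", "-H", device],  # -H prints the overall health verdict
        capture_output=True, text=True,
    )
    # Crude check: most ATA drives report "PASSED" when healthy.
    verdict = "PASSED" if "PASSED" in result.stdout else "check this drive!"
    print(f"{device}: {verdict}")
```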
 
Another idea... What if I use the 20 TB drives internally in hardware RAID through the BIOS, and keep the externals I've got now (4+8 TB in the enclosure, and another 8 TB external HDD) as manual backups? That way, I'd have hardware redundancy in case of failure, and have the backup problem sorted. Feel free to correct me or suggest something better if I'm wrong. ;)

Also, can someone send me a guide on ZFS? I've read a bit about what it is, and it looks interesting. :)
(This is just my opinion, of course. Take it with a grain of salt and do what makes sense to you.)

Whatever you do, I'd just try to keep it simple and get some backup software that's easy to use to help manage it, even if it's just something stupid simple like BeyondCompare to manually compare and replicate differences between folders.
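If you'd rather script it yourself, even a dumb one-way "copy the differences" pass covers the basics. Here's a minimal sketch; the two paths are placeholders, and it only adds missing files or refreshes older ones on the backup side, never deleting anything there:

```python
# Minimal one-way "copy the differences" sketch: adds missing files and
# refreshes older ones on the backup side, never deletes anything there.
# The two paths are placeholders -- point them at your own folders.
import shutil
from pathlib import Path

def mirror(source: Path, backup: Path) -> None:
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        dst = backup / src.relative_to(source)
        # Copy when the backup copy is missing or older than the live file.
        if not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime:
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)  # copy2 preserves timestamps
            print(f"updated {dst}")

mirror(Path("D:/Important"), Path("F:/Backup/Important"))
```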

Forgo RAID for now until you get an opportunity to get another 20TB disk. Your live vs. backup capacity is greatly imbalanced and you will find yourself filling up that 20TB without enough backup space.

My thoughts on how to arrange your disks for a balanced-capacity system...
  • Internal - Live disks internal to your PC: One 20TB and one 8TB
    • This is what you work on daily
  • External - Backup Disks: One 20TB and one 8TB
    • This is what you backup to daily
  • External - Extra backup disks: 4TB for a 2nd copy of the really important stuff since you don't have a larger capacity disk yet.
    • This is what you backup to daily but only the really important stuff
 
Best I can do right now is a 3.2TB WarpDrive and it's a FATBOI.
You're not going to want that for much other than scratch or a data jail though.
>90PB write cycle life tells you everything. LSI prices go kerchunk in a few years though so I'd hold out for better.
Also you'll run out of lanes anyway. Those x8 cards don't have a lot of options in consumer and literally nobody is here to spend the $$$.
And that was part of the point. Large-capacity SSDs are not affordable. HDDs for mass storage above 4TB are easily the best option. 8TB SSDs are still crazy expensive, but within grasp of some. Above 8TB SSD? Forget about it. That is where HDDs become the clear and simple choice, almost the only choice.
 
I remember some years back, looking at my setup, which was a bunch of standalone drives, some over 5 years old, holding disorganised data. Some of that data was no big deal to lose, like games, but there were also things like game saves, personal documents, photos, old recordings from my STB, the game recordings I had just started making, many old game patches and mods, old Cheat Engine tables, mod files I made for FF7 modding, Outlook PST files, my KeePass database/key, and all of my stuff for integrated customised Windows installations, scripts, notes and so forth. All of which I consider invaluable and would be distraught to lose. Now everything I've listed here is at the very least backed up, and a lot of it is on RAID and backed up on top of that.

Not everything is safe in terms of recovery via the original download source; game modders, for example, have increasingly started pulling their mods off the internet.
 
Not everything is safe in terms of recovery via the original download source; game modders, for example, have increasingly started pulling their mods off the internet.
Here's an idea for a more advanced backup scheme: make fewer copies if the original is still available for download, and more if it isn't.
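Just to illustrate the idea (nothing here actually checks the internet; you'd maintain the "still downloadable" flags yourself, and the collection names and copy counts are made up):

```python
# Toy sketch of the "fewer copies if it's still downloadable" idea.
# You maintain the flags yourself; names and copy counts are illustrative.
COPIES_IF_DOWNLOADABLE = 1   # worst case you just re-download it
COPIES_IF_GONE = 3           # pulled mods, personal files, etc.

collections = {
    "game_installers": True,   # still available from the store
    "old_mods": False,         # author pulled them offline
    "family_photos": False,    # never downloadable to begin with
}

for name, downloadable in collections.items():
    copies = COPIES_IF_DOWNLOADABLE if downloadable else COPIES_IF_GONE
    print(f"{name}: keep {copies} backup cop{'y' if copies == 1 else 'ies'}")
```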
 
Another idea... What if I use the 20 TB drives internally in hardware RAID through the BIOS,
See below.
Fake RAID, aside from being very limited, also has the problem that if you can't get hold of the same type of hardware when the host board fails, you risk losing your RAID
To give an example of what @chrcoluk said:

Many years ago I built a 4-disk fake RAID using a motherboard with eight SATA ports spread across two chipsets. When the motherboard eventually died, I had to source a replacement with an identical RAID controller chipset. That's why people steer clear of motherboard RAID.

Contrast this with an 8-disk ZFS TrueNAS Core RAID-Z2 array which I transplanted from an HP Server into a Desktop PC. The two computers used different SAS/SATA controllers, but the drives were recognised immediately by the new machine. After copying a small configuration file over, the old array was back up and running in its new home.

Whatever you do, I'd just try to keep it simple and get some backup software that's easy to use to help manage it, even if it's just something stupid simple like BeyondCompare to manually compare and replicate differences between folders.
I wholeheartedly agree with this statement.
Don't get blinded by all the esoteric solutions other people use until you've become more conversant.
Remember the old adage K.I.S.S.
https://en.wikipedia.org/wiki/KISS_principle

For light entertainment, here's Samsung's upcoming 128TB SSD:
https://www.servethehome.com/samsung-bm1743-shows-how-a-128tb-nvme-ssd-is-made/
 
Mainboard RAID, like all RAID, lives and dies with:
- how does marking a drive bad work?
- how is the user notified?
- how does replacing a drive work?
- did the user practice replacing a drive?

Mainboard RAID often works while the disks are OK, but the recovery procedures have been put together during amateur hour. And mainboard RAID needs everything twice: once in the BIOS and once in an OS driver.

Then there is the on-disk format. If you change your mainboard you presumably want to take the array with you. Not all mainboard RAID has identical on-disk format.

Furthermore, it is difficult or impossible to expand mainboard RAID to additional ports or other additional disks not directly connected to the board.
 
Mainboard RAID, like all RAID, lives and dies with:
- how does marking a drive bad work?
- how is the user notified?
- how does replacing a drive work?
- did the user practice replacing a drive?

Mainboard RAID often works while the disks are OK, but the recovery procedures have been put together during amateur hour. And mainboard RAID needs everything twice: once in the BIOS and once in an OS driver.

Then there is the on-disk format. If you change your mainboard you presumably want to take the array with you. Not all mainboard RAID has identical on-disk format.

Furthermore, it is difficult or impossible to expand mainboard RAID to additional ports or other additional disks not directly connected to the board.
Yeah, somehow it feels like motherboard RAID is a relic of the HDD era, when you would stripe your data to get less of a bottleneck from your storage (remember WD Raptors?). Even then, you'd better have had a backup sitting around. These days, it's pretty much NAS or bust if you're not just playing around. Steep price of admission, for sure, but maybe your data is worth more.
 