Although I have four TrueNAS Core RAID-Z2 servers, they don't stay switched on 24/7. With electricity costing the equivalent of $0.33 per kWh, I don't feel inclined to leave them (plus the associated 10GbE switches) running all the time. Instead, I fit spinning-rust disks in my workstations and move files to/from M.2 NVMe as required. I don't have any motherboards with four or five M.2 slots, so I chuck a few 8TB WD drives into the builds instead. When I need to back up data, I switch on the servers.
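For what it's worth, the on-demand part can be scripted. A minimal sketch, assuming the servers respond to Wake-on-LAN and rsync runs over SSH; the host name, MAC address, and paths below are placeholders, not the actual setup:

```python
# Hypothetical on-demand backup flow: wake a NAS, rsync, then power it back down.
# Host name, MAC address, and paths are placeholders for illustration only.
import socket
import subprocess
import time

NAS_MAC = "aa:bb:cc:dd:ee:ff"          # placeholder MAC for Wake-on-LAN
NAS_HOST = "truenas1.lan"              # placeholder host name
SRC = "/mnt/scratch/projects/"         # local spinning-rust staging area
DST = f"backup@{NAS_HOST}:/mnt/tank/projects/"

def wake(mac: str) -> None:
    """Send a standard WoL magic packet (6x 0xFF + 16x MAC) as a UDP broadcast."""
    payload = bytes.fromhex("FF" * 6 + mac.replace(":", "") * 16)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, ("255.255.255.255", 9))

def wait_for_ssh(host: str, timeout: int = 300) -> bool:
    """Poll TCP/22 until the box has finished booting."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, 22), timeout=5):
                return True
        except OSError:
            time.sleep(10)
    return False

wake(NAS_MAC)
if wait_for_ssh(NAS_HOST):
    subprocess.run(["rsync", "-a", "--info=progress2", SRC, DST], check=True)
    subprocess.run(["ssh", f"backup@{NAS_HOST}", "shutdown -p now"], check=True)
```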
On-demand, application-specific data server... kinda based. That electrical rate is flat-out robbery though. Residential here is $0.96/day basic plus $0.068/kWh, which is still a lot, but if I suddenly want to start running lots of powerful equipment, it's not so troublesome.
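To put numbers on those two rates (assuming a 250 W average draw for one server plus switch, which is just a round guess, not a measured figure):

```python
# Back-of-envelope annual cost of leaving one server + switch on 24/7.
# The 250 W average draw is an assumption for illustration.
draw_kw = 0.250
hours_per_year = 24 * 365
kwh = draw_kw * hours_per_year                      # ~2190 kWh/year

for label, rate in [("quoted $0.33/kWh", 0.33), ("here at $0.068/kWh", 0.068)]:
    print(f"{label}: ${kwh * rate:,.0f} per year")
# quoted $0.33/kWh: $723 per year
# here at $0.068/kWh: $149 per year
```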
Each to their own. Some people have only one desktop computer, which has to perform the duties of workstation, server, and gaming/office PC all rolled into one. They might not have a dedicated server or NAS to store all the files that won't fit on a solid state drive.
Aus isn't "some people." I'm not sure you've noticed, but when you start considering volume sizes like this, you're no longer in normie territory. I've said before I don't know where the threshold is exactly, but it's somewhere in there. These drives look expensive, and for a while they were. It took ~20 years for them to become high performers over 2.5GbE and similar.
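Rough sanity check on the 2.5GbE point; the HDD figure is an assumed ballpark for a current high-capacity drive, not a spec for any particular model:

```python
# Raw 2.5GbE line rate vs. a typical modern HDD's sequential throughput.
# The 270 MB/s HDD figure is an assumed ballpark, not a measured number.
line_rate_mbps = 2500                    # 2.5GbE, megabits per second
link_mbytes = line_rate_mbps / 8         # ~312 MB/s before protocol overhead
hdd_seq_mbytes = 270                     # assumed outer-track sequential throughput

print(f"2.5GbE ceiling: ~{link_mbytes:.0f} MB/s")
print(f"Assumed HDD sequential: {hdd_seq_mbytes} MB/s "
      f"({hdd_seq_mbytes / link_mbytes:.0%} of the link)")
```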
This presupposes everyone has a desktop computer with M.2 slots. Spare a thought for people who don't have that option and must use SATA cables to run all their drives.
Nah, deal with it. I specifically went to Ryzen for this reason because I wasn't about to buy a PCI-E adapter or a 990FX.
You missed the part where I mentioned it's a radical idea. HDDs and mainstream SSDs have had the SATA hookup for years.
If you've been around long enough to consider 14-20TB, you're somebody who understands the behaviors and complexities of more than one computer, period.
I don't make the rules, I just recognize the patterns. We're all mentally ill data hoarders at some point.
They may not have enough spare PCIe slots for an NVMe card, either.
Problem?
Everyone has a spare x1 somewhere. It's running out of PCI-E bandwidth, or lane splitting you didn't plan for, that makes it a non-starter for consumer boards.
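Quick numbers on why that spare x1 falls flat for NVMe; the per-lane figures are the usual rounded ones, and the 3.5 GB/s drive rating is just an assumed example spec:

```python
# Approximate usable per-lane bandwidth by PCIe generation vs. an example NVMe spec.
# The 3.5 GB/s drive rating is an assumed example (roughly a Gen3 x4 SSD spec sheet number).
per_lane_gbs = {"PCIe 2.0": 0.5, "PCIe 3.0": 0.985, "PCIe 4.0": 1.969}
drive_rating_gbs = 3.5

for gen, bw in per_lane_gbs.items():
    print(f"{gen} x1: ~{bw:.1f} GB/s -> "
          f"{bw / drive_rating_gbs:.0%} of a {drive_rating_gbs} GB/s drive")
```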
Ok big boy, let's see you find, ANYWHERE, an SSD with 20TB of space for less than $200.
Best I can do right now is a 3.2TB WarpDrive and it's a FATBOI.
You're not going to want that for much other than scratch or a data jail though.
>90PB write cycle life tells you everything. LSI prices go kerchunk in a few years though so I'd hold out for better.
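For scale, here's what >90PB works out to on a 3.2TB drive; the 5-year window is just an assumption to express it in DWPD terms:

```python
# What ">90PB write cycle life" means for a 3.2TB drive, expressed as drive-writes-per-day.
# The 5-year window is an assumption purely for the DWPD conversion.
endurance_tb = 90_000          # 90 PB expressed in TB
capacity_tb = 3.2
years = 5

full_writes = endurance_tb / capacity_tb            # ~28,125 full drive writes
dwpd = full_writes / (years * 365)
print(f"{full_writes:,.0f} full writes, ~{dwpd:.0f} DWPD over {years} years")
# 28,125 full writes, ~15 DWPD over 5 years
```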
Also, you'll run out of lanes anyway. Those x8 cards don't have many consumer-friendly options, and literally nobody here is going to spend the $$$.
Another idea... What if I use the 20 TB drives internally in hardware RAID through the BIOS, and keep the externals I've got now (4+8 TB in the enclosure, and another 8 TB external HDD) as manual backups?
Sort out your data first. If something happens to that RAID, you won't have recovery options for those specific drives, which is a bad hangup. Choose carefully.
I arrange things like this:
-Start of the partition-
Static Game Library 1 > Frequently used, extremely LARGE file and data size, prestaged for zero growth, max performance, replaceable, fair burden
-iSCSI PADDING-
Static Game Library 2 > Frequently used, extremely LARGE file and data size, prestaged for zero growth, max performance, replaceable, heavy burden
-iSCSI PADDING-
TV Shows > Rarely used, VERY LARGE data set, slow growth pattern, max performance, replaceable, heavy burden
ISO > Rarely used, LARGE data set, slow growth pattern, max performance, replaceable, heavy burden
VMs > Rarely used, LARGE data set, slow growth pattern, max performance, replaceable, fair burden
EMU > Rarely used, LARGE collection, stable growth pattern, max performance, semi-replaceable, fair burden
Installers > Frequently used, normal file and data size, slow growth pattern, max performance, non-replaceable
Photostock > Rarely used, normal file and data size, no growth pattern, max performance, replaceable, fair burden
Galleries > Frequently used, LARGE collection, fast growth pattern, max performance, replaceable? unknown burden
Game backups > Rarely used, normal file and data size, slow growth pattern, high performance, non-replaceable
Distribution shares > Always used, normal file and data size, steady growth, high performance, replaceable, fair burden
Artbook/Manga/Other > Always used, normal file and data size, steady growth, high performance, replaceable, fair burden
I keep as few non-replaceable collections as possible on the new disk during the first 90-180 days. I figured out the importance of keeping the cluster size low and putting anything LARGE with zero growth at the front of the disk for maximum performance; you want to set and forget. Go down the size chart until you encounter a growth pattern, then make a choice to mirror the non-replaceable collections. Remember to copy, not move. Collections with tons of small files, like Unity/Unreal projects, should probably exist as compressed archives, with the uncompressed copies on SSDs (because it's a work set). I'm in the middle of figuring out a similar situation for myself, so this is good motivation to get on that. I have a lot to sort.
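For the "copy, not move" step, a rough sketch of mirroring a non-replaceable collection and verifying the copy before trusting it; the paths are placeholders, not anyone's actual layout:

```python
# Sketch of "copy, not move" for non-replaceable collections: mirror the directory
# to a second disk and verify file hashes before trusting the copy.
# Source/destination paths are placeholders.
import hashlib
import shutil
from pathlib import Path

SRC = Path("/mnt/disk20tb/Installers")     # non-replaceable collection on the new drive
DST = Path("/mnt/external8tb/Installers")  # mirror on one of the external HDDs

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

shutil.copytree(SRC, DST, dirs_exist_ok=True)   # copy, never move

# Verify every file made it across intact before touching the originals.
for src_file in SRC.rglob("*"):
    if src_file.is_file():
        dst_file = DST / src_file.relative_to(SRC)
        assert dst_file.is_file() and sha256(src_file) == sha256(dst_file), src_file
print("mirror verified")
```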
When the smaller disks start to look barren, you can consolidate: copy stuff over to a faster disk with fewer hours on it and keep the smaller disks around as offline backups or something. It sounds dumb but it has worked out fine for me and others. Also pay attention to drive health reports once in a while. That's about all.
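For the health reports, something like this does the rounds, assuming smartmontools is installed; the device paths are placeholders for whatever disks are actually in the box:

```python
# One way to keep an eye on drive health: poll smartctl's overall verdict per disk.
# Assumes smartmontools is installed; device paths are placeholders.
import subprocess

DISKS = ["/dev/sda", "/dev/sdb", "/dev/sdc"]

for disk in DISKS:
    result = subprocess.run(
        ["smartctl", "-H", disk],            # -H prints the overall SMART health verdict
        capture_output=True, text=True,
    )
    verdict = [line for line in result.stdout.splitlines()
               if "overall-health" in line or "SMART Health Status" in line]
    print(disk, verdict[0] if verdict else "no verdict (check smartctl output manually)")
```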