If I understand this correctly, this DAS only supports RAID 0 or 1 for the first two drives, and Windows won't allow software RAID on it (e.g. RAID 1+0 or 5 with four drives, which is probably the main motivation for buying this thing), so what is really the point of this vs. just buying individual external HDD cases?
(I assume this will work fine in Linux though, since Linux will happily do software RAID even over USB sticks if you really wanted to.)
I get that this can be used as an extension for an existing NAS, but the features and flexibility seem far too limited.
Seems mildly interesting as a simple storage expansion for a NAS, but I've had mostly negative experiences with USB to SATA bridges, so I wonder how well this one handles HDD power management, especially disk spindown. Most USB bridges just ignore host commands and have their own non-configurable timers; hdparm doesn't work, and sdparm only works with some of them.
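For what it's worth, this is roughly how I check a bridge. A minimal sketch only, assuming the disk shows up as /dev/sdb (just a placeholder), you're root, and the bridge passes ATA commands through at all:

    import subprocess, time

    DEV = "/dev/sdb"  # placeholder for the disk behind the USB bridge

    # -S 12 means "spin down after 12 * 5 s = 60 s of inactivity"
    subprocess.run(["hdparm", "-S", "12", DEV], check=True)

    # leave the disk idle for longer than the timer
    time.sleep(120)

    # -C reports the current power mode without spinning the drive back up
    state = subprocess.run(["hdparm", "-C", DEV], capture_output=True, text=True)
    print(state.stdout)  # "standby" here means the bridge actually honored the command

If the drive is still reported as active/idle after that, the bridge is doing its own thing and no amount of tuning on the host will help.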
That's my concern too, along with SMART support, all of which is essential if you try to run any kind of RAID on top of it.
It definitely is too expensive though, encroaching on cheap NAS territory - in fact, the normal retail price is the same as their entry level 4 bay NAS, F4-212.
It's certainly too expensive.
But I think a typical NAS is a fairly bad deal anyway, as the only real advantage is getting something that should almost work "out of the box" (apart from installing disks and setup). Those who don't strictly need it to be network shared, and don't have large enough data sets, should just manage with internal and external drives (extra copies), and only cross that bridge when they have to. And for the other group, individuals or small companies who do need it network attached or have larger data sets, building a proper file server should also be considered.

The typical NAS boxes have really poor, underpowered and usually "consumer grade" hardware, despite carrying a premium price. For that money you can almost buy server or workstation grade parts (excl. redundant PSUs, controllers etc.), which will greatly outperform most of these boxes, be properly cooled, have ECC and all the desired features, upgradable RAM, and most importantly be modular when something eventually fails. Rack mounted cases are desirable if it's going to live in a server closet, but if it's in a (home) office, a standard quiet case is smarter (like a Fractal Design 7/XL), as hot-swapping is really not necessary when running a single server of 2-12 disks. You can find pretty capable motherboards (e.g. Supermicro) for ~$300-400, a quad core low power Xeon (even if it's LGA1200, they are still great deals), etc. for acceptable prices.

And OS setup isn't hard either: a basic install of a Linux distro plus drive setup and sharing is all that's needed (rough sketch below). Plus there's no worry that the manufacturer will discontinue software updates; you can keep rocking this thing for 10+ years.
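To give an idea of what I mean by "drive setup and sharing", a rough sketch only, assuming Linux with mdadm and Samba installed; the device names and share path are made up:

    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # mirror two disks, put ext4 on top, mount it somewhere shareable
    run(["mdadm", "--create", "/dev/md0", "--level=1", "--raid-devices=2",
         "/dev/sda", "/dev/sdb"])
    run(["mkfs.ext4", "/dev/md0"])
    run(["mkdir", "-p", "/srv/share"])
    run(["mount", "/dev/md0", "/srv/share"])
    # then point a [share] section in /etc/samba/smb.conf at /srv/share and restart smbd

That's basically the whole "NAS software stack" for a small setup, and every piece of it is documented and replaceable.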
And it's very useful for any user, since it provides cool compression features and data integrity checks.
In some cases it's even easier to use than classic ext4/XFS. And it kind of works better than Btrfs.
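For example, the two features I mean are one command each. A minimal sketch, assuming an existing pool named "tank" (just a placeholder):

    import subprocess

    POOL = "tank"  # placeholder pool name

    subprocess.run(["zfs", "set", "compression=lz4", POOL], check=True)  # transparent compression
    subprocess.run(["zpool", "scrub", POOL], check=True)                 # verify every block against its checksum
    subprocess.run(["zpool", "status", "-v", POOL], check=True)          # reports any checksum errors found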
There are certainly use cases where ZFS makes sense, especially companies that need a large pool for a varied mix of VMs, with some mostly static data and some heavily used, and the need to grow with larger/more drives over time. And for such use cases, data deduplication, compression, SSD and RAM caching etc. are very useful too.
But if we're talking about users with up to 8 drives (probably a single user or a tiny office), then it's really not worth the hassle and all the downsides. Not only is it very demanding of resources, especially RAM, but it can also add some overhead, so it really needs to run on a dedicated "server" (which is why a DAS is not the right option for ZFS). The management and maintenance will require a lot more effort and experience than a basic software RAID in either Linux or Windows (I can't speak for Mac), plus ZFS isn't better than e.g. md/ext4 in terms of data integrity (md supports scrubbing too, see below).
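What I mean by md scrubbing, as a minimal sketch (assumes an array at /dev/md0 and root; distros usually schedule this for you via a cron job or systemd timer anyway):

    from pathlib import Path

    MD = Path("/sys/block/md0/md")  # assumes the array is /dev/md0

    MD.joinpath("sync_action").write_text("check\n")        # start a scrub (read-only consistency check)
    print(MD.joinpath("sync_action").read_text().strip())   # shows "check" while the scrub runs
    print(MD.joinpath("mismatch_cnt").read_text().strip())  # non-zero means inconsistent stripes were found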
I'm not a fan of RAID 5, not just because of the higher risk, but because the performance is usually horrible. I'd usually recommend RAID 1 for 2 disks, RAID 1+0 for 4 disks, and RAID 6 for larger arrays (albeit RAID 6 is usually slow too). RAID 1+0 with four disks currently allows at least 2x24 TB usable with CMR disks or 2x28 TB with SMR disks, which will give most prosumers/small business owners pretty good mileage (quick math below). And it might be worth considering just replacing a "simple" RAID setup with larger drives when you grow out of it, rather than having a fancy setup mixing old and new drives.
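Back-of-the-envelope usable capacity for those layouts (equal-sized disks, ignoring filesystem overhead and the TB vs. TiB difference):

    # usable capacity for common RAID layouts, with n equal disks of `size` TB
    def usable_tb(level, n, size):
        if level == "raid1":    # mirror: capacity of a single disk
            return size
        if level == "raid10":   # striped mirrors: half the raw capacity
            return (n // 2) * size
        if level == "raid5":    # one disk's worth of parity
            return (n - 1) * size
        if level == "raid6":    # two disks' worth of parity
            return (n - 2) * size
        raise ValueError(level)

    print(usable_tb("raid10", 4, 24))  # 4x 24 TB CMR -> 48 TB usable
    print(usable_tb("raid10", 4, 28))  # 4x 28 TB SMR -> 56 TB usable
    print(usable_tb("raid6", 6, 24))   # 6x 24 TB     -> 96 TB usable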