Friday, June 9th 2023
Sabrent Introduces its Quad NVMe SSD to PCIe 4.0 x16 Card
The Sabrent Quad NVMe SSD to PCIe 4.0 x16 Card (EC-P4BF) is the perfect complement for a desktop that requires additional high-performance storage. Add one, two, three, or four NVMe SSDs with a single adapter in a physical x16 PCIe slot. A bifurcation setting in the BIOS is required. Only M.2 M key SSDs are supported, but older and newer generation SSDs in the 2230/2242/2260/2280 form factors will work at up to PCIe 4.0 speeds. The adapter is also backward compatible with PCIe 3.0/2.0 slots. Drives can be accessed individually or placed into a RAID via Intel VROC, AMD Ryzen NVMe RAID, UEFI RAID, or software-based RAID through Windows Storage Spaces when the respective criteria are met.
High-performance drives and systems may require high-end cooling, and this adapter has you covered. It's constructed from aluminium for physical stability and improved heat dissipation, and it includes thermal padding for all four SSDs to keep things cool and in place. Active cooling for high-performance environments is optional via a switchable fan. The adapter is plug-and-play with driverless operation, and rear-mounted LEDs give an at-a-glance view of drive status. The host must support PCIe bifurcation (lane splitting) to access more than one drive, so be sure to check your motherboard's manual ahead of time.

More Storage
Add up to four high-performance NVMe SSDs to a system with a single adapter in a physical x16 PCIe slot with the Quad NVMe SSD to PCIe 4.0 x16 Card (EC-P4BF). The system must support PCIe bifurcation (lane splitting) to use more than one SSD, and the full 16 lanes are required for three or four drives.

Built Cool
Designed with quality aluminium for physical stability and top-notch cooling. Thermal padding is included to ensure the best cooling interface for your SSDs. Optional active cooling (fan) is available via a rear-positioned switch for high-performance environments. Your drives won't throttle in here.

PCIe 4.0 Compliant
Supports even the fastest PCIe 4.0 SSDs, and also works with older and newer generation SSDs at up to 4.0 speeds. Works in older 3.0/2.0 systems that have PCIe bifurcation support. Compatible with NVMe SSDs in the M.2 2230/2242/2260/2280 form factors for your convenience.

Supported By Sabrent
This card requires M.2 M key NVMe SSDs and UEFI PCIe bifurcation support to work properly. The destination PCIe slot must be x16 in physical length. Please visit sabrent.com for more information and contact our technical support team for assistance.

The SABRENT Quad NVMe SSD to PCIe 4.0 x16 Card (EC-P4BF) is on sale now at Amazon.
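The bifurcation requirement is worth unpacking: the card has no onboard PCIe switch, so the motherboard firmware must split the slot's sixteen lanes into independent x4 links, one per M.2 socket. A minimal Python sketch of that lane arithmetic (illustrative only; the function and numbers here are our own, not vendor tooling):

```python
# Illustrative sketch only: the lane arithmetic behind x4/x4/x4/x4 bifurcation.
# Nothing here is Sabrent tooling; the names and figures are assumptions.

def bifurcate(slot_lanes: int = 16, lanes_per_ssd: int = 4) -> list[range]:
    """Split one physical slot's lanes into equal x4 links, one per M.2 SSD."""
    if slot_lanes % lanes_per_ssd != 0:
        raise ValueError("lanes must divide evenly into x4 links")
    return [range(start, start + lanes_per_ssd)
            for start in range(0, slot_lanes, lanes_per_ssd)]

print(len(bifurcate(16)))  # 4 -> a full x16 slot feeds all four M.2 sockets
print(len(bifurcate(8)))   # 2 -> an x16 slot wired at x8 only feeds two
```

This is also why an x16 slot that runs at x8 electrically (common when lanes are shared with the GPU) can only drive two of the four sockets.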
Source: Sabrent Blog
Comments on Sabrent Introduces its Quad NVMe SSD to PCIe 4.0 x16 Card
One useful case for PCIe 5.0 will be several devices in x16, x8, and x4 slots no longer competing for bandwidth. You could run a Gen5 GPU in the x16 slot on an x8 connection, with the other eight lanes bifurcated to a second x8 slot where an AIC could be attached. But... which Gen5 AIC? NVMe? There are already 3-5 NVMe slots on a good motherboard, two of which are Gen5 on AM5.
So, the issue is that there is plenty of Gen5 connectivity from the Zen 4 CPU (24 lanes), but few devices that can meaningfully use all that bandwidth due to the slow development of Gen5 peripherals. You certainly can use any drives, but their performance will vary depending on what you need them for. DRAM-less drives typically cannot use a cache, so some workloads would be affected. For a RAID 1 or RAID 0 setup, it's good to have two similarly capable drives of the same capacity. If you set up a mirrored RAID on a 1 TB and a 2 TB drive, you will lose 1 TB on that 2 TB drive. So there are things to consider. You should never blindly buy just any NVMe drives, but you can, of course.
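To put rough numbers on that mirroring point, a quick sketch (illustrative only; the helper is hypothetical, and real implementations differ in how they handle metadata and leftover space):

```python
# Back-of-the-envelope usable-capacity math for the mismatched-drive point above.
# Illustrative assumption: a plain stripe/mirror that only uses each member up
# to the size of the smallest drive in the set.

def usable_tb(drive_sizes_tb: list[float], level: str) -> float:
    """Usable capacity of a simple stripe (RAID 0) or mirror (RAID 1)."""
    if level == "raid0":   # a plain stripe uses the smallest member size per drive
        return min(drive_sizes_tb) * len(drive_sizes_tb)
    if level == "raid1":   # a mirror is only as large as its smallest member
        return min(drive_sizes_tb)
    raise ValueError(f"unsupported level: {level}")

print(usable_tb([1.0, 2.0], "raid1"))  # 1.0 -> the 2 TB drive loses 1 TB
print(usable_tb([1.0, 2.0], "raid0"))  # 2.0 -> a stripe wastes the extra 1 TB too
```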
I should clarify, amid this RAID conversation: I'm not considering the SSD RAID array as the boot drive, but as a secondary drive. SSD RAID as a boot drive on typical consumer hardware doesn't make much sense to me unless it's mirroring.
Of course, even SATA SSDs can be very viable for latency-focused workloads; they're just not the best. And at some point Gen5 won't be the best either, which will probably be when I actually want to try doing this, if at all.
For more complex situations I'd tend towards btrfs.
Windows Storage Spaces has improved greatly over the years; I used to be 'against' its use, but in my own incidental testing it's been reliable and tolerant of system changes/swapping.
Storage Spaces arrays absolutely will and do 'transfer' between (compatible Windows) systems. (Though I don't recall whether a RAID 5 array from a Windows Server install will 'work' on a Windows 10/11 install.)
Also, if you research some of the last reviews on RAIDing Optane drives, Storage Spaces 'can' outperform VROC.
In my own experimentation, AMD-RAID and Optane 'don't get along' well. (Severe performance regression beyond a 2-drive striped array on AMD-RAID.)
A striped Storage Space was reliably 'faster' than the same 4x 118 GB P1600Xs in an AMD-RAID RAID 0 (no matter what cache tag size was used).
With a controller it would be interesting, at a good price.
Prosumer-enthusiast-gamer mobo manufacturers have highly inconsistent support for a feature that's (underneath it all) actually extraordinarily common. (Also, these 'simple' bifurcated cards seem to be sold at some seriously high prices for how simply they're constructed.)
To an enthusiast, gamer, tinkerer, etc., the mere mention of 'bifurcation' can stir up sourness.
I'm aware of how common and useful bifurcated devices are in server/industrial use:
I have a couple 'sets' of MaxCloudON bifurcated risers for x8, x16, etc. Those MaxCloudON risers were made for and by a remote-rendering and GPGPU server company overseas.
I also own a Gen4 Asus Hyper M.2 Quad-NVMe card that I've filled w/ both 16GB Optane M10s and 118GB Optane P1600Xs in testing.
To an enthusiast-tinkerer like me, switch-based cards are much more 'broad' in how they can be used. Switched PCIe expander-adapter cards can even act as a 'southbridge' to attach new features to older/unsupported equipment.
Ex. I have an Amfeltec Gen2 x16 -> 4x x4-lane M.2 M-key card; it's probably gonna get slotted into a PCIe-Gen1.0 dual S940 K8N-DL or a re-purposed H81-BTC.
All that said, I'd bet bifurcation is usually preferred in industry, as it means less power and heat, with less latency. In 'big data' applications, that teensy bit of latency could be a (stacking) issue.
I could see a professional having a generalized preference for bifurcation over switching.
In 'serious' use cases, bifurcation would be more efficient, no?
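To illustrate that latency-stacking point, here is a toy model in Python; the per-hop and baseline figures below are assumptions for the sake of the example, not measurements of any real switch or drive:

```python
# Rough illustration of why per-hop switch latency "stacks".
# Both constants are assumed values, not benchmarks.

SWITCH_HOP_NS = 150   # assumed one-way penalty per PCIe packet-switch hop
NVME_READ_US = 10.0   # assumed baseline NVMe random-read latency

def effective_latency_us(switch_hops: int) -> float:
    """Baseline drive latency plus a fixed cost for each switch in the path."""
    return NVME_READ_US + switch_hops * SWITCH_HOP_NS / 1000

for hops in range(3):  # hops=0 is the bifurcated (switchless) case
    print(f"{hops} hop(s): {effective_latency_us(hops):.2f} us")
```

Each individual hop barely registers next to the drive's own latency, but in a deep topology every switch in the path adds its own fixed cost, which is the "stacking" concern.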
I believe QNAP and OWC(?) make a few like that as well.
Heck, even my old Gen2 Amfeltec (switched) card is built like that.
In fact, now that I've run through it, I might venture to say that the (double-sided) configuration is 'most common' outside of the gamer-enthusiast market(s).
TBQH, I think Asus, MSI, Gigabyte, etc. have cards laid out 4-to-a-single-side partly because they look more like a fancy slim GPU, and partly due to the inconsistent airflow patterns amongst DIYers' and SIs' custom builds.