Friday, June 9th 2023

Sabrent Introduces its Quad NVMe SSD to PCIe 4.0 x16 Card

The Sabrent Quad NVMe SSD to PCIe 4.0 x16 Card (EC-P4BF) is the perfect complement for a desktop that requires additional high-performance storage. Add one, two, three, or four NVMe SSDs with a single adapter in a physical x16 PCIe slot. A bifurcation setting in the BIOS is required. Only M.2 M key SSDs are supported, but older and newer generation SSDs in the 2230/2242/2260/2280 form factors will work at up to PCIe 4.0 speeds. The adapter is also backward compatible with PCIe 3.0/2.0 slots. Drives can be accessed individually or placed into a RAID array via Intel VROC, AMD Ryzen NVMe RAID, UEFI RAID, or software-based RAID through Windows Storage Spaces when the respective criteria are met.

High-performance drives and systems may require high-end cooling, and this adapter has you covered. It's constructed from aluminium for physical stability and improved heat dissipation, and it includes thermal padding for all four SSDs to keep things cool and in place. Optional active cooling for high-performance environments is provided by a switchable fan. The adapter is plug-and-play with driverless operation, and rear-mounted LEDs give a quick visual indication of each drive's status. The host must support PCIe bifurcation (lane splitting) to access more than one drive, so be sure to check your motherboard's manual ahead of time.
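As a quick sanity check after installation, the following minimal sketch (Python on Linux, reading sysfs; not a Sabrent utility) lists the NVMe controllers the operating system can see, which makes it easy to tell whether the slot was actually bifurcated:

```python
# Minimal sketch (Linux, sysfs assumed): list the NVMe controllers the kernel
# sees, so you can confirm every SSD on the adapter enumerated after enabling
# bifurcation. Illustration only, not a vendor tool.
from pathlib import Path

def list_nvme_controllers():
    """Return (name, model) pairs for each NVMe controller visible in sysfs."""
    base = Path("/sys/class/nvme")
    if not base.exists():
        return []
    controllers = []
    for ctrl in sorted(base.glob("nvme*")):
        model_file = ctrl / "model"
        model = model_file.read_text().strip() if model_file.exists() else "unknown"
        controllers.append((ctrl.name, model))
    return controllers

if __name__ == "__main__":
    drives = list_nvme_controllers()
    for name, model in drives:
        print(f"{name}: {model}")
    # Fewer controllers than installed SSDs usually means the slot is still
    # running as a single x16 link, i.e. bifurcation is not enabled.
    print(f"{len(drives)} NVMe controller(s) detected")
```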
More Storage
Add up to four high-performance NVMe SSDs to a system with a single adapter in a physical x16 PCIe slot using the Quad NVMe SSD to PCIe 4.0 x16 Card (EC-P4BF). The system must support PCIe bifurcation (lane splitting) to use more than one SSD, and the full 16 lanes are required for three or four drives.
Built Cool
Designed with quality aluminium for physical stability and top-notch cooling. Thermal padding is included to ensure the best cooling interface for your SSDs. Optional active cooling (fan) via a rear-positioned switch for high-performance environments. Your drives won't throttle in here.
PCIe 4.0 Compliant
Supports even the fastest PCIe 4.0 SSDs but also works with older and newer generation SSDs at up to 4.0 speeds. Works in older 3.0/2.0 systems that have PCIe bifurcation support. Compatible with NVMe SSDs in the M.2 2230/2242/2260/2280 form factors for your convenience.
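For a rough sense of what those generational speeds mean per drive, here is a minimal sketch (Python; the per-lane figures are the nominal PCIe spec rates after encoding overhead, not Sabrent numbers) that computes theoretical link bandwidth for the x4 link each M.2 slot receives:

```python
# Minimal sketch: nominal PCIe link bandwidth per generation and lane count.
# Per-lane rates (GB/s, after encoding overhead) are the usual spec-sheet
# figures: Gen2 uses 8b/10b at 5 GT/s, Gen3/4/5 use 128b/130b.
PER_LANE_GBPS = {
    2: 5.0 * 8 / 10 / 8,      # ~0.5 GB/s
    3: 8.0 * 128 / 130 / 8,   # ~0.985 GB/s
    4: 16.0 * 128 / 130 / 8,  # ~1.969 GB/s
    5: 32.0 * 128 / 130 / 8,  # ~3.938 GB/s
}

def link_bandwidth(gen: int, lanes: int) -> float:
    """Theoretical one-direction bandwidth in GB/s for a PCIe link."""
    return PER_LANE_GBPS[gen] * lanes

if __name__ == "__main__":
    for gen in (2, 3, 4):
        # Each M.2 slot on a quad carrier card gets an x4 link.
        print(f"Gen{gen} x4: {link_bandwidth(gen, 4):.1f} GB/s per drive")
    print(f"Gen4 x16 (whole card): {link_bandwidth(4, 16):.1f} GB/s")
```

In other words, a Gen4 SSD in this card tops out near 7.9 GB/s per drive, while the same drive in a Gen3 system is capped around 3.9 GB/s.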

Supported By Sabrent
This card requires M.2 M key NVMe SSDs and UEFI PCIe bifurcation support to work properly. The destination PCIe slot must be x16 in physical length. Please visit sabrent.com for more information and contact our technical support team for assistance.
The SABRENT Quad NVMe SSD to PCIe 4.0 x16 Card (EC-P4BF) is on sale now at Amazon.
Source: Sabrent Blog

45 Comments on Sabrent Introduces its Quad NVMe SSD to PCIe 4.0 x16 Card

#26
damric
Tek-Check: Of course it does. Each NVMe drive has its own distinct features, such as the controller, supported speeds, number of channels for NAND attachment, DRAM or DRAM-less operation, etc.
If you wish to fully utilize what a Gen4 NVMe drive can do, check the reviews and look for drives that have one of these controllers: Phison PS5018-E18, Silicon Motion SM2264, Samsung Pascal S4LV008, etc. If you do not need top-notch drives, you can go for drives with controllers one tier down, such as the Phison PS5021T, SM2267 or Samsung Elpis S4V003.

What worries me about such AICs is that they block airflow towards the GPU. If a motherboard has two x16 slots with x8/x8 bifurcation, I'd install the NVMe AIC into the first one, closer to the CPU, and the GPU into the second one. This way the airflow towards the GPU is free from obstacles.


Almost no one in the world needs an AIC with PCIe 5.0 support. What would you do with it?
So I wouldn't be able to take a random bunch of m.2 drives and just stick them in there and expect them to work?
Posted on Reply
#27
A Computer Guy
enb141: The other problem is that if your motherboard breaks or your BIOS crashes, so does your RAID.
If I remember correctly, if you use software RAID (like Windows Storage Spaces) you can move the array between different hardware, but yes, with an extreme hardware failure your storage can still be broken.
enb141: That one doesn't have hardware RAID either, so it has the same problem.
Why do you need hardware RAID for SSDs, especially at a time when we have CPUs with plenty of cores to spare?
Posted on Reply
#28
Tek-Check
GreenReaper: Plus, the sooner it comes out, the sooner it'll be available at a price I can actually justify!
You will have GPUs with a Gen5 interface next year; however, you will not be "using" it to its full capability. This is because current high-end GPUs can barely saturate a Gen4 x8 link, as shown in TPU testing a few months ago.

One useful case for PCIe 5.0 will be several devices in x16, x8 and x4 slots no longer competing for bandwidth. You could have a Gen5 GPU in the x16 slot using an x8 connection, bifurcated with the second x8 slot where an AIC could be attached. But... which Gen5 AIC? NVMe? There are already 3-5 NVMe slots on a good motherboard, two of which are Gen5 on AM5.

So, the issue is that there is plenty of Gen5 connectivity from the Zen4 CPU (24 lanes), but few devices that can meaningfully use all the available bandwidth due to the slow development of Gen5 peripherals.
damric: So I wouldn't be able to take a random bunch of m.2 drives and just stick them in there and expect them to work?
You certainly can, but their performance will vary depending on what you need those drives for. DRAM-less drives typically cannot use cache, so some workloads would be affected. For a RAID 1 or 0 setup, it's good to have two similarly capable drives with the same capacity. If you set up a mirrored RAID on 1TB and 2TB drives, you will lose 1TB of that 2TB drive. So, there are things to consider. You should never blindly just buy any NVMe drives, but you can, of course.
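To make that capacity arithmetic concrete, here is a minimal sketch (Python, illustrative only, not from the thread) of the usual usable-capacity rules for simple striped and mirrored sets:

```python
# Minimal sketch: usable capacity of simple RAID levels (illustrative only).
def usable_capacity_tb(drive_sizes_tb, level):
    """Usable capacity in TB for a RAID 0 (stripe) or RAID 1 (mirror) set."""
    if level == 0:
        # Striping is limited by the smallest member times the member count.
        return min(drive_sizes_tb) * len(drive_sizes_tb)
    if level == 1:
        # Mirroring only ever exposes the smallest member.
        return min(drive_sizes_tb)
    raise ValueError("only RAID 0 and RAID 1 handled here")

# Mirroring a 1 TB and a 2 TB drive leaves 1 TB usable, as noted above.
print(usable_capacity_tb([1, 2], 1))  # 1
print(usable_capacity_tb([1, 2], 0))  # 2 (the extra 1 TB is again wasted)
```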
Posted on Reply
#29
enb141
A Computer Guy: If I remember correctly, if you use software RAID (like Windows Storage Spaces) you can move the array between different hardware, but yes, with an extreme hardware failure your storage can still be broken.


Why do you need hardware RAID for SSDs, especially at a time when we have CPUs with plenty of cores to spare?
No, software RAID is limited to that specific computer; if, for example, Windows crashes hard enough, it could destroy your software RAID. With hardware RAID, you just move the card with all your SSDs to another computer and that's it.
Posted on Reply
#30
Tek-Check
enb141: The problem isn't the PCIe lanes; the real problem is that it doesn't have hardware RAID, so if the BIOS and/or the motherboard crashes, your RAID goes as well.
True that.
Posted on Reply
#31
A Computer Guy
enb141: No, software RAID is limited to that specific computer; if, for example, Windows crashes hard enough, it could destroy your software RAID.
No, that doesn't seem right. Crashing and moving are separate topics (I'm talking about moving): you should be able to move Windows-created RAID disks to another Windows machine, and Windows should be able to mount the array. If your computer is crashing, any number of things can go wrong if data in the process of being written wasn't completed to the disk, RAID or not.
enb141: With hardware RAID, you just move the card with all your SSDs to another computer and that's it.
Yea that would work.
Posted on Reply
#32
damric
Tek-Check: You will have GPUs with a Gen5 interface next year; however, you will not be "using" it to its full capability. This is because current high-end GPUs can barely saturate a Gen4 x8 link, as shown in TPU testing a few months ago.

One useful case for PCIe 5.0 will be several devices in x16, x8 and x4 slots no longer competing for bandwidth. You could have a Gen5 GPU in the x16 slot using an x8 connection, bifurcated with the second x8 slot where an AIC could be attached. But... which Gen5 AIC? NVMe? There are already 3-5 NVMe slots on a good motherboard, two of which are Gen5 on AM5.

So, the issue is that there is plenty of Gen5 connectivity from the Zen4 CPU (24 lanes), but few devices that can meaningfully use all the available bandwidth due to the slow development of Gen5 peripherals.


You certainly can, but their performance will vary depending on what you need those drives for. DRAM-less drives typically cannot use cache, so some workloads would be affected. For a RAID 1 or 0 setup, it's good to have two similarly capable drives with the same capacity. If you set up a mirrored RAID on 1TB and 2TB drives, you will lose 1TB of that 2TB drive. So, there are things to consider. You should never blindly just buy any NVMe drives, but you can, of course.
I would only want to consolidate existing drives into one slot. Say I had some mixed 1TB, 2TB, PCIe 3.0 and 4.0 M.2 drives: it would read them all fine, yes? No need for RAID.
Posted on Reply
#33
kapone32
damric: So I wouldn't be able to take a random bunch of m.2 drives and just stick them in there and expect them to work?
If they are split in the BIOS, you would see each individual drive. It is best practice, though, to try to have drives with the same controller. In fact, with RAID you also want to make sure they are the same capacity. There is nothing to say, though, that you cannot actually do it.
A Computer Guy: No, that doesn't seem right. Crashing and moving are separate topics (I'm talking about moving): you should be able to move Windows-created RAID disks to another Windows machine, and Windows should be able to mount the array. If your computer is crashing, any number of things can go wrong if data in the process of being written wasn't completed to the disk, RAID or not.

Yea that would work.
The only issue is Windows 11 TPM. I don't know how, but it seems that is why NVMe is such a pain in the butt to format. If you are using an existing Windows install you can update your entire OS without needing to worry about software RAID, but if you take your drive out of a Windows PC and just put it in another, it might not automatically give you the foreign disk option.
Posted on Reply
#34
A Computer Guy
kapone32: If they are split in the BIOS, you would see each individual drive. It is best practice, though, to try to have drives with the same controller. In fact, with RAID you also want to make sure they are the same capacity. There is nothing to say, though, that you cannot actually do it.


The only issue is Windows 11 TPM. I don't know how, but it seems that is why NVMe is such a pain in the butt to format. If you are using an existing Windows install you can update your entire OS without needing to worry about software RAID, but if you take your drive out of a Windows PC and just put it in another, it might not automatically give you the foreign disk option.
Ah, TPM, I didn't consider that. Also, if you're using BitLocker that might be a complication, and/or if you swap your CPU and you're using your CPU's TPM instead of an external one.
I should clarify: in my mind (amid this RAID conversation) I'm not considering the SSD RAID array as the boot drive but as a secondary drive. SSD RAID as a boot drive on typical consumer hardware doesn't make much sense to me unless it's mirroring.
Posted on Reply
#35
GreenReaper
Tek-Check: You could have a Gen5 GPU in the x16 slot using an x8 connection, bifurcated with the second x8 slot where an AIC could be attached. But... which Gen5 AIC? NVMe? There are already 3-5 NVMe slots on a good motherboard, ...
In my case I have an ASUS mini-ITX B650E-I with just one each of Gen5 and Gen4 NVMe, plus two SATA3 ports. The case is a Chopin Max that has no external bracket or room for more than a one-slot card in the Gen5 x16. So NVMe is a feasible use of it for me; not pressing, but there aren't many other ways to use the slot unless I want to case-mod an external GPU or replace the PSU beside it with a GPU (it's been done).

Of course, even SATA SSDs can be very viable for latency-focused workloads; they're just not the best. And at some point Gen5 won't be the best either, which will probably be the time I actually want to try doing this, if at all.
Posted on Reply
#36
kapone32
A Computer Guy: Ah, TPM, I didn't consider that. Also, if you're using BitLocker that might be a complication, and/or if you swap your CPU and you're using your CPU's TPM instead of an external one.
I should clarify: in my mind (amid this RAID conversation) I'm not considering the SSD RAID array as the boot drive but as a secondary drive. SSD RAID as a boot drive on typical consumer hardware doesn't make much sense to me unless it's mirroring.
I would not recommend using a RAID array as boot, but even as secondary storage you can run into issues with Windows 11. F me, Windows 11 is so quirky that I had an M.2 drive that I was using with an adapter, and since I used the adapter to format it, I have to use one specific USB-C port on my MB to have it register.
I for one love RAID. Once you have more than 50 Epic Games titles and have to re-download or just move them, you will appreciate maxing out Windows' 2.5 GB/s write rate with a drive that costs a quarter of what a single one of the same capacity would. I have to read some more on the controller for my WD AN1500 before I buy another 4 TB NV2.
Posted on Reply
#37
GreenReaper
I like mdadm's 1.0 metadata format for simple RAID1, since it is effectively the same as no RAID at all for plain reads: the metadata all sits at the end of the device.

For more complex situations I'd tend towards btrfs.
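For reference, a minimal sketch of what that setup might look like (Python wrapping the mdadm CLI; the device names are hypothetical and the command is destructive, so treat it purely as illustration):

```python
# Minimal sketch: create a RAID1 array with mdadm's 1.0 metadata format,
# which places the superblock at the end of each member. Device names are
# hypothetical; running this wipes the listed devices.
import subprocess

def create_mirror(md_device: str, members: list[str]) -> None:
    """Create a RAID1 array whose members still read like plain disks."""
    subprocess.run(
        [
            "mdadm", "--create", md_device,
            "--level=1",
            f"--raid-devices={len(members)}",
            "--metadata=1.0",
            *members,
        ],
        check=True,
    )

# Example (hypothetical devices on the adapter):
# create_mirror("/dev/md0", ["/dev/nvme0n1", "/dev/nvme1n1"])
```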
Posted on Reply
#38
LabRat 891
Seeing several comments about the dangers of CPU NVMe RAID (Intel VROC / AMD RAID):
Windows Storage Spaces has improved greatly over the years; I used to be 'against' its use, but in my own incidental testing it's been reliable and tolerant of system changes/swapping.
Storage Spaces arrays absolutely will and do 'transfer' between (compatible) Windows systems. (Though I don't recall if a RAID5 from a Windows Server install will 'work' on an install of 10/11.)

Also, if you research some of the last reviews on RAIDing Optane drives, Storage Spaces 'can' outperform VROC.
In my own experimentation, AMD-RAID and Optane 'don't get along' well. (Severe performance regression beyond 2-drive striped array on AMD-RAID)
Storage Spaces Striped was reliably 'faster' than 4x 118GB P1600Xs in RAID0 AMD-RAID (no matter what cache tag size was used).
Posted on Reply
#39
Blitzkuchen
PCIe bifurcation, like every other card of this kind on sale. No controller, just a garbage PCB for $10.

With a controller it would be interesting at a good price.
Posted on Reply
#40
Tek-Check
damric: I would only want to consolidate existing drives into one slot. Say I had some mixed 1TB, 2TB, PCIe 3.0 and 4.0 M.2 drives: it would read them all fine, yes? No need for RAID.
Sure, you can run any of those individually.
Posted on Reply
#41
lexluthermiester
LabRat 891: More bifurcated cards... So exciting...
ypsylon: Another stupidly noisy bifurcated card.
Ok, seriously, why is everyone complaining about this?
Posted on Reply
#42
LabRat 891
lexluthermiester: Ok, seriously, why is everyone complaining about this?
A bifurcated card is limited in its use to platforms that support bifurcation (and expose the feature to the end user).
Prosumer/enthusiast/gamer mobo manufacturers have highly inconsistent support for a feature that's (underneath it all) actually extraordinarily common. (Also, these 'simple' bifurcated cards seem to be sold at some seriously high prices for how simply constructed they are.)
To an enthusiast, gamer, tinkerer, etc., the mere mention of 'bifurcation' can stir up sourness.

I'm aware of how common and useful bifurcated devices are in server/industrial use:
I have a couple 'sets' of MaxCloudON bifurcated risers for x8, x16, etc. Those MaxCloudON risers were made for and by a remote rendering and GPGPU server company overseas.
I also own a Gen4 Asus Hyper M.2 Quad-NVMe card that I've filled with both 16GB Optane M10s and 118GB Optane P1600Xs in testing.

To an enthusiast-tinkerer like me, the switch-based cards are much more 'broad' in how they can be used. Switched PCIe expander-adapter cards can even act as a 'southbridge' to attach new features to older/unsupported equipment.
Ex. I have an Amfeltec Gen2 x16 -> 4x x4-lane M.2 M-key card; it's probably gonna get slotted into a PCIe Gen1.0 dual S940 K8N-DL or a re-purposed H81-BTC.


All that said, I'd bet bifurcation is usually preferred in industry, as it's less power and heat, with less latency. In 'big data' applications, that teensy bit of latency could be a (stacking) issue.

I could see a professional having a generalized preference for bifurcation over switching.
In 'serious' use cases, bifurcation would be more efficient, no?
Posted on Reply
#43
lexluthermiester
LabRat 891: A bifurcated card is limited in its use to platforms that support bifurcation (and expose the feature to the end user).
Prosumer/enthusiast/gamer mobo manufacturers have highly inconsistent support for a feature that's (underneath it all) actually extraordinarily common. (Also, these 'simple' bifurcated cards seem to be sold at some seriously high prices for how simply constructed they are.)
To an enthusiast, gamer, tinkerer, etc., the mere mention of 'bifurcation' can stir up sourness.

I'm aware of how common and useful bifurcated devices are in server/industrial use:
I have a couple 'sets' of MaxCloudON bifurcated risers for x8, x16, etc. Those MaxCloudON risers were made for and by a remote rendering and GPGPU server company overseas.
I also own a Gen4 Asus Hyper M.2 Quad-NVMe card that I've filled with both 16GB Optane M10s and 118GB Optane P1600Xs in testing.

To an enthusiast-tinkerer like me, the switch-based cards are much more 'broad' in how they can be used. Switched PCIe expander-adapter cards can even act as a 'southbridge' to attach new features to older/unsupported equipment.
Ex. I have an Amfeltec Gen2 x16 -> 4x x4-lane M.2 M-key card; it's probably gonna get slotted into a PCIe Gen1.0 dual S940 K8N-DL or a re-purposed H81-BTC.
Ah! This makes sense. The frustration is easier to understand now. I was not aware of these particular problems. I was under the impression that "most" chipsets supported that function natively across all platforms. Thank you for explaining, much appreciated!
LabRat 891: All that said, I'd bet bifurcation is usually preferred in industry, as it's less power and heat, with less latency. In 'big data' applications, that teensy bit of latency could be a (stacking) issue.

I could see a professional having a generalized preference for bifurcation over switching.
In 'serious' use cases, bifurcation would be more efficient, no?
This is correct. It is more efficient as it is a "direct" connection. In rack deployments, bifurcation is preferable, even if it's less flexible.
Posted on Reply
#44
A Computer Guy
lexluthermiester: Ok, seriously, why is everyone complaining about this?
It would be nice if these cards could be smaller. Maybe double sided. (2 on each side)
Posted on Reply
#45
LabRat 891
A Computer Guy: It would be nice if these cards could be smaller. Maybe double sided. (2 on each side)
That precise configuration is extraordinarily common with the 'cheap Chinese import' expanders (both switched and bifurcated varieties).
I believe QNAP and OWC(?) make a few like that as well.
Heck, even my old Gen2 Amfeltec (switched) card is built like that.

In fact, now that I've run through it, I might venture to say that (double-sided) configuration is 'most common' outside of the gamer-enthusiast market(s).

TBQH, I think Asus, MSI, Gigabyte, etc. have cards laid out 4-to-a-single-side in part because they look more like a fancy slim GPU, and partially due to the inconsistent airflow patterns amongst DIYers' and SIs' custom builds.
Posted on Reply