Friday, June 9th 2023

Sabrent Introduces its Quad NVMe SSD to PCIe 4.0 x16 Card

The Sabrent Quad NVMe SSD to PCIe 4.0 x16 Card (EC-P4BF) is the perfect complement for a desktop that requires additional high-performance storage. Add one, two, three, or four NVMe SSDs with a single adapter in a physical x16 PCIe slot. A bifurcation setting in the BIOS is required. Only M.2 M key SSDs are supported, but older and newer generation SSDs in the 2230/2242/2260/2280 form factors will work at up to PCIe 4.0 speeds. The adapter is also backward compatible with PCIe 3.0/2.0 slots. Drives can be accessed individually or placed into a RAID via Intel VROC, AMD Ryzen NVMe RAID, UEFI RAID, or software-based RAID through Windows Storage Spaces when the respective criteria are met.
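For a sense of what the x16-to-four-x4 split means in raw numbers, here is a minimal sketch using the standard per-lane PCIe signalling rates (our own arithmetic, not figures from Sabrent); real-world throughput will be somewhat lower:

```python
# Theoretical per-drive bandwidth when an x16 slot is bifurcated into
# four x4 links. Uses raw signalling rate times line encoding only;
# protocol overhead pushes real throughput lower.
GT_PER_LANE = {"PCIe 2.0": 5.0, "PCIe 3.0": 8.0, "PCIe 4.0": 16.0}    # GT/s per lane
ENCODING = {"PCIe 2.0": 8 / 10, "PCIe 3.0": 128 / 130, "PCIe 4.0": 128 / 130}
LANES_PER_DRIVE = 4  # x16 split into 4 * x4 by bifurcation

for gen, gts in GT_PER_LANE.items():
    per_lane = gts * ENCODING[gen] / 8          # GB/s per lane
    per_drive = per_lane * LANES_PER_DRIVE      # GB/s per x4 drive
    print(f"{gen}: ~{per_drive:.2f} GB/s per drive, four drives in parallel")
```

The PCIe 4.0 line works out to roughly 7.9 GB/s per drive, which is why the card can keep even the fastest Gen4 SSDs fed, while the same adapter in a Gen2 slot would limit each drive to about 2 GB/s.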

High-performance drives and systems may require high-end cooling, and this adapter has you covered. It is constructed from aluminium for physical stability and improved heat dissipation, and it includes thermal padding for all four SSDs to keep them cool and in place. Active cooling for high-performance environments is available via a switchable fan. The adapter is plug-and-play with driverless operation, and rear-mounted LEDs show drive status at a glance. The host must support PCIe bifurcation (lane splitting) to access more than one drive, so be sure to check your motherboard's manual ahead of time.
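If you want to confirm that all installed drives actually enumerated after setting bifurcation, a quick check on a Linux host is to list the NVMe controllers and the PCIe link each one negotiated. The following is a small illustrative sketch that reads the standard sysfs attributes; it is not a Sabrent-provided tool:

```python
# List NVMe controllers visible to a Linux host and the PCIe link speed
# and width each one negotiated. Relies on the standard sysfs layout.
import glob
import os

def read_attr(dev_path, attr):
    """Return a sysfs attribute, or 'n/a' if the kernel does not expose it."""
    path = os.path.join(dev_path, attr)
    return open(path).read().strip() if os.path.exists(path) else "n/a"

for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
    name = os.path.basename(ctrl)
    model = read_attr(ctrl, "model")
    pci_dev = os.path.realpath(os.path.join(ctrl, "device"))  # PCIe endpoint
    speed = read_attr(pci_dev, "current_link_speed")   # e.g. "16.0 GT/s PCIe"
    width = read_attr(pci_dev, "current_link_width")   # e.g. "4"
    print(f"{name}: {model} | negotiated link {speed}, x{width}")
```

With bifurcation configured correctly, all four controllers should appear, each on an x4 link; a missing drive usually points to the slot not being split in the BIOS.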
More Storage
Add up to four high-performance NVMe SSDs to a system with a single adapter in a physical x16 PCIe slot with the Quad NVMe SSD to PCIe 4.0 x16 Card (EC-P4BF). The system must support PCIe bifurcation (lane splitting) to use more than one SSD, and the full 16 lanes are required for three or four drives.
Built Cool
Designed with quality aluminium for physical stability and top-notch cooling. Thermal padding is included to ensure the best cooling interface for your SSDs. Optional active cooling (fan) via a rear-positioned switch for high-performance environments. Your drives won't throttle in here.
PCIe 4.0 Compliant
Supports even the fastest PCIe 4.0 SSDs but also works with older and newer generation SSDs at up to 4.0 speeds. Works in older 3.0/2.0 systems that have PCIe bifurcation support. Compatible with NVMe SSDs in the M.2 2230/2242/2260/2280 form factors for your convenience.

Supported By Sabrent
This card requires M.2 M key NVMe SSDs and UEFI PCIe bifurcation support to work properly. The destination PCIe slot must be x16 in physical length. Please visit sabrent.com for more information and contact our technical support team for assistance.
The SABRENT Quad NVMe SSD to PCIe 4.0 x16 Card (EC-P4BF) is on sale now at Amazon.
Source: Sabrent Blog

45 Comments on Sabrent Introduces its Quad NVMe SSD to PCIe 4.0 x16 Card

#1
LabRat 891
More bifurcated cards... So exciting...

Ya know, other than Gen4 redrivers (which, not all bifurcating cards use) these cards are stupid-simple and (presumably) stupid-cheap to make.

What would actually be exciting would be an x16 card with a Gen4 switch on-board.
Even in a Gen 2 x16 slot (as long as the drives were not RAIDed or constantly accessed simultaneously) a single gen4 x4 SSD could 'see' full performance.
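Back-of-the-envelope numbers for that scenario (our arithmetic, using only the standard PCIe signalling rates):

```python
# Upstream capacity of a Gen2 x16 slot vs. what a single Gen4 x4 SSD can
# move. A PCIe switch can hand the slot's full width to whichever drive
# is active; plain bifurcation would cap each drive at Gen2 x4 instead.
gen2_lane = 5.0 * (8 / 10) / 8       # ~0.50 GB/s per PCIe 2.0 lane
gen4_lane = 16.0 * (128 / 130) / 8   # ~1.97 GB/s per PCIe 4.0 lane

slot_gen2_x16 = gen2_lane * 16       # ~8.0 GB/s total upstream
drive_gen4_x4 = gen4_lane * 4        # ~7.9 GB/s for one Gen4 x4 SSD
bifurcated_gen2_x4 = gen2_lane * 4   # ~2.0 GB/s per drive without a switch

print(f"Gen2 x16 slot upstream:       ~{slot_gen2_x16:.1f} GB/s")
print(f"One Gen4 x4 SSD flat out:     ~{drive_gen4_x4:.1f} GB/s")
print(f"Per drive if only bifurcated: ~{bifurcated_gen2_x4:.1f} GB/s")
```

So a switched card in an old Gen2 x16 slot has just enough upstream bandwidth to let one Gen4 drive run near full speed, whereas a bifurcation-only card would hold each drive to about a quarter of that.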
Posted on Reply
#2
Ferrum Master
LabRat 891: More bifurcated cards... So exciting...

Ya know, other than Gen4 redrivers (which, not all bifurcating cards use) these cards are stupid-simple and (presumably) stupid-cheap to make.

What would actually be exciting would be an x16 card with a Gen4 switch on-board.
Even in a Gen 2 x16 slot (as long as the drives were not RAIDed or constantly accessed simultaneously) a single gen4 x4 SSD could 'see' full performance.
I share your pain here too. It is just ridiculous seeing where all the development goes: Gen5, ultra hot, useless sequential speeds, and much worse endurance.
Posted on Reply
#4
chrcoluk
How common is bifurcation support?

Both my Intel boards don't have it as an option, but my B450 does.
Posted on Reply
#5
LabRat 891
chrcoluk: How common is bifurcation support?

Both my Intel boards don't have it as an option, but my B450 does.
In the last few generations from both brands, it's become a lot more common.
Technically, it's been common for ages, but rarely/never easily configured by the end user (SLI/CrossFire boards come to mind)

Most modern mid-tier and higher chipsets seem to support it, but not all allow fine control over the bifurcation config.
Posted on Reply
#6
ir_cow
A bifurcation setting in the BIOS is required

Aka AMD only
Posted on Reply
#7
LabRat 891
ir_cow: A bifurcation setting in the BIOS is required

Aka AMD only
I don't know about modern Intel boards, but several "Early M.2 era" chipsets and mobos had bifurcation support and config options in BIOS/UEFI.

Also, the especially adventurous can sometimes edit firmware to force a bifurcation config. (Common with "Legacy" x58-x99 users, IIRC)

"Unsupported" is merely a challenge, after all.
Posted on Reply
#8
Space Lynx
Astronaut
eh fuck Sabrent.

they launched new 2230 NVMe drives recently, and about 33 days after launch, once return periods were over for early adopters, they cut prices in half.

lol fucking shady as fuck, fuck them
Posted on Reply
#9
kapone32
In a world where you cannot get a card like this fully loaded due to the constraints of modern systems, it seems foolish. This is a product that really has no place in the market; just ask Asus how their expansion card is selling. Most boards that support lane splitting only split two ways anyway and also come with adapter cards already.

Just a PSA for AM5 users: if you plan on getting one of these, do not buy the Asus X670E Strix, as that board shares the 2nd PCIe slot with an M.2 slot on the board. This should have been an x2 PCIe 5.0 card to make sense.

There is another thing: this has no controller either. So even though the WD AN1500 is more expensive and only 3.0, the fact that it supports 2 drives and has a controller means it works just fine in an x4-wired slot and is therefore much more applicable to most modern PCs. It's not like most of us have Threadripper, or that those of us who did still have it.
Posted on Reply
#10
damric
Does it matter what brand, model, and size of drive you put into something like this?
Posted on Reply
#11
kapone32
damric: Does it matter what brand, model, and size of drive you put into something like this?
Not really
Posted on Reply
#12
enb141
LabRat 891: More bifurcated cards... So exciting...

Ya know, other than Gen4 redrivers (which, not all bifurcating cards use) these cards are stupid-simple and (presumably) stupid-cheap to make.

What would actually be exciting would be an x16 card with a Gen4 switch on-board.
Even in a Gen 2 x16 slot (as long as the drives were not RAIDed or constantly accessed simultaneously) a single gen4 x4 SSD could 'see' full performance.
Yep, bifurcated cards like this suck.
Posted on Reply
#13
LabRat 891
damric: Does it matter what brand, model, and size of drive you put into something like this?
Nope.
In fact, it doesn't even have to be an SSD.

One could use M.2 add-in cards; non-storage M.2 devices *do* exist.
SATA controller and NIC varieties are 'common'. SDRs, Video Adapters, etc. also exist, but are very much 'industry-targeted' devices.

While it makes zero sense to do on a bifurcating quad M.2 card, one could slot in an M.2 M-key to PCIe slot riser/adapter and use *any* slotted PCIe card.

PCI Express is dreamy when it comes to non-standard and unconventional use. Since the data is packetized, you can run PCIe devices over all sorts of 'PHYs'.
enb141: Yep, bifurcated cards like this suck.
Also, nothing new.
Asus has had Gen4 quad-m.2 bifurcating cards for years now.
Gigabyte also has one that is *actually* 'kinda neat' for a bifurcated card: Gigabyte's has a buttload of supercaps on the adapter for write-through protection. (It's also more expensive than Gen2 and many Gen3 switched multi-M.2 cards.)
Posted on Reply
#14
lexluthermiester
LabRat 891: More bifurcated cards... So exciting...
And? Who cares?
Space Lynx: they launched new 2230 NVMe drives recently, and about 33 days after launch, once return periods were over for early adopters, they cut prices in half.
Yeah, that is fairly shady.
Posted on Reply
#15
A Computer Guy
enb141: Yep, bifurcated cards like this suck.
I have the Asus PCIe 4.0 one; the fan is loud and the card is long enough to be annoying, but it seems to work. I haven't fully loaded it yet, but I was experimenting with using it in a headless ITX board running ESXi 7. The problem with these cards is that they need the x16 slot's lanes to get full use of the card, so you either need a board with IPMI or enough lanes for a GPU in a different slot that doesn't steal/share lanes with the x16 slot.
Posted on Reply
#16
enb141
A Computer Guy: I have the Asus PCIe 4.0 one; the fan is loud and the card is long enough to be annoying, but it seems to work. I haven't fully loaded it yet, but I was experimenting with using it in a headless ITX board running ESXi 7. The problem with these cards is that they need the x16 slot's lanes to get full use of the card, so you either need a board with IPMI or enough lanes for a GPU in a different slot that doesn't steal/share lanes with the x16 slot.
The other problem is that if your motherboard breaks or your BIOS crashes, so does your RAID.
Posted on Reply
#17
Chaitanya
LabRat 891: I don't know about modern Intel boards, but several "Early M.2 era" chipsets and mobos had bifurcation support and config options in BIOS/UEFI.

Also, the especially adventurous can sometimes edit firmware to force a bifurcation config. (Common with "Legacy" x58-x99 users, IIRC)

"Unsupported" is merely a challenge, after all.
For both AMD and Intel, it has always been motherboard makers castrating features (Shitsus is notorious in this respect) like PCIe bifurcation in the BIOS in order to push users into spending more money on features that the CPU+chipset already support.

For the more adventurous, here is a fun project:
www.grayxu.cn/wiki/pcie-bifurcation/
Posted on Reply
#18
GreenReaper
PCIe 5.0 support when? Clearly this is the next frontier, and it's not like I can plug a GPU into my Chopin Max that easily - there's no hole for a bracket.
Posted on Reply
#20
Tek-Check
damric: Does it matter what brand, model, and size of drive you put into something like this?
Of course it does. Each NVMe drive has its own distinct features, such as the controller, supported speeds, number of channels for NAND attachment, DRAM or DRAM-less operation, etc.
If you wish to fully utilize what a Gen4 NVMe drive can do, check the reviews and look out for drives that have one of these controllers: Phison PS5018 E18, Silicon Motion SM2264, Samsung Pascal S4LV008, etc. If you do not need top-notch drives, you can go for drives with controllers one tier down, such as the Phison PS5021T, SM2267, or Samsung Elpis S4V003.

What worries me is that such AICs block airflow towards the GPU. If a motherboard has two x16 slots with x8/x8 bifurcation, I'd install the NVMe AIC into the first one closer to the CPU and the GPU into the second one. This way the airflow towards the GPU is free from obstacles.
GreenReaper: PCIe 5.0 support when? Clearly this is the next frontier, and it's not like I can plug a GPU into my Chopin Max that easily - there's no hole for a bracket.
Almost no one in the world needs an AIC with PCIe 5.0 support. What would you do with it?
Posted on Reply
#21
GreenReaper
Tek-Check: Almost no one in the world needs an AIC with PCIe 5.0 support. What would you do with it?
4x PCIe 5.0 NVMe SSD? Ideally once they become more power efficient and don't pump up case heat.

Honestly not expecting this right away, but it seems a waste to leave it there unused forever on my mini-ITX.
Posted on Reply
#22
Tek-Check
enb141: Yep, bifurcated cards like this suck.
Not in workstation or server systems, where there are plenty of PCIe x16 slots and lanes. This product, without a PCIe switch chip onboard, is aimed more at those systems than at desktops.
GreenReaper: 4x PCIe 5.0 NVMe SSD? Ideally once they become more power efficient and don't pump up case heat.

Honestly not expecting this right away, but it seems a waste to leave it there unused forever on my mini-ITX.
Do you run any specific workloads that can meaningfully benefit from having a Gen5 NVMe drive?

I hope you realize that almost no one in the desktop space uses any PCIe 5.0 peripherals, apart from a few NVMe Gen5 enthusiasts willing to pay a hefty price for those drives. Both Intel and AMD miscalculated Gen5 adoption in the desktop space. It's too early. Sure, in the server and workstation markets it's fine, but the vast majority of everyday users of AM5 and LGA 1700 platforms do not need Gen5 connectivity.
Posted on Reply
#23
GreenReaper
In truth, not yet; but it is likely that this machine will eventually replace an HP MicroServer Gen8 that acts as a regional content server, web analytics box, cascading database replica, and recommendations OLAP for my furry art community, Inkbunny.

The databases concerned already exceed main memory on the server and will likely do so for the new machine by the time it is used for this purpose, so NVMe performance may be a factor over the ~90GB/sec that the RAM can handle.

Now, does that require Gen 5? Almost certainly not - my original post was a tad tongue-in-cheek - but one can never be sure what the future holds. Some of our competitors are looking a bit shaky. And it's true that not using Gen 5 when I have the slot seems like a waste. Plus, the sooner it comes out, the sooner it'll be available at a price I can actually justify!
Posted on Reply
#24
kapone32
Even if this were 5.0 it still would make no sense. All of the boards for AM5 that have the lane splitting to accommodate this already come with adapter cards.
chrcoluk: How common is bifurcation support?

Both my Intel boards don't have it as an option, but my B450 does.
Lane splitting is on any board that has more than 2 M.2 slots. The mitigating factor, though, is how it is wired. The B450 board you have will have it on the first slot (all AMD boards have it). There is a caveat though: if you use an AM4 APU like the 5700G, you don't get the full x16; it runs at x8. The only time things like this were popular was X299 and X399, not even TRX40. There was a time when you could buy a TR4 CPU for $300 and an X399 board for $400. Asus has had an M.2 card for sale since 3.0, so those were also like $50. It made sense when they killed multi-GPU, and RAID 0 is nice on NAND. Then AMD released the 3000 CPUs for $3,000 and boards were $500, and then AMD released the 5900X, which blew my 2920X away in everything except PCIe lanes, so I came back down to AM4. There are no boards that this makes any sense in, as Intel is even worse with lane splitting. There is a caveat to that too, though, as my board has more M.2 slots than PCIe slots and you can easily RAID those too, so M.2 seems to be where we are headed.

There is a caveat to what I said before too, as AMD could be re-releasing an HEDT line in the prosumer space. If they price the CPUs properly, we could get scenarios where this could work.
Posted on Reply
#25
enb141
ypsylon: Another stupidly noisy bifurcated card.

Why not something simple like:

www.amazon.de/-/en/gp/product/B0B5G6MN79/ref=ppx_yo_dt_b_asin_title_o01_s00?ie=UTF8&psc=1

Great cards (I have a stack of them); just watch out for the little diodes next to the M.2 connector. If you have double-sided 4TB NVMe Gen4 Phison-controller devices it will be tough to plug them in, but not impossible. Just be careful.

I love OWC Accelsior 8M.2, but pricing & importing to EU is so meh.
That one doesn't have hardware RAID either, so it has the same problem.
Tek-Check: Not in workstation or server systems, where there are plenty of PCIe x16 slots and lanes. This product, without a PCIe switch chip onboard, is aimed more at those systems than at desktops.
The problem isn't the PCIe lanes; the real problem is that it doesn't have hardware RAID. If the BIOS and/or the motherboard crashes, your RAID goes as well.
Posted on Reply