
Sabrent Introduces its Quad NVMe SSD to PCIe 4.0 x16 Card

Another stupidly noisy bifurcated card.

Why not something simple like:


Great cards (I have a stack of them), just watch out for the little diodes next to the M.2 connector. If you have double-sided 4TB NVMe Gen4 drives with Phison controllers, they will be tough to plug in, but not impossible. Just be careful.

I love the OWC Accelsior 8M.2, but the pricing and importing to the EU is so meh.

That one doesn't have hardware RAID either, so it has the same problem.

Not in workstation or server systems, where there are plenty of PCIe x16 slots and lanes. This product, without a PCIe switch chip onboard, is aimed more at those systems than at desktops.

The problem isn't the PCIe lanes; the real problem is that it doesn't have hardware RAID. If the BIOS and/or the motherboard dies, your RAID goes with it.
 
Of course it does. Each NVMe drive has its own distinct features, such as the controller, supported speeds, number of channels for NAND attachment, DRAM or DRAM-less operation, etc.
If you wish to fully utilize what a Gen4 NVMe drive can do, check the reviews and look for drives with one of these controllers: Phison PS5018-E18, Silicon Motion SM2264, Samsung Pascal S4LV008, etc. If you do not need top-notch drives, you can go for drives with controllers one tier down, such as the Phison PS5021T, SM2267 or Samsung Elpis S4V003.

What worries me is that such AICs block airflow towards the GPU. If a motherboard has two x16 slots with x8/x8 bifurcation, I'd install the NVMe AIC into the first one closer to the CPU and the GPU into the second one. This way the airflow towards the GPU is free from obstacles.


Almost no one in the world needs an AIC with PCIe 5.0 support. What would you do with it?
So I wouldn't be able to take a random bunch of M.2 drives and just stick them in there and expect them to work?
 
The other problem is that if your motherboard breaks or your BIOS crashes, so does your RAID.
If I remember correctly, if you use software RAID (like Windows Storage Spaces) you can move the array between different hardware, but yeah, if you have an extreme hardware failure your storage can still be broken.

That one doesn't have hardware RAID either, so it has the same problem.
Why do you need hardware RAID for SSDs, especially in a time when we have CPUs with plenty of cores to spare?
 
Plus, the sooner it comes out, the sooner it'll be available at a price I can actually justify!
You will have GPUs with a Gen5 interface next year; however, you will not be "using" it to its full capability. This is because current high-end GPUs can barely saturate a Gen4 x8 link, as shown in TPU testing a few months ago.

One useful case for PCIe 5.0 will be several devices in x16, x8 and x4 slots no longer competing for bandwidth. You could have a Gen5 GPU in the x16 slot using an x8 connection, bifurcated with the second x8 slot where an AIC could be attached. But... which Gen5 AIC? NVMe? There are already 3-5 NVMe slots on a good motherboard, two of which are Gen5 on AM5.

So, the issue is that there is plenty of Gen5 connectivity from the Zen4 CPU (24 lanes), but few devices that could meaningfully use all the available bandwidth, due to the slow development of Gen5 peripherals.
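
For anyone who wants the rough numbers behind that, here's a quick back-of-the-envelope sketch. The per-lane figures are the commonly quoted values after 128b/130b encoding, so treat them as approximations rather than spec-exact numbers; it mainly shows why a Gen5 x8 link already matches Gen4 x16.

```python
# Approximate one-direction PCIe bandwidth per lane (GB/s), after 128b/130b encoding.
PER_LANE_GBPS = {
    "Gen3": 0.985,   # ~8 GT/s per lane
    "Gen4": 1.969,   # ~16 GT/s per lane
    "Gen5": 3.938,   # ~32 GT/s per lane
}

def link_bandwidth(gen: str, lanes: int) -> float:
    """Theoretical one-direction bandwidth in GB/s for a PCIe link."""
    return PER_LANE_GBPS[gen] * lanes

if __name__ == "__main__":
    for gen, lanes in [("Gen4", 8), ("Gen4", 16), ("Gen5", 8), ("Gen5", 16)]:
        print(f"{gen} x{lanes}: ~{link_bandwidth(gen, lanes):.1f} GB/s")
```

Running it gives roughly 15.8 GB/s for Gen4 x8, 31.5 GB/s for Gen4 x16 and Gen5 x8, and 63 GB/s for Gen5 x16, which is why a Gen5 x8 GPU plus a Gen5 x8 AIC would not really give anything up versus today's Gen4 x16.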

So I wouldn't be able to take a random bunch of M.2 drives and just stick them in there and expect them to work?
You certainly can, but their performance will vary depending on what you need those drives for. DRAM-less drives typically cannot use a cache, so some workloads would be affected. For a RAID 1 or 0 setup, it's good to have two similarly capable drives with the same capacity. If you set up mirrored RAID on 1TB and 2TB drives, you will lose 1TB on that 2TB drive. So, there are things to consider. You should never blindly just buy any NVMe drives, but you can, of course.
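
To put the capacity point in concrete terms, here's a tiny sketch of the usable-space math. It's simplified: it assumes plain striping/mirroring with no metadata overhead, and the function name is just for illustration.

```python
# Rough usable-capacity math for mixed-size drives in RAID 0 / RAID 1.
def usable_capacity_tb(level: int, drive_sizes_tb: list[float]) -> float:
    smallest = min(drive_sizes_tb)
    if level == 0:          # striped: every member only contributes the smallest size
        return smallest * len(drive_sizes_tb)
    if level == 1:          # mirrored: you get one copy's worth of the smallest drive
        return smallest
    raise ValueError("only RAID 0 and RAID 1 shown here")

print(usable_capacity_tb(1, [1.0, 2.0]))  # 1.0 -> the extra 1TB on the 2TB drive is wasted
print(usable_capacity_tb(0, [1.0, 2.0]))  # 2.0 -> still capped by the smaller drive
```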
 
If I remember correctly, if you use software RAID (like Windows Storage Spaces) you can move the array between different hardware, but yeah, if you have an extreme hardware failure your storage can still be broken.


Why do you need hardware RAID for SSDs, especially in a time when we have CPUs with plenty of cores to spare?

No, software RAID is limited to that specific computer; if, for example, Windows crashes hard enough, it could destroy your software RAID. With hardware RAID, you just move the card with all your SSDs to another computer and that's it.
 
No, software RAID is limited to that specific computer; if, for example, Windows crashes hard enough, it could destroy your software RAID.
No, that doesn't seem right. Crashing vs. moving are separate topics (I'm talking about moving): you should be able to move Windows-created RAID disks to another Windows machine, and Windows should be able to mount the array. If your computer is crashing, any number of things can go wrong if data in the process of being written wasn't committed to the disk, RAID or not.
With hardware RAID, you just move the card with all your SSDs to another computer and that's it.
Yeah, that would work.
 
You will have GPUs with a Gen5 interface next year; however, you will not be "using" it to its full capability. This is because current high-end GPUs can barely saturate a Gen4 x8 link, as shown in TPU testing a few months ago.

One useful case for PCIe 5.0 will be several devices in x16, x8 and x4 slots no longer competing for bandwidth. You could have a Gen5 GPU in the x16 slot using an x8 connection, bifurcated with the second x8 slot where an AIC could be attached. But... which Gen5 AIC? NVMe? There are already 3-5 NVMe slots on a good motherboard, two of which are Gen5 on AM5.

So, the issue is that there is plenty of Gen5 connectivity from the Zen4 CPU (24 lanes), but few devices that could meaningfully use all the available bandwidth, due to the slow development of Gen5 peripherals.


You certainly can, but their performance will vary depending on what you need those drives for. DRAM-less drives typically cannot use a cache, so some workloads would be affected. For a RAID 1 or 0 setup, it's good to have two similarly capable drives with the same capacity. If you set up mirrored RAID on 1TB and 2TB drives, you will lose 1TB on that 2TB drive. So, there are things to consider. You should never blindly just buy any NVMe drives, but you can, of course.
I would only want to consolidate existing drives into one slot. Like, say I had some mixed 1TB, 2TB, PCIe 3.0 and 4.0 M.2 drives, it would read them all fine, yes? No need for RAID.
 
So I wouldn't be able to take a random bunch of M.2 drives and just stick them in there and expect them to work?
If they are split in the BIOS, you will see each individual drive. It is best practice, though, to try to use drives with the same controller. In fact, with RAID you also want to make sure they are the same capacity. There is nothing to say, though, that you cannot actually do it.

No, that doesn't seem right. Crashing vs. moving are separate topics (I'm talking about moving): you should be able to move Windows-created RAID disks to another Windows machine, and Windows should be able to mount the array. If your computer is crashing, any number of things can go wrong if data in the process of being written wasn't committed to the disk, RAID or not.

Yeah, that would work.
The only issue is Windows 11 TPM. I don't know how, but it seems that is why NVMe is such a pain in the butt to format. If you are using an existing Windows install you can update your entire OS without needing to worry about software RAID, but if you take your drive out of a Windows PC and just put it in another, it might not automatically give you the foreign disk option.
 
If they are split in the BIOS, you will see each individual drive. It is best practice, though, to try to use drives with the same controller. In fact, with RAID you also want to make sure they are the same capacity. There is nothing to say, though, that you cannot actually do it.


The only issue is Windows 11 TPM. I don't know how, but it seems that is why NVMe is such a pain in the butt to format. If you are using an existing Windows install you can update your entire OS without needing to worry about software RAID, but if you take your drive out of a Windows PC and just put it in another, it might not automatically give you the foreign disk option.
Ah, TPM, I didn't consider that. Also, if you're using BitLocker that might be a complication, and/or if you swap your CPU while using your CPU's TPM instead of an external one.
I should clarify: in my mind (amid this RAID conversation) I'm not considering the SSD RAID array as the boot drive but as a secondary drive. SSD RAID as a boot drive on typical consumer hardware doesn't make much sense to me unless it's mirroring.
 
You could have a Gen5 GPU in the x16 slot using an x8 connection, bifurcated with the second x8 slot where an AIC could be attached. But... which Gen5 AIC? NVMe? There are already 3-5 NVMe slots on a good motherboard,....
In my case I have an ASUS mini-ITX B650E-I with just one each of Gen5 and Gen4 NVMe slots, plus two SATA3 ports. The case is a Chopin Max, which has no external bracket and no room for more than a one-slot card in the Gen5 x16. So NVMe is a feasible use of it for me; not pressing, but there aren't many other ways to use that slot unless I want to case-mod an external GPU or replace the PSU beside it with a GPU (it's been done).

Of course, even SATA SSDs can be very viable for latency-focused workloads; they're just not the best. And at some point Gen5 won't be the best either, which will probably be the time I actually want to try doing this, if at all.
 
Ah, TPM, I didn't consider that. Also, if you're using BitLocker that might be a complication, and/or if you swap your CPU while using your CPU's TPM instead of an external one.
I should clarify: in my mind (amid this RAID conversation) I'm not considering the SSD RAID array as the boot drive but as a secondary drive. SSD RAID as a boot drive on typical consumer hardware doesn't make much sense to me unless it's mirroring.
I would not recommend using a RAID array as boot, but even as secondary storage you can run into issues with Windows 11. F me, Windows 11 is so quirky that I had an M.2 drive I was using with an adapter, and since I used the adapter to format it, I have to use one specific USB-C port on my motherboard to have it register.

I would not recommend using a RAID array as boot, but even as secondary storage you can run into issues with Windows 11. F me, Windows 11 is so quirky that I had an M.2 drive I was using with an adapter, and since I used the adapter to format it, I have to use one specific USB-C port on my motherboard to have it register.
I for one love RAID. Once you have more than 50 Epic Games titles and have to re-download or just move them, you will appreciate maxing out Windows' 2.5 GB/s write rate with a drive that costs a quarter of what a single drive of the same capacity would. I have to read some more on the controller in my WD AN1500 before I buy another 4 TB NV2.
 
I like mdadm with 1.0 metadata for simple RAID1, since it is effectively the same as no RAID at all for plain reads: the metadata is all at the end of each member.

For more complex situations I'd tend towards btrfs.
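
Going back to the mdadm point, a minimal sketch of what I mean. The device names are placeholders for your own drives, it has to run as root, and you should double-check everything against your setup before creating anything; the key part is --metadata=1.0, which puts the superblock at the end of each member so either drive can still be read on its own like a plain, non-RAID device.

```python
import subprocess

# Hypothetical member devices; adjust for your system.
devices = ["/dev/nvme0n1", "/dev/nvme1n1"]

# Create a two-drive RAID1 with the 1.0 (end-of-device) metadata format.
subprocess.run(
    ["mdadm", "--create", "/dev/md0",
     "--level=1", "--raid-devices=2", "--metadata=1.0", *devices],
    check=True,
)
```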
 
Seeing several comments about the dangers of CPU NVMe RAID (Intel VROC / AMD RAID):
Windows Storage Spaces has improved greatly over the years; I used to be 'against' its use, but in my own incidental testing it's been reliable and tolerant of system changes/swapping.
Storage Space arrays absolutely will and do 'transfer' between (compatible Windows) systems. (Though, I don't recall if a RAID5 from a Windows Server install will 'work' on an install of 10/11).

Also, if you research some of the last reviews on RAIDing Optane drives, Storage Spaces 'can' outperform VROC.
In my own experimentation, AMD-RAID and Optane 'don't get along' well. (Severe performance regression beyond 2-drive striped array on AMD-RAID)
Storage Spaces Striped was reliably 'faster' than 4x 118GB P1600Xs in RAID0 AMD-RAID (no matter what cache tag size was used).
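
In case it helps anyone trying the "move the array" scenario, here's a rough sketch of what that looks like from a script. It assumes a Windows box with the Storage cmdlets available and uses "MyPool" as a placeholder name for the transplanted pool, so treat it as an outline rather than a recipe.

```python
import subprocess

def ps(cmd: str) -> str:
    """Run a PowerShell command and return its output (Windows only)."""
    return subprocess.run(
        ["powershell", "-NoProfile", "-Command", cmd],
        capture_output=True, text=True, check=True,
    ).stdout

# List non-primordial pools; a pool moved over from another machine shows up here,
# typically read-only until the flag is cleared.
print(ps("Get-StoragePool -IsPrimordial $false | "
         "Select-Object FriendlyName, OperationalStatus, IsReadOnly"))

# 'MyPool' is a placeholder for the transplanted pool's name.
ps('Set-StoragePool -FriendlyName "MyPool" -IsReadOnly $false')
```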
 
PCIe bifurcation, like everyone else who sells these. No controller, just a garbage PCB for $10.

With a controller it would be interesting, at a good price.
 
I would only want to consolidate existing drives into one slot. Like, say I had some mixed 1TB, 2TB, PCIe 3.0 and 4.0 M.2 drives, it would read them all fine, yes? No need for RAID.
Sure, you can run any of those individually.
 
Ok, seriously, why is everyone complaining about this?

A bifurcated card is limited in its use to platforms that support bifurcation (and expose the feature to the end-user).
Prosumer-enthusiast-gamer mobo manufacturers have highly inconsistent support for a feature that's (underneath it all) actually extraordinarily common. (Also, these 'simple' bifurcated cards seem to be sold at some seriously high prices, for how simply constructed they are.)
To an enthusiast, gamer, tinkerer, etc. the mere mention of 'bifurcation' can stir up sourness.

I'm aware of how common and useful bifurcated devices are in server/industrial use:
I have a couple 'sets' of MaxCloudON bifurcated risers for x8, x16, etc. Those MaxCloudON risers were made for and by a remote rendering and GPGPU server company overseas.
I also own a Gen4 Asus Hyper M.2 Quad-NVMe card that I've filled w/ both 16GB Optane M10s, and 118GB Optane P1600Xs in testing.

To an enthusiast-tinkerer like me, the switch-based cards are much 'broader' in how they can be used. Switched PCIe expander-adapter cards can even act as a 'southbridge' to attach new features to older/unsupported equipment.
Ex. I have an Amfeltec Gen2 x16 -> 4x x4-lane M.2 M-key card; it's probably gonna get slotted into a PCIe-Gen1.0 dual S940 K8N-DL or a re-purposed H81-BTC.


All that said, I'd bet bifurcation is usually preferred in-industry as it means less power and heat, with less latency. In 'big data' applications, that teensy bit of latency could be a (stacking) issue.

I could see a professional having a generalized preference for bifurcation over switching.
In 'serious' use cases, bifurcation would be more efficient, no?
 
A bifurcated card is limited in its use to platforms that support bifurcation (and expose the feature to the end-user).
Prosumer-enthusiast-gamer mobo manufacturers have highly inconsistent support for a feature that's (underneath it all) actually extraordinarily common. (Also, these 'simple' bifurcated cards seem to be sold at some seriously high prices, for how simply constructed they are.)
To an enthusiast, gamer, tinkerer, etc. the mere mention of 'bifurcation' can stir up sourness.

I'm aware of how common and useful bifurcated devices are in server/industrial use:
I have a couple 'sets' of MaxCloudON bifurcated risers for x8, x16, etc. Those MaxCloudON risers were made for and by a remote rendering and GPGPU server company overseas.
I also own a Gen4 Asus Hyper M.2 Quad-NVMe card that I've filled w/ both 16GB Optane M10s, and 118GB Optane P1600Xs in testing.

To an enthusiast-tinkerer like me, the switch-based cards are much 'broader' in how they can be used. Switched PCIe expander-adapter cards can even act as a 'southbridge' to attach new features to older/unsupported equipment.
Ex. I have an Amfeltec Gen2 x16 -> 4x x4-lane M.2 M-key card; it's probably gonna get slotted into a PCIe-Gen1.0 dual S940 K8N-DL or a re-purposed H81-BTC.
Ah! This makes sense. The frustration is easier to understand now. Was not aware of these particular problems. I was under the impression that "most" chipsets supported that function natively across all platforms. Thank You for explaining, much appreciated!

All that said, I'd bet bifurcation is usually preferred in-industry as it means less power and heat, with less latency. In 'big data' applications, that teensy bit of latency could be a (stacking) issue.

I could see a professional having a generalized preference for bifurcation over switching.
In 'serious' use cases, bifurcation would be more efficient, no?
This is correct. It is more efficient as it is a "direct" connection. In rack deployments, bifurcation is preferable, even if it's less flexible.
 
Last edited:
It would be nice if these cards could be smaller. Maybe double sided. (2 on each side)
That precise configuration is extraordinarily common with the 'Cheap Chinese Import' expanders (both switched and bifurcated varieties).
I believe QNAP and OWC(?) make a few like that as well.
Heck, even my old Gen2 Amfeltec (switched) card is also built like that.

In fact, now that I've run down through it, I might venture to say that (double-sided) configuration is 'most common' outside of the gamer-enthusiast market(s).

TBQH, I think Asus, MSI, Gigabyte, etc. have cards laid out 4-to-a-single-side partly because they look more like a fancy slim GPU, and partly due to the inconsistent airflow patterns amongst DIYers' and SIs' custom builds.
 