Monday, March 13th 2023

Apex Storage Add-In-Card Hosts 21 M.2 SSDs, up to 168 TB of Storage

Apex Storage, a newcomer in the storage world, has announced that its X21 add-in card (AIC) has room for 21 (you read that right) PCIe 4.0 M.2 NVMe SSDs. The card supports up to 168 TB of storage with 8 TB M.2 NVMe SSDs, or 336 TB with future 16 TB M.2 drives, and can reach speeds of up to 30.5 GB/s. Packed into a single-slot, full-length, full-height AIC, the X21 is built for a snug fit inside the workstations and servers that enterprises deploy for applications such as machine learning and hyper-converged infrastructure.

The X21 AIC has 100 PCIe lanes on the board, which indicates the presence of a PCIe switch, likely sitting under the heatsink. The PCIe slot alone can't power all that storage, so the card also carries two 6-pin PCIe power connectors, bringing the total power budget to 225 Watts. Interestingly, the heatsink is passive, but Apex Storage recommends active airflow of at least 400 LFM to keep the card operating normally. In the example application, the company populated the X21 with Samsung's 990 Pro SSDs; however, the card also supports Intel Optane drives. Read and Write IOPS are higher than 10 million. Additionally, the average read and write access latencies are 79 ms and 52 ms.
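For a rough sanity check on those headline figures, here is a minimal back-of-the-envelope sketch; the per-drive speed and power numbers are illustrative assumptions, not Apex Storage specifications.

```python
# Back-of-the-envelope check of the X21's headline numbers.
# Per-drive figures below are assumptions for illustration, not vendor specs.

DRIVES = 21

# Capacity: 21 slots x per-drive capacity
print(DRIVES * 8, "TB with 8 TB drives")    # 168 TB
print(DRIVES * 16, "TB with 16 TB drives")  # 336 TB

# Bandwidth: each M.2 socket is PCIe 4.0 x4, but the whole card shares
# one PCIe 4.0 x16 uplink (~31.5 GB/s usable), so the ~30.5 GB/s figure
# is the uplink ceiling, not the sum of the drives.
PER_DRIVE_GBPS = 7.0   # assumed sequential speed of a fast Gen4 drive
UPLINK_GBPS = 31.5     # approx. usable PCIe 4.0 x16 bandwidth
print(min(DRIVES * PER_DRIVE_GBPS, UPLINK_GBPS), "GB/s effective ceiling")

# Power: 75 W from the slot plus 2 x 75 W from the 6-pin connectors = 225 W,
# against an assumed 8-9 W per M.2 drive under load.
budget_w = 75 + 2 * 75
drives_w = DRIVES * 8.5
print(f"{drives_w:.0f} W of drives vs {budget_w} W available")
```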
Sources: Apex Storage, via Tom's Hardware

27 Comments on Apex Storage Add-In-Card Hosts 21 M.2 SSDs, up to 168 TB of Storage

#1
enb141
So this is for people that want huge capacities in SSD form, because in performance, Intel Optane still smokes it.
Posted on Reply
#2
TumbleGeorge
enb141: So this is for people that want huge capacities in SSD form, because in performance, Intel Optane still smokes it.
You didn't guess, try again. ;)
AleksandarK: Read and Write IOPS are higher than 10 million
Posted on Reply
#3
enb141
TumbleGeorge: You didn't guess, try again. ;)
IOPS are high, but latency is high too (higher than a single Optane unit), so if you build a RAID of Optanes, it will smoke it in performance.
Posted on Reply
#4
TumbleGeorge
enb141: IOPS are high, but latency is high too (higher than a single Optane unit), so if you build a RAID of Optanes, it will smoke it in performance.
Optanes are EOL and come in too-small capacities. Yes, they can still be bought, and they are still too expensive. But that's not really my point, and in fact, the point of view of a home computer user, gamer, or poor semi-professional is completely irrelevant here.
Posted on Reply
#5
TheDeeGee
This is kinda wild looking, reminds me of the GTX 295.
Posted on Reply
#6
enb141
TumbleGeorge: Optanes are EOL and come in too-small capacities. Yes, they can still be bought, and they are still too expensive. But that's not really my point, and in fact, the point of view of a home computer user, gamer, or poor semi-professional is completely irrelevant here.
EOL, but still the kings of solid-state disks, that's a fact.
Posted on Reply
#8
LabRat 891
*The Power of 7 begins playing*

was... was this made for me?
Can I get a 'sample'? (I'll buy the 17 additional 118GB P1600Xs)

Seriously though, after catching so much 'shit' about enjoying PCIe switches and NVMe RAID, seeing products like this makes me feel a lot less mad.

I wonder what switch it's using? PLX's offerings max out at 98 lanes/ports on Gen4.
PEX88096
98 lane, 98 port, PCI Express Gen 4.0 ExpressFabric Platform
Maybe it's Microchip / Microsemi (<- what an unfortunate name...)?
X21 AIC has 100 PCIe lanes on the board
Could be a 116-lane switch, termed that way. The 'Uplink' x16 might be subtracted.
Posted on Reply
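For reference, a minimal sketch of the lane accounting being speculated about above; the downstream/uplink split is an assumption, not a confirmed board layout.

```python
# Lane accounting for the X21's "100 PCIe lanes" claim.
# The topology below is speculation based on the comments, not a teardown.

drive_slots = 21
lanes_per_slot = 4   # each M.2 socket is PCIe 4.0 x4
uplink_lanes = 16    # host-facing x16 edge connector

downstream = drive_slots * lanes_per_slot  # 84 lanes to the drives
total = downstream + uplink_lanes          # 100 lanes "on the board"
print(downstream, total)                   # 84 100

# A single PEX88096 tops out at 98 lanes, so 100 lanes implies either
# a larger switch (e.g. a 116-lane part) or two switches sharing the load.
print(total <= 98)  # False
```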
#9
Nordic
I really want to see this tested with all optane drives.
Posted on Reply
#10
Quitessa
LabRat 891: *The Power of 7 begins playing*

was... was this made for me?
Can I get a 'sample'? (I'll buy the 17 additional 118GB P1600Xs)

Seriously though, after catching so much 'shit' about enjoying PCIe switches and NVMe RAID, seeing products like this makes me feel a lot less mad.

I wonder what switch it's using? PLX's offerings max out at 98 lanes/ports on Gen4.

Maybe it's Microchip / Microsemi (<- what an unfortunate name...)?

Could be a 116-lane switch, termed that way. The 'Uplink' x16 might be subtracted.
With the look of the back, where all the caps and stuff are, I'm thinking it's a pair of switches in parallel or in series, maybe.
Posted on Reply
#11
LabRat 891
enb141: So this is for people that want huge capacities in SSD form, because in performance, Intel Optane still smokes it.
Makes utilitarian sense if you're a studio-level UHD+ 'media pro'.
My madness has me imagining a hydra's nest of OCuLink-to-M.2 cards running to 21x U.2 P5810Xs. Just as unrealistic (for me), but I could probably add 17 more P1600Xs (which are at liquidation pricing) if I eventually find one of these (years down the road). Optane 'lasts'; I expect to have my Optane drives for decades to come (which was part of its 'problem' as a "consumer product").
Quitessa: With the look of the back, where all the caps and stuff are, I'm thinking it's a pair of switches in parallel or in series, maybe.
Good eye.
I haven't taken the time to 'play with the concept', but I've recently been researching PCIe switches. Can confirm 'series' switches are a pretty common thing, even on finished products (like mobos and HBAs, etc.). TBQH, I'd liken PCIe a lot to Ethernet, but the PCB is actually like routing WANs and LANs (inter-strewn across and alongside power and other comm. 'circuits') in a tiny cityscape.
Posted on Reply
#12
enb141
LabRat 891: Makes utilitarian sense if you're a studio-level UHD+ 'media pro'.
My madness has me imagining a hydra's nest of OCuLink-to-M.2 cards running to 21x U.2 P5810Xs. Just as unrealistic (for me), but I could probably add 17 more P1600Xs (which are at liquidation pricing) if I eventually find one of these (years down the road). Optane 'lasts'; I expect to have my Optane drives for decades to come (which was part of its 'problem' as a "consumer product").
Well, the P5810X is rare; I couldn't find one, so I got a P5800.

And those P5800s are still at MSRP.
Posted on Reply
#13
cchi
Additionally, the average read and write access latencies are 79 ms and 52 ms.
Is that really several times the latency of an HDD? Or should it be microseconds (µs)?
Posted on Reply
#14
enb141
cchi: Is that really several times the latency of an HDD? Or should it be microseconds (µs)?
Yes, the fastest (lowest latency) I remember for hard drives was a VelociRaptor 10K RPM at 1 ms; SSDs are measured in µs, and the Intel Optane P5810X is supposed to average 5 µs.
Posted on Reply
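A quick sketch of the unit question raised above, using the ballpark reference latencies from the comment (assumed round figures, not measurements):

```python
# Sanity check on the quoted latency figures. Reference latencies are
# rough, commonly cited ballpark values, not measurements.

quoted_read_ms = 79
hdd_seek_ms = 1     # fast 10K RPM HDD (VelociRaptor-class), per the comment
nand_ssd_us = 80    # typical NAND SSD read latency, in microseconds
optane_us = 5       # P5810X-class read latency, per the comment

print(quoted_read_ms / hdd_seek_ms)         # 79x slower than an HDD -- implausible
print(quoted_read_ms * 1000 / nand_ssd_us)  # ~1000x a NAND SSD if taken as ms
# Read as 79 us instead, the number sits right where NAND SSDs live,
# which suggests the spec sheet's "ms" is a typo for "us".
```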
#15
Wirko
For what use case would this card + 21 hot M.2 drives be preferable to a smaller bunch of U.2 drives?

21 x Corsair or Sabrent M.2 (8 TB) = ~24,000 € for 168 TB - not including the card
11 x Micron 7450 Pro U.3 (15.36 TB) = 18,700 € for 169 TB
6 x Micron 9400 Pro U.3 (30.72 TB) = 24,600 € for 184 TB

This card is meant for workstations anyway, where PCIe lane count and bifurcation support shouldn't be an issue, so U.2 drives could be attached without the need for a hot PCIe switch.
Posted on Reply
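Working the cost-per-TB out of the figures listed above:

```python
# Cost-per-TB for the three options in the comment above, using its figures.

options = [
    ("21x 8 TB M.2 (card not included)", 24_000, 168),
    ("11x Micron 7450 Pro 15.36 TB",     18_700, 169),
    ("6x Micron 9400 Pro 30.72 TB",      24_600, 184),
]

for name, eur, tb in options:
    print(f"{name}: {eur / tb:.0f} EUR/TB")
# ~143, ~111, ~134 EUR/TB respectively
```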
#16
Berfs1
enb141: EOL, but still the kings of solid-state disks, that's a fact.
This is for those that actually want the fastest sustained speeds possible. Optane does like 7 GBps sustained; 21x 990 Pros that do 1.4 GBps each is 29.4 GBps, and PCIe 4.0 x16 bandwidth is 32 GBps. Yes, the 990 Pro is absolute fucking trash for sustained writes (and doesn't even deserve the "Pro" nomenclature), but hey, if someone wants 42 TB of storage, or maybe 20 TB + 22 TB, just get 21x 2 TB 990 Pros, and boom. Though I am curious how this would even be powered, because 21 NVMes can pull over 75 W under full load. I'd buy it if I absolutely needed super-fast storage for my video editing, but it's not really worth the cost to me. Maybe for movie makers, though. What would make more sense, though, is getting 21x 980 Pro 2 TBs and putting those in RAID 0: still 42 TB of storage, but your sustained writes would be 39.9 GBps (limited by the 32 GBps ceiling of PCIe 4.0 x16), and it would probably be half the cost.
Posted on Reply
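A minimal sketch of the sustained-write arithmetic above, with the PCIe 4.0 x16 ceiling applied; the per-drive speeds are the commenter's figures, and the helper function is purely illustrative.

```python
# Aggregate sustained write speed of a striped array, capped by the uplink.
# Per-drive speeds below come from the comment above, not from benchmarks.

def array_write_gbps(drives: int, per_drive_gbps: float,
                     uplink_gbps: float = 32.0) -> float:
    """Sum the drives' sustained writes, then apply the x16 uplink cap."""
    return min(drives * per_drive_gbps, uplink_gbps)

print(array_write_gbps(21, 1.4))  # 29.4 -- 990 Pro array stays under the cap
print(array_write_gbps(21, 1.9))  # 32.0 -- 980 Pro array hits the uplink ceiling
print(array_write_gbps(1, 7.0))   # 7.0  -- a single Optane-class drive
# Past ~23 drives at 1.4 GB/s each, the uplink, not the drives,
# becomes the bottleneck.
```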
#17
enb141
Berfs1: This is for those that actually want the fastest sustained speeds possible. Optane does like 7 GBps sustained; 21x 990 Pros that do 1.4 GBps each is 29.4 GBps, and PCIe 4.0 x16 bandwidth is 32 GBps. Yes, the 990 Pro is absolute fucking trash for sustained writes (and doesn't even deserve the "Pro" nomenclature), but hey, if someone wants 42 TB of storage, or maybe 20 TB + 22 TB, just get 21x 2 TB 990 Pros, and boom. Though I am curious how this would even be powered, because 21 NVMes can pull over 75 W under full load. I'd buy it if I absolutely needed super-fast storage for my video editing, but it's not really worth the cost to me. Maybe for movie makers, though. What would make more sense, though, is getting 21x 980 Pro 2 TBs and putting those in RAID 0: still 42 TB of storage, but your sustained writes would be 39.9 GBps (limited by the 32 GBps ceiling of PCIe 4.0 x16), and it would probably be half the cost.
Still, if you want better specs, a RAID of Optanes will smoke this.
Posted on Reply
#18
Berfs1
enb141: Still, if you want better specs, a RAID of Optanes will smoke this.
Kindly show me a faster Optane M.2 drive? Because AFAIK that doesn't exist. They do make fast NVMe Optane drives, but those aren't M.2; those are U.2, in the 2.5" form factor.
Posted on Reply
#19
enb141
Berfs1: Kindly show me a faster Optane M.2 drive? Because AFAIK that doesn't exist. They do make fast NVMe Optane drives, but those aren't M.2; those are U.2, in the 2.5" form factor.
Yes, they are U.2, but in a server you can connect them to M.2 with adapters.
Posted on Reply
#20
Berfs1
enb141: Yes, they are U.2, but in a server you can connect them to M.2 with adapters.
So... you didn't understand the entire point of this. This is a compact solution, versus having twenty-one 2.5" drives stacked side by side.
Posted on Reply
#21
Calenhad
LabRat 891: Good eye.
I haven't taken the time to 'play with the concept', but I've recently been researching PCIe switches. Can confirm 'series' switches are a pretty common thing, even on finished products (like mobos and HBAs, etc.). TBQH, I'd liken PCIe a lot to Ethernet, but the PCB is actually like routing WANs and LANs (inter-strewn across and alongside power and other comm. 'circuits') in a tiny cityscape.
21 x4 slots equal 84 lanes. Add the x16 connection and you have 100. Perfect marketing logic at work...

Which is my bet for where the "100 PCIe lanes on the board" figure comes from.
Posted on Reply
#22
enb141
Berfs1: So... you didn't understand the entire point of this. This is a compact solution, versus having twenty-one 2.5" drives stacked side by side.
I doubt people are looking for a "compact" solution at this price point.
Posted on Reply
#23
Calenhad
enb141: I doubt people are looking for a "compact" solution at this price point.
Do you really not see the difference between having one expansion card you can fit in a regular-sized workstation, versus having to find a damn chassis with room for 21x 2.5" drives (and cables)? Which would probably end up being a couple of rack-mounted chassis for "convenience".

Same thing goes for server usage. Any space you don't need for drives can be used for other purposes.
Posted on Reply
#24
enb141
Calenhad: Do you really not see the difference between having one expansion card you can fit in a regular-sized workstation, versus having to find a damn chassis with room for 21x 2.5" drives (and cables)? Which would probably end up being a couple of rack-mounted chassis for "convenience".

Same thing goes for server usage. Any space you don't need for drives can be used for other purposes.
If we are talking about performance, there's no other way. If not, look at Apple: even they had no choice but to use a Mac Pro, which still takes up a huge amount of space.

So if performance is what you are looking for, this drive is not for you.
Posted on Reply
#25
Berfs1
enb141: If we are talking about performance, there's no other way. If not, look at Apple: even they had no choice but to use a Mac Pro, which still takes up a huge amount of space.

So if performance is what you are looking for, this drive is not for you.
iF PerFoRMaNCe iS WhAT yO... okay, do you not realize there is a 32 GBps cap with PCIe 4.0 x16, or are you intentionally choosing to ignore that fact? BECAUSE there is a 32 GBps ceiling, you cannot get faster performance with faster drives once you pass the 32 GBps mark. So no, Optane drives won't magically make it faster. Plus, a lot of folks that want this card would like to put it in a secondary machine as a NAS, or maybe they have one of the new HEDT platforms with a lot of PCIe lanes at their disposal and want to put it in their primary workstation as a data-transfer drive. I mean, if I were to use this and money were no object, I'd probably have twin cards, two pairs of RAID 0, 8 TB NVMes, 168 TB per card, and I would record gameplay in near-lossless quality, shit, maybe even 1440p or 4K lossless, and storage would not be an issue at all. Granted, an HEDT CPU would have so many cores you could just do x264 recording and use up all the threads, or even AV1 recording, tbh... Back to the original point: that's where there is a legitimate use for this, and you seem to think the only use for a card with a bunch of M.2 slots is risers... that's not what everyone needs or wants. You need to understand that everyone's needs can be different, and this card was designed primarily with M.2 SSDs in mind (because that's what the product pictures show).
Posted on Reply