Monday, March 13th 2023
Apex Storage Add-In-Card Hosts 21 M.2 SSDs, up to 168 TB of Storage
Apex Storage, a new company in the storage world, has announced that its X21 add-in-card (AIC) has room for 21 (you read that right) PCIe 4.0 M.2 NVMe SSDs. The card supports up to 168 TB of storage with 8 TB M.2 NVMe SSDs, or up to 336 TB once 16 TB M.2 drives arrive, and can reach speeds of up to 30.5 GB/s. Packed into a single-slot, full-length, full-height AIC, the X21 is built to fit inside workstations and servers, targeting applications such as machine learning and hyper-converged infrastructure that enterprises deploy across the site.
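As a quick sanity check on those figures, here is a back-of-the-envelope sketch; the only assumptions are the quoted per-drive capacities and an approximate PCIe 4.0 per-lane rate for the card's x16 uplink:

```python
# Rough capacity/bandwidth check for the quoted X21 figures.
DRIVE_SLOTS = 21

for drive_tb in (8, 16):
    print(f"{DRIVE_SLOTS} x {drive_tb} TB = {DRIVE_SLOTS * drive_tb} TB total")
# 21 x 8 TB  = 168 TB
# 21 x 16 TB = 336 TB

# The 30.5 GB/s figure sits just under the ceiling of a PCIe 4.0 x16 uplink
# (~1.97 GB/s per lane after 128b/130b encoding, before protocol overhead).
uplink_lanes = 16
per_lane_gbps = 1.97
print(f"x16 uplink ceiling ~= {uplink_lanes * per_lane_gbps:.1f} GB/s")
```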
The X21 AIC has 100 PCIe lanes on the board, which indicates the presence of a PCIe switch, likely placed under the heatsink. The PCIe slot alone cannot power all of that storage, so the card also carries two 6-pin PCIe power connectors, bringing the total power budget to 225 Watts. Interestingly, the heatsink is passively cooled, but Apex Storage recommends active airflow of at least 400 LFM to keep the card operating normally. In its example application, the company populated the X21 with Samsung 990 Pro SSDs; however, the card also supports Intel Optane drives. Read and write IOPS exceed 10 million, while average read and write access latencies are 79 µs and 52 µs, respectively. Apex Storage didn't reveal the pricing and availability of the card; however, expect it to come at a premium.
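For context, the 225 W and 100-lane figures line up with simple arithmetic; a minimal sketch, assuming the standard 75 W ratings for the slot and each 6-pin connector, x4 links per M.2 drive, and an x16 uplink (Apex Storage hasn't itemized these):

```python
# Power budget: slot + two 6-pin PCIe connectors (standard ratings assumed).
SLOT_W = 75
SIX_PIN_W = 75
total_w = SLOT_W + 2 * SIX_PIN_W
print(f"Power budget: {SLOT_W} + 2 x {SIX_PIN_W} = {total_w} W")    # 225 W

# Lane count: 21 M.2 drives at x4 each behind the switch, plus the x16 uplink.
drives, lanes_per_drive, uplink = 21, 4, 16
print(f"Switch lanes: {drives} x {lanes_per_drive} + {uplink} = "
      f"{drives * lanes_per_drive + uplink}")                        # 100
```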
Sources:
Apex Storage, via Tom's Hardware
27 Comments on Apex Storage Add-In-Card Hosts 21 M.2 SSDs, up to 168 TB of Storage
was... was this made for me?
Can I get a 'sample'? (I'll buy the 17 additional 118GB P1600Xs)
Seriously though, after catching so much 'shit' about enjoying PCIe switches and NVMe RAID, seeing products like this makes me feel a lot less mad.
I wonder what switch it's using? PLX's offerings max out at 98 lanes/ports on Gen4. Maybe it's Microchip / Microsemi (<- what an unfortunate name...)? Could be a 116-lane switch, counted that way with the 'uplink' x16 subtracted.
My madness has me imagining a hydra's nest of OCuLink-to-M.2 cables running to 21x U.2 P5810Xs. Just as unrealistic (for me), but I could probably add 17 more P1600Xs (which are at liquidation pricing) if I eventually find one of these (years down the road). Optane 'lasts'; I expect to have my Optane drives for decades to come (which was part of its 'problem' as a "consumer product"). Good eye.
I haven't taken the time to 'play with the concept', but I've recently been researching PCIe switches. Can confirm 'series' switches are a pretty common thing, even on finished products (like mobos and HBAs, etc.). TBQH, I'd liken PCIe a lot to Ethernet, but routing the PCB is more like routing WANs and LANs (interwoven with power and other comm. 'circuits') in a tiny cityscape.
And those P5800s are still at MSRP.
21 x Corsair or Sabrent M.2 (8 TB) = ~24,000 € for 168 TB - not including the card
11 x Micron 7450 Pro U.3 (15.36 TB) = 18,700 € for 169 TB
6 x Micron 9400 Pro U.3 (30.72 TB) = 24,600 € for 184 TB
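A rough EUR-per-TB comparison of the three options above, taking the quoted prices as-is:

```python
# Cost-per-terabyte comparison of the configurations listed above.
options = [
    ("21 x 8 TB M.2 (Corsair/Sabrent)", 24_000, 168),
    ("11 x Micron 7450 Pro 15.36 TB",   18_700, 169),
    ("6 x Micron 9400 Pro 30.72 TB",    24_600, 184),
]
for name, euros, tb in options:
    print(f"{name}: ~{euros / tb:.0f} EUR/TB")
# ~143, ~111, and ~134 EUR/TB respectively
```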
This card is meant for workstations anyway, so PCIe lane count and bifurcation support shouldn't be an issue, and a hot PCIe switch wouldn't even be needed.
Which is my bet for where the 100 lanes onboard figure comes from
Same thing goes for server usage. Any space you don't need for drives can be used for other purposes.
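To illustrate the trade-off in this sub-thread: with plain x4/x4/x4/x4 bifurcation and no switch, 21 M.2 drives would consume several physical x16 slots and far more CPU lanes; a minimal sketch, assuming 4 lanes per drive:

```python
# How many x16 slots would 21 M.2 drives need with x4/x4/x4/x4 bifurcation?
import math

drives, lanes_per_drive, slot_width = 21, 4, 16
drives_per_slot = slot_width // lanes_per_drive        # 4 drives per x16 slot
slots_needed = math.ceil(drives / drives_per_slot)     # 6 slots
cpu_lanes_used = drives * lanes_per_drive              # 84 CPU lanes

print(f"Bifurcation: {slots_needed} x16 slots, {cpu_lanes_used} CPU lanes")
print("Switched AIC: 1 x16 slot, 16 CPU lanes")
```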
So if performance is what you are looking for, this drive is not for you.