Monday, October 14th 2024
Advantech Announces CXL 2.0 Memory to Boost Data Center Efficiency
Advantech, a global leader in embedded computing, is excited to announce the release of the SQRAM CXL 2.0 Type 3 Memory Module. Compute Express Link (CXL) 2.0 is the next evolution in memory technology, providing memory expansion with a high-speed, low-latency interconnect designed to meet the demands of large AI training and HPC clusters. CXL 2.0 builds on the foundation of the original CXL specification, introducing advanced features such as memory sharing and expansion, enabling more efficient utilization of resources across heterogeneous computing environments.
Memory Expansion via E3.S 2T Form Factor
Traditional memory architectures are often limited by fixed allocations, which can result in underutilized resources and bottlenecks in data-intensive workloads. With the E3.S form factor, based on the EDSFF standard, the CXL 2.0 Memory Module overcomes these limitations, allowing for dynamic resource management. This not only improves performance but reduces costs by maximizing existing resources.
High-Speed Interconnect via PCIe 5.0 Interface
CXL memory modules operate over the PCIe 5.0 interface. This high-speed connection ensures that even as memory is expanded across different systems, data transfer remains rapid and efficient. The PCIe 5.0 interface provides up to 32 GT/s per lane, allowing the CXL memory module to deliver the bandwidth necessary for data-intensive applications. Adding capacity this way yields better performance and increased memory bandwidth without the need for more servers and the associated capital expense.
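The 32 GT/s figure can be sanity-checked with a quick back-of-the-envelope calculation. PCIe 5.0 uses 128b/130b line encoding, so a small sketch of the theoretical per-link bandwidth (ignoring protocol overhead such as TLP/flit headers, so real throughput is lower) looks like this:

```python
# Back-of-the-envelope PCIe 5.0 bandwidth estimate.
# Assumption: raw line rate with 128b/130b encoding only;
# protocol overhead is ignored, so real throughput is lower.

GT_PER_LANE = 32.0    # PCIe 5.0 raw rate, GT/s per lane
ENCODING = 128 / 130  # 128b/130b line encoding efficiency

def link_bandwidth_gbps(lanes: int) -> float:
    """Theoretical one-direction bandwidth in GB/s for a PCIe 5.0 link."""
    usable_gbit_per_s = lanes * GT_PER_LANE * ENCODING
    return usable_gbit_per_s / 8  # bits -> bytes

print(f"x8  link: {link_bandwidth_gbps(8):.1f} GB/s")   # ~31.5 GB/s
print(f"x16 link: {link_bandwidth_gbps(16):.1f} GB/s")  # ~63.0 GB/s
```

An x8 module such as this one therefore tops out around 31.5 GB/s in each direction before protocol overhead.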
Memory Pooling
CXL 2.0 enables multiple hosts to access a shared memory pool, optimizing resource allocation and improving overall system efficiency. Through CXL's memory pooling technology, computing components such as CPUs and accelerators of multiple servers on the same shelf can share memory resources, reducing resource redundancy and solving the problem of low memory utilization.
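The pooling idea can be illustrated with a toy sketch (this is an illustration of the resource-management concept, not a real CXL API; the class and method names are made up): capacity is carved out to hosts on demand and returned to the pool when a job finishes, instead of being fixed per server.

```python
# Toy illustration of CXL 2.0-style memory pooling (NOT a real CXL API).
# A shared pool of capacity is assigned to hosts on demand and
# returned when no longer needed, instead of being fixed per server.

class MemoryPool:
    def __init__(self, capacity_gb: int):
        self.capacity_gb = capacity_gb
        self.allocations: dict[str, int] = {}  # host -> GB assigned

    def free_gb(self) -> int:
        return self.capacity_gb - sum(self.allocations.values())

    def allocate(self, host: str, gb: int) -> bool:
        if gb > self.free_gb():
            return False  # pool exhausted; fixed-per-server RAM would waste this
        self.allocations[host] = self.allocations.get(host, 0) + gb
        return True

    def release(self, host: str) -> None:
        self.allocations.pop(host, None)

pool = MemoryPool(capacity_gb=256)
pool.allocate("server-1", 128)  # memory-hungry job spikes on server-1
pool.allocate("server-2", 64)
pool.release("server-1")        # job finishes; capacity returns to the pool
print(pool.free_gb())           # 192
```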
Hot-Plug and Scalability
CXL memory modules can be added or removed from the system without shutting down the server, allowing for on-the-fly memory expansion. For data centers, this translates into the ability to scale memory resources as needed, ensuring optimal performance without disruption.
Source:
Advantech

Key Features
- EDSFF E3.S 2T form-factor
- CXL 2.0 is compatible with PCIe-Gen5 speeds running at 32 GT/s
- Supports ECC error detection and correction
- PCB: 30μ'' gold finger
- Operating Environment: 0 ~ 70°C (Tc)
- Compliant with CXL 1.1 & CXL 2.0
9 Comments on Advantech Announces CXL 2.0 Memory to Boost Data Center Efficiency
Chips and Cheese recently tested 12-channel DDR5-6000 MT/s, reaching ~99% of the theoretical 576 GB/s.
Turin offers up to 3 TB in 1 DPC and 6 TB in 2 DPC (in up to 4400 MT/s, so ~422 GB/s) configuration.
128 lanes of PCIe 5.0 / CXL 2.0 offer theoretical ~504 GB/s.
This card offers 64 GB in x8 lanes, so in 128 lanes you get 1 TB. I can definitely see potential for 128 GB, or maybe even 256 GB versions of this card.
Depending on the price of this device and on RAM pricing (top-capacity modules are reportedly very costly), CXL memory may be a cost-effective way to roughly double the bandwidth and add 1.5x..2x the capacity using, say, 96 lanes per socket (768 GB, ~378 GB/s), leaving the rest for SSDs etc.
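The figures in these comments check out arithmetically. Spelling the math out (assuming 8 bytes per DDR5 channel transfer and 128b/130b PCIe 5.0 encoding, no protocol overhead):

```python
# Verify the bandwidth figures quoted in the comments above.
# Assumptions: 64-bit (8-byte) DDR5 channels; PCIe 5.0 at 32 GT/s
# per lane with 128b/130b encoding; protocol overhead ignored.

def ddr5_bandwidth_gbs(channels: int, mts: int) -> float:
    """Peak DDR5 bandwidth: channels x transfer rate x 8 bytes/transfer."""
    return channels * mts * 8 / 1000  # GB/s

def pcie5_bandwidth_gbs(lanes: int) -> float:
    """Theoretical PCIe 5.0 bandwidth across a given number of lanes."""
    return lanes * 32 * (128 / 130) / 8  # GB/s

print(ddr5_bandwidth_gbs(12, 6000))     # 576.0 (12-channel DDR5-6000)
print(ddr5_bandwidth_gbs(12, 4400))     # 422.4 (2 DPC at 4400 MT/s)
print(round(pcie5_bandwidth_gbs(128)))  # 504   (128 lanes of PCIe 5.0)
print(round(pcie5_bandwidth_gbs(96)))   # 378   (96 lanes of PCIe 5.0)
```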
It would be nice if someone gave e.g. Phoronix 12-24 such modules to test how it looks from the system perspective. Can it be configured as "far NUMA node memory" or similar - so transparent memory extension? Or would the application have to be CXL-memory aware? What is the total system latency for this memory?
While I agree that a 2x 8-lane device is worse off than a 1x 16-lane one, maybe they know their target market and E3.S in the 8-lane variant is more common? Like, "in a common platform with 16 SSD slots per socket, use 12 slots per socket for memory expansion"? Less cost-effective because of the additional controllers: probably yes, compared to a standard, say, 768 GB per 12-channel socket, i.e. 64 GB DIMMs. The price for bigger modules may be disproportionately higher, so the equation may change fast. Applications "aware of the slow pool and the fast pool of memory": NUMA-aware? There is SNC3 vs HEX on Granite Rapids, NPS4 vs NPS1 on Zens, and HBM plus DDR5 on Sapphire Rapids Xeon Max. The last one is even available in HBM caching mode, so maybe the most relevant:
www.phoronix.com/review/xeon-max-hbm2e-amx