Friday, November 15th 2024

Lenovo Shows 16 TB Memory Cluster with CXL in 128x 128 GB Configuration

Expanding a system's compute capability with an additional accelerator like a GPU is common; expanding its memory capacity with extra DIMM slots is something new. Thanks to ServeTheHome, we see that at the OCP Summit 2024, Lenovo showcased its ThinkSystem SR860 V3 server, which leverages CXL technology and Astera Labs Leo memory controllers to accommodate a staggering 16 TB of DDR5 memory across 128 DIMM slots. Traditional four-socket servers are limited by the memory channels of their Intel Xeon processors: with each CPU supporting up to 16 DDR5 DIMMs, a four-socket configuration maxes out at 64 DIMMs, equating to 8 TB with 128 GB RDIMMs. Lenovo's approach raises that ceiling significantly by adding another 64 DIMM slots through CXL memory expansion.
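
As a quick check on those numbers, here is a minimal sketch of the capacity arithmetic (Python; the slot counts and DIMM size are the ones reported above):

    sockets = 4
    dimms_per_cpu = 16     # directly attached DDR5 DIMMs per Xeon
    cxl_dimms = 64         # additional slots behind the CXL expansion
    dimm_gb = 128          # 128 GB RDIMMs

    direct_gb = sockets * dimms_per_cpu * dimm_gb   # 4 * 16 * 128 = 8192 GB (8 TB)
    total_gb = direct_gb + cxl_dimms * dimm_gb      # 8192 + 8192 = 16384 GB (16 TB)
    print(f"direct: {direct_gb // 1024} TB, total: {total_gb // 1024} TB")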

The ThinkSystem SR860 V3 integrates Astera Labs Leo controllers to enable the CXL-connected DIMMs. Each controller manages up to four DDR5 DIMMs, so the 64 CXL slots imply sixteen controllers, arranged in a layered memory design. The chassis base houses the four Xeon processors, each linked to 16 directly connected DIMMs, while the upper section, called the "memory forest," houses the additional CXL-enabled DIMMs. Beyond memory capacity, the server supports up to four double-width GPUs, making it a solution for high-performance computing and AI workloads as well. The design caters to scale-up applications that require vast memory resources, such as large-scale database management, where keeping the working set in memory avoids waiting on storage. CXL-based memory architectures are expected to become more common next year, and future developments may bring even larger systems with shared memory pools that can be dynamically allocated across multiple servers. For more pictures and a video walkthrough, check out ServeTheHome's post.
Source: ServeTheHome

7 Comments on Lenovo Shows 16 TB Memory Cluster with CXL in 128x 128 GB Configuration

#1
LabRat 891
All I can think of is how sad it is that Optane isn't still being actively marketed to take advantage of expanding CXL support...
Intel's proprietary NV-DIMMs for Optane were quite literally retarded; they held back mass adoption and greatly restricted availability.
#2
TumbleGeorge
LabRat 891 said: All I can think of is how sad it is that Optane isn't still being actively marketed to take advantage of expanding CXL support...
Intel's proprietary NV-DIMMs for Optane were quite literally retarded; they held back mass adoption and greatly restricted availability.
I'm curious: what storage capacity do you imagine Intel Optane with 2025 technology could deliver? Without bringing SXL into the answer, thanks!
#3
persondb
LabRat 891 said: All I can think of is how sad it is that Optane isn't still being actively marketed to take advantage of expanding CXL support...
Intel's proprietary NV-DIMMs for Optane were quite literally retarded; they held back mass adoption and greatly restricted availability.
Optane has been dead for years; it can't be marketed towards anything.
#4
igormp
That's interesting, but I wonder about the bottlenecks due to the bandwidth of CXL (which lives on top of PCIe).
Current CXL 2.0 would cap out at ~64 GB/s with 16 lanes, and that expansion board can do 6x x16, so 384 GB/s of aggregate bandwidth.

For comparison, an off-the-shelf DDR5-5600 config provides a max theoretical bandwidth of ~45 GB/s per channel (so ~90 GB/s for a dual-channel system).
That expansion board would be similar to an 8-channel config, which is not bad, to be honest. Latency seems to be 2~3x higher than a regular RAM config (according to this), which is not great, but not that bad for applications that are mostly bound by memory quantity as opposed to speed.

A single expansion board would require 96 lanes (6 x16 slots), so almost all of your CPU's lanes would be dedicated just to that.

CXL 3.0 (with PCIe 6.0) should make things really interesting with double the bandwidth.
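
The arithmetic above checks out; a minimal sketch, assuming ~64 GB/s per PCIe 5.0 x16 link (the transport under CXL 2.0) and the standard 8-byte DDR5 channel width:

    pcie5_x16_gbs = 64                      # ~64 GB/s per x16 link
    slots = 6                               # expansion board: 6x x16 slots
    cxl_aggregate = slots * pcie5_x16_gbs   # 384 GB/s

    ddr5_channel_gbs = 5600 * 8 / 1000      # DDR5-5600, 8 bytes/transfer: ~44.8 GB/s

    print(f"CXL aggregate: {cxl_aggregate} GB/s")
    print(f"DDR5-5600 channel: {ddr5_channel_gbs:.1f} GB/s")
    print(f"equivalent channels: {cxl_aggregate / ddr5_channel_gbs:.1f}")  # ~8.6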
#5
LabRat 891
persondb said: Optane has been dead for years; it can't be marketed towards anything.
Optane isn't actively marketed for reasons other than the qualities of the technology. (It's also not the only PCM IP out there.)
However, there have been recent enterprise-only Gen4 x4 Optane drives, and AFAIK you can still get affordable P1600X M.2 Gen3 x4 drives (as of last year, they were still being actively restocked at Newegg).
TumbleGeorge said: I'm curious: what storage capacity do you imagine Intel Optane with 2025 technology could deliver?
At least 3 TB, as Intel had already demonstrated that back in 2019; beyond that, it'd be limited to what Intel could make economical.
Capacity was never 3D XPoint's main goal. It was meant to be extremely low-latency storage: slower than slow DRAM, faster than the fastest NAND.
TumbleGeorge said: Without bringing SXL into the answer, thanks!
www.sxl.net/ :confused:
igormp said: That's interesting, but I wonder about the bottlenecks due to the bandwidth of CXL (which lives on top of PCIe). [...]
Bandwidth is a concern, but AFAIK putting DRAM/fast cache on PCIe is driven by the need for lower-latency 'big data' transactions.
Also, the kinds of systems this tech would prospectively be deployed in have far more PCIe lanes than anything a consumer could normally get their hands on.
#6
Scrizz
LabRat 891 said: All I can think of is how sad it is that Optane isn't still being actively marketed to take advantage of expanding CXL support...
Intel's proprietary NV-DIMMs for Optane were quite literally retarded; they held back mass adoption and greatly restricted availability.
Development of Optane stopped long ago and the department was dissolved. I don't know what marketing you're expecting under those circumstances.
I do find it interesting that only a few years after Optane's death, there's a market it could've satisfied.
#7
igormp
LabRat 891 said: Also, the kinds of systems this tech would prospectively be deployed in have far more PCIe lanes than anything a consumer could normally get their hands on.
While I'm well aware of that, for this particular product each CPU has a max of 80 lanes (the Sapphire Rapids Xeon lineup), so the expansion board requires more lanes than what's available per socket.
Not an issue with Granite Rapids or EPYC, but that's not the CPU they were using.
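
For reference, a minimal sketch of that lane budget, using the 6x x16 slot count from the earlier post and the 80-lane Sapphire Rapids figure cited here:

    slots, lanes_per_slot = 6, 16
    needed = slots * lanes_per_slot   # 96 lanes for the full expansion board
    per_socket = 80                   # PCIe 5.0 lanes per Sapphire Rapids Xeon
    print(f"needed {needed} lanes vs. {per_socket} per socket, "
          f"deficit {needed - per_socket}")   # short by 16 lanes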