Thursday, May 2nd 2024

SK hynix Presents CXL Memory Solutions Set to Power the AI Era at CXL DevCon 2024

SK hynix participated in the first-ever Compute Express Link Consortium Developers Conference (CXL DevCon), held in Santa Clara, California, from April 30 to May 1. Organized by a group of more than 240 global semiconductor companies known as the CXL Consortium, CXL DevCon 2024 welcomed a majority of the consortium's members to showcase their latest technologies and research results.

CXL is a technology that unifies the interfaces of different devices in a system, such as semiconductor memory, storage, and logic chips. As it can increase system bandwidth and processing capacity, CXL is receiving attention as a key technology for the AI era, in which high performance and capacity are essential. Under the slogan "Memory, The Power of AI," SK hynix showcased a range of CXL products at the conference that are set to strengthen the company's leadership in AI memory technology.

During a performance demonstration, the CXL Memory Module-Double Data Rate 5 (CMM-DDR5) expanded system bandwidth by up to 50% and capacity by up to 100% compared to systems equipped with only DDR5 DRAM. Additionally, SK hynix highlighted the software it provides for CMM-DDR5, the Heterogeneous Memory Software Development Kit (HMSDK). On systems fitted with both CMM-DDR5 and standard DRAM modules, HMSDK can significantly enhance performance by relocating data to the most suitable memory device based on how frequently it is accessed.
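
Under Linux, a CXL memory expander such as the CMM-DDR5 typically appears as a CPU-less NUMA node, which is the hook tiering software uses to steer data between DRAM and CXL memory. Below is a minimal sketch of that idea using the standard libnuma API rather than HMSDK itself; the node number and the hot/cold split are assumptions for illustration.

/* Minimal sketch: steering allocations between local DRAM and a CXL
 * memory node with libnuma. Build with: gcc tiering.c -lnuma
 * Assumes the CXL expander is exposed as CPU-less NUMA node 1; the
 * node ID and the hot/cold split are illustrative only. */
#include <numa.h>
#include <stdio.h>
#include <string.h>

#define CXL_NODE 1  /* assumed node ID of the CXL expander */

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    size_t size = 64UL << 20;  /* 64 MiB per buffer */

    /* Rarely touched ("cold") data goes to the CXL expander... */
    void *cold = numa_alloc_onnode(size, CXL_NODE);
    /* ...while frequently accessed ("hot") data stays in local DRAM. */
    void *hot = numa_alloc_local(size);
    if (!cold || !hot) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }

    memset(hot, 0xAA, size);   /* hot working set: touched constantly */
    memset(cold, 0x55, size);  /* cold data: written once, read rarely */

    numa_free(cold, size);
    numa_free(hot, size);
    return 0;
}

HMSDK's contribution is making this placement decision automatically from observed access frequency, rather than requiring each buffer to be pinned by hand as above.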

SK hynix also displayed Niagara 2.0, a solution that connects multiple CXL memory devices so that numerous hosts, such as CPUs and GPUs, can optimally share the pooled capacity. This eliminates idle, stranded memory while reducing power consumption.

Compared with the previous-generation Niagara 1.0, which only allowed systems to share capacity with one another, Niagara 2.0 also enables the sharing of data itself. In turn, this reduces inefficiencies such as redundant data processing and thereby improves overall system performance. As a result, these CXL products are expected to be used in AI and high-performance computing (HPC) systems in the future.
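
To see why pooling eliminates idle capacity, a toy comparison helps (all numbers hypothetical, not SK hynix figures): with fixed per-host memory, anything a host does not use is stranded, while the same total capacity in a shared pool can be assigned to whichever host needs it.

/* Toy arithmetic, not SK hynix's implementation: compare fixed
 * per-host memory against one shared pool of the same total size.
 * All numbers are hypothetical. Build with: gcc pooling.c */
#include <stdio.h>

#define HOSTS 4

int main(void)
{
    int demand[HOSTS] = {300, 900, 200, 600}; /* GiB each host needs */
    int per_host = 512;                       /* GiB fixed to each host */
    int pool = HOSTS * per_host;              /* same capacity, pooled */

    int stranded = 0, unmet = 0, used = 0;
    for (int i = 0; i < HOSTS; i++) {
        if (demand[i] <= per_host)
            stranded += per_host - demand[i]; /* idle, unusable by others */
        else
            unmet += demand[i] - per_host;    /* host starves despite idle
                                                 memory elsewhere */
        used += demand[i];
    }

    printf("Fixed:  %d GiB stranded idle, %d GiB of demand unmet\n",
           stranded, unmet);
    printf("Pooled: %d of %d GiB in use, %d GiB free for any host\n",
           used, pool, pool - used);
    return 0;
}

In the fixed case, two of the four hosts starve even though over 500 GiB sits idle elsewhere; with the pool, every demand is met and 48 GiB remains free for any host to claim.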

During the conference's presentation session, SK hynix's Distinguished Engineer Wonha Choi of the Next-Gen Memory & Storage team gave a talk titled "Enabling CXL Memory Module, Exploring Memory Expansion Use Cases & Beyond". The presentation covered the background of CXL's adoption, the technology's components, research cases and performance results, and anticipated future applications.

Following its participation at CXL DevCon 2024, SK hynix plans to strengthen its AI memory leadership by advancing CXL technology for its expanding product lineup.
Source: SK hynix

2 Comments on SK hynix Presents CXL Memory Solutions Set to Power the AI Era at CXL DevCon 2024

#1
Minus Infinity
Really? Have a look at a typical AI GPGPU like the H100/H200/B100/B200, etc., and you'll see there's no space for CXL around the edges of the chip. Why doesn't Nvidia use it? This won't be found in next-gen AI boards like Nvidia's Rubin

CXL is dead for AI
#2
InVasMani
It kind of sits in the same limbo space as Optane. Ideally, what we need is more of a technology that situates itself between CPU cache and system memory. I think the best solution to that is running fiber optics to the first pair of memory channels and using copper traces for the second pair. Another thing that could probably be done to get system memory latency down further is making an individual DDR6 DIMM quad-channel, and a pair octa-channel. On a four-channel board they could also just forgo using the latter two DIMM slots for system memory and use them instead for DIMM-slotted storage that could leverage the slot's bandwidth advantages. I don't feel like storage capacity is as big a deal these days, at least in general for the consumer space. Overall capacity is only somewhat of an issue for GPU VRAM and CPU cache in the consumer space these days. For traditional storage, it's affordable and abundant enough that it's hard to call it a problem, at least outside of Optane, which has been shelved.
Minus Infinity said: "Really? Have a look at a typical AI GPGPU like the H100/H200/B100/B200, etc., and you'll see there's no space for CXL around the edges of the chip. Why doesn't Nvidia use it? This won't be found in next-gen AI boards like Nvidia's Rubin. CXL is dead for AI."
One reason I see is that VRAM itself is plenty fast once data reaches the GPU. There's latency getting data to the GPU, but once it's on the GPU, the storage itself is plenty quick, much more so than traditional storage. It's feeding new external data into VRAM that takes time. That said, if you're manipulating and storing it in VRAM itself, it's plenty fast.