
SK hynix Presents CXL Memory Solutions Set to Power the AI Era at CXL DevCon 2024

GFreeman

News Editor
Staff member
SK hynix participated in the first-ever Compute Express Link Consortium Developers Conference (CXL DevCon) held in Santa Clara, California from April 30-May 1. Organized by a group of more than 240 global semiconductor companies known as the CXL Consortium, CXL DevCon 2024 welcomed a majority of the consortium's members to showcase their latest technologies and research results.

CXL is a technology that unifies the interfaces of different devices in a system such as semiconductor memory, storage, and logic chips. As it can increase system bandwidth and processing capacity, CXL is receiving attention as a key technology for the AI era in which high performance and capacity are essential. Under the slogan "Memory, The Power of AI," SK hynix showcased a range of CXL products at the conference that are set to strengthen the company's leadership in AI memory technology.



During a performance demonstration, the CXL Memory Module-Double Data Rate 5 (CMM-DDR5) expanded system bandwidth by up to 50% and capacity by up to 100% compared to systems equipped with only DDR5 DRAM. Additionally, SK hynix highlighted the Heterogeneous Memory Software Development Kit (HMSDK), its software that supports CMM-DDR5. On systems fitted with both CMM-DDR5 and standard DRAM modules, HMSDK can significantly enhance performance by relocating data to the appropriate memory device based on how frequently it is used.
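HMSDK's own interfaces aren't detailed here, but the underlying idea (keeping hot pages in native DRAM and demoting infrequently used pages to the larger, slower CXL tier) can be sketched with stock Linux facilities. The snippet below is a minimal illustration only: it assumes the CMM-DDR5 expander is exposed as a CPU-less NUMA node (node 2 is an arbitrary placeholder) and uses the standard move_pages(2)/libnuma interface, not anything HMSDK-specific.

```c
/* Illustrative sketch only: demote "cold" pages from local DRAM to a CXL
 * memory expander exposed as a CPU-less NUMA node (node 2 is assumed).
 * Uses the stock Linux move_pages(2) interface, not HMSDK's own API.
 * Build: gcc demo.c -lnuma */
#include <numaif.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    long page_size = sysconf(_SC_PAGESIZE);
    const size_t n_pages = 4;
    void *buf;

    /* Allocate a page-aligned buffer that initially lands in regular DRAM. */
    if (posix_memalign(&buf, page_size, n_pages * page_size) != 0)
        return 1;

    /* Touch each page so it is actually backed by physical memory. */
    for (size_t i = 0; i < n_pages * (size_t)page_size; i += page_size)
        ((char *)buf)[i] = 1;

    void *pages[4];
    int   nodes[4];    /* destination NUMA node per page */
    int   status[4];   /* resulting node (or negative errno) per page */
    for (size_t i = 0; i < n_pages; i++) {
        pages[i] = (char *)buf + i * page_size;
        nodes[i] = 2;  /* hypothetical CXL node id */
    }

    /* pid 0 means "the calling process"; MPOL_MF_MOVE migrates the pages. */
    if (move_pages(0, n_pages, pages, nodes, status, MPOL_MF_MOVE) != 0)
        perror("move_pages");

    for (size_t i = 0; i < n_pages; i++)
        printf("page %zu now on node %d\n", i, status[i]);

    free(buf);
    return 0;
}
```

A tiering daemon in this spirit would track access frequency (for example via page-access scanning) and issue such migrations in both directions; the demo above only shows the demotion half under the stated assumptions.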

SK hynix also displayed Niagara 2.0, a solution that connects multiple CXL memories together to allow numerous hosts such as CPUs and GPUs to optimally share their capacity. This eliminates idle memory usage while reducing power consumption.

Compared with the previous generation Niagara 1.0 which only allowed systems to share capacity with one another, Niagara 2.0 also enables the sharing of data. In turn, this reduces inefficiencies such as redundant data processing and, therefore, improves overall system performance. As a result, these CXL products are expected to be used in AI and high-performance computing (HPC) systems in the future.

During the presentation session at the conference, SK hynix's Distinguished Engineer Wonha Choi of the Next-Gen Memory & Storage team gave a talk titled "Enabling CXL Memory Module, Exploring Memory Expansion Use Cases & Beyond". The presentation covered the background of CXL's adoption through to the technology's components, research cases and performance, and anticipated applications in the future.

Following its participation at CXL DevCon 2024, SK hynix plans to strengthen its AI memory leadership by advancing CXL technology for its expanding product lineup.

View at TechPowerUp Main Site | Source
 
Really, have a look at a typical AI GPGPU like the H100/H200/B100/B200 etc. and you'll see there's no space for CXL around the edges of the chip. Why doesn't Nvidia use it? It won't be found in next-gen AI boards like Nvidia's Rubin either.

CXL is dead for AI
 
It kind of sits in the same limbo space as Optane. Ideally what we need is more of a technology that situates itself between CPU cache and system memory. I think the best solution to that is running fiber optics to the first paired memory channels and having the second paired memory channel on copper traces. Another thing that could probably be done to get system memory latency down further is making an individual DDR6 DIMM quad-channel and a pair octa-channel. In a four-channel board they could also just forgo using the latter two DIMM slots for system memory and use them instead for DIMM-slotted storage that could leverage the slot's bandwidth advantages. I don't feel like storage capacity is as big a deal these days, at least in general for the consumer space. Overall capacity is only somewhat of an issue for GPU VRAM and CPU cache in the consumer space these days. For traditional storage it's really affordable and abundant enough that it's hard to call it a problem, at least outside of Optane, which has been shelved.

Really, have a look at a typical AI GPGPU like the H100/H200/B100/B200 etc. and you'll see there's no space for CXL around the edges of the chip. Why doesn't Nvidia use it? It won't be found in next-gen AI boards like Nvidia's Rubin either.

CXL is dead for AI

One reason I see is that VRAM itself is plenty fast once data reaches the GPU. There's latency in getting data to the GPU, but once it's on the GPU the storage itself is plenty quick, much more so than traditional storage. It's feeding new external data into VRAM that takes time. That said, if you're manipulating and storing it in VRAM itself, it's plenty fast.
 