Friday, July 19th 2024
Samsung Planning for CXL 2.0 DRAM Mass Production Later This Year
Samsung Electronics Co. is making a concerted push to secure its position in the next-generation memory technology CXL (Compute Express Link). In a media briefing on Thursday, Jangseok Choi, vice president of Samsung's new business planning team, announced plans to mass-produce 256 GB DRAM supporting CXL 2.0 by the end of this year. CXL promises to significantly improve the efficiency of high-performance server systems by providing a unified interface between CPUs and GPUs and the accelerators, DRAM, and storage devices attached to them.
The company projects that CXL technology will increase memory capacity per server by eight to ten times, marking a significant leap in computing power. Samsung's long-running investment in CXL development is now in its final stages, with the company currently testing products with partners for performance verification; Samsung also recently established the industry's first CXL infrastructure certified by Red Hat. "We expect the CXL market to start blooming in the second half and explosively grow from 2028," Choi stated, highlighting the technology's potential to expand memory capacity and bandwidth far beyond current limitations.
Choi also provided insights into the company's Compute Express Link (CXL) Memory Module (CMM) technology. "In essence, CMM is a product that allows us to add memory in the space typically reserved for SSDs," Choi explained. He further elaborated that the technology enhances the CPU's ability to move large volumes of data beyond what main memory alone holds, complementing existing DRAM functionality.
One key feature of Samsung's CXL 2.0 DRAM, first developed in May 2023, is memory pooling. This technology creates a shared pool of memory by combining multiple CXL memory devices on a server platform; hosts can then access and use memory from the pool as needed, optimizing resource allocation. The result is more efficient use of the total CXL memory capacity with no stranded regions, fewer data-transfer bottlenecks, and improved overall system performance and flexibility.
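To illustrate how such CXL-attached memory can appear to software: on Linux, a CXL memory expander is typically exposed as a CPU-less NUMA node, so an application can explicitly place a buffer on it with standard NUMA APIs. The following is a minimal, hypothetical C sketch using libnuma; the node number (1) and the 1 GiB size are assumptions that depend on the actual platform topology, and this is not Samsung's own software stack.

/* Minimal sketch: allocating from a CXL-backed NUMA node via libnuma.
 * Assumes the CXL memory expander is exposed as NUMA node 1 (platform-specific).
 * Build with: gcc cxl_alloc.c -lnuma
 */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    const int cxl_node = 1;            /* assumed node ID for the CXL expander */
    const size_t size = 1UL << 30;     /* 1 GiB */

    /* Place this allocation on the CXL-backed node instead of local DRAM. */
    void *buf = numa_alloc_onnode(size, cxl_node);
    if (buf == NULL) {
        perror("numa_alloc_onnode");
        return 1;
    }

    memset(buf, 0, size);              /* touch the pages so they are actually placed */
    printf("Allocated and touched 1 GiB on NUMA node %d\n", cxl_node);

    numa_free(buf, size);
    return 0;
}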
Sources:
KED, ZDNet Korea
3 Comments on Samsung Planning for CXL 2.0 DRAM Mass Production Later This Year
Let's say a computer system has 32 GB of RAM and boots from an SSD. If a virtual-memory backing file of, say, 64 GB is created, the total amount of available memory becomes 96 GB when a memory-mapped allocation API is used. However, that does not mean the CRT function 'malloc' will be able to allocate 96 GB; it could fail once more than 32 GB needs to be allocated.
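For reference, here is a rough sketch of the pattern this comment describes, assuming a POSIX system (the comment mentions the Windows CRT, but the idea is the same): a large file on the SSD is mapped into the address space with mmap, giving the process a working region larger than physical RAM, with the kernel paging data between RAM and the SSD on demand. The file path and sizes are purely illustrative.

/* Sketch: backing a large working buffer with a memory-mapped file on an SSD.
 * Assumes POSIX (mmap); the path and sizes are illustrative only.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t size = 64UL << 30;              /* 64 GiB, larger than 32 GiB of RAM */
    const char *path = "/mnt/ssd/bigbuffer.bin"; /* hypothetical file on the SSD */

    int fd = open(path, O_RDWR | O_CREAT, 0600);
    if (fd < 0) { perror("open"); return 1; }

    /* Grow the file to the desired size; backing pages are allocated lazily. */
    if (ftruncate(fd, (off_t)size) != 0) { perror("ftruncate"); return 1; }

    /* Map the file: the kernel pages data between RAM and the SSD as needed,
     * so the usable region can exceed physical memory. */
    char *buf = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    buf[0] = 1;                                   /* touch the start and end of the mapping */
    buf[size - 1] = 1;

    munmap(buf, size);
    close(fd);
    return 0;
}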
Overall, everything you've just described can already be done with a memory-mapped allocation API backed by an SSD.