
Samsung Develops Industry's First CXL DRAM Supporting CXL 2.0

GFreeman

News Editor
Staff member
Joined
Mar 6, 2023
Messages
1,584 (2.40/day)
Samsung Electronics, a world leader in advanced semiconductor technology, today announced its development of the industry's first 128-gigabyte (GB) DRAM to support Compute Express Link (CXL) 2.0. Samsung worked closely with Intel on this landmark advancement, which was developed on an Intel Xeon platform.

Building on its development of the industry's first CXL 1.1-based CXL DRAM in May 2022, Samsung's introduction of the 128 GB CXL DRAM based on CXL 2.0 is expected to accelerate commercialization of next-generation memory solutions. The new CXL DRAM supports the PCIe 5.0 interface (x8 lanes) and provides bandwidth of up to 35 GB per second.
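For rough context, generic PCIe 5.0 link arithmetic (not a Samsung figure) puts the raw x8 rate at roughly 31.5 GB/s per direction; the sketch below simply works through that math.

[code]
#include <stdio.h>

/* Generic PCIe 5.0 arithmetic, not a Samsung spec: 32 GT/s per lane,
 * 128b/130b line encoding, eight lanes. */
int main(void)
{
    const double gtps     = 32.0;           /* PCIe 5.0 transfer rate per lane (GT/s) */
    const double encoding = 128.0 / 130.0;  /* 128b/130b line-coding efficiency */
    const int    lanes    = 8;              /* x8 link, as in the press release */

    double gbit_per_lane = gtps * encoding;              /* Gbit/s of payload per lane */
    double gbyte_total   = gbit_per_lane * lanes / 8.0;  /* GB/s per direction for x8  */

    printf("Raw PCIe 5.0 x8 link rate: %.1f GB/s per direction\n", gbyte_total);
    return 0;
}
[/code]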



"As a member of the CXL Consortium Board of Directors, Samsung Electronics remains at the forefront of CXL technology," said Jangseok Choi, vice president of New Business Planning Team at Samsung Electronics. "This breakthrough development underlines our commitment to expanding the CXL ecosystem even further through partnerships with data center, server and chipset companies across the industry."

"Intel is delighted to work with Samsung on their investment towards a vibrant CXL ecosystem, said Jim Pappas, director of Technology Initiatives at Intel Corporation. Intel will continue to work with Samsung to foster the growth and adoption of innovative CXL products throughout the industry."

"Montage is excited to mass produce the first controllers to support CXL 2.0," said Stephen Tai, president of Montage Technology. "We look forward to continuing our partnership with Samsung to advance CXL technology and expand its ecosystem."

For the first time ever, CXL 2.0 supports memory pooling - a memory management technique that binds multiple CXL memory blocks on a server platform into a pool and lets hosts dynamically allocate memory from that pool as needed. The new technology allows customers to maximize efficiency while lowering operating costs, freeing up resources that can be reinvested in server memory.
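On current Linux kernels, a CXL Type 3 memory expander is typically surfaced as a CPU-less NUMA node, so software can consume pooled capacity with ordinary NUMA-aware allocation. Below is a minimal sketch using libnuma; the node number and allocation size are hypothetical, not from the announcement.

[code]
#include <stdio.h>
#include <string.h>
#include <numa.h>      /* libnuma; build with -lnuma */

#define CXL_NODE 1              /* hypothetical NUMA node backed by the CXL expander */
#define BUF_SIZE (1UL << 30)    /* 1 GiB example allocation */

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "libnuma: NUMA not available on this system\n");
        return 1;
    }

    /* Ask the kernel to place this allocation on the CXL-backed node. */
    void *buf = numa_alloc_onnode(BUF_SIZE, CXL_NODE);
    if (buf == NULL) {
        fprintf(stderr, "numa_alloc_onnode failed\n");
        return 1;
    }

    memset(buf, 0, BUF_SIZE);   /* fault the pages in on the target node */
    printf("Placed %lu bytes on NUMA node %d\n", BUF_SIZE, CXL_NODE);

    numa_free(buf, BUF_SIZE);
    return 0;
}
[/code]

An unmodified application can be steered to the same node with numactl --membind instead.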

Samsung plans to start mass producing CXL 2.0 DRAM later this year and is poised to deliver additional offerings in various capacities to address demand for future computing applications.

CXL is a next-generation interface that adds efficiency to accelerators, DRAM and storage devices used with CPUs in high-performance server systems. Given that its bandwidth and capacity can be expanded when used with the main DRAM, the technology's advancement is expected to make waves across the next-generation computing market, where key technologies such as artificial intelligence (AI) and machine learning (ML) have led to a rapid rise in demand for high-speed data processing.

View at TechPowerUp Main Site | Source
 
Joined
Jul 8, 2022
Messages
261 (0.29/day)
Location
USA
Processor i9-11900K
Motherboard Asus ROG Maximus XIII Hero
Cooling Arctic Liquid Freezer II 360
Memory 4x8GB DDR4
Video Card(s) Alienware RTX 3090 OEM
Storage OEM Kioxia 2TB NVMe (OS), 4TB WD Blue HDD (games)
Display(s) LG 27GN950-B
Case Lian Li Lancool II Mesh Performance (black)
Audio Device(s) Logitech Pro X Wireless
Power Supply Corsair RM1000x
Keyboard HyperX Alloy Elite 2
I suppose it makes sense, with the high bandwidth interface and all that… but how’s the latency? I’d imagine there’s some latency added by the fact that data needs to go through a switch, in addition to the traces likely being longer than for actual DIMM slots. Then again, I don’t know squat about the technical needs of server applications, and maybe the latency doesn’t matter a whole lot.
 
Joined
Jun 1, 2016
Messages
48 (0.02/day)
I suppose it makes sense, with the high bandwidth interface and all that… but how’s the latency? I’d imagine there’s some latency added by the fact that data needs to go through a switch, in addition to the traces likely being longer than for actual DIMM slots. Then again, I don’t know squat about the technical needs of server applications, and maybe the latency doesn’t matter a whole lot.
Read this and your question will be answered: Just How Bad Is CXL Memory Latency?
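If you want to measure it yourself, a rough pointer-chase sketch like the one below serializes dependent loads so each access pays the full memory latency; run it once bound to local DRAM and once bound to the CXL node (e.g. via numactl --membind) and compare. The buffer size and hop count are arbitrary.

[code]
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ENTRIES (32UL * 1024 * 1024)   /* 256 MiB of 8-byte indices, well past the LLC */
#define HOPS    (10UL * 1000 * 1000)

int main(void)
{
    size_t *perm = malloc(ENTRIES * sizeof(*perm));
    size_t *next = malloc(ENTRIES * sizeof(*next));
    if (!perm || !next) { perror("malloc"); return 1; }

    /* Fisher-Yates shuffle, then link the permutation into one big cycle
     * so the prefetcher cannot guess the next address. */
    for (size_t i = 0; i < ENTRIES; i++)
        perm[i] = i;
    for (size_t i = ENTRIES - 1; i > 0; i--) {
        size_t j = (size_t)rand() % (i + 1);
        size_t t = perm[i]; perm[i] = perm[j]; perm[j] = t;
    }
    for (size_t i = 0; i < ENTRIES; i++)
        next[perm[i]] = perm[(i + 1) % ENTRIES];
    free(perm);

    struct timespec t0, t1;
    size_t pos = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < HOPS; i++)
        pos = next[pos];            /* every load depends on the previous one */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("avg load-to-use latency: %.1f ns (final pos %zu)\n", ns / HOPS, pos);

    free(next);
    return 0;
}
[/code]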
 
Joined
Apr 8, 2008
Messages
342 (0.06/day)
System Name Xajel Main
Processor AMD Ryzen 7 5800X
Motherboard ASRock X570M Steel Legend
Cooling Corsair H100i PRO
Memory G.Skill DDR4 3600 32GB (2x16GB)
Video Card(s) ZOTAC GAMING GeForce RTX 3080 Ti AMP Holo
Storage (OS) Gigabyte AORUS NVMe Gen4 1TB + (Personal) WD Black SN850X 2TB + (Store) WD 8TB HDD
Display(s) LG 38WN95C Ultrawide 3840x1600 144Hz
Case Cooler Master CM690 III
Audio Device(s) Built-in Audio + Yamaha SR-C20 Soundbar
Power Supply Thermaltake 750W
Mouse Logitech MK710 Combo
Keyboard Logitech MK710 Combo (M705)
Software Windows 11 Pro
AMD has said to expect CXL in consumer products as well, which is interesting, especially for APUs and laptops.

Currently, to get the most out of AMD APUs, they need to go the LPDDR5 route, which is good and bad: good because the memory is fast yet power efficient, bad because of the lack of upgradability. CXL can change that by using extra PCIe lanes for added RAM. Of course, CPUs will need extra PCIe lanes (8 PCIe 5.0 lanes at least), which will require more power and pins, but not as many pins as a regular RAM slot (262 pins compared to 98 IIRC for PCIe x8). And because this will be a tiered memory hierarchy, this "extra" RAM can be completely turned on and off depending on need and power states!
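As a rough illustration of the on/off idea: Linux already exposes hot-pluggable memory blocks through sysfs, so a tiering or power policy could offline CXL-backed blocks when they sit idle. The block number below is just a placeholder, and the exact paths depend on the kernel configuration.

[code]
#include <stdio.h>

/* Ask the kernel to offline or online one hot-pluggable memory block.
 * "memory42" is a placeholder; real block numbers live under
 * /sys/devices/system/memory/ and are system-specific. Needs root. */
static int set_memory_block_state(const char *block, const char *state)
{
    char path[256];
    snprintf(path, sizeof(path), "/sys/devices/system/memory/%s/state", block);

    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return -1; }

    int ok = fputs(state, f) >= 0;
    fclose(f);
    return ok ? 0 : -1;
}

int main(void)
{
    /* Example policy step: park a (hypothetical) CXL-backed block when idle,
     * bring it back when capacity is needed again. */
    if (set_memory_block_state("memory42", "offline") == 0)
        puts("memory42 offlined");
    if (set_memory_block_state("memory42", "online") == 0)
        puts("memory42 onlined");
    return 0;
}
[/code]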

They can either have a standard SO-DIMM on the motherboard (where the CXL controller sits as well), or they can have a memory standard optimized for mobile: thin and compact. IDK if Dell's proposed new standard can support CXL, but I think a version optimized for CXL should be much smaller, even using x16 PCIe lanes.
 