News Posts matching #Compute Express Link

Primemas Announces Availability of Customer Samples of Its CXL 3.0 SoC Memory Controller

Primemas Inc., a fabless semiconductor company specializing in chiplet-based SoC solutions through its Hublet architecture, today announced the availability of customer samples of the world's first Compute Express Link (CXL) 3.0 memory controller. Primemas has been delivering engineering samples and development boards to select strategic customers and partners, who have played a key role in validating the performance and capabilities of Hublet compared to alternative CXL controllers. Building on this successful early engagement, Primemas is now pleased to announce that Hublet product samples are ready for shipment to memory vendors, customers, and ecosystem partners.

While conventional CXL memory expansion controllers are limited by fixed form factors and capped DRAM capacities, Primemas leverages cutting-edge chiplet technology to deliver unmatched scalability and modularity. At the core of this innovation is the Hublet—a versatile building block that enables a wide variety of configurations.

New Intel Xeon 6 CPUs to Maximize GPU-Accelerated AI Performance

Intel today unveiled three new additions to its Intel Xeon 6 series of central processing units (CPUs), designed specifically to manage the most advanced graphics processing unit (GPU)-powered AI systems. These new processors with Performance-cores (P-cores) include Intel's innovative Priority Core Turbo (PCT) technology and Intel Speed Select Technology - Turbo Frequency (Intel SST-TF), delivering customizable CPU core frequencies to boost GPU performance across demanding AI workloads. The new Xeon 6 processors are available today, with one of the three currently serving as the host CPU for the NVIDIA DGX B300, NVIDIA's latest generation of AI-accelerated systems. The NVIDIA DGX B300 integrates the Intel Xeon 6776P processor, which plays a vital role in managing, orchestrating and supporting the AI-accelerated system. With robust memory capacity and bandwidth, the Xeon 6776P supports the growing needs of AI models and datasets.

"These new Xeon SKUs demonstrate the unmatched performance of Intel Xeon 6, making it the ideal CPU for next-gen GPU-accelerated AI systems," said Karin Eibschitz Segal, corporate vice president and interim general manager of the Data Center Group at Intel. "We're thrilled to deepen our collaboration with NVIDIA to deliver one of the industry's highest-performing AI systems, helping accelerate AI adoption across industries."

XConn Technologies Demonstrates Dynamic Memory Allocation Using CXL Switch and AMD Technologies at CXL DevCon 2025

XConn Technologies, the innovation leader in next-generation interconnect technology for the future of high-performance computing and AI applications, today announced a groundbreaking demonstration of dynamic memory allocation using Compute Express Link (CXL) switch technology at CXL DevCon 2025, taking place April 29-30 at the Santa Clara Marriott hotel. The demonstration highlights a major advancement in memory flexibility, showcasing how CXL switching can enable seamless, on-demand memory pooling and expansion across heterogeneous systems.

The milestone, achieved in collaboration with AMD, unlocks a new level of efficiency for cloud, artificial intelligence (AI), and high-performance computing (HPC) workloads. By dynamically allocating memory via the XConn Apollo CXL switch, data centers can eliminate over-provisioning, enhance performance, and significantly reduce total cost of ownership (TCO).
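The release does not describe the host-side mechanics, but on Linux, memory that a CXL switch hot-adds to a server generally surfaces as offline memory blocks that management software can bring online on demand. The sketch below is a generic illustration of that mechanism using the standard Linux memory-hotplug sysfs interface; it assumes root privileges and is not XConn- or AMD-specific tooling.

```python
# Minimal sketch: bring hot-added (e.g., CXL switch-attached) memory online
# via the standard Linux memory-hotplug sysfs interface. Requires root.
# Illustrates the general mechanism only; not XConn Apollo-specific tooling.
import glob
import os

SYS_MEMORY = "/sys/devices/system/memory"

def block_size_bytes() -> int:
    # Size of one hot-pluggable memory block (architecture dependent, in hex).
    with open(os.path.join(SYS_MEMORY, "block_size_bytes")) as f:
        return int(f.read().strip(), 16)

def online_offline_blocks() -> int:
    """Online every memory block currently reported as offline."""
    onlined = 0
    for state_path in glob.glob(os.path.join(SYS_MEMORY, "memory*/state")):
        with open(state_path) as f:
            state = f.read().strip()
        if state == "offline":
            with open(state_path, "w") as f:
                # "online_movable" keeps the range removable, which suits
                # pooled memory that may later be handed back to the switch.
                f.write("online_movable")
            onlined += 1
    return onlined

if __name__ == "__main__":
    n = online_offline_blocks()
    print(f"onlined {n} blocks of {block_size_bytes() >> 20} MiB each")
```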

Marvell Announces Successful Interoperability of Structera CXL Portfolio with AMD EPYC CPU and 5th Gen Intel Xeon Scalable Platforms

Marvell Technology, Inc., a leader in data infrastructure semiconductor solutions, today announced the successful interoperability of the Marvell Structera portfolio of Compute Express Link (CXL) devices with AMD EPYC CPUs and 5th Gen Intel Xeon Scalable platforms. This achievement underscores the commitment of Marvell to advancing an open and interoperable CXL ecosystem, addressing the growing demands for memory bandwidth and capacity in next-generation cloud data centers.

Marvell collaborated with AMD and Intel to extensively test Structera CXL products with AMD EPYC and 5th Gen Intel Xeon Scalable platforms across various configurations, workloads, and operating conditions. The results demonstrated seamless interoperability, delivering stability, scalability, and high-performance memory expansion that cloud data center providers need for mass deployment.

Altera Starts Production Shipments of Agilex 7 FPGA M-Series

Altera Corporation, a leader in FPGA innovations, today announced production shipments of its Agilex 7 FPGA M-Series, the industry's first high-end, high-density FPGA to feature integrated high bandwidth memory and support for DDR5 and LPDDR5 memory technologies. Offering over 3.8 million logic elements, Agilex 7 FPGA M-Series is optimized for applications that demand the highest performance and highest memory bandwidth, including AI, data centers, next-generation firewalls, 5G communications infrastructure and 8K broadcast equipment.

As data traffic continues to increase exponentially due to the growth of AI, cloud computing and video streaming services, the demand for higher memory bandwidth, increased capacity, and improved power efficiency has never been greater. Agilex 7 FPGA M-Series addresses these challenges by offering users high logic densities, a high-performance fabric and a memory interface that accelerates data throughput speeds while reducing memory bottlenecks and latency.

Montage Technology Samples PCIe 6.x / CXL 3.x Retimer Chips

Montage Technology today announced the customer sampling of its PCIe 6.x/CXL 3.x Retimer, the M88RT61632, which is designed to enhance connectivity performance for demanding high-bandwidth applications such as AI and cloud computing. This milestone extends the company's PCIe product portfolio, building upon its successful PCIe 4.0 and PCIe 5.0/CXL 2.0 Retimer solutions.

The PCIe 6.x/CXL 3.x Retimer delivers excellent performance with data rates up to 64 GT/s, twice that of PCIe 5.0. Powered by Montage Technology's proprietary PAM4 SerDes IP, the chip achieves superior signal integrity with link budget up to 43 dB while maintaining low latency. Its innovative DSP architecture effectively addresses PCIe 6.x system design challenges including crosstalk and signal reflection. In addition, the chip features advanced link training and enhanced telemetry, enabling comprehensive link monitoring and fault diagnostics for high-reliability AI cluster deployments.
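As a rough check on the quoted data rate, the raw per-direction bandwidth of a full-width PCIe 6.x link follows directly from 64 GT/s per lane (usable payload throughput is somewhat lower once FLIT, FEC, and CRC overhead are taken into account):

```python
# Back-of-envelope raw bandwidth for a PCIe 6.x x16 link at 64 GT/s.
# GT/s counts bits per lane (PAM4 signals 2 bits per symbol at 32 GBaud),
# so this is the raw rate before FLIT/FEC/CRC overhead.
gt_per_lane = 64        # giga-transfers (bits) per second, per lane
lanes = 16              # full-width x16 link

raw_gbps = gt_per_lane * lanes      # 1024 Gb/s per direction
raw_gBps = raw_gbps / 8             # ~128 GB/s per direction
print(f"x{lanes} @ {gt_per_lane} GT/s ~ {raw_gBps:.0f} GB/s per direction (raw)")
```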

SMART Modular Add-In Cards Now Listed on CXL Consortium Integrators' List

SMART Modular Technologies, Inc. ("SMART"), a Penguin Solutions, Inc. brand (Nasdaq: PENG) and a global leader in integrated memory solutions, solid-state drives, and hybrid storage products, today announced that its 4-DIMM and 8-DIMM CXL (Compute Express Link) memory Add-In Cards (AICs) have successfully passed CXL 2.0 compliance testing. These products are now officially listed on the CXL Consortium's Integrators' List, marking a significant milestone in the company's commitment to delivering high-quality, interoperable memory solutions.

The inclusion of SMART Modular Technologies' products on the CXL Integrators' List underscores the company's dedication to adhering to industry standards and ensuring compatibility across a wide range of computing environments. The CXL Compliance Program, developed by the CXL Consortium, provides members with opportunities to test the functionality and interoperability of their products as defined in the CXL specification. This achievement not only highlights SMART Modular Technologies' technical expertise, but also reinforces its role as a leader in advancing integrated memory technology.

CXL Consortium Announces Compute Express Link 3.2 Specification Release

The CXL Consortium, an industry standard body advancing coherent connectivity, announces the release of its Compute Express Link (CXL) 3.2 Specification. The 3.2 Specification optimizes CXL Memory Device monitoring and management, enhances functionality of CXL Memory Devices for OS and Applications, and extends security with the Trusted Security Protocol (TSP).

"We are excited to announce the release of the CXL 3.2 Specification to advance the CXL ecosystem by providing enhancements to security, compliance, and functionality of CXL Memory Devices," said Larrie Carr, CXL Consortium President. "The Consortium continues to develop an open, coherent interconnect and enable an interoperable ecosystem for heterogeneous memory and computing solutions."

Kioxia Adopted for NEDO Project to Develop Manufacturing Technology for Innovative Memory Under Post-5G System Infrastructure Project

Kioxia Corporation, a world leader in memory solutions, today announced that its proposal on the Development of Manufacturing Technology for Innovative Memory has been adopted by Japan's national research and development agency, the New Energy and Industrial Technology Development Organization (NEDO), under a project to enhance post-5G information and communication system infrastructure.

In the post-5G information and communication era, AI is estimated to generate an unprecedented volume of data. This surge will likely escalate the data processing demands of data centers and increase power consumption. To address this, it is crucial that the next-generation memories facilitate rapid data transfer with high-performance processors while increasing capacity and reducing power consumption.

Credo Announces PCI Express 6/7, Compute Express Link CXL 3.x Retimers, and AEC PCI Express Product Line at OCP Summit 2024

Credo Technology Group Holding Ltd (Credo), an innovator in secure, high-speed connectivity solutions that deliver improved energy efficiency as data rates and bandwidth requirements increase throughout the data infrastructure market, is excited to announce the company's first Toucan PCI Express (PCIe) 6 / Compute Express Link (CXL) 3.x and Magpie PCIe 7 / CXL 4.x retimers, along with OSFP-XD 16x 64 GT/s (1 Tb) PCIe 6/CXL HiWire AECs. Credo will demonstrate the Toucan PCIe 6 retimers and HiWire AECs at the upcoming Open Compute Project (OCP) Summit, October 15-17, in Booth 31 and the OCP Innovation Center.

Building on Credo's renowned Serializer/Deserializer (SerDes) technology, the new PCIe 6 and PCIe 7 retimers deliver industry-leading performance and power efficiency while being built on lower-cost, more mature process nodes than competing devices. The retimers also include enhanced diagnostic tools, including an embedded logic analyzer and advanced SerDes tools driven by a new GUI, designed to enable rapid bring-up and debug of customer systems.

Advantech Announces CXL 2.0 Memory to Boost Data Center Efficiency

Advantech, a global leader in embedded computing, is excited to announce the release of the SQRAM CXL 2.0 Type 3 Memory Module. Compute Express Link (CXL) 2.0 is the next evolution in memory technology, providing memory expansion with a high-speed, low-latency interconnect designed to meet the demands of large AI training and HPC clusters. CXL 2.0 builds on the foundation of the original CXL specification, introducing advanced features such as memory sharing and expansion, enabling more efficient utilization of resources across heterogeneous computing environments.

Memory Expansion via E3.S 2T Form Factor
Traditional memory architectures are often limited by fixed allocations, which can result in underutilized resources and bottlenecks in data-intensive workloads. With the E3.S form factor, based on the EDSFF standard, the CXL 2.0 Memory Module overcomes these limitations, allowing for dynamic resource management. This not only improves performance but also reduces costs by maximizing the use of existing resources.
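To a Linux host, an expansion module like this typically appears as an additional, CPU-less NUMA node next to the directly attached DRAM. The short sketch below, which is generic rather than Advantech-specific, shows one way to spot such a node from sysfs:

```python
# Generic sketch: list NUMA nodes and flag CPU-less ones, which is how
# CXL memory expanders commonly appear to Linux. Not vendor-specific.
import glob
import os

for node_dir in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    node = os.path.basename(node_dir)
    with open(os.path.join(node_dir, "cpulist")) as f:
        cpulist = f.read().strip()
    mem_kib = 0
    with open(os.path.join(node_dir, "meminfo")) as f:
        for line in f:
            if "MemTotal:" in line:
                mem_kib = int(line.split()[3])  # "Node N MemTotal: X kB"
                break
    kind = "CPU-less (possible CXL expander)" if not cpulist else f"CPUs {cpulist}"
    print(f"{node}: {mem_kib // 1024} MiB, {kind}")
```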

ScaleFlux Announces Two New SSD Controllers and One CXL Controller

In the past 13 years, global data production has surged, increasing an estimated 74 times. (1) Looking forward, McKinsey projects AI to spur 35% annual growth in enterprise SSD capacity demand, from 181 Exabytes (EB) in 2024 to 1,078 EB in 2030. (2) To address this growing demand, ScaleFlux, a leader in data storage and memory technology, is announcing a significant expansion of its product portfolio. The company is introducing cutting-edge controllers for both NVMe SSDs and Compute Express Link (CXL) modules, reinforcing its leadership in innovative technology for the data pipeline. "With the release of three new ASIC controllers and key updates to its existing lineup, ScaleFlux continues to push the boundaries of SSD and memory performance, power efficiency, and data integrity," points out Hao Zhong, CEO and Co-Founder of the company.
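The cited projection is internally consistent: growing from 181 EB in 2024 to 1,078 EB in 2030 implies a compound annual growth rate of roughly 35%, as a quick calculation shows.

```python
# Sanity check of the cited projection: 181 EB (2024) -> 1,078 EB (2030).
start_eb, end_eb, years = 181, 1078, 2030 - 2024
cagr = (end_eb / start_eb) ** (1 / years) - 1
print(f"implied CAGR ~ {cagr:.1%}")   # ~34.6%, i.e. roughly 35% per year
```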

Three New SoC Controllers to Transform Data Center Storage
ScaleFlux is proud to unveil three new SoC controllers designed to enhance data center, AI and enterprise infrastructure:

JEDEC Adds Two New Standards Supporting Compute Express Link (CXL) Technology

JEDEC Solid State Technology Association, the global leader in standards development for the microelectronics industry, today announced the publication of two new standards supporting Compute Express Link (CXL) technology. These additions complete a comprehensive family of four standards that provide the industry with unparalleled flexibility to develop a wide range of CXL memory products. All four standards are available for free download from the JEDEC website.

JESD319: JEDEC Memory Controller Standard - for Compute Express Link (CXL) defines the overall specifications, interface parameters, signaling protocols, and features for a CXL Memory Controller ASIC. Key aspects include pinout reference information and a functional description that covers CXL interface, memory controller, memory RAS, metadata, clocking, reset, performance, and controller configuration requirements. JESD319 focuses on the CXL 3.1-based direct-attached memory expansion application, providing a baseline of standardized functionality while allowing for additional innovations and customizations.

SK hynix Applies CXL Optimization Solution to Linux

SK hynix Inc. announced today that the key features of its Heterogeneous Memory Software Development Kit (HMSDK) are now available on Linux, the world's largest open-source operating system. HMSDK is SK hynix's proprietary software for optimizing the operation of Compute Express Link (CXL), which is gaining attention as a next-generation AI memory technology along with High Bandwidth Memory (HBM). Having received global recognition for HMSDK's performance, SK hynix is now integrating it with Linux. This accomplishment marks a significant milestone, highlighting the company's competitiveness in software alongside the recognition it has earned for high-performance memory hardware such as HBM.

In the future, developers around the world working on Linux will be able to use SK hynix's technology as the industry standard for CXL memory, putting the company in an advantageous position for global collaboration on next-generation memory. SK hynix's HMSDK enhances memory bandwidth by over 30% without modifying existing applications. It achieves this by distributing allocations between existing memory and expanded CXL memory in proportion to their bandwidth. Additionally, the software improves performance by more than 12% over conventional systems through optimization based on access frequency, a feature that relocates frequently accessed data to faster memory.
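HMSDK itself and the exact form of the upstreamed Linux features are not detailed in the announcement. The basic operation such policies automate, placing an allocation on a chosen (for example, CXL-attached) NUMA node, can however be illustrated with libnuma; the sketch below assumes the CXL expander is exposed as NUMA node 1 and that libnuma is installed.

```python
# Minimal sketch of node-targeted allocation via libnuma (ctypes binding).
# HMSDK-style policies decide *which* node based on measured bandwidth and
# access frequency; here the CXL node id (1) is simply assumed.
import ctypes

libnuma = ctypes.CDLL("libnuma.so.1")
libnuma.numa_available.restype = ctypes.c_int
libnuma.numa_alloc_onnode.restype = ctypes.c_void_p
libnuma.numa_alloc_onnode.argtypes = [ctypes.c_size_t, ctypes.c_int]
libnuma.numa_free.argtypes = [ctypes.c_void_p, ctypes.c_size_t]

CXL_NODE = 1                   # assumption: CXL expander exposed as node 1
SIZE = 256 * 1024 * 1024       # 256 MiB

if libnuma.numa_available() < 0:
    raise SystemExit("NUMA not available on this system")

buf = libnuma.numa_alloc_onnode(SIZE, CXL_NODE)
if not buf:
    raise SystemExit("allocation failed")
ctypes.memset(buf, 0, SIZE)    # touch pages so they are actually placed
print(f"placed {SIZE >> 20} MiB on NUMA node {CXL_NODE}")
libnuma.numa_free(buf, SIZE)
```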

Innodisk Unveils Advanced CXL Memory Module to Power AI Servers

Innodisk, a leading global AI solution provider, continues to push the boundaries of innovation with the launch of its cutting-edge Compute Express Link (CXL) Memory Module, which is designed to meet the rapid growth demands of AI servers and cloud data centers. As one of the few module manufacturers offering this technology, Innodisk is at the forefront of AI and high-performance computing.

The demand for AI servers is rising quickly, with these systems expected to account for approximately 65% of the server market by 2024, according to Trendforce (2024). This growth has created an urgent need for greater memory bandwidth and capacity, as AI servers now require at least 1.2 TB of memory to operate effectively. Traditional DDR memory solutions are increasingly struggling to meet these demands, especially as the number of CPU cores continues to multiply, leading to challenges such as underutilized CPU resources and increasing latency between different protocols.

Intel Dives Deep into Lunar Lake, Xeon 6, and Gaudi 3 at Hot Chips 2024

Demonstrating the depth and breadth of its technologies at Hot Chips 2024, Intel showcased advancements across AI use cases - from the data center, cloud and network to the edge and PC - including the industry's most advanced and first-ever fully integrated optical compute interconnect (OCI) chiplet for high-speed AI data processing. The company also unveiled new details about the Intel Xeon 6 SoC (code-named Granite Rapids-D), scheduled to launch during the first half of 2025.

"Across consumer and enterprise AI usages, Intel continuously delivers the platforms, systems and technologies necessary to redefine what's possible. As AI workloads intensify, Intel's broad industry experience enables us to understand what our customers need to drive innovation, creativity and ideal business outcomes. While more performant silicon and increased platform bandwidth are essential, Intel also knows that every workload has unique challenges: A system designed for the data center can no longer simply be repurposed for the edge. With proven expertise in systems architecture across the compute continuum, Intel is well-positioned to power the next generation of AI innovation." -Pere Monclus, chief technology officer, Network and Edge Group at Intel.

SiFive Announces Performance P870-D RISC-V Datacenter Processor

Today SiFive, Inc., the gold standard for RISC-V computing, announced its new SiFive Performance P870-D datacenter processor to meet customer requirements for highly parallelizable infrastructure workloads including video streaming, storage, and web appliances. When used in combination with products from the SiFive Intelligence product family, datacenter architects can also build an extremely high-performance, energy efficient compute subsystem for AI-powered applications.

Building on the success of the P870, the P870-D supports the open AMBA CHI protocol so customers have more flexibility to scale the number of clusters. This scalability allows customers to boost performance while minimizing power consumption. By harnessing a standard CHI bus, the P870-D enables SiFive's customers to scale up to 256 cores using industry-standard protocols, including Compute Express Link (CXL) and CHI chip-to-chip (C2C), to enable coherent, high-core-count, heterogeneous SoCs and chiplet configurations.

MSI Showcases CXL Memory Expansion Server at FMS 2024 Event

MSI, a leading global server provider, is showcasing its new CXL (Compute Express Link)-based server platform powered by 4th Gen AMD EPYC processors at The Future of Memory and Storage 2024, at the Samsung booth (#407) and MemVerge booth (#1251) in the Santa Clara Convention Center from August 6-8. The CXL memory expansion server is designed to enhance In-Memory Database, Electronic Design Automation (EDA), and High Performance Computing (HPC) application performance.

"By adopting innovative CXL technology to expand memory capacity and bandwidth, MSI's CXL memory expansion server integrates cutting-edge technology from AMD EPYC processors, CXL memory devices, and advanced management software," said Danny Hsu, General Manager of Enterprise Platform Solutions. "In collaboration with key players in the CXL ecosystem, including AMD, Samsung, and MemVerge, MSI and its partners are driving CXL technology to meet the demands of high-performance data center computing."

Marvell Introduces Breakthrough Structera CXL Product Line to Address Server Memory Bandwidth and Capacity Challenges in Cloud Data Centers

Marvell Technology, Inc., a leader in data infrastructure semiconductor solutions, today launched the Marvell Structera product line of Compute Express Link (CXL) devices that enable cloud data center operators to overcome memory performance and scaling challenges in general-purpose servers.

To address memory-intensive applications, data center operators add extra servers to get higher memory bandwidth and higher memory capacity. The compute capabilities from the added processors are typically not utilized for these applications, making the servers inefficient from cost and power perspectives. The CXL industry standard addresses this challenge by enabling new architectures that can efficiently add memory to general-purpose servers.

Samsung Planning for CXL 2.0 DRAM Mass Production Later This Year

Samsung Electronics Co. is investing heavily in next-generation CXL (Compute Express Link) memory technology. In a media briefing on Thursday, Jangseok Choi, vice president of Samsung's new business planning team, announced plans to mass-produce 256 GB DRAM supporting CXL 2.0 by the end of this year. CXL technology promises to significantly enhance the efficiency of high-performance server systems by providing a unified interface for accelerators, DRAM, and storage devices used with CPUs and GPUs.

The company projects that CXL technology will increase memory capacity per server by eight to ten times, marking a significant leap in computing power. Samsung's long investment in CXL development is now in its final stages, with the company currently testing products with partners for performance verification. Samsung also recently established the industry's first CXL infrastructure certified by Red Hat. "We expect the CXL market to start blooming in the second half and explosively grow from 2028," Choi stated, highlighting the technology's potential to expand memory capacity and bandwidth far beyond current limitations.

Panmnesia Uses CXL Protocol to Expand GPU Memory with Add-in DRAM Card or Even SSD

South Korean startup Panmnesia has unveiled an interesting solution to address the memory limitations of modern GPUs. The company has developed a low-latency Compute Express Link (CXL) IP that could help expand GPU memory with an external add-in card. Current GPU-accelerated applications in AI and HPC are constrained by the fixed amount of memory built into GPUs. With data sizes growing by 3x yearly, GPU networks must keep getting larger just to keep applications in local memory, which is what keeps latency and token-generation rates acceptable. Panmnesia's proposed approach leverages the CXL protocol to expand GPU memory capacity using PCIe-connected DRAM or even SSDs. The company has overcome significant technical hurdles, including the absence of CXL logic fabric in GPUs and the limitations of existing unified virtual memory (UVM) systems.

At the heart of Panmnesia's solution is a CXL 3.1-compliant root complex with multiple root ports and a host bridge featuring a host-managed device memory (HDM) decoder. This sophisticated system effectively tricks the GPU's memory subsystem into treating PCIe-connected memory as native system memory. Extensive testing has demonstrated impressive results. Panmnesia's CXL solution, CXL-Opt, achieved double-digit nanosecond round-trip latency, significantly outperforming both UVM and earlier CXL prototypes. In GPU kernel execution tests, CXL-Opt showed execution times up to 3.22 times faster than UVM. Older CXL memory extenders recorded around 250 nanoseconds of round-trip latency, while CXL-Opt can potentially achieve less than 80 nanoseconds. As is usual with CXL, the concern is that memory pools add latency and degrade performance, and these CXL extenders tend to add cost as well. However, the Panmnesia CXL-Opt could find a use case, and we are waiting to see if anyone adopts this in their infrastructure.
Below are some benchmarks by Panmnesia, as well as the architecture of the CXL-Opt.
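As an illustrative ratio using the figures quoted above (not one of Panmnesia's own benchmarks): dropping the round-trip latency from roughly 250 ns to under 80 ns is about a threefold reduction in access time for the extender path.

```python
# Quick ratio of the round-trip latencies quoted above (illustrative only).
legacy_extender_ns = 250   # typical older CXL memory extender
cxl_opt_ns = 80            # upper bound reported for CXL-Opt
print(f"~{legacy_extender_ns / cxl_opt_ns:.1f}x lower round-trip latency")  # ~3.1x
```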

JEDEC Publishes Compute Express Link (CXL) Support Standards

JEDEC Solid State Technology Association, the global leader in standards development for the microelectronics industry, today announced the publication of JESD405-1B JEDEC Memory Module Label - for Compute Express Link (CXL) V1.1. JESD405-1B joins JESD317A JEDEC Memory Module Reference Base Standard - for Compute Express Link (CXL) V1.0, first introduced in March 2023, in defining the function and configuration of memory modules that support CXL specifications, as well as the standardized content for labels for these modules. JESD405-1B and JESD317A were developed in coordination with the Compute Express Link standards organization. Both standards are available for free download from the JEDEC website.

JESD317A provides detailed guidelines for CXL memory modules including mechanical, electrical, pinout, power and thermal, and environmental guidelines for emerging CXL Memory Modules (CMMs). These modules conform to SNIA (Storage Networking Industry Association) EDSFF form factors E1.S and E3.S to provide end-user friendly hot pluggable assemblies for data centers and similar server applications.

MSI Demonstrates Advanced Applications of AIoT Simulated Smart City with Five Exhibition Topics

MSI, a world leader in AI PC and AIoT solutions, will participate in COMPUTEX 2024 from June 4 to June 7. MSI's AIoT team has focused in recent years on product development and hardware-software integration for AI applications, achieving strong results across a range of fields. MSI will host a dedicated Smart City exhibition area introducing AIoT application scenarios across five topics: AI & Datacenter, Automation, Industrial Solutions, Commercial Solutions, and Automotive Solutions.

The most iconic products this year are diverse GPU platforms for AI markets and a new CXL (Compute Express Link) memory expansion server developed in cooperation with key players in the CXL technology field, including AMD, Samsung, and Micron. The latest Autonomous Mobile Robot (AMR), powered by NVIDIA Jetson AGX Orin, is another major highlight. For new energy vehicles, MSI will disclose for the first time its complete AC/DC chargers coupled with the MSI E-Connect dashboard (EMS) and AI-powered car recognition applications, showcasing its one-stop HW/SW integration service.

Micron First to Achieve Qualification Sample Milestone to Accelerate Ecosystem Adoption of CXL 2.0 Memory

Micron Technology, a leader in innovative data center solutions, today announced it has achieved its qualification sample milestone for the Micron CZ120 memory expansion modules using Compute Express Link (CXL). Micron is the first in the industry to achieve this milestone, which accelerates the adoption of CXL solutions within the data center to tackle the growing memory challenges stemming from existing data-intensive workloads and emerging artificial intelligence (AI) and machine learning (ML) workloads.

Using a new and emerging CXL standard, the CZ120 required substantial hardware testing for reliability, quality and performance across CPU providers and OEMs, along with comprehensive software testing for compatibility and compliance with OS and hypervisor vendors. This achievement reflects the collaboration and commitment across the data center ecosystem to validate the advantages of CXL memory. By testing the combined products for interoperability and compatibility across hardware and software, the Micron CZ120 memory expansion modules satisfy the rigorous standards for reliability, quality and performance required by customers' data centers.

SK hynix Presents CXL Memory Solutions Set to Power the AI Era at CXL DevCon 2024

SK hynix participated in the first-ever Compute Express Link Consortium Developers Conference (CXL DevCon) held in Santa Clara, California from April 30-May 1. Organized by a group of more than 240 global semiconductor companies known as the CXL Consortium, CXL DevCon 2024 welcomed a majority of the consortium's members to showcase their latest technologies and research results.

CXL is a technology that unifies the interfaces of different devices in a system such as semiconductor memory, storage, and logic chips. As it can increase system bandwidth and processing capacity, CXL is receiving attention as a key technology for the AI era in which high performance and capacity are essential. Under the slogan "Memory, The Power of AI," SK hynix showcased a range of CXL products at the conference that are set to strengthen the company's leadership in AI memory technology.