News Posts matching #CXL


Intel Dives Deep into Lunar Lake, Xeon 6, and Gaudi 3 at Hot Chips 2024

Demonstrating the depth and breadth of its technologies at Hot Chips 2024, Intel showcased advancements across AI use cases - from the data center, cloud and network to the edge and PC - including the industry's first fully integrated optical compute interconnect (OCI) chiplet for high-speed AI data processing. The company also unveiled new details about the Intel Xeon 6 SoC (code-named Granite Rapids-D), scheduled to launch during the first half of 2025.

"Across consumer and enterprise AI usages, Intel continuously delivers the platforms, systems and technologies necessary to redefine what's possible. As AI workloads intensify, Intel's broad industry experience enables us to understand what our customers need to drive innovation, creativity and ideal business outcomes. While more performant silicon and increased platform bandwidth are essential, Intel also knows that every workload has unique challenges: A system designed for the data center can no longer simply be repurposed for the edge. With proven expertise in systems architecture across the compute continuum, Intel is well-positioned to power the next generation of AI innovation." -Pere Monclus, chief technology officer, Network and Edge Group at Intel.

SiFive Announces Performance P870-D RISC-V Datacenter Processor

Today SiFive, Inc., the gold standard for RISC-V computing, announced its new SiFive Performance P870-D datacenter processor to meet customer requirements for highly parallelizable infrastructure workloads including video streaming, storage, and web appliances. When used in combination with products from the SiFive Intelligence product family, datacenter architects can also build an extremely high-performance, energy-efficient compute subsystem for AI-powered applications.

Building on the success of the P870, the P870-D supports the open AMBA CHI protocol, giving customers more flexibility to scale the number of clusters. This scalability allows customers to boost performance while minimizing power consumption. By using a standard CHI bus, the P870-D lets SiFive's customers scale up to 256 cores while relying on industry-standard protocols, including Compute Express Link (CXL) and CHI chip-to-chip (C2C), to enable coherent, high-core-count heterogeneous SoCs and chiplet configurations.

SK hynix Presents Extensive AI Memory Lineup at Expanded FMS 2024

SK hynix has returned to Santa Clara, California to present its full array of groundbreaking AI memory technologies at FMS: the Future of Memory and Storage 2024 from August 6-8. Previously known as Flash Memory Summit, the conference changed its name to reflect its broader focus on all types of memory and storage products amid growing interest in AI. Bringing together industry leaders, customers, and IT professionals, FMS 2024 covers the latest trends and innovations shaping the memory industry.

Participating in the event under the slogan "Memory, The Power of AI," SK hynix is showcasing its outstanding memory capabilities through a keynote presentation, multiple technology sessions, and product exhibits.

MSI Showcases CXL Memory Expansion Server at FMS 2024 Event

MSI, a leading global server provider, is showcasing its new CXL (Compute Express Link)-based server platform powered by 4th Gen AMD EPYC processors at The Future of Memory and Storage 2024, at the Samsung booth (#407) and MemVerge booth (#1251) in the Santa Clara Convention Center from August 6-8. The CXL memory expansion server is designed to enhance In-Memory Database, Electronic Design Automation (EDA), and High Performance Computing (HPC) application performance.

"By adopting innovative CXL technology to expand memory capacity and bandwidth, MSI's CXL memory expansion server integrates cutting-edge technology from AMD EPYC processors, CXL memory devices, and advanced management software," said Danny Hsu, General Manager of Enterprise Platform Solutions. "In collaboration with key players in the CXL ecosystem, including AMD, Samsung, and MemVerge, MSI and its partners are driving CXL technology to meet the demands of high-performance data center computing."

Marvell Introduces Breakthrough Structera CXL Product Line to Address Server Memory Bandwidth and Capacity Challenges in Cloud Data Centers

Marvell Technology, Inc., a leader in data infrastructure semiconductor solutions, today launched the Marvell Structera product line of Compute Express Link (CXL) devices that enable cloud data center operators to overcome memory performance and scaling challenges in general-purpose servers.

To address memory-intensive applications, data center operators add extra servers to get higher memory bandwidth and higher memory capacity. The compute capabilities from the added processors are typically not utilized for these applications, making the servers inefficient from cost and power perspectives. The CXL industry standard addresses this challenge by enabling new architectures that can efficiently add memory to general-purpose servers.
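As a rough sketch of why this matters economically (all numbers below are hypothetical for illustration, not Marvell's figures): once a workload's memory footprint outgrows what one server can hold, operators must buy whole servers just for their DIMM slots, stranding the extra CPUs.

```python
# Hypothetical illustration of the stranded-compute problem the CXL standard
# targets. All figures are invented for this sketch, not vendor data.

def servers_needed(required_mem_tb, mem_per_server_tb):
    """Whole servers required to reach a memory target (ceiling division)."""
    return -(-required_mem_tb // mem_per_server_tb)

required = 6      # TB of memory the workload needs
per_server = 1    # TB of DRAM each general-purpose server holds
servers = servers_needed(required, per_server)

# If one server's CPUs already satisfy the compute demand, the processors in
# the extra servers sit idle for this workload: bought and powered, unused.
stranded_cpu_fraction = (servers - 1) / servers

print(servers, stranded_cpu_fraction)
```

A CXL memory expander attached to the original server would supply the same additional capacity without the idle processors, which is the efficiency argument made above.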

Samsung Planning for CXL 2.0 DRAM Mass Production Later This Year

Samsung Electronics Co. is putting a lot of effort into securing its involvement in next-generation memory technology, CXL (Compute Express Link). In a media briefing on Thursday, Jangseok Choi, vice president of Samsung's new business planning team, announced plans to mass-produce 256 GB DRAM supporting CXL 2.0 by the end of this year. CXL technology promises to significantly enhance the efficiency of high-performance server systems by providing a unified interface for accelerators, DRAM, and storage devices used with CPUs and GPUs.

The company projects that CXL technology will increase memory capacity per server by eight to ten times, marking a significant leap in computing power. Samsung's long investment in CXL development is now in the final stages, with the company currently testing products with partners for performance verification. Samsung also recently established the industry's first CXL infrastructure certified by Red Hat. "We expect the CXL market to start blooming in the second half and explosively grow from 2028," Choi stated, highlighting the technology's potential to expand memory capacity and bandwidth far beyond current limitations.

Panmnesia Uses CXL Protocol to Expand GPU Memory with Add-in DRAM Card or Even SSD

South Korean startup Panmnesia has unveiled an interesting solution to address the memory limitations of modern GPUs. The company has developed a low-latency Compute Express Link (CXL) IP that could help expand GPU memory with an external add-in card. Current GPU-accelerated applications in AI and HPC are constrained by the fixed amount of memory built into GPUs. With data sizes growing roughly 3x yearly, GPU networks must keep getting larger simply to fit applications in local memory, where proximity benefits latency and token generation. Panmnesia's proposed approach to fix this leverages the CXL protocol to expand GPU memory capacity using PCIe-connected DRAM or even SSDs. The company has overcome significant technical hurdles, including the absence of CXL logic fabric in GPUs and the limitations of existing unified virtual memory (UVM) systems.

At the heart of Panmnesia's solution is a CXL 3.1-compliant root complex with multiple root ports and a host bridge featuring a host-managed device memory (HDM) decoder. This system effectively tricks the GPU's memory subsystem into treating PCIe-connected memory as native system memory. Extensive testing has demonstrated impressive results: Panmnesia's CXL solution, CXL-Opt, achieved two-digit-nanosecond round-trip latency, significantly outperforming both UVM and earlier CXL prototypes. In GPU kernel execution tests, CXL-Opt showed execution times up to 3.22 times faster than UVM. Older CXL memory extenders recorded around 250 nanoseconds of round-trip latency, while CXL-Opt can potentially achieve less than 80 nanoseconds. As with CXL deployments generally, the usual concern is that memory pools add latency and degrade performance, and these extenders also add to the cost model. However, Panmnesia's CXL-Opt could find a use case, and we are waiting to see if anyone adopts it in their infrastructure.
Below are some benchmarks by Panmnesia, as well as the architecture of the CXL-Opt.
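The quoted latency figures can be put side by side; a quick check of the implied round-trip improvement, using the 250 ns figure for older extenders and the 80 ns upper bound claimed for CXL-Opt:

```python
# Sanity-checking the latencies quoted above. "Two-digit nanosecond" means
# under 100 ns; 80 ns is the upper bound of the claimed CXL-Opt figure.

prior_cxl_rtt_ns = 250   # earlier CXL memory extenders, round trip
cxl_opt_rtt_ns = 80      # CXL-Opt, claimed upper bound

latency_improvement = prior_cxl_rtt_ns / cxl_opt_rtt_ns
print(f"round-trip latency improvement: ~{latency_improvement:.2f}x")
```

The roughly 3x round-trip improvement is consistent in magnitude with the up-to-3.22x kernel-execution speedup over UVM reported above.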

JEDEC Publishes Compute Express Link (CXL) Support Standards

JEDEC Solid State Technology Association, the global leader in standards development for the microelectronics industry, today announced the publication of JESD405-1B JEDEC Memory Module Label - for Compute Express Link (CXL) V1.1. JESD405-1B joins JESD317A JEDEC Memory Module Reference Base Standard - for Compute Express Link (CXL) V1.0, first introduced in March 2023, in defining the function and configuration of memory modules that support CXL specifications, as well as the standardized content for labels for these modules. JESD405-1B and JESD317A were developed in coordination with the Compute Express Link standards organization. Both standards are available for free download from the JEDEC website.

JESD317A provides detailed guidelines for CXL memory modules including mechanical, electrical, pinout, power and thermal, and environmental guidelines for emerging CXL Memory Modules (CMMs). These modules conform to SNIA (Storage Networking Industry Association) EDSFF form factors E1.S and E3.S to provide end-user friendly hot pluggable assemblies for data centers and similar server applications.

SK hynix Showcases Its New AI Memory Solutions at HPE Discover 2024

SK hynix has returned to Las Vegas to showcase its leading AI memory solutions at HPE Discover 2024, Hewlett Packard Enterprise's (HPE) annual technology conference. Held from June 17-20, HPE Discover 2024 features a packed schedule with more than 150 live demonstrations, as well as technical sessions, exhibitions, and more. This year, attendees can also benefit from three new curated programs on edge computing and networking, hybrid cloud technology, and AI. Under the slogan "Memory, The Power of AI," SK hynix is displaying its latest memory solutions at the event including those supplied to HPE. The company is also taking advantage of the numerous networking opportunities to strengthen its relationship with the host company and its other partners.

The World's Leading Memory Solutions Driving AI
SK hynix's booth at HPE Discover 2024 consists of three product sections and a demonstration zone which showcase the unprecedented capabilities of its AI memory solutions. The first section features the company's groundbreaking memory solutions for AI, including HBM solutions. In particular, the industry-leading HBM3E has emerged as a core product to meet the growing demands of AI systems due to its exceptional processing speed, capacity, and heat dissipation. A key solution from the company's CXL lineup, CXL Memory Module-DDR5 (CMM-DDR5), is also on display in this section. In the AI era where high performance and capacity are vital, CMM-DDR5 has gained attention for its ability to expand system bandwidth by up to 50% and capacity by up to 100% compared to systems only equipped with DDR5 DRAM.
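To put the quoted CMM-DDR5 gains in concrete terms, here is a back-of-the-envelope sketch. The baseline configuration (an 8-channel DDR5-4800 server with 1 TB of DRAM) is an assumption for illustration; only the up-to-50% bandwidth and up-to-100% capacity ratios come from SK hynix's claim.

```python
# Illustrative effect of the quoted CMM-DDR5 gains on a hypothetical server.
# Baseline (8-channel DDR5-4800, 1 TB) is assumed, not an SK hynix figure.

channels = 8
mts = 4800                      # DDR5-4800: million transfers/s per channel
bytes_per_transfer = 8          # 64-bit data channel
baseline_bw_gbs = channels * mts * bytes_per_transfer / 1000  # GB/s
baseline_cap_tb = 1.0

expanded_bw_gbs = baseline_bw_gbs * 1.5   # up to +50% system bandwidth
expanded_cap_tb = baseline_cap_tb * 2.0   # up to +100% capacity

print(baseline_bw_gbs, expanded_bw_gbs, expanded_cap_tb)
```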

SK hynix Showcases Its Next-Gen Solutions at Computex 2024

SK hynix presented its leading AI memory solutions at COMPUTEX Taipei 2024 from June 4-7. As one of Asia's premier IT shows, COMPUTEX Taipei 2024 welcomed around 1,500 global participants including tech companies, venture capitalists, and accelerators under the theme "Connecting AI". Making its debut at the event, SK hynix underlined its position as a first mover and leading AI memory provider through its lineup of next-generation products.

"Connecting AI" With the Industry's Finest AI Memory Solutions
Themed "Memory, The Power of AI," SK hynix's booth featured its advanced AI server solutions, groundbreaking technologies for on-device AI PCs, and outstanding consumer SSD products. HBM3E, the fifth generation of HBM, was among the AI server solutions on display. Offering industry-leading data processing speeds of 1.18 terabytes (TB) per second, vast capacity, and advanced heat dissipation capability, HBM3E is optimized to meet the requirements of AI servers and other applications. Another technology that has become crucial for AI servers is CXL, as it can increase system bandwidth and processing capacity. SK hynix highlighted the strength of its CXL portfolio by presenting its CXL Memory Module-DDR5 (CMM-DDR5), which significantly expands system bandwidth and capacity compared to systems equipped only with DDR5. Other AI server solutions on display included the server DRAM products DDR5 RDIMM and MCR DIMM. In particular, SK hynix showcased its tall 128-gigabyte (GB) MCR DIMM for the first time at an exhibition.

Next-Gen Computing: MiTAC and TYAN Launch Intel Xeon 6 Processor-Based Servers for AI, HPC, Cloud, and Enterprise Workloads at COMPUTEX 2024

MiTAC Computing Technology, a subsidiary of MiTAC Holdings Corp., and its server brand TYAN, a leading manufacturer in server platform design worldwide, unveiled their new server systems and motherboards optimized for today's AI, HPC, cloud, and enterprise workloads at COMPUTEX 2024, Booth #M1120 in Taipei, Taiwan from June 4 to June 7. Harnessing the power of the latest Intel Xeon 6 processor and 4th and 5th Gen Intel Xeon Scalable processors, these solutions deliver cutting-edge performance.

"For over a decade, MiTAC has worked with Intel at the forefront of server technology innovation, consistently delivering cutting-edge solutions tailored for AI and high-performance computing (HPC). The integration of Intel's latest Xeon 6 processors into our MiTAC and TYAN server platforms transforms computational capabilities, significantly enhancing AI performance, boosting efficiency, and scaling cloud operations. These advancements empower our customers with a competitive edge through superior performance and optimized total cost of ownership," said Rick Hwang, President of MiTAC Computing Technology Corporation.

MSI Unveils New AI and Computing Platforms with 4th Gen AMD EPYC Processors at Computex 2024

MSI, a leading global server provider, will introduce its latest server platforms based on the 4th Gen AMD EPYC processors at Computex 2024, booth #M0806 in Taipei, Taiwan, from June 4-7. These new platforms, designed for growing cloud-native environments, deliver a combination of performance and efficiency for data centers.

"Leveraging the advantages of 4th Gen AMD EPYC processors, MSI's latest server platforms feature scalability and flexibility with new adoption of CXL technology and DC-MHS architecture, helping data centers achieve the most scalable cloud applications while delivering leading performance," said Danny Hsu, General Manager of Enterprise Platform Solutions.

MSI Demonstrates Advanced Applications of AIoT Simulated Smart City with Five Exhibition Topics

MSI, a world leader in AI PC and AIoT solutions, will participate in COMPUTEX 2024 from June 4 to June 7. MSI's AIoT team has focused on product development and hardware-software integration for AI applications in recent years, achieving strong results in application development across various fields. MSI will create a dedicated Smart City exhibition area to introduce AIoT application scenarios across five topics: AI & Datacenter, Automation, Industrial Solutions, Commercial Solutions, and Automotive Solutions.

The most iconic products this year are diverse GPU platforms for the AI market and a new CXL (Compute Express Link) memory expansion server developed in cooperation with key players in the CXL technology field, including AMD, Samsung, and Micron. In addition, the latest Autonomous Mobile Robot (AMR) powered by NVIDIA Jetson AGX Orin is one of the major highlights. For new energy vehicles, MSI will disclose for the first time its complete AC/DC chargers coupled with the MSI E-Connect dashboard (EMS) and AI-powered car recognition applications, demonstrating its one-stop HW/SW integration service.

Micron First to Achieve Qualification Sample Milestone to Accelerate Ecosystem Adoption of CXL 2.0 Memory

Micron Technology, a leader in innovative data center solutions, today announced it has achieved its qualification sample milestone for the Micron CZ120 memory expansion modules using Compute Express Link (CXL). Micron is the first in the industry to achieve this milestone, which accelerates the adoption of CXL solutions within the data center to tackle the growing memory challenges stemming from existing data-intensive workloads and emerging artificial intelligence (AI) and machine learning (ML) workloads.

Using a new and emerging CXL standard, the CZ120 required substantial hardware testing for reliability, quality and performance across CPU providers and OEMs, along with comprehensive software testing for compatibility and compliance with OS and hypervisor vendors. This achievement reflects the collaboration and commitment across the data center ecosystem to validate the advantages of CXL memory. By testing the combined products for interoperability and compatibility across hardware and software, the Micron CZ120 memory expansion modules satisfy the rigorous standards for reliability, quality and performance required by customers' data centers.

SK hynix Presents CXL Memory Solutions Set to Power the AI Era at CXL DevCon 2024

SK hynix participated in the first-ever Compute Express Link Consortium Developers Conference (CXL DevCon) held in Santa Clara, California from April 30-May 1. Organized by a group of more than 240 global semiconductor companies known as the CXL Consortium, CXL DevCon 2024 welcomed a majority of the consortium's members to showcase their latest technologies and research results.

CXL is a technology that unifies the interfaces of different devices in a system such as semiconductor memory, storage, and logic chips. As it can increase system bandwidth and processing capacity, CXL is receiving attention as a key technology for the AI era in which high performance and capacity are essential. Under the slogan "Memory, The Power of AI," SK hynix showcased a range of CXL products at the conference that are set to strengthen the company's leadership in AI memory technology.

SK hynix Strengthens AI Memory Leadership & Partnership With Host at the TSMC 2024 Tech Symposium

SK hynix showcased its next-generation technologies and strengthened key partnerships at the TSMC 2024 Technology Symposium held in Santa Clara, California on April 24. At the event, the company displayed its industry-leading HBM AI memory solutions and highlighted its collaboration with TSMC involving the host's CoWoS advanced packaging technology.

TSMC, a global semiconductor foundry, invites its major partners to this annual conference in the first half of each year so they can share their new products and technologies. Attending the event under the slogan "Memory, the Power of AI," SK hynix received significant attention for presenting the industry's most powerful AI memory solution, HBM3E. The product has recently demonstrated industry-leading performance, achieving input/output (I/O) transfer speed of up to 10 gigabits per second (Gbps) in an AI system during a performance validation evaluation.

SMART Modular Technologies Introduces New Family of CXL Add-in Cards for Memory Expansion

SMART Modular Technologies, Inc. ("SMART"), a division of SGH (Nasdaq: SGH) and a global leader in memory solutions, solid-state drives, and advanced memory, announces its new family of Add-In Cards (AICs), which implement the Compute Express Link (CXL) standard and also support industry-standard DDR5 DIMMs. These are the first high-density DIMM AICs in their class to adopt the CXL protocol. The SMART 4-DIMM and 8-DIMM products enable server and data center architects to add up to 4 TB of memory in a familiar, easy-to-deploy form factor.

"The market for CXL memory components for data center applications is expected to grow rapidly. Initial production shipments are expected in late 2024 and will surpass the $2 billion mark by 2026. Ultimately, CXL attach rates in the server market will reach 30% including both expansion and pooling use cases," stated Mike Howard, vice president of DRAM and memory markets at TechInsights, an intelligence source to semiconductor innovation and related markets.

Samsung Demonstrates New CXL Capabilities and Introduces New Memory Module for Scalable, Composable Disaggregated Infrastructure

Samsung Electronics, a world leader in advanced semiconductor technology, unveiled the expansion of its Compute Express Link (CXL) memory module portfolio and showcased its latest HBM3E technology, reinforcing leadership in high-performance and high-capacity solutions for AI applications.

In a keynote address to a packed crowd at Santa Clara's Computer History Museum, Jin-Hyeok Choi, Corporate Executive Vice President, Device Solutions Research America - Memory at Samsung Electronics, along with SangJoon Hwang, Corporate Executive Vice President, Head of DRAM Product and Technology at Samsung Electronics, took the stage to introduce new memory solutions and discuss how Samsung is leading HBM and CXL innovations in the AI era. Joining Samsung on stage were Paul Turner, Vice President, Product Team, VCF Division at VMware by Broadcom, and Gunnar Hellekson, Vice President and General Manager at Red Hat, to discuss how their software solutions combined with Samsung's hardware technology are pushing the boundaries of memory innovation.

SK hynix Presents the Future of AI Memory Solutions at NVIDIA GTC 2024

SK hynix is displaying its latest AI memory technologies at NVIDIA's GPU Technology Conference (GTC) 2024 held in San Jose from March 18-21. The annual AI developer conference is proceeding as an in-person event for the first time since the start of the pandemic, welcoming industry officials, tech decision makers, and business leaders. At the event, SK hynix is showcasing new memory solutions for AI and data centers alongside its established products.

Showcasing the Industry's Highest Standard of AI Memory
The AI revolution has continued to pick up pace as AI technologies spread their reach into various industries. In response, SK hynix is developing AI memory solutions capable of handling the vast amounts of data and processing power required by AI. At GTC 2024, the company is displaying some of these products, including its 12-layer HBM3E and Compute Express Link (CXL) solutions, under the slogan "Memory, The Power of AI". HBM3E, the fifth generation of HBM, is the highest-specification DRAM for AI applications on the market. It offers the industry's highest capacity of 36 gigabytes (GB), a processing speed of 1.18 terabytes (TB) per second, and exceptional heat dissipation, making it particularly suitable for AI systems. On March 19, SK hynix announced it had become the first in the industry to mass-produce HBM3E.
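The quoted stack figures imply per-die and per-pin numbers worth noting. The 1024-bit stack interface used below is the standard HBM I/O width; it is an assumption here, not a figure stated by SK hynix.

```python
# What the quoted 12-layer, 36 GB, 1.18 TB/s HBM3E stack figures imply.
# The 1024-bit stack interface is the standard HBM I/O width (assumed).

stack_capacity_gb = 36
layers = 12
per_die_gb = stack_capacity_gb / layers          # capacity per DRAM die

stack_bw_gbs = 1180                              # 1.18 TB/s ~= 1180 GB/s
io_width_bits = 1024
per_pin_gbps = stack_bw_gbs * 8 / io_width_bits  # data rate per I/O pin

print(per_die_gb, round(per_pin_gbps, 1))
```

The roughly 9.2 Gbps per-pin figure is consistent with the up-to-10 Gbps HBM3E I/O speeds quoted elsewhere in these posts.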

SK hynix Unveils Highest-Performing SSD for AI PCs at NVIDIA GTC 2024

At GPU Technology Conference (GTC) 2024, SK hynix unveiled a new consumer product based on its latest solid-state drive (SSD), PCB01, which boasts industry-leading performance. Hosted by NVIDIA in San Jose, California from March 18-21, GTC is one of the world's leading conferences for AI developers. Applied to on-device AI PCs, PCB01 is a PCIe fifth-generation SSD which recently had its performance and reliability verified by a major global customer. After completing product development in the first half of 2024, SK hynix plans to launch two versions of PCB01 by the end of the year which target both major technology companies and general consumers.

Optimized for AI PCs, Capable of Loading LLMs Within One Second
Offering the industry's highest sequential read speed of 14 gigabytes per second (GB/s) and a sequential write speed of 12 GB/s, PCB01 doubles the speed specifications of its previous generation. This enables the loading of LLMs required for AI learning and inference in less than one second. To make on-device AI operational, PC manufacturers create a structure that stores an LLM in the PC's internal storage and quickly transfers the data to DRAM for AI tasks. In this process, the PCB01 inside the PC efficiently supports the loading of LLMs. SK hynix expects these characteristics of its latest SSD to greatly increase the speed and quality of on-device AI.
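A quick sanity check of the sub-second claim at the quoted read speed. The model footprint used here (roughly a 13B-parameter model with 8-bit weights) is our assumption for illustration, not an SK hynix figure.

```python
# Rough check of the "load an LLM in under one second" claim at the quoted
# 14 GB/s sequential read speed. Model size is a hypothetical example.

seq_read_gbs = 14.0        # PCB01 sequential read, GB/s (quoted)
model_size_gb = 13.0       # hypothetical on-device LLM footprint (assumed)

load_time_s = model_size_gb / seq_read_gbs
print(f"{load_time_s:.2f} s")
```

At that rate even a fairly large on-device model streams from SSD to DRAM in well under a second, which is what makes the claim plausible.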

MemVerge and Micron Boost NVIDIA GPU Utilization with CXL Memory

MemVerge, a leader in AI-first Big Memory Software, has joined forces with Micron to unveil a groundbreaking solution that leverages intelligent tiering of CXL memory, boosting the performance of large language models (LLMs) by offloading from GPU HBM to CXL memory. This innovative collaboration is being showcased in Micron booth #1030 at GTC, where attendees can witness firsthand the transformative impact of tiered memory on AI workloads.

Charles Fan, CEO and Co-founder of MemVerge, emphasized the critical importance of overcoming the bottleneck of HBM capacity. "Scaling LLM performance cost-effectively means keeping the GPUs fed with data," stated Fan. "Our demo at GTC demonstrates that pools of tiered memory not only drive performance higher but also maximize the utilization of precious GPU resources."

Cadence Digital and Custom/Analog Flows Certified for Latest Intel 18A Process Technology

Cadence's digital and custom/analog flows are certified on the Intel 18A process technology. Cadence design IP supports this node from Intel Foundry, and the corresponding process design kits (PDKs) are delivered to accelerate the development of a wide variety of low-power consumer, high-performance computing (HPC), AI and mobile computing designs. Customers can now begin using the production-ready Cadence design flows and design IP to achieve design goals and speed up time to market.

"Intel Foundry is very excited to expand our partnership with Cadence to enable key markets for the leading-edge Intel 18A process technology," said Rahul Goyal, Vice President and General Manager, Product and Design Ecosystem, Intel Foundry. "We will leverage Cadence's world-class portfolio of IP, AI design technologies, and advanced packaging solutions to enable high-volume, high-performance, and power-efficient SoCs in Intel Foundry's most advanced process technology. Cadence is an indispensable partner supporting our IDM2.0 strategy and the Intel Foundry ecosystem."

Samsung Electronics and Red Hat Partnership to Lead Expansion of CXL Memory Ecosystem with Key Milestone

Samsung Electronics Co., Ltd., a world leader in advanced memory technology, today announced that for the first time in the industry, it has successfully verified Compute Express Link (CXL) memory operations in a real user environment with open-source software provider Red Hat, leading the expansion of its CXL ecosystem. Due to the exponential growth of data throughput and memory requirements for emerging fields like generative AI, autonomous driving and in-memory databases (IMDBs), the demand for systems with greater memory bandwidth and capacity is also increasing. CXL is a unified interface standard that connects various processors, such as CPUs and GPUs, with memory devices over a PCIe interface, and can serve as a solution to the speed, latency and expandability limitations of existing systems.

"Samsung has been working closely with a wide range of industry partners in areas from software, data centers and servers to chipset providers, and has been at the forefront of building up the CXL memory ecosystem," said Yongcheol Bae, Executive Vice President of Memory Product Planning at Samsung Electronics. "Our CXL partnership with Red Hat is an exemplary case of collaboration between advanced software and hardware, which will enrich and accelerate the CXL ecosystem as a whole."

Alphawave Semi Partners with Keysight to Deliver a Complete PCIe 6.0 Subsystem Solution

Alphawave Semi (LSE: AWE), a global leader in high-speed connectivity for the world's technology infrastructure, today announced successful collaboration with Keysight Technologies, a market-leading design, emulation, and test solutions provider, demonstrating interoperability between Alphawave Semi's PCIe 6.0 64 GT/s Subsystem (PHY and Controller) Device and Keysight PCIe 6.0 64 GT/s Protocol Exerciser, negotiating a link to the maximum PCIe 6.0 data rate. Alphawave Semi, already on the PCI-SIG 5.0 Integrators list, is accelerating next-generation PCIe 6.0 Compliance Testing through this collaboration.

Alphawave Semi's leading-edge silicon implementation of the new PCIe 6.0 64 GT/s Flow Control Unit (FLIT)-based protocol enables higher data rates for hyperscale and data infrastructure applications. Keysight and Alphawave Semi achieved another milestone by successfully establishing a CXL 2.0 link, setting the stage for future cache coherency in the datacenter.
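For context on what 64 GT/s means in throughput terms, here is the raw per-direction bandwidth of a PCIe 6.0 link before FLIT framing and FEC overhead are subtracted; the x16 link width is an assumed example, not part of the announcement.

```python
# Raw per-direction bandwidth implied by the PCIe 6.0 data rate, before
# FLIT framing and FEC overhead. The x16 link width is an assumed example.

rate_gtps = 64          # PCIe 6.0: 64 GT/s per lane (PAM4 signaling)
lanes = 16              # a typical x16 link

raw_gbps = rate_gtps * lanes        # Gb/s per direction
raw_gBps = raw_gbps / 8             # GB/s per direction

print(raw_gBps)
```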
