News Posts matching #CXL


AMD EPYC "Genoa" Zen 4 Product Stack Leaked

With its recent announcement of the Ryzen 7000 desktop processors, the action now shifts to the server, with AMD preparing a wide launch of its EPYC "Genoa" and "Bergamo" processors this year. Powered by the "Zen 4" microarchitecture, and contemporary I/O that includes PCI-Express Gen 5, CXL, and DDR5, these processors dial the CPU core-count per socket up to 96 in the case of "Genoa," and up to 128 in the case of "Bergamo." The EPYC "Genoa" series represents the main trunk of the company's server processor lineup, with various internal configurations targeting specific use-cases.

The 96 cores are spread across twelve 5 nm 8-core CCDs, each with a high-bandwidth Infinity Fabric path to the sIOD (server I/O die), which is very likely built on the 6 nm node. Lower core-count models can be built either by lowering the CCD count (keeping all eight cores per CCD enabled), or by reducing the number of cores per CCD while keeping the CCD count constant, to yield more bandwidth per core. The leaked product-stack table below shows several of these sub-classes of "Genoa" and "Bergamo," classified by use-case. The leaked slide also details the nomenclature AMD is using with its new processors. The leaked roadmap also mentions the upcoming "Genoa-X" processor for HPC and cloud-compute uses, which features 3D Vertical Cache technology.
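To make that trade-off concrete, here is a minimal, illustrative Python sketch of how the two down-binning approaches affect per-core fabric bandwidth. The per-CCD Infinity Fabric bandwidth figure is an assumed placeholder, not an AMD specification.

CCD_MAX = 12              # full "Genoa" package: twelve 8-core CCDs
CORES_PER_CCD = 8
IFOP_GBPS_PER_CCD = 36    # hypothetical per-CCD Infinity Fabric bandwidth (GB/s), illustrative only

def fewer_ccds(target_cores):
    # Strategy 1: drop whole CCDs, keep all 8 cores per remaining CCD enabled.
    ccds = -(-target_cores // CORES_PER_CCD)  # ceiling division
    return ccds, ccds * IFOP_GBPS_PER_CCD / target_cores

def fewer_cores_per_ccd(target_cores):
    # Strategy 2: keep all 12 CCDs, disable cores inside each CCD for more bandwidth per core.
    return CCD_MAX, CCD_MAX * IFOP_GBPS_PER_CCD / target_cores

for cores in (48, 32, 16):
    print(cores, "cores -> fewer CCDs:", fewer_ccds(cores),
          "| fewer cores/CCD:", fewer_cores_per_ccd(cores))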

UEFI Forum Releases the UEFI 2.10 Specification and the ACPI 6.5 Specification

The UEFI Forum today announced the release of the Unified Extensible Firmware Interface (UEFI) 2.10 specification and Advanced Configuration and Power Interface (ACPI) 6.5 specification. The new specification versions expand support for new processor types, memory interfaces and platform types, while allowing for crypto agility in post-quantum system security.

"We are excited to share the new Conformance Profiles feature, responsive to community pull for a way to make the UEFI Forum's work useful," said Mark Doran, UEFI Forum President. "The Conformance Profiles feature will expand the platform types UEFI can support to an ever wider range of platform types like IoT, embedded and automotive spaces - beyond general purpose computers."

Server Shipment Growth and Spiking Pricing Push Total 2Q22 Enterprise SSD Revenue Growth to 31% QoQ, Says TrendForce

According to TrendForce research, improved material supply and spiking demand for enterprise SSDs from North American hyperscale data center and enterprise clients in 2Q22, coupled with the Kioxia contamination incident in 1Q22, prompted customers to ramp up procurement to avoid future supply shortages. Manufacturers also gave priority to meeting the needs of server customers due to the high pricing of enterprise SSDs. In the second quarter, overall revenue of the enterprise SSD market increased by 31.3% QoQ to US$7.32 billion.

As the market leader, Samsung has grown its enterprise SSD revenue to US$3.26 billion with the recovery of enterprise SSD procurement. Especially in the second quarter, when orders for other consumer products continued to decline, enterprise SSDs became an outlet for the company's production capacity. Samsung has been continuously investing in the development of next-generation transmission specification products, such as the CXL 2.0 product released at the Flash Memory Summit in early August, in order to maintain its leading position in the market.

CXL Consortium and JEDEC Sign MOU Agreement to Advance DRAM and Persistent Memory Technology

JEDEC Solid State Technology Association and Compute Express Link (CXL) Consortium today announced the signing of a Memorandum of Understanding (MOU) to formalize collaboration between the two organizations. The agreement outlines the formation of a joint work group to provide a forum that facilitates communication and sharing of information, requirements, recommendations and requests with the intent that this exchange of information will help standards developed by each organization augment one another.

"The MOU between JEDEC and CXL Consortium will establish a framework for ongoing communication to align future efforts between the two organizations. The joint work group will collaborate on useful solutions for form factors, management, security, and DRAM and other memory technologies," said Siamak Tavallaei, CXL Consortium President.

Samsung Unveils Far-Reaching, Next-Generation Memory Solutions at FMS 2022

Samsung Electronics, the world leader in advanced memory technology, today unveiled an array of next-generation memory and storage technologies during Flash Memory Summit 2022, held at the Santa Clara (California) Convention Center, Aug. 2-4. In a keynote titled "Memory Innovations Navigating the Big Data Era," Samsung spotlighted four areas of technological advancement driving the big data market—data movement, data storage, data processing and data management—and revealed its leading-edge memory solutions addressing each field.

To maximize data center efficiency in an increasingly data-driven world, Samsung introduced a next-generation storage technology, "Petabyte Storage." The new solution will allow a single server unit to pack more than one petabyte of storage, enabling server manufacturers to sharply increase their storage capacity within the same floor space with a minimal number of servers. High server utilization will also help to lower power consumption.

SMART Modular Technologies Launches its First Compute Express Link Memory Module

SMART Modular Technologies, Inc. ("SMART"), a division of SGH and a global leader in memory solutions, solid-state drives, and hybrid storage, announces its new Compute Express Link (CXL) Memory Module, the XMM CXL memory module. SMART's new DDR5 XMM CXL modules help boost server and data center performance by enabling cache-coherent memory to be added behind the CXL interface, further expanding big data processing capabilities beyond the current 8-channel/12-channel limitations of most servers.

The industry adoption of composable serial-attached memory architecture enables a whole new era for the memory module industry. Serial-attached memory adds capacity and bandwidth capabilities beyond main memory DIMM modules. Servers with XMM CXL modules can be dynamically configured for different applications and workloads without being shut down. Memory can be shared across nodes to meet throughput and latency requirements.
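As a rough illustration of the capacity headroom such modules could add, here is a small sketch comparing a server's native DIMM capacity with CXL expanders attached. The channel count, DIMM size, and expander size are assumed example values, not SMART product specifications.

channels = 12            # assumed 12-channel server platform
dimms_per_channel = 1
dimm_gb = 64             # assumed capacity per DIMM

cxl_modules = 4          # hypothetical number of CXL memory expanders added
cxl_module_gb = 128      # hypothetical capacity per expander

native_gb = channels * dimms_per_channel * dimm_gb
total_gb = native_gb + cxl_modules * cxl_module_gb
print(f"native DIMM capacity: {native_gb} GB, with CXL expanders: {total_gb} GB")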

CXL Consortium Releases Compute Express Link 3.0 Specification to Expand Fabric Capabilities and Management

The CXL Consortium, an industry standards body dedicated to advancing Compute Express Link (CXL) technology, today announced the release of the CXL 3.0 specification. The CXL 3.0 specification expands on previous technology generations to increase scalability and to optimize system level flows with advanced switching and fabric capabilities, efficient peer-to-peer communications, and fine-grained resource sharing across multiple compute domains.

"Modern datacenters require heterogenous and composable architectures to support compute intensive workloads for applications such as Artificial Intelligence and Machine Learning - and we continue to evolve CXL technology to meet industry requirements," said Siamak Tavallaei, president, CXL Consortium. "Developed by our dedicated technical workgroup members, the CXL 3.0 specification will enable new usage models in composable disaggregated infrastructure."

BittWare Announces PCIe 5.0/CXL FPGA Accelerators Featuring Intel Agilex M-Series and I-Series to Drive Memory and Interconnectivity Improvements

BittWare, a Molex company, a leading supplier of enterprise-class accelerators for edge and cloud-computing applications, today introduced new card and server-level solutions featuring Intel Agilex FPGAs. The new BittWare IA-860m helps customers alleviate memory-bound application workloads by leveraging up to 32 GB of HBM2E in-package memory and 16 lanes of PCIe 5.0 (with CXL upgrade option). BittWare also added new Intel Agilex I-Series FPGA-based products with the introduction of the IA-440i and IA-640i accelerators, which support high-performance interfaces, including 400G Ethernet and PCIe 5.0 (CXL option). These newest models complement BittWare's existing lineup of Intel Agilex F-Series products to comprise one of the broadest portfolios of Intel Agilex FPGA-based offerings on the market. This announcement reinforces BittWare's commitment to addressing the ever-increasing demands of high-performance compute, storage, network and sensor processing applications.

"BittWare is excited to apply Intel's advanced technology to solve increasingly difficult application problems, quickly and at low risk," said Craig Petrie, vice president, Sales and Marketing of BittWare. "Our longstanding collaboration with Intel, expertise with the latest development tools, including OneAPI, as well as alignment with Molex's global supply chain and manufacturing capabilities enable BittWare to reduce development time by 12-to-18 months while ensuring smooth transitions from proof-of-concept to volume product deployment."

OpenCAPI Consortium Merges Into CXL

The industry has been undergoing significant changes in computing. Application-specific hardware acceleration is becoming commonplace and new memory technologies are influencing the economics of computing. To address the need for an open architecture allowing full industry participation, the OpenCAPI Consortium (OCC) was founded in 2016. The architecture it defined allowed any microprocessor to attach to coherent user-level accelerators and advanced memories, and was agnostic to the processor architecture. In 2021, OCC announced the Open Memory Interface (OMI). Based on OpenCAPI, OMI is a serial-attached near-memory interface that provides low-latency, high-bandwidth connections for main memory.

In 2019, the Compute Express Link (CXL) Consortium was launched to deliver an industry-supported cache-coherent interconnect for processors, memory expansion, and accelerators. In 2020, the CXL and Gen-Z Consortiums announced plans to implement interoperability between their respective technologies, and in early 2022, Gen-Z transferred its specifications and assets to the CXL Consortium.

Kioxia Launches Second Generation of High-Performance, Cost-Effective XL-FLASH Storage Class Memory Solution

Kioxia Corporation, the world leader in memory solutions, today announced the launch of the second generation of XL-FLASH, a Storage Class Memory (SCM) solution based on its BiCS FLASH 3D flash memory technology, which significantly reduces bit cost while providing high performance and low latency. Product sample shipments are scheduled to start in November this year, with volume production expected to begin in 2023.

The second-generation XL-FLASH achieves a significant reduction in bit cost by adding multi-level cell (MLC) functionality with 2 bits per cell, alongside the single-level cell (SLC) operation of the existing model. The maximum number of planes that can operate simultaneously has also increased over the current model, which will allow for improved throughput. The new XL-FLASH will have a memory capacity of 256 gigabits.

SK hynix Develops DDR5 DRAM CXLTM Memory to Expand the CXL Memory Ecosystem

SK hynix has developed its first DDR5 DRAM-based CXL (Compute Express Link) memory samples, strengthening its presence in the next-generation memory solutions market. The sample comes in the EDSFF (Enterprise & Data Center Standard Form Factor) E3.S form factor, supports a PCIe 5.0 x8 lane interface, uses DDR5 standard DRAM, and is equipped with CXL controllers. CXL, which is based on PCIe (Peripheral Component Interconnect Express), is a new standardized interface that helps increase the efficiency of utilizing CPUs, GPUs, accelerators, and memory. SK hynix has participated in the CXL Consortium from an early stage and is looking to secure CXL memory market leadership.
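For a back-of-the-envelope sense of what a PCIe 5.0 x8 link offers such a module, the raw per-direction bandwidth can be estimated as below. These are theoretical figures before protocol overhead, not SK hynix measurements.

GT_PER_S = 32            # PCIe 5.0 signaling rate per lane (GT/s)
ENCODING = 128 / 130     # 128b/130b line encoding used by PCIe 5.0
LANES = 8

gbytes_per_s = GT_PER_S * ENCODING / 8 * LANES    # bits -> bytes, times lane count
print(f"~{gbytes_per_s:.1f} GB/s per direction")  # ~31.5 GB/s before protocol overhead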

The essential point of the CXL memory market is expandability. CXL memory allows for flexible memory expansion, in contrast to the current server market, where memory capacity and performance are fixed once the server platform is adopted. CXL also has high growth potential as an interface spotlighted for high-performance computing systems such as AI and big-data applications.

CXL Memory Pooling will Save Millions in DRAM Cost

Hyperscalers such as Microsoft, Google, and Amazon all run their cloud divisions with a specific goal: to provide their hardware to someone else in the form of an instance and have the user pay for it by the hour. However, instances are bound to specific CPU and memory configurations that you cannot adjust yourself; you can only choose from the few options listed. For example, selecting one virtual CPU core might get you two GB of RAM, and while you can scale the CPU core count as high as you want, the allocated RAM doubles along with it, even if you don't need it. When renting an instance, the allocated CPU cores and memory are yours until the instance is turned off.

And it is precisely this inefficiency that hyperscalers are dealing with. Many instances don't fully utilize their DRAM, making overall data center usage inefficient. Microsoft Azure, one of the largest cloud providers, measured that 50% of all VMs never touch 50% of their rented memory. This leaves memory stranded in a rented VM, unusable for anything else. As the Azure team puts it:
"At Azure, we find that a major contributor to DRAM inefficiency is platform-level memory stranding. Memory stranding occurs when a server's cores are fully rented to virtual machines (VMs), but unrented memory remains. With the cores exhausted, the remaining memory is unrentable on its own, and is thus stranded. Surprisingly, we find that up to 25% of DRAM may become stranded at any given moment."
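A toy sketch of the stranding effect described above: once a host's cores are fully rented, any leftover DRAM cannot be sold on its own. The host shape and VM mix below are invented for illustration and do not reflect actual Azure configurations.

HOST_CORES = 64          # invented host shape
HOST_MEM_GB = 256

# (cores, memory in GB) requested by the VMs packed onto this host
vms = [(16, 32), (16, 24), (16, 40), (16, 20)]

used_cores = sum(c for c, _ in vms)
used_mem = sum(m for _, m in vms)

# Once every core is rented, the remaining DRAM cannot be rented on its own.
stranded_gb = HOST_MEM_GB - used_mem if used_cores >= HOST_CORES else 0
print(f"cores {used_cores}/{HOST_CORES}, memory {used_mem}/{HOST_MEM_GB} GB, "
      f"stranded: {stranded_gb} GB")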

Samsung & Red Hat Announce Collaboration in the Field of Next-Generation Memory Software

Samsung Electronics and Red Hat today announced a broad collaboration on software technologies for next-generation memory solutions. The partnership will focus on the development and validation of open-source software for existing and emerging memory and storage products, including NVMe SSDs, CXL memory, computational memory/storage (HBM-PIM, Smart SSDs), and fabrics, in building an expansive ecosystem for closely integrated memory hardware and software. The exponential growth of data driven by AI, AR and the fast-approaching metaverse is bringing disruptive changes to memory designs, requiring more sophisticated software technologies that better link with the latest hardware advancements.

"Samsung and Red Hat will make a concerted effort to define and standardize memory software solutions that embrace evolving server and memory hardware, while building a more robust memory ecosystem," said Yongcheol Bae, Executive Vice President and Head of the Memory Application Engineering Team at Samsung Electronics. "We will invite partners from across the IT industry to join us in expanding the software-hardware memory ecosystem to create greater customer value."

Samsung Electronics Introduces Industry's First 512GB CXL Memory Module

Samsung Electronics, the world leader in advanced memory technology, today announced its development of the industry's first 512-gigabyte (GB) Compute Express Link (CXL) DRAM, taking an important step toward the commercialization of CXL which will enable extremely high memory capacity with low latency in IT systems. Since introducing the industry's first CXL DRAM prototype with a field-programmable gate array (FPGA) controller in May 2021, Samsung has been working closely with data center, enterprise server and chipset companies to develop an improved, customizable CXL device.

The new CXL DRAM is built with an application-specific integrated circuit (ASIC) CXL controller and is the first to pack 512 GB of DDR5 DRAM, featuring four times the memory capacity and one-fifth the system latency over the previous Samsung CXL offering. "CXL DRAM will become a critical turning point for future computing structures by substantially advancing artificial intelligence (AI) and big data services, as we aggressively expand its usage in next-generation memory architectures including software-defined memory (SDM)," said Cheolmin Park, Vice President of Memory Global Sales & Marketing at Samsung Electronics, and Director of the CXL Consortium. "Samsung will continue to collaborate across the industry to develop and standardize CXL memory solutions, while fostering an increasingly solid ecosystem."

Montage Technology Delivers the World's First CXL Memory eXpander Controller

Montage Technology, a leading data processing and interconnect IC design company, today announced that it has delivered the world's first Compute Express Link (CXL) Memory eXpander Controller (MXC). The device is designed to be used in Add-in Cards (AIC), Backplanes or EDSFF memory modules to enable significant scaling of memory capacity and bandwidth for data-intensive applications such as high-performance computing (HPC) and artificial intelligence (AI). The MXC is a Type 3 CXL DRAM memory controller. The MXC supports and is compliant with both DDR4 & DDR5 JEDEC standards. It is also designed to the CXL 2.0 specification and supports PCIe 5.0 specification speeds. The MXC provides high-bandwidth and low-latency interconnect between the CPU and the CXL-based devices, allowing them to share memory for higher performance, reduced software stack complexity, and lower data center TCO.

Montage Technology's President, Stephen Tai said, "CXL is a key technology that enables innovative ways to do memory expansion and pooling which will play an important role in next-generation server platforms. I'm very excited that Montage is the first company in the industry to successfully deliver the MXC chip, which signals we are making a critical step towards advancing the CXL interconnect technology to the memory market." CXL Consortium's President, Siamak Tavallaei said, "The CXL Consortium is excited to see continued CXL specification adoption to enable technologies and solutions such as the CXL DRAM Memory eXpander Controller." Montage Technology is working closely with industry-leading memory manufacturers to deliver advanced memory products based on the CXL MXC and help develop a robust memory ecosystem around CXL.

Rambus to Acquire Hardent, Accelerating Roadmap for Next-Generation Data Center Solutions

Rambus Inc., a provider of industry-leading chips and silicon IP making data faster and safer, today announced it has signed an agreement to acquire Hardent, Inc. ("Hardent"), a leading electronic design company. This acquisition augments the world-class team of engineers at Rambus and accelerates the development of CXL processing solutions for next-generation data centers. With 20 years of semiconductor experience, Hardent's world-class silicon design, verification, compression, and Error Correction Code (ECC) expertise provides key resources for the Rambus CXL Memory Interconnect Initiative.

"Driven by the demands of advanced workloads like AI/ML and the move to disaggregated data center architectures, industry momentum for CXL-based solutions continues to grow," said Luc Seraphin, president and CEO of Rambus. "The addition of the highly-skilled Hardent design team brings key resources that will accelerate our roadmap and expand our reach to address customer needs for next-generation data center solutions." "The Rambus culture and track record of technology leadership is an ideal fit for Hardent," said Simon Robin, president and founder of Hardent. "The team is looking forward to joining Rambus and is excited to be part of a global company advancing the future of data center solutions." In addition, Hardent brings complementary IP and services to the Rambus silicon IP portfolio, expanding the customer base and design wins in automotive and consumer electronic applications. The transaction is expected to close in the second calendar quarter of 2022 and will not materially impact results.

Keysight Delivers Single Vendor Validation Solution for Seamless Support of PCIe 5.0 and 6.0

Keysight Technologies, Inc., a leading technology company that delivers advanced design and validation solutions to help accelerate innovation to connect and secure the world, announced an end-to-end PCIe test solution for digital development and senior engineers that enables the simulation, pathfinding, characterization, validation and compliance testing of PCIe designs. The rapid increase of AI (artificial intelligence) related workloads in data centers and edge computing demands new compute designs. Data center system designers are challenged to provide new higher-speed devices within reduced design cycles. New PCIe devices will need to keep up with Ethernet network interfaces in data centers and the emergence of CXL (Compute Express Link).

To maintain performance goals and prepare for the PCIe 6.0 move to pulse amplitude modulation 4-level (PAM4), customers need a smooth transition from PCIe 5.0 to 6.0, where the integrity of PCIe measurements is backed by leading-edge tools and complies with PCIe specifications. With shrinking design cycles, end-to-end solutions from simulation to validation through the layers of the stack are required. Keysight provides a comprehensive physical layer test solution, approved by the Peripheral Component Interconnect Special Interest Group (PCI-SIG) to test transmitters and receivers for all generations of the PCIe specification, which is currently supported by the PCI-SIG integrators list. To reflect the increasing time-to-market pressure on design engineers, Keysight extends the portfolio to cover the PCIe protocol, making it the first end-to-end solution from simulation to full-stack validation.

Tanzanite Silicon Solutions Demonstrates Industry's First CXL Based Memory Expansion and Memory Pooling Products

Tanzanite Silicon Solutions Inc., the leader in the development of Compute Express Link (CXL) based products, is unveiling its architectural vision and product roadmap with an SoC mapped to an FPGA proof-of-concept vehicle demonstrating Memory Expansion and Memory Pooling, with multi-host CXL based connectivity. Explosive demand for memory and compute to meet the needs of emerging applications such as Artificial Intelligence (AI), Machine Learning (ML), blockchain technology, and the metaverse is outpacing monolithic systems. A disaggregated data center design with composable components for CPU, memory, storage, GPU, and XPU is needed to provide flexible and dynamic pooling of resources to meet the varying demands of heterogeneous workloads in an optimal and efficient manner.

Tanzanite's visionary TanzanoidTZ architecture and purpose-built "Smart Logic Interface Connector" (SLICTZ) SoC enable independent scaling and sharing of memory and compute in a pool with low latency within and across server racks. The Tanzanite solution provides a highly scalable architecture for exascale-level memory capacity and compute acceleration, supporting multiple industry-standard form factors, including E1.S, E3.S, memory expansion boards, and memory appliances.

Intel Details Ponte Vecchio Accelerator: 63 Tiles, 600 Watt TDP, and Lots of Bandwidth

During the International Solid-State Circuits Conference (ISSCC) 2022, Intel gave us a closer look at its upcoming Ponte Vecchio HPC accelerator and how it operates. Until now, Intel had presented Ponte Vecchio as 47 tiles glued together in one package. However, the ISSCC presentation shows that the accelerator is structured in a more interesting way. There are 63 tiles in total: 16 are reserved for compute, eight are used for RAMBO cache, two are Foveros base tiles, two are Xe-Link tiles, eight are HBM2E tiles, and the EMIB connections take up 11 tiles. That accounts for the 47 active tiles; an additional 16 thermal tiles regulate the massive TDP output of this accelerator, bringing the total to 63.

What is interesting is that Intel gave away details of the RAMBO cache. This novel SRAM technology uses four banks of 3.75 MB each, for a total of 15 MB per tile. Each RAMBO tile is connected to the fabric at 1.3 TB/s. In contrast, compute tiles are connected to the chip fabric at 2.6 TB/s. With eight RAMBO cache tiles, we get an additional 120 MB of SRAM. The base tile is a 646 mm² die manufactured on the Intel 7 semiconductor process and contains 17 layers. It includes a memory controller, the Fully Integrated Voltage Regulators (FIVR), power management, a 16-lane PCIe 5.0 connection, and a CXL interface. The entire area of Ponte Vecchio is rather impressive: the 47 active tiles take up 2,330 mm², and when we include the thermal dies, the total area jumps to 3,100 mm². And, of course, the entire package is much larger at 4,844 mm², connected to the system with 4,468 pins.
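The tile and cache arithmetic above can be checked in a few lines, using only the figures quoted in the article itself:

tiles = {"compute": 16, "RAMBO cache": 8, "Foveros base": 2,
         "Xe-Link": 2, "HBM2E": 8, "EMIB": 11}
active_tiles = sum(tiles.values())                 # 47 active tiles
total_tiles = active_tiles + 16                    # plus 16 thermal tiles = 63

rambo_mb_per_tile = 4 * 3.75                       # four 3.75 MB banks = 15 MB per tile
rambo_mb_total = tiles["RAMBO cache"] * rambo_mb_per_tile   # 120 MB across 8 tiles

print(active_tiles, total_tiles, rambo_mb_per_tile, rambo_mb_total)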

Intel "Sapphire Rapids" Xeon 4-tile MCM Annotated

Intel Xeon Scalable "Sapphire Rapids" is an upcoming enterprise processor with a CPU core count of up to 60. This core count is achieved using four dies inter-connected using EMIB. Locuza, known on social media for annotating logic dies, posted an annotation for "Sapphire Rapids," based on a high-resolution die-shot revealed by Intel in its ISSCC 2022 presentation.

Each of the four dies in "Sapphire Rapids" is a fully-fledged multi-core processor in its own right, complete with CPU cores, integrated northbridge, memory and PCIe interfaces, and other platform I/O. What brings the four together is the use of five EMIB bridges per die. This allows the CPU cores of one die to transparently access the I/O and memory controlled by any of the other dies. Logically, "Sapphire Rapids" isn't unlike AMD "Naples," which uses IFOP (Infinity Fabric over package) to inter-connect four 8-core "Zeppelin" dies, but the effort here appears to be to minimize the latency of the on-package interconnect, moving toward a high-bandwidth, low-latency one that uses silicon bridges with high-density microscopic wiring between the dies (akin to an interposer).

SiPearl Partners With Intel to Deliver Exascale Supercomputer in Europe

SiPearl, the designer of the high-compute-power, low-consumption microprocessor that will be the heart of European supercomputers, has entered into a partnership with Intel to deliver a joint offering dedicated to the first exascale supercomputers in Europe. This partnership will offer their European customers the possibility of combining Rhea, the high-compute-power, low-consumption microprocessor developed by SiPearl, with Intel's Ponte Vecchio accelerator, thus creating a high-performance computing node that will promote the deployment of exascale supercomputing in Europe.

To enable this powerful combination, SiPearl plans to use and optimize for its Rhea microprocessor the open and unified programming interface, oneAPI, created by Intel. Using this single solution across the entire heterogeneous compute node, consisting of Rhea and Ponte Vecchio, will increase developer productivity and application performance.

Samsung Introduces Industry's First Open-source Software Solution for CXL Memory Platform

Samsung Electronics Co., Ltd., the world leader in advanced memory technology, today introduced the first open-source software solution, the Scalable Memory Development Kit (SMDK), that has been specially designed to support the Compute Express Link (CXL) memory platform. In May, Samsung unveiled the industry's first CXL memory expander that allows memory capacity and bandwidth to scale to levels far exceeding what is possible in today's server systems. Now, the company's CXL platform is being extended beyond hardware to offer easy-to-integrate software tools, making CXL memory much more accessible to data center system developers for emerging artificial intelligence (AI), machine learning (ML) and 5G-edge markets.

The CXL interconnect is an open, industry-backed standard that enables different types of devices such as accelerators, memory expanders and smart I/O devices to work more efficiently when processing high-performance computational workloads. "In order for data center and enterprise systems to smoothly run next-generation memory solutions like CXL, development of corresponding software is a necessity," said Cheolmin Park, vice president of the Memory Product Planning Team at Samsung Electronics. "Today, Samsung is reinforcing its commitment toward delivering a total memory solution that encompasses hardware and software, so that IT OEMs can incorporate new technologies into their systems much more effectively."

Penetration Rate of Ice Lake CPUs in Server Market Expected to Surpass 30% by Year's End as x86 Architecture Remains Dominant, Says TrendForce

While the server industry transitions to the latest generation of processors based on the x86 platform, the Intel Ice Lake and AMD Milan CPUs entered mass production earlier this year and were shipped to certain customers, such as North American CSPs and telecommunication companies, at a low volume in 1Q21, according to TrendForce's latest investigations. These processors are expected to begin seeing widespread adoption in the server market in 3Q21. TrendForce believes that Ice Lake represents a step-up in computing performance from the previous generation due to its higher scalability and support for more memory channels. On the other hand, the new normal that emerged in the post-pandemic era is expected to drive clients in the server sector to partially migrate to the Ice Lake platform, whose share in the server market is expected to surpass 30% in 4Q21.

TrendForce: Enterprise SSD Contract Prices Likely to Increase by 15% QoQ for 3Q21 Due to High SSD Demand and Short Supply of Upstream IC Components

The ramp-up of the Intel Ice Lake and AMD Milan processors is expected not only to propel growth in server shipments for two consecutive quarters from 2Q21 to 3Q21, but also to drive up the share of high-density products in North American hyperscalers' enterprise SSD purchases, according to TrendForce's latest investigations. In China, procurement activities by domestic hyperscalers Alibaba and ByteDance are expected to increase on a quarterly basis as well. With the labor force gradually returning to physical offices, enterprises are now placing an increasing number of IT equipment orders, including servers, compared to 1H21. Hence, global enterprise SSD procurement capacity is expected to increase by 7% QoQ in 3Q21. Ongoing shortages in foundry capacity, however, have led to the supply of SSD components lagging behind demand. At the same time, enterprise SSD suppliers are aggressively raising the share of high-density products in their offerings in an attempt to optimize their product lines' profitability. Taking these factors into account, TrendForce expects contract prices of enterprise SSDs to undergo a staggering 15% QoQ increase for 3Q21.

Samsung Unveils Industry-First Memory Module Incorporating New CXL Interconnect

Samsung Electronics Co., Ltd., the world leader in advanced memory technology, today unveiled the industry's first memory module supporting the new Compute Express Link (CXL) interconnect standard. Integrated with Samsung's Double Data Rate 5 (DDR5) technology, this CXL-based module will enable server systems to significantly scale memory capacity and bandwidth, accelerating artificial intelligence (AI) and high-performance computing (HPC) workloads in data centers.

The rise of AI and big data has been fueling the trend toward heterogeneous computing, where multiple processors work in parallel to process massive volumes of data. CXL—an open, industry-supported interconnect based on the PCI Express (PCIe) 5.0 interface—enables high-speed, low latency communication between the host processor and devices such as accelerators, memory buffers and smart I/O devices, while expanding memory capacity and bandwidth well beyond what is possible today. Samsung has been collaborating with several data center, server and chipset manufacturers to develop next-generation interface technology since the CXL consortium was formed in 2019.