News Posts matching #OpenCAPI


SMART Modular Technologies Launches its First Compute Express Link Memory Module

SMART Modular Technologies, Inc. ("SMART"), a division of SGH and a global leader in memory solutions, solid-state drives, and hybrid storage products, announces its new Compute Express Link (CXL) Memory Module, the XMM CXL memory module. SMART's new DDR5 XMM CXL modules help boost server and data center performance by enabling cache-coherent memory to be added behind the CXL interface, further expanding big data processing capabilities beyond the current 8-channel/12-channel limitations of most servers.

The industry adoption of composable serial-attached memory architecture enables a whole new era for the memory module industry. Serial-attached memory adds capacity and bandwidth capabilities beyond main memory DIMM modules. Servers with XMM CXL modules can be dynamically configured for different applications and workloads without being shut down. Memory can be shared across nodes to meet throughput and latency requirements.

OpenCAPI Consortium Merges Into CXL

The industry has been undergoing significant changes in computing. Application-specific hardware acceleration is becoming commonplace, and new memory technologies are influencing the economics of computing. To address the need for an open architecture allowing full industry participation, the OpenCAPI Consortium (OCC) was founded in 2016. The architecture it defined allowed any microprocessor to attach to coherent user-level accelerators and advanced memories, and was agnostic to the processor architecture. In 2021, OCC announced the Open Memory Interface (OMI). Based on OpenCAPI, OMI is a serial-attached near-memory interface that provides low-latency, high-bandwidth connections for main memory.

In 2019, the Compute Express Link (CXL) Consortium was launched to deliver an industry-supported cache-coherent interconnect for processors, memory expansion, and accelerators. In 2020, the CXL and Gen-Z Consortiums announced plans to implement interoperability between their respective technologies, and in early 2022, Gen-Z transferred its specifications and assets to the CXL Consortium.

Compute Express Link Consortium (CXL) Officially Incorporates

Today, Alibaba, Cisco, Dell EMC, Facebook, Google, Hewlett Packard Enterprise, Huawei, Intel Corporation and Microsoft announced the incorporation of the Compute Express Link (CXL) Consortium, and unveiled the names of the newly elected members of its Board of Directors. The core group of key industry partners announced their intent to incorporate in March 2019, and remain dedicated to advancing the CXL standard, a new high-speed CPU-to-Device and CPU-to-Memory interconnect that accelerates next-generation data center performance.

The five new CXL board members are as follows: Steve Fields, Fellow and Chief Engineer of Power Systems, IBM; Gaurav Singh, Corporate Vice President, Xilinx; Dong Wei, Standards Architect and Fellow at ARM Holdings; Nathan Kalyanasundharam, Senior Fellow at AMD Semiconductor; and Larrie Carr, Fellow, Technical Strategy and Architecture, Data Center Solutions, Microchip Technology Inc.

AMD Announces the Radeon Instinct Family of Deep-Learning Accelerators

AMD (NASDAQ: AMD) today unveiled its strategy to accelerate the machine intelligence era in server computing through a new suite of hardware and open-source software offerings designed to dramatically increase performance, efficiency, and ease of implementation of deep learning workloads. New Radeon Instinct accelerators will offer organizations powerful GPU-based solutions for deep learning inference and training. Along with the new hardware offerings, AMD announced MIOpen, a free, open-source library for GPU accelerators intended to enable high-performance machine intelligence implementations, and new, optimized deep learning frameworks on AMD's ROCm software to build the foundation of the next evolution of machine intelligence workloads.

Inexpensive high-capacity storage, an abundance of sensor driven data, and the exponential growth of user-generated content are driving exabytes of data globally. Recent advances in machine intelligence algorithms mapped to high-performance GPUs are enabling orders of magnitude acceleration of the processing and understanding of that data, producing insights in near real time. Radeon Instinct is a blueprint for an open software ecosystem for machine intelligence, helping to speed inference insights and algorithm training.

AMD Announces ROCm Initiative - High-Performance Computing & Open-Standards

AMD on Monday announced their ROCm initiative. Introduced by AMD's Gregory Stoner, Senior Director for the Radeon Open Compute Initiative, ROCm stands for Radeon Open Compute platforM. This open-standard, high-performance, Hyper Scale computing platform stands on the shoulders of AMD's technological expertise and accomplishments, with cards like the Radeon R9 Nano achieving as much as 46 GFLOPS of peak single-precision performance per Watt.

The natural evolution of AMD's Boltzmann Initiative, ROCm grants developers and coders a platform which allows the leveraging of AMD's GPU solutions through a variety of popular programming languages, such as OpenCL, CUDA, ISO C++ and Python. AMD knows that the hardware is but a single piece in an ecosystem, and that having it without any supporting software is a recipe for failure. As such, AMD's ROCm stands as AMD's push towards HPC by leveraging both its hardware, as well as the support for open-standards and the conversion of otherwise proprietary code.

Tech Industry Leaders Unite, Unveil New High-Perf Server Interconnect Technology

On the heels of the recent Gen-Z interconnect announcement, a group of some of the most recognizable names in the tech industry has once again banded together. This time, the effort is toward a fast, coherent, and widely compatible interconnect technology that will pave the way to tighter integration of ever-more heterogeneous systems.

Technology leaders AMD, Dell EMC, Google, Hewlett Packard Enterprise, IBM, Mellanox Technologies, Micron, NVIDIA and Xilinx announced the new open standard to appropriate fanfare, considering its promise of an up to 10x performance uplift in data center server environments, thus accelerating big data, machine learning, analytics, and other emerging workloads. The interconnect promises a high-speed pathway to tighter integration between the different types of technology that make up today's heterogeneous server computing needs, ranging from fixed-purpose accelerators to current and future system memory subsystems, and coherent storage and network controllers.