News Posts matching #MRDIMM


Renesas Unveils Industry's First Complete Chipset for Gen-2 DDR5 Server MRDIMMs

Renesas Electronics Corporation, a premier supplier of advanced semiconductor solutions, today announced that it has delivered the industry's first complete memory interface chipset solutions for second-generation DDR5 Multiplexed Rank Dual Inline Memory Modules (MRDIMMs).

The new DDR5 MRDIMMs are needed to keep pace with the ever-increasing memory bandwidth demands of Artificial Intelligence (AI), High-Performance Computing (HPC) and other data center applications. They deliver operating speeds up to 12,800 megatransfers per second (MT/s), a 1.35x improvement in memory bandwidth over first-generation solutions. Renesas has been instrumental in the design, development and deployment of the new MRDIMMs, collaborating with industry leaders including CPU and memory providers, along with end customers.
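To put the headline transfer rate in perspective, peak per-module bandwidth follows directly from the rate and the data-path width. A minimal sketch, assuming the standard 64-bit (8-byte) DDR5 data path; the function name is illustrative, not from the source:

```python
def peak_bandwidth_gbs(mt_per_s: int, bytes_per_transfer: int = 8) -> float:
    """Peak bandwidth in GB/s for a module at the given transfer rate,
    assuming an 8-byte (64-bit) data path per transfer."""
    return mt_per_s * bytes_per_transfer / 1000

# At the Gen-2 MRDIMM rate of 12,800 MT/s:
print(peak_bandwidth_gbs(12800))  # 102.4 GB/s peak per module
```

The same arithmetic applies at any rate the module negotiates; sustained bandwidth in practice is lower than this theoretical peak.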

New Ultrafast Memory Boosts Intel Data Center Chips

While Intel's primary product focus is on the processors, or brains, that make computers work, system memory (that's DRAM) is a critical component for performance. This is especially true in servers, where the multiplication of processing cores has outpaced the rise in memory bandwidth (in other words, the memory bandwidth available per core has fallen). In heavy-duty computing jobs like weather modeling, computational fluid dynamics and certain types of AI, this mismatch could create a bottleneck—until now.

After several years of development with industry partners, Intel engineers have found a path to open that bottleneck, crafting a novel solution that has created the fastest system memory ever and is set to become a new open industry standard. The recently introduced Intel Xeon 6 data center processors are the first to benefit from this new memory, called MRDIMMs, for higher performance—in the most plug-and-play manner imaginable.

Rambus Unveils Industry-First Complete Chipsets for Next-Generation DDR5 MRDIMMs and RDIMMs

Rambus Inc., a premier chip and silicon IP provider making data faster and safer, today unveiled industry-first, complete memory interface chipsets for Gen 5 DDR5 RDIMMs and next-generation DDR5 Multiplexed Rank Dual Inline Memory Modules (MRDIMMs). These innovative new products for RDIMMs and MRDIMMs will seamlessly extend DDR5 performance with unparalleled bandwidth and memory capacity for compute-intensive data center and AI workloads.

"The voracious memory demands of AI and HPC require the relentless pursuit of higher performance through continued innovation and technology leadership," said Sean Fan, chief operating officer at Rambus. "With our 30-plus years of renowned high-speed signal integrity and memory system expertise, the Rambus Gen5 RCD, and next-generation MRCD, MDB, and PMIC will be critical enabling chips in future-generation servers leveraging DDR5 RDIMM 8000 and MRDIMM 12800."

AMD EPYC "Turin" with 192 Cores and 384 Threads Delivers Almost 40% Higher Performance Than Intel Xeon 6

AMD has unveiled its latest EPYC processors, codenamed "Turin," featuring Zen 5 and Zen 5C dense cores. Phoronix's thorough testing reveals remarkable advancements in performance, efficiency, and value. The new lineup includes the EPYC 9575F (64-core), EPYC 9755 (128-core), and EPYC 9965 (192-core) models, all showing impressive capabilities across various server and HPC workloads. In benchmarks, a dual-socket configuration of the 128-core EPYC 9755 Turin outperformed Intel's dual Xeon "Granite Rapids" 6980P setup with MRDIMM-8800 by 40% in the geometric mean of all tests. Surprisingly, even a single EPYC 9755 or EPYC 9965 matched the dual Xeon 6980P in expanded tests with regular DDR5-6400. Within AMD's lineup, the EPYC 9755 showed a 1.55x performance increase over its predecessor, the 96-core EPYC 9654 "Genoa". The EPYC 9965 surpassed the dual EPYC 9754 "Bergamo" by 45%.

These gains come with improved efficiency. While power consumption increased moderately, performance improvements resulted in better overall efficiency. For example, the EPYC 9965 used 32% more power than the EPYC 9654 but delivered 1.55x the performance. Power consumption remains competitive: the EPYC 9965 averaged 275 Watts (peak 461 Watts), the EPYC 9755 averaged 324 Watts (peak 500 Watts), while Intel's Xeon 6980P averaged 322 Watts (peak 547 Watts). AMD's pricing strategy adds to the appeal. The 192-core model is priced at $14,813, compared to Intel's 128-core CPU at $17,800. This competitive pricing, combined with superior performance per dollar and watt, has resonated with hyperscalers. Estimates suggest 50-60% of hyperscale deployments now use AMD processors.
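The efficiency claim in that paragraph is easy to check: 1.55x the performance at 32% more power works out to roughly a 17% gain in performance per watt. A quick sketch of that arithmetic:

```python
# Figures from the article: EPYC 9965 vs EPYC 9654
perf_gain = 1.55    # 1.55x the performance of the predecessor
power_gain = 1.32   # 32% more power consumed

# Performance per watt improves by the ratio of the two
perf_per_watt_gain = perf_gain / power_gain
print(round(perf_per_watt_gain, 2))  # ~1.17, i.e. about 17% better perf/watt
```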

ASUS Introduces All-New Intel Xeon 6 Processor Servers

ASUS today announced its all-new line-up of Intel Xeon 6 processor-powered servers, ready to satisfy the escalating demand for high-performance computing (HPC) solutions. The new servers include the multi-node ASUS RS920Q-E12, which supports Intel Xeon 6900 series processors for HPC applications, and the ASUS RS720Q-E12, RS720-E12 and RS700-E12 models, which ship with Intel Xeon 6700 series processors with E-cores and will also support Intel Xeon 6700/6500 series processors with P-cores in Q1 2025, providing seamless integration and optimization for modern data centers and diverse IT environments.

These powerful new servers, built on the solid foundation of trusted and resilient ASUS server design, offer improved scalability, enabling clients to build customized data centers and scale up their infrastructure to achieve their highest computing potential - ready to deliver HPC success across diverse industries and use cases.

Supermicro Adds New Max-Performance Intel-Based X14 Servers

Supermicro, Inc., a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, today adds new maximum-performance GPU, multi-node, and rackmount systems to the X14 portfolio, based on the Intel Xeon 6900 Series Processors with P-Cores (formerly codenamed Granite Rapids-AP). The new industry-leading selection of workload-optimized servers addresses the needs of modern data centers, enterprises, and service providers. Joining the efficiency-optimized X14 servers leveraging the Xeon 6700 Series Processors with E-cores launched in June 2024, today's additions bring maximum compute density and power to the Supermicro X14 lineup, creating the industry's broadest range of optimized servers supporting workloads from demanding AI, HPC, media, and virtualization to energy-efficient edge, scale-out cloud-native, and microservices applications.

"Supermicro X14 systems have been completely re-engineered to support the latest technologies including next-generation CPUs, GPUs, highest bandwidth and lowest latency with MRDIMMs, PCIe 5.0, and EDSFF E1.S and E3.S storage," said Charles Liang, president and CEO of Supermicro. "Not only can we now offer more than 15 families, but we can also use these designs to create customized solutions with complete rack integration services and our in-house developed liquid cooling solutions."

Supermicro Previews New Max Performance Intel-based X14 Servers

Supermicro, Inc., a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, is previewing new, completely re-designed X14 server platforms which will leverage next-generation technologies to maximize performance for compute-intensive workloads and applications. Building on the success of Supermicro's efficiency-optimized X14 servers that launched in June 2024, the new systems feature significant upgrades across the board, supporting a never-before-seen 256 performance cores (P-cores) in a single node, support for MRDIMMs at up to 8800 MT/s, and compatibility with next-generation SXM, OAM, and PCIe GPUs. This combination can drastically accelerate AI and compute as well as significantly reduce the time and cost of large-scale AI training, high-performance computing, and complex data analytics tasks. Approved customers can secure early access to complete, full-production systems via Supermicro's Early Ship Program or for remote testing with Supermicro JumpStart.

"We continue to add to our already comprehensive Data Center Building Block solutions with these new platforms, which will offer unprecedented performance, and new advanced features," said Charles Liang, president and CEO of Supermicro. "Supermicro is ready to deliver these high-performance solutions at rack-scale with the industry's most comprehensive direct-to-chip liquid cooled, total rack integration services, and a global manufacturing capacity of up to 5,000 racks per month including 1,350 liquid cooled racks. With our worldwide manufacturing capabilities, we can deliver fully optimized solutions which accelerate our time-to-delivery like never before, while also reducing TCO."

Micron Technology Unveils MRDIMMs to Scale Up Memory Densities on Servers

Micron Technology, Inc., today announced it is now sampling its multiplexed rank dual inline memory modules (MRDIMMs). The MRDIMMs will enable Micron customers to run increasingly demanding workloads and obtain maximum value out of their compute infrastructure. For applications requiring more than 128 GB of memory per DIMM slot, Micron MRDIMMs outperform current TSV RDIMMs by enabling the highest bandwidth and largest capacity with the lowest latency and improved performance per watt, accelerating memory-intensive virtualized multi-tenant, HPC and AI data center workloads. The new memory offering is the first generation in the Micron MRDIMM family and will be compatible with Intel Xeon 6 processors.
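The "multiplexed rank" idea behind the MRDIMM name is that a data buffer interleaves transfers from two ranks onto the host-side bus, so the module's effective transfer rate can exceed what a single rank sustains. A conceptual sketch of that interleaving, purely illustrative and not a hardware model:

```python
from itertools import chain

def mux_two_ranks(rank_a: list, rank_b: list) -> list:
    """Interleave data beats from two ranks onto one host-side stream,
    the way an MRDIMM's data buffer alternates between ranks."""
    return list(chain.from_iterable(zip(rank_a, rank_b)))

# Two ranks each supplying beats at rate R yield a host stream at ~2R
host_stream = mux_two_ranks(["a0", "a1"], ["b0", "b1"])
print(host_stream)  # ['a0', 'b0', 'a1', 'b1']
```

In the same interval in which one rank delivers two beats, the host sees four, which is why the multiplexed module can present roughly double the per-rank bandwidth.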

"Micron's latest innovative main memory solution, MRDIMM, delivers the much-needed bandwidth and capacity at lower latency to scale AI inference and HPC applications on next-generation server platforms," said Praveen Vaidyanathan, vice president and general manager of Micron's Compute Products Group. "MRDIMMs significantly lower the amount of energy used per task while offering the same reliability, availability and serviceability capabilities and interface as RDIMMs, thus providing customers a flexible solution that scales performance. Micron's close industry collaborations ensure seamless integration into existing server infrastructures and smooth transitions to future compute platforms."
Dec 21st, 2024 10:55 EST
