News Posts matching #RDIMM


Micron Innovates From the Data Center to the Edge With NVIDIA

Secular growth of AI is built on the foundation of high-performance, high-bandwidth memory solutions. These high-performing memory solutions are critical to unlock the capabilities of GPUs and processors. Micron Technology, Inc., today announced it is the world's first and only memory company shipping both HBM3E and SOCAMM (small outline compression attached memory module) products for AI servers in the data center. This extends Micron's industry leadership in designing and delivering low-power DDR (LPDDR) for data center applications.

Micron's SOCAMM, a modular LPDDR5X memory solution, was developed in collaboration with NVIDIA to support the NVIDIA GB300 Grace Blackwell Ultra Superchip. The Micron HBM3E 12H 36 GB is also designed into the NVIDIA HGX B300 NVL16 and GB300 NVL72 platforms, while the HBM3E 8H 24 GB is available for the NVIDIA HGX B200 and GB200 NVL72 platforms. The deployment of Micron HBM3E products in NVIDIA Hopper and NVIDIA Blackwell systems underscores Micron's critical role in accelerating AI workloads.

MiTAC Computing Showcases Cutting-Edge AI and HPC Servers at Supercomputing Asia 2025

MiTAC Computing Technology Corp., a subsidiary of MiTAC Holdings Corp. and a global leader in server design and manufacturing, will showcase its latest AI and HPC innovations at Supercomputing Asia 2025, taking place from March 11 at Booth #B10. The event highlights MiTAC's commitment to delivering cutting-edge technology with the introduction of the G4520G6 AI server and the TN85-B8261 HPC server—both engineered to meet the growing demands of artificial intelligence, machine learning, and high-performance computing (HPC) applications.

G4520G6 AI Server: Performance, Scalability, and Efficiency Redefined
The G4520G6 AI server redefines computing performance with an advanced architecture tailored for intensive workloads. Key features include:
  • Exceptional Compute Power - Supports dual Intel Xeon 6 processors with TDP up to 350 W, delivering high-performance multicore processing for AI-driven applications.
  • Enhanced Memory Performance - Equipped with 32 DDR5 DIMM slots (16 per CPU) and 8 memory channels, supporting up to 8,192 GB of DDR5 RDIMM/3DS RDIMM at 6400 MT/s for superior memory bandwidth.
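As a quick sanity check on the bullet points above, the 8,192 GB ceiling works out to 256 GB per module across the 32 slots (the per-module capacity is inferred here, not stated by MiTAC):

```python
# Sanity-check the G4520G6 memory spec: 32 DDR5 DIMM slots at 8,192 GB total
# implies 256 GB per module (3DS RDIMM territory) -- the per-module size is inferred.
slots_per_cpu = 16
cpus = 2
total_slots = slots_per_cpu * cpus            # 32 DIMM slots
max_capacity_gb = 8192
per_module_gb = max_capacity_gb // total_slots
print(total_slots, per_module_gb)             # 32 slots, 256 GB per module
```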

Avalue Technology Unveils HPM-GNRDE High-Performance Server Motherboard

Avalue Technology introduces the HPM-GNRDE high-performance server motherboard, powered by the latest Intel Xeon 6 Processors (P-Core) 6500P & 6700P.

Designed to deliver quality computing performance, ultra-fast memory bandwidth, and advanced PCIe 5.0 expansion, the HPM-GNRDE is the ideal solution for AI workloads, high-performance computing (HPC), Cloud data centers, and enterprise applications. The HPM-GNRDE will make its debut at embedded world 2025, showcasing Avalue's innovation in high-performance computing.

ASUS Unveils All-New Intel Xeon 6 Server Lineup

ASUS today announced an all-new series of servers powered by the latest Intel Xeon 6 processors, including the Xeon 6900-series, 6500P/6700P-series and 6300-series processors. These powerhouse processors deliver exceptional performance, efficiency and scalability, featuring up to 128 Performance-cores (P-cores) or 288 Efficient-cores (E-cores) per socket, along with native support for PCI Express 5.0 (PCIe 5.0) and DDR5 6400 MT/s memory speeds. The latest ASUS server solutions also incorporate an updated BMC module based on the ASPEED AST2600 chipset, providing improved manageability, security and compatibility with a wide range of remote management software - and coincide with the unveiling of the latest Intel Xeon 6 processors.

Redefining efficiency and scalability
Intel Xeon 6 processors are engineered to meet the needs of modern data centers, AI-driven workloads and enterprise computing. Offering a choice between P-core and E-core architectures, these processors provide flexibility for businesses to optimize performance and energy efficiency based on specific workloads.

Lenovo Delivers Unmatched Flexibility, Performance and Design with New ThinkSystem V4 Servers Powered by Intel Xeon 6 Processors

Today, Lenovo announced three new infrastructure solutions, powered by Intel Xeon 6 processors, designed to modernize and elevate data centers of any size to AI-enabled powerhouses. The solutions include next generation Lenovo ThinkSystem V4 servers that deliver breakthrough performance and exceptional versatility to handle any workload while enabling powerful AI capabilities in compact, high-density designs. Whether deploying at the edge, co-locating or leveraging a hybrid cloud, Lenovo is delivering the right mix of solutions that seamlessly unlock intelligence and bring AI wherever it is needed.

The new Lenovo ThinkSystem servers are purpose-built to run the widest range of workloads, including the most compute-intensive - from algorithmic trading to web serving, astrophysics to email, and CRM to CAE. Organizations can streamline management and boost productivity with the new systems, achieving up to 6.1x higher compute performance than previous-generation CPUs with Intel Xeon 6 processors with P-cores, and up to 2x the memory bandwidth when using new MRDIMM technology, to scale and accelerate AI everywhere.

Advantech Unveils New AI, Industrial and Network Edge Servers Powered by Intel Xeon 6 Processors

Advantech, a global leader in industrial and embedded computing, today announced the launch of seven new server platforms built on Intel Xeon 6 processors, optimized for industrial, transportation and communications applications. Designed to meet the increasing demands of complex AI, storage, and networking workloads, these innovative edge servers and network appliances provide superior performance, reliability, and scalability for system integrators, solution providers, and service providers.

Intel Xeon 6 - Exceptional Performance for the Widest Range of Workloads
Intel Xeon 6 processors feature advanced performance and efficiency cores, delivering up to 86 cores per CPU, DDR5 memory support with speeds up to 6400 MT/s, and PCIe Gen 5 lanes for high-speed connectivity. Designed to optimize both compute-intensive and scale-out workloads, these processors ensure seamless integration across a wide array of applications.

AMD to Build Next-Gen I/O Dies on Samsung 4nm, Not TSMC N4P

Back in January, we covered a report about AMD designing its next-generation "Zen 6" CCDs on a 3 nm-class node by TSMC, and developing a new line of server and client I/O dies (cIOD and sIOD). The I/O die is a crucial piece of silicon that contains all the uncore components of the processor, including the memory controllers, the PCIe root complex, and Infinity Fabric interconnects to the CCDs and multi-socket connections. Back then it was reported that these new-generation I/O dies were being designed on the 4 nm silicon fabrication process, which was interpreted as being AMD's favorite 4 nm-class node, the TSMC N4P, on which the company builds everything from its current "Strix Point" mobile processors to the "Zen 5" CCDs. It turns out that AMD has other plans, and is exploring a 4 nm-class node by Samsung.

This node is very likely the Samsung 4LPP, also known as the SF4, which has been in mass production since 2022. The table below shows how the SF4 compares with TSMC N4P and Intel 4, striking a balance between the two. We have also added values for the TSMC N5 node from which the N4P is derived, and you can see that the SF4 offers comparable transistor density to the N5, and a significant improvement in transistor density over the TSMC N6, which AMD uses for its current generation of sIOD and cIOD. The new 4 nm node will allow AMD to reduce the TDP of the I/O die and implement a new power-management solution; more importantly, the new I/O die is driven by the need for updated memory controllers that support higher DDR5 speeds and new kinds of DIMMs, such as CUDIMMs and RDIMMs with RCDs.

Montage Technology Delivers Gen2 MRCD & MDB Samples for DDR5 MRDIMM

Montage Technology today announced that it has successfully sampled its Gen 2 Multiplexed Rank Registering Clock Driver (MRCD) and Multiplexed Rank Data Buffer (MDB) chipset to leading global memory manufacturers. Designed for DDR5 Multiplexed Rank DIMM (MRDIMM), this new chipset supports data rates up to 12800 MT/s, delivering exceptional memory performance for next-generation computing platforms.
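The headline 12800 MT/s figure follows from MRDIMM's basic trick: the MRCD/MDB chipset multiplexes two ranks, so the host-side data rate is roughly double the per-rank DDR5 rate. A minimal sketch, assuming each rank runs at 6400 MT/s (a simplified model, not Montage's published design detail):

```python
# MRDIMM multiplexes two ranks through the MRCD/MDB chipset, so the effective
# host-side data rate is roughly twice the per-rank DDR5 rate (simplified model).
per_rank_mts = 6400        # assumed per-rank DDR5 speed
ranks_multiplexed = 2      # MRDIMM interleaves two ranks per transfer window
effective_mts = per_rank_mts * ranks_multiplexed
print(effective_mts)       # 12800 MT/s, matching the Gen 2 chipset's rated speed
```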

The release comes at a crucial time, as AI and big data analytics drive increasing demands for memory bandwidth in data centers. MRDIMM technology has emerged as a key solution to address this challenge, particularly as server processors continue to increase in core count.

Supermicro Begins Volume Shipments of Max-Performance Servers Optimized for AI, HPC, Virtualization, and Edge Workloads

Supermicro, Inc., a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, is commencing shipments of max-performance servers featuring Intel Xeon 6900 series processors with P-cores. The new systems feature a range of new and upgraded technologies with new architectures optimized for the most demanding high-performance workloads, including large-scale AI, cluster-scale HPC, and environments where a maximum number of GPUs are needed, such as collaborative design and media distribution.

"The systems now shipping in volume promise to unlock new capabilities and levels of performance for our customers around the world, featuring low latency, maximum I/O expansion providing high throughput with 256 performance cores per system, 12 memory channels per CPU with MRDIMM support, and high performance EDSFF storage options," said Charles Liang, president and CEO of Supermicro. "We are able to ship our complete range of servers with these new application-optimized technologies thanks to our Server Building Block Solutions design methodology. With our global capacity to ship solutions at any scale, and in-house developed liquid cooling solutions providing unrivaled cooling efficiency, Supermicro is leading the industry into a new era of maximum performance computing."

Micron at the 2025 CES: Scripting a Strong Comeback to the Client and PC-DIY Segments

Micron at the 2025 International CES showed us products that hint at the company planning a strong comeback to the client and PC-DIY market segments. The company's Crucial brand is already a high-volume player in the client segment, but it never really approached the enthusiast segment. Products like the company's new T705 Pro and P510 NVMe SSDs, and DDR5 Pro Overclocking memory, seek to change this. We begin our tour with PC memory, and the DDR5 Pro OC CUDIMMs. Crucial has jumped onto the CKD bandwagon, introducing memory modules and kits that come with DDR5-6400 out of the box, but which are geared for manual overclocking to take advantage of the 1β DRAM chips underneath (hence the name).

The company also showed us their first DDR5 CSODIMM suitable for the next generation of notebooks with HX-segment processors. This module comes with a CKD and a DDR5-6400 JEDEC-standard SPD profile out of the box. Lastly, there's the Micron-branded LPCAMM2, which comes in speeds of up to LPDDR5X-8533, and is suitable for the next generation of ultraportables.

ADATA Memory at CES 2025: CUDIMMs, CSODIMMs, and RDIMMs with RCD

ADATA at the 2025 International CES brought several of its latest memory products. The technology dominating memory products this year is the CKD, or client clock driver. But there's more: ADATA also introduced memory modules with an RCD, or registering clock driver, the equivalent component for RDIMMs. We begin our tour with the XPG Lancer CUDIMM RGB series, the company's flagship PC overclocking memory product. The top-spec module shown here comes with speeds as high as DDR5-9733, a step above even the DDR5-9600 that most other brands brought. The module comes in densities of 16 GB and 24 GB, and speeds of DDR5-8400, DDR5-8800, DDR5-9200 and DDR5-9600, besides the top DDR5-9733. When paired with a Core Ultra "Arrow Lake-S" processor in Gear 4 mode, these kits should easily cross 10,000 MT/s using manual overclocking.

Next up, the company showed us its AICore line of DDR5 RDIMMs for workstations and servers. The module packs an RCD, a registering clock driver, which is essentially a CKD for RDIMMs. It is a component that cleans up and amplifies the DDR5 physical-layer signal, letting the machine operate at higher memory frequencies. The AICore series comes in speeds of up to DDR5-8000, and densities of up to 16 GB per module. Other speed variants in the series include DDR5-6400 and DDR5-7200. The recommended platforms for these modules include Intel's Xeon W-3500/W-2500 series "Sapphire Rapids," and AMD Ryzen Threadripper 7000-series "Storm Peak."

Crucial Broadens Memory and Storage Portfolio at CES 2025

Micron Technology, Inc., today announced expansions across its Crucial consumer memory and storage portfolio, including unveiling the high-speed Crucial P510 SSD, and expanding density and form factor options across its existing DRAM portfolio to enable broader choice and flexibility for consumers. The P510 features read and write speeds of up to 11,000/9,550 megabytes per second (MB/s), bringing blazing fast Gen 5 performance to the masses.

"With the exciting memory and storage offerings we're debuting today, Crucial's portfolio is now broader and stronger than ever before," said Dinesh Bahal, corporate vice president and general manager of Micron's Commercial Products Group. "These updates - from making fast Gen 5 SSDs available to the mainstream to launching high-density memory options - illustrate our commitment to driving innovation, performance and value for every consumer from casual gamers to creatives and students to hardcore enthusiasts."

Netlist Wins $118 Million in Second Patent Infringement Trial Against Samsung

Netlist, Inc. today announced that it won a $118 million damages award against Samsung Electronics Co., LTD., Samsung Electronics America, Inc., and Samsung Semiconductor, Inc. (together "Samsung") in the United States District Court for the Eastern District of Texas. The award resulted from a jury trial which involved three Netlist patents: U.S. Patent Nos. 7,619,912, 11,093,417 and 10,268,608. The infringing products were all Samsung DDR4 RDIMMs and DDR4 LRDIMMs. Netlist filed the complaint against Samsung in August 2022.

The federal jury's unanimous verdict confirmed that all three Netlist patents had been infringed by Samsung, that none of the patents were invalid, that Samsung willfully infringed those patents, and that money damages were owed to Netlist for the infringement of all three patents.

MiTAC Unveils New AI/HPC-Optimized Servers With Advanced CPU and GPU Integration

MiTAC Computing Technology Corporation, an industry-leading server platform design manufacturer and a subsidiary of MiTAC Holdings Corporation (TSE:3706), is unveiling its new server lineup at SC24, booth #2543, in Atlanta, Georgia. MiTAC Computing's servers integrate the latest AMD EPYC 9005 Series CPUs, AMD Instinct MI325X GPU accelerators, Intel Xeon 6 processors, and professional GPUs to deliver enhanced performance optimized for HPC and AI workloads.

Leading Performance and Density for AI-Driven Data Center Workloads
MiTAC Computing's new servers, powered by AMD EPYC 9005 Series CPUs, are optimized for high-performance AI workloads. At SC24, MiTAC highlights two standout AI/HPC products: the 8U dual-socket MiTAC G8825Z5, featuring AMD Instinct MI325X GPU accelerators, up to 6 TB of DDR5 6000 memory, and eight hot-swap U.2 drive trays, ideal for large-scale AI/HPC setups; and the 2U dual-socket MiTAC TYAN TN85-B8261, designed for HPC and deep learning applications with support for up to four dual-slot GPUs, twenty-four DDR5 RDIMM slots, and eight hot-swap NVMe U.2 drives. For mainstream cloud applications, MiTAC offers the 1U single-socket MiTAC TYAN GC68C-B8056, with twenty-four DDR5 DIMM slots and twelve tool-less 2.5-inch NVMe U.2 hot-swap bays. Also featured is the 2U single-socket MiTAC TYAN TS70A-B8056, designed for high-IOPS NVMe storage, and the 2U 4-node single-socket MiTAC M2810Z5, supporting up to 3,072 GB of DDR5 6000 RDIMM memory and four easy-swap E1.S drives per node.

Lenovo Shows 16 TB Memory Cluster with CXL in 128x 128 GB Configuration

Expanding a system's computing capability with an additional accelerator like a GPU is common; expanding its memory capacity with room for more DIMMs is something new. Thanks to ServeTheHome, we see that at the OCP Summit 2024, Lenovo showcased its ThinkSystem SR860 V3 server, leveraging CXL technology and Astera Labs Leo memory controllers to accommodate a staggering 16 TB of DDR5 memory across 128 DIMM slots. Traditional four-socket servers face limitations due to the memory channels supported by Intel Xeon processors. With each CPU supporting up to 16 DDR5 DIMMs, a four-socket configuration maxes out at 64 DIMMs, equating to 8 TB when using 128 GB RDIMMs. Lenovo's new approach expands this ceiling significantly by incorporating an additional 64 DIMM slots through CXL memory expansion.
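The capacity arithmetic in this paragraph can be reproduced directly (all figures are from the text; every module is a 128 GB RDIMM):

```python
# Reproduce the DIMM-count arithmetic behind Lenovo's 16 TB figure.
GB_PER_DIMM = 128
dimms_per_cpu = 16
cpus = 4
direct_dimms = dimms_per_cpu * cpus                          # 64 directly attached DIMMs
cxl_dimms = 64                                               # added via Astera Labs Leo controllers
direct_tb = direct_dimms * GB_PER_DIMM / 1024                # 8.0 TB without CXL
total_tb = (direct_dimms + cxl_dimms) * GB_PER_DIMM / 1024   # 16.0 TB with the "memory forest"
print(direct_tb, total_tb)
```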

The ThinkSystem SR860 V3 integrates Astera Labs Leo controllers to enable the CXL-connected DIMMs. These controllers manage up to four DDR5 DIMMs each, resulting in a layered memory design. The chassis base houses four Xeon processors, each linked to 16 directly connected DIMMs, while the upper section—called the "memory forest"—houses the additional CXL-enabled DIMMs. Beyond memory capabilities, the server supports up to four double-width GPUs, making it also a solution for high-performance computing and AI workloads. This design caters to scale-up applications requiring vast memory resources, such as large-scale database management, and allows the resources to stay in memory instead of waiting on storage. CXL-based memory architectures are expected to become more common next year. Future developments may see even larger systems with shared memory pools, enabling dynamic allocation across multiple servers. For more pictures and video walkthrough, check out ServeTheHome's post.

Innodisk Unveils DDR5 6400 64GB CUDIMM and CSODIMM Memory Modules

Innodisk, a leading global AI solution provider, announces its DDR5 6400 DRAM series, featuring the industry's largest 64 GB single-module capacity. This 6400 series is purpose-built for data-intensive applications in AI, telehealth, and edge computing, where high performance at the edge is crucial. Available in versatile form factors, including CUDIMM, CSODIMM, and RDIMM, the series delivers unmatched speed, stability, and capacity to meet the rigorous demands of modern edge AI and industrial applications.

The DDR5 6400 series delivers a data transfer rate of 6400 MT/s, offering a 14% boost in speed over previous generations and doubling the maximum capacity to 64 GB. These enhancements make it an optimal choice for applications like Large Language Models (LLMs), generative AI, autonomous vehicles, and mixed reality, which require high-speed, reliable data processing in real time.
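For context, the quoted 14% speed boost is consistent with a step up from DDR5-5600, though the announcement does not name the previous-generation speed; that baseline is an assumption in this sketch:

```python
# The "14% boost" is consistent with stepping up from DDR5-5600 to DDR5-6400.
# The previous-generation speed is assumed, not stated in the announcement.
prev_mts = 5600
new_mts = 6400
gain = (new_mts - prev_mts) / prev_mts
print(f"{gain:.1%}")   # ~14.3%
```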

SK hynix Showcases Memory Solutions at the 2024 OCP Global Summit

SK hynix is showcasing its leading AI and data center memory products at the 2024 Open Compute Project (OCP) Global Summit held October 15-17 in San Jose, California. The annual summit brings together industry leaders to discuss advancements in open source hardware and data center technologies. This year, the event's theme is "From Ideas to Impact," which aims to foster the realization of theoretical concepts into real-world technologies.

In addition to presenting its advanced memory products at the summit, SK hynix is also strengthening key industry partnerships and sharing its AI memory expertise through insightful presentations. This year, the company is holding eight sessions—up from five in 2023—on topics including HBM and CMS.

MSI Showcases Innovation at 2024 OCP Global Summit, Highlighting DC-MHS, CXL Memory Expansion, and MGX-enabled AI Servers

MSI, a leading global provider of high-performance server solutions, is excited to showcase its comprehensive lineup of motherboards and servers based on the OCP Data Center Modular Hardware System (DC-MHS) architecture at the OCP Global Summit from October 15-17 at booth A6. These cutting-edge solutions represent a breakthrough in server designs, enabling flexible deployments for cloud and high-density data centers. Featured innovations include CXL memory expansion servers and AI-optimized servers, demonstrating MSI's leadership in pushing the boundaries of AI performance and computing power.

DC-MHS Series Motherboards and Servers: Enabling Flexible Deployment in Data Centers
"The rapidly evolving IT landscape requires cloud service providers, large-scale data center operators, and enterprises to handle expanding workloads and future growth with more flexible and powerful infrastructure. MSI's new range of DC-MHS-based solutions provides the needed flexibility and efficiency for modern data center environments," said Danny Hsu, General Manager of Enterprise Platform Solutions.

Rambus Unveils Industry-First Complete Chipsets for Next-Generation DDR5 MRDIMMs and RDIMMs

Rambus Inc., a premier chip and silicon IP provider making data faster and safer, today unveiled industry-first, complete memory interface chipsets for Gen 5 DDR5 RDIMMs and next-generation DDR5 Multiplexed Rank Dual Inline Memory Modules (MRDIMMs). These innovative new products for RDIMMs and MRDIMMs will seamlessly extend DDR5 performance with unparalleled bandwidth and memory capacity for compute-intensive data center and AI workloads.

"The voracious memory demands of AI and HPC require the relentless pursuit of higher performance through continued innovation and technology leadership," said Sean Fan, chief operating officer at Rambus. "With our 30-plus years of renowned high-speed signal integrity and memory system expertise, the Rambus Gen5 RCD, and next-generation MRCD, MDB, and PMIC will be critical enabling chips in future-generation servers leveraging DDR5 RDIMM 8000 and MRDIMM 12800."

AMD EPYC "Turin" with 192 Cores and 384 Threads Delivers Almost 40% Higher Performance Than Intel Xeon 6

AMD has unveiled its latest EPYC processors, codenamed "Turin," featuring Zen 5 and Zen 5C dense cores. Phoronix's thorough testing reveals remarkable advancements in performance, efficiency, and value. The new lineup includes the EPYC 9575F (64-core), EPYC 9755 (128-core), and EPYC 9965 (192-core) models, all showing impressive capabilities across various server and HPC workloads. In benchmarks, a dual-socket configuration of the 128-core EPYC 9755 Turin outperformed Intel's dual Xeon "Granite Rapids" 6980P setup with MRDIMM-8800 by 40% in the geometric mean of all tests. Surprisingly, even a single EPYC 9755 or EPYC 9965 matched the dual Xeon 6980P in expanded tests with regular DDR5-6400. Within AMD's lineup, the EPYC 9755 showed a 1.55x performance increase over its predecessor, the 96-core EPYC 9654 "Genoa". The EPYC 9965 surpassed the dual EPYC 9754 "Bergamo" by 45%.

These gains come with improved efficiency. While power consumption increased moderately, performance improvements resulted in better overall efficiency. For example, the EPYC 9965 used 32% more power than the EPYC 9654 but delivered 1.55x the performance. Power consumption remains competitive: the EPYC 9965 averaged 275 Watts (peak 461 Watts), the EPYC 9755 averaged 324 Watts (peak 500 Watts), while Intel's Xeon 6980P averaged 322 Watts (peak 547 Watts). AMD's pricing strategy adds to the appeal. The 192-core model is priced at $14,813, compared to Intel's 128-core CPU at $17,800. This competitive pricing, combined with superior performance per dollar and watt, has resonated with hyperscalers. Estimates suggest 50-60% of hyperscale deployments now use AMD processors.
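Dividing the reported performance gain by the reported power increase gives a rough performance-per-watt figure (a back-of-the-envelope sketch using only the ratios quoted above):

```python
# EPYC 9965 vs. EPYC 9654, per the figures above: 1.55x the performance
# at 1.32x the power. Perf-per-watt improves by their ratio.
perf_ratio = 1.55      # reported performance gain
power_ratio = 1.32     # 32% more power
perf_per_watt_gain = perf_ratio / power_ratio
print(round(perf_per_watt_gain, 2))   # ~1.17x better performance per watt
```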

Supermicro Introduces New Versatile System Design for AI Delivering Optimization and Flexibility at the Edge

Super Micro Computer, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, announces the launch of a new, versatile, high-density infrastructure platform optimized for AI inferencing at the network edge. As companies seek to embrace complex large language models (LLM) in their daily operations, there is a need for new hardware capable of inferencing high volumes of data in edge locations with minimal latency. Supermicro's innovative system combines versatility, performance, and thermal efficiency to deliver up to 10 double-width GPUs in a single system capable of running in traditional air-cooled environments.

"Owing to the system's optimized thermal design, Supermicro can deliver all this performance in a high-density 3U 20 PCIe system with 256 cores that can be deployed in edge data centers," said Charles Liang, president and CEO of Supermicro. "As the AI market is growing exponentially, customers need a powerful, versatile solution to inference data to run LLM-based applications on-premises, close to where the data is generated. Our new 3U Edge AI system enables them to run innovative solutions with minimal latency."

ASUS Introduces All-New Intel Xeon 6 Processor Servers

ASUS today announced its all-new line-up of Intel Xeon 6 processor-powered servers, ready to satisfy the escalating demand for high-performance computing (HPC) solutions. The new servers include the multi-node ASUS RS920Q-E12, which supports Intel Xeon 6900 series processors for HPC applications, as well as the ASUS RS720Q-E12, RS720-E12 and RS700-E12 server models, which ship with Intel Xeon 6700 series processors with E-cores and will also support Intel Xeon 6700/6500 series processors with P-cores in Q1 2025, providing seamless integration and optimization for modern data centers and diverse IT environments.

These powerful new servers, built on the solid foundation of trusted and resilient ASUS server design, offer improved scalability, enabling clients to build customized data centers and scale up their infrastructure to achieve their highest computing potential - ready to deliver HPC success across diverse industries and use cases.

Micron Announces 12-high HBM3E Memory, Bringing 36 GB Capacity and 1.2 TB/s Bandwidth

As AI workloads continue to evolve and expand, memory bandwidth and capacity are increasingly critical for system performance. The latest GPUs in the industry need the highest performance high bandwidth memory (HBM), significant memory capacity, as well as improved power efficiency. Micron is at the forefront of memory innovation to meet these needs and is now shipping production-capable HBM3E 12-high to key industry partners for qualification across the AI ecosystem.

Micron's industry-leading HBM3E 12-high 36 GB delivers significantly lower power consumption than our competitors' 8-high 24 GB offerings, despite having 50% more DRAM capacity in the package.
Micron HBM3E 12-high boasts an impressive 36 GB capacity, a 50% increase over current HBM3E 8-high offerings, allowing larger AI models like Llama 2 with 70 billion parameters to run on a single processor. This capacity increase allows faster time to insight by avoiding CPU offload and GPU-GPU communication delays. Micron HBM3E 12-high 36 GB delivers significantly lower power consumption than competitors' HBM3E 8-high 24 GB solutions. Micron HBM3E 12-high 36 GB offers more than 1.2 terabytes per second (TB/s) of memory bandwidth at a pin speed greater than 9.2 gigabits per second (Gb/s). These combined advantages of Micron HBM3E offer maximum throughput with the lowest power consumption, ensuring optimal outcomes for power-hungry data centers. Additionally, Micron HBM3E 12-high incorporates fully programmable MBIST that can run system-representative traffic at full spec speed, providing improved test coverage for expedited validation, enabling faster time to market and enhancing system reliability.
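The bandwidth figure can be checked against the pin speed. HBM uses a 1024-bit-wide interface per stack (an assumption here; the announcement quotes only pin speed and total bandwidth):

```python
# Per-stack HBM3E bandwidth: interface width x pin speed, converted to GB/s.
# The 1024-bit width is the standard HBM interface width (assumed, not quoted).
interface_bits = 1024
pin_speed_gbps = 9.2            # "greater than 9.2 Gb/s" per the announcement
bandwidth_gbs = interface_bits * pin_speed_gbps / 8
print(round(bandwidth_gbs, 1))  # 1177.6 GB/s at exactly 9.2 Gb/s
```

At exactly 9.2 Gb/s this lands just under 1.2 TB/s; pin speeds above 9.2 Gb/s push the stack past the quoted 1.2 TB/s.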

Supermicro Previews New Max Performance Intel-based X14 Servers

Supermicro, Inc., a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, is previewing new, completely re-designed X14 server platforms which will leverage next-generation technologies to maximize performance for compute-intensive workloads and applications. Building on the success of Supermicro's efficiency-optimized X14 servers that launched in June 2024, the new systems feature significant upgrades across the board, supporting a never-before-seen 256 performance cores (P-cores) in a single node, memory support for MRDIMMs at 8,800 MT/s, and compatibility with next-generation SXM, OAM, and PCIe GPUs. This combination can drastically accelerate AI and compute as well as significantly reduce the time and cost of large-scale AI training, high-performance computing, and complex data analytics tasks. Approved customers can secure early access to complete, full-production systems via Supermicro's Early Ship Program or for remote testing with Supermicro JumpStart.

"We continue to add to our already comprehensive Data Center Building Block solutions with these new platforms, which will offer unprecedented performance, and new advanced features," said Charles Liang, president and CEO of Supermicro. "Supermicro is ready to deliver these high-performance solutions at rack-scale with the industry's most comprehensive direct-to-chip liquid cooled, total rack integration services, and a global manufacturing capacity of up to 5,000 racks per month including 1,350 liquid cooled racks. With our worldwide manufacturing capabilities, we can deliver fully optimized solutions which accelerate our time-to-delivery like never before, while also reducing TCO."

SMART Modular Technologies Introduces DDR5 RDIMMs for Liquid Immersion Servers

SMART Modular Technologies, Inc. ("SMART"), a division of SGH and a global leader in memory solutions, solid-state drives, and advanced memory, has launched a new line of DDR5 Registered DIMMs (RDIMMs) with conformal coating which are specifically designed for use in liquid immersion servers. This innovative product line combines the superior performance of DDR5 technology with enhanced protection, ensuring reliability and longevity in the most demanding data center environments.

Arthur Sainio, DRAM product director for SMART explains the significance of this introduction, "Our new DDR5 RDIMMs with conformal coating represent the perfect fusion of cutting-edge performance and rugged reliability for the next generation of immersion-cooled data centers. By combining the speed and efficiency of DDR5 technology with advanced protective coatings, we're enabling our customers to push the boundaries of computing power while ensuring long-term durability in demanding liquid immersion environments. This product embodies our commitment to innovation and our drive to meet the evolving needs of high-performance computing applications."
Mar 28th, 2025 09:43 EDT