
Supermicro Delivers Direct-Liquid-Optimized NVIDIA Blackwell Solutions

btarunr

Editor & Senior Moderator
Supermicro, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is announcing its highest-performing SuperCluster, an end-to-end AI data center solution featuring the NVIDIA Blackwell platform for the era of trillion-parameter-scale generative AI. The new SuperCluster significantly increases the number of NVIDIA HGX B200 8-GPU systems in a liquid-cooled rack, resulting in a large increase in GPU compute density compared to Supermicro's current industry-leading liquid-cooled NVIDIA HGX H100- and H200-based SuperClusters. In addition, Supermicro is enhancing its portfolio of NVIDIA Hopper systems to address the rapid adoption of accelerated computing for HPC applications and mainstream enterprise AI.

"Supermicro has the expertise, delivery speed, and capacity to deploy the largest liquid-cooled AI data center projects in the world, containing 100,000 GPUs, which Supermicro and NVIDIA contributed to and recently deployed," said Charles Liang, president and CEO of Supermicro. "These Supermicro SuperClusters reduce power needs due to DLC efficiencies. We now have solutions that use the NVIDIA Blackwell platform. Using our Building Block approach allows us to quickly design servers with NVIDIA HGX B200 8-GPU, which can be either liquid-cooled or air-cooled. Our SuperClusters provide unprecedented density, performance, and efficiency, and pave the way toward even more dense AI computing solutions in the future. The Supermicro clusters use direct liquid cooling, resulting in higher performance, lower power consumption for the entire data center, and reduced operational expenses."



Proven AI Performance at Scale: Supermicro NVIDIA HGX B200 Systems
The upgraded SuperCluster scalable unit is based on a rack-scale design with innovative vertical coolant distribution manifolds (CDMs), which allow for an increased number of compute nodes in a single rack. Newly developed, more efficient cold plates and an advanced hose design further improve the efficiency of the liquid-cooling system. A new in-row coolant distribution unit (CDU) option is also available for large deployments. Traditional air-cooled data centers can also take advantage of the new NVIDIA HGX B200 8-GPU systems via a new air-cooled system chassis.

The new Supermicro NVIDIA HGX B200 8-GPU systems come with a range of upgrades compared to the previous generation. The new system includes improved thermals and power delivery, and supports dual 500 W Intel Xeon 6 processors (with DDR5 MRDIMMs at 8800 MT/s) or AMD EPYC 9005 Series processors. A new air-cooled 10U form-factor Supermicro NVIDIA HGX B200 system features a redesigned chassis with expanded thermal headroom to accommodate eight 1000 W TDP Blackwell GPUs. These systems are designed with a 1:1 GPU-to-NIC ratio, supporting NVIDIA BlueField-3 SuperNICs or NVIDIA ConnectX-7 NICs for scaling across a high-performance compute fabric. In addition, two NVIDIA BlueField-3 data processing units (DPUs) per system streamline data handling to and from attached high-performance AI storage.
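For a rough sense of why that expanded thermal headroom matters, the GPU and CPU figures quoted above already add up to about 9 kW of TDP per system before memory, NICs, DPUs, storage, and fans are counted. A minimal back-of-envelope sketch, using only the numbers stated in this announcement:

Code:
# Back-of-envelope power estimate for an air-cooled HGX B200 8-GPU system.
# Uses only figures quoted in the announcement: 8x 1000 W GPUs and dual 500 W CPUs;
# memory, NICs, DPUs, storage, and fans are deliberately left out.
gpu_tdp_w = 1000      # stated per-GPU TDP for the Blackwell GPUs in this chassis
num_gpus = 8
cpu_tdp_w = 500       # stated per-socket TDP for the dual CPUs
num_cpus = 2

gpu_power_w = gpu_tdp_w * num_gpus   # 8,000 W
cpu_power_w = cpu_tdp_w * num_cpus   # 1,000 W
print(f"GPU + CPU TDP alone: {gpu_power_w + cpu_power_w} W")   # 9000 W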

Supermicro Solutions Featuring NVIDIA GB200 Grace Blackwell Superchips
Supermicro also offers solutions for all NVIDIA GB200 Grace Blackwell Superchips, including the newly announced NVIDIA GB200 NVL4 Superchip and the NVIDIA GB200 NVL72 single-rack exascale computer.

Supermicro's lineup of NVIDIA MGX designs will support the NVIDIA GB200 Grace Blackwell NVL4 Superchip. This superchip unlocks the future of converged HPC and AI, delivering revolutionary performance through four NVLink-connected NVIDIA Blackwell GPUs unified with two NVIDIA Grace CPUs over NVLink-C2C. Compatible with Supermicro's liquid-cooled NVIDIA MGX modular systems, the Superchip provides up to 2x performance for scientific computing, graph neural network (GNN) training, and inference applications over the prior generation.

The NVIDIA GB200 NVL72 SuperCluster with Supermicro's end-to-end liquid-cooling solution delivers an exascale supercomputer in a single rack, with SuperCloud Composer (SCC) software providing comprehensive monitoring and management capability for liquid-cooled data centers. Its 72 NVIDIA Blackwell GPUs and 36 NVIDIA Grace CPUs are all connected via fifth-generation NVIDIA NVLink and NVLink Switch, effectively operating as one powerful GPU with a massive pool of HBM3e memory and delivering 130 TB/s of total GPU communication bandwidth with low latency.
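As a quick sanity check, the quoted 130 TB/s lines up with simple per-GPU arithmetic if one assumes NVIDIA's published fifth-generation NVLink figure of 1.8 TB/s per GPU (that per-GPU number comes from NVIDIA's Blackwell materials, not from this announcement):

Code:
# Sanity check of the quoted ~130 TB/s total GPU communication bandwidth for GB200 NVL72.
# Assumes NVIDIA's published NVLink 5 figure of 1.8 TB/s per GPU (not stated in this announcement).
num_gpus = 72
nvlink5_per_gpu_tbps = 1.8

total_tbps = num_gpus * nvlink5_per_gpu_tbps
print(f"Aggregate NVLink bandwidth: ~{total_tbps:.1f} TB/s")   # ~129.6 TB/s, i.e. roughly 130 TB/s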

Accelerated Computing Systems with NVIDIA H200 NVL
Supermicro's 5U PCIe accelerated computing systems are now available with NVIDIA H200 NVL, ideal for lower-power, air-cooled enterprise rack designs that require flexible configurations, delivering acceleration for many AI and HPC workloads regardless of size. With up to four GPUs connected by NVIDIA NVLink, a 1.5x increase in memory capacity, and a 1.2x increase in bandwidth thanks to HBM3e, NVIDIA H200 NVL can fine-tune LLMs in a few hours and delivers up to 1.7x faster LLM inference performance over the previous generation. NVIDIA H200 NVL also includes a five-year subscription to NVIDIA AI Enterprise, a cloud-native software platform for developing and deploying production AI.

Supermicro's X14 and H14 5U PCIe accelerated computing systems support up to two 4-way NVIDIA H200 NVL configurations connected via NVLink technology, for a total of 8 GPUs in a system, providing up to 900 GB/s of GPU-to-GPU interconnect bandwidth and a combined pool of 564 GB of HBM3e memory per 4-GPU NVLink domain. The new PCIe accelerated computing systems can support up to 10 PCIe GPUs and now also feature the latest Intel Xeon 6 or AMD EPYC 9005 Series processors to deliver flexible and versatile options for HPC and AI applications.
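The 564 GB per-domain figure likewise follows from simple per-card arithmetic if one assumes NVIDIA's published 141 GB of HBM3e per H200 NVL card (a spec taken from NVIDIA's H200 NVL materials, not from this announcement):

Code:
# HBM3e pool sizes for the 5U PCIe system described above.
# Assumes NVIDIA's published 141 GB of HBM3e per H200 NVL card (not stated in this announcement).
hbm3e_per_gpu_gb = 141
gpus_per_nvlink_domain = 4
nvlink_domains_per_system = 2

per_domain_gb = hbm3e_per_gpu_gb * gpus_per_nvlink_domain    # 564 GB, matching the quoted figure
per_system_gb = per_domain_gb * nvlink_domains_per_system    # 1,128 GB across all 8 GPUs
print(f"{per_domain_gb} GB per 4-GPU NVLink domain, {per_system_gb} GB per system")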

Supermicro at Supercomputing Conference 2024
Supermicro will showcase a complete portfolio of AI and HPC infrastructure solutions at the Supercomputing Conference, including our liquid-cooled GPU servers for AI SuperClusters.

Check out the speaking sessions at our in-booth theater where customers, experts from Supermicro, and our technology partners will be presenting on the latest breakthroughs in computing technology.

Visit Supermicro at booth #2531, Hall B at SC24.

View at TechPowerUp Main Site
 