Monday, October 7th 2024
Supermicro Currently Shipping Over 100,000 GPUs Per Quarter in Its Complete Rack-Scale Liquid-Cooled Servers
Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, is announcing a complete liquid cooling solution that includes powerful Coolant Distribution Units (CDUs), cold plates, Coolant Distribution Manifolds (CDMs), cooling towers, and end-to-end management software. The complete solution reduces both ongoing power costs and Day 0 costs for hardware acquisition and data center cooling infrastructure. The entire end-to-end, data center-scale liquid cooling solution is available directly from Supermicro.
"Supermicro continues to innovate, delivering full data center plug-and-play rack scale liquid cooling solutions," said Charles Liang, CEO and president of Supermicro. "Our complete liquid cooling solutions, including SuperCloud Composer for the entire life-cycle management of all components, are now cooling massive, state-of-the-art AI factories, reducing costs and improving performance. The combination of Supermicro deployment experience and delivering innovative technology is resulting in data center operators coming to Supermicro to meet their technical and financial goals for both the construction of greenfield sites and the modernization of existing data centers. Since Supermicro supplies all the components, the time to deployment and online are measured in weeks, not months."

Many organizations require the highest-performing GPUs and CPUs to remain competitive and need these servers to run constantly. Supermicro's ultra-dense server, with dual top-bin CPUs and 8 NVIDIA HGX GPUs in just 4U with liquid cooling, is the ultimate AI server for AI factories. When installed in a rack, this server quadruples the computing density, allowing organizations to run larger training models with a smaller data center footprint.
Supermicro recently deployed more than 100,000 GPUs with direct liquid cooling (DLC) for some of the largest AI factories ever built, as well as for other CSPs. With each server approaching 12 kW of power for AI and HPC workloads, liquid cooling is the more efficient choice for maintaining the desired operating temperature of each GPU and CPU. A single AI rack now generates over 100 kW of heat, which must be efficiently removed from the data center. Data center-scale liquid cooling significantly reduces the power demand of a given cluster size: a power reduction of up to 40% allows more AI servers to be deployed within a fixed power envelope, increasing computing power and decreasing LLM training time, both critical for these large CSPs and AI factories.
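A quick back-of-envelope sketch of the figures above. The per-server power and 40% savings are the press release's numbers; the server count per rack is an assumption chosen to match the quoted "over 100 kW" rack figure.

```python
# Back-of-envelope check of the rack figures quoted above.
SERVER_POWER_KW = 12    # "each server approaching 12 kW" (stated)
SERVERS_PER_RACK = 9    # assumption; yields a rack load over 100 kW

rack_heat_kw = SERVER_POWER_KW * SERVERS_PER_RACK
print(rack_heat_kw)  # 108 kW of heat per rack to remove

# If cluster power demand drops by up to 40%, a fixed facility power
# envelope supports proportionally more servers: 1 / (1 - 0.40) ≈ 1.67x.
SAVINGS = 0.40
capacity_multiplier = 1 / (1 - SAVINGS)
print(round(capacity_multiplier, 2))  # 1.67
```

The multiplier is the upper bound implied by the "up to 40%" claim; real deployments would land somewhere below it depending on workload and facility design.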
The entire liquid cooling solution, available directly from Supermicro, includes:
- Optimally designed cold plates that channel liquid through microchannels with maximum surface area, dissipating up to 1,600 W for next-generation GPUs.
- Purpose-designed CDMs (horizontal and vertical) that enable the highest GPU density per rack, with up to 96 NVIDIA B200 GPUs per rack.
- State-of-the-art in-rack CDU solutions with an increased cooling capacity of 250 kW and hot-swappable pumps and power supplies to avoid any downtime.
- Modular cooling towers tailored to cool DLC racks with the latest energy-efficient EC fan technology, ready to ship and deploy in days, enabling a faster Time-to-Online (TTO).
- SuperCloud Composer life cycle management software, which monitors the systems, software, CDUs, racks, and cooling towers, optimizing operational cost and managing the integrity of liquid-cooled data centers.
- Up to 40% energy savings for infrastructure and 80% space savings, eliminating the need for traditional CRAC/CRAH units.
- Support for warm-water cooling up to 113°F (45°C), allowing the heat generated by AI systems to be reused for applications such as district heating and greenhouses.
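The specs listed above can be roughly cross-checked against each other. All figures below come from the list; how they pair up per rack is an assumption for illustration.

```python
# Rough cooling-infrastructure sizing from the specs listed above
# (figures are the press release's; the per-rack pairing is assumed).
CDU_CAPACITY_KW = 250   # stated in-rack CDU cooling capacity
RACK_HEAT_KW = 100      # "over 100 kW" of heat per AI rack
GPU_PLATE_W = 1600      # cold-plate dissipation per next-generation GPU
GPUS_PER_RACK = 96      # "up to 96 NVIDIA B200 GPUs per rack"

# One 250 kW CDU can absorb the heat of roughly two ~100 kW racks.
racks_per_cdu = CDU_CAPACITY_KW // RACK_HEAT_KW
print(racks_per_cdu)  # 2

# At the plates' maximum rating, GPUs alone could reject
# 96 x 1.6 kW = 153.6 kW per rack, still within one CDU's 250 kW
# capacity with headroom for CPUs and other components.
gpu_heat_kw = GPUS_PER_RACK * GPU_PLATE_W / 1000
print(gpu_heat_kw)  # 153.6
```

This is only a consistency sketch; actual CDU-to-rack ratios depend on coolant temperatures, flow rates, and facility water conditions, which the release does not specify.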
Source: Supermicro