Tuesday, July 30th 2024
NVIDIA Blackwell's High Power Consumption Drives Cooling Demands; Liquid Cooling Penetration Expected to Reach 10% by Late 2024
With the growing demand for high-speed computing, more effective cooling solutions for AI servers are gaining significant attention. TrendForce's latest report on AI servers reveals that NVIDIA is set to launch its next-generation Blackwell platform by the end of 2024. Major CSPs are expected to start building AI server data centers based on this new platform, potentially driving the penetration rate of liquid cooling solutions to 10%.
Air and liquid cooling systems to meet higher cooling demands
TrendForce reports that the NVIDIA Blackwell platform will officially launch in 2025, replacing the current Hopper platform and becoming the dominant solution for NVIDIA's high-end GPUs, accounting for nearly 83% of all high-end products. High-performance AI server models such as the B200 and GB200 are designed for maximum performance, with individual GPUs consuming over 1,000 W. HGX models will house 8 GPUs each, while NVL models will support 36 or 72 GPUs per rack, significantly boosting the growth of the liquid cooling supply chain for AI servers.
TrendForce highlights the increasing TDP of server chips, with the B200 chip's TDP reaching 1,000 W, making traditional air cooling solutions inadequate. The TDP of the GB200 NVL36 and NVL72 complete rack systems is projected to reach 70 kW and nearly 140 kW, respectively, necessitating advanced liquid cooling solutions for effective heat management.
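The 70 kW and 140 kW rack figures can be roughly sanity-checked from the per-chip numbers. The sketch below assumes a GB200 superchip (one Grace CPU plus two Blackwell GPUs) draws about 2,700 W, with 18 superchips in an NVL36 rack and 36 in an NVL72, and a 40% overhead for NVLink switch trays, NICs, fans, and power-conversion losses; the per-chip wattage and overhead fraction are illustrative assumptions, not official specs.

```python
def rack_power_kw(superchips, chip_w=2700, overhead=0.40):
    """Rough rack-power estimate in kW.

    chip_w: assumed ~2,700 W per GB200 superchip (1 CPU + 2 GPUs).
    overhead: assumed fraction for switches, NICs, fans, and losses.
    """
    return superchips * chip_w * (1 + overhead) / 1000

nvl36 = rack_power_kw(18)  # 18 superchips = 36 GPUs per rack
nvl72 = rack_power_kw(36)  # 36 superchips = 72 GPUs per rack
print(f"NVL36 ≈ {nvl36:.0f} kW, NVL72 ≈ {nvl72:.0f} kW")
```

Under these assumptions the estimate lands near 68 kW and 136 kW, in line with the ~70 kW and ~140 kW rack TDPs the report quotes.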
TrendForce observes that the GB200 NVL36 architecture will initially utilize a combination of air and liquid cooling solutions, while the NVL72, due to higher cooling demands, will primarily employ liquid cooling.
TrendForce identifies five major components in the current liquid cooling supply chain for GB200 rack systems: cold plates, coolant distribution units (CDUs), manifolds, quick disconnects (QDs), and rear door heat exchangers (RDHx).
The CDU is the critical system responsible for regulating coolant flow to maintain rack temperatures within the designated TDP range, preventing component damage. Vertiv is currently the main CDU supplier for NVIDIA AI solutions, with Chicony, Auras, Delta, and CoolIT undergoing continuous testing.
GB200 shipments expected to reach 60,000 units in 2025, making Blackwell the mainstream platform and accounting for over 80% of NVIDIA's high-end GPUs
In 2025, NVIDIA will target CSPs and enterprise customers with diverse AI server configurations, including the HGX, GB200 Rack, and MGX, with expected shipment ratios of 5:4:1. The HGX platform will seamlessly transition from the existing Hopper platform, enabling CSPs and large enterprise customers to adopt it quickly. The GB200 rack AI server solution will primarily target the hyperscale CSP market. TrendForce predicts NVIDIA will introduce the NVL36 configuration at the end of 2024 to quickly enter the market, with the more complex NVL72 expected to launch in 2025.
TrendForce forecasts that in 2025, GB200 NVL36 shipments will reach 60,000 racks, with Blackwell GPU usage between 2.1 and 2.2 million units.
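The GPU forecast follows directly from the rack forecast: 60,000 NVL36 racks at 36 GPUs each works out to 2.16 million GPUs, consistent with the 2.1–2.2 million range quoted above.

```python
# Cross-check the forecast: 60,000 NVL36 racks, 36 GPUs per rack.
racks = 60_000
gpus_per_rack = 36
total_gpus = racks * gpus_per_rack
print(f"{total_gpus:,} GPUs")  # 2,160,000 — inside the 2.1-2.2 million range
```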
However, there are several variables in the adoption of the GB200 Rack by end customers. TrendForce points out that the NVL72's power consumption of around 140 kW per rack requires sophisticated liquid cooling solutions, making deployment challenging. Additionally, liquid-cooled rack designs are more suitable for new CSP data centers but involve complex planning processes. CSPs might also avoid being tied to a single supplier's specifications and opt for HGX or MGX models with x86 CPU architectures, or expand their self-developed ASIC AI server infrastructure for lower costs or specific AI applications.
Source:
TrendForce
41 Comments on NVIDIA Blackwell's High Power Consumption Drives Cooling Demands; Liquid Cooling Penetration Expected to Reach 10% by Late 2024
20 years ago, if you wanted to host multiple applications you likely needed 10 servers with 10x the power usage. Today you can get away with just one, since the tech has become so efficient.
So 2,700 W looks like a lot, but it's likely more efficient than hosting a machine with 4 separate GPUs.
I live a very energy- and waste-conscious life overall; it runs through the core of how I operate. Yet power draw ≠ efficiency: performance per watt is what matters, and it's all relative. I do enjoy my efficient parts, and undervolt/overclock everything I own to get the most out of it per watt.
As it is, higher bins and higher-tier chips will reliably perform more efficiently and carry a higher performance ceiling; the 4090 isn't even the cream of the crop. Try the RTX 6000 Ada. Those are some top-shelf chips, and they come at a PREMIUM.
Generally though, point is that in the consumer market all players will push their power draw far beyond the zenith of the perf/watt curve to get a few extra (very marketable) frames. Undervolting is the one truth, fite me.
P.S. The 3,000 W GPU might be good for the winter too. Think about it: it will warm your room. Here winters are freezing!
en.wikipedia.org/wiki/David_Blackwell
Ada is named after Ada, Countess of Lovelace, the mathematician credited with creating the first computer program; Ampere after André-Marie Ampère, the physicist credited with founding electrodynamics and proposing an early electrical telegraph; Turing after Alan Turing, the computer scientist famous for breaking the Enigma code and a father of modern computing; Volta after the inventor Alessandro Volta, creator of the battery; Pascal after Blaise Pascal, famous for his work in probability theory and many other areas; Maxwell after James Clerk Maxwell (theory of electromagnetic radiation); Fermi after Enrico Fermi (builder of the first nuclear reactor and member of the Manhattan Project); Tesla after Nikola Tesla, creator of the AC power system; etc.
Datacenters buy perf/W and get products geared towards that: the best efficiency point that can be found.
The development goals ebb and flow here, but usually it is a compromise between max performance and efficiency anyway.
In other words, to the people who complain about high-end GPUs ruining the planet: if you don't have a 4090, you are the ones ruining the planet.
What We Can Learn from Pixar's 24,000-Core Supercomputer | No Film School
Wētā 'outgrew the power grid' during rendering of Avatar 2, producer says | Stuff
(Avatar 2 used Renderman which is still CPU based for a production output)