Friday, February 25th 2022
GIGABYTE Introduces Direct Liquid Cooled Servers Supercharged by NVIDIA
GIGABYTE Technology today introduced two new liquid-cooled HPC and AI training servers, the G262-ZL0 and G492-ZL2, which push NVIDIA HGX A100 accelerators and AMD EPYC 7003 processors to the limit with enterprise-grade liquid cooling. To prevent overheating and server downtime in compute-dense data centers, GIGABYTE worked with CoolIT Systems to develop a direct-liquid-cooling thermal solution that balances optimal performance, high availability, and efficient cooling.
For innovators and researchers in HPC, AI, and data analytics who demand high levels of CPU and GPU compute, the new servers are built around top-tier AMD EPYC 7003 processors and the NVIDIA HGX A100 80 GB GPU baseboard. Combining components designed for performance and efficiency delivers faster insights and results, along with strong value and lower TCO.
The choice of the NVIDIA HGX A100 platform in the new GIGABYTE servers is significant, in that NVIDIA Magnum IO GPUDirect technologies deliver higher throughput while offloading work from the CPU for notable performance gains. The HGX platform supports NVIDIA GPUDirect RDMA for direct data exchange between GPUs and third-party devices such as NICs or storage adapters, as well as GPUDirect Storage, which provides a direct data path from storage to GPU memory while offloading the CPU, resulting in higher bandwidth and lower latency. For high-speed interconnects, the four-GPU NVIDIA A100 server incorporates NVIDIA NVLink, while the eight-GPU server uses NVSwitch and NVLink to enable 600 GB/s of GPU peer-to-peer communication.
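The benefit GPUDirect Storage claims over the traditional path can be pictured as eliminating the CPU bounce buffer. The sketch below is a conceptual illustration only (not the real cuFile API — all function names here are hypothetical), comparing the copy counts of the two data paths described above:

```python
# Conceptual illustration of the two storage-to-GPU data paths.
# Hypothetical names; real GPUDirect Storage uses NVIDIA's cuFile API.

def bounce_buffer_path(block: bytes) -> tuple[bytes, int]:
    """Traditional path: storage -> CPU system memory -> GPU memory.
    Two copies, with the CPU staging the data in between."""
    cpu_buffer = bytes(block)        # copy 1: storage into CPU RAM
    gpu_memory = bytes(cpu_buffer)   # copy 2: CPU RAM into GPU memory
    return gpu_memory, 2

def gpudirect_storage_path(block: bytes) -> tuple[bytes, int]:
    """GPUDirect Storage style path: storage DMAs straight into GPU
    memory. One copy, CPU offloaded."""
    gpu_memory = bytes(block)        # single DMA transfer into GPU memory
    return gpu_memory, 1

data = b"sample block"
_, copies_classic = bounce_buffer_path(data)
_, copies_direct = gpudirect_storage_path(data)
print(copies_classic, copies_direct)  # 2 1
```

Skipping the intermediate CPU copy is what yields the higher bandwidth and lower latency the press release cites.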
The G262-ZL0 is a 2U GPU-centric server supporting the NVIDIA HGX A100 4-GPU baseboard; its bigger sibling, the G492-ZL2, is a 4U GPU-centric server with the NVIDIA HGX A100 8-GPU baseboard. These models extend the existing G262 and G492 lines, which use conventional heatsinks and high-airflow fans, with direct liquid cooling. Notably, the new servers isolate the GPU baseboard from the other components, so the accelerators are cooled by liquid coolant to maintain peak performance, while a separate chamber houses the CPUs, RAM, storage, and expansion slots. The dual CPU sockets are also liquid cooled. Beyond processing power, the servers offer multiple 2.5" U.2 bays with PCIe 4.0 x4 lanes and multiple PCIe slots for faster networking via a SmartNIC such as the NVIDIA ConnectX-7, with four ports of connectivity and up to 400 Gb/s of throughput.
Availability
Interested buyers can contact GIGABYTE directly to purchase either server; for questions about integrating the cooling loop into a data center and which additional cooling components are needed, customers can contact CoolIT Systems.
Source:
GIGABYTE
6 Comments on GIGABYTE Introduces Direct Liquid Cooled Servers Supercharged by NVIDIA
The paperwork is waivers, proof of liability insurance, and includes additional colo charges too. Either the liability insurance or the colo charges for liquid exceed the cost of these Gigabyte servers, but that's irrelevant because the amount of time wasted with the paperwork itself is insane.
I am guessing that datacenters exist with racks that have dedicated water feeds providing cold and hot sides, since these servers don't include radiators, and installing radiators into racks would ruin the density that necessitates such a form factor in the first place. I've just never seen such a datacenter, and I've been to/worked in many.