Friday, February 19th 2021
GIGABYTE Releases 2U Server: G262-ZR0 with NVIDIA HGX A100 4-GPU
GIGABYTE Technology (TWSE: 2376), an industry leader in high-performance servers and workstations, today announced the G262-ZR0 for HPC, AI, and data analytics. Designed to support the highest level of performance in GPU computing, the G262-ZR0 combines fast PCIe 4.0 throughput with NVIDIA HGX technologies and NVIDIA NVLink to deliver industry-leading bandwidth.
Key Technologies of the G262-ZR0:
- NVIDIA A100 with 40 GB or 80 GB: 40 GB of VRAM with 1.6 TB/s of memory bandwidth, or 80 GB of VRAM with 2.0 TB/s, for high-level computational throughput.
- Excellent GPU-to-GPU communication via 3rd gen NVIDIA NVLink with 600 GB/s bandwidth.
- Reduction in latency and CPU utilization with Mellanox Socket Direct technology. In this dual-socket server, a single CPU can access the network by bypassing the inter-processor communication bus and adjacent CPU.
- PCIe 4.0 allows for faster interconnect (compared to PCIe 3.0) and low latency for NICs and NVMe drives via PCIe switch fabric.
- Ultra-fast 200 Gb/s access to GPUs on other servers with RDMA over HDR InfiniBand.
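For a rough sense of scale, the interconnect speeds quoted above can be put in a common unit (GB/s). This is a back-of-the-envelope sketch using published PCIe 4.0 signaling figures (16 GT/s per lane, 128b/130b encoding); the conversion itself is not part of the press release:

```python
# Rough comparison of the interconnect bandwidths named above, in GB/s.

# PCIe 4.0 x16: 16 GT/s per lane, 128b/130b line encoding, 16 lanes.
pcie4_x16_gb_s = 16e9 * (128 / 130) * 16 / 8 / 1e9   # ~31.5 GB/s per x16 link

# HDR InfiniBand: 200 Gb/s line rate converted to bytes.
hdr_gb_s = 200 / 8                                    # 25 GB/s

# 3rd gen NVIDIA NVLink total per A100, as quoted in the bullet above.
nvlink_gb_s = 600

print(f"PCIe 4.0 x16  : {pcie4_x16_gb_s:.1f} GB/s")
print(f"HDR IB (200G) : {hdr_gb_s:.1f} GB/s")
print(f"NVLink gen 3  : {nvlink_gb_s} GB/s")
```

The gap is the point of the HGX design: GPU-to-GPU traffic rides NVLink at roughly 20x the bandwidth of a PCIe 4.0 x16 link, while PCIe and InfiniBand handle host and inter-node traffic.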
The G262-ZR0 offers the highest GPU compute possible in a 2U chassis, and it is built with the latest technology to provide the fastest connections. Dual 2nd Gen AMD EPYC processors supply up to 128 cores and 160 PCIe Gen 4 lanes for maximum throughput in CPU-to-CPU and CPU-to-GPU connections. Inside the G262 is the NVIDIA HGX A100 4-GPU platform for impressive performance in HPC and AI. In addition, the G262 has 16 DIMM slots for up to 4 TB of DDR4-3200 memory across 8 channels. There are 6 low-profile PCIe Gen 4 slots, one OCP 3.0 slot, and dual 1GbE LAN ports. For storage, there are 4x 2.5" U.2 NVMe/SATA bays and 2x M.2 slots. Powering the system are 2x 3000 W 80 PLUS Platinum redundant power supplies. To accommodate such a powerful system, a strong emphasis was placed on thermal design: the chassis is split into two chambers, one dedicated to cooling the GPUs and another for the CPUs, memory, and expansion slots.
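As a quick sanity check on the memory spec above, the theoretical peak bandwidth of an 8-channel DDR4-3200 configuration and the 4 TB capacity figure work out as follows (the 64-bit per-channel bus width and 256 GB-per-DIMM assumption are standard DDR4 figures, not stated in the press release):

```python
# Back-of-the-envelope figures for the G262-ZR0's memory configuration.
# Assumptions (standard DDR4, not from the press release): 64-bit (8-byte)
# bus per channel; 256 GB DIMMs for the capacity figure.

CHANNELS = 8                  # 8-channel memory per EPYC socket
TRANSFER_RATE_MT_S = 3200     # DDR4-3200: mega-transfers per second
BYTES_PER_TRANSFER = 8        # 64-bit channel width

peak_gb_s = CHANNELS * TRANSFER_RATE_MT_S * BYTES_PER_TRANSFER / 1000
print(f"Peak theoretical bandwidth: {peak_gb_s} GB/s per socket")  # 204.8 GB/s

# Max capacity: 16 DIMM slots at 256 GB per DIMM
max_capacity_tb = 16 * 256 / 1024
print(f"Max memory capacity: {max_capacity_tb} TB")  # 4.0 TB
```

This matches the 4 TB maximum quoted in the article; the bandwidth figure is per socket, so the dual-socket system doubles it in aggregate.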
Remote and Multiple Server Management:
As part of GIGABYTE's value proposition, GIGABYTE provides the GIGABYTE Management Console (GMC) for browser-based BMC server management. Additionally, GIGABYTE Server Management (GSM) software is a free download used to monitor and manage multiple servers. GMC and GSM offer great value while reducing licensing and maintenance costs.
15 Comments on GIGABYTE Releases 2U Server: G262-ZR0 with NVIDIA HGX A100 4-GPU
But seriously, this looks insane. A lot of awesome hardware in just 2U. I can't even imagine how many digits one of these would run you kitted out. 6? Maybe even 7.
Maybe I'm just too used to HP and Dell's pricing, where something like this easily would be $300-400K.
Because different users have different requirements, the network cards for production traffic are not included; those 16x PCIe 4.0 slots will hold multiple 100G Ethernet or InfiniBand NICs.
I'm wondering though - if the prices of these servers are ever published, do they even mean anything? It's not like they're sold at retail.
I can only imagine servers get the same type of discount as networking kit, so silly discounts may apply (I'm looking at you, Cisco).
One thing you can count on though: there will be at least one person/company that will try it for mining to recoup part of the investment - safest bet you'll ever make. Even if Quadros and Teslas were much inferior in the previous mining craze, at $50k per BTC right now they wouldn't have to run this for very long.