Friday, February 19th 2021

GIGABYTE Releases 2U Server: G262-ZR0 with NVIDIA HGX A100 4-GPU

GIGABYTE Technology (TWSE: 2376), an industry leader in high-performance servers and workstations, today announced the G262-ZR0 for HPC, AI, and data analytics. Designed to support the highest level of performance in GPU computing, the G262-ZR0 combines fast PCIe 4.0 throughput with NVIDIA HGX technologies and NVIDIA NVLink to deliver industry-leading bandwidth.
Key Technologies of the G262-ZR0:
  • NVIDIA A100 with 40 GB or 80 GB: 40 GB of VRAM with 1.6 TB/s of memory bandwidth, or 80 GB of VRAM with 2.0 TB/s, for high computational throughput.
  • Excellent GPU-to-GPU communication via 3rd-gen NVIDIA NVLink with 600 GB/s of bandwidth (a quick software check for this is sketched after this list).
  • Reduction in latency and CPU utilization with Mellanox Socket Direct technology. In this dual-socket server, a single CPU can access the network by bypassing the inter-processor communication bus and adjacent CPU.
  • PCIe 4.0 doubles interconnect bandwidth over PCIe 3.0 and lowers latency for NICs and NVMe drives attached via the PCIe switch fabric.
  • Ultra-fast 200 Gbps access to GPUs on other servers with RDMA and HDR InfiniBand.
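For readers who want to verify the GPU complement from software, here is a minimal sketch that reports each A100's memory size and counts its active NVLink links. It assumes NVIDIA's NVML Python bindings (the nvidia-ml-py "pynvml" package) and a working driver on the host:

```python
# Minimal sketch: query A100 memory size and NVLink state via NVML.
# Assumes the nvidia-ml-py ("pynvml") package and an NVIDIA driver.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"GPU {i}: {pynvml.nvmlDeviceGetName(handle)}, "
              f"{mem.total / 1e9:.1f} GB")
        active = 0
        for link in range(pynvml.NVML_NVLINK_MAX_LINKS):
            try:
                if pynvml.nvmlDeviceGetNvLinkState(handle, link):
                    active += 1
            except pynvml.NVMLError:
                break  # link index not present on this GPU
        print(f"  active NVLink links: {active}")  # A100 exposes up to 12
finally:
    pynvml.nvmlShutdown()
```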
Introduction to the G262-ZR0:
The G262-ZR0 offers the highest GPU compute possible in a 2U chassis, and it is built with the latest technology to provide the fastest connections. Up to 128 cores from dual 2nd Gen AMD EPYC processors and 160 PCIe 4.0 lanes deliver maximum throughput for CPU-to-CPU and CPU-to-GPU connections. Inside the G262 is the NVIDIA HGX A100 4-GPU platform for impressive performance in HPC and AI. In addition, the G262 has 16 DIMM slots for up to 4 TB of DDR4-3200 memory across 8 channels per socket. There are 6 low-profile PCIe Gen 4 slots, one OCP 3.0 slot, and dual 1GbE LAN ports. For drives, there are 4x 2.5" U.2 NVMe/SATA bays and 2x M.2 slots. Powering the system are 2x 3000 W 80 PLUS Platinum redundant power supplies. To accommodate such a powerful system, a strong emphasis was placed on thermal design: the chassis is split into one chamber dedicated to cooling the GPUs and another for the CPUs, memory, and expansion slots.
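As a rough sanity check of those headline numbers, the sketch below works out the theoretical peak bandwidth of each link involved (theoretical maxima only; sustained throughput will be lower):

```python
# Back-of-the-envelope peak bandwidths for the G262-ZR0's interconnects.
# All figures are theoretical maxima, not measured throughput.

mem_bw_socket = 3200e6 * 8 * 8           # DDR4-3200, 8 channels, 8 B/transfer
pcie4_x16 = 16e9 * (128 / 130) / 8 * 16  # 16 GT/s, 128b/130b encoding, x16
nvlink_a100 = 12 * 50e9                  # 3rd-gen NVLink: 12 links x 50 GB/s
hdr_ib = 200e9 / 8                       # HDR InfiniBand: 200 Gb/s per port

print(f"DDR4-3200, 8 channels : {mem_bw_socket / 1e9:6.1f} GB/s per socket")
print(f"PCIe 4.0 x16          : {pcie4_x16 / 1e9:6.1f} GB/s per direction")
print(f"NVLink per A100       : {nvlink_a100 / 1e9:6.1f} GB/s aggregate")
print(f"HDR InfiniBand        : {hdr_ib / 1e9:6.1f} GB/s per port")
```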

Remote and Multiple Server Management:
As part of GIGABYTE's value proposition, GIGABYTE provides the GIGABYTE Management Console (GMC) for BMC server management via a web browser-based platform. Additionally, GIGABYTE Server Management (GSM) software is free to download and can be used to monitor and manage multiple servers. GMC and GSM offer great value while reducing licensing and customer maintenance costs.
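GIGABYTE does not detail GMC's programmatic interface here, but BMC stacks of this kind commonly expose the DMTF Redfish REST API alongside the web UI. Assuming such an endpoint is present, a basic health poll could look like the sketch below (the address and credentials are hypothetical placeholders; TLS verification is skipped for a typical self-signed BMC certificate):

```python
# Minimal sketch: polling a BMC over the DMTF Redfish API.
# Assumes the BMC exposes a standard Redfish service; the address and
# credentials below are hypothetical placeholders.
import requests

BMC = "https://10.0.0.42"        # placeholder BMC address
AUTH = ("admin", "password")     # placeholder credentials

# /redfish/v1/Systems is the standard Redfish systems collection.
resp = requests.get(f"{BMC}/redfish/v1/Systems", auth=AUTH, verify=False)
resp.raise_for_status()
for member in resp.json()["Members"]:
    system = requests.get(f"{BMC}{member['@odata.id']}",
                          auth=AUTH, verify=False).json()
    print(system.get("Model"),
          system.get("PowerState"),
          system.get("Status", {}).get("Health"))
```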

15 Comments on GIGABYTE Releases 2U Server: G262-ZR0 with NVIDIA HGX A100 4-GPU

#1
Caring1
Mining/ Gaming edition? :laugh:
#2
kayjay010101
I've been around a lot of loud servers, but that looks to be a screamer. It's even got fans sticking out of the front over the ears! I guess you need that to cool 3KW in 2U's lol
But seriously, this looks insane. A lot of awesome hardware in just 2U's. I can't even imagine how many digits one of these would run ya kitted out. 6? maybe even 7.
#3
mashie
kayjay010101: I've been around a lot of loud servers, but that looks to be a screamer. It's even got fans sticking out of the front over the ears! I guess you need that to cool 3KW in 2U's lol
But seriously, this looks insane. A lot of awesome hardware in just 2U's. I can't even imagine how many digits one of these would run ya kitted out. 6? maybe even 7.
I would guess $100k considering the bigger brother with twice the hardware is $200k: www.datacenterdynamics.com/en/news/nvidia-launches-5-petaflops-dgx-a100-and-cloud-centric-hgx-a100/
#5
TumbleGeorge
An ultra-performance machine for a ton of money, with a slow old 1 Gbps network? When 2×10 Gbps has been the norm for years.
#6
mashie
TumbleGeorge: An ultra-performance machine for a ton of money, with a slow old 1 Gbps network? When 2×10 Gbps has been the norm for years.
No one who buys such servers will use the integrated NICs for anything but iLO/device management.

Because different users have different requirements, the network cards for production traffic are not included; those x16 PCIe 4.0 slots will hold multiple 100G Ethernet or InfiniBand NICs.
#7
Wirko
mashie: I would guess $100k considering the bigger brother with twice the hardware is $200k: www.datacenterdynamics.com/en/news/nvidia-launches-5-petaflops-dgx-a100-and-cloud-centric-hgx-a100/
It looks like $200k includes some RAM and storage, and a serious amount of networking (but not twice the number of CPUs - it's still 2).

I'm wondering, though - if the prices of these servers are ever published, do they even mean anything? It's not like they are sold at retail.
#8
mashie
The $200k price is for a fully loaded pod.

I can only imagine servers get the same kind of discounts as networking kit, so silly discounts may apply (I'm looking at you, Cisco).
#9
TumbleGeorge
mashie: No one who buys such servers will use the integrated NICs for anything but iLO/device management.

Because different users have different requirements, the network cards for production traffic are not included; those x16 PCIe 4.0 slots will hold multiple 100G Ethernet or InfiniBand NICs.
It always makes an impression when, in a $100,000+ product, the designer or assembler saves a few dollars and uses the lowest class of part for an interface. I would understand it in a machine produced in the tens of millions of units, where the savings from such a questionable decision add up to a solid amount of ca$h... But in this case... Yes, they will sell the network parts separately on PCIe cards and milk clients for more money for "optional parts and accessories". So correct, so capitalistic :D
#10
ypsylon
It sure does look like an epic setup, no CPU puns intended. 7 fans (or 10 if there are doubles under that front shroud), probably about 8-12 A each at full tilt... woof. That's as much power as a solid rendering workstation draws in total. With noise levels challenging the Space Shuttle during lift-off. ;)

One thing you can count on, though: there will be at least one person or company that tries it for mining to recoup part of the investment - safest bet you'll ever make. Even if Quadros and Teslas were much inferior in the previous mining craze, at $50k per BTC right now they wouldn't have to run this for very long.
#11
Patriot
TumbleGeorge: It always makes an impression when, in a $100,000+ product, the designer or assembler saves a few dollars and uses the lowest class of part for an interface. I would understand it in a machine produced in the tens of millions of units, where the savings from such a questionable decision add up to a solid amount of ca$h... But in this case... Yes, they will sell the network parts separately on PCIe cards and milk clients for more money for "optional parts and accessories". So correct, so capitalistic :D
You do realize different customers want different NICs, right? Not everyone wants or needs the $2k NIC, and some want 4 of them...
#12
TumbleGeorge
Patriot: You do realize different customers want different NICs, right? Not everyone wants or needs the $2k NIC, and some want 4 of them...
NICs integrated on motherboards are much, much cheaper than models on dedicated PCIe adapters.
#13
Patriot
TumbleGeorge: NICs integrated on motherboards are much, much cheaper than models on dedicated PCIe adapters.
Just... no.
#14
Jism
TumbleGeorge: NICs integrated on motherboards are much, much cheaper than models on dedicated PCIe adapters.
Yeah, but in professional gear it's kind of different from your home NIC. Enterprise hardware is designed for constant 24-hour bashing, years on end.
#15
Wirko
A car analogy, because everybody loves them, I know. Wheeled excavators, bulldozers, etc., in particular the heaviest types, the ones that easily exceed €200k, are sold without wheels. Wheels must be ordered separately. There are several types, costing ... well, more than that 200G card apiece (wheel and tyre).