Wednesday, May 30th 2018

NVIDIA Introduces HGX-2, Fusing HPC and AI Computing into Unified Architecture

NVIDIA has introduced NVIDIA HGX-2, the first unified computing platform for both artificial intelligence and high performance computing. The HGX-2 cloud server platform, with multi-precision computing capabilities, provides unique flexibility to support the future of computing. It allows high-precision calculations using FP64 and FP32 for scientific computing and simulations, while also enabling FP16 and INT8 for AI training and inference. This unprecedented versatility meets the requirements of the growing number of applications that combine HPC with AI.
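As a minimal sketch of how that multi-precision capability is typically exercised in software (PyTorch and its automatic mixed-precision API are assumptions here, not part of the HGX-2 announcement), an FP16/FP32 training step can run alongside FP64 work on the same GPUs:

```python
import torch

# Hypothetical model and data; PyTorch's AMP API is only an illustration,
# not anything specific to HGX-2 itself.
model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()  # keeps FP16 gradients from underflowing

x = torch.randn(64, 1024, device="cuda")
target = torch.randn(64, 1024, device="cuda")

# FP16/FP32 mixed precision for AI training (the FP16 math maps to Tensor Cores)
with torch.cuda.amp.autocast():
    loss = torch.nn.functional.mse_loss(model(x), target)
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()

# FP64 path for simulation-style work that needs high precision
a = torch.randn(512, 512, dtype=torch.float64, device="cuda")
b = torch.randn(512, 512, dtype=torch.float64, device="cuda")
c = a @ b  # double-precision matmul on the same hardware
```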

A number of leading computer makers today shared plans to bring to market systems based on the NVIDIA HGX-2 platform. "The world of computing has changed," said Jensen Huang, founder and chief executive officer of NVIDIA, speaking at the GPU Technology Conference Taiwan, which kicked off today. "CPU scaling has slowed at a time when computing demand is skyrocketing. NVIDIA's HGX-2 with Tensor Core GPUs gives the industry a powerful, versatile computing platform that fuses HPC and AI to solve the world's grand challenges."
HGX-2 serves as a "building block" for manufacturers to create some of the most advanced systems for HPC and AI. It has achieved record AI training speeds of 15,500 images per second on the ResNet-50 training benchmark, and can replace up to 300 CPU-only servers.
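As a back-of-the-envelope check on that figure (the per-GPU split below assumes the reported rate is an aggregate across the platform's 16 Tesla V100 GPUs):

```python
# Rough arithmetic on the reported ResNet-50 number; the per-GPU split is an
# assumption based on HGX-2 carrying 16 Tesla V100 GPUs.
total_images_per_sec = 15_500
num_gpus = 16

per_gpu = total_images_per_sec / num_gpus
print(f"~{per_gpu:.0f} images/s per GPU")  # ~969 images/s

# Time to sweep one ImageNet-scale epoch (~1.28M images) at that rate
epoch_seconds = 1_281_167 / total_images_per_sec
print(f"~{epoch_seconds / 60:.1f} minutes per epoch")  # ~1.4 minutes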

It incorporates such breakthrough features as the NVIDIA NVSwitch interconnect fabric, which seamlessly links 16 NVIDIA Tesla V100 Tensor Core GPUs to work as a single, giant GPU delivering two petaflops of AI performance. The first system built using HGX-2 was the recently announced NVIDIA DGX-2.
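The two-petaflop figure follows from 16 V100s at roughly 125 teraflops of Tensor Core throughput each. A minimal sketch of driving all 16 GPUs from a single process, with PyTorch's DataParallel as a stand-in for whatever software stack a given deployment actually uses (NVSwitch itself is transparent to application code):

```python
import torch

# Aggregate Tensor Core throughput: 16 GPUs x ~125 TFLOPS each ~= 2 PFLOPS
print(16 * 125, "TFLOPS")  # 2000 TFLOPS = 2 petaflops

# Illustrative only: fan one model out across all visible GPUs from one process.
model = torch.nn.Sequential(
    torch.nn.Linear(4096, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 4096),
).cuda()
model = torch.nn.DataParallel(model, device_ids=list(range(torch.cuda.device_count())))

x = torch.randn(1024, 4096, device="cuda")
y = model(x)  # the batch is split across GPUs; results are gathered over the interconnect
print(y.shape)
```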

HGX-2 comes a year after the launch of the original NVIDIA HGX-1, at Computex 2017. The HGX-1 reference architecture won broad adoption among the world's leading server makers and companies operating massive datacenters, including Amazon Web Services, Facebook and Microsoft.

OEM, ODM Systems Expected Later This Year
Four leading server makers - Lenovo, QCT, Supermicro and Wiwynn - announced plans to bring their own HGX-2-based systems to market later this year.

Additionally, four of the world's top original design manufacturers (ODMs) - Foxconn, Inventec, Quanta and Wistron - are designing HGX-2-based systems, also expected later this year, for use in some of the world's largest cloud datacenters.

Family of NVIDIA GPU-Accelerated Server Platforms
HGX-2 is a part of the larger family of NVIDIA GPU-Accelerated Server Platforms, an ecosystem of qualified server classes addressing a broad array of AI, HPC and accelerated computing workloads with optimal performance.

Supported by major server manufacturers, the platforms align with the datacenter server ecosystem by offering the optimal mix of GPUs, CPUs and interconnects for diverse training (HGX-T2), inference (HGX-I2) and supercomputing (SCX) applications. Customers can choose a specific server platform to match their accelerated computing workload mix and achieve best-in-class performance.

Broad Industry Support
Top OEMs and ODMs have voiced strong support for HGX-2:

"Foxconn has long been dedicated to hyperscale computing solutions and successfully won customer recognition. We're glad to work with NVIDIA for the HGX-2 project, which is the most promising solution to fulfill explosive demand from AI/DL."

- Ed Wu, corporate executive vice president at Foxconn and chairman at Ingrasys

"Inventec has a proven history of delivering high-performing and scalable servers with robust innovative designs for our customers who run some of the world's largest datacenters. By rapidly incorporating HGX-2 into our future designs, we'll infuse our portfolio with the most powerful AI solution available to companies worldwide."

- Evan Chien, head of IEC White Box Product Center, China Business Line Director, Inventec

"NVIDIA's HGX-2 ups the ante with a design capable of delivering two petaflops of performance for AI and HPC-intensive workloads. With the HGX-2 server building block, we'll be able to quickly develop new systems that can meet the growing needs of our customers who demand the highest performance at scale."

- Paul Ju, vice president and general manager of Lenovo DCG

"As a leading cloud enabler, Quanta is committed to developing solutions for the next generation of clouds for a variety of innovative use cases. As we have seen a multitude of AI applications on the rise, Quanta works closely with NVIDIA to ensure our clients benefit from the latest and greatest GPU technologies. We are thrilled to broaden our GPU compute portfolio with this critical enabler for AI clouds as an HGX-2 launch partner."

- Mike Yang, senior vice president, Quanta Computer, and president, QCT

"To help address the rapidly expanding size of AI models that sometimes require weeks to train, Supermicro is developing cloud servers based on the HGX-2 platform. The HGX-2 system will enable efficient training of complex models."

- Charles Liang, president and CEO of Supermicro

"We are very honored to work with NVIDIA as a partner. The demand for AI cloud computing is emerging in today's modern technology environment. I strongly believe the high performance and modularized flexibility of the HGX-2 system will make great contributions to various computing areas, ranging from academics and science to government applications."

- Jeff Lin, president of Enterprise Business Group, Wistron

"Wiwynn specializes in delivering hyperscale datacenter and cloud infrastructure solutions. Our collaboration with NVIDIA and the HGX-2 server building block will enable us to provide our customers with two petaflops of computing for computationally intensive AI and HPC workloads."

- Steven Lu, Vice President, Wiwynn

8 Comments on NVIDIA Introduces HGX-2, Fusing HPC and AI Computing into Unified Architecture

#1
Arrakis9
As expected this was the only thing discussed in the keynote.. Just remember, the more you buy.. The more you save! :rolleyes:

Classic Nvidia

Oh and the pricing you ask??
$399,000
#2
Fluffmeister
When virtual spaceships for an unfinished game can set you back $27,000 these days, that isn't too bad.
#3
dj-electric
Arrakis+9: As expected this was the only thing discussed in the keynote.. Just remember, the more you buy.. The more you save! :rolleyes:

Classic Nvidia

Oh and the pricing you ask??
$399,000
Yeah, AMD's deep learning servers are much cheaper.
#4
Blueberries
I think you guys are missing the point... I have absolutely zero use for this machine but if I was a billionaire I would buy one just to have a massive nerdboner looking inside the chassis.
#5
Fluffmeister
Oh for sure, this thing is basically the ultimate GPU, it's an 81,920 CUDA Core monster with 512GB of HBM2 VRAM, with bonkers levels of performance across whatever workload you want to throw at it.

Stick a glass window on the side, add some RGB lighting, and you win the "I own Skynet" competition.
#6
TheGuruStud
Fluffmeister: Oh for sure, this thing is basically the ultimate GPU, it's an 81,920 CUDA Core monster with 512GB of HBM2 VRAM, with bonkers levels of performance across whatever workload you want to throw at it.

Stick a glass window on the side, add some RGB lighting, and you win the "I own Skynet" competition.
Irresponsible levels of perf?

All you'd need is a leather jacket to go with it and you can be the biggest douche bag on a stage. (Oops, someone at Nvidia already has that covered)
#7
Midland Dog
dj-electric: Yeah, AMD's deep learning servers are much cheaper.
and much shittier, hence you can charge ridiculous prices for the top of the line product
#8
dj-electric
Midland Dog: and much shittier, hence you can charge ridiculous prices for the top of the line product
Yup, that was kind of sarcastically said. What beats TensorFlow with CUDA today? Welp, nothing does.