Wednesday, November 8th 2023

Intel Gaudi AI Accelerator Gains 2x Performance Leap on GPT-3 with FP8 Software

Today, MLCommons published results of the industry-standard MLPerf Training v3.1 benchmark for training AI models, with Intel submitting results for Intel Gaudi2 accelerators and 4th Gen Intel Xeon Scalable processors with Intel Advanced Matrix Extensions (Intel AMX). Intel Gaudi2 demonstrated a significant 2x performance leap with the implementation of the FP8 data type on the v3.1 GPT-3 training benchmark. The benchmark submissions reinforced Intel's commitment to bringing AI everywhere with competitive AI solutions.

"We continue to innovate with our AI portfolio and raise the bar with our MLPerf performance results in consecutive MLCommons AI benchmarks. Intel Gaudi and 4th Gen Xeon processors deliver a significant price-performance benefit for customers and are ready to deploy today. Our breadth of AI hardware and software configuration offers customers comprehensive solutions and choice tailored for their AI workloads," said Sandra Rivera, Intel executive vice president and general manager of the Data Center and AI Group.
The newest MLCommons MLPerf results build on Intel's strong AI performance in the June MLPerf training round. The Intel Xeon processor remains the only CPU with reported MLPerf results, and Intel Gaudi2 is one of only three accelerator solutions with reported results, only two of which are commercially available.

Intel Gaudi2 and 4th Gen Xeon processors demonstrate compelling AI training performance in a variety of hardware configurations to address the increasingly broad array of customer AI compute requirements.

Gaudi2 continues to be the only viable alternative to NVIDIA's H100 for AI compute needs, delivering significant price-performance advantages. MLPerf results for Gaudi2 demonstrated the AI accelerator's improving training performance:
  • Gaudi2 demonstrated a 2x performance leap with the implementation of the FP8 data type on the v3.1 training GPT-3 benchmark, reducing time-to-train by more than half compared to the June MLPerf benchmark, completing the training in 153.58 minutes on 384 Intel Gaudi2 accelerators. The Gaudi2 accelerator supports FP8 in both E5M2 and E4M3 formats, with the option of delayed scaling when necessary.
  • Intel Gaudi2 demonstrated training on the Stable Diffusion multi-modal model with 64 accelerators in 20.2 minutes, using BF16. In future MLPerf training benchmarks, Stable Diffusion performance will be submitted on the FP8 data type.
  • On eight Intel Gaudi2 accelerators, benchmark results were 13.27 and 15.92 minutes for BERT and ResNet-50, respectively, using BF16.
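The two FP8 formats mentioned above split the 8 bits differently: E5M2 uses 5 exponent and 2 mantissa bits (wider dynamic range), while E4M3 uses 4 exponent and 3 mantissa bits (finer precision). A minimal decoding sketch, assuming the common OCP-style parameters (exponent bias 7 for E4M3, 15 for E5M2) and ignoring the formats' NaN/Inf special encodings:

```python
def fp8_decode(byte, exp_bits, man_bits, bias):
    """Decode an 8-bit FP8 value; NaN/Inf special encodings are ignored."""
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exp = (byte >> man_bits) & ((1 << exp_bits) - 1)
    man = byte & ((1 << man_bits) - 1)
    if exp == 0:  # subnormal: no implicit leading 1, exponent fixed at 1 - bias
        return sign * (man / (1 << man_bits)) * 2.0 ** (1 - bias)
    return sign * (1 + man / (1 << man_bits)) * 2.0 ** (exp - bias)

# The same bit budget yields very different ranges under the two formats:
print(fp8_decode(0x7E, 4, 3, 7))   # E4M3: 448.0, its largest finite value
print(fp8_decode(0x7B, 5, 2, 15))  # E5M2: 57344.0, its largest finite value
```

The delayed scaling Intel mentions is the usual way to cope with these narrow ranges: tensors are multiplied by a scale factor derived from the observed maxima of previous iterations rather than recomputed from the current tensor.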
Intel remains the only CPU vendor to submit MLPerf results. The MLPerf results for 4th Gen Xeon highlighted its strong performance:
  • Intel submitted results for ResNet-50, RetinaNet, BERT and DLRM-DCNv2. The 4th Gen Intel Xeon Scalable processors' results for ResNet-50, RetinaNet and BERT were similar to the strong out-of-the-box performance results submitted for the June 2023 MLPerf benchmark.
  • DLRM-DCNv2 is a new model since the June submission, with the CPU demonstrating a time-to-train of 227 minutes using only four nodes.
4th Gen Xeon processor performance demonstrates that many enterprise organizations can economically and sustainably train small to mid-sized deep learning models on their existing enterprise IT infrastructure with general-purpose CPUs, especially for use cases in which training is an intermittent workload.

With software updates and optimizations, Intel anticipates further advances in AI performance results in forthcoming MLPerf benchmarks. Intel's AI products provide customers with more choice of AI solutions to meet dynamic requirements for performance, efficiency and usability.
Sources: Intel, MLCommons

3 Comments on Intel Gaudi AI Accelerator Gains 2x Performance Leap on GPT-3 with FP8 Software

#1
Guwapo77
How do these companies wake up in the mornings knowing they are accelerating the End of Days?!
#2
TumbleGeorge
Guwapo77 said: "How do these companies wake up in the mornings knowing they are accelerating the End of Days?!"
FP-8 is so low precision that computers must be ashamed to be forced to calculate it. /s
#3
Minus Infinity
TumbleGeorge said: "FP-8 is so low precision that computers must be ashamed to be forced to calculate it. /s"
LOL I know, is it 1 bit of precision or does it muster 2 bits? I used to do electromagnetic calculations where even fp64 was causing us problems, we needed like 18 bits of precision. An fp96 would have been nice.