
Intel Gaudi AI Accelerator Gains 2x Performance Leap on GPT-3 with FP8 Software

GFreeman

News Editor
Staff member
Today, MLCommons published results of the industry-standard MLPerf training v3.1 benchmark for training AI models, with Intel submitting results for Intel Gaudi 2 accelerators and 4th Gen Intel Xeon Scalable processors with Intel Advanced Matrix Extensions (Intel AMX). Intel Gaudi 2 demonstrated a significant 2x performance leap with the implementation of the FP8 data type on the v3.1 GPT-3 training benchmark. The benchmark submissions reinforce Intel's commitment to bring AI everywhere with competitive AI solutions.

"We continue to innovate with our AI portfolio and raise the bar with our MLPerf performance results in consecutive MLCommons AI benchmarks. Intel Gaudi and 4th Gen Xeon processors deliver a significant price-performance benefit for customers and are ready to deploy today. Our breadth of AI hardware and software configuration offers customers comprehensive solutions and choice tailored for their AI workloads," said Sandra Rivera, Intel executive vice president and general manager of the Data Center and AI Group.



The newest MLCommons MLPerf results build on Intel's strong showing in the June MLPerf training round. The Intel Xeon processor remains the only CPU for which MLPerf results have been reported, and Intel Gaudi 2 is one of only three accelerator solutions with submitted results, and one of only two that are commercially available.

Intel Gaudi2 and 4th Gen Xeon processors demonstrate compelling AI training performance in a variety of hardware configurations to address the increasingly broad array of customer AI compute requirements.

Gaudi 2 continues to be the only viable alternative to NVIDIA's H100 for AI compute needs, delivering a significant price-performance advantage. MLPerf results for Gaudi 2 showed the AI accelerator's increasing training performance:
  • Gaudi2 demonstrated a 2x performance leap with the implementation of the FP8 data type on the v3.1 training GPT-3 benchmark, reducing time-to-train by more than half compared to the June MLPerf benchmark, completing the training in 153.58 minutes on 384 Intel Gaudi2 accelerators. The Gaudi2 accelerator supports FP8 in both E5M2 and E4M3 formats, with the option of delayed scaling when necessary.
  • Intel Gaudi2 demonstrated training on the Stable Diffusion multi-modal model with 64 accelerators in 20.2 minutes, using BF16. In future MLPerf training benchmarks, Stable Diffusion performance will be submitted on the FP8 data type.
  • On eight Intel Gaudi2 accelerators, benchmark results were 13.27 and 15.92 minutes for BERT and ResNet-50, respectively, using BF16.
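For readers unfamiliar with the two FP8 formats named above, a minimal Python sketch (not part of Intel's submission, and simplified from the full spec) shows how the same 8-bit pattern decodes differently under E4M3 and E5M2:

```python
# Sketch of the two FP8 layouts Gaudi2 supports:
#   E5M2: 1 sign, 5 exponent, 2 mantissa bits (wider range)
#   E4M3: 1 sign, 4 exponent, 3 mantissa bits (more precision)

def decode_fp8(byte, exp_bits, man_bits):
    """Decode an 8-bit float with the given exponent/mantissa split.

    Simplified: handles normals and subnormals only, ignoring the
    NaN/Inf conventions, which differ between E5M2 (IEEE-style
    specials) and E4M3 (no infinities).
    """
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    bias = (1 << (exp_bits - 1)) - 1
    exp = (byte >> man_bits) & ((1 << exp_bits) - 1)
    man = byte & ((1 << man_bits) - 1)
    if exp == 0:  # subnormal: no implicit leading 1
        return sign * man / (1 << man_bits) * 2.0 ** (1 - bias)
    return sign * (1 + man / (1 << man_bits)) * 2.0 ** (exp - bias)

# 0x38 in E4M3: exp 0111 (=7, bias 7 -> 2^0), mantissa 000 -> 1.0
print(decode_fp8(0x38, exp_bits=4, man_bits=3))  # 1.0
# Same byte in E5M2: exp 01110 (=14, bias 15 -> 2^-1), mantissa 00 -> 0.5
print(decode_fp8(0x38, exp_bits=5, man_bits=2))  # 0.5
```

The trade-off is exactly what the delayed-scaling option addresses: E4M3 gains one mantissa bit at the cost of dynamic range, so tensors are multiplied by a running scale factor to keep their values inside the narrower representable window.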

Intel remains the only CPU vendor to submit MLPerf results. The MLPerf results for 4th Gen Xeon highlighted its strong performance:
  • Intel submitted results for ResNet-50, RetinaNet, BERT and DLRM-DCNv2. The 4th Gen Intel Xeon Scalable processors' results for ResNet-50, RetinaNet and BERT were similar to the strong out-of-box performance results submitted for the June 2023 MLPerf benchmark.
  • DLRM-DCNv2 is a new model since the June submission, with the CPU demonstrating a time-to-train of 227 minutes using only four nodes.

4th Gen Xeon processor performance demonstrates that many enterprise organizations can economically and sustainably train small to mid-sized deep learning models on their existing enterprise IT infrastructure with general-purpose CPUs, especially for use cases in which training is an intermittent workload.

With software updates and optimizations, Intel anticipates further advances in AI performance results in forthcoming MLPerf benchmarks. Intel's AI products give customers more choice in AI solutions to meet dynamic requirements for performance, efficiency and usability.

View at TechPowerUp Main Site | Source
 
How do these companies wake up in the mornings knowing they are accelerating the End of Days?!
 
FP-8 is so low precision that computers must be ashamed to be forced to calculate it. /s
LOL I know, is it 1 bit of precision or does it muster 2 bits? I used to do electromagnetic calculations where even fp64 was causing us problems; we needed like 18 bits of precision beyond that. An fp96 would have been nice.
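For what it's worth, E5M2 keeps 2 explicit mantissa bits and E4M3 keeps 3 (plus the implicit leading 1 for normal numbers). A rough Python comparison of machine epsilon across formats, assuming the standard implicit-leading-1 convention, puts the joke in perspective:

```python
# Explicit mantissa bits per floating-point format (the implicit
# leading 1 adds one more significant bit for normal numbers).
formats = {
    "fp8 E5M2": 2,
    "fp8 E4M3": 3,
    "bf16": 7,
    "fp16": 10,
    "fp32": 23,
    "fp64": 52,
}

for name, m in formats.items():
    # Machine epsilon: the gap between 1.0 and the next representable value.
    eps = 2.0 ** -m
    print(f"{name:9s} eps = 2^-{m:<2d} = {eps:.1e}")
```

So FP8 resolves only about one decimal digit, which turns out to be workable for neural-net weights with per-tensor scaling, but it's nowhere near the demands of an electromagnetic solver.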
 