Thursday, November 14th 2024

NVIDIA B200 "Blackwell" Records 2.2x Performance Improvement Over its "Hopper" Predecessor

We know that NVIDIA's latest "Blackwell" GPUs are fast, but how much faster are they than the previous-generation "Hopper"? According to the latest MLPerf Training v4.1 results, NVIDIA's Blackwell-based HGX B200 platform has demonstrated massive performance gains, measuring up to a 2.2x improvement per GPU over the Hopper-based HGX H200. The results, verified by MLCommons, reveal impressive achievements in large language model (LLM) training. The Blackwell architecture, featuring HBM3e high-bandwidth memory and fifth-generation NVLink interconnect technology, achieved double the per-GPU performance on GPT-3 pre-training and a 2.2x boost on Llama 2 70B fine-tuning compared to the previous Hopper generation. Each benchmarked system incorporated eight Blackwell GPUs operating at a 1,000 W TDP, connected via NVLink Switch for scale-up.

The network infrastructure utilized NVIDIA ConnectX-7 SuperNICs and Quantum-2 InfiniBand switches, enabling high-speed node-to-node communication for distributed training workloads. While previous Hopper-based systems needed 256 GPUs to reach optimal performance on the GPT-3 175B benchmark, Blackwell accomplished the same task with just 64 GPUs, leveraging its larger HBM3e memory capacity and higher bandwidth. One system to look out for is the upcoming GB200 NVL72, which promises even more significant gains beyond the 2.2x mark. It features expanded NVLink domains, higher memory bandwidth, and tight integration with NVIDIA Grace CPUs, complemented by ConnectX-8 SuperNIC and Quantum-X800 switch technologies. With faster switching, better data movement, and Grace-Blackwell integration, we could see further software optimization from NVIDIA to push the performance envelope.
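As a quick back-of-envelope sketch, the figures quoted above can be combined to estimate relative efficiency. The 2.2x per-GPU speedup, the 1,000 W Blackwell TDP, and the 64-vs-256 GPU counts come from the article; the ~700 W figure used for the Hopper-generation H200 SXM is an illustrative assumption, not an MLPerf-verified number.

```python
# Back-of-envelope scaling math from the figures quoted in the article.
B200_SPEEDUP_PER_GPU = 2.2   # Llama 2 70B fine-tuning, per the article
B200_TDP_W = 1000            # per the article
H200_TDP_W = 700             # assumed nominal H200 SXM TDP (illustrative)

# Iso-workload performance per watt relative to Hopper:
# speedup divided by the ratio of power draws.
perf_per_watt_gain = B200_SPEEDUP_PER_GPU / (B200_TDP_W / H200_TDP_W)

# GPT-3 175B benchmark: 64 Blackwell GPUs replace 256 Hopper GPUs.
cluster_reduction = 256 / 64

print(f"Relative perf/W: {perf_per_watt_gain:.2f}x")   # ~1.54x
print(f"Cluster size reduction: {cluster_reduction:.0f}x")  # 4x
```

Under these assumptions, Blackwell would still come out ahead on perf/W despite the higher TDP, which speaks to the efficiency question raised in the comments.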
Sources: MLCommons, via NVIDIA

14 Comments on NVIDIA B200 "Blackwell" Records 2.2x Performance Improvement Over its "Hopper" Predecessor

#1
Prima.Vera
Are those banned for the Chinese market?
#2
AleksandarK
News Editor
Prima.Vera: Are those banned for Chinese Market?
Yeah, but Chinese entities still get them from companies in India or other countries.
#3
close
What about performance per watt? It's nice that the architecture brings good performance uplift but does it do it at the cost of efficiency or not?
#4
kondamin
I hope it’s not just an inferencing improvement round
#6
marios15
I bet these are great at INT8
I bet the MT65002 will be great!
#7
AnarchoPrimitiv
close: What about performance per watt? It's nice that the architecture brings good performance uplift but does it do it at the cost of efficiency or not?
Exactly. "Per GPU" isn't exactly a unit of measurement. Although I cannot recall the exact specifications, if I remember correctly the B100, for example, is made from two GPU chips, so I'm sure it has way more "cores" (and since Nvidia is constantly changing what a CUDA "core" is, I don't even know if a Blackwell "core" can be directly compared to a Hopper "core"). Basically, what I'm saying is that to compare power efficiency between the two architectures, I feel like we have to use a unit of measurement like "watts per mm^2", where "mm^2" is the physical area of the GPU chip/chiplet.
#8
SOAREVERSOR
piloponth
This article is about the actual product for AI and servers. It won't be scalped. That only happens to the junk for gaming. Which will be scalped.
#9
mikesg
Who would have thought "more data" and "more power" would make it this far before attention was paid to properly engineering the solution.
#10
Daven
piloponth
Someone's really looking forward to buying an RDNA4 or Battlemage GPU.
#11
lexluthermiester
piloponth
Daven: Someone's really looking forward to buying an RDNA4 or Battlemage GPU.
This article is not about consumer products.
#12
SOAREVERSOR
lexluthermiester: This article is not about consumer products.
Another person who actually read the article! PC gamers don't read.
Daven: Someone's really looking forward to buying an RDNA4 or Battlemage GPU.
This isn't about a GPU to play games.
#13
lilhasselhoffer
SOAREVERSOR: Another person who actually read the article! PC gamers don't read. This isn't about a GPU to play games.
You know...it's funny. You almost have a point, then it's smothered in the crib.

Everything is going to be about the GPUs when it comes to Nvidia. It's the nature of the thing, when you're a GPU company. In this instance there's about 2 degrees to cover. Let's make it like that other game involving Kevin Bacon.
1) Nvidia produces a new Blackwell based A.I. accelerator.
2) The A.I. accelerator is run on the same lines as their other products.
3) The production of the A.I. accelerator is higher margin, and thus will decrease the amount of GPUs on the market.

Two leaps to get from an announced (presumably commercial or educational use) product to its direct impact on the cost of consumer GPUs. Oh, and scalping is a thing...right now the countries around China are scalping for them...and you know if scalpers get caught there is a penalty, though the up-side from scalping is huge profits and an artificially inflated cost for things that are knock-on or related. In this case scalping the Nvidia A.I. accelerators will drive people who cannot afford them to buy GPUs...which will price out consumers. Cool. That might be one jump.

In short, the price of tea in China does influence the price of tea in India. It's impossible to cross fingers and wish away that the things are linked, despite theoretically being in separate realms.
#14
Jism
A stacked server draws at least 9 kW (9,000 W), i.e., 9 kWh for every hour of operation.