Thursday, November 14th 2024

NVIDIA B200 "Blackwell" Records 2.2x Performance Improvement Over its "Hopper" Predecessor

We know that NVIDIA's latest "Blackwell" GPUs are fast, but how much faster are they than the previous-generation "Hopper"? According to the latest MLPerf Training v4.1 results, NVIDIA's HGX B200 Blackwell platform has demonstrated massive performance gains, measuring up to a 2.2x improvement per GPU compared to the Hopper-based HGX H200. The results, verified by MLCommons, reveal impressive achievements in large language model (LLM) training. The Blackwell architecture, featuring HBM3e high-bandwidth memory and fifth-generation NVLink interconnect technology, achieved double the per-GPU performance for GPT-3 pre-training and a 2.2x boost for Llama 2 70B fine-tuning compared to the previous Hopper generation. Each benchmark system incorporated eight Blackwell GPUs operating at a 1,000 W TDP, connected via NVLink Switch for scale-up.

The network infrastructure utilized NVIDIA ConnectX-7 SuperNICs and Quantum-2 InfiniBand switches, enabling high-speed node-to-node communication for distributed training workloads. While previous Hopper-based systems required 256 GPUs to optimize performance for the GPT-3 175B benchmark, Blackwell accomplished the same task with just 64 GPUs, leveraging its larger HBM3e memory capacity and bandwidth. One thing to look out for is the upcoming GB200 NVL72 system, which promises gains beyond the 2.2x mark. It features expanded NVLink domains, higher memory bandwidth, and tight integration with NVIDIA Grace CPUs, complemented by ConnectX-8 SuperNIC and Quantum-X800 switch technologies. With faster switching and improved data movement from Grace-Blackwell integration, further software optimization from NVIDIA could push the performance envelope even higher.
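The headline multiplier can be understood from how MLPerf works: the benchmark ranks submissions by wall-clock time-to-train, so a per-GPU comparison normalizes each result by its GPU count. A minimal sketch of that normalization (the times below are illustrative placeholders, not actual MLPerf v4.1 submission figures):

```python
def per_gpu_speedup(t_old_min: float, n_old: int,
                    t_new_min: float, n_new: int) -> float:
    """Ratio of effective per-GPU throughput between two MLPerf
    time-to-train results (less time and fewer GPUs both help)."""
    # Per-GPU throughput is proportional to 1 / (time * gpu_count),
    # assuming near-linear scaling across the cluster.
    return (t_old_min * n_old) / (t_new_min * n_new)

# Illustrative numbers only: a 256-GPU Hopper run vs. a 64-GPU
# Blackwell run taking twice the wall-clock time would still mean
# twice the work done per GPU.
print(per_gpu_speedup(t_old_min=100.0, n_old=256,
                      t_new_min=200.0, n_new=64))  # → 2.0
```

The near-linear-scaling assumption is the usual caveat here; real submissions report measured wall-clock times, and scaling efficiency varies with interconnect and model size.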
Sources: MLCommons, via NVIDIA

18 Comments on NVIDIA B200 "Blackwell" Records 2.2x Performance Improvement Over its "Hopper" Predecessor

#1
Prima.Vera
Are those banned for the Chinese market?
#2
AleksandarK
News Editor
Prima.Vera: Are those banned for Chinese Market?
Yeah, but Chinese entities still get them from companies in India or other countries.
#3
close
What about performance per watt? It's nice that the architecture brings a good performance uplift, but does it do so at the cost of efficiency?
#4
kondamin
I hope it’s not just an inferencing improvement round
#6
marios15
I bet these are great at INT8
I bet the MT65002 will be great!
#7
AnarchoPrimitiv
close: What about performance per watt? It's nice that the architecture brings good performance uplift but does it do it at the cost of efficiency or not?
Exactly. "Per GPU" isn't exactly a unit of measurement, and although I cannot recall the exact specifications, if I remember correctly the B100, for example, is made from two GPU chips, so I'm sure it has way more "cores" (although it seems like Nvidia is constantly changing what a CUDA "core" is, so I don't even know if a Blackwell "core" can be directly compared to a Hopper "core"). Basically, what I'm saying is that to compare power efficiency between the two architectures, I feel like we have to use a unit of measurement like "watts per mm^2", where "mm^2" is the physical area of the GPU chip/chiplet.
#8
SOAREVERSOR
This article is about the actual product for AI and servers. It won't be scalped. That only happens to the junk for gaming. Which will be scalped.
#9
mikesg
Who would have thought "more data" & "more power" would have made it this far before attention was paid to properly engineering the solution.
#10
Daven
Someone's really looking forward to buying an RDNA4 or Battlemage GPU.
#11
lexluthermiester
Daven: Someone's really looking forward to buying an RDNA4 or Battlemage GPU.
This article is not about consumer products.
#12
SOAREVERSOR
lexluthermiester: This article is not about consumer products.
Another person who actually read the article! PC gamers don't read.
Daven: Someone's really looking forward to buying an RDNA4 or Battlemage GPU.
This isn't about a GPU to play games.
#13
lilhasselhoffer
SOAREVERSOR: Another person who actually read the article! PC gamers don't read. This isn't about a GPU to play games.
You know...it's funny. You almost have a point, then it's smothered in the crib.

Everything is going to be about the GPUs when it comes to Nvidia. It's the nature of the thing, when you're a GPU company. In this instance there's about 2 degrees to cover. Let's make it like that other game involving Kevin Bacon.
1) Nvidia produces a new Blackwell based A.I. accelerator.
2) The A.I. accelerator is run on the same lines as their other products.
3) The production of the A.I. accelerator is higher margin, and thus will decrease the amount of GPUs on the market.

Two leaps to get from an announced (presumably commercial or educational use) product to its direct impact on the cost of consumer GPUs. Oh, and scalping is a thing: right now the countries around China are scalping for them, and while scalpers face a penalty if caught, the upside of scalping is huge profits and artificially inflated costs for things that are knock-on or related. In this case, scalping the Nvidia A.I. accelerators will drive people who cannot afford them to buy GPUs, which will price out consumers. Cool. That might be one jump.

In short, the price of tea in China does influence the price of tea in India. It's impossible to cross fingers and wish away that the things are linked, despite theoretically being in separate realms.
#14
Jism
A stacked server: at least 9 kW (9,000 W), so 9 kWh for every hour of operation.
#15
Broken Processor
I'll be interested to see how cut down the 5090 is and what it can do. If that performance gain translates directly, they will price accordingly; we already know the compute cards are double. Used gaming card prices are bad here, with a 3090 Ti selling for 1000, a 4090 down to 600, and a used entry-level 5090 at 1300+! At that money none of them make sense, especially the 3090s for compute. I know the extra RAM is helpful, but even still it's not enough when compared to the 40xx.
#16
Hankieroseman
SOAREVERSOR: This article is about the actual product for AI and servers. It won't be scalped. That only happens to the junk for gaming. Which will be scalped.
Absolutely, but I'm still praying for a 5090 someday. After what I went through during Cadet Covid's reign of terror to land a 3090, I managed to finally get one, but at a ridiculous eBay price.
#17
igormp
Broken Processor: I'll be interested to see how cut down the 5090 and what it can do if that performance gain is directly translated they will price accordingly we already know the compute cards are double. Gaming card used prices are bad here with 3090ti selling for 1000, 4090 down to 600 and used entry level 5090 1300+ ! At that money none of he make sense especially the 3090's for compute even I know the extra ram is helpful but even still not enough when compared to 40xx
The GB100 chip is going to be totally different from the GB102 one, as usual from Nvidia in the past releases (the x100 chips usually have FP64 units, no RT cores, HBM support etc etc).
Aside from that, I'm curious to see how much of a die cut the 5090 is going to be from the full GB102. I expect something similar to what we saw with the 4090 w.r.t. the full AD102.

As for pricing/performance, a 3090 is almost as fast as a 4090 for LLM tasks (albeit way less efficient), given that its memory speed is pretty much the same, hence why it's priced similarly to the 4090 in many places.
The 3090(ti) also allows one to use NVLink still, offsetting the bottleneck in PCIe speeds for training models with layer-parallel approaches.
#18
Lycanwolfen
So, the price of the 5090: are you ready? $2,500 US, or $3,250 CAD.

Yeah, I'll pass. Sorry Nvidia, you care nothing about gamers and everything about ripping people off.

I would pay maybe up to 300 dollars for a video card, but not 2,500.