
NVIDIA B200 "Blackwell" Records 2.2x Performance Improvement Over its "Hopper" Predecessor

AleksandarK

News Editor
Staff member
Joined
Aug 19, 2017
Messages
2,558 (0.97/day)
We know that NVIDIA's latest "Blackwell" GPUs are fast, but how much faster are they than the previous-generation "Hopper"? Thanks to the latest MLPerf Training v4.1 results, NVIDIA's HGX B200 Blackwell platform has demonstrated massive performance gains, measuring up to a 2.2x improvement per GPU over the Hopper-based HGX H200. The latest results, verified by MLCommons, reveal impressive achievements in large language model (LLM) training. The Blackwell architecture, featuring HBM3e high-bandwidth memory and fifth-generation NVLink interconnect technology, achieved double the per-GPU performance for GPT-3 pre-training and a 2.2x boost for Llama 2 70B fine-tuning compared to the previous Hopper generation. Each benchmark system incorporated eight Blackwell GPUs operating at a 1,000 W TDP, connected via NVLink Switch for scale-up.

The network infrastructure utilized NVIDIA ConnectX-7 SuperNICs and Quantum-2 InfiniBand switches, enabling high-speed node-to-node communication for distributed training workloads. While previous Hopper-based systems required 256 GPUs to optimize performance for the GPT-3 175B benchmark, Blackwell accomplished the same task with just 64 GPUs, leveraging its larger HBM3e memory capacity and bandwidth. One thing to look out for is the upcoming GB200 NVL72 system, which promises gains beyond the 2.2x figure. It features expanded NVLink domains, higher memory bandwidth, and tight integration with NVIDIA Grace CPUs, complemented by ConnectX-8 SuperNIC and Quantum-X800 switch technologies. With faster switching and better Grace-Blackwell data movement, further software optimization from NVIDIA could push the performance envelope even higher.
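As a back-of-the-envelope illustration of the scale reduction reported above: only the GPU counts and the per-GPU claims below come from the article, and the comparison is a simplification, since the 64-GPU and 256-GPU submissions need not share the same time-to-train.

```python
# Rough look at the GPT-3 175B system-size reduction cited in the article.
# Numbers from the text: 256 Hopper GPUs vs. 64 Blackwell GPUs, and the
# MLCommons-verified per-GPU gains from MLPerf Training v4.1.

hopper_gpus = 256    # HGX H200 GPUs previously needed to optimize GPT-3 175B
blackwell_gpus = 64  # HGX B200 GPUs used for the same benchmark

scale_reduction = hopper_gpus / blackwell_gpus
print(f"System-size reduction: {scale_reduction:.0f}x fewer GPUs")

# Verified per-GPU gains as cited in the article:
per_gpu_gain = {"GPT-3 pre-training": 2.0, "Llama 2 70B fine-tuning": 2.2}
for task, gain in per_gpu_gain.items():
    print(f"{task}: {gain}x per GPU vs. Hopper")
```

Note that the 4x reduction in system size is not the same thing as a 4x per-GPU speedup; it mostly reflects the larger HBM3e capacity letting the model fit on fewer accelerators.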



View at TechPowerUp Main Site | Source
 
Joined
Sep 15, 2011
Messages
6,705 (1.39/day)
Are those banned for the Chinese market?
 
Joined
Nov 2, 2016
Messages
111 (0.04/day)
What about performance per watt? It's nice that the architecture brings good performance uplift but does it do it at the cost of efficiency or not?
 
Joined
Nov 6, 2016
Messages
1,747 (0.60/day)
What about performance per watt? It's nice that the architecture brings good performance uplift but does it do it at the cost of efficiency or not?
Exactly. "Per GPU" isn't exactly a unit of measurement, and although I cannot recall the exact specifications, if I remember correctly the B100, for example, is made from two GPU dies, so I'm sure it has far more "cores" (though NVIDIA keeps changing what a CUDA "core" is, so I don't even know if a Blackwell "core" can be directly compared to a Hopper "core"). Basically, what I'm saying is that to compare power efficiency between the two architectures, I feel like we have to use a unit of measurement like "watts per mm^2", where "mm^2" is the physical area of the GPU die/chiplet.
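The normalization the poster is asking for can be sketched in a few lines. Everything here except the 1,000 W B200 TDP from the article is a placeholder assumption: the 700 W and die-area figures are rough Hopper-class and dual-die Blackwell ballpark values, not measured or published benchmark data.

```python
# Sketch of normalizing a benchmark score by power and die area, as suggested
# above. Only the 1,000 W B200 TDP comes from the article; the other numbers
# are hypothetical placeholders for illustration.

def efficiency(score, tdp_watts, die_mm2):
    """Return (score per watt, score per mm^2) for a single accelerator."""
    return score / tdp_watts, score / die_mm2

# Relative scores: Hopper = 1.0 baseline, Blackwell = 2.2 (the article's
# Llama 2 70B fine-tuning per-GPU claim). TDP/area values are assumptions.
hopper_pw, hopper_pmm = efficiency(1.0, 700, 814)        # Hopper-class assumption
blackwell_pw, blackwell_pmm = efficiency(2.2, 1000, 1600)  # dual-die assumption

print(f"Perf/W ratio (Blackwell vs Hopper): {blackwell_pw / hopper_pw:.2f}x")
print(f"Perf/mm^2 ratio: {blackwell_pmm / hopper_pmm:.2f}x")
```

Under these placeholder inputs, the perf/W gain comes out well below the headline 2.2x, which is exactly the commenter's point: raw per-GPU speedup and efficiency are different questions.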
 

mikesg

New Member
Joined
Jun 1, 2024
Messages
25 (0.15/day)
Who would have thought "more data" & "more power" would have made it this far before attention was paid to properly engineering the solution.
 
Joined
Apr 2, 2011
Messages
2,797 (0.56/day)
Another person who actually read the article! PC gamers don't read.


This isn't about a GPU to play games.

You know...it's funny. You almost have a point, then it's smothered in the crib.

Everything is going to be about the GPUs when it comes to NVIDIA; it's the nature of the thing when you're a GPU company. In this instance there are about two degrees to cover. Let's make it like that other game involving Kevin Bacon.
1) NVIDIA produces a new Blackwell-based A.I. accelerator.
2) The A.I. accelerator is run on the same production lines as their other products.
3) The A.I. accelerator carries a higher margin, and thus will decrease the number of GPUs on the market.

Two leaps to get from an announced (presumably commercial or educational use) product to its direct impact on the cost of consumer GPUs. Oh, and scalping is a thing... right now the countries around China are scalping for them, and while scalpers face a penalty if caught, the upside is huge profits and artificially inflated prices for related products. In this case, scalping NVIDIA's A.I. accelerators will drive people who cannot afford them to buy GPUs instead, which will price out consumers. Cool. That might be one jump.



In short, the price of tea in China does influence the price of tea in India. It's impossible to cross fingers and wish away that the things are linked, despite theoretically being in separate realms.
 