Wednesday, March 20th 2024
Jensen Huang Discloses NVIDIA Blackwell GPU Pricing: $30,000 to $40,000
Jensen Huang has been talking to media outlets following the conclusion of his keynote presentation at NVIDIA's GTC 2024 conference—a TV "exclusive" interview with the Team Green boss has caused a stir in tech circles. Jim Cramer hosted Huang for just under five minutes on CNBC's long-running "Squawk on the Street" program, where the presenter labelled the latest edition of GTC the "Woodstock of AI." NVIDIA's leader reckoned that around $1 trillion worth of industry was in attendance at this year's event—folks turned up to witness the unveiling of "Blackwell" B200 and GB200 AI GPUs. In the interview, Huang estimated that his company had invested around $10 billion into the research and development of its latest architecture: "we had to invent some new technology to make it possible."
Industry watchdogs have seized on a major revelation from the televised interview—Huang disclosed that his next-gen AI GPUs "will cost between $30,000 and $40,000 per unit." NVIDIA (and its rivals) are not known to publicly announce price ranges for AI and HPC chips—leaks from hardware partners and individuals within industry supply chains are the "usual" sources. Investment bank Raymond James has already delved into alleged Blackwell production costs, as shared by Tae Kim/firstadopter: "Raymond James estimates it will cost NVIDIA more than $6000 to make a B200 and they will price the GPU at a 50-60% premium to H100...(the bank) estimates it costs NVIDIA $3320 to make the H100, which is then sold to customers for $25,000 to $30,000." Huang's figure should be treated as a rough approximation, since NVIDIA rarely sells bare chips on their own; its data center GPUs normally ship as the basic building blocks of larger boards and systems.
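Purely as illustrative arithmetic (these are the bank's estimates quoted above, not official NVIDIA figures), the stated 50-60% premium over the H100's reported $25,000-$30,000 selling range implies the following B200 price bracket and gross margin, sketched in Python:

```python
# Illustrative arithmetic on the Raymond James estimates quoted above.
# All inputs are analyst estimates, not official NVIDIA numbers.

H100_COST = 3320                # estimated manufacturing cost per H100 (USD)
H100_PRICE = (25_000, 30_000)   # reported H100 selling range (USD)
B200_COST = 6_000               # estimated B200 cost ("more than $6,000")
PREMIUM = (1.5, 1.6)            # B200 priced at a 50-60% premium to H100

# Implied B200 price range from the stated premium
b200_low = H100_PRICE[0] * PREMIUM[0]
b200_high = H100_PRICE[1] * PREMIUM[1]

# Implied gross margin on an H100 at the low end of its price range
h100_margin = (H100_PRICE[0] - H100_COST) / H100_PRICE[0]

print(f"Implied B200 price range: ${b200_low:,.0f} - ${b200_high:,.0f}")
print(f"Implied H100 gross margin (low end): {h100_margin:.1%}")
```

The implied $37,500-$48,000 bracket overlaps Huang's quoted $30,000-$40,000 only at its lower end, which underlines why these per-unit figures are best read as ballpark estimates.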
Sources:
Tae Kim Tweet, Tom's Hardware, CNBC, NBC, The Next Platform
18 Comments on Jensen Huang Discloses NVIDIA Blackwell GPU Pricing: $30,000 to $40,000
I'm still searching for an explanation as to why they didn't produce this new chip on 3 nm. These R&D numbers seem even stranger when you consider that each chip offers minimal architectural advancement; the biggest improvement comes from gluing two chips together...
You see, you can't sell a garbage RTX 5060 Ti that is 5% faster than an RTX 4070 S for $1,000 if the latter already costs around $600.
You need a very significant performance increase which won't happen... because greed.
But greed means that either way they are approaching a wall that will make the whole GPU segment DOA.
Leather jacket man does this at least twice a day lol
nVidia can charge as much as they want as long as no one else is capable enough to compete with them.
I would really like to see what nVidia could produce if they had an x86 license.
It's similar to AMD's Zen 1 in that both dies are the same, but different in that Zen 1 chips are designed as parts of the whole and modular in nature. That's why AMD was able to add additional chiplets and have them interoperate. Meanwhile, the Nvidia design appears to be fixed in nature: you get two full dies connected. By extension, Nvidia's is not a chiplet architecture.
If Nvidia had the technical knowledge to implement a chiplet architecture, it makes little sense why they'd go with two big dies over smaller dies that would be vastly cheaper to produce with higher yields. In addition, Nvidia is clearly not going to be able to use those massive dies across all product segments. This means that, like other monolithic products, you'll need different dies for the various SKUs. In a chiplet-based architecture you build your SKUs out of chiplets to address all segments of the market, which allows a lot of flexibility in terms of which parts of the market you allocate product to, and lets you bin chiplets across your entire yield rather than just the yield of a specific SKU. Nvidia's design appears to lack modularity and scalability, which fundamentally makes it two monolithic dies and not chiplets.
Basically both ARM and nVidia would destroy Intel and AMD...
nGreedia are trying to argue that their new ARM servers are amazing, but all I can see is an expensive platform, with significantly less performance than AMD's Epyc.