Wednesday, March 20th 2024

Jensen Huang Discloses NVIDIA Blackwell GPU Pricing: $30,000 to $40,000

Jensen Huang has been talking to media outlets following the conclusion of his keynote presentation at NVIDIA's GTC 2024 conference; a CNBC "exclusive" interview with the Team Green boss has caused a stir in tech circles. Jim Cramer's long-running "Squawk on the Street" trade segment hosted Huang for just under five minutes, and the presenter labelled the latest edition of GTC the "Woodstock of AI." NVIDIA's leader reckoned that around $1 trillion worth of industry was represented at this year's event, with attendees turning up to witness the unveiling of "Blackwell" B200 and GB200 AI GPUs. In the interview, Huang estimated that his company had invested around $10 billion in the research and development of its latest architecture: "we had to invent some new technology to make it possible."

Industry watchers have seized on a major disclosure from the televised CNBC report: Huang revealed that his next-gen AI GPUs "will cost between $30,000 and $40,000 per unit." NVIDIA (and its rivals) are not known to publicly announce price ranges for AI and HPC chips; leaks from hardware partners and individuals within industry supply chains are the "usual" sources. An investment banking company has already delved into alleged Blackwell production costs, as shared by Tae Kim/firstadopter: "Raymond James estimates it will cost NVIDIA more than $6000 to make a B200 and they will price the GPU at a 50-60% premium to H100...(the bank) estimates it costs NVIDIA $3320 to make the H100, which is then sold to customers for $25,000 to $30,000." Huang's figure should be treated as an approximation, since NVIDIA normally supplies these chips as building blocks within larger configured systems, and final pricing varies with the configuration.
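For rough context, the gross margins implied by these figures are easy to sanity-check. A minimal sketch in Python, assuming the Raymond James cost estimates and the quoted selling prices hold as stated (these are third-party estimates, not confirmed NVIDIA numbers):

```python
# Back-of-the-envelope gross-margin check using the figures quoted
# above (Raymond James cost estimates, stated/reported prices).
# Third-party estimates only, not confirmed NVIDIA numbers.

def gross_margin(price: float, cost: float) -> float:
    """Gross margin as a fraction of the selling price."""
    return (price - cost) / price

chips = {
    "B200": (6_000, (30_000, 40_000)),  # est. cost, Huang's stated range
    "H100": (3_320, (25_000, 30_000)),  # est. cost, reported street prices
}

for name, (cost, (low, high)) in chips.items():
    print(f"{name}: implied gross margin "
          f"{gross_margin(low, cost):.0%} to {gross_margin(high, cost):.0%}")

# B200: implied gross margin 80% to 85%
# H100: implied gross margin 87% to 89%
```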
Sources: Tae Kim Tweet, Tom's Hardware, CNBC, NBC, The Next Platform

18 Comments on Jensen Huang Discloses NVIDIA Blackwell GPU Pricing: $30,000 to $40,000

#2
Denver
If it costs $30k they are starting to give up some of their obscene profit margins.

I'm still searching for an explanation as to why they didn't produce this new chip at 3nm. These R&D numbers seem even stranger when you consider that each chip offers minimal architectural advancement; the biggest improvement comes from gluing two chips together...
#4
bonehead123
T0@st: "we had to invent some new technology to make it possible."
Yea, and it's called "The NEW Jacket financing plan" or "Fleecing da AI monster- 2024 & Beyond", hahahaha

........
#5
thesmokingman
Denver: "If it costs $30k they are starting to give up some of their obscene profit margins.

I'm still searching for an explanation as to why they didn't produce this new chip at 3nm. These R&D numbers seem even stranger when you consider that each chip offers minimal architectural advancement; the biggest improvement comes from gluing two chips together..."
He's BS'ing everyone with that R&D cost, 70%+ margin... lol.
#6
oxrufiioxo
thesmokingman: "He's BS'ing everyone with that R&D cost, 70%+ margin... lol."
Given that the chip costs $6k-ish to make, it's probably 80%+ lmao. I mean, if it were me I'd be charging as much as companies are willing to spend, so I can't really blame them.
#7
ghazi
And some of you guys are still hoping for another $700 Gx102. Don't think Jensen cares about your market segment anymore...
#8
Raiden85
Going by those prices, the consumer 5090 will be $2,000 easily :eek:
#9
stimpy88
Total BS from a total BS human. We are looking at another ±30% price increase over the awful current gen. $1,000 for a midrange card.
#10
ARF
Denver: "I'm still searching for an explanation as to why they didn't produce this new chip at 3nm."
If the chips were backported from 3nm to 5nm+, that means they failed to meet the initial performance targets. Maybe 3nm is broken for such large and extremely power-hungry chips?
stimpy88: "Total BS from a total BS human. We are looking at another ±30% price increase over the awful current gen. $1,000 for a midrange card."
That won't work, because there is a red line for gamers beyond which sales will be physically impossible.
You see, you can't sell a garbage RTX 5060 Ti that is 5% faster than an RTX 4070 S for $1,000 if the latter already costs around $600.
You need a very significant performance increase, which won't happen... because greed.
But greed means that either way they are approaching a wall that will make the whole GPU segment DOA.
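A quick performance-per-dollar calculation makes the point concrete; a minimal sketch in Python using the hypothetical figures from the comment above (illustrative numbers, not real benchmarks or pricing):

```python
# Performance-per-dollar check using the hypothetical figures from
# the comment above (illustrative numbers only, not real data).

old_perf, old_price = 1.00, 600    # baseline card at ~$600
new_perf, new_price = 1.05, 1000   # 5% faster card at $1,000

old_value = old_perf / old_price   # performance per dollar
new_value = new_perf / new_price

print(f"Baseline value: {old_value * 1000:.2f} perf per $1000")
print(f"New card value: {new_value * 1000:.2f} perf per $1000")
print(f"Value regression: {1 - new_value / old_value:.0%}")

# Baseline value: 1.67 perf per $1000
# New card value: 1.05 perf per $1000
# Value regression: 37%
```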
#11
Noyand
oxrufiioxo: "Given that the chip costs $6k-ish to make, it's probably 80%+ lmao. I mean, if it were me I'd be charging as much as companies are willing to spend, so I can't really blame them."
Also worth keeping in mind that unlike us poor mortals, the people buying these are probably making their money back and more. If I were selling something that allows people to make millions, I would charge accordingly TBH.
#12
ThrashZone
Hi,
Leather jacket man does this at least twice a day lol
#13
gffermari
If it were easy to R&D and produce a chip that capable, more companies would have done it already.
nVidia can charge as much as they want as long as no one else is capable enough to compete with them.

I would really like to see what nVidia could produce if they had an x86 license.
#14
Prima.Vera
gffermari: "If it were easy to R&D and produce a chip that capable, more companies would have done it already.
nVidia can charge as much as they want as long as no one else is capable enough to compete with them.

I would really like to see what nVidia could produce if they had an x86 license."
They have an ARM license and are supposedly going to release a very competitive CPU this year. Time will tell, I guess...
#15
evernessince
I think Nvidia is sort of tipping its hand in regards to technical limitations by demonstrating that they have two big dies that act as one.

It's similar to AMD's Zen 1 in that both dies are the same, but different in that Zen 1 chips were designed as modular parts of a whole. That is why AMD was able to add additional chiplets and have them interoperate. Meanwhile, the Nvidia design appears to be fixed in nature: you get two full dies connected. By extension, Nvidia's is not a chiplet architecture.

If Nvidia had the technical knowledge to implement a chiplet architecture, it makes little sense why they'd go with two big dies over smaller dies that would be vastly cheaper to produce with higher yields. In addition, Nvidia is clearly not going to be able to use those massive dies across all product segments, which means that, like other monolithic products, different dies will be needed for the various SKUs. In a chiplet-based architecture you build your SKUs out of chiplets to address all segments of the market, which allows a lot of flexibility in which parts of the market you allocate product to, and it lets you bin chiplets across your entire yield rather than just the yield of a specific SKU. Nvidia's design appears to lack modularity and scalability, which fundamentally makes it two monolithic dies, not chiplets.
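For intuition on the yield point, the standard Poisson die-yield model makes the monolithic-versus-chiplet trade-off concrete. A minimal sketch in Python, with a purely illustrative defect density and die sizes (not actual TSMC or NVIDIA figures):

```python
import math

# Classic Poisson die-yield model: Y = exp(-A * D0), where A is die
# area and D0 is defect density. The defect density and die sizes
# below are illustrative assumptions, not actual TSMC/NVIDIA figures.

def die_yield(area_mm2: float, d0_per_cm2: float = 0.1) -> float:
    """Fraction of defect-free dies under a Poisson defect model."""
    return math.exp(-(area_mm2 / 100.0) * d0_per_cm2)

print(f"800 mm^2 monolithic die: {die_yield(800):.1%} yield")
print(f"200 mm^2 chiplet die:    {die_yield(200):.1%} yield")

# 800 mm^2 monolithic die: 44.9% yield
# 200 mm^2 chiplet die:    81.9% yield
#
# Four 200 mm^2 chiplets cover the same silicon area as one 800 mm^2
# die, but a single defect scraps only a quarter of that area, and
# good chiplets can be binned across every SKU rather than just one.
```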
#16
gffermari
Prima.Vera: "They have an ARM license and are supposedly going to release a very competitive CPU this year. Time will tell, I guess..."
The same goes for ARM: ARM is not allowed to produce x86 CPUs.
Basically both ARM and nVidia would destroy Intel and AMD...
#17
AnotherReader
gffermari: "If it were easy to R&D and produce a chip that capable, more companies would have done it already.
nVidia can charge as much as they want as long as no one else is capable enough to compete with them.

I would really like to see what nVidia could produce if they had an x86 license."
Nvidia's previous attempts at CPUs haven't been earth-shattering, so I doubt they would have done any better than AMD or Intel at x86 CPUs.
#18
stimpy88
gffermari: "The same goes for ARM: ARM is not allowed to produce x86 CPUs.
Basically both ARM and nVidia would destroy Intel and AMD..."
ARM on the consumer desktop won't ever happen, and probably won't go very far outside of niche use cases in the datacenter either. There is no ARM architecture or product that is as fast as a 96-core Zen 4. ARM can't even get its own products to compete with the performance of x86/AMD64.

nGreedia are trying to argue that their new ARM servers are amazing, but all I can see is an expensive platform with significantly less performance than AMD's Epyc.