Friday, February 23rd 2024

NVIDIA Expects Upcoming Blackwell GPU Generation to be Capacity-Constrained

NVIDIA is anticipating supply issues for its upcoming Blackwell GPUs, which are expected to significantly improve artificial intelligence compute performance. "We expect our next-generation products to be supply constrained as demand far exceeds supply," said Colette Kress, NVIDIA's chief financial officer, during a recent earnings call. This prediction of scarcity comes just days after an analyst noted much shorter lead times for NVIDIA's current flagship Hopper-based H100 GPUs tailored to AI and high-performance computing. The eagerly anticipated Blackwell architecture and B100 GPUs built on it promise major leaps in capability—likely spurring NVIDIA's existing customers to place pre-orders already. With skyrocketing demand in the red-hot AI compute market, NVIDIA appears poised to capitalize on the insatiable appetite for ever-greater processing power.

However, the scarcity of NVIDIA's products may present an excellent opportunity for significant rivals like AMD and Intel. If either company can offer a product that beats NVIDIA's current H100 and back it with a suitable software stack, customers may be willing to jump to their offerings rather than wait out months-long lead times. Intel is preparing the next-generation Gaudi 3 and working on the Falcon Shores accelerator for AI and HPC. AMD is shipping its Instinct MI300 accelerator, a highly competitive product, while already working on the MI400 generation. It remains to be seen whether AI companies will begin adopting non-NVIDIA hardware or remain loyal customers and accept the longer lead times of the new Blackwell generation. That said, capacity constraints should only be a problem at launch, with availability improving from quarter to quarter. As TSMC expands CoWoS packaging capacity and 3 nm production, NVIDIA's allocation of 3 nm wafers will likely improve over time as the company shifts its priority from H100 to B100.
Sources: Q4 Earnings Call Transcript, via Tom's Hardware

64 Comments on NVIDIA Expects Upcoming Blackwell GPU Generation to be Capacity-Constrained

#26
DeathtoGnomes
$NVDA needs to keep rising, so what better way but to create an artificial supply shortage ( reminiscent of the memory shortages?), but who is gonna prove it for an anti-trust suit?
Posted on Reply
#27
wolf
Performance Enthusiast
DeathtoGnomes$NVDA needs to keep rising, so what better way but to create an artificial supply shortage ( reminiscent of the memory shortages?), but who is gonna prove it for an anti-trust suit?
Isn't it all something they can make up anyway? If they project they can sell 50 million units and can only make 30 million, there's a constraint. The magic is that, to some degree, they can project whatever they choose, and thus create this headline.

"we won't be able to make enough of these, they'll fly off shelves", because of course they've said some version of that, and it's probably true.
Posted on Reply
#28
Beermotor
LionheartJensen - "What narrative are we using to justify our BS prices for newer GPU's?"

Nvidia Sales goon - "Supply constraint?"

Jensen - "Perfect!"
This is the same crap they (Nvidia and AMD both) pulled when this most recent generation of cards came out.
ilyonAnd without any trace of RTX™ and DLSS™, who cares?
Easy now, Jensen. ;)
3valatzyThis is because it will be built on the 3 nm process.

Meanwhile, AMD's new RX 8700 XT (top RDNA 1.4 part) will stay on the older 4 nm process.
Prediction is double the raster performance, and triple the ray-tracing performance.

RTX 5090 will be 60% faster than RTX 4090, and launching between Q4 this year and Q2 next year, depending on how badly AMD stumbles.
What on earth is RDNA 1.4?

RDNA3 was a learning process for them with regard to MCM packaging and the wins/losses that come from that approach and RDNA4 is supposed to essentially be a bug-fixed and much more optimized version of RDNA3.

RDNA5 is supposed to be six to nine months behind RDNA4 which seems to indicate that RDNA4 is more of a half-generation GPU series, more like a "super" release but targeted toward the midrange or lower cards.
Vayra86Please let me vape some out of that crystal ball you've got there. I love me a dose of wishful thinking
It isn't wishful thinking as much as it's a Twitter/Discord circle-jerk by people that went to the YouTube school of Engineering.
londisteYou mean things like DLSS? :D
There is a whole bunch of smart people working on all that, regardless of how well the transistors shrink.
DLSS, FSR, and XeSS's days are numbered.

Microsoft and the Khronos Group are sick of vendors doing proprietary shit and have some smart people working on vendor-independent upscaling.

Edit: before any of you start pounding out an impassioned response to tell me that DLSS is hardware accelerated, whatever DirectX and Vulkan upscaling standards come out of this will allow for optional hardware acceleration either on the GPU or CPU. Nvidia/AMD/Intel can write their drivers so the upscaling tech executes using the same hardware features they're currently using. That's how it went down with multitexturing, shaders, deferred rendering, and so on.
Posted on Reply
#29
Onasi
iameatingjamThat's not really what I meant. I was thinking of more efficient designs on the existing nodes. But I suppose software solutions like DLSS fit into that.
It is theoretically possible, after all, we have seen what NVidia managed to do with their backs against the wall and no potential die shrink - that was Maxwell. Significant performance and efficiency increase on the same node as Kepler. But there is a caveat to that - it coincided with a radical change in rendering techniques, what with the move to deferred rendering and new consoles. Maxwell was designed with it in mind, Kepler wasn’t and so it quickly fell behind. There is no shift like that on the horizon now. Well, unless we count RT, and that DOES need more raw transistors for RT cores.
Posted on Reply
#30
iameatingjam
OnasiIt is theoretically possible, after all, we have seen what NVidia managed to do with their backs against the wall and no potential die shrink - that was Maxwell. Significant performance and efficiency increase on the same node as Kepler. But there is a caveat to that - it coincided with a radical change in rendering techniques, what with the move to deferred rendering and new consoles. Maxwell was designed with it in mind, Kepler wasn’t and so it quickly fell behind. There is no shift like that on the horizon now. Well, unless we count RT, and that DOES need more raw transistors for RT cores.
All I'm saying is future performance increases are going to come less and less from node shrinks. Whatever that manifests as, I can't say.
Posted on Reply
#31
Onasi
iameatingjamAll I'm saying is future performance increases are going to come less and less from node shrinks. Whatever that manifests as, I can't say.
Chiplets, most likely. Large monolithic GPUs are already a losing proposition, the AD102 apparently has terrible yields, so swapping to some form of MCM would make sense for everyone involved.
I also wouldn’t rule out shrinks just yet. Sure, we may be approaching limits of silicon as a material, but other options exist. Graphene and synthetic diamonds have shown some potential here.
Posted on Reply
#32
Kn0xxPT
Nvidia is such a behemoth in the GPU market that they don't even care... their product is just flat-out better than the competition. You want the best, pay for it.
Telling NVIDIA to care about gamers? My friends, they are not a social services company.

It hurts, yes. Gamers and crypto have been feeding the monster... now we have this.

We just need to have some solid alternatives from AMD and Intel.
Posted on Reply
#33
Soul_
Pretty happy with my 3xxx; it's not like any of the new games have been good enough to justify an upgrade. So, they can keep their supply.
Posted on Reply
#34
AnotherReader
iameatingjamAll I'm saying is future performance increases are going to come less and less from node shrinks. Whatever that manifests as, I can't say.
All of the architectural tricks that increase performance require more transistors which means node shrinks are the most important part of the performance ladder and that's unlikely to change. We may move to new materials at some point, but architectural techniques rely on node shrinks as well.
Posted on Reply
#35
Gooigi's Ex
LionheartJensen - "What narrative are we using to justify our BS prices for newer GPU's?"

Nvidia Sales goon - "Supply constraint?"

Jensen - "Perfect!"
DEADASS this ^
BeermotorThis is the same crap they (Nvidia and AMD both) pulled when this most recent generation of cards came out.

DLSS, FSR, and XeSS's days are numbered.

Microsoft and the Khronos Group are sick of vendors doing proprietary shit and have some smart people working on vendor-independent upscaling.

Edit: before any of you start pounding out an impassioned response to tell me that DLSS is hardware accelerated, whatever DirectX and Vulkan upscaling standards come out of this will allow for optional hardware acceleration either on the GPU or CPU. Nvidia/AMD/Intel can write their drivers so the upscaling tech executes using the same hardware features they're currently using. That's how it went down with multitexturing, shaders, deferred rendering, and so on.
This person gets it!
Posted on Reply
#36
Random_User
This was just obvious.
3valatzyThis is because it will be built on the 3 nm process.

Meanwhile, AMD's new RX 8700 XT (top RDNA 1.4 part) will stay on the older 4 nm process.
Prediction is double the raster performance, and triple the ray-tracing performance.

RTX 5090 will be 60% faster than RTX 4090, and launching between Q4 this year and Q2 next year, depending on how badly AMD stumbles.
And be around $3K, while keeping the old gen cards at inflated prices as well.
wolfLisa Su - "perfect, we'll just copy that pricing model too"
Why bother? They might as well collude with Nvidia; there's no pressure from regulators. Consumers will have to buy at whatever price these come.
Posted on Reply
#37
SOAREVERSOR
hatIt's a bit like insanity if you ask me. Why are we pushing so hard for smaller and smaller manufacturing processes with crap yields that come from a singular place? This current path only results in expensive, hot running, loud, watt guzzling equipment that there are too few to satisfy demand (allegedly). Maybe if we gave the process engineers a break we'd all be better off for it.
Because people want performance gains, and it's not just corporate clients. PC gamers: I want performance gains and more games, don't care if companies lose money, don't care if companies go broke, I want it and I want it now, stomps footsies, and the toddler goes back to their room. Corporate clients: I want gains, tell me the cost.

The obvious fix for this is to get off PC gaming. But as that won't happen, it's cloud or $30K for a GPU. So pick one of the three.
Posted on Reply
#38
Prima.Vera
Prepare for the most overpriced cards in history.
What's sad is, suckers will still buy no matter the price, keeping nGreedia happy and the stock up.
Let's hope Intel and AMD will bring some competition, but that's hard to believe...
Bwaze2020, RTX 3080 - $700 - €700
2022, RTX 4080 - $1200 - €900
2024, RTX 5080 - $2040 - €1100

2026, RTX 6080 - $3468 - €1300
2028, RTX 7080 - $5896 - €1500
2030, RTX 8080 - $10022 - €1700
2032, RTX 9080 - $17038 - €1900
2034, RTX 10080 - $28965 - €2100
Fixed.
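For what it's worth, the joke list above follows a simple pattern: each generation multiplies the USD price by roughly 1.7x and adds a flat €200. A quick sketch of that extrapolation (the 1.7x multiplier is inferred from the numbers, not stated anywhere, and matches the list to within a dollar of rounding):

```python
# Tongue-in-cheek sketch of the extrapolation quoted above (not real pricing):
# assume USD grows ~1.7x per generation and EUR adds a flat 200 per generation,
# starting from the RTX 4080's $1200 / 900 EUR in 2022.
def extrapolate(usd=1200.0, eur=900, start_year=2022, gens=7):
    rows = []
    for i in range(gens):
        rows.append((start_year + 2 * i, round(usd), eur))
        usd *= 1.7  # inferred generation-over-generation USD multiplier
        eur += 200  # flat EUR step per generation
    return rows

for year, usd, eur in extrapolate():
    print(f"{year}: ${usd} - {eur} EUR")
```

Running it reproduces the quoted figures through 2034, give or take a dollar of rounding drift in the middle entries.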
Posted on Reply
#39
Wirko
Nvidia and board partners will only sell new GPUs to PCMR at auctions, not in retail.
Posted on Reply
#40
Lycanwolfen
Or just build an actual graphics card and not an AI card. When the RTX cards came out, they weren't real graphics cards; they were AI cards, more AI for more scaling and DLSS. GTX was the last of the pure graphics cards, the pure hardware cards: no AI scalers, nothing. Load up a game and play.
Posted on Reply
#41
stimpy88
Well, if you deliberately don't make enough... nGreedia just being nGreedia.
Posted on Reply
#42
Prima.Vera
Why don't they take a more flexible approach? Make a video card with two different chips: one GPU for pure graphics acceleration, and one for AI, RTRT, upscaling and all related...
Remember when video cards had two chips? One for 2D graphics and one for 3D?
Posted on Reply
#43
Dr. Dro
hatIt's a bit like insanity if you ask me. Why are we pushing so hard for smaller and smaller manufacturing processes with crap yields that come from a singular place? This current path only results in expensive, hot running, loud, watt guzzling equipment that there are too few to satisfy demand (allegedly). Maybe if we gave the process engineers a break we'd all be better off for it.
Because it's the only way such advanced processors are feasible. Semiconductors aren't magic.
LionheartJensen - "What narrative are we using to justify our BS prices for newer GPU's?"

Nvidia Sales goon - "Supply constraint?"

Jensen - "Perfect!"
Of course we've heard nothing of the sort regarding the unjustifiable, insane consumer-grade Ryzen Threadripper prices, have we? Just gotta dog on Nvidia for some quick acceptance points.
stimpy88Well, if you deliberately don't make enough... nGreedia just being nGreedia.
Sure because they can just "make" them, it's not like they don't rely on TSMC, yields being good or anything. Apple's also totally not getting the lion's share of N3 wafers, iPhones are made of just pixie and fairy dust or something. Even Intel will use this node to make their next-gen Core processors. That's how insane the demand for this node has become.

But of course, you just "make" them for like, "really cheap" and then "charge thousands" because it's greed and not because they spent multiple billions on R&D and have actual constraints involving third parties, technology and at this scale, even the concept of physics itself. Money just solves (absolves) everything!
Posted on Reply
#44
Lionheart
Dr. DroBecause it's the only way such advanced processors are feasible. Semiconductors aren't magic.

Of course we've heard nothing of the sort regarding the unjustifiable, insane consumer-grade Ryzen Threadripper prices, have we? Just gotta dog on Nvidia for some quick acceptance points.

Sure because they can just "make" them, it's not like they don't rely on TSMC, yields being good or anything. Apple's also totally not getting the lion's share of N3 wafers, iPhones are made of just pixie and fairy dust or something. Even Intel will use this node to make their next-gen Core processors. That's how insane the demand for this node has become.

But of course, you just "make" them for like, "really cheap" and then "charge thousands" because it's greed and not because they spent multiple billions on R&D and have actual constraints involving third parties, technology and at this scale, even the concept of physics itself. Money just solves (absolves) everything!
"Just gotta dog on Nvidia for some quick acceptance points." WTF you on about? I just came here to make a funny yet truthful joke & you're chucking a hissy fit over it thinking I want likes & love, you reek of bitterness & if you wanna go complain about AMD's threadripper prices, be the 1st one & go make a thread about it and whinge there.
Posted on Reply
#45
sLowEnd
It's not uncommon for new high end parts to be in short supply at launch. It's been like that for decades.

How long the supply will be an issue after launch remains to be seen though.
Posted on Reply
#46
OneMoar
There is Always Moar
Man the SEC/FTC needs to bitch slap nvidia upside the head for playing games like this
making statements like this is stock manipulation 101 and super illegal
Posted on Reply
#47
Dr. Dro
OneMoarMan the SEC/FTC needs to bitch slap nvidia upside the head for playing games like this
making statements like this is stock manipulation 101 and super illegal
Actually, they're obligated to do so. It's in shareholders' direct best interests to be made legally aware of the fact that the company may potentially face supply chain and production difficulties, all of which have a direct impact on a potential earnings forecast and on the income itself. Regulatory agencies would need to be involved if Nvidia willingly misled investors on their actual situation.

The TSMC N3 node is in extreme demand, and the processors Nvidia has are easily amongst the most advanced using this node, which means that yields are not exactly perfect. Everything they said is true.
Posted on Reply
#48
Wirko
OneMoarMan the SEC/FTC needs to bitch slap nvidia upside the head for playing games like this
making statements like this is stock manipulation 101 and super illegal
On the other hand, they're compelled by law to state what the CFO stated to the investors. What she said is based on their projections and calculations, which in turn are based on realistic and probable predictions of supply, demand and other factors, headwinds, tailwinds, hurricanes, whatever. You think they can't prove that to the agencies?
Posted on Reply
#49
Minus Infinity
BeermotorRDNA3 was a learning process for them with regard to MCM packaging and the wins/losses that come from that approach and RDNA4 is supposed to essentially be a bug-fixed and much more optimized version of RDNA3.

RDNA5 is supposed to be six to nine months behind RDNA4 which seems to indicate that RDNA4 is more of a half-generation GPU series, more like a "super" release but targeted toward the midrange or lower cards.
RDNA5 is very late 2025 at the earliest and is more than 12 months behind RDNA4, which ships by Q4 2024.
RDNA4 may only be mid-range, but the 8700 XT-class GPU is said to be faster than the 7900 XT yet under $500.
Posted on Reply
#50
OneMoar
There is Always Moar
Dr. DroActually, they're obligated to do so. It's in shareholders' direct best interests to be made legally aware of the fact that the company may potentially face supply chain and production difficulties, all of which have a direct impact on a potential earnings forecast and on the income itself. Regulatory agencies would need to be involved if Nvidia willingly misled investors on their actual situation.

The TSMC N3 node is in extreme demand, and the processors Nvidia has are easily amongst the most advanced using this node, which means that yields are not exactly perfect. Everything they said is true.
True statements are the worst form of manipulation; that's what makes it problematic.
Posted on Reply