Friday, February 28th 2025

OpenAI Has "Run Out of GPUs" - Sam Altman Mentions Incoming Delivery of "Tens of Thousands"

Yesterday, OpenAI introduced its "strongest" GPT-4.5 model. A research preview build is only available to paying customers—Pro-tier subscribers fork out $200 a month for early access privileges. The company's CEO shared an update via social media, complete with a "necessary" hyping up of version 4.5: "it is the first model that feels like talking to a thoughtful person to me. I have had several moments where I've sat back in my chair and been astonished at getting actual good advice from an AI." There are apparent performance caveats—Sam Altman proceeded to add a short addendum: "this isn't a reasoning model and won't crush benchmarks. It's a different kind of intelligence, and there's a magic to it (that) I haven't felt before. Really excited for people to try it!" OpenAI had planned to make GPT-4.5 available to its audience of "Plus" subscribers, but major hardware shortages have delayed a roll-out to the $20 per month tier.

Altman disclosed his personal disappointment: "bad news: it is a giant, expensive model. We really wanted to launch it to Plus and Pro (customers) at the same time, but we've been growing a lot and are out of GPUs. We will add tens of thousands of GPUs next week, and roll it out to the plus tier then...Hundreds of thousands coming soon, and I'm pretty sure y'all will use every one we can rack up." Insiders believe that OpenAI is finalizing a proprietary AI-crunching solution, but a rumored mass production phase is not expected to kick off until 2026. In the meantime, Altman & Co. are still reliant on NVIDIA for new shipments of AI GPUs. Despite being a very important customer, OpenAI is reportedly not satisfied with the "slow" flow of Team Green's latest DGX B200 and DGX H200 platforms into server facilities. Several big players are developing in-house designs, in an attempt to wean themselves off prevalent NVIDIA technologies.
Sources: Sam Altman Tweet, Tom's Hardware, PC Gamer

48 Comments on OpenAI Has "Run Out of GPUs" - Sam Altman Mentions Incoming Delivery of "Tens of Thousands"

#3
ZoneDymo
gotta keep people believing otherwise it falls apart too soon.
#4
Vayra86
Sam Altman announces new ICO... erm I mean... yeah
#5
MentalAcetylide
My only question is what in the hell are they doing with all of this AI stuff? I would say concocting novel ways of bending people over while making them smile & milking them for as much money as possible, but I don't think they need AI for that given the general lack of prudence among the masses.
#6
mb194dc
Imagine how much money they're burning with all the pointless hardware purchases. They'll be lucky to keep going another 12 months.
#7
Tartaros
Then next week another Chinese startup comes up with an AI that runs on an MSX and Jensen strangles Sam with his leather jacket.
#8
Steevo
Just needs more money, more time, they swear they will achieve their goal... maybe they and fusion will both be ready in 50 years?
#9
trsttte
mb194dcImagine how much money they're burning with all the pointless hardware purchases. They'll be lucky to keep going another 12 months.
As long as M$ is willing to keep paying the bills they can last a long long time. That won't be forever though... Can't wait for the headlines trashing them in a couple weeks when Deepseek or any other upstart launches a model that's able to achieve 70, 80 or even 90% of what this does for a fraction of the cost
#10
john_
In the meantime, Altman & Co. are still reliant on NVIDIA for new shipments of AI GPUs. Despite being a very important customer, OpenAI is reportedly not satisfied about the "slow" flow of Team Green's latest DGX B200 and DGX H200 platforms into server facilities.
There is AMD out there, but I guess they don't have the software skills to incorporate AMD GPUs in their systems.
#11
freeagent
john_There is AMD out there, but I guess they don't have the software skills to incorporate AMD GPUs in their systems.
You sure about that?

Maybe AMD doesn't have the skills to incorporate the hardware needed.. just throwing that out there.
#12
lexluthermiester
T0@stOpenAI Has "Run Out of GPUs"
So sad... I can hear tiny violins playing...
#13
TheinsanegamerN
john_There is AMD out there, but I guess they don't have the software skills to incorporate AMD GPUs in their systems.
freeagentYou sure about that?

Maybe AMD doesn't have the skills to incorporate the hardware needed.. just throwing that out there.
It could also be both. Grifters are, by nature, lazy people, so they'll gravitate to nVidia since CUDA is already well documented and tools already exist. Deepwhale or whatever it was called showed AMD hardware can be very competitive if you build from the ground up. Of course, because only the most driven would ever do that, not having tools ready for clients puts you at a massive disadvantage.
#14
freeagent
CUDA is almost 20 years old. Quite the head start.
#15
Darmok N Jalad
"Sorry, Sam, we seem to have misplaced some ROPs somewhere. Once we find them, we'll be in touch."
#16
john_
freeagentYou sure about that?

Maybe AMD doesn't have the skills to incorporate the hardware needed.. just throwing that out there.
I see them begging Nvidia for GPUs, but Nvidia has probably given priority to Musk—because, well, he is in the government—and to Microsoft and Google and Meta. Add Deepseek to the mixture and OpenAI could drop many places from its position as leader in AI.
Also, with OpenAI thinking of building its own chips, and Altman being on bad terms with Musk, who let me repeat is in the government, OpenAI probably gets the last place on the priority list of Nvidia's top customers. They either differentiate themselves, or keep begging.
#17
MentalAcetylide
TartarosThen next week another chinese comes up with an AI that runs on a MSX and Jensen strangulates Sam with his leather jacket.
The Bard's Tale III has a weapon called Kali's Garrote; Nvidia has Huang's Garrote, equipped to screw customers & multiply quarterly earnings.
#18
Rover4444
They know they can use Radeon Instincts instead, right? LLMs don't need the image generation power of NVIDIA hardware, they could just put all the stuff doing inference on Instincts and reserve the NVIDIA hardware for training. Unless they need more density, then like, what? Are you training a 1T+ model or something?
T0@stOpenAI Has "Run Out of GPUs"
... And that's a GOOD thing!
#19
DaemonForce
Darmok N Jalad"Sorry, Sam, we seem to have misplaced some ROPs somewhere. Once we find them, we'll be in touch."
OMFG what a nightmare. If this ROP issue is way bigger than being communicated, that's a LOT of defects.
For every two dozen or so cards missing their 8 ROPs, that's a whole other flagship card's worth gone missing.
That's actually expensive.
#20
8tyone
The More You Buy, The More You Save
#21
Dragokar
freeagentYou sure about that?

Maybe AMD doesn't have the skills to incorporate the hardware needed.. just throwing that out there.
Well, from what I know of that segment, the hardware is okay; the software support still lags behind... but yeah, it would still be smart to not rely on only one supplier. I guess many companies nowadays rely on just-in-time and a single source of supply. That's okay as long as everything is fine. ;)
#22
R0H1T
ZoneDymogotta keep people believing otherwise it falls apart too soon.
What, you mean the "AI" scam's run its course! No freakin way :laugh:
mb194dcImagine how much money they're burning with all the pointless hardware purchases. They'll be lucky to keep going another 12 months.
Imagine the energy, then job losses & maybe rehiring the lot of them because your stupid "AI" can't distinguish between sarcasm and a real question :shadedshu:
#23
Beermotor
mb194dcImagine how much money they're burning with all the pointless hardware purchases. They'll be lucky to keep going another 12 months.
Yep I think the last figures I heard were they spent nine billion in 2024 to make four billion in revenue. Even their $200/month subscriptions are losing them money.

The main issue with the current AI stuff is the utility of it just isn't there for anything other than psychotic chatbots, pictures of big-titted elf girls, and source code analysis.
#24
mb194dc
BeermotorYep I think the last figures I heard were they spent nine billion in 2024 to make four billion in revenue. Even their $200/month subscriptions are losing them money.

The main issue with the current AI stuff is the utility of it just isn't there for anything other than psychotic chatbots, pictures of big-titted elf girls, and source code analysis.
More importantly, competitors have shown they can make better models for a few million...

No idea how OpenAI expects to compete; their costs are 100x too high and they'll most probably be undercut into bankruptcy.

Just a matter of time before open source competitors show up trained on Western data, instead of the Chinese stuff.
#25
Rover4444
BeermotorThe main issue with the current AI stuff is the utility of it just isn't there for anything other than psychotic chatbots, pictures of big-titted elf girls, and source code analysis.
... To be totally fair though, all of those have MASSIVE usecases...