Monday, February 10th 2025

Report Suggests OpenAI Finalizing Proprietary GPU Design

Going back a year, we started hearing about an OpenAI proprietary AI chip project—this (allegedly) highly ambitious endeavor included grand plans for a dedicated fabrication network. TSMC was reportedly in the equation, but its executives are said to have scoffed at the AI research organization's ardent requests. Fast-forward to the present day: OpenAI appears to be actively pursuing a proprietary GPU design through traditional means. A Reuters exclusive report points to 2025 being an important year for the company's aforementioned "in-house" AI chip—the publication believes that OpenAI's debut silicon design has reached the finalization stage. Insiders have divulged that the project is only months away from being submitted to TSMC for "taping out." The foundry's advanced 3-nanometer process technology is reportedly on the cards. A Reuters source reckons that the unnamed chip features: "a commonly used systolic array architecture with high-bandwidth memory (HBM)...and extensive networking capabilities."
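For context, the "systolic array" named in that Reuters description is the same basic dataflow behind Google's TPUs and most dedicated AI accelerators: a grid of multiply-accumulate cells passes operands to neighboring cells every clock cycle, so a matrix multiplication completes with minimal traffic to and from memory. Nothing about OpenAI's actual design is public, so purely as an illustrative sketch, the short NumPy simulation below models a generic output-stationary systolic array computing C = A x B, with inputs skewed so that a[i, k] meets b[k, j] at cell (i, j) on cycle i + j + k:

import numpy as np

def systolic_matmul(A, B):
    """Cycle-level sketch of an output-stationary systolic array computing A @ B.

    Illustrative only: operand widths, tiling, and HBM traffic of a real
    accelerator are ignored. Each cell (i, j) keeps a running partial sum;
    A streams in from the left (row i delayed by i cycles) and B streams in
    from the top (column j delayed by j cycles).
    """
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))          # output-stationary accumulators, one per cell
    a_reg = np.zeros((n, m))      # operand latches between horizontally adjacent cells
    b_reg = np.zeros((n, m))      # operand latches between vertically adjacent cells
    for t in range(n + m + k - 2):           # last product lands on cycle n + m + k - 3
        a_reg[:, 1:] = a_reg[:, :-1].copy()  # shift A operands one cell to the right
        b_reg[1:, :] = b_reg[:-1, :].copy()  # shift B operands one cell down
        for i in range(n):                   # inject skewed A values at the left edge
            step = t - i
            a_reg[i, 0] = A[i, step] if 0 <= step < k else 0.0
        for j in range(m):                   # inject skewed B values at the top edge
            step = t - j
            b_reg[0, j] = B[step, j] if 0 <= step < k else 0.0
        C += a_reg * b_reg                   # every cell multiplies and accumulates
    return C

A = np.random.rand(3, 4)
B = np.random.rand(4, 5)
assert np.allclose(systolic_matmul(A, B), A @ B)

The skewing keeps every cell busy without any global data movement; scaling the same idea to thousands of cells fed by HBM stacks is, in essence, what TPU-class training chips do.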

Broadcom is reportedly assisting with the development of OpenAI's in-house design—we heard about rumored negotiations taking place last summer. Jim Keller's tempting offer—of creating an AI chip for less than $1 trillion—was ignored early last year; OpenAI has instead assembled its own internal team of industry veterans. The October 2024 news cycle posited that former Google TPU engineers were drafted in as team leaders, with a targeted mass production window scheduled for 2026. The latest Reuters news article reiterates this projected timeframe, albeit dependent on the initial tape-out going "smoothly." OpenAI's chip department has grown to around forty individuals in recent months, according to industry moles—a small number relative to the headcounts at "Google or Amazon's AI chip program."
Sources: Reuters, Wccftech

10 Comments on Report Suggests OpenAI Finalizing Proprietary GPU Design

#1
Dahita
This is the best news ever! If Nvidia has a competitor in that segment, maybe they'll remember their original customers: the gamers.
#2
Assimilator
Hahaha good luck getting that fabbed, losers. Apple's not going to give up the 3nm capacity they bought.
#3
windwhirl
Assimilator: Hahaha good luck getting that fabbed, losers. Apple's not going to give up the 3nm capacity they bought.
Nor the 2nm they likely already reserved in advance :roll:
#4
Athanasius
When are people going to stop calling new devices targeted purely at AI (and possibly other compute) workloads, especially if they don't even have any form of video out, GPUs?

Call them AIPUs if that's their purpose. Perhaps Matrix Processing Unit, Massively Parallel Processing Unit, or something else, for a more generic term?
#5
QuietBob
Athanasius: When are people going to stop calling new devices targeted purely at AI (and possibly other compute) workloads, especially if they don't even have any form of video out, GPUs?
Call them AIPUs if that's their purpose. Perhaps Matrix Processing Unit, Massively Parallel Processing Unit, or something else, for a more generic term?
GPU = General Purpose/Processing Unit
Not to give him any ideas, but I think Jensen's on to something :cool:
#6
Franzen4Real
Athanasius: When are people going to stop calling new devices targeted purely at AI (and possibly other compute) workloads, especially if they don't even have any form of video out, GPUs?
Call them AIPUs if that's their purpose. Perhaps Matrix Processing Unit, Massively Parallel Processing Unit, or something else, for a more generic term?
Generic Processing Unit? :D

edit: QuietBob beat me to it :laugh:
#7
Denver
I don't think it's a GPU... at best an ASIC like Google's TPUs.
#8
trsttte
QuietBob: GPU = General Purpose/Processing Unit
Not to give him any ideas, but I think Jensen's on to something :cool:
Franzen4Real: Generic Processing Unit? :D
edit: QuietBob beat me to it :laugh:
Nope, that's not what the G stands for. Their instruction sets have nothing generic about them either, and they only perform a specific set of operations compared to other processing units like, I don't know, a CPU ;)
#9
ScaLibBDP
I'd give it a 50-50 chance that OpenAI will succeed, because such a move is Not simple.

Why? OpenAI currently uses more than 10,000 GPUs to train LLMs and has already spent more than 500 million dollars (note: I think it is closer to 1 billion dollars now).

AMD and Intel failed to compete in that segment of AI (training), but it is a good reminder to NVIDIA that they are Not alone.
#10
kondamin
And just when they finish, someone finds a new way to generate an LLM that's completely different, but a thousand times faster and totally incompatible with their $7 trillion project