Monday, February 10th 2025
Report Suggests OpenAI Finalizing Proprietary GPU Design
Going back a year, we started hearing about an OpenAI proprietary AI chip project—the (allegedly) highly ambitious endeavor included grand plans for a dedicated fabrication network. TSMC was reportedly in the equation, but is said to have laughed off the AI research organization's ardent requests. Fast-forward to the present day: OpenAI appears to be actively pursuing a proprietary GPU design through traditional means. A Reuters exclusive report points to 2025 being an important year for the company's aforementioned "in-house" AI chip—the publication believes that OpenAI's debut silicon design has reached the finalization stage. Insiders have divulged that the project is only months away from being submitted to TSMC for "taping out." The foundry's advanced 3-nanometer process technology is reportedly on the cards. A Reuters source reckons that the unnamed chip features: "a commonly used systolic array architecture with high-bandwidth memory (HBM)...and extensive networking capabilities."
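For context, "systolic array" refers to the grid-of-multiply-accumulate design popularized by Google's TPUs: weights sit in a mesh of processing elements (PEs) while activations stream through and partial sums accumulate along the grid. The Python sketch below is a purely illustrative emulation of what such an array computes (a matrix multiply); it is not derived from OpenAI's design, and every name in it is made up for this example.

import numpy as np

def systolic_matmul(A, W):
    # Emulate a weight-stationary systolic array: the PE at grid position
    # (step, j) holds weight W[step, j]; activations enter from the left
    # and partial sums accumulate as they flow down each PE column.
    m, k = A.shape
    k2, n = W.shape
    assert k == k2, "inner dimensions must match"
    out = np.zeros((m, n))
    for i in range(m):              # one activation row per wavefront
        for j in range(n):          # one PE column per output column
            acc = 0.0
            for step in range(k):   # partial sum flowing down the column
                acc += A[i, step] * W[step, j]
            out[i, j] = acc
    return out

# Sanity check against a conventional matrix multiply.
A = np.random.rand(4, 8)
W = np.random.rand(8, 3)
assert np.allclose(systolic_matmul(A, W), A @ W)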
Broadcom is reportedly assisting with the development of OpenAI's in-house design—we heard about rumored negotiations taking place last summer. Jim Keller's tempting offer—to create an AI chip for less than $1 trillion—was ignored early last year; OpenAI has instead assembled its own internal team of industry veterans. The October 2024 news cycle posited that former Google TPU engineers were drafted in as team leaders, with mass production targeted for 2026. The latest Reuters article reiterates this projected timeframe, albeit dependent on the initial tape-out going "smoothly." OpenAI's chip department has grown to around forty individuals in recent months, according to industry moles—a small number relative to the headcounts at "Google or Amazon's AI chip program."
Sources:
Reuters, Wccftech
10 Comments on Report Suggests OpenAI Finalizing Proprietary GPU Design
Call them AIPUs if that's their purpose. Perhaps Matrix Processing Unit, Massively Parallel Processing Unit, or something else, for a more generic term?
Not to give him any ideas, but I think Jensen's on to something :cool:
edit: QuietBob beat me to it :laugh:
Why? OpenAI currently uses more than 10,000 GPUs to train LLMs and has already spent more than 500 million dollars (note: I think it is closer to 1 billion dollars now).
AMD and Intel failed to compete in that segment of AI (training), but it is a good reminder to NVIDIA that they are not alone.
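A quick back-of-envelope check on the spending figure in the comment above, assuming (purely for illustration) a ~$2 per GPU-hour cloud rental rate for an H100-class accelerator; neither that rate nor the utilization comes from the article:

gpus = 10_000                     # fleet size cited in the comment above
usd_per_gpu_hour = 2.00           # assumed cloud rental rate, not sourced
hours_per_year = 24 * 365
annual_cost = gpus * usd_per_gpu_hour * hours_per_year
print(f"${annual_cost / 1e6:.0f}M per year")   # ~$175M per year

At that assumed rate, roughly three years of continuous use reaches the $500 million mark; higher rental rates or a larger fleet would make the "closer to 1 billion" estimate easy to hit.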