
OpenAI Could Make Custom Chips to Power Next-Generation AI Models

Joined
Jan 14, 2019
Messages
12,827 (5.88/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE Plasma
lol, no that's not Dojo.




Similar to RISC-V, but with Tesla's own custom instructions and four-wide SMT. Interesting.
 
Joined
Apr 19, 2018
Messages
1,227 (0.50/day)
Processor AMD Ryzen 9 5950X
Motherboard Asus ROG Crosshair VIII Hero WiFi
Cooling Arctic Liquid Freezer II 420
Memory 32Gb G-Skill Trident Z Neo @3806MHz C14
Video Card(s) MSI GeForce RTX2070
Storage Seagate FireCuda 530 1TB
Display(s) Samsung G9 49" Curved Ultrawide
Case Cooler Master Cosmos
Audio Device(s) O2 USB Headphone AMP
Power Supply Corsair HX850i
Mouse Logitech G502
Keyboard Cherry MX
Software Windows 11
What I find funny is the fact that every single serious player in the A.I. industry is talking about making their own chips: MS, Tesla, Google, OpenAI, Meta, the list goes on. The collective hate for nGreedia and their business practices is palpable. Make hay while you can, you greedy little leather jacket man.

Price your products into the stratosphere nGreedia, and even the corps won't stick with you.
 
Joined
Jan 11, 2022
Messages
946 (0.87/day)
You've clearly missed the point that these H100s cost $40K apiece, and neural nets don't start working well until the GPU count gets into the thousands. Then there's the upkeep cost, which is insane. Any idea what it costs to power and maintain a 10K-GPU cluster, buddy?

They want their own AI chip because they SEE that Elon's Dojo is providing the same performance as a 10K-GPU cluster at 1/6th the cost and 1/4 the footprint.

People must think AI is free or something. There are huge costs for this level of compute, and that's not even mentioning that the whole industry is compute-constrained.
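For a rough sense of scale, here's a quick back-of-envelope sketch in Python. The $40K unit price and 10K GPU count are the figures from above; the power draw, overhead factor, and electricity price are only illustrative assumptions, not quoted numbers.

Code:
# Rough cost of a 10K-GPU H100 cluster.
# $40K/GPU and 10,000 GPUs are from the post above; everything else is assumed.
gpu_count       = 10_000
gpu_unit_price  = 40_000          # USD per H100 (figure quoted above)
hardware_cost   = gpu_count * gpu_unit_price             # -> $400M up front

watts_per_gpu   = 700             # assumed H100 SXM board power
overhead_factor = 1.5             # assumed cooling/networking/host overhead
usd_per_kwh     = 0.10            # assumed industrial electricity price
hours_per_year  = 24 * 365

energy_kwh = gpu_count * watts_per_gpu * overhead_factor / 1000 * hours_per_year
power_cost_per_year = energy_kwh * usd_per_kwh            # -> roughly $9M/year

print(f"Hardware outlay:     ${hardware_cost / 1e6:,.0f}M")
print(f"Yearly power (est.): ${power_cost_per_year / 1e6:,.1f}M")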

OpenAI claimed it was spending about $3 million a month on operational costs; that covers both the power/infrastructure under Microsoft Azure and wages.
It was doing about 10 million queries per day.

So 3/300 = 1 cent per query. That's still very expensive for a query, but not 4 cents.
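Spelled out, that arithmetic is just the claimed monthly spend divided by the monthly query volume (a quick Python sketch using the ~$3M/month and ~10M queries/day figures quoted above):

Code:
# Cost per query from the figures quoted above.
monthly_spend_usd = 3_000_000       # ~$3M/month operational cost (claimed)
queries_per_day   = 10_000_000      # ~10M queries/day (claimed)

queries_per_month = queries_per_day * 30              # ~300M queries/month
cost_per_query    = monthly_spend_usd / queries_per_month

print(f"{cost_per_query * 100:.1f} cents per query")  # -> 1.0 cents per query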

That hardware is bought and used over a period of years; it's not as if part of it is used up with every query and needs to be replaced after a billion.
That cluster will be happily running all sorts of stuff a couple of years on.
Also, power: yeah, Microsoft has a power bill in the billions, but they are doing a shit ton more than just 10M queries per day.

Conclusion: someone was adding a heck of a lot of fudge to the numbers.
 
Joined
Jan 14, 2019
Messages
12,827 (5.88/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE Plasma
What I find funny is the fact that every single serious player in the A.I. industry is talking about making their own chips
A 150 g pack of potato chips costs way over £2 in most shops now, which is outright ridiculous, so I guess I'd better make my own as well.
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.59/day)
Location
Ex-usa | slava the trolls
What I find funny is the fact that every single serious player in the A.I. industry is talking about making their own chips: MS, Tesla, Google, OpenAI, Meta, the list goes on. The collective hate for nGreedia and their business practices is palpable. Make hay while you can, you greedy little leather jacket man.

Price your products into the stratosphere nGreedia, and even the corps won't stick with you.

Wonder why AMD doesn't sell its Radeon graphics cards, then? The collective hate towards AMD is outrageous, too. 8-9% market share is very disturbing.

A 150 g pack of potato chips costs way over £2 in most shops now, which is outright ridiculous, so I guess I'd better make my own as well.

Just buy a kilo of potatoes for £1.17, make your own "chips" and call it a day :D
 
Joined
Nov 26, 2021
Messages
1,709 (1.51/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
No, we do not. However, the original post points out that there are potential cost efficiencies in rolling your own AI silicon rather than buying on the open market.

The US government certainly isn't telling Nvidia how to price their AI accelerators.

Right now it's basically this:

AI customer: "How much is this?"
Nvidia: "How much you got?"
AI customer: "I don't like your attitude."
Nvidia: "We have competitors. Ask them about pricing and availability."
AI customer: "But your stuff is better. And you have a better dev platform. And our staff is already familiar with your products."
Nvidia: "Don't worry, we'll sell what we have to someone."

It's important to note that Apple makes their own machine learning silicon (the Neural Engine, in their marketing speak), but for their own use. They have already been doing what the original post basically describes since about 2017. And they probably started working on it in earnest back in 2010.

There are no Nvidia Tensor cores in my iPhone, iPad, or Mac. Apple is not affected by the market's demand for Nvidia's AI products.

Silicon Valley was built by people who left larger companies to start their own thing, hoping to do better than the status quo. There's nothing new about OpenAI's thoughts. Google, Meta, Amazon, they all have teams working on custom silicon designs.
Apple's Neural Engine is targeted at inference. They may claim in their marketing that it can do training too, but that isn't its raison d'être. Machine learning silicon is rather simple compared to Apple's CPUs, so it's extremely unlikely that they started working on it in 2010. AlexNet woke the wider computing world up to the feasibility of machine learning on GPUs by becoming the first convolutional neural network to win ImageNet in September 2012.
 
Joined
Dec 29, 2010
Messages
3,811 (0.74/day)
Processor AMD 5900x
Motherboard Asus x570 Strix-E
Cooling Hardware Labs
Memory G.Skill 4000c17 2x16gb
Video Card(s) RTX 3090
Storage Sabrent
Display(s) Samsung G9
Case Phanteks 719
Audio Device(s) Fiio K5 Pro
Power Supply EVGA 1000 P2
Mouse Logitech G600
Keyboard Corsair K95
Every time I see these "other companies are just going to invent their own cheap AI chips and then NVIDIA will die" posts, I LOL at the inability to understand basic facts. Guess what, designing your own silicon is difficult, and guess what, those companies are going to be headbutting the exact same wall of limited foundry capacity that NVIDIA is.

What is actually happening is NVIDIA is telling these companies "we can't get capacity, but if you're willing to buy some of Apple's allocation at TSMC for <stupid price>, we'll happily make those GPUs for you", these companies are going "HOLY SHIT THAT'S EXPENSIVE", and NVIDIA is going "yeah, now you see our problem". Then these companies shit out press releases subtly threatening NVIDIA, who's sitting back going "why are we getting blamed when Apple is the one taking all the capacity?"

I'm more and more surprised every day that NVIDIA is still making consumer GPUs. Switch all your production to server GPUs and let the consumer market sit it out a few generations; it's not like we won't be waiting a couple of generations down the line anyway. Especially given how AMD is barely competitive and Intel isn't in any way, shape, or form.


You mean the Dojo that also uses 10,000 NVIDIA GPUs?
No one said NV is gonna die. They do, however, have a captive market with no real competitors until Q4, when AMD releases their accelerators. However, even that is coming at a time when the biggest compute consumers (MSFT, Google, Tesla) have started to plan out their next five years (Tesla already on year 3 and ramping), and it has them pivoting away from NV. That's just the reality of gouging the fuck out of the market, shrugs.
 
Joined
May 31, 2017
Messages
432 (0.16/day)
Processor Ryzen 5700X
Motherboard Gigabyte B550 Arous Elite V2
Cooling Thermalright PA120
Memory Kingston FURY Renegade 3600Mhz @ 3733 tight timings
Video Card(s) Sapphire Pulse RX 6800
Storage 36TB
Display(s) Samsung QN90A
Case be quiet! Dark Base Pro 900
Audio Device(s) Khadas Tone Pro 2, HD660s, KSC75, JBL 305 MK1
Power Supply Coolermaster V850 Gold V2
Mouse Roccat Burst Pro
Keyboard Dogshit with Otemu Brown
Software W10 LTSC 2021
The future is compute-in-memory.
 
Joined
Jul 29, 2022
Messages
547 (0.62/day)
Just buy a kilo of potatoes for £1.17, make your own "chips" and call it a day :D
Okay, but then you have to factor in the cost of oil, salt, and a deep fryer, plus the cost of electricity and manual labor. It comes out significantly more expensive.

Personally, I prefer popcorn. It's cheaper, takes longer to eat, and goes better with watching movies.
 
Joined
Jan 14, 2023
Messages
856 (1.19/day)
System Name Asus G16
Processor i9 13980HX
Motherboard Asus motherboard
Cooling 2 fans
Memory 32gb 4800mhz
Video Card(s) 4080 laptop
Storage 16tb, x2 8tb SSD
Display(s) QHD+ 16in 16:10 (2560x1600, WQXGA) 240hz
Power Supply 330w psu
I love Nvidia. I bought their stock over 9 months ago; it has not disappointed.
 
Joined
Jan 14, 2019
Messages
12,827 (5.88/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE Plasma
I love Nvidia. I bought their stock over 9 months ago; it has not disappointed.
I wish I had, too. As soon as others start manufacturing AI chips, the bubble will pop.
 
Joined
Jan 14, 2023
Messages
856 (1.19/day)
System Name Asus G16
Processor i9 13980HX
Motherboard Asus motherboard
Cooling 2 fans
Memory 32gb 4800mhz
Video Card(s) 4080 laptop
Storage 16tb, x2 8tb SSD
Display(s) QHD+ 16in 16:10 (2560x1600, WQXGA) 240hz
Power Supply 330w psu
I wish I had, too. As soon as others start manufacturing AI chips, the bubble will pop.
Then I'll go back to VOO and BRK.B.
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.59/day)
Location
Ex-usa | slava the trolls
Okay, but then you have to factor in the cost of oil, salt, and a deep fryer, plus the cost of electricity and manual labor. It comes out significantly more expensive.

Have you actually calculated it, or are you just blindly guessing?
All of those costs are spread across much larger quantities, and there is no way that rubbish 150 g pack of chips should cost more than £2.
Electricity cost is close to zero; salt is close to zero.

£1.17 buys 1 kg of potatoes, so 150 g is only about £0.17 of raw material.
 
Joined
Apr 19, 2018
Messages
1,227 (0.50/day)
Processor AMD Ryzen 9 5950X
Motherboard Asus ROG Crosshair VIII Hero WiFi
Cooling Arctic Liquid Freezer II 420
Memory 32Gb G-Skill Trident Z Neo @3806MHz C14
Video Card(s) MSI GeForce RTX2070
Storage Seagate FireCuda 530 1TB
Display(s) Samsung G9 49" Curved Ultrawide
Case Cooler Master Cosmos
Audio Device(s) O2 USB Headphone AMP
Power Supply Corsair HX850i
Mouse Logitech G502
Keyboard Cherry MX
Software Windows 11
A 150 g pack of potato chips costs way over £2 in most shops now, which is outright ridiculous, so I guess I'd better make my own as well.
But if they were £10 for a pack, you'd be an idiot not to.
 
Joined
Jan 14, 2019
Messages
12,827 (5.88/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE Plasma
But if they were £10 for a pack, you'd be an idiot not to.
You're already way better off making your own if you value your money more than your time. Even with a £2 pack price, you can make about 10x as much for yourself.
 
Joined
Feb 18, 2005
Messages
5,847 (0.81/day)
Location
Ikenai borderline!
System Name Firelance.
Processor Threadripper 3960X
Motherboard ROG Strix TRX40-E Gaming
Cooling IceGem 360 + 6x Arctic Cooling P12
Memory 8x 16GB Patriot Viper DDR4-3200 CL16
Video Card(s) MSI GeForce RTX 4060 Ti Ventus 2X OC
Storage 2TB WD SN850X (boot), 4TB Crucial P3 (data)
Display(s) 3x AOC Q32E2N (32" 2560x1440 75Hz)
Case Enthoo Pro II Server Edition (Closed Panel) + 6 fans
Power Supply Fractal Design Ion+ 2 Platinum 760W
Mouse Logitech G602
Keyboard Razer Pro Type Ultra
Software Windows 10 Professional x64
You're already way better off making your own if you value your money more than your time. Even with a £2 pack price, you can make about 10x as much for yourself.
The inverse of this is what so many fail to understand. Yes, Tesla built Dojo because they couldn't get enough NVIDIA GPUs fast enough... but it took them half a decade. That's a huge time and money sink into something that isn't your company's core competency.

The other problem is that, because nobody outside Tesla knows how Dojo works, anyone they recruit to work with it is going to need to be trained on its intricacies - which increases the cost of hiring and time for a new dev to ramp up. It also decreases the pool of talent willing to work for you, because Dojo isn't a transferable skill, so you are basically shackling yourself to Tesla if you choose to work there - and people experienced in the software development industry are simply too smart to do that. Which kinda sucks, since those are the people you most need.

Of course, maybe Tesla spent part of those 5 years perfecting a CUDA-to-Dojo transpiler and none of the above is relevant. But my experience with big companies is that they love building walled gardens - if not to trap users, then to trap developers. This is why open and shared standards are important.
 
Joined
Jan 14, 2019
Messages
12,827 (5.88/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE Plasma
The inverse of this is what so many fail to understand. Yes, Tesla built Dojo because they couldn't get enough NVIDIA GPUs fast enough... but it took them half a decade. That's a huge time and money sink into something that isn't your company's core competency.

The other problem is that, because nobody outside Tesla knows how Dojo works, anyone they recruit to work with it is going to need to be trained on its intricacies - which increases the cost of hiring and time for a new dev to ramp up. It also decreases the pool of talent willing to work for you, because Dojo isn't a transferable skill, so you are basically shackling yourself to Tesla if you choose to work there - and people experienced in the software development industry are simply too smart to do that. Which kinda sucks, since those are the people you most need.

Of course, maybe Tesla spent part of those 5 years perfecting a CUDA-to-Dojo transpiler and none of the above is relevant. But my experience with big companies is that they love building walled gardens - if not to trap users, then to trap developers. This is why open and shared standards are important.
Based on how much of a walled garden Tesla cars are, I can imagine how much they invested into that transpiler, if it even exists.
 