Friday, January 19th 2024

Meta Will Acquire 350,000 H100 GPUs Worth More Than 10 Billion US Dollars

Mark Zuckerberg has shared some interesting insights about Meta's AI infrastructure buildout, which is on track to include an astonishing number of NVIDIA H100 Tensor Core GPUs. In a post on Instagram, Meta's CEO noted the following: "We're currently training our next-gen model Llama 3, and we're building massive compute infrastructure to support our future roadmap, including 350k H100s by the end of this year -- and overall almost 600k H100s equivalents of compute if you include other GPUs." That means the company will grow its AI infrastructure to 350,000 H100 GPUs, alongside other accelerators equivalent to roughly 250,000 H100s in computing power, for a total of about 600,000 H100-equivalent GPUs.

The raw number of GPUs installed comes at a steep price. With the average selling price of an H100 GPU nearing 30,000 US dollars, Meta's investment will set the company back around $10.5 billion. Other GPUs will be part of the infrastructure, but the majority will come from the NVIDIA Hopper family. Additionally, Meta is currently training the Llama 3 AI model, which will be much more capable than the existing Llama 2 family and will include better reasoning, coding, and math-solving capabilities. These models will be open source. Further down the road, as artificial general intelligence (AGI) comes into play, Zuckerberg noted that "Our long term vision is to build general intelligence, open source it responsibly, and make it widely available so everyone can benefit." So, expect to see these models in GitHub repositories in the future.
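The figures above are straightforward to sanity-check. A minimal sketch, assuming the article's ~$30,000 average selling price per H100 and its 250,000 H100-equivalent estimate for the other GPUs:

```python
# Back-of-the-envelope check of the article's figures.
# Assumptions: ~$30,000 average selling price per H100 (the article's
# estimate) and 250,000 H100-equivalents for the non-H100 GPUs.

h100_count = 350_000             # H100s targeted by end of 2024
avg_price_usd = 30_000           # estimated average selling price per H100
other_gpu_h100_equiv = 250_000   # other GPUs, expressed as H100 equivalents

total_cost = h100_count * avg_price_usd
total_compute_equiv = h100_count + other_gpu_h100_equiv

print(f"Estimated H100 spend: ${total_cost / 1e9:.1f} billion")   # $10.5 billion
print(f"Total H100-equivalent compute: {total_compute_equiv:,}")  # 600,000
```

This lines up with the "more than 10 billion US dollars" headline figure and Zuckerberg's "almost 600k H100s equivalents" remark.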
Source: Mark Zuckerberg (Instagram)

53 Comments on Meta Will Acquire 350,000 H100 GPUs Worth More Than 10 Billion US Dollars

#51
kondamin
Solaris17: and I am sure meta is aware of it.
Is Meta getting into the energy business? I'm imagining a Meta SMR line.
#52
Denver
Solaris17: and I am sure meta is aware of it.
Mark is the guy who wasted tons of money on the "metaverse"; of course he knows.
#53
Jism
Warigator: I'm honestly extremely surprised that people are truly willing to pay so much money for cards that don't even seem that impressive to me.

Take VRAM, for example. Nvidia went from 64 MB in 2000 to 512 MB in 2005 to 3 GB in 2010 (48x in 10 years). The Titan Pascal in 2016 had 12 GB for $1,200, and the RTX 4080 in 2023 had 16 GB, also for $1,200. That is only 33.3% more in 7 years. H100s are ridiculously expensive.

We are already seeing video games progress at a slower and slower pace, thanks to the slowing pace of hardware improvement as well as the horrible and nonsensical move to mobile gaming.

The Titan Maxwell in 2015 also had 12 GB for $1,200, by the way....
www.techpowerup.com/gpu-specs/h100-pcie-80-gb.c3899

The professional cards have over 80 GB of VRAM. Well, not really VRAM; it's just memory to store things in.

AMD provides better bang for the buck anyway. I've been using Intel-based servers for 10 years and swapped a few out for one big Epyc. It's blazing fast.