Friday, July 21st 2023

Cerebras and G42 Unveil World's Largest Supercomputer for AI Training with 4 ExaFLOPS

Cerebras Systems, the pioneer in accelerating generative AI, and G42, the UAE-based technology holding group, today announced Condor Galaxy, a network of nine interconnected supercomputers, offering a new approach to AI compute that promises to significantly reduce AI model training time. The first AI supercomputer on this network, Condor Galaxy 1 (CG-1), has 4 exaFLOPs and 54 million cores. Cerebras and G42 are planning to deploy two more such supercomputers, CG-2 and CG-3, in the U.S. in early 2024. With a planned capacity of 36 exaFLOPs in total, this unprecedented supercomputing network will revolutionize the advancement of AI globally.

"Collaborating with Cerebras to rapidly deliver the world's fastest AI training supercomputer and laying the foundation for interconnecting a constellation of these supercomputers across the world has been enormously exciting. This partnership brings together Cerebras' extraordinary compute capabilities and G42's multi-industry AI expertise. G42 and Cerebras' shared vision is that Condor Galaxy will be used to address society's most pressing challenges across healthcare, energy, climate action and more," said Talal Alkaissi, CEO of G42 Cloud, a subsidiary of G42.
Located in Santa Clara, California, CG-1 links 64 Cerebras CS-2 systems together into a single, easy-to-use AI supercomputer, with an AI training capacity of 4 exaFLOPs. Cerebras and G42 offer CG-1 as a cloud service, allowing customers to enjoy the performance of an AI supercomputer without having to manage or distribute models over physical systems.

CG-1 is the first time Cerebras has partnered not only to build a dedicated AI supercomputer but also to manage and operate it. CG-1 is designed to enable G42 and its cloud customers to train large, ground-breaking models quickly and easily, thereby accelerating innovation. The Cerebras-G42 strategic partnership has already advanced state-of-the-art AI models in Arabic bilingual chat, healthcare and climate studies.

"Delivering 4 exaFLOPs of AI compute at FP16, CG-1 dramatically reduces AI training timelines while eliminating the pain of distributed compute," said Andrew Feldman, CEO of Cerebras Systems. "Many cloud companies have announced massive GPU clusters that cost billions of dollars to build, but that are extremely difficult to use. Distributing a single model over thousands of tiny GPUs takes months of time from dozens of people with rare expertise. CG-1 eliminates this challenge. Setting up a generative AI model takes minutes, not months, and can be done by a single person. CG-1 is the first of three 4 exaFLOP AI supercomputers to be deployed across the U.S. Over the next year, together with G42, we plan to expand this deployment and stand up a staggering 36 exaFLOPs of efficient, purpose-built AI compute."

A leading AI and cloud computing company based in the UAE, G42 is driving large-scale digital transformation initiatives globally. The UAE was the first nation to appoint a Minister for AI in its federal government, followed by massive investments, including the establishment of G42 research partner, Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), the first post-graduate university in the world focused entirely on AI.

Training large models requires huge amounts of compute, vast datasets, and specialized AI expertise. The partnership between G42 and Cerebras delivers on all three of these elements. With the Condor Galaxy supercomputing network, the two companies are democratizing AI, enabling simple and easy access to the industry's leading AI compute. G42's work with diverse datasets across healthcare, energy and climate studies will enable users of the systems to train new cutting-edge foundational models. These models and derived applications are a powerful force for good. Finally, Cerebras and G42 bring together a team of hardware engineers, data engineers, AI scientists, and industry specialists to deliver a full-service AI offering to solve customers' problems. This combination will produce ground-breaking results and turbocharge hundreds of AI projects globally.

About Condor Galaxy 1 (CG-1)
Optimized for Large Language Models and Generative AI, CG-1 delivers 4 exaFLOPs of 16-bit AI compute, with standard support for up to 600-billion-parameter models and extendable configurations that support up to 100-trillion-parameter models. With 54 million AI-optimized compute cores and 388 terabits per second of fabric bandwidth, fed by 72,704 AMD EPYC processor cores, CG-1 delivers near-linear performance scaling from 1 to 64 CS-2 systems using simple data parallelism, unlike any known GPU cluster.
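For scale, the headline totals imply the following per-system figures. This is back-of-the-envelope arithmetic derived only from the numbers quoted above, not a breakdown published by Cerebras:

```python
# Per-system breakdown of CG-1's headline figures (derived arithmetic,
# using only the totals stated in the announcement).
CS2_SYSTEMS = 64
TOTAL_FP16_EXAFLOPS = 4
TOTAL_AI_CORES = 54_000_000      # "54 million AI-optimized compute cores"
TOTAL_EPYC_CORES = 72_704        # supporting AMD EPYC processor cores

pflops_per_cs2 = TOTAL_FP16_EXAFLOPS * 1000 / CS2_SYSTEMS
ai_cores_per_cs2 = TOTAL_AI_CORES // CS2_SYSTEMS
epyc_cores_per_cs2 = TOTAL_EPYC_CORES // CS2_SYSTEMS

print(pflops_per_cs2)      # 62.5 PFLOPS of FP16 compute per CS-2
print(ai_cores_per_cs2)    # 843750 AI cores per CS-2
print(epyc_cores_per_cs2)  # 1136 EPYC cores feeding each CS-2
```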

"AMD is committed to accelerating AI with cutting edge high-performance computing processors and adaptive computing products as well as through collaborations with innovative companies like Cerebras that share our vision of pervasive AI," said Forrest Norrod, executive vice president and general manager, Data Center Solutions Business Group, AMD. "Driven by more than 70,000 AMD EPYC processor cores, Cerebras' Condor Galaxy 1 will make accessible vast computational resources for researchers and enterprises as they push AI forward."

CG-1 offers native support for training with long sequence lengths, up to 50,000 tokens out of the box, without any special software libraries. Programming CG-1 is done entirely without complex distributed programming languages, meaning even the largest models can be run without weeks or months spent distributing work over thousands of GPUs.

Rendering of the complete Condor Galaxy 1 AI Supercomputer, which features an impressive 54 million cores across 64 CS-2 nodes, supported by over 72,000 AMD EPYC cores for a total of 4 exaFLOPs of AI compute at FP16. (Photo: Rebecca Lewington/Cerebras Systems)
Located at Colovore, a high-performance colocation facility in Santa Clara, California, CG-1 is operated by Cerebras under U.S. laws, ensuring state-of-the-art AI systems are not used by adversary states. Each Cerebras CS-2 system is designed, packaged, manufactured, tested, and integrated in the U.S.; Cerebras is the only AI hardware company to package processors and manufacture AI systems in the U.S.

CG-1 is the first of three 4 exaFLOP AI supercomputers (CG-1, CG-2, and CG-3), built and located in the U.S. by Cerebras and G42 in partnership. These three AI supercomputers will be interconnected into a 12 exaFLOP, 162 million core distributed AI supercomputer consisting of 192 Cerebras CS-2s and fed by more than 218,000 high performance AMD EPYC CPU cores. G42 and Cerebras plan to bring online six additional Condor Galaxy supercomputers in 2024, bringing the total compute power to 36 exaFLOPs.
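The combined figures are straight multiples of CG-1's published specs. A minimal sanity check (my arithmetic, assuming three identical systems):

```python
# Aggregate specs for CG-1 + CG-2 + CG-3, assuming three identical systems.
cg1 = {"fp16_exaflops": 4, "cores_millions": 54, "cs2_systems": 64, "epyc_cores": 72_704}
combined = {spec: value * 3 for spec, value in cg1.items()}
print(combined)
# {'fp16_exaflops': 12, 'cores_millions': 162, 'cs2_systems': 192, 'epyc_cores': 218112}
```

The 218,112 EPYC cores match the article's "more than 218,000" figure.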

Access to CG-1 is available now. For more information, please visit www.condorgalaxy.ai.

Condor Galaxy Brand Inspiration
The Condor Galaxy, also known as NGC 6872, stretches 522,000 light years from tip to tip, which is about 5 times larger than the Milky Way. The galaxy is visible in the southern skies as part of the Pavo constellation and is 212 million light-years from Earth.

About Cerebras Systems
Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to build a new class of computer system, designed for the singular purpose of accelerating generative AI work. Our flagship product, the CS-2 system, powered by the world's largest and fastest AI processor, makes training large models simple and easy, by avoiding the complexity of distributed computing. Cerebras solutions are available in the cloud, through the Cerebras AI Model Studio or on premises. For further information, visit https://www.cerebras.net.

About G42
G42 is a global leader in creating visionary artificial intelligence capabilities for a better tomorrow. Born in Abu Dhabi and operating around the world, G42 champions AI as a powerful force for good. Its people are constantly reimagining what technology can do, applying advanced thinking and innovation to accelerate progress and tackle society's most pressing problems. G42 is joining forces with nations, corporations, and individuals to create the infrastructure for tomorrow's world. From molecular biology to space exploration and everything in between, G42 realizes exponential possibilities, today. For further information, visit www.g42.ai.

16 Comments on Cerebras and G42 Unveil World's Largest Supercomputer for AI Training with 4 ExaFLOPS

#1
AnarchoPrimitiv
The UAE, huh? Didn't know they were a player in the AI game. I wonder if it's just like the other oil states around the Arabian Peninsula and it's just part of their standard diversification programs (to not be utterly dependent on oil revenues), or if they're specifically focusing on AI to become a major actor.

***Seems like G42 is a wholly state-subsidized company and is actually led by the National Security Advisor of the UAE
#2
Daven
Condor Galaxy - 4 exaflops (FP16), Epyc and CS-2
El Capitan - 2 exaflops, Epyc and Instinct
Aurora- 2 exaflops, SPR and Ponte Vecchio
Frontier- 1 exaflop, Epyc and Instinct
Fugaku- 0.5 exaflop, ARM and A64FX

The top 5 supercomputers are using a diverse array of compute architectures. I love the post-Winteldia world!

Edit: The 4 Exaflops number is at FP16. So really 2 Exaflops when comparing to other supercomputers.
#3
A&P211
Can it run the Terminator?
#4
john_
72704 / 64 = 1136 CPUs
Let's say $4,000 each at a friendly price; AMD makes just $4.5 million from those CPU sales.
Not much. I guess they also get paid for extra services and probably make much more from those. Or maybe they don't. Just take those $4.5 million, say thanks, and leave.

If those cores were Nvidia cores, let's say H100s with 14,592 cores per GPU, that means about 3,700 GPUs; at a very friendly price of $30,000, that means Nvidia is NOT making about $111 million. That's a huge amount.

If those cores were AMD cores, let's say MI300X, I'm guessing about 19,000 cores, that means about 2,842 GPUs; at $25,000, an ultra-super-duper competitive price against that very friendly Nvidia price, that makes about $71 million. Also a huge amount.

But those are Cerebras AI cores, right? So AMD and Nvidia make zero from AI cores here. AMD could be making $71 million but gets $4.5 million at minimum instead; Nvidia could be making $111 million and gets nothing.
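In script form (all prices here are my rough guesses, not actual contract figures):

```python
# Rough revenue comparison using the guessed street prices above.
AI_CORES = 54_000_000                 # CG-1's AI-optimized cores

epyc_cpus = 72_704 // 64              # 1136 64-core EPYC CPUs
print(epyc_cpus * 4_000)              # 4544000 -> ~$4.5M to AMD for CPUs

h100_equiv = AI_CORES // 14_592       # ~3700 GPUs if these were H100 cores
print(h100_equiv * 30_000)            # 111000000 -> ~$111M Nvidia isn't making

mi300x_equiv = AI_CORES // 19_000     # ~2842 GPUs if they were MI300X cores
print(mi300x_equiv * 25_000)          # 71050000 -> ~$71M AMD isn't making
```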
#5
Daven
john_: 72704 / 64 = 1136 CPUs
Let's say $4,000 each at a friendly price; AMD makes just $4.5 million from those CPU sales.
Not much. I guess they also get paid for extra services and probably make much more from those. Or maybe they don't. Just take those $4.5 million, say thanks, and leave.

If those cores were Nvidia cores, let's say H100s with 14,592 cores per GPU, that means about 3,700 GPUs; at a very friendly price of $30,000, that means Nvidia is NOT making about $111 million. That's a huge amount.

If those cores were AMD cores, let's say MI300X, I'm guessing about 19,000 cores, that means about 2,842 GPUs; at $25,000, an ultra-super-duper competitive price against that very friendly Nvidia price, that makes about $71 million. Also a huge amount.

But those are Cerebras AI cores, right? So AMD and Nvidia make zero from AI cores here. AMD could be making $71 million but gets $4.5 million at minimum instead; Nvidia could be making $111 million and gets nothing.
It's also about deployability. Proving that AMD tech can be deployed in complex compute environments can sell AMD to other customers. That helps with the bottom line.
#6
john_
Daven: It's also about deployability. Proving that AMD tech can be deployed in complex compute environments can sell AMD to other customers. That helps with the bottom line.
It's also an indication that the AI accelerators in Sapphire Rapids, for huge projects like the above, are not such an advantage compared to EPYC's other advantages.
#7
TumbleGeorge
Daven: Condor Galaxy - 4 exaflops, Epyc and CS-2
El Capitan - 2 exaflops, Epyc and Instinct
Aurora- 2 exaflops, SPR and Ponte Vecchio
Frontier- 1 exaflop, Epyc and Instinct
Fugaku- 0.5 exaflop, ARM and A64FX

The top 5 supercomputers are using a diverse array of compute architectures. I love the post-Winteldia world!
Try again. Condor Galaxy's numbers are for half-precision calculations.
#8
Daven
TumbleGeorge: Try again. Condor Galaxy's numbers are for half-precision calculations.
I think I was editing my post at the same time you commented.
#9
TumbleGeorge
Daven: I think I was editing my post at the same time you commented.
Mmm, I mean the precision of the calculations. By default we assume FP32 is FP16 divided by 2. But in this case the performance numbers depend on how the architecture this supercomputer uses is set up. The FP32/FP16 efficiency is 1/4, if I understood right.
#10
Denver
john_: 72704 / 64 = 1136 CPUs
Let's say $4,000 each at a friendly price; AMD makes just $4.5 million from those CPU sales.
Not much. I guess they also get paid for extra services and probably make much more from those. Or maybe they don't. Just take those $4.5 million, say thanks, and leave.

If those cores were Nvidia cores, let's say H100s with 14,592 cores per GPU, that means about 3,700 GPUs; at a very friendly price of $30,000, that means Nvidia is NOT making about $111 million. That's a huge amount.

If those cores were AMD cores, let's say MI300X, I'm guessing about 19,000 cores, that means about 2,842 GPUs; at $25,000, an ultra-super-duper competitive price against that very friendly Nvidia price, that makes about $71 million. Also a huge amount.

But those are Cerebras AI cores, right? So AMD and Nvidia make zero from AI cores here. AMD could be making $71 million but gets $4.5 million at minimum instead; Nvidia could be making $111 million and gets nothing.
U$ 11M**

Why would AMD discount if they have the best processors on the market? Don't worry, TSMC will lack production capacity and everyone will get a piece of the pie. :P
#11
Evildead666
Good grief.

So this can scale up pretty quickly.
Colossus/Guardian/SkyNet and HAL aren't too far away.
:)

If you haven't seen "Colossus: The Forbin Project", it's really worth it.
Or read the book, for those who prefer ;)
#12
xrli
Daven: Condor Galaxy - 4 exaflops (FP16), Epyc and CS-2
El Capitan - 2 exaflops, Epyc and Instinct
Aurora- 2 exaflops, SPR and Ponte Vecchio
Frontier- 1 exaflop, Epyc and Instinct
Fugaku- 0.5 exaflop, ARM and A64FX

The top 5 supercomputers are using a diverse array of compute architectures. I love the post-Winteldia world!

Edit: The 4 Exaflops number is at FP16. So really 2 Exaflops when comparing to other super computers.
Correct me if I am wrong, but I recall TOP500 supercomputers are ranked by their FP64 flops, so shouldn't it be roughly 1 exaflops equivalent instead?
#13
john_
Denver: U$ 11M**

Why would AMD discount if they have the best processors on the market? Don't worry, TSMC will lack production capacity and everyone will get a piece of the pie. :p
I was looking at the price of the 64-core model:
Amazon.com: EPYC Rome 64-core 7702, 3.35 GHz, SKT SP3, 256 MB cache, 200 W, tray, at $4,886.
When a big customer comes to buy CPUs by the thousands, not an individual buying 2-5-10, you give discounts. So $4,000 is, I think, a possible price. $4,000 × 1136 ≈ $4.5 million.

How did you come up with that "U$ 11M"? I'm just curious.
#14
Denver
john_: I was looking at the price of the 64-core model:
Amazon.com: EPYC Rome 64-core 7702, 3.35 GHz, SKT SP3, 256 MB cache, 200 W, tray, at $4,886.
When a big customer comes to buy CPUs by the thousands, not an individual buying 2-5-10, you give discounts. So $4,000 is, I think, a possible price. $4,000 × 1136 ≈ $4.5 million.

How did you come up with that "U$ 11M"? I'm just curious.

I was basing myself on the prices of the best recent EPYC CPUs. Are they going to build a supercomputer with technology from two years ago? Damn. :p
#15
dragontamer5788
Daven: Edit: The 4 Exaflops number is at FP16. So really 2 Exaflops when comparing to other supercomputers.
TOP500 supercomputers benchmark on 64-bit operations.

So really 1 Exaflop, if we're doing bit-for-bit.
#16
john_
Denver: I was basing myself on the prices of the best recent EPYC CPUs. Are they going to build a supercomputer with technology from two years ago? Damn. :p
Yeah, my bad. Messed up the Italian cities. Need more EPYC geography lessons.

AMD has done some discounts, with the 64-core Genoa 9554 selling, in one example, at $6,529.
So I guess for a customer who wants to buy EPYC CPUs by the thousands, a price close to $5,500 per CPU would be logical. Cerebras doesn't buy Instinct cards, obviously, so I guess they get a big discount, just not the best possible one. Remember that wiredzone in the price example is a reseller, so they are buying for less than the $6,529 they are selling at.