News Posts matching #Supercomputer


IBM and RIKEN Unveil First IBM Quantum System Two Outside of the U.S.

IBM and RIKEN, a national research laboratory in Japan, today unveiled the first IBM Quantum System Two ever to be deployed outside of the United States and beyond an IBM Quantum Data Center. The availability of this system also marks a milestone as the first quantum computer to be co-located with RIKEN's supercomputer Fugaku—one of the most powerful classical systems on Earth. This effort is supported by the New Energy and Industrial Technology Development Organization (NEDO), an organization under the jurisdiction of Japan's Ministry of Economy, Trade and Industry (METI), through its "Development of Integrated Utilization Technology for Quantum and Supercomputers" program, part of the "Project for Research and Development of Enhanced Infrastructures for Post 5G Information and Communications Systems."

IBM Quantum System Two at RIKEN is powered by IBM's 156-qubit IBM Quantum Heron, the company's best-performing quantum processor to date. IBM Heron's quality, as measured by the two-qubit error rate across a 100-qubit layered circuit, is 3x10^-3 (with the best two-qubit error being 1x10^-3), which is 10 times better than the previous-generation 127-qubit IBM Quantum Eagle. IBM Heron's speed, as measured by the CLOPS (circuit layer operations per second) metric, is 250,000, reflecting another 10x improvement over IBM Eagle in the past year.
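For intuition, the quoted error rates can be converted into a rough success probability for a sequence of gates. The sketch below assumes independent errors per gate; it is a simplistic illustration, not IBM's actual layered-circuit benchmarking methodology.

```python
# Simplistic sketch: probability that a chain of two-qubit gates
# completes without error, assuming independent per-gate errors.
# This is NOT IBM's benchmark methodology, just intuition.
def success_probability(error_per_gate: float, n_gates: int) -> float:
    return (1.0 - error_per_gate) ** n_gates

heron_like = success_probability(3e-3, 100)   # quoted Heron error rate
eagle_like = success_probability(3e-2, 100)   # ~10x higher error rate
print(f"{heron_like:.2f} vs {eagle_like:.3f}")
```

At 100 gates, a ten-fold difference in per-gate error separates a circuit that mostly succeeds (about 0.74) from one that almost never does (under 0.05), which is why the order-of-magnitude improvement matters.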

NVIDIA and HPE Join Forces to Construct Advanced Supercomputer in Germany

NVIDIA and Hewlett Packard Enterprise announced Tuesday, at a supercomputing conference in Hamburg, a partnership with Germany's Leibniz Supercomputing Centre to build a new supercomputer called Blue Lion, which will deliver approximately 30 times more computing power than the current SuperMUC-NG system. Blue Lion will run on NVIDIA's upcoming Vera Rubin architecture, which pairs the Rubin GPU with Vera, NVIDIA's first custom CPU. The integrated system aims to unite simulation, data processing, and AI in one high-bandwidth, low-latency platform. Optimized for scientific research, it offers shared memory, coherent compute, and in-network acceleration.

HPE will build the system on its next-generation Cray technology, combining NVIDIA GPUs with cutting-edge storage and interconnect systems. Blue Lion will use HPE's 100% fanless direct liquid-cooling setup, which circulates warm water through pipes for efficient cooling, while the system's heat output will be reused to warm nearby buildings. The Blue Lion project follows NVIDIA's announcement that Lawrence Berkeley National Lab in the US will set up a Vera Rubin-powered system called Doudna next year. Scientists will have access to Blue Lion beginning in early 2027. Based in Germany, Blue Lion will be used by researchers working on climate, physics, and machine learning. In contrast, Doudna, the U.S. Department of Energy's next supercomputer, will ingest data from telescopes, genome sequencers, and fusion experiments.

El Capitan Retains Top Spot in 65th TOP500 List as Exascale Era Expands

The 65th edition of the TOP500 showed that the El Capitan system retains the No. 1 position. With El Capitan, Frontier, and Aurora, there are now three exascale systems leading the TOP500. All three are installed at Department of Energy (DOE) laboratories in the United States.

The El Capitan system at Lawrence Livermore National Laboratory, California, remains the No. 1 system on the TOP500. The HPE Cray EX255a system achieved 1.742 EFlop/s on the HPL benchmark. LLNL has now also submitted a measurement for the HPCG benchmark, achieving 17.41 Petaflop/s, which makes the system the new No. 1 on that ranking as well.

IBM Plans "Quantum Starling" Fault-Tolerant Quantum Supercomputer

IBM has announced a detailed plan to create the world's first large-scale, fault-tolerant quantum computer by 2029. This system, named IBM Quantum Starling, will be located in a new Quantum Data Center in Poughkeepsie, New York. It is being developed to perform approximately 100 million quantum operations on 200 logical qubits, representing a significant leap, about 20,000 times more powerful than today's leading machines. Logical qubits are fundamental to the construction of error-corrected quantum processors. Each one encodes a single unit of quantum information across several physical qubits that continuously monitor each other for errors. By greatly reducing the error rates of logical qubits through this method, IBM intends to run complex algorithms with high reliability. This will open up new possibilities in fields like drug discovery, materials science, chemistry simulations, and large-scale optimization.
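The idea of spreading one logical unit of information over several physical carriers that check each other can be illustrated with a classical 3-bit repetition code. Real quantum error correction is far more involved, so treat the sketch below purely as an analogy.

```python
# Classical 3-bit repetition code: an analogy (NOT actual quantum
# error correction) for encoding one logical bit across several
# physical bits that can "outvote" a single error.
from collections import Counter

def encode(bit: int) -> list[int]:
    return [bit, bit, bit]

def decode(bits: list[int]) -> int:
    return Counter(bits).most_common(1)[0][0]   # majority vote

codeword = encode(1)
codeword[0] ^= 1          # flip one physical bit (an "error")
print(decode(codeword))   # still decodes to logical 1
```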

A key feature of Starling's design is its use of quantum low-density parity-check (qLDPC) error-correcting codes. These advanced codes need up to 90 percent fewer physical qubits compared to previous standard methods, which significantly lowers the required resources and infrastructure. IBM's research documents show how it will manage instruction sequencing, operation execution, and the real-time decoding of qubit measurements using conventional electronics like FPGAs or ASICs. IBM's updated Quantum Roadmap outlines several intermediate goals with processors named after birds. In 2025, IBM Quantum Loon will test long-range "C-coupler" interconnects and essential qLDPC components. Following that, in 2026, the modular Kookaburra chip will combine quantum memory with logical processing. In 2027, Cockatoo will connect multiple modules using "L-couplers," simulating the nodes of a larger system.
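To make the "up to 90 percent fewer physical qubits" figure concrete, the sketch below compares total qubit counts for Starling's 200 logical qubits under an assumed baseline overhead. The 1,000-physical-qubits-per-logical figure is a hypothetical placeholder for illustration, not an IBM number.

```python
# Back-of-envelope comparison. The baseline overhead below is an
# assumed placeholder, not an IBM specification.
logical_qubits = 200
baseline_per_logical = 1000                     # hypothetical baseline overhead
qldpc_per_logical = baseline_per_logical // 10  # "up to 90% fewer"

baseline_total = logical_qubits * baseline_per_logical
qldpc_total = logical_qubits * qldpc_per_logical
print(baseline_total, qldpc_total)   # 200000 vs 20000 physical qubits
```

Under these assumptions, the same logical capacity needs 20,000 physical qubits instead of 200,000, which is the kind of reduction that makes the required infrastructure tractable.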

Doudna Supercomputer Will be Powered by NVIDIA's Next-gen Vera Rubin Platform

Ready for a front-row seat to the next scientific revolution? That's the idea behind Doudna—a groundbreaking supercomputer announced today at Lawrence Berkeley National Laboratory in Berkeley, California. The system represents a major national investment in advancing U.S. high-performance computing (HPC) leadership, ensuring U.S. researchers have access to cutting-edge tools to address global challenges. "It will advance scientific discovery from chemistry to physics to biology and all powered by—unleashing this power—of artificial intelligence," U.S. Energy Secretary Chris Wright (pictured above) said at today's event.

Also known as NERSC-10, Doudna is named for Nobel laureate and CRISPR pioneer Jennifer Doudna. The next-generation system announced today is designed not just for speed but for impact. Powered by Dell Technologies infrastructure with the NVIDIA Vera Rubin architecture, and set to launch in 2026, Doudna is tailored for real-time discovery across the U.S. Department of Energy's most urgent scientific missions. It's poised to catapult American researchers to the forefront of critical scientific breakthroughs, fostering innovation and securing the nation's competitive edge in key technological fields.

Heron QPU-powered IBM Quantum System One Will Bolster UTokyo's Miyabi Supercomputer

The University of Tokyo (UTokyo) and IBM have announced plans to deploy the latest 156-qubit IBM Heron quantum processing unit (QPU), which will be operational in the IBM Quantum System One administered by UTokyo for members of the Quantum Innovation Initiative (QII) Consortium. The IBM Heron QPU, which features a tunable-coupler architecture, delivers significantly higher performance than the processor previously installed in 2023.

This is the second update of the IBM Quantum System One as part of the collaboration between UTokyo and IBM. It was first deployed with a 27-qubit IBM Falcon QPU, before being updated to a 127-qubit IBM Eagle QPU in 2023. It will now transition to the latest-generation IBM Heron later this year. IBM has deployed four Heron-based systems worldwide, and their performance shows significant improvement over the previous Eagle QPU: a 3-4x improvement in two-qubit error rates; an order-of-magnitude improvement in device-wide performance, benchmarked by errors across 100-qubit-long layers; continued improvement in speed, with a 60 percent increase in CLOPS expected; and a system uptime of more than 95%. The latest IBM Heron processor has continued to demonstrate immense value in orchestrating utility-level workloads, with multiple published studies leveraging these systems' ability to execute more than 5,000 gate operations.

MSI Teases EdgeXpert MS-C931 - an NVIDIA DGX Spark-based Desktop AI Supercomputer

MSI IPC, a global leader in industrial computing and AI-driven solutions, is set to unveil its latest innovations at COMPUTEX 2025, held from May 20 to 23 at the Taipei Nangang Exhibition Center. Visitors can explore MSI IPC's cutting-edge technologies at Booth J0506, Hall 1, 1F.

Introducing the EdgeXpert MS-C931: A Desktop AI Supercomputer
MSI IPC will unveil the EdgeXpert MS-C931, a desktop AI supercomputer built on the NVIDIA DGX Spark platform. Powered by the NVIDIA GB10 Grace Blackwell Superchip, the EdgeXpert MS-C931 delivers 1,000 AI TOPS of FP4 performance and is equipped with high-speed ConnectX-7 networking, 128 GB of unified memory, and support for large language models. Designed for AI developers and researchers, it is ideal for applications in the education, finance, and healthcare industries.

ASUS Introduces a New Class of NVIDIA-powered Desktop AI Supercomputers

They say that the most difficult part of transportation planning is last-mile delivery. A network of warehouses and trucks can bring products within a mile of almost all customers, but logistical challenges and costs add up quickly in the process of delivering those goods to the right doors at the right time. There's a similar pattern in the AI space. Massive data center installations have empowered astonishing cloud-based AI services, but many researchers, developers, and data scientists need the power of an AI supercomputer to travel that last mile. They need machines that offer the convenience and space-saving design of a desktop PC, but go well above and beyond the capabilities of consumer-grade hardware, especially when it comes to available GPU memory.

Enter a new class of AI desktop supercomputers, powered by ASUS and NVIDIA. The upcoming ASUS AI supercomputer lineup, spearheaded by the ASUS ExpertCenter Pro ET900N G3 desktop PC and the ASUS Ascent GX10 mini-PC, wields the latest NVIDIA Grace Blackwell superchips to deliver astounding performance in AI workflows. For those who need local, private supercomputing resources, but for whom a data center or rack server installation isn't feasible, these systems provide a transformative opportunity to seize the capabilities of AI.

NVIDIA & Partners to Produce American-made AI Supercomputers in US for First Time

NVIDIA is working with its manufacturing partners to design and build factories that, for the first time, will produce NVIDIA AI supercomputers entirely in the U.S. Together with leading manufacturing partners, the company has commissioned more than a million square feet of manufacturing space to build and test NVIDIA Blackwell chips in Arizona and AI supercomputers in Texas. NVIDIA Blackwell chips have started production at TSMC's chip plants in Phoenix, Arizona. NVIDIA is building supercomputer manufacturing plants in Texas, with Foxconn in Houston and with Wistron in Dallas. Mass production at both plants is expected to ramp up in the next 12-15 months. The AI chip and supercomputer supply chain is complex and demands the most advanced manufacturing, packaging, assembly and test technologies. NVIDIA is partnering with Amkor and SPIL for packaging and testing operations in Arizona.

Within the next four years, NVIDIA plans to produce up to half a trillion dollars of AI infrastructure in the United States through partnerships with TSMC, Foxconn, Wistron, Amkor and SPIL. These world-leading companies are deepening their partnership with NVIDIA, growing their businesses while expanding their global footprint and hardening supply chain resilience. NVIDIA AI supercomputers are the engines of a new type of data center created for the sole purpose of processing artificial intelligence—AI factories that are the infrastructure powering a new AI industry. Tens of "gigawatt AI factories" are expected to be built in the coming years. Manufacturing NVIDIA AI chips and supercomputers for American AI factories is expected to create hundreds of thousands of jobs and drive trillions of dollars in economic security over the coming decades.

Eurocom Unleashes New 18-inch Raptor X18 Laptop With NVIDIA GeForce RTX 5090

Eurocom is launching the Raptor X18, the world's first customized 18" laptop powered by the newest NVIDIA GeForce RTX 5090 with 24 GB of GDDR7 memory and 680 Tensor AI cores, 256 GB of DDR5 memory, up to 32 TB of storage across four NVMe SSDs in RAID 0/1/5, and the Intel Core Ultra 9 275HX processor with 24 cores, 24 threads, and a massive 36 MB cache. Eurocom is pushing the boundaries of portable high-performance computing with the launch of the Raptor X18 Mobile Supercomputer, designed for those who demand extreme computational power—whether for AI development, deep learning, scientific simulations, engineering, content creation, or gaming at the highest levels.

Unparalleled Performance and Display
The Raptor X18 is the first laptop ever to feature an astonishing 256 GB of DDR5 memory, ensuring seamless multitasking for the most memory-intensive operations, including running large language models offline and locally. Combined with 24 GB of next-gen GDDR7 VRAM from the NVIDIA GeForce RTX 5090 GPU (680 Tensor AI cores), this mobile supercomputer dominates AI processing, 3D modeling, complex simulations, and ultra-high-performance gaming. With up to 32 TB of NVMe SSD storage in RAID 0/1/5, users can store and access massive datasets with blazing speed, eliminating bottlenecks in high-end workflows. The Raptor X18 features an 18-inch UHD 200 Hz (3840x2400) display, offering impeccable clarity and smoothness for gaming, scientific visualization, and content creation.
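The usable capacity of the four-drive array depends heavily on the RAID level chosen. The sketch below assumes four 8 TB NVMe drives (32 TB raw, matching the stated maximum); the per-drive size is an assumption for illustration.

```python
# Usable-capacity sketch for the supported RAID levels, assuming
# four 8 TB NVMe drives (32 TB raw). Drive size is an assumption.
drives, drive_tb = 4, 8
raid0_tb = drives * drive_tb        # striping: all capacity usable
raid1_tb = drive_tb                 # pure mirroring: one drive's worth
raid5_tb = (drives - 1) * drive_tb  # one drive's worth lost to parity
print(raid0_tb, raid1_tb, raid5_tb) # 32 8 24
```

The headline 32 TB figure therefore applies to RAID 0; choosing redundancy trades capacity for fault tolerance.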

Smarter Memory Paves the Way for EU Independence in Computer Manufacturing

New technology from Chalmers University of Technology and the University of Gothenburg, Sweden, is helping the EU establish its own competitive computer manufacturing industry. Researchers have developed components critical for optimising on-chip memory, a key factor in enhancing the performance of next-generation computers.

The research leader, Professor Per Stenström, along with colleagues, has discovered new ways to make cache memory work smarter. A cache is a local memory that temporarily stores frequently accessed data, improving a computer's speed and performance. "Our solution enables computers to retrieve data significantly faster than before, as the cache can manage far more processing elements (PEs) than most existing systems. This makes it possible to meet the demands of tomorrow's powerful computers," says Per Stenström, Professor at the Department of Computer Science and Engineering at Chalmers University of Technology and the University of Gothenburg.
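The caching principle Stenström describes — keep frequently accessed data close, evict what hasn't been used recently — can be sketched with a minimal LRU (least-recently-used) policy. This is an illustration of the general idea only, not the researchers' design.

```python
# Minimal LRU cache: an illustration of the general caching
# principle (NOT the Chalmers design). Recently used entries stay;
# the least recently used entry is evicted when capacity is exceeded.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None                      # cache miss
        self.store.move_to_end(key)          # mark as recently used
        return self.store[key]

    def put(self, key, value):
        self.store[key] = value
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")                # "a" becomes most recently used
cache.put("c", 3)             # evicts "b", the least recently used
print(cache.get("b"), cache.get("a"))   # None 1
```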

Argonne Releases Aurora: Intel-based Exascale Supercomputer Available to Researchers

The U.S. Department of Energy's (DOE) Argonne National Laboratory has released its Aurora exascale supercomputer to researchers across the world, heralding a new era of computing-driven discoveries. With powerful capabilities for simulation, artificial intelligence (AI), and data analysis, Aurora will drive breakthroughs in a range of fields including airplane design, cosmology, drug discovery, and nuclear energy research.

"We're ecstatic to officially deploy Aurora for open scientific research," said Michael Papka, director of the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science user facility. "Early users have given us a glimpse of Aurora's vast potential. We're eager to see how the broader scientific community will use the system to transform their research."

NVIDIA Reveals Secret Weapon Behind DLSS Evolution: Dedicated Supercomputer Running for Six Years

At the RTX "Blackwell" Editor's Day during CES 2025, NVIDIA pulled back the curtain on one of its most powerful tools: a dedicated supercomputer that has been continuously improving DLSS (Deep Learning Super Sampling) for the past six years. Brian Catanzaro, NVIDIA's VP of applied deep learning research, disclosed that thousands of the company's latest GPUs have been working round-the-clock, analyzing and perfecting the technology that has revolutionized gaming graphics. "We have a big supercomputer at NVIDIA that is running 24/7, 365 days a year improving DLSS," Catanzaro explained during his presentation on DLSS 4. The supercomputer's primary task involves analyzing failures in DLSS performance, such as ghosting, flickering, or blurriness across hundreds of games. When issues are identified, the system augments its training data sets with new examples of optimal graphics and challenging scenarios that DLSS needs to address.

DLSS 4 marks the first move from convolutional neural networks to a transformer model that runs locally on client PCs. The continuous learning process has been crucial in refining the technology, with the dedicated supercomputer serving as the backbone of this evolution. The scale of resources allocated to DLSS development is massive: the entire pipeline for a self-improving DLSS model requires not merely thousands but tens of thousands of GPUs. Of course, a company building 100,000-GPU data centers (such as xAI's Colossus) can afford to keep some for itself, and NVIDIA is actively using them to improve its software stack. NVIDIA's CEO Jensen Huang famously said that DLSS can predict the future; those claims will be tested when the Blackwell series launches. Still, the approach of using massive data centers to improve DLSS is quite interesting, and with each new GPU generation NVIDIA releases, the process speeds up significantly.

AMD Powers El Capitan: The World's Fastest Supercomputer

Today, AMD showcased its ongoing high performance computing (HPC) leadership at Supercomputing 2024 by powering the world's fastest supercomputer for the sixth straight Top 500 list.

The El Capitan supercomputer, housed at Lawrence Livermore National Laboratory (LLNL), powered by AMD Instinct MI300A APUs and built by Hewlett Packard Enterprise (HPE), is now the fastest supercomputer in the world with a High-Performance Linpack (HPL) score of 1.742 exaflops based on the latest Top 500 list. El Capitan and the Frontier system at Oak Ridge National Lab claimed the No. 18 and No. 22 spots, respectively, on the Green 500 list, showcasing the impressive capabilities of AMD EPYC processors and AMD Instinct GPUs to drive leadership performance and energy efficiency for HPC workloads.

NEC to Build Japan's Newest Supercomputer Based on Intel Xeon 6900P and AMD Instinct MI300A

NEC Corporation (NEC; TSE: 6701) has received an order for a next-generation supercomputer system from Japan's National Institutes for Quantum Science and Technology (QST), under the National Research and Development Agency, and the National Institute for Fusion Science (NIFS), part of the National Institutes of Natural Sciences under the Inter-University Research Institute Corporation. The new supercomputer system is scheduled to be operational from July 2025. It will feature a multi-architecture design with the latest CPUs and GPUs, large storage capacity, and a high-speed network. The system is expected to be used for various research and development in the field of fusion science.

Specifically, the system will be used for precise prediction of experiments and the creation of operation scenarios in the ITER project, an international collaboration, and the Satellite Tokamak (JT-60SA) project, promoted as a Broader Approach activity, as well as for the design of DEMO reactors. The DEMO project promotes large-scale numerical calculations for DEMO design and R&D to accelerate the realization of a DEMO reactor that contributes to carbon neutrality. In addition, NIFS will conduct numerical simulation research using the supercomputer on multi-scale, multi-physics systems, including fusion plasmas, to broadly accelerate research on the science and applications of fusion plasmas. As an Inter-University Research Institute, it will also provide universities and research institutes nationwide with opportunities for collaborative research using the state-of-the-art supercomputer.

NVIDIA Ethernet Networking Accelerates World's Largest AI Supercomputer, Built by xAI

NVIDIA today announced that xAI's Colossus supercomputer cluster, comprising 100,000 NVIDIA Hopper GPUs in Memphis, Tennessee, achieved this massive scale by using the NVIDIA Spectrum-X Ethernet networking platform for its Remote Direct Memory Access (RDMA) network. Spectrum-X is designed to deliver superior performance to multi-tenant, hyperscale AI factories over standards-based Ethernet.

Colossus, the world's largest AI supercomputer, is being used to train xAI's Grok family of large language models, with chatbots offered as a feature for X Premium subscribers. xAI is in the process of doubling the size of Colossus to a combined total of 200,000 NVIDIA Hopper GPUs.

Foxconn to Build Taiwan's Fastest AI Supercomputer With NVIDIA Blackwell

NVIDIA and Foxconn are building Taiwan's largest supercomputer, marking a milestone in the island's AI advancement. The project, the Hon Hai Kaohsiung Super Computing Center, revealed Tuesday at Hon Hai Tech Day, will be built around NVIDIA's groundbreaking Blackwell architecture and feature the GB200 NVL72 platform, which includes a total of 64 racks and 4,608 Tensor Core GPUs. With expected performance of over 90 exaflops of AI compute, the machine would easily rank as the fastest in Taiwan.
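The announced GPU count follows directly from the rack configuration, since the "72" in GB200 NVL72 denotes the GPUs per rack:

```python
# Cross-check of the announced configuration: 64 GB200 NVL72 racks,
# each integrating 72 Blackwell GPUs (the "72" in the platform name).
racks = 64
gpus_per_rack = 72
total_gpus = racks * gpus_per_rack
print(total_gpus)   # 4608, matching the announced total
```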

Foxconn plans to use the supercomputer, once operational, to power breakthroughs in cancer research, large language model development and smart city innovations, positioning Taiwan as a global leader in AI-driven industries. Foxconn's "three-platform strategy" focuses on smart manufacturing, smart cities and electric vehicles. The new supercomputer will play a pivotal role in supporting Foxconn's ongoing efforts in digital twins, robotic automation and smart urban infrastructure, bringing AI-assisted services to urban areas like Kaohsiung.

Japan Unveils Plans for Zettascale Supercomputer: 100 PFLOPs of AI Compute per Node

The zettascale era is officially on the map, as Japan has announced plans to develop a successor to its renowned Fugaku supercomputer. The Ministry of Education, Culture, Sports, Science and Technology (MEXT) has set its sights on creating a machine capable of unprecedented processing power, aiming for 50 ExaFLOPS of peak AI performance with zettascale capabilities. The ambitious "Fugaku Next" project, slated to begin development next year, will be headed by RIKEN, one of Japan's leading research institutions, in collaboration with tech giant Fujitsu. With a target completion date of 2030, the new supercomputer aims to surpass current technological boundaries, potentially becoming the world's fastest once again. MEXT's vision for the "Fugaku Next" includes groundbreaking specifications for each computational node.

The ministry anticipates per-node peak performance of several hundred FP64 TFLOPS for double-precision computations, around 50 FP16 PFLOPS for AI-oriented half-precision calculations, and approximately 100 PFLOPS for AI-oriented 8-bit precision calculations. These figures represent a major leap from Fugaku's current capabilities. The project's initial funding is set at ¥4.2 billion ($29.06 million) for the first year, with total government investment expected to exceed ¥110 billion ($761 million). While the specific architecture remains undecided, MEXT suggests the use of CPUs with special-purpose accelerators or a CPU-GPU combination. The semiconductor node of choice will likely be a 1 nm-class node, or an even more advanced node available at the time, with advanced packaging also used. The supercomputer will also feature an advanced storage system to handle traditional HPC and AI workloads efficiently. We already have insight into Monaka, Fujitsu's upcoming CPU design with 150 Armv9 cores; however, Fugaku Next will be powered by the Monaka Next design, which will likely be much more capable.
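The stated targets imply a rough system scale. MEXT has not published a node count, so the estimate below is purely illustrative arithmetic from the round-number goals above.

```python
# Illustrative only: MEXT has not published a node count. If the
# system targets ~50 EFLOPS of peak AI performance with ~100 PFLOPS
# of 8-bit compute per node, the implied scale is:
system_pflops = 50 * 1000        # 50 ExaFLOPS expressed in PFLOPS
per_node_pflops = 100            # 8-bit AI target per node
implied_nodes = system_pflops // per_node_pflops
print(implied_nodes)             # 500 nodes at these round-number targets
```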

European Supercomputer Chip SiPearl Rhea Delayed, But Upgraded with More Cores

The rollout of SiPearl's much-anticipated Rhea processor for European supercomputers has been pushed back by a year to 2025, but the delay comes with a silver lining: a significant upgrade in core count and potential performance. Originally slated to arrive in 2024 with 72 cores, the homegrown high-performance chip will now pack 80 cores when it eventually launches. SiPearl and its partners made this choice to ensure the utmost quality and capability of the flagship European processor. The additional 12 months will allow the engineering teams to further refine the chip's architecture, carry out extensive testing, and optimize software stacks to take full advantage of Rhea's computing power. Now called Rhea1, the chip is a crucial component of the European Processor Initiative's mission to develop domestic high-performance computing technologies and reduce reliance on foreign processors. Supercomputer-scale simulations spanning climate science, drug discovery, energy research, and more all require astonishing amounts of raw compute grunt.

By scaling up to 80 cores based on the latest Arm Neoverse V1, Rhea1 aims to go toe-to-toe with the world's most powerful processors optimized for supercomputing workloads. SiPearl plans to use TSMC's N6 manufacturing process. The CPU will have 256-bit DDR5 memory connections, 104 PCIe 5.0 lanes, and four stacks of HBM2E memory. The roadmap shift also provides more time for the expansive European supercomputing ecosystem to prepare robust software stacks tailored for the upgraded Rhea silicon. Ensuring smooth deployment with existing models and enabling future breakthroughs are top priorities. While the delay is a setback for SiPearl's launch schedule, the substantial upgrade could pay significant dividends for Europe's ambitions to join the elite ranks of worldwide supercomputing power. All eyes will be on Rhea1's delivery in 2025, particularly from the European governments funding the project.

TOP500: Frontier Keeps Top Spot, Aurora Officially Becomes the Second Exascale Machine

The 63rd edition of the TOP500 reveals that Frontier has once again claimed the top spot, despite no longer being the only exascale machine on the list. Additionally, a new system has found its way into the Top 10.

The Frontier system at Oak Ridge National Laboratory in Tennessee, USA remains the most powerful system on the list with an HPL score of 1.206 EFlop/s. The system has a total of 8,699,904 combined CPU and GPU cores, an HPE Cray EX architecture that combines 3rd Gen AMD EPYC CPUs optimized for HPC and AI with AMD Instinct MI250X accelerators, and it relies on Cray's Slingshot 11 network for data transfer. On top of that, this machine has an impressive power efficiency rating of 52.93 GFlops/Watt - putting Frontier at the No. 13 spot on the GREEN500.
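The two published figures together imply Frontier's total power draw during the HPL run: the benchmark score divided by the energy-efficiency rating.

```python
# Implied total power draw from the published figures:
# HPL score divided by energy efficiency.
hpl_gflops = 1.206e9                  # 1.206 EFlop/s expressed in GFlop/s
gflops_per_watt = 52.93
power_mw = hpl_gflops / gflops_per_watt / 1e6
print(f"{power_mw:.1f} MW")           # roughly 22.8 MW
```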

SpiNNcloud Systems Announces First Commercially Available Neuromorphic Supercomputer

Today, in advance of ISC High Performance 2024, SpiNNcloud Systems announced the commercial availability of its SpiNNaker2 platform, a supercomputer-level hybrid AI high-performance computer system based on principles of the human brain. Pioneered by Steve Furber, designer of the original ARM and SpiNNaker1 architectures, the SpiNNaker2 supercomputing platform uses a large number of low-power processors for efficiently computing AI and other workloads.

First-generation SpiNNaker1 architecture is currently used in dozens of research groups across 23 countries. Sandia National Laboratories, the Technical University of Munich, and the University of Göttingen are among the first customers placing orders for SpiNNaker2, which was developed around commercialized IP invented in the Human Brain Project, a billion-euro research project funded by the European Union to design intelligent, efficient artificial systems.

Cerebras & G42 Break Ground on Condor Galaxy 3 - an 8 exaFLOPs AI Supercomputer

Cerebras Systems, the pioneer in accelerating generative AI, and G42, the Abu Dhabi-based leading technology holding group, today announced the build of Condor Galaxy 3 (CG-3), the third cluster of their constellation of AI supercomputers, the Condor Galaxy. Featuring 64 of Cerebras' newly announced CS-3 systems - all powered by the industry's fastest AI chip, the Wafer-Scale Engine 3 (WSE-3) - Condor Galaxy 3 will deliver 8 exaFLOPs of AI with 58 million AI-optimized cores. The Cerebras and G42 strategic partnership already delivered 8 exaFLOPs of AI supercomputing performance via Condor Galaxy 1 and Condor Galaxy 2, each amongst the largest AI supercomputers in the world. Located in Dallas, Texas, Condor Galaxy 3 brings the current total of the Condor Galaxy network to 16 exaFLOPs.

"With Condor Galaxy 3, we continue to achieve our joint vision of transforming the worldwide inventory of AI compute through the development of the world's largest and fastest AI supercomputers," said Kiril Evtimov, Group CTO of G42. "The existing Condor Galaxy network has trained some of the leading open-source models in the industry, with tens of thousands of downloads. By doubling the capacity to 16 exaFLOPs, we look forward to seeing the next wave of innovation Condor Galaxy supercomputers can enable." At the heart of Condor Galaxy 3 are 64 Cerebras CS-3 Systems. Each CS-3 is powered by the new 4 trillion transistor, 900,000 AI core WSE-3. Manufactured by TSMC on the 5-nanometer node, the WSE-3 delivers twice the performance at the same power and for the same price as the previous-generation part. Purpose-built for training the industry's largest AI models, WSE-3 delivers an astounding 125 petaflops of peak AI performance per chip.
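The cluster-level totals follow directly from the per-chip specifications — 64 CS-3 systems, each built around one WSE-3:

```python
# Cluster totals from the per-chip specs: 64 CS-3 systems,
# each with one WSE-3.
systems = 64
cores_per_chip = 900_000
pflops_per_chip = 125
total_cores = systems * cores_per_chip         # 57.6M ("58 million" rounded)
total_exaflops = systems * pflops_per_chip / 1000
print(total_cores, total_exaflops)             # 57600000 8.0
```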

The SEA Projects Prepare Europe for Exascale Supercomputing

The HPC research projects DEEP-SEA, IO-SEA and RED-SEA are wrapping up this month after a three-year project term. The three projects worked together to develop key technologies for European Exascale supercomputers, based on the Modular Supercomputing Architecture (MSA), a blueprint architecture for highly efficient and scalable heterogeneous Exascale HPC systems. To achieve this, the three projects collaborated on system software and programming environments, data management and storage, as well as interconnects adapted to this architecture. The results of their joint work will be presented at a co-design workshop and poster session at the EuroHPC Summit (Antwerp, 18-21 March, www.eurohpcsummit.eu).

NVIDIA Unveils "Eos" to Public - a Top Ten Supercomputer

Providing a peek at the architecture powering advanced AI factories, NVIDIA released a video that offers the first public look at Eos, its latest data-center-scale supercomputer. An extremely large-scale NVIDIA DGX SuperPOD, Eos is where NVIDIA developers create their AI breakthroughs using accelerated computing infrastructure and fully optimized software. Eos is built with 576 NVIDIA DGX H100 systems, NVIDIA Quantum-2 InfiniBand networking and software, providing a total of 18.4 exaflops of FP8 AI performance. Revealed in November at the Supercomputing 2023 trade show, Eos—named for the Greek goddess said to open the gates of dawn each day—reflects NVIDIA's commitment to advancing AI technology.

Eos Supercomputer Fuels Innovation
Each DGX H100 system is equipped with eight NVIDIA H100 Tensor Core GPUs, giving Eos a total of 4,608 H100 GPUs. As a result, Eos can handle the largest AI workloads to train large language models, recommender systems, quantum simulations and more. It's a showcase of what NVIDIA's technologies can do when working at scale. Eos is arriving at the perfect time. People are changing the world with generative AI, from drug discovery to chatbots to autonomous machines and beyond. To achieve these breakthroughs, they need more than AI expertise and development skills. They need an AI factory—a purpose-built AI engine that's always available and can help ramp their capacity to build AI models at scale. Eos delivers. Ranked No. 9 on the TOP500 list of the world's fastest supercomputers, Eos pushes the boundaries of AI technology and infrastructure.
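The published figures are self-consistent, as a quick cross-check shows: the GPU count follows from the system count, and dividing the aggregate FP8 number by it gives the per-GPU throughput.

```python
# Cross-check of the published Eos figures.
dgx_systems = 576
gpus_per_dgx = 8
total_gpus = dgx_systems * gpus_per_dgx        # 4,608 H100 GPUs
per_gpu_pflops = 18.4 * 1000 / total_gpus      # FP8 PFLOPS per GPU
print(total_gpus, round(per_gpu_pflops, 2))    # 4608 3.99
```

The implied ~4 PFLOPS of FP8 per GPU is consistent with the H100's headline sparse-FP8 rating.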

GIGABYTE Advanced Data Center Solutions Unveils Telecom and AI Servers at MWC 2024

GIGABYTE Technology, an IT pioneer whose focus is to advance global industries through cloud and AI computing systems, is coming to MWC 2024 with its next-generation servers empowering telcos, cloud service providers, enterprises, and SMBs to swiftly harness the value of 5G and AI. Featured is a cutting-edge AI server boasting AMD Instinct MI300X 8-GPU, and a comprehensive AI/HPC server series supporting the latest chip technology from AMD, Intel, and NVIDIA. The showcase will also feature integrated green computing solutions excelling in heat dissipation and energy reduction.

Continuing the booth theme "Future of COMPUTING", GIGABYTE's presentation will cover servers for AI/HPC, RAN and Core networks, modular edge platforms, all-in-one green computing solutions, and AI-powered self-driving technology. The exhibits will demonstrate how industries extend AI applications from cloud to edge and terminal devices through 5G connectivity, expanding future opportunities with faster time to market and sustainable operations. The showcase spans from February 26th to 29th at Booth #5F60, Hall 5, Fira Gran Via, Barcelona.