Monday, November 15th 2021

TOP500 Update Shows No Exascale Yet, Japanese Fugaku Supercomputer Still at the Top

The 58th edition of the TOP500 saw little change in the Top10. The Microsoft Azure system called Voyager-EUS2 was the only machine to shake up the top spots, claiming No. 10. Based on 48-core AMD EPYC processors running at 2.45 GHz working together with NVIDIA A100 GPUs with 80 GB of memory, Voyager-EUS2 also utilizes a Mellanox HDR InfiniBand network for data transfer.

While there were no other changes to the positions of the systems in the Top10, Perlmutter at NERSC improved its performance to 70.9 Pflop/s. Housed at the Lawrence Berkeley National Laboratory, Perlmutter's increased performance still wasn't enough to move it from its previously held No. 5 spot.
Fugaku continues to hold the No. 1 position that it first earned in June 2020. Its HPL benchmark score of 442 Pflop/s is roughly three times the performance of Summit at No. 2. Installed at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan, it was co-developed by RIKEN and Fujitsu and is based on Fujitsu's custom Arm A64FX processor. Fugaku also uses Fujitsu's Tofu D interconnect to transfer data between nodes.
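For context, an HPL (Rmax) figure is simply the benchmark's fixed operation count for solving a large dense linear system divided by the measured run time. Here is a minimal sketch of that arithmetic; the problem size and wall time are illustrative placeholders, not values from any actual TOP500 submission:

```python
# Minimal sketch of how an HPL (Rmax) figure is derived from a run.
# The problem size and wall time below are illustrative placeholders,
# not values from any real TOP500 submission.

def hpl_pflops(n: int, seconds: float) -> float:
    """HPL credits 2/3*n^3 + 2*n^2 floating-point operations for solving
    an n x n dense linear system and divides by the measured wall time."""
    ops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    return ops / seconds / 1e15  # convert flop/s to Pflop/s

# Hypothetical run: a 20,000,000-unknown system solved in 15,000 seconds.
print(f"{hpl_pflops(20_000_000, 15_000):.1f} Pflop/s")
```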

In single or further-reduced precision, which are often used in machine learning and AI applications, Fugaku has a peak performance above 1,000 Pflop/s (1 Exaflop/s). As a result, Fugaku is often introduced as the first "Exascale" supercomputer.

While there were also reports about several Chinese systems reaching Exaflop-level performance, none of these systems submitted an HPL result to the TOP500.

Here's a summary of the systems in the Top10:
  • Fugaku remains the No. 1 system. It has 7,630,848 cores, which allowed it to achieve an HPL benchmark score of 442 Pflop/s. This puts it roughly 3x ahead of the No. 2 system on the list.
  • Summit, an IBM-built system at the Oak Ridge National Laboratory (ORNL) in Tennessee, USA, remains the fastest system in the U.S. and at the No. 2 spot worldwide. It has a performance of 148.8 Pflop/s on the HPL benchmark, which is used to rank the TOP500 list. Summit has 4,356 nodes, each housing two Power9 CPUs with 22 cores each and six NVIDIA Tesla V100 GPUs, each with 80 streaming multiprocessors (SMs). The nodes are linked together with a Mellanox dual-rail EDR InfiniBand network.
  • Sierra, a system at the Lawrence Livermore National Laboratory, CA, USA, is at No. 3. Its architecture is very similar to that of the No. 2 system, Summit. It is built with 4,320 nodes, each with two Power9 CPUs and four NVIDIA Tesla V100 GPUs. Sierra achieved 94.6 Pflop/s.
  • Sunway TaihuLight, a system developed by China's National Research Center of Parallel Computer Engineering & Technology (NRCPC) and installed at the National Supercomputing Center in Wuxi, in China's Jiangsu province, is listed at the No. 4 position with 93 Pflop/s.
  • Perlmutter, at No. 5, was newly listed in the Top10 last June. It is based on the HPE Cray "Shasta" platform and is a heterogeneous system with AMD EPYC-based nodes and 1,536 NVIDIA A100-accelerated nodes. Perlmutter improved its performance to 70.9 Pflop/s.
  • Selene, now at No. 6, is an NVIDIA DGX A100 SuperPOD installed in-house at NVIDIA in the USA. The system is based on an AMD EPYC processor with NVIDIA A100 for acceleration and a Mellanox HDR InfiniBand as a network. It achieved 63.4 Pflop/s.
  • Tianhe-2A (Milky Way-2A), a system developed by China's National University of Defense Technology (NUDT) and deployed at the National Supercomputer Center in Guangzhou, China, is now listed as the No. 7 system with 61.4 Pflop/s.
  • A system called "JUWELS Booster Module" is No. 8. The BullSequana system built by Atos is installed at the Forschungszentrum Juelich (FZJ) in Germany. Like Selene, it uses an AMD EPYC processor with NVIDIA A100 GPUs for acceleration and a Mellanox HDR InfiniBand network. This system is the most powerful system in Europe, with 44.1 Pflop/s.
  • HPC5 at No. 9 is a PowerEdge system built by Dell and installed by the Italian company Eni S.p.A. It achieves a performance of 35.5 Pflop/s using NVIDIA Tesla V100 GPUs as accelerators and a Mellanox HDR InfiniBand network.
  • Voyager-EUS2, a Microsoft Azure system installed at Microsoft in the U.S., is the only new system in the Top10. It achieved 30.05 Pflop/s and is listed at No. 10. Its architecture is based on 48-core AMD EPYC processors running at 2.45 GHz working together with NVIDIA A100 GPUs with 80 GB of memory, and it utilizes a Mellanox HDR InfiniBand network for data transfer.
Other TOP500 highlights
While there were not many changes to the Top10, we did see a smattering of shifts within the Top15. The new Voyager-EUS system from Microsoft followed its sibling into the No. 11 spot, while the SSC-21 system from Samsung introduced itself to the list at No. 12. Polaris, also a new system, came in at No. 13 while the new CEA-HF took No. 15.

As on the last list, AMD processors are seeing a lot of success. Frontera, which has a Xeon Platinum 8280 processor, got bumped by Voyager-EUS2, which has an AMD EPYC processor. What's more, all of the new Top15 machines described above have AMD processors.

Unsurprisingly, systems from China and the USA dominated the list. Although China dropped from 186 systems to 173, the USA increased from 123 machines to 150. All told, these two countries account for nearly two-thirds of the supercomputers on the TOP500.

The new edition of the list didn't showcase much change in terms of system interconnects. Ethernet still dominated with 240 machines, while InfiniBand accounted for 180. Omnipath interconnects claimed 40 spots on the list, there were 34 custom interconnects, and only 6 systems with proprietary networks.

Green500 results
The system to claim the No. 1 spot on the Green500 was MN-3 from Preferred Networks in Japan. Relying on the MN-Core chip, an accelerator optimized for matrix arithmetic, this machine was able to achieve an incredible 39.38 gigaflops/watt power efficiency. This machine delivered 29.7 gigaflops/watt on the last list, clearly showcasing some impressive improvement. It also enhanced its standing on the TOP500 list, moving from No. 337 to No. 302.
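The Green500 metric itself is just sustained HPL performance divided by the average power drawn during the benchmark run. A quick sketch of that conversion; the figures below are hypothetical placeholders rather than any machine's actual submission:

```python
# Sketch of the Green500 metric: sustained HPL performance divided by
# the average power drawn during the run.
# The numbers below are illustrative placeholders, not a real submission.

def gflops_per_watt(rmax_pflops: float, power_kw: float) -> float:
    """Convert an Rmax in Pflop/s and a power draw in kW to GFlop/s per watt."""
    gflops = rmax_pflops * 1e6   # 1 Pflop/s = 1,000,000 GFlop/s
    watts = power_kw * 1e3
    return gflops / watts

# Example: a hypothetical 2 Pflop/s machine drawing 55 kW
# would score about 36.4 gigaflops/watt.
print(round(gflops_per_watt(2.0, 55.0), 1))
```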

The new SSC-21 Scalable Module, an HPE Apollo 6500 system installed at Samsung Electronics in South Korea, achieved an impressive 33.98 gigaflops/watt. It did so by submitting a power-optimized run of the HPL benchmark. It is listed at position 292 in the TOP500.

NVIDIA installed a new liquid-cooled DGX A100 prototype system called Tethys. With a power-optimized HPL run, Tethys achieved 31.5 gigaflops/watt and garnered the No. 3 spot on the Green500. It is listed at position 296 in the TOP500.

The Wilkes-3 system improved its results but was still pushed down to the No. 4 spot on the Green500. Wilkes-3, which is housed at the University of Cambridge in the U.K., had a power efficiency of 30.8 gigaflops/watt. However, it was pushed from No. 100 to No. 281 on the TOP500 list.

The University of Florida in the USA with its HiPerGator AI system was pushed from the No. 2 spot to the No. 5 spot. This machine held steady at 29.52 gigaflops/watt. This NVIDIA system has 138,880 cores and relies on an AMD EPYC 7742 processor. Despite this impressive performance, HiPerGator AI was pushed from No. 22 to No. 31 on the TOP500 list.

HPCG Results
The TOP500 list has incorporated the High-Performance Conjugate Gradient (HPCG) benchmark results, which provide an alternative metric for assessing supercomputer performance and are meant to complement the HPL measurement.
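HPCG exercises the sparse matrix-vector products, dot products, and vector updates typical of iterative solvers, rather than the dense linear algebra that HPL stresses. Below is a toy conjugate-gradient loop in that spirit; it is a generic textbook CG on a small 1-D Laplacian, not the official HPCG reference code:

```python
import numpy as np

# Toy conjugate-gradient solve in the spirit of what HPCG times:
# matrix-vector products, vector updates, and dot products.
# Generic textbook CG on a small 1-D Laplacian, not the HPCG reference code.

n = 1000
# Tridiagonal (-1, 2, -1) matrix, stored densely here only for brevity.
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

x = np.zeros(n)
r = b - A @ x          # initial residual
p = r.copy()           # initial search direction
rs_old = r @ r

for it in range(n):
    Ap = A @ p
    alpha = rs_old / (p @ Ap)
    x += alpha * p
    r -= alpha * Ap
    rs_new = r @ r
    if np.sqrt(rs_new) < 1e-8:
        break
    p = r + (rs_new / rs_old) * p
    rs_old = rs_new

print(f"converged in {it + 1} iterations, residual {np.sqrt(rs_new):.2e}")
```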

The HPCG results here are very similar to the last list. Fugaku was the clear winner with 16.0 HPCG-petaflops, while Summit retained its No. 2 spot with 2.93 HPCG-petaflops. Perlmutter, a USA machine housed at Lawrence Berkeley National Laboratory, took the No. 3 spot with 1.91 HPCG-petaflops.

HPL-AI Results
The HPL-AI benchmark seeks to highlight the convergence of HPC and artificial intelligence (AI) workloads based on machine learning and deep learning by solving a system of linear equations using novel, mixed-precision algorithms that exploit modern hardware.
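The pattern behind HPL-AI is to do the expensive factorization in low precision and then recover double-precision accuracy with a few cheap residual-correction (iterative refinement) steps. Here is a compact sketch of that idea; it is a generic illustration that uses single precision as the "low" precision and re-solves instead of reusing a stored LU factor, not the HPL-AI reference code:

```python
import numpy as np

# Sketch of mixed-precision iterative refinement, the idea behind HPL-AI:
# do the expensive solve in low precision, then recover full accuracy with
# a few cheap residual-correction steps carried out in double precision.
# Single precision stands in for the half precision used on real hardware,
# and for brevity we re-solve instead of reusing a stored LU factorization.

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
b = rng.standard_normal(n)

# "Cheap" low-precision solve.
A32 = A.astype(np.float32)
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)

# Iterative refinement: residual computed in float64, correction in float32.
for _ in range(5):
    r = b - A @ x
    d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
    x += d

print("final relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```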

Achieving an HPL-AI benchmark result of 2 Exaflops, Fugaku is leading the pack in this regard. With such excellent results list after list, and considered by many to be the first "Exascale" supercomputer, Fugaku is clearly an exciting system.

7 Comments on TOP500 Update Shows No Exascale Yet, Japanese Fugaku Supercomputer Still at the Top

#1
Unregistered
Be interested to know the power consumption of just the top ten.
#2
DrCR
Tigger: Be interested to know the power consumption of just the top ten.
It’s ok. The employees hold their breath during work hours to cancel out carbon emissions.
#4
Prima.Vera
I thought Fugaku was already almost reaching 2 Exaflops due to continuous upgrades.
#5
GreiverBlade
Prima.Vera: I thought Fugaku was already almost reaching 2 [H]exaflops due to continuous upgrades.
yep .... also

that depends on the benchmark tho ...
"Besides the system software, the supercomputer has run many kinds of applications, including several benchmarks. Running the mainstream HPL benchmark, used by TOP500, Fugaku is at petascale and almost halfway to exascale."

"in June 2020, it achieved 1.42 exaFLOPS (fp16 with fp64 precision) in HPL-AI benchmark making it the first ever supercomputer that achieved 1 exaFLOPS."
"Fugaku has set world records on at least three other benchmarks, including HPL-AI; at 2.0 exaflops, the system has exceeded the exascale threshold for the benchmark."
#6
Nosada
GreiverBlade: yep .... also that depends on the benchmark tho ...
IIRC, it is at half-exa in double precision benchmarks, but at 2 exa in mixed precision, real world usage.
#7
GreiverBlade
Nosada: IIRC, it is at half-exa in double precision benchmarks, but at 2 exa in mixed precision, real world usage.
thanks for the precision on precision, double or not ... ;)

well ... real world usage is what matters to me ... thus Fugaku is an Exascale-level supercomputer, then.

benchmark for ranking is ok tho ... i mean Fugaku is head, shoulders and ... even ankles, above the others