
ORNL's Exaflop Machine Frontier Keeps Top Spot, New Competitor Leonardo Breaks Into the Top 10 List

AleksandarK

News Editor
Staff member
The 60th edition of the TOP500 reveals that the Frontier system is still the only true exascale machine on the list.

With an HPL score of 1.102 EFlop/s, the Frontier machine at Oak Ridge National Laboratory (ORNL) did not improve upon the score it reached on the June 2022 list. That said, Frontier's near-tripling of the HPL score of the second-place system is still a major victory for computer science. On top of that, Frontier demonstrated a score of 7.94 EFlop/s on the HPL-MxP benchmark, which measures performance for mixed-precision calculation. Frontier is based on the HPE Cray EX235a architecture and relies on 64-core, 2 GHz AMD EPYC processors. The system has 8,730,112 cores, a power efficiency rating of 52.23 gigaflops/watt, and uses HPE's Slingshot interconnect for data transfer.
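For context, HPL times the solution of a single huge dense system of linear equations in double precision and credits the run with roughly 2/3·n³ + 2·n² floating-point operations. A rough single-node sketch of that measurement in numpy (nothing like the real distributed HPL code) looks like this:

```python
# Toy illustration of an HPL-style measurement: solve one dense double-precision
# system Ax = b and convert the runtime into a flop rate. Single-node numpy
# sketch only -- the real HPL is a distributed LU factorization.
import time
import numpy as np

n = 2048                                   # real HPL runs use n in the millions
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)                  # LU factorization + triangular solves
elapsed = time.perf_counter() - start

flops = (2.0 / 3.0) * n**3 + 2.0 * n**2    # operation count HPL credits for the solve
print(f"relative residual: {np.linalg.norm(A @ x - b) / np.linalg.norm(b):.2e}")
print(f"rate: {flops / elapsed / 1e9:.1f} GFlop/s on this machine")
```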




Frontier is the clear winner of the race to exascale, and it will require a lot of work and innovation to knock it from the top spot.

The Fugaku system at the Riken Center for Computational Science (R-CCS) in Kobe, Japan, previously held the top spot for two years in a row before being moved down by the Frontier machine. With an HPL score of 0.442 EFlop/s, Fugaku has retained its No. 2 spot from the previous list.

The LUMI system, which found its way to the No. 3 spot on the last list, has retained its spot. However, the system went through a major upgrade to keep it competitive. The upgrade doubled the machine's size, which allowed it to achieve an HPL score of 0.309 EFlop/s.

The only new machine to grace the top of the list was the No. 4 Leonardo system at EuroHPC/CINECA in Bologna, Italy. The machine achieved an HPL score of 0.174 EFlop/s with 1,463,616 cores.

Here is a summary of the systems in the Top 10:
  • Frontier is the No. 1 system in the TOP500. This HPE Cray EX system is the first US system with a performance exceeding one EFlop/s. It is installed at the Oak Ridge National Laboratory (ORNL) in Tennessee, USA, where it is operated for the Department of Energy (DOE). It has achieved 1.102 EFlop/s using 8,730,112 cores. The new HPE Cray EX architecture combines 3rd Gen AMD EPYC CPUs optimized for HPC and AI with AMD Instinct MI250X accelerators and a Slingshot-10 interconnect.
  • Fugaku, now the No. 2 system, is installed at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan. It has 7,630,848 cores, which allowed it to achieve an HPL benchmark score of 442 Pflop/s.
  • The upgraded LUMI system, another HPE Cray EX system installed at the EuroHPC center at CSC in Finland, is No. 3 with a performance of 309.1 Pflop/s. The European High-Performance Computing Joint Undertaking (EuroHPC JU) is pooling European resources to develop top-of-the-range exascale supercomputers for processing big data. One of the pan-European pre-exascale supercomputers, LUMI, is located in CSC's data center in Kajaani, Finland.
  • The new No. 4 system, Leonardo, is installed at a different EuroHPC site, at CINECA in Italy. It is an Atos BullSequana XH2000 system with Xeon Platinum 8358 32C 2.6 GHz main processors, NVIDIA A100 SXM4 40 GB accelerators, and quad-rail NVIDIA HDR100 InfiniBand as the interconnect. It achieved a Linpack performance of 174.7 Pflop/s.
  • Summit, an IBM-built system at the Oak Ridge National Laboratory (ORNL) in Tennessee, USA, is now listed at the No. 5 spot worldwide with a performance of 148.8 Pflop/s on the HPL benchmark, which is used to rank the TOP500 list. Summit has 4,356 nodes, each one housing two POWER9 CPUs with 22 cores each and six NVIDIA Tesla V100 GPUs each with 80 streaming multiprocessors (SM). The nodes are linked together with a Mellanox dual-rail EDR InfiniBand network.
  • Sierra, a system at the Lawrence Livermore National Laboratory, CA, USA, is at No. 6. Its architecture is very similar to that of the No. 5 system, Summit. It is built with 4,320 nodes, each with two POWER9 CPUs and four NVIDIA Tesla V100 GPUs. Sierra achieved 94.6 Pflop/s.
  • Sunway TaihuLight, a system developed by China's National Research Center of Parallel Computer Engineering & Technology (NRCPC) and installed at the National Supercomputing Center in Wuxi, in China's Jiangsu province, is listed at the No. 7 position with 93 Pflop/s.
  • Perlmutter, at No. 8, is based on the HPE Cray "Shasta" platform and is a heterogeneous system with AMD EPYC-based nodes and 1,536 NVIDIA A100-accelerated nodes. Perlmutter achieved 64.6 Pflop/s.
  • Selene, now at No. 9, is an NVIDIA DGX A100 SuperPOD installed in-house at NVIDIA in the USA. The system is based on AMD EPYC processors with NVIDIA A100 GPUs for acceleration and a Mellanox HDR InfiniBand network, and it achieved 63.4 Pflop/s.
  • Tianhe-2A (Milky Way-2A), a system developed by China's National University of Defense Technology (NUDT) and deployed at the National Supercomputer Center in Guangzhou, China, is now listed as the No. 10 system with 61.4 Pflop/s.

Other TOP500 Highlights
The data from this list shows that AMD processors are still the preferred choice for HPC systems. Frontier uses 3rd Gen AMD EPYC processors optimized for HPC and AI, as does the No. 3 LUMI system. That said, Xeon chips were also prevalent on the list; in fact, the new Leonardo system uses Xeon Platinum processors.

Once again, China and the United States took most of the entries on this list. While the US held strong at 126 machines on the TOP500, China dropped from 173 systems on the last list to 162 on this one. While these two countries make up nearly two-thirds of the machines on the list, it's clear that other countries are working hard to bring about their own HPC innovations. In fact, taken as a whole continent, Europe accounted for 131 machines on this list as opposed to the 118 machines that appeared on the June 2022 TOP500.

The system interconnects shown on this list are very similar to those on previous ones, with small changes. Ethernet interconnects increased from 226 to 233 machines, and InfiniBand dropped from 196 to 194 machines. Omni-Path dropped to 36 machines from the previous 40, and the number of proprietary networks dropped from 6 systems to 4 on the new list.

GREEN500 Results
The system to take the top spot on the GREEN500 was the Henri system at the Flatiron Institute in the United States. Although the machine ranked at No. 405 on the TOP500 list, it had a successful showing in terms of its energy efficiency. The system has an efficiency score of 65.09 GFlops/Watt, 5,920 cores, and an HPL score of 2.038 PFlop/s.

The last list's GREEN500 winner was the Frontier TDS machine, which has since moved down to the No. 2 spot. This system achieved an efficiency score of 62.68 GFlops/Watt; it has 120,832 total cores and an HPL score of 19.2 PFlop/s, which earned it the No. 32 spot on the TOP500. Considering the Frontier TDS machine is essentially a single rack identical to the ones used in the full Frontier system, it makes sense that it is much more powerful than the No. 1 Henri system.

The No. 3 spot on the GREEN500 list went to the Adastra machine at France's GENCI-CINES. On top of its high ranking in energy efficiency, this machine also captured the No. 11 spot on the TOP500. The Adastra system achieved an energy efficiency rating of 58.02 GFlops/Watt alongside its impressive HPL score of 46.1 PFlop/s.

However, in terms of both power and energy efficiency, the Frontier system once again showed impressive results. Despite dropping from the No. 2 spot on the last GREEN500 to the No. 6 spot on this list, Frontier still provides incredible output for the energy going in. The machine produces 1.102 EFlop/s of HPL performance with an energy efficiency of 52.23 GFlops/Watt. Frontier is proof that the most powerful machines in the world don't need to pursue performance at the expense of energy efficiency.
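As a back-of-the-envelope check on these GREEN500 figures, dividing an HPL score by the reported efficiency gives the power a system drew during its run; a small sketch of that arithmetic:

```python
# Back-of-the-envelope check of the GREEN500 numbers quoted above: HPL score
# divided by efficiency gives the power drawn during the run.
def implied_power_watts(hpl_flops_per_s: float, gflops_per_watt: float) -> float:
    """Power (in watts) implied by an HPL score and a GFlops/Watt rating."""
    return hpl_flops_per_s / (gflops_per_watt * 1e9)

# Frontier: 1.102 EFlop/s at 52.23 GFlops/Watt -> roughly 21 MW
print(f"Frontier: {implied_power_watts(1.102e18, 52.23) / 1e6:.1f} MW")
# Henri: 2.038 PFlop/s at 65.09 GFlops/Watt -> roughly 31 kW
print(f"Henri:    {implied_power_watts(2.038e15, 65.09) / 1e3:.1f} kW")
```

That works out to roughly 21 MW for Frontier versus about 31 kW for Henri, which helps explain how a small system can top the efficiency list while sitting far down the TOP500.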

HPCG Results
The TOP500 list has incorporated the High-Performance Conjugate Gradient (HPCG) benchmark results, which provide an alternative metric for assessing supercomputer performance. This score is meant to complement the HPL measurement to give a fuller understanding of the machine.
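HPCG stresses the sparse, memory-bound operations (sparse matrix-vector products, dot products, and the like) that dominate many real applications, which is why its scores are far lower than HPL's. A minimal single-node conjugate-gradient loop, assuming numpy and scipy are available, gives a flavor of that computation; the real benchmark adds a multigrid preconditioner and runs distributed across the whole machine:

```python
# A textbook conjugate-gradient loop on a small sparse 1D Poisson system, to show
# the kind of memory-bound sparse work HPCG exercises. This is only the core
# iteration, not the actual HPCG benchmark code.
import numpy as np
import scipy.sparse as sp

n = 1000
A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x = np.zeros(n)
r = b - A @ x
p = r.copy()
rs_old = r @ r
tol = 1e-6 * np.linalg.norm(b)
for it in range(10_000):
    Ap = A @ p                              # sparse matrix-vector product dominates
    alpha = rs_old / (p @ Ap)
    x += alpha * p
    r -= alpha * Ap
    rs_new = r @ r
    if np.sqrt(rs_new) < tol:
        break
    p = r + (rs_new / rs_old) * p
    rs_old = rs_new

print(f"converged in {it + 1} iterations, residual norm {np.sqrt(rs_new):.2e}")
```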

The winner on this list, as on the last one, is Fugaku with a score of 16.0 HPCG-petaflops. Unlike last time, Frontier has now submitted HPCG results, achieving 14.054 HPCG-petaflops. This puts it at the No. 2 spot, above LUMI, which scored 3.408 HPCG-petaflops.

HPL-MxP Results (formerly HPL-AI)
The HPL-MxP benchmark seeks to highlight the convergence of HPC and artificial intelligence (AI) workloads based on machine learning and deep learning by solving a system of linear equations using novel, mixed-precision algorithms that exploit modern hardware.
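In practice this usually means doing the expensive factorization in low precision and then recovering full double-precision accuracy with a few cheap refinement steps. A toy single-node sketch of that structure, using float32 where the real HPL-MxP runs typically use FP16/tensor cores and GMRES-based refinement:

```python
# Sketch of the mixed-precision idea: factorize in low precision (fast), then
# recover double-precision accuracy with cheap iterative-refinement steps.
# float32 stands in here for the FP16/tensor-core math used in real runs.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(1)
n = 2000
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
b = rng.standard_normal(n)

lu, piv = lu_factor(A.astype(np.float32))         # O(n^3) work done in low precision
x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)

for step in range(5):                             # O(n^2) refinement in double precision
    r = b - A @ x                                 # true residual in float64
    print(f"step {step}: residual norm {np.linalg.norm(r):.2e}")
    dx = lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)
    x += dx
```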

This year's winner is the Frontier machine with its exciting 7.9 EFlop/s score on the HPL-MxP benchmark. LUMI was in second place with a score of 2.2 EFlop/s, followed by the Fugaku machine with a score of 2.0 EFlop/s.

View at TechPowerUp Main Site
 
Thank God for EPYC and servers, or competition in the CPU market would have been short-lived now that AMD is having difficulty marketing AM5 against Intel's offerings.
 
What do they do with it?
 

Count von Schwalbe

Moderator
Staff member
What do they do with it?
Physics simulations mostly, AFAIK. It's installed at ORNL which is part of the Department of Energy, and one of the leading facilities for nuclear research in the US.
 
That’s a lot of flopping ;)
 
Physics simulations mostly, AFAIK. It's installed at ORNL which is part of the Department of Energy, and one of the leading facilities for nuclear research in the US.

Physics simulations can cover a wide variety of subjects, though.

* Finite Element Analysis -- Simulated car crashes. Simulated structures. Simulated earthquakes. Simulated heatsinks. Etc. etc. Instead of "building" a real object and then wrecking it to understand how it breaks... you build it in a computer and simulate how it would break in a car crash or whatever. You use the data to make a new design over and over until you've figured out the problem. (A toy sketch of this kind of time-stepped simulation follows at the end of this post.)

* Weather modeling -- Everyone wants to predict the weather better. It takes a supercomputer to calculate / forward-simulate weather effects (i.e., where hurricanes will make landfall, and other such predictions).

* Protein Folding -- Understanding how molecules interact inside the human body. Our molecules are very large and complicated. These sorts of simulations can help us understand new mutations of COVID-19 and other such "molecules" that interact with the human body, finding new cures or at least new levels of understanding for current diseases and/or medicines.

And of course: nuclear research. But there are plenty of physics simulations with other applications. All of the USA's national labs are supercomputer experts, and each has multiple supercomputers for a reason. They have a lot of practical physics to research.
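For anyone curious, the basic forward-simulation loop is the same in spirit whether it's heat spreading through a heatsink or moisture moving through the atmosphere: update every cell from its neighbours, step time forward, repeat. A toy 1D heat-diffusion sketch in plain numpy (nowhere near production scale, just the pattern):

```python
# Toy forward simulation: explicit finite-difference stepping of the 1D heat
# equation. Production codes do this in 3D on unstructured meshes across
# thousands of nodes, but the update-neighbours-and-step pattern is the same.
import numpy as np

nx, nt = 101, 2000
dx, dt, alpha = 1.0 / (nx - 1), 2e-5, 1.0      # grid spacing, time step, diffusivity
u = np.zeros(nx)
u[nx // 2] = 100.0                             # hot spot in the middle

for _ in range(nt):
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    lap[0] = lap[-1] = 0.0                     # hold the boundary temperatures fixed
    u = u + alpha * dt * lap                   # forward-Euler update

print(f"peak temperature after {nt} steps: {u.max():.2f}")
```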

Summit, an IBM-built system at the Oak Ridge National Laboratory (ORNL) in Tennessee, USA, is now listed at the No. 5 spot worldwide with a performance of 148.8 Pflop/s on the HPL benchmark, which is used to rank the TOP500 list. Summit has 4,356 nodes, each one housing two POWER9 CPUs with 22 cores each and six NVIDIA Tesla V100 GPUs each with 80 streaming multiprocessors (SM). The nodes are linked together with a Mellanox dual-rail EDR InfiniBand network.

Woah, Summit is still in the top 5? What a beast.
 

Leiesoldat

lazy gamer & woodworker
Supporter
I posted this in another thread a couple of months back on what is actually run on machines like Frontier. This information can be found on the Exascale Computing Project's website (Frontier is the hardware portion of ECP, while the software and applications were developed in tandem rather than after the fact). Nuclear non-proliferation and weapons research are just small portions of what gets run on a supercomputer at the United States' national labs. Frontier will most likely be unseated by El Capitan, which is being installed at Lawrence Livermore National Laboratory but will mostly be running classified projects, as Lawrence Livermore is one of the three weapons labs along with Los Alamos and Sandia.

  • Chemistry and Materials (This area focuses on simulation capabilities that attempt to precisely describe the underlying properties of matter needed to optimize and control the design of new materials and energy technologies.)
    • LatticeQCD - Validate Fundamental Laws of Nature (this is quantum-related, which is the hot new area of science)
    • NWChemEx - Tackling Chemical, Materials, and Biomolecular Challenges in Exascale: Catalytic Conversion of Biomass-derived Alcohols
    • GAMESS - General Atomic and Molecular Electronic Structure System: Biofuel Catalyst Design
    • EXAALT - Molecular Dynamics at Exascale: Simultaneously address time, length, and accuracy requirements for predictive microstructural evolution of materials
    • ExaAM - Transforming Additive Manufacturing through Exascale Simulation: Additive Manufacturing of Qualifiable Metal Parts
    • QMCPACK - Quantum Mechanics at Exascale: Find, predict, and control materials and properties at quantum level
  • Co-Design (These projects target crosscutting algorithmic methods that capture the most common patterns of computation and communication [known as motifs] in the ECP applications.)
    • Adaptive Mesh Refinement - Adaptive mesh refinement (AMR) is like a computational microscope; it allows scientists to “zoom in” on particular regions of space that are more interesting than others. (A tiny sketch of the idea follows after this list.)
    • Efficient Exascale Discretizations - Efficient exploitation of exascale architectures requires a rethink of the numerical algorithms used in large-scale applications of strategic interest to the DOE. Many large-scale applications employ unstructured finite element discretization methods—the process of dividing a large simulation into smaller components in preparation for computer analysis—where practical efficiency is measured by the accuracy achieved per unit computational time.
    • Online Data Analysis and Reduction at the Exascale
    • Particle-Based Applications - Particle-based simulation approaches are ubiquitous in computational science and engineering. The “particles” may represent, for example, the atomic nuclei of quantum and classical molecular dynamics methods or gravitationally interacting bodies or tracer particles in N-body simulations.
    • Efficient Implementation of Key Graph Algorithms
    • Exascale Machine Learning Technologies
    • Proxy Applications - Proxy applications (proxy apps) are small, simplified codes that allow application developers to share important features of larger production applications without forcing collaborators to assimilate large and complex code bases.
  • Data Analytics and Optimization
    • ExaSGD - Optimizing Stochastic Grid Dynamics at Exascale: Reliable and Efficient Planning of the Power Grid
    • CANDLE - Exascale Deep Learning–Enabled Precision Medicine for Cancer: Accelerate and Translate Cancer Research, Develop pre-clinical drug response models, predict mechanisms of RAS/RAF driven cancers, and develop treatment strategies
    • ExaBiome - Exascale Solutions for Microbiome Analysis: Metagenomics for Analysis of Biogeochemical Cycles
    • ExaFEL - Data Analytics at Exascale for Free Electron Lasers: Light Source–Enabled Analysis of Protein and Molecular Structures and Design
  • Earth and Space Science
    • ExaStar - Exascale Models of Stellar Explosions: Demystify Origin of Chemical Elements
    • ExaSky - Computing at the Extreme Scales: Cosmological Probe of the Standard Model of Particle Physics
    • EQSIM - High-Performance, Multidisciplinary Simulations for Regional-Scale Earthquake Hazard/Risk Assessments: Earthquake Hazard Risk Assessment
    • Subsurface - Exascale Subsurface Simulator of Coupled Flow, Transport, Reactions, and Mechanics: Carbon Capture, Fossil Fuel Extraction, Waste Disposal
    • E3SM-MMF - Cloud-Resolving Climate Modeling of the Earth’s Water Cycle: Accurate Regional Impact Assessment in Earth Systems (modeling cloud formations for all of North America for instance)
  • Energy
    • ExaWind - Exascale Predictive Wind Plant Flow Physics Modeling: Turbine Wind Plant Efficiency
    • Combustion-PELE - High-efficiency, Low-emission Combustion Engine Design: Advance Understanding of Fundamental Turbulence-Chemistry Interactions in Device-relevant Conditions
    • MFIX-Exa - Performance Prediction of Multiphase Energy Conversion Device: Scale-up of Clean Fossil Fuel Combustion
    • WDMApp - High-fidelity Whole Device Modeling of Magnetically Confined Fusion Plasmas: Prepare for the International Thermonuclear Experimental Reactor (ITER) [fusion reactor in the south of France] experiments and increase return on investment (ROI) of validation data and understanding; prepare for beyond-ITER devices
    • ExaSMR - Coupled Monte Carlo Neutronics and Fluid Flow Simulation of Small Modular Reactors: Design and Commercialization of Small Modular Reactors
    • WarpX - Exascale Modeling of Advanced Particle Accelerators: Plasma Wakefield Accelerator Design
  • National Security
    • Ristra - Multi-physics simulation tools for weapons-relevant applications: The Ristra project is developing new multi-physics simulation tools that address emerging HPC challenges of massive, heterogeneous parallelism using novel programming models and data management.
    • MAPP - Multi-physics simulation tools for High Energy Density Physics (HEDP) and weapons-relevant applications for DOE and DoD
    • EMPIRE & SPARC - EMPIRE addresses electromagnetic plasma physics, and SPARC addresses reentry aerodynamics
  • Data and Visualization
    • ADIOS - Support efficient I/O and code coupling services
    • DataLib - Support efficient I/O, I/O monitoring and data services
    • VTK-m - Provide VTK-based scientific visualization software that supports shared memory parallelism
    • VeloC/SZ - Develop two software products: VeloC checkpoint restart and SZ lossy compression with strict error bounds
    • ExaIO - Develop an efficient system topology and storage hierarchy-aware HDF5 and UnifyFS parallel I/O libraries
    • Alpine/ZFP - Deliver in situ visualization and analysis algorithms, infrastructure and data reduction of floating-point arrays
  • Development Tools
    • EXA-PAPI++ - Develop a standardized interface to hardware performance counters
    • HPCToolkit - Develop an HPC Tool Kit for performance analysis
    • PROTEAS-TUNE - Develop a software tool chain for emerging architectures
    • SOLLVE - Develop/enhance OpenMP programming model
    • FLANG - Develop a Fortran front-end for LLVM
  • Mathematical Libraries
    • xSDK4ECP - Create a value-added aggregation of DOE math libraries to combine usability, standardization, and interoperability
    • PETSc/TAO - Deliver efficient libraries for sparse linear and nonlinear systems of equations and numerical optimization
    • STRUMPACK/SuperLU - Provide direct methods for linear systems of equations and Fourier transformations
    • SUNDIALS-hypre - Deliver adaptive time-stepping methods for dynamical systems and solvers
    • CLOVER - Develop scalable, portable numerical algorithms to facilitate efficient simulations
    • ALExa - Provide technologies for passing data among grids, computing surrogates, and accessing mathematical libraries from Fortran
  • Programming Models & Runtimes
    • Exascale MPI/MPICH - Enhance the MPI standard and the MPICH implementation of MPI for exascale
    • Legion - provides a data-centric programming system that allows scientists to describe the properties of their program data and dependencies, along with a runtime that extracts tasks and executes them using knowledge of the exascale systems to improve performance
    • PaRSEC - supports the development of domain-specific languages and tools to simplify and improve the productivity of scientists when using a task-based system and provides a low-level runtime
    • Pagoda: UPC++/GASNet - Develop/enhance a Partitioned Global Address Space (PGAS) programming model
    • SICM - addresses the emerging complexity of exascale memory hierarchies by providing a portable, simplified interface to complex memory
    • OMPI-X - Enhance the MPI standard and the Open MPI implementation of MPI for exascale
    • Kokkos/RAJA - Develop abstractions for node-level performance portability
    • Argo - Optimize existing low-level system software components to improve performance and scalability and improve functionality of exascale applications and runtime systems
  • Software Ecosystem and Delivery
    • E4S & SDK Efforts - The large number of software technologies being delivered to the application developers poses challenges, especially if the application needs to use more than one technology at the same time. The Software Development Kit (SDK) efforts identify meaningful aggregations of products within the programming models and runtimes, development tools, and data and visualization technical areas, with the goal of increasing the interoperability, availability and quality.
  • NNSA Software
    • LANL (Los Alamos National Laboratory) NNSA - LANL’s NNSA/ATDM software technology efforts include Legion (PM/R), LLVM (Tools), Cinema (Data/Vis), and BEE (Ecosystem)
    • LLNL (Lawrence Livermore National Laboratory) NNSA - LLNL’s NNSA/ATDM software technology efforts include Spack, Flux (Ecosystem), RAJA, Umpire (PMR), Debugging @ Scale, Flux/Power (Tools), and MFEM (Math Libs)
    • SNL (Sandia National Laboratory) NNSA - SNL’s NNSA/ATDM software technology efforts include Kokkos (PM/R) and Kokkos Kernels (Math Libs)
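Since the Adaptive Mesh Refinement entry above is fairly abstract, here is a tiny 1D sketch of the idea: keep subdividing only the cells where the solution changes quickly, so resolution goes where it is needed. Real ECP AMR work (e.g. AMReX) is 3D, parallel, and far more sophisticated; this is just the core concept:

```python
# Minimal 1D adaptive-refinement sketch: split only the cells where the sampled
# function changes by more than a tolerance, so fine cells cluster around the
# sharp feature instead of being used everywhere.
import numpy as np

def f(x):
    return np.tanh(50.0 * (x - 0.5))           # sharp transition near x = 0.5

def refine(cells, tol=0.05, max_rounds=8):
    """Repeatedly split any cell whose endpoint values differ by more than tol."""
    for _ in range(max_rounds):
        new_cells, changed = [], False
        for a, b in cells:
            if abs(f(b) - f(a)) > tol:
                mid = 0.5 * (a + b)
                new_cells += [(a, mid), (mid, b)]
                changed = True
            else:
                new_cells.append((a, b))
        cells = new_cells
        if not changed:
            break
    return cells

coarse = [(i / 10, (i + 1) / 10) for i in range(10)]   # 10 uniform cells
fine = refine(coarse)
print(f"{len(fine)} cells after refinement, "
      f"smallest width {min(b - a for a, b in fine):.4g}")
```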
 

Count von Schwalbe

Moderator
Staff member
I'll be that guy

Can it run Crysis? :wtf:
Not easily - no DirectX support! I suppose they can emulate it somehow but it would be a project...
 