News Posts matching #MI300A


TOP500: El Capitan Achieves Top Spot, Frontier and Aurora Follow Behind

The 64th edition of the TOP500 reveals that El Capitan has achieved the top spot and is officially the third system to reach exascale computing, after Frontier and Aurora. Those two systems have since moved down to the No. 2 and No. 3 spots, respectively. Additionally, new systems have found their way onto the Top 10.

The new El Capitan system at the Lawrence Livermore National Laboratory in California, U.S.A., has debuted as the most powerful system on the list with an HPL score of 1.742 EFlop/s. It has 11,039,616 combined CPU and GPU cores and is based on AMD 4th generation EPYC processors with 24 cores at 1.8 GHz and AMD Instinct MI300A accelerators. El Capitan relies on a Cray Slingshot 11 network for data transfer and achieves an energy efficiency of 58.89 GigaFLOPS/watt. This power efficiency rating helped El Capitan achieve No. 18 on the GREEN500 list as well.
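For context, the HPL score and the GREEN500 efficiency figure together imply the machine's power draw during the benchmark run. A minimal sketch of that arithmetic (variable names are ours, not TOP500's):

```python
# Back-of-the-envelope check: the power draw implied by El Capitan's
# TOP500 entry (1 EFlop/s = 1e9 GFlop/s).
hpl_eflops = 1.742                  # HPL score in EFlop/s
gflops_per_watt = 58.89             # GREEN500 energy-efficiency figure

power_watts = (hpl_eflops * 1e9) / gflops_per_watt
print(f"Implied power draw: {power_watts / 1e6:.1f} MW")  # ~29.6 MW
```

The roughly 30 MW result lines up with the power budgets typically cited for exascale-class systems.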

AMD Powers El Capitan: The World's Fastest Supercomputer

Today, AMD showcased its ongoing high performance computing (HPC) leadership at Supercomputing 2024 by powering the world's fastest supercomputer for the sixth straight Top 500 list.

The El Capitan supercomputer, housed at Lawrence Livermore National Laboratory (LLNL), powered by AMD Instinct MI300A APUs and built by Hewlett Packard Enterprise (HPE), is now the fastest supercomputer in the world with a High-Performance Linpack (HPL) score of 1.742 exaflops on the latest Top 500 list. El Capitan and the Frontier system at Oak Ridge National Laboratory claimed the No. 18 and No. 22 spots, respectively, on the Green 500 list, showcasing the ability of AMD EPYC processors and AMD Instinct GPUs to deliver leadership performance and energy efficiency for HPC workloads.

NEC to Build Japan's Newest Supercomputer Based on Intel Xeon 6900P and AMD Instinct MI300A

NEC Corporation (NEC; TSE: 6701) has received an order for a next-generation supercomputer system from Japan's National Institutes for Quantum Science and Technology (QST), a National Research and Development Agency, and the National Institute for Fusion Science (NIFS), part of the National Institutes of Natural Sciences under the Inter-University Research Institute Corporation. The new supercomputer system is scheduled to be operational from July 2025. It will feature a multi-architecture design combining the latest CPUs and GPUs with large-capacity storage and a high-speed network, and is expected to support a wide range of research and development in fusion science.

Specifically, the system will be used for precise prediction of experiments and the creation of operation scenarios in the ITER project, an international collaboration, and in the Satellite Tokamak (JT-60SA) project, which is being promoted as a Broader Approach activity, as well as for the design of DEMO reactors. The DEMO project runs large-scale numerical calculations for DEMO design and R&D to accelerate the realization of a DEMO reactor that contributes to carbon neutrality. In addition, NIFS will use the supercomputer for numerical simulation research on multi-scale and multi-physics systems, including fusion plasmas, to broadly accelerate research on the science and applications of fusion plasmas. As an Inter-University Research Institute, NIFS will also provide universities and research institutes nationwide with opportunities for collaborative research using the state-of-the-art supercomputer.

Demand for NVIDIA's Blackwell Platform Expected to Boost TSMC's CoWoS Total Capacity by Over 150% in 2024

NVIDIA's next-gen Blackwell platform, which includes B-series GPUs and integrates NVIDIA's own Grace Arm CPU in models such as the GB200, represents a significant development. TrendForce points out that the GB200 and its predecessor, the GH200, both feature a combined CPU+GPU solution, with the GH200 pairing the NVIDIA Grace CPU with the H200 GPU. However, the GH200 accounted for only approximately 5% of NVIDIA's high-end GPU shipments. The supply chain has high expectations for the GB200, with projections suggesting that shipments could reach into the millions of units in 2025, potentially accounting for nearly 40 to 50% of NVIDIA's high-end GPU shipments.

Although NVIDIA plans to launch products such as the GB200 and B100 in the second half of this year, upstream wafer packaging will need to adopt the more complex, high-precision CoWoS-L technology, making the validation and testing process time-consuming. Additional time will also be required to optimize the B-series for AI server systems in areas such as network communication and cooling performance. The GB200 and B100 are therefore not expected to see significant production volumes until 4Q24 or 1Q25.

Unannounced AMD Instinct MI388X Accelerator Pops Up in SEC Filing

AMD's Instinct family has welcomed a new addition, the MI388X AI accelerator, discovered in a lengthy 10-K regulatory filing submitted to the SEC. The document reveals that the unannounced SKU, along with the MI250, MI300X, and MI300A integrated circuits, cannot be sold to Chinese customers due to updated US trade regulations (new requirements were issued around October 2023). The Versal VC2802 and VE2802 FPGA products are also mentioned in the same section. Earlier this month, AMD's China-specific Instinct MI309 package was deemed too powerful for sale in China by the US Department of Commerce.

AMD has not published an official specification for the Instinct MI388X, and technical details have not emerged via leaks. The "X" suffix likely implies that it is designed for AI and HPC applications, akin to the recently launched MI300X accelerator. The higher model number could naturally point to a more potent spec sheet, although Tom's Hardware posits that the MI388X is a semi-custom spin-off of an existing model.

GIGABYTE Announces NVIDIA GH200 and AMD MI300A Based Servers for AI Edge Applications, at MWC 2024

GIGABYTE Technology, an IT pioneer advancing global industries through cloud and AI computing systems, is presenting innovative enterprise computing solutions at MWC 2024 under the theme "Future of COMPUTING," featuring trailblazing servers, green computing solutions, and edge AI technologies. These advancements open new possibilities for agile and sustainable IT strategies, enabling industries to harness real-time intelligence across hyperconnected data centers, cloud, edge, and devices, and to gain efficiency, cost-effectiveness, and competitive advantage from the synergies of 5G and AI.

GIGABYTE presents the G593-ZX1/ZX2, an AI server featuring eight AMD Instinct MI300X GPUs and a new addition to GIGABYTE's flagship AI/HPC server series. Other highlighted exhibits include the high-density H223-V10 supporting the NVIDIA Grace Hopper Superchip, the G383-R80 server supporting four AMD Instinct MI300A APUs, and a G593 series AI server equipped with the powerful NVIDIA HGX H100 8-GPU platform.

GIGABYTE Advanced Data Center Solutions Unveils Telecom and AI Servers at MWC 2024

GIGABYTE Technology, an IT pioneer focused on advancing global industries through cloud and AI computing systems, is coming to MWC 2024 with next-generation servers that empower telcos, cloud service providers, enterprises, and SMBs to swiftly harness the value of 5G and AI. Featured are a cutting-edge AI server with eight AMD Instinct MI300X GPUs and a comprehensive AI/HPC server series supporting the latest chip technology from AMD, Intel, and NVIDIA. The showcase will also feature integrated green computing solutions excelling in heat dissipation and energy reduction.

Continuing the booth theme "Future of COMPUTING", GIGABYTE's presentation will cover servers for AI/HPC, RAN and Core networks, modular edge platforms, all-in-one green computing solutions, and AI-powered self-driving technology. The exhibits will demonstrate how industries extend AI applications from cloud to edge and terminal devices through 5G connectivity, expanding future opportunities with faster time to market and sustainable operations. The showcase spans from February 26th to 29th at Booth #5F60, Hall 5, Fira Gran Via, Barcelona.

Financial Analyst Outs AMD Instinct MI300X "Projected" Pricing

AMD's December 2023 launch of its new Instinct series accelerators generated plenty of buzz in tech news and financial circles, but few outside the company are privy to Team Red's pricing for the CDNA 3.0-powered MI300X and MI300A models. A Citi report has pulled back the curtain, albeit with "projected" figures: an inside source claims that Microsoft has purchased the Instinct MI300X 192 GB model for roughly $10,000 apiece. North American enterprise customers appear to have taken delivery of the first MI300 products around mid-January, and, inevitably, pricing details have leaked out. SeekingAlpha's article (based on Citi's findings) alleges that Microsoft's data center division is AMD's top buyer of MI300X hardware, and that GPT-4 is reportedly up and running on the new accelerators.

The leakers claim that businesses further down the AI and HPC food chain are having to shell out $15,000 per MI300X unit, though this is still a bargain compared to NVIDIA's closest competing package, the venerable H100 SXM5 80 GB professional card. Team Green similarly does not reveal its enterprise pricing to the wider public, but Tom's Hardware has kept tabs on H100 insider info and market leaks: "over the recent quarters, we have seen NVIDIA's H100 80 GB HBM2E add-in-card available for $30,000, $40,000, and even much more at eBay. Meanwhile, the more powerful H100 80 GB SXM with 80 GB of HBM3 memory tends to cost more than an H100 80 GB AIB." Citi's projection has Team Green charging up to four times more for its H100 product compared to Team Red's MI300X pricing. NVIDIA's dominant position in the AI GPU market could be challenged by cheaper yet still very performant alternatives; additionally, chip shortages have forced Jensen & Co. to step outside their comfort zone. Tom's Hardware reached out to AMD for comment on the Citi pricing claims; a company representative declined.
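Taking the quoted figures at face value, the claimed price gap is easy to reproduce. A quick sketch using only the numbers reported above (all projections and leaks, not confirmed MSRPs):

```python
# Reproducing the "up to four times more" claim from the reported figures.
# All inputs are projected or leaked prices (USD), not confirmed MSRPs.
mi300x_microsoft = 10_000             # Citi: Microsoft's reported MI300X price
mi300x_street = 15_000                # reported price for other customers
h100_low, h100_high = 30_000, 40_000  # Tom's Hardware: observed H100 AIC range

print(f"H100 vs. Microsoft's MI300X price: up to {h100_high / mi300x_microsoft:.0f}x")
print(f"H100 vs. street MI300X price: {h100_low / mi300x_street:.1f}x "
      f"to {h100_high / mi300x_street:.1f}x")
```

The 4x figure holds only against the reported Microsoft price; against the $15,000 street price, the gap narrows to roughly 2x to 2.7x.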

Supermicro Extends AI and GPU Rack Scale Solutions with Support for AMD Instinct MI300 Series Accelerators

Supermicro, Inc., a Total IT Solution Manufacturer for AI, Cloud, Storage, and 5G/Edge, is announcing three new additions to its AMD-based H13 generation of GPU servers, optimized to deliver leading-edge performance and efficiency and powered by the new AMD Instinct MI300 Series accelerators. Supermicro's powerful rack-scale solutions, built around 8-GPU servers in the AMD Instinct MI300X OAM configuration, are ideal for large-model training.

The new 2U liquid-cooled and 4U air-cooled servers with AMD Instinct MI300A accelerated processing units (APUs) are available now, improving data center efficiency and powering the fast-growing, complex demands of AI, LLM, and HPC workloads. The new systems contain quad APUs for scalable applications. Supermicro can deliver complete liquid-cooled racks for large-scale environments with up to 1,728 TFlops of FP64 performance per rack. Supermicro's worldwide manufacturing facilities streamline the delivery of these new servers for AI and HPC convergence.
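The 1,728 TFLOPS rack figure can be sanity-checked against AMD's published MI300A FP64 vector peak of 61.3 TFLOPS. The sketch below assumes the rack number is a straight multiple of that per-APU peak; Supermicro's announcement does not break out the node count, so the derived values are estimates:

```python
# Working backwards from Supermicro's quoted 1,728 TFLOPS FP64 per rack.
# Treating the rack figure as a simple multiple of the per-APU peak is an
# assumption; Supermicro does not publish the underlying node count here.
rack_fp64_tflops = 1_728.0
per_apu_fp64_tflops = 61.3          # AMD spec: MI300A FP64 vector peak
apus_per_node = 4                   # quad-APU systems, per the announcement

implied_apus = rack_fp64_tflops / per_apu_fp64_tflops
print(f"Implied APUs per rack:  {implied_apus:.1f}")                  # ~28.2
print(f"Implied nodes per rack: {implied_apus / apus_per_node:.1f}")  # ~7.0
```

Under those assumptions, the claim works out to on the order of seven quad-APU nodes per rack at peak vector throughput.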

AMD Delivers Leadership Portfolio of Data Center AI Solutions with AMD Instinct MI300 Series

Today, AMD announced the availability of the AMD Instinct MI300X accelerators, with industry-leading memory bandwidth for generative AI and leadership performance for large language model (LLM) training and inferencing, as well as the AMD Instinct MI300A accelerated processing unit (APU), which combines the latest AMD CDNA 3 architecture and "Zen 4" CPUs to deliver breakthrough performance for HPC and AI workloads.

"AMD Instinct MI300 Series accelerators are designed with our most advanced technologies, delivering leadership performance, and will be in large scale cloud and enterprise deployments," said Victor Peng, president, AMD. "By leveraging our leadership hardware, software and open ecosystem approach, cloud providers, OEMs and ODMs are bringing to market technologies that empower enterprises to adopt and deploy AI-powered solutions."