News Posts matching #Grace Hopper


ASRock Rack Brings End-to-End AI and HPC Server Portfolio to SC24

ASRock Rack Inc., a leading innovative server company, today announces its presence at SC24, held at the Georgia World Congress Center in Atlanta from November 18-21. At booth #3609, ASRock Rack will showcase a comprehensive high-performance portfolio of server boards, systems, and rack solutions with NVIDIA accelerated computing platforms, helping address the needs of enterprises, organizations, and data centers.

Artificial intelligence (AI) and high-performance computing (HPC) continue to reshape technology. ASRock Rack is presenting a complete suite of solutions spanning edge, on-premise, and cloud environments, engineered to meet the demands of AI and HPC. The 2U short-depth MECAI, incorporating the NVIDIA GH200 Grace Hopper Superchip, is developed to supercharge accelerated computing and generative AI in space-constrained environments. The 4U10G-TURIN2 and 4UXGM-GNR2, supporting ten and eight NVIDIA H200 NVL PCIe GPUs respectively, aim to help enterprises and researchers tackle every AI and HPC challenge with enhanced performance and greater energy efficiency. NVIDIA H200 NVL is ideal for lower-power, air-cooled enterprise rack designs that require flexible configurations, delivering acceleration for AI and HPC workloads of any size.

NVIDIA Shifts Gears: Open-Source Linux GPU Drivers Take Center Stage

Just a few months after hiring Ben Skeggs, a lead maintainer of the open-source NVIDIA GPU driver for the Linux kernel, NVIDIA has announced a complete transition to open-source GPU kernel modules in its upcoming R560 driver release for Linux. This decision comes two years after the company's initial foray into open-source territory with the R515 driver in May 2022. At the time, the company focused on data center compute GPUs, while GeForce and workstation GPU support remained in alpha. Now, after extensive development and optimization, NVIDIA reports that its open-source modules have achieved performance parity with, and in some cases surpassed, their closed-source counterparts. This transition brings a host of new capabilities, including heterogeneous memory management support, confidential computing features, and compatibility with the coherent memory architectures of NVIDIA's Grace platform.

The move to open source is expected to foster greater collaboration within the Linux ecosystem and potentially lead to faster bug fixes and feature improvements. However, not all GPUs will be compatible with the new open-source modules. While cutting-edge platforms like NVIDIA Grace Hopper and Blackwell will require open-source drivers, older GPUs from the Maxwell, Pascal, or Volta architectures must stick with proprietary drivers. NVIDIA has developed a detection helper script to guide driver selection for users who are unsure about compatibility. The shift also brings changes to NVIDIA's installation processes. The default driver version for most installation methods will now be the open-source variant. This affects package managers with the CUDA meta package, runfile installations, and even Windows Subsystem for Linux.
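The compatibility rules above boil down to a simple decision per GPU architecture. The sketch below illustrates that logic in Python; the function and architecture names are illustrative assumptions, not NVIDIA's actual detection script.

```python
# Illustrative sketch of the driver-selection rules described above.
# This is NOT NVIDIA's detection helper script; names are hypothetical.

# Architectures that must keep the proprietary driver, per NVIDIA.
PROPRIETARY_ONLY = {"Maxwell", "Pascal", "Volta"}

# Platforms that require the open-source kernel modules.
OPEN_ONLY = {"Grace Hopper", "Blackwell"}

def recommend_driver(architecture: str) -> str:
    """Return which kernel-module flavor a GPU architecture should use."""
    if architecture in OPEN_ONLY:
        return "open"          # open modules are mandatory here
    if architecture in PROPRIETARY_ONLY:
        return "proprietary"   # open modules are not supported
    return "open"              # everything newer defaults to open in R560+

print(recommend_driver("Pascal"))        # proprietary
print(recommend_driver("Grace Hopper"))  # open
```

In practice the real helper script inspects the installed hardware; this sketch only captures the mapping the article describes.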

Noctua Shows Ampere Altra and NVIDIA GH200 CPU Coolers at Computex 2024

Noctua unveiled its new Ampere Altra family of CPU coolers for Ampere Altra and Altra Max Arm processors at the Computex 2024 show, as well as an upcoming cooler for the NVIDIA GH200 Grace Hopper Superchip. In addition, it showcased its new cooperation with Seasonic on the PRIME TX-1600 Noctua Edition power supply and a rather unique Kaelo wine cooler.

In addition to the new and upcoming standard CPU coolers and fans, Noctua also unveiled the new Ampere Altra family of CPU coolers at the Computex 2024 show, aimed at the recently launched Ampere Altra and Altra Max Arm processors with up to 128 cores. The new Noctua Ampere Altra CPU coolers are based on the proven models for Intel Xeon and AMD Threadripper or EPYC platforms. They use Noctua's SecuFirm2 mounting system for the LGA4926 socket and come with pre-applied NT-H2 thermal paste. According to Noctua, they provide exceptional performance and whisper-quiet operation, making them ideal for Arm-based workstations in noise-sensitive environments. The Ampere Altra lineup should already be available at Newegg. In addition, Noctua has unveiled a prototype of its NVIDIA GH200 Grace Hopper superchip cooler, which integrates two custom NH-U12A heatsinks to cool both the Grace CPU and the Hopper GPU. It supports up to 1,000 W of heat dissipation and is aimed at noise-sensitive environments such as local HPC applications and self-hosted open-source LLMs. The NVIDIA GH200 cooler is expected in Q4 this year and will be offered to clients on a pre-order basis.

TOP500: Frontier Keeps Top Spot, Aurora Officially Becomes the Second Exascale Machine

The 63rd edition of the TOP500 reveals that Frontier has once again claimed the top spot, despite no longer being the only exascale machine on the list. Additionally, a new system has found its way into the Top 10.

The Frontier system at Oak Ridge National Laboratory in Tennessee, USA remains the most powerful system on the list with an HPL score of 1.206 EFlop/s. The system has a total of 8,699,904 combined CPU and GPU cores, an HPE Cray EX architecture that combines 3rd Gen AMD EPYC CPUs optimized for HPC and AI with AMD Instinct MI250X accelerators, and it relies on Cray's Slingshot 11 network for data transfer. On top of that, this machine has an impressive power efficiency rating of 52.93 GFlops/Watt - putting Frontier at the No. 13 spot on the GREEN500.

NVIDIA Grace Hopper Ignites New Era of AI Supercomputing

Driving a fundamental shift in the high-performance computing industry toward AI-powered systems, NVIDIA today announced nine new supercomputers worldwide are using NVIDIA Grace Hopper Superchips to speed scientific research and discovery. Combined, the systems deliver 200 exaflops, or 200 quintillion calculations per second, of energy-efficient AI processing power.

New Grace Hopper-based supercomputers coming online include EXA1-HE, in France, from CEA and Eviden; Helios at Academic Computer Centre Cyfronet, in Poland, from Hewlett Packard Enterprise (HPE); Alps at the Swiss National Supercomputing Centre, from HPE; JUPITER at the Jülich Supercomputing Centre, in Germany; DeltaAI at the National Center for Supercomputing Applications at the University of Illinois Urbana-Champaign; and Miyabi at Japan's Joint Center for Advanced High Performance Computing - established between the Center for Computational Sciences at the University of Tsukuba and the Information Technology Center at the University of Tokyo.

NVIDIA Accelerates Quantum Computing Centers Worldwide With CUDA-Q Platform

NVIDIA today announced that it will accelerate quantum computing efforts at national supercomputing centers around the world with the open-source NVIDIA CUDA-Q platform. Supercomputing sites in Germany, Japan and Poland will use the platform to power the quantum processing units (QPUs) inside their NVIDIA-accelerated high-performance computing systems.

QPUs are the brains of quantum computers that use the behavior of particles like electrons or photons to calculate differently than traditional processors, with the potential to make certain types of calculations faster. Germany's Jülich Supercomputing Centre (JSC) at Forschungszentrum Jülich is installing a QPU built by IQM Quantum Computers as a complement to its JUPITER supercomputer, supercharged by the NVIDIA GH200 Grace Hopper Superchip. The ABCI-Q supercomputer, located at the National Institute of Advanced Industrial Science and Technology (AIST) in Japan, is designed to advance the nation's quantum computing initiative. Powered by the NVIDIA Hopper architecture, the system will add a QPU from QuEra. Poland's Poznan Supercomputing and Networking Center (PSNC) has recently installed two photonic QPUs, built by ORCA Computing, connected to a new supercomputer partition accelerated by NVIDIA Hopper.

Gigabyte Unveils Comprehensive and Powerful AI Platforms at NVIDIA GTC

GIGABYTE Technology and Giga Computing, a subsidiary of GIGABYTE and an industry leader in enterprise solutions, will showcase their solutions at the GIGABYTE booth #1224 at NVIDIA GTC, a global AI developer conference running through March 21. This event will offer GIGABYTE the chance to connect with its valued partners and customers, and together explore what the future of computing holds.

The GIGABYTE booth will focus on GIGABYTE's enterprise products that demonstrate AI training and inference delivered by versatile computing platforms based on NVIDIA solutions, as well as direct liquid cooling (DLC) for improved compute density and energy efficiency. Also not to be missed at the NVIDIA booth is the MGX Pavilion, which features a rack of GIGABYTE servers for the NVIDIA GH200 Grace Hopper Superchip architecture.

NVIDIA Calls for Global Investment into Sovereign AI

Nations have long invested in domestic infrastructure to advance their economies, control their own data and take advantage of technology opportunities in areas such as transportation, communications, commerce, entertainment and healthcare. AI, the most important technology of our time, is turbocharging innovation across every facet of society. It's expected to generate trillions of dollars in economic dividends and productivity gains. Countries are investing in sovereign AI to develop and harness such benefits on their own. Sovereign AI refers to a nation's capabilities to produce artificial intelligence using its own infrastructure, data, workforce and business networks.

Why Sovereign AI Is Important
The global imperative for nations to invest in sovereign AI capabilities has grown since the rise of generative AI, which is reshaping markets, challenging governance models, inspiring new industries and transforming others—from gaming to biopharma. It's also rewriting the nature of work, as people in many fields start using AI-powered "copilots." Sovereign AI encompasses both physical and data infrastructures. The latter includes sovereign foundation models, such as large language models, developed by local teams and trained on local datasets to promote inclusiveness with specific dialects, cultures and practices. For example, speech AI models can help preserve, promote and revitalize indigenous languages. And LLMs aren't just for teaching AIs human languages, but for writing software code, protecting consumers from financial fraud, teaching robots physical skills and much more.

NVIDIA Grace Hopper Systems Gather at GTC

The spirit of software pioneer Grace Hopper will live on at NVIDIA GTC. Accelerated systems using powerful processors - named in honor of the pioneer of software programming - will be on display at the global AI conference running March 18-21, ready to take computing to the next level. System makers will show more than 500 servers in multiple configurations across 18 racks, all packing NVIDIA GH200 Grace Hopper Superchips. They'll form the largest display at NVIDIA's booth in the San Jose Convention Center, filling the MGX Pavilion.

MGX Speeds Time to Market
NVIDIA MGX is a blueprint for building accelerated servers with any combination of GPUs, CPUs and data processing units (DPUs) for a wide range of AI, high performance computing and NVIDIA Omniverse applications. It's a modular reference architecture for use across multiple product generations and workloads. GTC attendees can get an up-close look at MGX models tailored for enterprise, cloud and telco-edge uses, such as generative AI inference, recommenders and data analytics. The pavilion will showcase accelerated systems packing single and dual GH200 Superchips in 1U and 2U chassis, linked via NVIDIA BlueField-3 DPUs and NVIDIA Quantum-2 400 Gb/s InfiniBand networks over LinkX cables and transceivers. The systems support industry standards for 19- and 21-inch rack enclosures, and many provide E1.S bays for nonvolatile storage.

GIGABYTE Announces NVIDIA GH200 and AMD MI300A Based Servers for AI Edge Applications, at MWC 2024

GIGABYTE Technology, an IT pioneer advancing global industries through cloud and AI computing systems, is presenting innovative enterprise computing solutions at MWC 2024, featuring trailblazing servers, green computing solutions, and edge AI technologies, under the theme "Future of COMPUTING." These advancements usher in new possibilities for agile and sustainable IT strategies, enabling industries to harness real-time intelligence across hyperconnected data centers, cloud, edge, and devices, resulting in enhanced efficiency, cost-effectiveness, and competitive advantages, all propelled by the synergies of 5G and AI technologies.

GIGABYTE presents G593-ZX1/ZX2, the AI server featuring AMD Instinct MI300X 8-GPU, which is a new addition to GIGABYTE's flagship AI/HPC server series. Other highlighted exhibits include the high-density H223-V10 supporting the NVIDIA Grace Hopper Superchip, the G383-R80 server supporting four AMD Instinct MI300A APUs, and a G593 series AI server equipped with the powerful NVIDIA HGX H100 8-GPU.

ASRock Rack Offers World's Smallest NVIDIA Grace Hopper Superchip Server, Other Innovations at MWC 2024

ASRock Rack is a subsidiary of ASRock that deals with servers, workstations, and other data-center hardware, backed not just by ASRock's considerable brand trust but also by the firm hand of parent company and OEM giant Pegatron. At the 2024 Mobile World Congress, ASRock Rack introduced several new server innovations relevant to the AI edge and 5G cellular carrier industries. A star attraction here is the new ASRock Rack MECAI-GH200, claimed to be the world's smallest server powered by the NVIDIA GH200 Grace Hopper Superchip.

The ASRock Rack MECAI-GH200 executes one of NVIDIA's original design goals behind the Grace Hopper Superchip—AI deployments in an edge environment. The GH200 module combines an NVIDIA Grace CPU with a Hopper AI GPU, and a performance-optimized NVLink interconnect between them. The CPU features 72 Arm Neoverse V2 cores and 480 GB of LPDDR5X memory, while the Hopper GPU has 132 SMs with 528 Tensor cores and 96 GB of HBM3 memory across a 6144-bit memory interface. Given that the newer HBM3e version of the GH200 won't come out before Q2 2024, this has to be the version with HBM3. What makes the MECAI-GH200 the world's smallest GH200 server is its compact 2U form factor—competing solutions tend to be 3U or larger.
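The figures quoted above imply a few quick sums. The snippet below works them out; the six-stack breakdown is an assumption based on HBM3's standard 1024-bit per-stack interface, not something the article states.

```python
# Quick arithmetic on the GH200 figures quoted above. The stack count is
# an inference: HBM3 defines a 1024-bit interface per stack (assumption).
HBM3_STACK_WIDTH_BITS = 1024   # per-stack interface width in HBM3
total_bus_bits = 6144          # GH200 HBM3 interface quoted above

stacks = total_bus_bits // HBM3_STACK_WIDTH_BITS
print(f"{stacks} HBM3 stacks")  # 6 HBM3 stacks

# Combined CPU + GPU memory on a single GH200 module
lpddr5x_gb, hbm3_gb = 480, 96
print(f"{lpddr5x_gb + hbm3_gb} GB total")  # 576 GB total
```

That 576 GB of combined, NVLink-coherent memory is what lets a single 2U box like the MECAI-GH200 host large models at the edge.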

Supermicro Accelerates Performance of 5G and Telco Cloud Workloads with New and Expanded Portfolio of Infrastructure Solutions

Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, delivers an expanded portfolio of purpose-built infrastructure solutions to accelerate performance and increase efficiency in 5G and telecom workloads. With one of the industry's most diverse offerings, Supermicro enables customers to expand public and private 5G infrastructures with improved performance per watt and support for new and innovative AI applications. As a long-term advocate of open networking platforms and a member of the O-RAN Alliance, Supermicro's portfolio incorporates systems featuring 5th Gen Intel Xeon processors, AMD EPYC 8004 Series processors, and the NVIDIA Grace Hopper Superchip.

"Supermicro is expanding our broad portfolio of sustainable and state-of-the-art servers to address the demanding requirements of 5G and telco markets and Edge AI," said Charles Liang, president and CEO of Supermicro. "Our products are not just about technology, they are about delivering tangible customer benefits. We quickly bring data center AI capabilities to the network's edge using our Building Block architecture. Our products enable operators to offer new capabilities to their customers with improved performance and lower energy consumption. Our edge servers contain up to 2 TB of high-speed DDR5 memory, 6 PCIe slots, and a range of networking options. These systems are designed for increased power efficiency and performance-per-watt, enabling operators to create high-performance, customized solutions for their unique requirements. This reassures our customers that they are investing in reliable and efficient solutions."

NVIDIA Accelerates Quantum Computing Exploration at Australia's Pawsey Supercomputing Centre

NVIDIA today announced that Australia's Pawsey Supercomputing Research Centre will add the NVIDIA CUDA Quantum platform accelerated by NVIDIA Grace Hopper Superchips to its National Supercomputing and Quantum Computing Innovation Hub, furthering its work driving breakthroughs in quantum computing.

Researchers at the Perth-based center will leverage CUDA Quantum - an open-source hybrid quantum computing platform that features powerful simulation tools and capabilities to program hybrid CPU, GPU, and QPU systems - as well as the NVIDIA cuQuantum software development kit of optimized libraries and tools for accelerating quantum computing workflows. The NVIDIA Grace Hopper Superchip - which combines the NVIDIA Grace CPU and Hopper GPU architectures - provides extreme performance to run high-fidelity and scalable quantum simulations on accelerators and to seamlessly interface with future quantum hardware infrastructure.

Indian Client Purchases Additional $500 Million Batch of NVIDIA AI GPUs

Indian data center operator Yotta is reportedly set to spend big with another order placed with NVIDIA—a recent Reuters article outlines a $500 million purchase of Team Green AI GPUs. Yotta is in the process of upgrading its AI Cloud infrastructure, and its total tally for this endeavor (involving Hopper and newer Grace Hopper models) is likely to hit $1 billion. An official company statement from December confirmed the existence of an extra procurement of GPUs, but it did not provide any details regarding budget or hardware choices at that point in time. Reuters contacted Sunil Gupta, Yotta's CEO, last week for a comment on the situation. The co-founder elaborated that the order "would comprise nearly 16,000 of NVIDIA's artificial intelligence chips H100 and GH200 and will be placed by March 2025."

Team Green is ramping up its embrace of the Indian data center market, as US sanctions have made it difficult to conduct business with enterprise customers in nearby Chinese territories. Reuters states that Gupta's firm (Yotta) is "part of Indian billionaire Niranjan Hiranandani's real estate group, (in turn) a partner firm for NVIDIA in India and runs three data centre campuses, in Mumbai, Gujarat and near New Delhi." Microsoft, Google, and Amazon are investing heavily in cloud and data centers situated in India. Shankar Trivedi, an NVIDIA executive, recently attended the Vibrant Gujarat Global Summit—the article's reporter conducted a brief interview with him. Trivedi stated that Yotta is targeting a March 2024 start for a new NVIDIA-powered AI data center located in the region's tech hub: Gujarat International Finance Tec-City.

AWS and NVIDIA Partner to Deliver 65 ExaFLOP AI Supercomputer, Other Solutions

Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company (NASDAQ: AMZN), and NVIDIA (NASDAQ: NVDA) today announced an expansion of their strategic collaboration to deliver the most-advanced infrastructure, software and services to power customers' generative artificial intelligence (AI) innovations. The companies will bring together the best of NVIDIA and AWS technologies—from NVIDIA's newest multi-node systems featuring next-generation GPUs, CPUs and AI software, to AWS Nitro System advanced virtualization and security, Elastic Fabric Adapter (EFA) interconnect, and UltraCluster scalability—that are ideal for training foundation models and building generative AI applications.

The expanded collaboration builds on a longstanding relationship that has fueled the generative AI era by offering early machine learning (ML) pioneers the compute performance required to advance the state-of-the-art in these technologies.

AMI to Enable Arm Ecosystem with Arm SystemReady SR-SIE Certified UEFI and BMC Firmware on the NVIDIA GH200

AMI is pleased to announce that it has become one of the first Independent Firmware Vendors (IFV) to receive the Arm SystemReady SR v2.4 with Security Interface Extension (SIE) v1.2 certificate for the NVIDIA GH200 P4352 Reference Platform with AMI's Aptio V System Firmware solution. This marks another noteworthy achievement for AMI's solutions as they continue to enable Arm SystemReady SR certificates on NVIDIA GH200-based platforms. "The certification allows them to bet on a wide range of software applications, infrastructure solutions, firmware, and even entire operating systems with drivers that may have never been run on our latest silicon before, with the confidence that it 'just works,'" says Ian Finder, Principal Product Lead, Grace at NVIDIA.

As the leading UEFI and BMC firmware provider for the Arm and x86 ecosystem, AMI recognizes the significance of the Arm SystemReady certification program, ensuring that Arm-based systems and solutions "just work" out of the box with standard operating systems, hypervisors, and software. AMI is focused on delivering interoperable, scalable, and secure foundational firmware solutions to the Arm ecosystem to reduce development and maintenance costs while enhancing reliability and hardware support.