News Posts matching #Grace

NVIDIA Proposes that AI Will Accelerate Climate Research Innovation

AI and accelerated computing will help climate researchers deliver the miracles needed for breakthroughs in climate research, NVIDIA founder and CEO Jensen Huang said during a keynote Monday at the Berlin Summit for the Earth Virtualization Engines initiative. "Richard Feynman once said 'what I can't create, I don't understand,' and that's the reason why climate modeling is so important," Huang told 180 attendees at the Harnack House in Berlin, a storied gathering place for the region's scientific and research community. "And so the work that you do is vitally important to policymakers, to researchers, to the industry," he added.

To advance this work, the Berlin Summit brings together participants from around the globe to harness AI and high-performance computing for climate prediction. In his talk, Huang outlined three miracles that will have to happen for climate researchers to achieve their goals, and touched on NVIDIA's own efforts to collaborate with climate researchers and policymakers with its Earth-2 efforts. The first miracle required will be to simulate the climate fast enough, and with a high enough resolution - on the order of just a couple of square kilometers.
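For a sense of scale, here is a rough back-of-the-envelope calculation (ours, not NVIDIA's) of how many horizontal grid cells a global model needs at roughly 2 km resolution; the Earth surface-area figure and the square-cell assumption are approximations.

```python
# Rough illustration only: grid-cell count for a ~2 km global climate model.
EARTH_SURFACE_KM2 = 510e6   # approximate surface area of Earth (assumption)
CELL_EDGE_KM = 2.0          # resolution target mentioned in the keynote

cells_per_layer = EARTH_SURFACE_KM2 / (CELL_EDGE_KM ** 2)
print(f"~{cells_per_layer / 1e6:.0f} million cells per vertical layer")
# -> roughly 128 million cells, before accounting for dozens of vertical levels
```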

NVIDIA Ada Lovelace Successor Set for 2025

According to the NVIDIA roadmap spotted in the recently published MLCommons training results, the Ada Lovelace successor is set to arrive in 2025. The roadmap also reveals the schedule for the Hopper Next GPU and Grace Next CPU, as well as the BlueField-4 DPU.

While the roadmap does not provide many details, it does give us a general idea of when to expect NVIDIA's next GeForce architecture. Since NVIDIA usually launches a new GeForce architecture every two years or so, the latest schedule suggests a small delay, at least if NVIDIA plans to launch Ada Lovelace Next in early 2025 rather than later. NVIDIA Pascal launched in May 2016, Turing in September 2018, Ampere in May 2020, and Ada Lovelace in October 2022.
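As a quick sanity check of that cadence, the sketch below computes the gaps between the launch dates cited above; the early-2025 date for Ada Lovelace Next is only an assumption taken from the paragraph, not a confirmed launch.

```python
# Illustrative cadence check using the launch dates quoted in the article.
from datetime import date

launches = {
    "Pascal": date(2016, 5, 1),
    "Turing": date(2018, 9, 1),
    "Ampere": date(2020, 5, 1),
    "Ada Lovelace": date(2022, 10, 1),
    "Ada Lovelace Next": date(2025, 1, 1),  # assumed early-2025 launch
}

names = list(launches)
for prev, nxt in zip(names, names[1:]):
    gap = (launches[nxt].year - launches[prev].year) * 12 \
        + (launches[nxt].month - launches[prev].month)
    print(f"{prev} -> {nxt}: ~{gap} months")
# Gaps come out to roughly 28, 20, 29 and 27 months.
```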

Gigabyte Shows AI/HPC and Data Center Servers at Computex

GIGABYTE is exhibiting cutting-edge technologies and solutions at COMPUTEX 2023, presenting the theme "Future of COMPUTING". From May 30th to June 2nd, GIGABYTE is showcasing over 110 products that are driving future industry transformation, demonstrating the emerging trends of AI technology and sustainability, on the 1st floor, Taipei Nangang Exhibition Center, Hall 1.

GIGABYTE and its subsidiary, Giga Computing, are introducing unparalleled AI/HPC server lineups, leading the era of exascale supercomputing. One of the stars is the industry's first NVIDIA-certified HGX H100 8-GPU SXM5 server, the G593-SD0. Equipped with 4th Gen Intel Xeon Scalable processors and GIGABYTE's industry-leading thermal design, the G593-SD0 can handle extremely intensive workloads such as generative AI and deep learning model training within a density-optimized 5U server chassis, making it a top choice for data centers aiming for AI breakthroughs. In addition, GIGABYTE is debuting AI computing servers supporting the NVIDIA Grace CPU and Grace Hopper Superchips. The high-density servers are accelerated with NVLink-C2C technology on the Arm Neoverse V2 platform, setting a new standard for AI/HPC computing efficiency and bandwidth.

Giga Computing Goes Big with Green Computing and HPC and AI at Computex

Giga Computing, a subsidiary of GIGABYTE and an industry leader in high-performance servers, server motherboards, and workstations, today announced a major presence at Computex 2023, held May 30 to June 2. The GIGABYTE booth showcases more than fifty servers spanning GIGABYTE's comprehensive enterprise portfolio, including green computing solutions that feature liquid-cooled servers and immersion cooling technology. The international computer expo attracts over 100,000 visitors annually, and GIGABYTE will be ready with a spacious, attractive booth to draw in curious minds, along with plenty of knowledgeable staff to answer questions about how its products are being utilized today.

The slogan for Computex 2023 is "Together we create." And just like parts that make a whole, GIGABYTE's slogan of "Future of COMPUTING" embodies all of its distinct computing products, from consumer to enterprise applications. For the enterprise business unit, there will be sections with the themes "Win Big with AI HPC," "Advance Data Centers," and "Embrace Sustainability." Each theme will show off cutting-edge technologies that span x86 and Arm platforms, with great attention placed on solutions that address the challenges of more powerful computing.

NVIDIA Grace Drives Wave of New Energy-Efficient Arm Supercomputers

NVIDIA today announced a supercomputer built on the NVIDIA Grace CPU Superchip, adding to a wave of new energy-efficient supercomputers based on the Arm Neoverse platform. The Isambard 3 supercomputer, to be based at the Bristol & Bath Science Park in the U.K., will feature 384 Arm-based NVIDIA Grace CPU Superchips to power medical and scientific research, and is expected to deliver 6x the performance and energy efficiency of Isambard 2, placing it among Europe's most energy-efficient systems.

It will achieve about 2.7 petaflops of FP64 peak performance and consume less than 270 kilowatts of power, ranking it among the world's three greenest non-accelerated supercomputers. The project is led by the University of Bristol as part of the GW4 Alliance research consortium, together with the universities of Bath, Cardiff and Exeter.
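Those two figures imply an efficiency number worth spelling out; the sketch below divides the article's peak FP64 rating by the stated power ceiling (peak values, not measured Green500 results).

```python
# Efficiency implied by the quoted figures: ~2.7 PFLOPS FP64 at under 270 kW.
peak_fp64_flops = 2.7e15   # from the article
power_watts = 270e3        # stated upper bound on power draw

gflops_per_watt = peak_fp64_flops / power_watts / 1e9
print(f"~{gflops_per_watt:.0f} GFLOPS per watt (peak, not measured)")  # ~10
```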

Tenstorrent Tech Talk Reveals Hints of AMD's "Zen 5" Performance

Tenstorrent hosted its "Nerds Talking to Nerds About RISC-V" event this week in India, where a dozen high-profile industry experts gave technical talks and sat on panels covering every facet of the RISC-V landscape and its future. Among them are some familiar names to anyone who has been keeping up with the CPU industry: Raja Koduri of his own AI generative-gaming startup, Lars Bergstrom of Google, Naveed Sherwani of Rapid Silicon, and of course Jim Keller, CEO of Tenstorrent itself. On the first day of the event, a mere 42 minutes into the YouTube live stream, Jim Keller gave an overview of Tenstorrent's latest silicon design goals during his keynote talk. He presented a slide showing a wide comparison of various competitors' integer performance in SPEC CPU 2017 INT, in which a raw performance value for AMD's as-yet-unreleased "Zen 5" is listed, along with the operating frequency and TDP of the supposed sample.

The slide shows all of AMD's recent architectures, starting with the original "Zen" (Naples), and the improvements each successive generation has made. Also shown are one of Intel's latest "Sapphire Rapids" Xeons, a projected performance point for NVIDIA's in-house CPU architecture "Grace," Amazon's "Graviton" series with a projected result for "Graviton 3," and Tenstorrent's own 8-wide RISC-V architecture as it currently performs in the company's labs. While all of these are fascinating results in their own right, we will zero in on the "Zen 4" (Genoa) and "Zen 5" results. The frequency and TDP charts show "Zen 4" clocked at 3.8 GHz, equal to the Xeon Platinum 8480+ (which itself boosts to 3.8 GHz in lightly threaded workloads such as this), so it is likely a variant of the EPYC 9354 or 9454 with its TDP configured at the minimum 240 W. The unnamed "Zen 5" CPU is shown running at around 4.0 GHz with the same 240 W TDP, a tiny 5% bump in core clock, while delivering a substantial 30% jump in performance. The most interesting detail is that, unlike the "Grace" and "Graviton 3" entries, the "Zen 5" result is nowhere listed as a projection.
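To make the clock-versus-performance gap concrete, the sketch below divides the quoted ~30% integer gain by the ~5% frequency bump, which suggests most of the uplift would come from per-clock throughput; the inputs are simply the values read from the slide and may not be final.

```python
# Split the quoted "Zen 5" gain into clock and per-clock components.
zen4_clock_ghz = 3.8   # read from the slide
zen5_clock_ghz = 4.0   # read from the slide
perf_gain = 1.30       # ~30% higher SPEC CPU 2017 INT throughput

clock_gain = zen5_clock_ghz / zen4_clock_ghz
per_clock_gain = perf_gain / clock_gain
print(f"clock: +{(clock_gain - 1) * 100:.1f}%")          # ~+5.3%
print(f"per-clock: +{(per_clock_gain - 1) * 100:.1f}%")  # ~+23.5%
```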

NVIDIA Grace CPU Paves Fast Lane to Energy-Efficient Computing for Every Data Center

In tests of real workloads, the NVIDIA Grace CPU Superchip scored 2x performance gains over x86 processors at the same power envelope across major data center CPU applications. That opens up a whole new set of opportunities. It means data centers can handle twice as much peak traffic. They can slash their power bills by as much as half. They can pack more punch into the confined spaces at the edge of their networks - or any combination of the above.
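Those options are two readings of the same ratio; the sketch below works through the arithmetic with normalized, purely illustrative numbers (only the claimed 2x-at-equal-power figure comes from NVIDIA).

```python
# Two ways to spend a claimed 2x performance gain at the same power.
x86_throughput = 1.0    # normalized work per second at a fixed power budget
grace_throughput = 2.0  # claimed 2x at the same power envelope (illustrative)

# Option A: keep the power budget and serve twice the peak traffic.
print("peak traffic headroom:", grace_throughput / x86_throughput, "x")
# Option B: serve the same traffic and halve the energy per unit of work.
print("relative energy per task:", x86_throughput / grace_throughput)
```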

Data center managers need these options to thrive in today's energy-efficient era. Moore's Law is effectively dead: physics no longer lets engineers pack more transistors into the same space at the same power. That's why new x86 CPUs typically offer gains of less than 30% over prior generations. It's also why a growing number of data centers are power capped. With the added threat of global warming, data centers don't have the luxury of expanding their power budgets, yet they still need to respond to growing demand for computing.

NVIDIA Announces New System for Accelerated Quantum-Classical Computing

NVIDIA today announced a new system built with Quantum Machines that provides a revolutionary new architecture for researchers working in high-performance and low-latency quantum-classical computing. The world's first GPU-accelerated quantum computing system, the NVIDIA DGX Quantum brings together the world's most powerful accelerated computing platform - enabled by the NVIDIA Grace Hopper Superchip and CUDA Quantum open-source programming model - with the world's most advanced quantum control platform, OPX, by Quantum Machines.

The combination allows researchers to build extraordinarily powerful applications that combine quantum computing with state-of-the-art classical computing, enabling calibration, control, quantum error correction and hybrid algorithms. "Quantum-accelerated supercomputing has the potential to reshape science and industry with capabilities that can serve humanity in enormous ways," said Tim Costa, director of HPC and quantum at NVIDIA. "NVIDIA DGX Quantum will enable researchers to push the boundaries of quantum-classical computing."

Arm Announces Next-Generation Neoverse Cores for High Performance Computing

The demand for data is insatiable, from 5G to the cloud to smart cities. As a society we want more autonomy, information to fuel our decisions and habits, and connection - to people, stories, and experiences.

To address these demands, the cloud infrastructure of tomorrow will need to handle the coming data explosion and the effective processing of ever more complex workloads, all while increasing power efficiency and minimizing carbon footprint. It's why the industry is increasingly looking to the performance, power efficiency, specialized processing and workload acceleration enabled by Arm Neoverse to redefine and transform the world's computing infrastructure.

NVIDIA Grace CPU Specs Remind Us Why Intel Never Shared x86 with the Green Team

NVIDIA designed the Grace CPU, a processor in the classical sense, to replace the Intel Xeon or AMD EPYC processors it had to cram into its pre-built HPC compute servers for serial-processing roles, mainly because the half-a-dozen GPU HPC processors in each server need to be interconnected by a CPU. The company studied the CPU-level limitations and bottlenecks, not just with I/O but also with the machine architecture, and realized its compute servers need a CPU purpose-built for the role, with an architecture heavily optimized for NVIDIA's APIs. Thus, the NVIDIA Grace CPU was born.

This is NVIDIA's first outing with a CPU whose processing footprint rivals server processors from Intel and AMD. Built on the TSMC N4 (4 nm EUV) silicon fabrication process, it is a monolithic chip deployed either on its own or paired with an H100 HPC processor on a single board that NVIDIA calls a "Superchip." A board with a Grace and an H100 makes up a "Grace Hopper" Superchip, while a board with two Grace CPUs makes up a Grace CPU Superchip. Each Grace CPU contains a 900 GB/s coherent switching fabric, which has seven times the bandwidth of PCI-Express 5.0 x16. This is key to connecting the companion H100 processor, or neighboring Superchips on the node, with coherent memory access.
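The "seven times" figure checks out against published PCIe 5.0 numbers; the sketch below assumes 32 GT/s per lane with 128b/130b encoding, counts both directions of an x16 link, and treats the 900 GB/s NVLink-C2C figure as aggregate bandwidth.

```python
# Rough check of the "7x PCIe 5.0 x16" bandwidth claim (assumptions noted above).
pcie5_lane_gbs = 32 * (128 / 130) / 8          # ~3.94 GB/s per lane, per direction
pcie5_x16_total_gbs = pcie5_lane_gbs * 16 * 2  # x16 link, both directions
nvlink_c2c_gbs = 900.0                         # from the article

print(f"PCIe 5.0 x16 aggregate: ~{pcie5_x16_total_gbs:.0f} GB/s")             # ~126 GB/s
print(f"NVLink-C2C advantage: ~{nvlink_c2c_gbs / pcie5_x16_total_gbs:.1f}x")  # ~7.1x
```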

NVIDIA Claims Grace CPU Superchip is 2X Faster Than Intel Ice Lake

When NVIDIA announced its Grace CPU Superchip, the company officially showed its effort to create an HPC-oriented processor to compete with Intel and AMD. The Grace CPU Superchip combines two Grace CPU modules that use NVLink-C2C technology to deliver 144 Arm v9 cores and 1 TB/s of memory bandwidth. Each core is an Arm Neoverse N2 "Perseus" design, configured to achieve the highest throughput and bandwidth. As far as performance is concerned, the only detail NVIDIA provides on its website is an estimated SPECrate 2017_int_base score of over 740. Thanks to the colleagues over at Tom's Hardware, we have another performance figure to look at.

NVIDIA has made a slide comparing Grace with Intel's Ice Lake server processors. One Grace CPU Superchip was compared to two Xeon Platinum 8360Y Ice Lake CPUs configured in a dual-socket server node. The Grace CPU Superchip outperformed the Ice Lake configuration by a factor of two and provided 2.3 times the efficiency in a WRF simulation. This HPC application is CPU-bound, allowing the new Grace CPU to show off, thanks to Arm v9 Neoverse N2 cores that pair efficiency with outstanding performance. NVIDIA also made a graph showcasing all the HPC applications running on Arm today, with many more to come, which you can see below. Remember that this information is provided by NVIDIA, so we have to wait for the 2023 launch to see it in action.
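Taken together, the 2x performance and 2.3x efficiency claims imply the Grace node also drew slightly less power than the dual-socket Ice Lake box; a minimal sketch of that back-calculation, with normalized units since the slide gives no absolute wattages:

```python
# Back out the implied power ratio from NVIDIA's quoted perf and efficiency.
perf_ratio = 2.0         # Grace vs. dual Xeon 8360Y, from the slide
efficiency_ratio = 2.3   # performance per watt, from the slide

power_ratio = perf_ratio / efficiency_ratio
print(f"implied Grace power draw: ~{power_ratio:.2f}x the Ice Lake node")  # ~0.87x
```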

NVIDIA Unveils Grace CPU Superchip with 144 Cores and 1 TB/s Bandwidth

NVIDIA has today announced its Grace CPU Superchip, a monstrous design focused on heavy HPC and AI processing workloads. Previously, team green teased an in-house developed CPU that is supposed to go into servers and create an entirely new segment for the company. Today, we got a more detailed look at the plan with the Grace CPU Superchip. The Superchip package combines two Grace processors, each containing 72 cores. These cores are based on the Arm v9 instruction set architecture, and the two CPUs add up to 144 cores in the Superchip module. The cores are surrounded by a yet-unspecified amount of LPDDR5X memory with ECC, delivering 1 TB/s of total bandwidth.

The NVIDIA Grace CPU Superchip uses the NVLink-C2C cache-coherent interconnect, which delivers 900 GB/s of bandwidth, seven times more than the PCIe 5.0 protocol. The company targets a two-fold performance-per-watt improvement over today's CPUs and wants to bring efficiency and performance together. We have some preliminary benchmark information provided by NVIDIA. In the SPECrate2017_int_base integer benchmark, the Grace CPU Superchip scores over 740 points, which is only a simulated result for now. This means the performance target is not finalized yet, teasing a possibly higher number in the future. The company expects to ship the Grace CPU Superchip in the first half of 2023, with an already supported software ecosystem, including the NVIDIA RTX, HPC, NVIDIA AI, and NVIDIA Omniverse software stacks and platforms.
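For a per-core view of those headline figures, the sketch below simply divides the totals by the 144 cores; the even split of memory bandwidth is our simplification, and the SPECrate score is NVIDIA's own estimate.

```python
# Per-core slices of the announced totals (even split assumed).
cores = 144
memory_bandwidth_gbs = 1000.0   # 1 TB/s total LPDDR5X bandwidth
specrate_int_est = 740          # estimated SPECrate2017_int_base

print(f"~{memory_bandwidth_gbs / cores:.1f} GB/s memory bandwidth per core")  # ~6.9
print(f"~{specrate_int_est / cores:.1f} SPECrate int points per core")        # ~5.1
```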