Tuesday, March 21st 2023

NVIDIA Grace CPU Paves Fast Lane to Energy-Efficient Computing for Every Data Center

In tests of real workloads, the NVIDIA Grace CPU Superchip scored 2x performance gains over x86 processors at the same power envelope across major data center CPU applications. That opens up a whole new set of opportunities. It means data centers can handle twice as much peak traffic. They can slash their power bills by as much as half. They can pack more punch into the confined spaces at the edge of their networks - or any combination of the above.

Data center managers need these options to thrive in today's energy-efficient era. Moore's law is effectively dead. Physics no longer lets engineers pack more transistors in the same space at the same power. That's why new x86 CPUs typically offer gains over prior generations of less than 30%. It's also why a growing number of data centers are power capped. With the added threat of global warming, data centers don't have the luxury of expanding their power, but they still need to respond to the growing demands for computing.
Wanted: Same Power, More Performance
Compute demand is growing 10% a year in the U.S. and will double between 2022 and 2030, according to a McKinsey study.

"Pressure to make data centers sustainable is therefore high, and some regulators and governments are imposing sustainability standards on newly built data centers," it said.

With the end of Moore's law, the data center's progress in computing efficiency has stalled, according to a survey that McKinsey cited (see chart below).
In today's environment, the 2x gains NVIDIA Grace offers are the eye-popping equivalent of a multi-generational leap, and exactly what today's data center executives require.
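To see why a 2x jump reads as "multi-generational," consider a back-of-the-envelope sketch using the article's own figure of under-30% gains per recent x86 generation (the 30% rate is the only assumption here):

```python
import math

# Per-generation gain the article cites for recent x86 CPUs (<= 30%).
per_gen_gain = 1.30

# How many such generations does a single 2x jump correspond to?
# Solve per_gen_gain ** n == 2 for n.
generations = math.log(2) / math.log(per_gen_gain)
print(f"2x at ~30%/gen is worth about {generations:.1f} generations")
```

At 30% per generation it takes roughly two and a half to three generations to compound to 2x, which matches the article's later point that data centers "usually have to wait two or more CPU generations" for comparable gains.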

Zac Smith - the head of edge infrastructure at Equinix, a global service provider that manages more than 240 data centers - articulated these needs in an article about energy-efficient computing.

"The performance you get for the carbon impact you have is what we need to drive toward," he said.

"We have 10,000 customers counting on us for help with this journey. They demand more data and more intelligence, often with AI, and they want it in a sustainable way," he added.

A Trio of CPU Innovations
The Grace CPU delivers that efficient performance thanks to three innovations.

It uses an ultra-fast fabric to connect 72 Arm Neoverse V2 cores in a single die that sports 3.2 terabytes per second of fabric bisection bandwidth, a standard measure of throughput. Then it connects two of those dies in a superchip package with the NVIDIA NVLink-C2C interconnect, which delivers 900 GB/s of bandwidth.

Finally, it's the first data center CPU to use server-class LPDDR5X memory. That provides up to 50% more memory bandwidth at similar cost but one-eighth the power of typical server memory. And its compact size enables 2x the density of typical card-based memory designs.
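Taken together, the two memory claims compound. A quick illustrative calculation (using only the ratios quoted above, not measured figures) shows what "50% more bandwidth at one-eighth the power" means for bandwidth per watt:

```python
# Illustrative arithmetic based on the ratios quoted in the article:
# ~1.5x the bandwidth of typical server memory at ~1/8 the power.
bandwidth_ratio = 1.5   # 50% more memory bandwidth
power_ratio = 1 / 8     # one-eighth the power

# Bandwidth per watt improves by bandwidth gained divided by power spent.
bw_per_watt_gain = bandwidth_ratio / power_ratio
print(f"~{bw_per_watt_gain:.0f}x more memory bandwidth per watt")
```

That works out to roughly an order of magnitude more memory bandwidth per watt, which is where much of the superchip's efficiency headroom comes from.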
The First Results Are In
NVIDIA engineers are running real data center workloads on Grace today.

They found that compared to the leading x86 CPUs in data centers using the same power footprint, Grace is:
  • 2.3x faster for microservices,
  • 2x faster for memory-intensive data processing, and
  • 1.9x faster for computational fluid dynamics, used in many technical computing apps.
Data centers usually have to wait two or more CPU generations to get these benefits, summarized in the chart below.
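Because these comparisons hold power constant, each speedup translates directly into less energy per unit of work: a task that runs S times faster at the same wattage consumes roughly 1/S the energy. A minimal sketch of that arithmetic, using the speedups listed above (this is back-of-the-envelope reasoning, not NVIDIA's measurement methodology):

```python
# At equal power draw, a speedup of S means each task finishes in 1/S
# the time, so it consumes roughly 1/S the energy.
speedups = {
    "microservices": 2.3,
    "memory-intensive data processing": 2.0,
    "computational fluid dynamics": 1.9,
}

for workload, s in speedups.items():
    energy_fraction = 1 / s  # energy per task vs. the x86 baseline
    savings_pct = (1 - energy_fraction) * 100
    print(f"{workload}: ~{savings_pct:.0f}% less energy per task")
```

By this estimate the microservices result alone would roughly halve the energy bill for that workload, consistent with the "slash their power bills by as much as half" claim at the top of the article.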
Even before these results on working CPUs, users responded to the innovations in Grace.

The Los Alamos National Laboratory announced in May it will use Grace in Venado, a 10 exaflop AI supercomputer that will advance the lab's work in areas such as materials science and renewable energy. Meanwhile, data centers in Europe and Asia are evaluating Grace for their workloads.

NVIDIA Grace is sampling now with production in the second half of the year. ASUS, Atos, GIGABYTE, Hewlett Packard Enterprise, QCT, Supermicro, Wistron and ZT Systems are building servers that use it.
Source: NVIDIA

7 Comments on NVIDIA Grace CPU Paves Fast Lane to Energy-Efficient Computing for Every Data Center

#1
john_
While these are all marketing materials, I think Nvidia is on another level compared to the competition. Any competition: AMD, Intel, you name it. It's the modern Intel, in the sense of what Intel was 10-15 years ago.
Posted on Reply
#2
Shou Miko
I am still waiting on AMD, Intel and Nvidia to grow a pair and give us ~100W GPUs that can run everything on max at 1440p with at least 200 fps.

Because I don't care to hear "look, our new 800-watt graphics card is 2-3 times faster than the old generation" when it uses double the power :banghead:

If we need to lower everyone's power consumption, we don't need CPUs and graphics cards that require 1 kW.
Posted on Reply
#3
CyberCT
Monitoring a Kill-A-Watt with my PC plugged into it, the highest power consumption I saw on my OC'd 9700K 4090 build was 594 watts running Shadow of the Tomb Raider at 1080P with all graphics maxed out except DLSS (power consumption was actually a bit less with DLSS). VSYNC off of course. I've tested about a dozen games and this one seems to pull the most power. I'm in the USA so that's 120V power. Most games that I play, the total power consumed is in the 300-400 watt range.

With VSYNC on it's over 50 watts less. I usually play with VSYNC at downsampled 4K with DLSS, so it doesn't pull as much power. I play on a 1080p projector, and downsampling looks fantastic on a 100" screen because the projected image isn't quite as sharp as a true 4K TV.
Posted on Reply
#4
ZoneDymo
Shou Miko said:
"I am still waiting on AMD, Intel and Nvidia to grow a pair and give us ~100W GPUs that can run everything on max at 1440p with at least 200 fps.

Because I don't care to hear 'look, our new 800-watt graphics card is 2-3 times faster than the old generation' when it uses double the power :banghead:

If we need to lower everyone's power consumption, we don't need CPUs and graphics cards that require 1 kW."
Agreed. I would honestly love it if a cap was set for those products; let them actually innovate within constraints, which usually leads to the best results.
Posted on Reply
#5
Minus Infinity
So an ASIC performs more efficiently than a general-purpose CPU, right, got you. Amazing what Nvidia can invent.
Posted on Reply
#6
Jism
john_ said:
"While these are all marketing materials, I think Nvidia is on another level compared to the competition. Any competition: AMD, Intel, you name it. It's the modern Intel, in the sense of what Intel was 10-15 years ago."
They have the computational hardware and they are expanding it into every aspect of the business possible. This is where the big bucks are.

However, AMD has an ace up its sleeve with the MI300 series. Fully stacked with HBM and all that, they do have the horsepower.
Posted on Reply
#7
john_
Jism said:
"They have the computational hardware and they are expanding it into every aspect of the business possible. This is where the big bucks are.

However, AMD has an ace up its sleeve with the MI300 series. Fully stacked with HBM and all that, they do have the horsepower."
Nvidia is betting its future on AI; they are going all in, hardware and software. AMD just provides the hardware and expects others to take advantage of it. Nvidia is accelerating at a pace no one else can follow. Their only fear is probably that someone else will come up with something other than GPUs that performs many times better. Until then, they are the company that will enjoy the biggest growth in the coming years.

That's from an AMD fan who 5 years ago was predicting Intel and AMD would start eating Nvidia's lunch by taking over OEM orders for both CPUs and discrete graphics cards. Oh, I was so, so wrong.
Posted on Reply