
NVIDIA Grace CPU Paves Fast Lane to Energy-Efficient Computing for Every Data Center

GFreeman

News Editor
Staff member
In tests of real workloads, the NVIDIA Grace CPU Superchip delivered 2x the performance of x86 processors at the same power envelope across major data center CPU applications. That opens up a whole new set of opportunities. It means data centers can handle twice as much peak traffic. They can slash their power bills by as much as half. They can pack more punch into the confined spaces at the edge of their networks - or any combination of the above.
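For readers who want the arithmetic behind those options, here is a minimal sketch, assuming the 2x iso-power claim holds uniformly (a simplification of real data center behavior):

```python
# Back-of-the-envelope sketch (not NVIDIA's methodology) of how a 2x
# performance gain at the same power envelope becomes the options
# named above. Figures are normalized, illustrative assumptions.
x86_work_per_watt = 1.0
grace_work_per_watt = 2.0 * x86_work_per_watt  # the article's 2x claim

# Same power budget -> twice the peak capacity.
print(grace_work_per_watt / x86_work_per_watt)  # -> 2.0x peak traffic

# Same workload -> half the energy, so up to half the power bill.
print(x86_work_per_watt / grace_work_per_watt)  # -> 0.5x energy
```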

Data center managers need these options to thrive in today's energy-efficient era. Moore's law is effectively dead: physics no longer lets engineers pack more transistors into the same space at the same power. That's why new x86 CPUs typically offer gains of less than 30% over prior generations. It's also why a growing number of data centers are power capped. With the added threat of global warming, data centers don't have the luxury of expanding their power budgets, yet they still need to respond to growing demand for computing.



Wanted: Same Power, More Performance
Compute demand in the U.S. is growing 10% a year and will double over the eight years from 2022 to 2030, according to a McKinsey study.
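A quick sanity check of those two figures (the growth rate and horizon are the study's; compounding is the only assumption):

```python
# 10% annual growth compounded over the eight years from 2022 to 2030
# slightly more than doubles total demand.
growth_rate = 0.10
years = 8
print((1 + growth_rate) ** years)  # ~2.14, i.e. roughly 2x
```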

"Pressure to make data centers sustainable is therefore high, and some regulators and governments are imposing sustainability standards on newly built data centers," it said.

With the end of Moore's law, the data center's progress in computing efficiency has stalled, according to a survey that McKinsey cited (see chart below).



In this environment, the 2x gains NVIDIA Grace offers are the eye-popping equivalent of a multi-generational leap, meeting the requirements of today's data center executives.

Zac Smith - the head of edge infrastructure at Equinix, a global service provider that manages more than 240 data centers - articulated these needs in an article about energy-efficient computing.

"The performance you get for the carbon impact you have is what we need to drive toward," he said.

"We have 10,000 customers counting on us for help with this journey. They demand more data and more intelligence, often with AI, and they want it in a sustainable way," he added.

A Trio of CPU Innovations
The Grace CPU delivers that efficient performance thanks to three innovations.

It uses an ultra-fast fabric to connect 72 Arm Neoverse V2 cores in a single die with 3.2 terabytes per second of fabric bisection bandwidth, a standard measure of throughput. It then connects two of those dies in a superchip package with the NVIDIA NVLink-C2C interconnect, delivering 900 GB/s of bandwidth.
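To put that link speed in perspective, here is a rough sketch of moving a hypothetical working set between the two dies, assuming peak rates with no protocol overhead; the PCIe 5.0 x16 figure (~63 GB/s) is a comparison point, not from the article:

```python
# Minimal sketch: time to move a working set between the two Grace
# dies over NVLink-C2C at its 900 GB/s peak, versus a PCIe 5.0 x16
# link (~63 GB/s per direction). Peak rates only; real transfers see
# protocol overhead, so treat these as lower bounds.
working_set_gb = 450.0  # hypothetical dataset size

nvlink_c2c_gbps = 900.0
pcie5_x16_gbps = 63.0

print(f"NVLink-C2C: {working_set_gb / nvlink_c2c_gbps:.2f} s")  # ~0.50 s
print(f"PCIe 5 x16: {working_set_gb / pcie5_x16_gbps:.2f} s")   # ~7.14 s
```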

Finally, it's the first data center CPU to use server-class LPDDR5X memory. That provides up to 50% more memory bandwidth at similar cost but one-eighth the power of typical server memory. And its compact size enables 2x the density of typical card-based memory designs.
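Taken at face value, those two figures imply a large gain in bandwidth per watt. A minimal sketch of that arithmetic, using only the article's numbers:

```python
# The LPDDR5X claim in bandwidth-per-watt terms: 50% more bandwidth
# at one-eighth the power of typical server memory.
bandwidth_ratio = 1.5    # up to 50% more bandwidth
power_ratio = 1.0 / 8.0  # one-eighth the power

print(bandwidth_ratio / power_ratio)  # -> 12.0x bandwidth per watt
```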



The First Results Are In
NVIDIA engineers are running real data center workloads on Grace today.

They found that compared to the leading x86 CPUs in data centers using the same power footprint, Grace is:

  • 2.3x faster for microservices,
  • 2x faster for memory-intensive data processing,
  • and 1.9x faster for computational fluid dynamics, used in many technical computing apps.
Data centers usually have to wait two or more CPU generations to get these benefits, summarized in the chart below.
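A rough sketch of that "two or more generations" framing, taking the sub-30% per-generation x86 gain cited earlier as an upper bound:

```python
# If a typical x86 generation adds at most ~30% (per the article),
# how many generations does each measured speedup correspond to?
import math

per_gen_gain = 1.30  # upper bound cited earlier in the article
for name, speedup in [("microservices", 2.3),
                      ("data processing", 2.0),
                      ("CFD", 1.9)]:
    gens = math.log(speedup) / math.log(per_gen_gain)
    print(f"{name}: {speedup}x ~= {gens:.1f} generations")
# -> roughly 3.2, 2.6 and 2.4 generations respectively
```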



Even before these results arrived on working CPUs, users were responding to the innovations in Grace.

The Los Alamos National Laboratory announced in May it will use Grace in Venado, a 10 exaflop AI supercomputer that will advance the lab's work in areas such as materials science and renewable energy. Meanwhile, data centers in Europe and Asia are evaluating Grace for their workloads.

NVIDIA Grace is sampling now with production in the second half of the year. ASUS, Atos, GIGABYTE, Hewlett Packard Enterprise, QCT, Supermicro, Wistron and ZT Systems are building servers that use it.

View at TechPowerUp Main Site | Source
 
While all these are marketing materials, I think Nvidia is on another level compared to the competition. Any competition. AMD, Intel, you name it. It is the modern Intel, meaning what Intel was 10-15 years ago.
 
I am still waiting on AMD, Intel and Nvidia to grow a pair and give us ~100 W GPUs that can run everything on max at 1440p with at least 200 fps.

Because I don't care much to see "oh look, our new 800 W graphics card is 2-3 times faster than our old generation but uses double the power" :banghead:

If we need to lower everyone's power consumption, we don't need CPUs and graphics cards that require 1 kW.
 
Monitoring a Kill-A-Watt with my PC plugged into it, the highest power consumption I saw on my OC'd 9700K + 4090 build was 594 watts, running Shadow of the Tomb Raider at 1080p with all graphics maxed out except DLSS (power consumption was actually a bit lower with DLSS). VSYNC off, of course. I've tested about a dozen games and this one seems to pull the most power. I'm in the USA, so that's 120 V power. In most games I play, total power consumed is in the 300-400 watt range.

With VSYNC on, it's over 50 watts less. I use VSYNC at downsampled 4K though, with DLSS, so it doesn't pull as much power. I play on a 1080p projector, and downsampling looks fantastic on a 100" screen because the projected image isn't quite as sharp as a true 4K TV.
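For anyone curious what those wall readings mean on the power bill, here's a rough sketch; the gaming hours and $/kWh rate are my assumptions, not measurements:

```python
# Turn the Kill-A-Watt readings above into a monthly energy cost.
peak_watts = 594         # Shadow of the Tomb Raider, VSYNC off
typical_watts = 350      # middle of the 300-400 W range quoted
hours_per_day = 2        # assumed gaming time
rate_usd_per_kwh = 0.15  # assumed US residential rate

for label, watts in [("peak", peak_watts), ("typical", typical_watts)]:
    kwh_month = watts / 1000 * hours_per_day * 30
    print(f"{label}: {kwh_month:.1f} kWh/month, "
          f"${kwh_month * rate_usd_per_kwh:.2f}")
```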
 
I am still waiting on AMD, Intel and Nvidia to grow a pair and give us ~100 W GPUs that can run everything on max at 1440p with at least 200 fps.

Because I don't care much to see "oh look, our new 800 W graphics card is 2-3 times faster than our old generation but uses double the power" :banghead:

If we need to lower everyone's power consumption, we don't need CPUs and graphics cards that require 1 kW.

Agreed. I would honestly love it if some cap were set for those products; let them actually innovate within constraints. That usually leads to the best results.
 
So an ASIC performs more efficiently than a general-purpose CPU, right, got you. Amazing what Nvidia can invent.
 
While all these are marketing materials, I think Nvidia is on another level compared to the competition. Any competition. AMD, Intel, you name it. It is the modern Intel, meaning what Intel was 10-15 years ago.

They have the computational hardware and they are expanding it into every aspect of the business possible. This is where the big bucks are.

However, AMD has an ace up its sleeve with the MI300 series. Fully stacked with HBM and all that, they do have the horsepower.
 
They have the computational hardware and they are expanding it into every aspect of the business possible. This is where the big bucks are.

However, AMD has an ace up its sleeve with the MI300 series. Fully stacked with HBM and all that, they do have the horsepower.
Nvidia is betting its future on AI; they are going all in, hardware and software. AMD just provides the hardware and expects others to take advantage of it. Nvidia is accelerating at a pace that no one else can follow. Their only fear is probably that someone will come up with something different from GPUs that performs many times better. Until then, they are the company that will enjoy the biggest growth in the coming years.

That's from an AMD fan who 5 years ago was predicting Intel and AMD would start eating Nvidia's lunch by taking over OEM orders for both CPUs and discrete graphics cards. Oh, I was so so so wrong.
 