News Posts matching #Grace CPU


Supermicro Starts Shipments of NVIDIA GH200 Grace Hopper Superchip-Based Servers

Supermicro, Inc., a Total IT Solution manufacturer for AI, Cloud, Storage, and 5G/Edge, is announcing one of the industry's broadest portfolios of new GPU systems based on the NVIDIA reference architecture, featuring the latest NVIDIA GH200 Grace Hopper Superchip and NVIDIA Grace CPU Superchip. The new modular architecture is designed to standardize AI infrastructure and accelerated computing in compact 1U and 2U form factors while providing flexibility and expansion headroom for current and future GPUs, DPUs, and CPUs. Supermicro's advanced liquid-cooling technology enables very high-density configurations, such as a 1U 2-node configuration with two NVIDIA GH200 Grace Hopper Superchips integrated with a high-speed interconnect. Supermicro can deliver thousands of rack-scale AI servers per month from facilities worldwide and ensures plug-and-play compatibility.

"Supermicro is a recognized leader in driving today's AI revolution, transforming data centers to deliver the promise of AI to many workloads," said Charles Liang, president and CEO of Supermicro. "It is crucial for us to bring systems that are highly modular, scalable, and universal for rapidly evolving AI technologies. Supermicro's NVIDIA MGX-based solutions show that our building-block strategy enables us to bring the latest systems to market quickly and are the most workload-optimized in the industry. By collaborating with NVIDIA, we are helping accelerate time to market for enterprises to develop new AI-enabled applications, simplifying deployment and reducing environmental impact. The range of new servers incorporates the latest industry technology optimized for AI, including NVIDIA GH200 Grace Hopper Superchips, BlueField, and PCIe 5.0 EDSFF slots."

Gigabyte Shows AI/HPC and Data Center Servers at Computex

GIGABYTE is exhibiting cutting-edge technologies and solutions at COMPUTEX 2023 under the theme "Future of COMPUTING". From May 30th to June 2nd, GIGABYTE is showcasing over 110 products that are driving future industry transformation and demonstrating the emerging trends of AI technology and sustainability, on the 1st floor of Taipei Nangang Exhibition Center, Hall 1.

GIGABYTE and its subsidiary, Giga Computing, are introducing unparalleled AI/HPC server lineups, leading the era of exascale supercomputing. One of the stars is the industry's first NVIDIA-certified HGX H100 8-GPU SXM5 server, the G593-SD0. Equipped with 4th Gen Intel Xeon Scalable processors and GIGABYTE's industry-leading thermal design, the G593-SD0 can handle extremely intensive workloads such as generative AI and deep-learning model training within a density-optimized 5U server chassis, making it a top choice for data centers aiming for AI breakthroughs. In addition, GIGABYTE is debuting AI computing servers supporting the NVIDIA Grace CPU and Grace Hopper Superchips. The high-density servers are built on the Arm Neoverse V2 platform and accelerated with NVLink-C2C technology, setting a new standard for AI/HPC computing efficiency and bandwidth.

Giga Computing Goes Big with Green Computing and HPC and AI at Computex

Giga Computing, a subsidiary of GIGABYTE and an industry leader in high-performance servers, server motherboards, and workstations, today announced a major presence at Computex 2023, held May 30 to June 2, with a GIGABYTE booth showcasing more than fifty servers that span GIGABYTE's comprehensive enterprise portfolio, including green-computing solutions that feature liquid-cooled servers and immersion cooling technology. The international computer expo attracts over 100,000 visitors annually, and GIGABYTE will be ready with a spacious, attractive booth to draw in curious minds, along with plenty of knowledgeable staff to answer questions about how its products are being used today.

The slogan for Computex 2023 is "Together we create." And just like parts that make a whole, GIGABYTE's own slogan, "Future of COMPUTING," embodies all of its distinct computing products, from consumer to enterprise applications. For the enterprise business unit, there will be sections with the themes "Win Big with AI HPC," "Advance Data Centers," and "Embrace Sustainability." Each theme will show off cutting-edge technologies that span x86 and Arm platforms, with particular attention to solutions that address the challenges that come with more powerful computing.

NVIDIA Grace Drives Wave of New Energy-Efficient Arm Supercomputers

NVIDIA today announced a supercomputer built on the NVIDIA Grace CPU Superchip, adding to a wave of new energy-efficient supercomputers based on the Arm Neoverse platform. The Isambard 3 supercomputer to be based at the Bristol & Bath Science Park, in the U.K., will feature 384 Arm-based NVIDIA Grace CPU Superchips to power medical and scientific research, and is expected to deliver 6x the performance and energy efficiency of Isambard 2, placing it among Europe's most energy-efficient systems.

It will achieve about 2.7 petaflops of FP64 peak performance and consume less than 270 kilowatts of power, ranking it among the world's three greenest non-accelerated supercomputers. The project is being led by the University of Bristol, as part of the research consortium the GW4 Alliance, together with the universities of Bath, Cardiff and Exeter.
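As a rough back-of-the-envelope check of those figures (assuming the 2.7 petaflops and 270 kW describe the same 384-Superchip machine, which the announcement implies but does not state outright), the quoted numbers work out to roughly 10 FP64 gigaflops per watt and about 7 teraflops per Grace CPU Superchip:

```python
# Back-of-the-envelope check of the Isambard 3 figures quoted above.
# Assumes the 2.7 PFLOPS FP64 peak and the sub-270 kW power draw describe the
# same 384-Superchip partition; these are press-release figures, not measurements.

peak_fp64_flops = 2.7e15   # 2.7 petaflops
power_watts = 270e3        # 270 kilowatts
superchips = 384

flops_per_watt = peak_fp64_flops / power_watts
flops_per_superchip = peak_fp64_flops / superchips

print(f"Efficiency: {flops_per_watt / 1e9:.1f} GFLOPS/W (FP64 peak)")
print(f"Per Superchip: {flops_per_superchip / 1e12:.2f} TFLOPS (FP64 peak)")
# -> roughly 10 GFLOPS/W and ~7 TFLOPS per Grace CPU Superchip
```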

NVIDIA Grace CPU Paves Fast Lane to Energy-Efficient Computing for Every Data Center

In tests of real workloads, the NVIDIA Grace CPU Superchip scored 2x performance gains over x86 processors at the same power envelope across major data center CPU applications. That opens up a whole new set of opportunities. It means data centers can handle twice as much peak traffic. They can slash their power bills by as much as half. They can pack more punch into the confined spaces at the edge of their networks - or any combination of the above.
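Here is a minimal sketch of that trade-off, using NVIDIA's claimed 2x performance-per-watt figure and made-up power budgets purely for illustration:

```python
# Illustrative only: how a 2x performance-per-watt gain can be "spent" under a
# fixed data-center power cap. The 2x figure is NVIDIA's claim for the workloads
# cited above; the budgets below are made-up examples.

def capacity_at_power_cap(power_cap_kw: float, perf_per_kw: float) -> float:
    """Peak throughput (arbitrary units) achievable under a power cap."""
    return power_cap_kw * perf_per_kw

def power_for_target(target_perf: float, perf_per_kw: float) -> float:
    """Power (kW) needed to hit a fixed throughput target."""
    return target_perf / perf_per_kw

baseline_perf_per_kw = 1.0                 # x86 baseline, normalized
grace_perf_per_kw = 2.0 * baseline_perf_per_kw

# Option 1: same 500 kW budget -> twice the peak traffic
print(capacity_at_power_cap(500, grace_perf_per_kw) /
      capacity_at_power_cap(500, baseline_perf_per_kw))   # 2.0

# Option 2: same throughput target of 500 units -> half the power bill
print(power_for_target(500, grace_perf_per_kw) /
      power_for_target(500, baseline_perf_per_kw))        # 0.5
```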

Data center managers need these options to thrive in today's energy-efficient era. Moore's law is effectively dead. Physics no longer lets engineers pack more transistors in the same space at the same power. That's why new x86 CPUs typically offer gains over prior generations of less than 30%. It's also why a growing number of data centers are power capped. With the added threat of global warming, data centers don't have the luxury of expanding their power, but they still need to respond to the growing demands for computing.

NVIDIA Grace CPU Specs Remind Us Why Intel Never Shared x86 with the Green Team

NVIDIA designed the Grace CPU, a processor in the classical sense, to replace the Intel Xeon or AMD EPYC processors it had to cram into its pre-built HPC compute servers for serial-processing roles, mainly because the half-dozen GPU HPC processors in each server need to be interconnected by a CPU. The company studied the CPU-level limitations and bottlenecks, not just in I/O but also in the machine architecture, and realized its compute servers need a CPU purpose-built for the role, with an architecture heavily optimized for NVIDIA's APIs. Thus, the NVIDIA Grace CPU was born.

This is NVIDIA's first outing with a CPU whose processing footprint rivals server processors from Intel and AMD. Built on the TSMC N4 (4 nm EUV) silicon fabrication process, it is a monolithic chip deployed either on its own or alongside an H100 HPC processor on a single board that NVIDIA calls a "Superchip." A board with a Grace and an H100 makes up a Grace Hopper Superchip, while a board with two Grace CPUs makes up a Grace CPU Superchip. Each Grace CPU contains a 900 GB/s coherent switching-fabric interface, which has seven times the bandwidth of PCI-Express 5.0 x16. This is key to connecting the companion H100 processor, or neighboring Superchips on the node, with coherent memory access.
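As a rough sanity check of the "seven times PCI-Express 5.0 x16" comparison (assuming both figures count total, both-directions bandwidth, which NVIDIA has not spelled out), the arithmetic works out like this:

```python
# Rough sanity check of the "7x PCIe 5.0 x16" comparison above.
# Assumes both figures are total (both-directions) bandwidth; NVIDIA has not
# published its exact accounting, so this is an approximation.

PCIE5_GTS = 32                # GT/s per lane
LANES = 16
ENCODING_EFFICIENCY = 0.985   # 128b/130b line code, ignoring protocol overhead

pcie5_x16_unidir_gbs = PCIE5_GTS * LANES * ENCODING_EFFICIENCY / 8   # ~63 GB/s
pcie5_x16_bidir_gbs = 2 * pcie5_x16_unidir_gbs                       # ~126 GB/s

nvlink_c2c_gbs = 900          # total bandwidth quoted for NVLink-C2C

print(f"PCIe 5.0 x16 bidirectional: ~{pcie5_x16_bidir_gbs:.0f} GB/s")
print(f"NVLink-C2C / PCIe 5.0 x16:  ~{nvlink_c2c_gbs / pcie5_x16_bidir_gbs:.1f}x")
# -> roughly 7x, consistent with the figure NVIDIA quotes
```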

Taiwan's Tech Titans Adopt World's First NVIDIA Grace CPU-Powered System Designs

NVIDIA today announced that Taiwan's leading computer makers are set to release the first wave of systems powered by the NVIDIA Grace CPU Superchip and Grace Hopper Superchip for a wide range of workloads spanning digital twins, AI, high performance computing, cloud graphics and gaming. Dozens of server models from ASUS, Foxconn Industrial Internet, GIGABYTE, QCT, Supermicro and Wiwynn are expected starting in the first half of 2023. The Grace-powered systems will join x86 and other Arm-based servers to offer customers a broad range of choice for achieving high performance and efficiency in their data centers.

"A new type of data center is emerging—AI factories that process and refine mountains of data to produce intelligence—and NVIDIA is working closely with our Taiwan partners to build the systems that enable this transformation," said Ian Buck, vice president of Hyperscale and HPC at NVIDIA. "These new systems from our partners, powered by our Grace Superchips, will bring the power of accelerated computing to new markets and industries globally."

NVIDIA Claims Grace CPU Superchip is 2X Faster Than Intel Ice Lake

When NVIDIA announced its Grace CPU Superchip, the company officially showed its efforts in creating an HPC-oriented processor to compete with Intel and AMD. The Grace CPU Superchip combines two Grace CPU modules using NVLink-C2C technology to deliver 144 Arm v9 cores and 1 TB/s of memory bandwidth. Each core is an Arm Neoverse N2 "Perseus" design, configured to achieve the highest throughput and bandwidth. As far as performance is concerned, the only detail NVIDIA provides on its website is an estimated SPECrate 2017_int_base score of over 740. Thanks to the colleagues over at Tom's Hardware, we have another performance figure to look at.

NVIDIA has published a slide comparing the chip with Intel's Ice Lake server processors. One Grace CPU Superchip was compared to two Xeon Platinum 8360Y Ice Lake CPUs configured in a dual-socket server node. The Grace CPU Superchip outperformed the Ice Lake configuration by a factor of two and delivered 2.3 times the efficiency in a WRF simulation. This HPC application is CPU-bound, allowing the new Grace CPU to show off, thanks to its Arm v9 Neoverse N2 cores pairing efficiency with outstanding performance. NVIDIA also made a graph showcasing the HPC applications running on Arm today, with many more to come, which you can see below. Remember that this information comes from NVIDIA, so we will have to wait for the 2023 launch to see it in action.
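For readers curious how a "two times faster, 2.3 times more efficient" comparison is typically derived, the sketch below computes a speedup and an energy-to-solution ratio from runtime and average power. The runtime and power values are hypothetical placeholders, not NVIDIA's measured WRF data:

```python
# How a "2x faster, 2.3x more efficient" comparison can be computed.
# The runtime and average-power values below are hypothetical placeholders,
# NOT NVIDIA's measured WRF data.

def speedup(baseline_runtime_s: float, new_runtime_s: float) -> float:
    """Ratio of time-to-solution: baseline runtime over new runtime."""
    return baseline_runtime_s / new_runtime_s

def efficiency_ratio(baseline_runtime_s: float, baseline_power_w: float,
                     new_runtime_s: float, new_power_w: float) -> float:
    """Ratio of energy-to-solution: baseline energy over new energy."""
    baseline_energy_j = baseline_runtime_s * baseline_power_w
    new_energy_j = new_runtime_s * new_power_w
    return baseline_energy_j / new_energy_j

# Hypothetical example values (placeholders only):
dual_xeon_runtime, dual_xeon_power = 1000.0, 500.0   # seconds, watts
grace_runtime, grace_power = 500.0, 435.0            # seconds, watts

print(f"Speedup:    {speedup(dual_xeon_runtime, grace_runtime):.1f}x")
print(f"Efficiency: {efficiency_ratio(dual_xeon_runtime, dual_xeon_power, grace_runtime, grace_power):.1f}x")
# -> 2.0x speedup and 2.3x energy efficiency for these example inputs
```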

NVIDIA Unveils Grace CPU Superchip with 144 Cores and 1 TB/s Bandwidth

NVIDIA has today announced its Grace CPU Superchip, a monstrous design focused on heavy HPC and AI processing workloads. Previously, team green had teased an in-house-developed CPU meant to go into servers and create an entirely new segment for the company. Today, we got a more detailed look at the plan with the Grace CPU Superchip. The Superchip is a package of two Grace processors, each containing 72 cores. The cores implement the Arm v9 instruction set architecture, and the two CPUs combine for a total of 144 cores in the Superchip module. They are surrounded by an as-yet-unspecified amount of LPDDR5X memory with ECC, delivering 1 TB/s of total bandwidth.

The NVIDIA Grace CPU Superchip uses the NVLink-C2C cache-coherent interconnect, which delivers 900 GB/s of bandwidth, seven times more than a PCIe 5.0 x16 link. The company targets a two-fold performance-per-watt improvement over today's CPUs and wants to bring efficiency and performance together. We have some preliminary benchmark information provided by NVIDIA. In the SPECrate 2017_int_base integer benchmark, the Grace CPU Superchip scores over 740 points; for now this is only a simulated estimate, meaning the performance target is not finalized and a higher number could follow. The company expects to ship the Grace CPU Superchip in the first half of 2023, with an already-supported ecosystem of software, including the NVIDIA RTX, HPC, NVIDIA AI, and NVIDIA Omniverse software stacks and platforms.
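Put in per-core terms (bearing in mind that 740 is a pre-silicon estimate), the figure works out to roughly 5 SPECrate integer points per core:

```python
# Per-core view of the estimated SPECrate 2017_int_base score quoted above.
# 740 is NVIDIA's pre-silicon estimate, so this is an approximation only.

estimated_specrate_int_base = 740
total_cores = 144   # 2 x 72-core Grace CPUs per Superchip

print(f"~{estimated_specrate_int_base / total_cores:.1f} SPECrate int points per core")
# -> roughly 5.1 per core across the 144-core module
```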