News Posts matching #HPE


AMD Powers El Capitan: The World's Fastest Supercomputer

Today, AMD showcased its ongoing high performance computing (HPC) leadership at Supercomputing 2024 by powering the world's fastest supercomputer for the sixth straight Top 500 list.

The El Capitan supercomputer, housed at Lawrence Livermore National Laboratory (LLNL), powered by AMD Instinct MI300A APUs and built by Hewlett Packard Enterprise (HPE), is now the fastest supercomputer in the world with a High-Performance Linpack (HPL) score of 1.742 exaflops on the latest Top 500 list. El Capitan and the Frontier system at Oak Ridge National Laboratory also claimed the No. 18 and No. 22 spots, respectively, on the Green 500 list, showcasing the ability of AMD EPYC processors and AMD Instinct GPUs to deliver leadership performance and energy efficiency for HPC workloads.

Ultra Accelerator Link Consortium Plans Year-End Launch of UALink v1.0

The Ultra Accelerator Link (UALink) Consortium, led by board members from AMD, Amazon Web Services (AWS), Astera Labs, Cisco, Google, Hewlett Packard Enterprise (HPE), Intel, Meta and Microsoft, has announced the incorporation of the Consortium and is extending an invitation for membership to the community. The UALink Promoter Group was founded in May 2024 to define a high-speed, low-latency interconnect for scale-up communications between accelerators and switches in AI pods and clusters. According to the group, "the UALink standard defines high-speed and low latency communication for scale-up AI systems in data centers."

Intel and AMD Form x86 Ecosystem Advisory Group

Intel Corp. (NASDAQ: INTC) and AMD (NASDAQ: AMD) today announced the creation of an x86 ecosystem advisory group bringing together technology leaders to shape the future of the world's most widely used computing architecture. x86 is uniquely positioned to meet customers' emerging needs by delivering superior performance and seamless interoperability across hardware and software platforms. The group will focus on identifying new ways to expand the x86 ecosystem by enabling compatibility across platforms, simplifying software development, and providing developers with a platform to identify architectural needs and features to create innovative and scalable solutions for the future.

For over four decades, x86 has served as the bedrock of modern computing, establishing itself as the preferred architecture in data centers and PCs worldwide. In today's evolving landscape - characterized by dynamic AI workloads, custom chiplets, and advancements in 3D packaging and system architectures - a robust and expanding x86 ecosystem is more crucial than ever.

HPE Announces Industry's First 100% Fanless Direct Liquid Cooling Systems Architecture

Hewlett Packard Enterprise announced the industry's first 100% fanless direct liquid cooling systems architecture to enhance the energy and cost efficiency of large-scale AI deployments. The company introduced the innovation at its AI Day, held for members of the financial community at one of its state-of-the-art AI systems manufacturing facilities. During the event, the company showcased its expertise and leadership in AI across enterprises, sovereign governments, service providers and model builders.

Industry's first 100% fanless direct liquid cooling system
While efficiency has improved in next-generation accelerators, power consumption is continuing to intensify with AI adoption, outstripping traditional cooling techniques.

HPE Launches HPE ProLiant Compute XD685 Servers Powered by 5th Gen AMD EPYC Processors and AMD Instinct MI325X Accelerators

Hewlett Packard Enterprise today announced the HPE ProLiant Compute XD685 for complex AI model training tasks, powered by 5th Gen AMD EPYC processors and AMD Instinct MI325X accelerators. The new HPE system is optimized to quickly deploy high-performing, secure and energy-efficient AI clusters for use in large language model training, natural language processing and multi-modal training.

The race is on to unlock the promise of AI and its potential to dramatically advance outcomes in workforce productivity, healthcare, climate sciences and much more. To capture this potential, AI service providers, governments and large model builders require flexible, high-performance solutions that can be brought to market quickly.

NVIDIA Blackwell Sets New Standard for Generative AI in MLPerf Inference Benchmark

As enterprises race to adopt generative AI and bring new services to market, the demands on data center infrastructure have never been greater. Training large language models is one challenge, but delivering LLM-powered real-time services is another. In the latest round of MLPerf industry benchmarks, Inference v4.1, NVIDIA platforms delivered leading performance across all data center tests. The first-ever submission of the upcoming NVIDIA Blackwell platform revealed up to 4x more performance than the NVIDIA H100 Tensor Core GPU on MLPerf's biggest LLM workload, Llama 2 70B, thanks to its use of a second-generation Transformer Engine and FP4 Tensor Cores.

The NVIDIA H200 Tensor Core GPU delivered outstanding results on every benchmark in the data center category - including the latest addition to the benchmark, the Mixtral 8x7B mixture of experts (MoE) LLM, which features a total of 46.7 billion parameters, with 12.9 billion parameters active per token. MoE models have gained popularity as a way to bring more versatility to LLM deployments, as they're capable of answering a wide variety of questions and performing more diverse tasks in a single deployment. They're also more efficient since they only activate a few experts per inference - meaning they deliver results much faster than dense models of a similar size.
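
The split between total and active parameters is simple arithmetic once the expert layout is known. Below is a minimal sketch in Python; the per-expert and shared parameter counts are approximations chosen to reproduce the quoted Mixtral 8x7B figures, not a published breakdown:

```python
# Minimal sketch of mixture-of-experts (MoE) parameter accounting,
# illustrating why only a fraction of the model is active per token.
# EXPERT_PARAMS and SHARED_PARAMS are assumptions fitted to the quoted
# 46.7B total / 12.9B active figures, not official numbers.

NUM_EXPERTS = 8          # experts per MoE layer
TOP_K = 2                # experts the router activates per token
EXPERT_PARAMS = 5.63e9   # assumed parameters per expert
SHARED_PARAMS = 1.63e9   # assumed attention/embedding parameters shared by all tokens

total_params = SHARED_PARAMS + NUM_EXPERTS * EXPERT_PARAMS
active_params = SHARED_PARAMS + TOP_K * EXPERT_PARAMS

print(f"total:  {total_params / 1e9:.1f}B parameters")             # ~46.7B
print(f"active: {active_params / 1e9:.1f}B parameters per token")  # ~12.9B
```

Because each token touches only the shared weights plus its top-k experts, per-token compute scales with the active count rather than the total, which is the efficiency advantage described above.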

SK hynix Showcases Its New AI Memory Solutions at HPE Discover 2024

SK hynix has returned to Las Vegas to showcase its leading AI memory solutions at HPE Discover 2024, Hewlett Packard Enterprise's (HPE) annual technology conference. Held from June 17-20, HPE Discover 2024 features a packed schedule with more than 150 live demonstrations, as well as technical sessions, exhibitions, and more. This year, attendees can also benefit from three new curated programs on edge computing and networking, hybrid cloud technology, and AI. Under the slogan "Memory, The Power of AI," SK hynix is displaying its latest memory solutions at the event including those supplied to HPE. The company is also taking advantage of the numerous networking opportunities to strengthen its relationship with the host company and its other partners.

The World's Leading Memory Solutions Driving AI
SK hynix's booth at HPE Discover 2024 consists of three product sections and a demonstration zone, which together showcase the unprecedented capabilities of its AI memory solutions. The first section features the company's groundbreaking memory solutions for AI, including HBM solutions. In particular, the industry-leading HBM3E has emerged as a core product for meeting the growing demands of AI systems due to its exceptional processing speed, capacity, and heat dissipation. A key solution from the company's CXL lineup, the CXL Memory Module-DDR5 (CMM-DDR5), is also on display in this section. In the AI era, where high performance and capacity are vital, CMM-DDR5 has gained attention for its ability to expand system bandwidth by up to 50% and capacity by up to 100% compared to systems equipped only with DDR5 DRAM.
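
As a rough illustration of what those percentages mean in practice, here is the arithmetic against an assumed single-socket DDR5 baseline (the baseline figures are illustrative examples, not a published platform spec):

```python
# Illustrative arithmetic for the quoted CXL gains: CMM-DDR5 modules
# added alongside native DDR5 can expand system bandwidth by up to 50%
# and capacity by up to 100%. The DDR5 baseline is an assumption.

ddr5_bw_gbs = 307.0   # assumed: 8 channels of DDR5-4800 per socket
ddr5_cap_gb = 512.0   # assumed: 8 x 64 GB RDIMMs

bw_with_cxl = ddr5_bw_gbs * 1.5    # up to +50% bandwidth over CXL lanes
cap_with_cxl = ddr5_cap_gb * 2.0   # up to +100% capacity

print(f"bandwidth: {ddr5_bw_gbs:.0f} GB/s -> {bw_with_cxl:.0f} GB/s")
print(f"capacity:  {ddr5_cap_gb:.0f} GB  -> {cap_with_cxl:.0f} GB")
```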

AMD, Broadcom, Cisco, Google, HPE, Intel, Meta and Microsoft Form Ultra Accelerator Link (UALink) Promoter Group to Combat NVIDIA NVLink

AMD, Broadcom, Cisco, Google, Hewlett Packard Enterprise (HPE), Intel, Meta and Microsoft today announced they have aligned to develop a new industry standard dedicated to advancing high-speed, low-latency communication for linking scale-up AI systems in data centers.

Called the Ultra Accelerator Link (UALink), this initial group will define and establish an open industry standard that will enable AI accelerators to communicate more effectively. By creating an interconnect based upon open standards, UALink will enable system OEMs, IT professionals and system integrators to create a pathway for easier integration, greater flexibility and scalability of their AI-connected data centers.

Blackwell Shipments Imminent, Total CoWoS Capacity Expected to Surge by Over 70% in 2025

TrendForce reports that NVIDIA's Hopper H100 began to see a reduction in shortages in 1Q24. The new H200 from the same platform is expected to gradually ramp in Q2, with the Blackwell platform entering the market in Q3 and expanding to data center customers in Q4. However, this year will still primarily focus on the Hopper platform, which includes the H100 and H200 product lines. The Blackwell platform—based on how far supply chain integration has progressed—is expected to start ramping up in Q4, accounting for less than 10% of the total high-end GPU market.

The die size of Blackwell platform chips like the B100 is twice that of the H100. As Blackwell becomes mainstream in 2025, the total capacity of TSMC's CoWoS is projected to grow by 150% in 2024 and by over 70% in 2025, with NVIDIA's demand occupying nearly half of this capacity. For HBM, the NVIDIA GPU platform's evolution sees the H100 primarily using 80 GB of HBM3, while the 2025 B200 will feature 288 GB of HBM3e—a 3-4 fold increase in capacity per chip. The three major manufacturers' expansion plans indicate that HBM production volume will likely double by 2025.
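
Those growth rates compound. A quick sketch, normalizing 2023 CoWoS capacity to 1.0 (the normalization is an assumption; only the percentages come from the report):

```python
# Compounding TrendForce's quoted CoWoS growth rates. 2023 capacity is
# normalized to 1.0 as a convenience; the growth figures are as quoted.

capacity_2023 = 1.0
capacity_2024 = capacity_2023 * (1 + 1.50)   # +150% in 2024
capacity_2025 = capacity_2024 * (1 + 0.70)   # +70% in 2025
print(f"2025 CoWoS capacity: {capacity_2025:.2f}x the 2023 level")  # 4.25x

# HBM per GPU: H100 (80 GB HBM3) vs. B200 (288 GB HBM3e)
print(f"HBM per chip: {288 / 80:.1f}x")  # 3.6x, i.e. the quoted 3-4 fold
```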

TOP500: Frontier Keeps Top Spot, Aurora Officially Becomes the Second Exascale Machine

The 63rd edition of the TOP500 reveals that Frontier has once again claimed the top spot, despite no longer being the only exascale machine on the list. Additionally, a new system has found its way into the Top 10.

The Frontier system at Oak Ridge National Laboratory in Tennessee, USA remains the most powerful system on the list with an HPL score of 1.206 EFlop/s. The system has a total of 8,699,904 combined CPU and GPU cores, an HPE Cray EX architecture that combines 3rd Gen AMD EPYC CPUs optimized for HPC and AI with AMD Instinct MI250X accelerators, and it relies on Cray's Slingshot 11 network for data transfer. On top of that, this machine has an impressive power efficiency rating of 52.93 GFlops/Watt - putting Frontier at the No. 13 spot on the GREEN500.
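
The efficiency figure also lets you back out Frontier's approximate power draw during the HPL run, a quick sanity check using only the numbers above:

```python
# Backing out Frontier's approximate HPL power draw from its TOP500 and
# GREEN500 figures. Simple division; no data beyond the quoted numbers.

hpl_eflops = 1.206                 # HPL score, EFlop/s
efficiency_gflops_per_w = 52.93    # GREEN500 rating, GFlops/Watt

power_mw = (hpl_eflops * 1e9) / efficiency_gflops_per_w / 1e6
print(f"implied HPL power draw: ~{power_mw:.1f} MW")  # ~22.8 MW
```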

Intel Previews Xeon "Sierra Forest" 288 E-core Processor at MWC 2024 for Telecom Applications

To answer network operators' demands for energy-efficient scaling, Intel Corporation disclosed two major updates to drive footprint, power and total cost of ownership (TCO) savings across 5G core networks: the preview of its Intel Xeon next-gen processors, code-named Sierra Forest, with up to 288 Efficient-cores (E-cores), and the commercial availability of the Intel Infrastructure Power Manager (IPM) software for 5G core.

"Communication service providers require greater infrastructure efficiency as 5G core networks continue to build out. With the majority of 5G core networks deployed on Intel Xeon processors today, Intel is uniquely positioned to address these efficiency challenges. By introducing new Efficient-cores to our roadmap and with the commercial adoption of our Intel Infrastructure Power Manager software, service providers can slash TCO while achieving unmatched performance and power savings across their networks," said Alex Quach, Intel vice president and general manager of Wireline and Core Network Division. Energy consumption and reduction of the infrastructure footprint remain top challenges that network operators face in building out their wireless 5G core network.

Kioxia Joins HPE Servers on Space Launch Destined for the International Space Station

Today, KIOXIA SSDs took flight with the launch of the NG-20 mission rocket, which is delivering an updated HPE Spaceborne Computer-2, based on HPE Edgeline and ProLiant servers from Hewlett Packard Enterprise (HPE), to the International Space Station (ISS). KIOXIA SSDs provide robust flash storage in HPE Spaceborne Computer-2 to conduct scientific experiments aboard the space station.

HPE Spaceborne Computer-2, based on commercial off-the-shelf technology, provides edge computing and AI capabilities on board the research outpost as part of a greater mission to significantly advance computing power in space and reduce dependency on communications as space exploration continues to expand. Designed to perform various high-performance computing (HPC) workloads in space, including real-time image processing, deep learning, and scientific simulations, HPE Spaceborne Computer-2 can be used to compute a number of experiment types including healthcare, natural disaster recovery, 3D printing, 5G, AI, and more.

Intel Appoints Justin Hotard to Lead Data Center and AI Group

Intel Corporation today announced the appointment of Justin Hotard as executive vice president and general manager of its Data Center and AI Group (DCAI), effective Feb. 1. He joins Intel with more than 20 years of experience driving transformation and growth in computing and data center businesses, and is a leader in delivering scalable AI systems for the enterprise.

Hotard will become a member of Intel's executive leadership team and report directly to CEO Pat Gelsinger. He will be responsible for Intel's suite of data center products spanning enterprise and cloud, including its Intel Xeon processor family, graphics processing units (GPUs) and accelerators. He will also play an integral role in driving the company's mission to bring AI everywhere.

Intel's New 5th Gen "Emerald Rapids" Xeon Processors are Built with AI Acceleration in Every Core

Today at the "AI Everywhere" event, Intel launched its 5th Gen Intel Xeon processors (code-named Emerald Rapids) that deliver increased performance per watt and lower total cost of ownership (TCO) across critical workloads for artificial intelligence, high performance computing (HPC), networking, storage, database and security. This launch marks the second Xeon family upgrade in less than a year, offering customers more compute and faster memory at the same power envelope as the previous generation. The processors are software- and platform-compatible with 4th Gen Intel Xeon processors, allowing customers to upgrade and maximize the longevity of infrastructure investments while reducing costs and carbon emissions.

"Designed for AI, our 5th Gen Intel Xeon processors provide greater performance to customers deploying AI capabilities across cloud, network and edge use cases. As a result of our long-standing work with customers, partners and the developer ecosystem, we're launching 5th Gen Intel Xeon on a proven foundation that will enable rapid adoption and scale at lower TCO." -Sandra Rivera, Intel executive vice president and general manager of Data Center and AI Group.

New Kioxia RM7 Series Value SAS SSDs Debut on HPE Servers

Kioxia Corporation, a world leader in memory solutions, today announced that its lineup of KIOXIA RM7 Series Value SAS SSDs is now available in HPE ProLiant Gen11 servers from Hewlett Packard Enterprise (HPE). KIOXIA RM7 Series SSDs are the latest generation of the company's 12 Gb/s Value SAS SSDs, which provide server applications with higher performance, higher reliability and lower latency than SATA SSDs, delivering better IOPS/W and IOPS/$.

In addition to being available in ProLiant servers, KIOXIA RM Series Value SAS SSDs are being used in the HPE Spaceborne Computer-2 (SBC-2). As part of the program, KIOXIA SSDs provide robust flash storage in HPE Edgeline and HPE ProLiant servers in a test environment to conduct scientific experiments aboard the International Space Station (ISS).

AMD Showcases Growing Momentum for AMD Powered AI Solutions from the Data Center to PCs

Today at the "Advancing AI" event, AMD was joined by industry leaders including Microsoft, Meta, Oracle, Dell Technologies, HPE, Lenovo, Supermicro, Arista, Broadcom and Cisco to showcase how these companies are working with AMD to deliver advanced AI solutions spanning from cloud to enterprise and PCs. AMD launched multiple new products at the event, including the AMD Instinct MI300 Series data center AI accelerators, ROCm 6 open software stack with significant optimizations and new features supporting Large Language Models (LLMs) and Ryzen 8040 Series processors with Ryzen AI.

"AI is the future of computing and AMD is uniquely positioned to power the end-to-end infrastructure that will define this AI era, from massive cloud installations to enterprise clusters and AI-enabled intelligent embedded devices and PCs," said AMD Chair and CEO Dr. Lisa Su. "We are seeing very strong demand for our new Instinct MI300 GPUs, which are the highest-performance accelerators in the world for generative AI. We are also building significant momentum for our data center AI solutions with the largest cloud companies, the industry's top server providers, and the most innovative AI startups ꟷ who we are working closely with to rapidly bring Instinct MI300 solutions to market that will dramatically accelerate the pace of innovation across the entire AI ecosystem."

NVIDIA Grace Hopper Superchip Powers 40+ AI Supercomputers

Dozens of new supercomputers for scientific computing will soon hop online, powered by NVIDIA's breakthrough GH200 Grace Hopper Superchip for giant-scale AI and high performance computing. The NVIDIA GH200 enables scientists and researchers to tackle the world's most challenging problems by accelerating complex AI and HPC applications running terabytes of data.

At the SC23 supercomputing show, NVIDIA today announced that the superchip is coming to more systems worldwide, including from Dell Technologies, Eviden, Hewlett Packard Enterprise (HPE), Lenovo, QCT and Supermicro. Bringing together the Arm-based NVIDIA Grace CPU and Hopper GPU architectures using NVIDIA NVLink-C2C interconnect technology, GH200 also serves as the engine behind scientific supercomputing centers across the globe. Combined, these GH200-powered centers represent some 200 exaflops of AI performance to drive scientific innovation.

NVIDIA Turbocharges Generative AI Training in MLPerf Benchmarks

NVIDIA's AI platform raised the bar for AI training and high performance computing in the latest MLPerf industry benchmarks. Among many new records and milestones, one in generative AI stands out: NVIDIA Eos - an AI supercomputer powered by a whopping 10,752 NVIDIA H100 Tensor Core GPUs and NVIDIA Quantum-2 InfiniBand networking - completed a training benchmark based on a GPT-3 model with 175 billion parameters trained on one billion tokens in just 3.9 minutes. That's a nearly 3x gain from 10.9 minutes, the record NVIDIA set when the test was introduced less than six months ago.

The benchmark uses a portion of the full GPT-3 data set behind the popular ChatGPT service that, by extrapolation, Eos could now train in just eight days, 73x faster than a prior state-of-the-art system using 512 A100 GPUs. The acceleration in training time reduces costs, saves energy and speeds time-to-market. It's heavy lifting that makes large language models widely available so every business can adopt them with tools like NVIDIA NeMo, a framework for customizing LLMs. In a new generative AI test this round, 1,024 NVIDIA Hopper architecture GPUs completed a training benchmark based on the Stable Diffusion text-to-image model in 2.5 minutes, setting a high bar on this new workload. By adopting these two tests, MLPerf reinforces its leadership as the industry standard for measuring AI performance, since generative AI is the most transformative technology of our time.
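
The headline claims reduce to straightforward arithmetic on the quoted figures:

```python
# Simple arithmetic on the MLPerf GPT-3 (175B) result: speedup over the
# prior record and implied training throughput. All inputs are the
# figures quoted above; nothing else is assumed.

record_min, prior_min = 3.9, 10.9   # minutes to train on 1B tokens
tokens = 1e9

print(f"speedup vs. six months ago: {prior_min / record_min:.1f}x")     # ~2.8x, the "nearly 3x"
print(f"throughput: {tokens / (record_min * 60) / 1e6:.1f}M tokens/s")  # ~4.3M tokens/s
```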

AMD Extends 3rd Gen EPYC CPU Lineup to Deliver New Levels of Value for Mainstream Applications

Today, AMD announced the extension of its 3rd Gen AMD EPYC processor family with six new offerings providing a robust suite of data center CPUs to meet the needs of general IT and mainstream computing for businesses seeking to leverage the economics of established platforms. The complete family of 3rd Gen AMD EPYC CPUs complements the leadership performance and efficiency of the latest 4th Gen AMD EPYC processors with impressive price-performance, modern security features and energy efficiency for less technically demanding business critical workloads.

The race to deliver AI and high performance computing is creating a technology gap for IT decision-makers seeking mainstream performance. To meet the growing demand for widely deployed, cost effective and proven mainstream solutions in the mid-market and in the channel, AMD is extending the 3rd Gen EPYC CPU offering to provide excellent value, performance, energy efficiency and security features for business-critical applications. The 3rd Gen AMD EPYC CPU portfolio enables a wide array of broadly deployed enterprise server solutions, supported by trusted channel sellers and OEMs such as Cisco, Dell Technologies, Gigabyte, HPE, Lenovo and Supermicro.

NVIDIA AI-Ready Servers From World's Leading System Manufacturers to Supercharge Generative AI for Enterprises

NVIDIA today announced the world's leading system manufacturers will deliver AI-ready servers that support VMware Private AI Foundation with NVIDIA, announced separately today, to help companies customize and deploy generative AI applications using their proprietary business data. NVIDIA AI-ready servers will include NVIDIA L40S GPUs, NVIDIA BlueField-3 DPUs and NVIDIA AI Enterprise software to enable enterprises to fine-tune generative AI foundation models and deploy generative AI applications like intelligent chatbots, search and summarization tools. These servers also provide NVIDIA-accelerated infrastructure and software to power VMware Private AI Foundation with NVIDIA.

NVIDIA L40S-powered servers from leading global system manufacturers - Dell Technologies, Hewlett Packard Enterprise and Lenovo - will be available by year-end to accelerate enterprise AI. "A new computing era has begun," said Jensen Huang, founder and CEO of NVIDIA. "Companies in every industry are racing to adopt generative AI. With our ecosystem of world-leading software and system partners, we are bringing generative AI to the world's enterprises."

NVIDIA Announces NVIDIA OVX servers Featuring New NVIDIA L40S GPU for Generative AI and Industrial Digitalization

NVIDIA today announced NVIDIA OVX servers featuring the new NVIDIA L40S GPU, a powerful, universal data center processor designed to accelerate the most compute-intensive, complex applications, including AI training and inference, 3D design and visualization, video processing and industrial digitalization with the NVIDIA Omniverse platform. The new GPU powers accelerated computing workloads for generative AI, which is transforming workflows and services across industries, including text, image and video generation, chatbots, game development, product design and healthcare.

"As generative AI transforms every industry, enterprises are increasingly seeking large-scale compute resources in the data center," said Bob Pette, vice president of professional visualization at NVIDIA. "OVX systems with NVIDIA L40S GPUs accelerate AI, graphics and video processing workloads, and meet the demanding performance requirements of an ever-increasing set of complex and diverse applications."

NVIDIA AI Workbench Speeds Adoption of Custom Generative AI

NVIDIA today announced NVIDIA AI Workbench, a unified, easy-to-use toolkit that allows developers to quickly create, test and customize pretrained generative AI models on a PC or workstation - then scale them to virtually any data center, public cloud or NVIDIA DGX Cloud. AI Workbench removes the complexity of getting started with an enterprise AI project. Accessed through a simplified interface running on a local system, it allows developers to customize models from popular repositories like Hugging Face, GitHub and NVIDIA NGC using custom data. The models can then be shared easily across multiple platforms.
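
AI Workbench's own interface is not shown here, but the underlying workflow it streamlines is familiar. As a rough, minimal sketch of that workflow using the standard Hugging Face transformers and datasets APIs (the model name and data path below are placeholders, and this is not AI Workbench's API):

```python
# Sketch of the workflow AI Workbench streamlines: pull a pretrained
# model from Hugging Face and fine-tune it on custom local data.
# Standard transformers/datasets APIs only; "gpt2" and "my_corpus.txt"
# are placeholders.

from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # placeholder; any causal-LM repo id works
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Custom data: one text sample per line (hypothetical file).
data = load_dataset("text", data_files={"train": "my_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = data["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1),
    train_dataset=tokenized,
    # Causal-LM collator copies inputs to labels for next-token loss.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```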

"Enterprises around the world are racing to find the right infrastructure and build generative AI models and applications," said Manuvir Das, vice president of enterprise computing at NVIDIA. "NVIDIA AI Workbench provides a simplified path for cross-organizational teams to create the AI-based applications that are increasingly becoming essential in modern business."

AMD Reports Second Quarter 2023 Financial Results, Revenue Down 18% YoY

AMD today announced revenue for the second quarter of 2023 of $5.4 billion, gross margin of 46%, operating loss of $20 million, net income of $27 million and diluted earnings per share of $0.02. On a non-GAAP basis, gross margin was 50%, operating income was $1.1 billion, net income was $948 million and diluted earnings per share was $0.58.

"We delivered strong results in the second quarter as 4th Gen EPYC and Ryzen 7000 processors ramped significantly," said AMD Chair and CEO Dr. Lisa Su. "Our AI engagements increased by more than seven times in the quarter as multiple customers initiated or expanded programs supporting future deployments of Instinct accelerators at scale. We made strong progress meeting key hardware and software milestones to address the growing customer pull for our data center AI solutions and are on-track to launch and ramp production of MI300 accelerators in the fourth quarter."

Leading Cloud Service, Semiconductor, and System Providers Unite to Form Ultra Ethernet Consortium

Announced today, Ultra Ethernet Consortium (UEC) is bringing together leading companies for industry-wide cooperation to build a complete Ethernet-based communication stack architecture for high-performance networking. Artificial Intelligence (AI) and High-Performance Computing (HPC) workloads are rapidly evolving and require best-in-class functionality, performance, interoperability and total cost of ownership, without sacrificing developer and end-user friendliness. The Ultra Ethernet solution stack will capitalize on Ethernet's ubiquity and flexibility for handling a wide variety of workloads while being scalable and cost-effective.

Ultra Ethernet Consortium is founded by companies with long-standing histories and experience in high-performance solutions. Each member is contributing significantly to the broader high-performance computing ecosystem in an egalitarian manner. The founding members include AMD, Arista, Broadcom, Cisco, Eviden (an Atos Business), HPE, Intel, Meta and Microsoft, who collectively have decades of experience with networking, AI, cloud and high-performance computing deployments at scale.

Two-ExaFLOP El Capitan Supercomputer Starts Installation Process with AMD Instinct MI300A

When Lawrence Livermore National Laboratory (LLNL) announced the creation of a two-ExaFLOP supercomputer named El Capitan, we heard that AMD would power it with its Instinct MI300 accelerator. Today, LLNL published a Tweet that states, "We've begun receiving & installing components for El Capitan, @NNSANews' first #exascale #supercomputer. While we're still a ways from deploying it for national security purposes in 2024, it's exciting to see years of work becoming reality." As published images show, HPE racks filled with AMD Instinct MI300 accelerators are now arriving at LLNL's facility, and the supercomputer is expected to go operational in 2024. This likely means the November 2023 TOP500 list update won't feature El Capitan, as bringing the system up in the four months until then would be very difficult.

The El Capitan supercomputer is expected to run on AMD Instinct MI300A accelerators, each of which features 24 Zen 4 cores, the CDNA 3 architecture, and 128 GB of HBM3 memory. Four accelerators are paired together inside each HPE node, which also receives water-cooling treatment. While we don't have many further details on the memory and storage of El Capitan, we know that the system will exceed two ExaFLOPS at peak and will consume close to 40 MW of power.
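
Taken together, the quoted peak and power figures imply the system's ballpark efficiency:

```python
# Implied efficiency from the quoted El Capitan figures: >2 ExaFLOPS
# peak at close to 40 MW. Simple division on the numbers above.

peak_eflops = 2.0
power_mw = 40.0
gflops_per_watt = (peak_eflops * 1e9) / (power_mw * 1e6)
print(f"~{gflops_per_watt:.0f} GFLOPS/W at peak")  # ~50 GFLOPS/W
```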