News Posts matching #HPE


HPE Expands ProLiant Gen12 Server Portfolio With 5th Gen AMD EPYC Processors

HPE today announced an expansion to the HPE ProLiant Compute Gen12 server portfolio, which delivers next-level security, performance and efficiency. The expanded portfolio includes two new servers powered by 5th Gen AMD EPYC processors to optimize memory-intensive workloads, and new automation features for greater visibility and control delivered through HPE Compute Ops Management.

In addition, HPE ProLiant Compute servers are now available with HPE Morpheus VM Essentials Software support. HPE Morpheus VM Essentials is an open virtualization solution that helps reduce costs, minimize vendor lock-in, and simplify IT management. HPE also announced new HPE solutions for Azure Local, built on the HPE ProLiant DL145 Gen11 server, to support the expansion of purpose-built edge capabilities across distributed environments.

NVIDIA and HPE Join Forces to Construct Advanced Supercomputer in Germany

NVIDIA and Hewlett Packard Enterprise announced Tuesday at a supercomputing conference in Hamburg that they are partnering with Germany's Leibniz Supercomputing Centre to build a new supercomputer called Blue Lion, which will deliver approximately 30 times more computing power than the current SuperMUC-NG system. Blue Lion will run on NVIDIA's upcoming Vera Rubin architecture, which pairs the Rubin GPU with Vera, NVIDIA's first custom CPU. The integrated system aims to unite simulation, data processing, and AI in one high-bandwidth, low-latency platform. Optimized for scientific research, it offers shared-memory, coherent compute capabilities and in-network acceleration.

HPE will build the system on its next-generation Cray technology, integrating NVIDIA GPUs with cutting-edge storage and interconnect systems. Blue Lion will use HPE's 100% fanless direct liquid-cooling design, which circulates warm water through pipes for efficient cooling, and the system's waste heat will be reused to warm nearby buildings. The Blue Lion project follows NVIDIA's announcement that Lawrence Berkeley National Laboratory in the US will stand up a Vera Rubin-powered system called Doudna next year. Scientists will have access to Blue Lion beginning in early 2027. The Germany-based system will be used by researchers working on climate, physics, and machine learning, while Doudna, the U.S. Department of Energy's next supercomputer, will take in data from telescopes, genome sequencers, and fusion experiments.

HPE Expands Its Aruba Networking Wired and Wireless Portfolio

Hewlett Packard Enterprise today announced expansions of its HPE Aruba Networking wired and wireless portfolio, along with new HPE Aruba Networking CX 10K distributed services switches, which feature built-in programmable data processing units (DPUs) from AMD Pensando to offload security and network services, freeing up resources for complex AI workload processing.

The new expansions from HPE Aruba Networking include:
  • The HPE Aruba Networking CX 10040 is HPE's latest distributed services switch -- also known as a "smart switch" -- that doubles the scale and performance of the previous networking and security solution.
  • Four new HPE Aruba Networking CX 6300M campus networking switches, which provide faster data speeds for enterprise IoT, AI, or high-performance computing with a more compact footprint.
  • New Wi-Fi 7 access points (APs) and capabilities for AI-driven indoor and outdoor connectivity that deliver the highest quality of service for data, voice, and video communications.

NVIDIA & Storage Industry Leaders Unveil New Class of Enterprise Infrastructure for the AI Era

At GTC 2025, NVIDIA announced the NVIDIA AI Data Platform, a customizable reference design that leading providers are using to build a new class of AI infrastructure for demanding AI inference workloads: enterprise storage platforms with AI query agents fueled by NVIDIA accelerated computing, networking and software. Using the NVIDIA AI Data Platform, NVIDIA-Certified Storage providers can build infrastructure to speed AI reasoning workloads with specialized AI query agents. These agents help businesses generate insights from data in near real time, using NVIDIA AI Enterprise software—including NVIDIA NIM microservices for the new NVIDIA Llama Nemotron models with reasoning capabilities—as well as the new NVIDIA AI-Q Blueprint.

Storage providers can optimize their infrastructure to power these agents with NVIDIA Blackwell GPUs, NVIDIA BlueField DPUs, NVIDIA Spectrum-X networking and the NVIDIA Dynamo open-source inference library. Leading data platform and storage providers—including DDN, Dell Technologies, Hewlett Packard Enterprise, Hitachi Vantara, IBM, NetApp, Nutanix, Pure Storage, VAST Data and WEKA—are collaborating with NVIDIA to create customized AI data platforms that can harness enterprise data to reason and respond to complex queries. "Data is the raw material powering industries in the age of AI," said Jensen Huang, founder and CEO of NVIDIA. "With the world's storage leaders, we're building a new class of enterprise infrastructure that companies need to deploy and scale agentic AI across hybrid data centers."

Intel Xeon 6 Processors With E-Cores Achieve Rapid Ecosystem Adoption by Industry-Leading 5G Core Solution Partners

Intel today showcased how Intel Xeon 6 processors with Efficient-cores (E-cores) have dramatically accelerated time-to-market adoption for the company's solutions in collaboration with the ecosystem. Since product introduction in June 2024, 5G core solution partners have independently validated a 3.2x performance improvement, a 3.8x performance per watt increase and, in collaboration with the Intel Infrastructure Power Manager launched at MWC 2024, a 60% reduction in run-time power consumption.
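The quoted figures also imply a power saving at equal utilization: if performance rises 3.2x while performance per watt rises 3.8x, absolute power at full load drops by roughly 16%. A quick sanity check using only the numbers quoted above (the variable names are illustrative, not Intel's):

```python
perf_gain = 3.2            # partner-validated performance improvement
perf_per_watt_gain = 3.8   # partner-validated performance-per-watt improvement

# Power ratio at equal utilization: performance divided by performance-per-watt.
power_ratio = perf_gain / perf_per_watt_gain   # ~0.842 -> ~16% less power
print(round(power_ratio, 3))
```

Note that the separate 60% run-time power reduction figure is attributed to Intel Infrastructure Power Manager, a software layer on top of these silicon gains.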

"As 5G core networks continue to build out using Intel Xeon processors, which are deployed in the vast majority of 5G networks worldwide, infrastructure efficiency, power savings and uncompromised performance are essential criteria for communication service providers (CoSPs). Intel is pleased to announce that our 5G core solution partners have accelerated the adoption of Intel Xeon 6 with E-cores and are immediately passing along these benefits to their customers. In addition, with Intel Infrastructure Power Manager, our partners have a run-time software solution that is showing tremendous progress in reducing server power in CoSP environments on existing and new infrastructure." -Alex Quach, Intel vice president and general manager of Wireline and Core Network Division

HPE Announces First Shipment of NVIDIA "Grace Blackwell" System

Hewlett Packard Enterprise announced today that it has shipped its first NVIDIA Blackwell family-based solution, the NVIDIA GB200 NVL72. This rack-scale system by HPE is designed to help service providers and large enterprises quickly deploy very large, complex AI clusters with advanced, direct liquid cooling solutions to optimize efficiency and performance. "AI service providers and large enterprise model builders are under tremendous pressure to offer scalability, extreme performance, and fast time-to-deployment," said Trish Damkroger, senior vice president and general manager of HPC & AI Infrastructure Solutions, HPE. "As builders of the world's top three fastest systems with direct liquid cooling, HPE offers customers lower cost per token training and best-in-class performance with industry-leading services expertise."

The NVIDIA GB200 NVL72 features a shared-memory, low-latency architecture with the latest GPU technology, designed to run extremely large AI models of over a trillion parameters in a single memory space. GB200 NVL72 offers seamless integration of NVIDIA CPUs, GPUs, compute and switch trays, networking, and software, delivering extreme performance for heavily parallelizable workloads such as generative AI (GenAI) model training and inference, alongside NVIDIA software applications. "Engineers, scientists and researchers need cutting-edge liquid cooling technology to keep up with increasing power and compute requirements," said Bob Pette, vice president of enterprise platforms at NVIDIA. "Building on continued collaboration between HPE and NVIDIA, HPE's first shipment of NVIDIA GB200 NVL72 will help service providers and large enterprises efficiently build, deploy and scale large AI clusters."

HPE Introduces Next-Generation ProLiant Servers

Hewlett Packard Enterprise today announced eight new HPE ProLiant Compute Gen12 servers, the latest additions to a new generation of enterprise servers that introduce industry-first security capabilities, optimize performance for complex workloads and boost productivity with management features enhanced by artificial intelligence (AI). The new servers will feature upcoming Intel Xeon 6 processors for data center and edge environments.

"Our customers are tackling workloads that are overwhelmingly data-intensive and growing ever-more demanding," said Krista Satterthwaite, senior vice president and general manager, Compute at HPE. "The new HPE ProLiant Compute Gen12 servers give organizations - spanning public sector, enterprise and vertical industries like finance, healthcare and more - the horsepower and management insights they need to thrive while balancing their sustainability goals and managing costs. This is a modern enterprise platform engineered for the hybrid world, designed with innovative security and control capabilities to help companies prevail over the evolving threat landscape and performance challenges that their legacy hardware cannot address."

Argonne Releases Aurora: Intel-based Exascale Supercomputer Available to Researchers

The U.S. Department of Energy's (DOE) Argonne National Laboratory has released its Aurora exascale supercomputer to researchers across the world, heralding a new era of computing-driven discoveries. With powerful capabilities for simulation, artificial intelligence (AI), and data analysis, Aurora will drive breakthroughs in a range of fields including airplane design, cosmology, drug discovery, and nuclear energy research.

"We're ecstatic to officially deploy Aurora for open scientific research," said Michael Papka, director of the Argonne Leadership Computing Facility (ALCF), a DOE Office of science user facility. "Early users have given us a glimpse of Aurora's vast potential. We're eager to see how the broader scientific community will use the system to transform their research."

AMD Powers El Capitan: The World's Fastest Supercomputer

Today, AMD showcased its ongoing high performance computing (HPC) leadership at Supercomputing 2024 by powering the world's fastest supercomputer for the sixth straight Top 500 list.

The El Capitan supercomputer, housed at Lawrence Livermore National Laboratory (LLNL), powered by AMD Instinct MI300A APUs and built by Hewlett Packard Enterprise (HPE), is now the fastest supercomputer in the world with a High-Performance Linpack (HPL) score of 1.742 exaflops based on the latest Top 500 list. El Capitan and the Frontier system at Oak Ridge National Lab claimed the No. 18 and No. 22 spots, respectively, on the Green 500 list, showcasing the impressive capabilities of AMD EPYC processors and AMD Instinct GPUs in driving leadership performance and energy efficiency for HPC workloads.

Ultra Accelerator Link Consortium Plans Year-End Launch of UALink v1.0

The Ultra Accelerator Link (UALink) Consortium, led by board members from AMD, Amazon Web Services (AWS), Astera Labs, Cisco, Google, Hewlett Packard Enterprise (HPE), Intel, Meta and Microsoft, has announced its incorporation and is extending an invitation for membership to the community. The UALink Promoter Group was founded in May 2024 to define a high-speed, low-latency interconnect for scale-up communications between accelerators and switches in AI pods and clusters. The UALink standard defines high-speed, low-latency communication for scale-up AI systems in data centers.

Intel and AMD Form x86 Ecosystem Advisory Group

Intel Corp. (NASDAQ: INTC) and AMD (NASDAQ: AMD) today announced the creation of an x86 ecosystem advisory group bringing together technology leaders to shape the future of the world's most widely used computing architecture. x86 is uniquely positioned to meet customers' emerging needs by delivering superior performance and seamless interoperability across hardware and software platforms. The group will focus on identifying new ways to expand the x86 ecosystem by enabling compatibility across platforms, simplifying software development, and providing developers with a platform to identify architectural needs and features to create innovative and scalable solutions for the future.

For over four decades, x86 has served as the bedrock of modern computing, establishing itself as the preferred architecture in data centers and PCs worldwide. In today's evolving landscape - characterized by dynamic AI workloads, custom chiplets, and advancements in 3D packaging and system architectures - the importance of a robust and expanding x86 ecosystem is more crucial than ever.

HPE Announces Industry's First 100% Fanless Direct Liquid Cooling Systems Architecture

Hewlett Packard Enterprise announced the industry's first 100% fanless direct liquid cooling systems architecture to enhance the energy and cost efficiency of large-scale AI deployments. The company introduced the innovation at its AI Day, held for members of the financial community at one of its state-of-the-art AI systems manufacturing facilities. During the event, the company showcased its expertise and leadership in AI across enterprises, sovereign governments, service providers and model builders.

Industry's first 100% fanless direct liquid cooling system
While efficiency has improved in next-generation accelerators, power consumption is continuing to intensify with AI adoption, outstripping traditional cooling techniques.

HPE Launches HPE ProLiant Compute XD685 Servers Powered by 5th Gen AMD EPYC Processors and AMD Instinct MI325X Accelerators

Hewlett Packard Enterprise today announced the HPE ProLiant Compute XD685 for complex AI model training tasks, powered by 5th Gen AMD EPYC processors and AMD Instinct MI325X accelerators. The new HPE system is optimized to quickly deploy high-performing, secure and energy-efficient AI clusters for use in large language model training, natural language processing and multi-modal training.

The race is on to unlock the promise of AI and its potential to dramatically advance outcomes in workforce productivity, healthcare, climate sciences and much more. To capture this potential, AI service providers, governments and large model builders require flexible, high-performance solutions that can be brought to market quickly.

NVIDIA Blackwell Sets New Standard for Generative AI in MLPerf Inference Benchmark

As enterprises race to adopt generative AI and bring new services to market, the demands on data center infrastructure have never been greater. Training large language models is one challenge, but delivering LLM-powered real-time services is another. In the latest round of MLPerf industry benchmarks, Inference v4.1, NVIDIA platforms delivered leading performance across all data center tests. The first-ever submission of the upcoming NVIDIA Blackwell platform revealed up to 4x more performance than the NVIDIA H100 Tensor Core GPU on MLPerf's biggest LLM workload, Llama 2 70B, thanks to its use of a second-generation Transformer Engine and FP4 Tensor Cores.

The NVIDIA H200 Tensor Core GPU delivered outstanding results on every benchmark in the data center category - including the latest addition to the benchmark, the Mixtral 8x7B mixture of experts (MoE) LLM, which features a total of 46.7 billion parameters, with 12.9 billion parameters active per token. MoE models have gained popularity as a way to bring more versatility to LLM deployments, as they're capable of answering a wide variety of questions and performing more diverse tasks in a single deployment. They're also more efficient since they only activate a few experts per inference - meaning they deliver results much faster than dense models of a similar size.
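The efficiency claim above rests on how MoE routing works: a learned gate scores every expert per token but only the top-scoring few actually run. A minimal sketch of this top-k routing, with small toy matrices standing in for Mixtral's full expert FFNs (all names and sizes here are illustrative, not the model's actual architecture):

```python
import math
import random

def moe_layer(x, experts, gate_w, top_k=2):
    """Sparse mixture-of-experts: route one token to its top-k experts only."""
    # Router: one score per expert (dot product of router row with token x).
    logits = [sum(w * xi for w, xi in zip(row, x)) for row in gate_w]
    # Keep the top_k scoring experts; the rest stay idle for this token.
    top = sorted(range(len(logits)), key=lambda i: logits[i])[-top_k:]
    exps = [math.exp(logits[i]) for i in top]
    weights = [e / sum(exps) for e in exps]   # softmax over selected experts
    d = len(x)
    out = [0.0] * d
    for w, i in zip(weights, top):
        # experts[i] is a d x d matrix standing in for a full expert FFN.
        for r in range(d):
            out[r] += w * sum(experts[i][r][c] * x[c] for c in range(d))
    return out

random.seed(0)
d, n_experts = 8, 4
x = [random.gauss(0, 1) for _ in range(d)]
experts = [[[random.gauss(0, 1) for _ in range(d)] for _ in range(d)]
           for _ in range(n_experts)]
gate_w = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n_experts)]
y = moe_layer(x, experts, gate_w)
```

With `top_k=2` of 4 experts, only half the expert weights participate per token, which is the same mechanism that lets Mixtral activate 12.9 billion of its 46.7 billion parameters per token.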

SK hynix Showcases Its New AI Memory Solutions at HPE Discover 2024

SK hynix has returned to Las Vegas to showcase its leading AI memory solutions at HPE Discover 2024, Hewlett Packard Enterprise's (HPE) annual technology conference. Held from June 17-20, HPE Discover 2024 features a packed schedule with more than 150 live demonstrations, as well as technical sessions, exhibitions, and more. This year, attendees can also benefit from three new curated programs on edge computing and networking, hybrid cloud technology, and AI. Under the slogan "Memory, The Power of AI," SK hynix is displaying its latest memory solutions at the event including those supplied to HPE. The company is also taking advantage of the numerous networking opportunities to strengthen its relationship with the host company and its other partners.

The World's Leading Memory Solutions Driving AI
SK hynix's booth at HPE Discover 2024 consists of three product sections and a demonstration zone which showcase the unprecedented capabilities of its AI memory solutions. The first section features the company's groundbreaking memory solutions for AI, including HBM solutions. In particular, the industry-leading HBM3E has emerged as a core product to meet the growing demands of AI systems due to its exceptional processing speed, capacity, and heat dissipation. A key solution from the company's CXL lineup, CXL Memory Module-DDR5 (CMM-DDR5), is also on display in this section. In the AI era where high performance and capacity are vital, CMM-DDR5 has gained attention for its ability to expand system bandwidth by up to 50% and capacity by up to 100% compared to systems only equipped with DDR5 DRAM.

AMD, Broadcom, Cisco, Google, HPE, Intel, Meta and Microsoft Form Ultra Accelerator Link (UALink) Promoter Group to Combat NVIDIA NVLink

AMD, Broadcom, Cisco, Google, Hewlett Packard Enterprise (HPE), Intel, Meta and Microsoft today announced they have aligned to develop a new industry standard dedicated to advancing high-speed, low-latency communication for scale-up AI systems in data centers.

Called the Ultra Accelerator Link (UALink), this initial group will define and establish an open industry standard that will enable AI accelerators to communicate more effectively. By creating an interconnect based upon open standards, UALink will enable system OEMs, IT professionals and system integrators to create a pathway for easier integration, greater flexibility and scalability of their AI-connected data centers.

Blackwell Shipments Imminent, Total CoWoS Capacity Expected to Surge by Over 70% in 2025

TrendForce reports that NVIDIA's Hopper H100 began to see a reduction in shortages in 1Q24. The new H200 from the same platform is expected to gradually ramp in Q2, with the Blackwell platform entering the market in Q3 and expanding to data center customers in Q4. However, this year will still primarily focus on the Hopper platform, which includes the H100 and H200 product lines. The Blackwell platform—based on how far supply chain integration has progressed—is expected to start ramping up in Q4, accounting for less than 10% of the total high-end GPU market.

The die size of Blackwell platform chips like the B100 is twice that of the H100. As Blackwell becomes mainstream in 2025, the total capacity of TSMC's CoWoS is projected to grow by 150% in 2024 and by over 70% in 2025, with NVIDIA's demand occupying nearly half of this capacity. For HBM, the NVIDIA GPU platform's evolution sees the H100 primarily using 80 GB of HBM3, while the 2025 B200 will feature 288 GB of HBM3e—a 3-4 fold increase in capacity per chip. The three major manufacturers' expansion plans indicate that HBM production volume will likely double by 2025.
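The quoted growth figures compound year over year, and the per-chip HBM jump can be checked directly from the capacities given above:

```python
# HBM capacity per GPU: the H100 carries 80 GB of HBM3, the 2025 B200 288 GB of HBM3e.
hbm_ratio = 288 / 80                       # 3.6x, consistent with the stated 3-4 fold increase

# CoWoS capacity: +150% in 2024, then +70% on top of that in 2025, versus the 2023 base.
cowos_vs_2023 = (1 + 1.50) * (1 + 0.70)    # 4.25x the 2023 baseline by end of 2025
print(hbm_ratio, cowos_vs_2023)
```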

TOP500: Frontier Keeps Top Spot, Aurora Officially Becomes the Second Exascale Machine

The 63rd edition of the TOP500 reveals that Frontier has once again claimed the top spot, despite no longer being the only exascale machine on the list. Additionally, a new system has found its way into the Top 10.

The Frontier system at Oak Ridge National Laboratory in Tennessee, USA remains the most powerful system on the list with an HPL score of 1.206 EFlop/s. The system has a total of 8,699,904 combined CPU and GPU cores, an HPE Cray EX architecture that combines 3rd Gen AMD EPYC CPUs optimized for HPC and AI with AMD Instinct MI250X accelerators, and it relies on Cray's Slingshot 11 network for data transfer. On top of that, this machine has an impressive power efficiency rating of 52.93 GFlops/Watt - putting Frontier at the No. 13 spot on the GREEN500.
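The HPL score and the efficiency rating listed above jointly determine the power Frontier draws during the benchmark run, a useful cross-check of the GREEN500 figure:

```python
hpl_flops = 1.206e18      # Frontier HPL score: 1.206 EFlop/s
gflops_per_watt = 52.93   # GREEN500 efficiency rating

# Power = sustained flops divided by flops-per-watt.
power_watts = hpl_flops / (gflops_per_watt * 1e9)
power_mw = power_watts / 1e6   # roughly 22.8 MW sustained during the HPL run
```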

Intel Previews Xeon "Sierra Forest" 288 E-core Processor at MWC 2024 for Telecom Applications

To answer network operators' demands for energy-efficient scaling, Intel Corporation disclosed two major updates to drive footprint, power and total cost of ownership (TCO) savings across 5G core networks: the preview of its Intel Xeon next-gen processors, code-named Sierra Forest, with up to 288 Efficient-cores (E-cores), and the commercial availability of the Intel Infrastructure Power Manager (IPM) software for 5G core.

"Communication service providers require greater infrastructure efficiency as 5G core networks continue to build out. With the majority of 5G core networks deployed on Intel Xeon processors today, Intel is uniquely positioned to address these efficiency challenges. By introducing new Efficient-cores to our roadmap and with the commercial adoption of our Intel Infrastructure Power Manager software, service providers can slash TCO while achieving unmatched performance and power savings across their networks," said Alex Quach, Intel vice president and general manager of Wireline and Core Network Division. Energy consumption and reduction of the infrastructure footprint remain top challenges that network operators face in building out their wireless 5G core network.

Kioxia Joins HPE Servers on Space Launch Destined for the International Space Station

Today, KIOXIA SSDs took flight with the launch of the NG-20 mission rocket, which is delivering an updated HPE Spaceborne Computer-2, based on HPE EdgeLine and ProLiant servers from Hewlett Packard Enterprise (HPE), to the International Space Station (ISS). KIOXIA SSDs provide robust flash storage in HPE Spaceborne Computer-2 to conduct scientific experiments aboard the space station.

HPE Spaceborne Computer-2, based on commercial off-the-shelf technology, provides edge computing and AI capabilities on board the research outpost as part of a greater mission to significantly advance computing power in space and reduce dependency on communications as space exploration continues to expand. Designed to perform various high-performance computing (HPC) workloads in space, including real-time image processing, deep learning, and scientific simulations, HPE Spaceborne Computer-2 can be used to compute a number of experiment types including healthcare, natural disaster recovery, 3D printing, 5G, AI, and more.

Intel Appoints Justin Hotard to Lead Data Center and AI Group

Intel Corporation today announced the appointment of Justin Hotard as executive vice president and general manager of its Data Center and AI Group (DCAI), effective Feb. 1. He joins Intel with more than 20 years of experience driving transformation and growth in computing and data center businesses, and is a leader in delivering scalable AI systems for the enterprise.

Hotard will become a member of Intel's executive leadership team and report directly to CEO Pat Gelsinger. He will be responsible for Intel's suite of data center products spanning enterprise and cloud, including its Intel Xeon processor family, graphics processing units (GPUs) and accelerators. He will also play an integral role in driving the company's mission to bring AI everywhere.

Intel's New 5th Gen "Emerald Rapids" Xeon Processors are Built with AI Acceleration in Every Core

Today at the "AI Everywhere" event, Intel launched its 5th Gen Intel Xeon processors (code-named Emerald Rapids) that deliver increased performance per watt and lower total cost of ownership (TCO) across critical workloads for artificial intelligence, high performance computing (HPC), networking, storage, database and security. This launch marks the second Xeon family upgrade in less than a year, offering customers more compute and faster memory at the same power envelope as the previous generation. The processors are software- and platform-compatible with 4th Gen Intel Xeon processors, allowing customers to upgrade and maximize the longevity of infrastructure investments while reducing costs and carbon emissions.

"Designed for AI, our 5th Gen Intel Xeon processors provide greater performance to customers deploying AI capabilities across cloud, network and edge use cases. As a result of our long-standing work with customers, partners and the developer ecosystem, we're launching 5th Gen Intel Xeon on a proven foundation that will enable rapid adoption and scale at lower TCO." -Sandra Rivera, Intel executive vice president and general manager of Data Center and AI Group.

New Kioxia RM7 Series Value SAS SSDs Debut on HPE Servers

Kioxia Corporation, a world leader in memory solutions, today announced that its lineup of KIOXIA RM7 Series Value SAS SSDs is now available in HPE ProLiant Gen11 servers from Hewlett Packard Enterprise (HPE). KIOXIA RM7 Series SSDs are the latest generation of the company's 12 Gb/s Value SAS SSDs, which provide server applications with higher performance, reliability and lower latency than SATA SSDs, delivering higher IOPS/W and IOPS/$.

In addition to being available in ProLiant servers, KIOXIA RM Series Value SAS SSDs are being used in the HPE Spaceborne Computer-2 (SBC-2). As part of the program, KIOXIA SSDs provide robust flash storage in HPE Edgeline and HPE ProLiant servers in a test environment to conduct scientific experiments aboard the International Space Station (ISS).

AMD Showcases Growing Momentum for AMD Powered AI Solutions from the Data Center to PCs

Today at the "Advancing AI" event, AMD was joined by industry leaders including Microsoft, Meta, Oracle, Dell Technologies, HPE, Lenovo, Supermicro, Arista, Broadcom and Cisco to showcase how these companies are working with AMD to deliver advanced AI solutions spanning from cloud to enterprise and PCs. AMD launched multiple new products at the event, including the AMD Instinct MI300 Series data center AI accelerators, ROCm 6 open software stack with significant optimizations and new features supporting Large Language Models (LLMs) and Ryzen 8040 Series processors with Ryzen AI.

"AI is the future of computing and AMD is uniquely positioned to power the end-to-end infrastructure that will define this AI era, from massive cloud installations to enterprise clusters and AI-enabled intelligent embedded devices and PCs," said AMD Chair and CEO Dr. Lisa Su. "We are seeing very strong demand for our new Instinct MI300 GPUs, which are the highest-performance accelerators in the world for generative AI. We are also building significant momentum for our data center AI solutions with the largest cloud companies, the industry's top server providers, and the most innovative AI startups ꟷ who we are working closely with to rapidly bring Instinct MI300 solutions to market that will dramatically accelerate the pace of innovation across the entire AI ecosystem."

NVIDIA Grace Hopper Superchip Powers 40+ AI Supercomputers

Dozens of new supercomputers for scientific computing will soon hop online, powered by NVIDIA's breakthrough GH200 Grace Hopper Superchip for giant-scale AI and high performance computing. The NVIDIA GH200 enables scientists and researchers to tackle the world's most challenging problems by accelerating complex AI and HPC applications running terabytes of data.

At the SC23 supercomputing show, NVIDIA today announced that the superchip is coming to more systems worldwide, including from Dell Technologies, Eviden, Hewlett Packard Enterprise (HPE), Lenovo, QCT and Supermicro. Bringing together the Arm-based NVIDIA Grace CPU and Hopper GPU architectures using NVIDIA NVLink-C2C interconnect technology, GH200 also serves as the engine behind scientific supercomputing centers across the globe. Combined, these GH200-powered centers represent some 200 exaflops of AI performance to drive scientific innovation.