News Posts matching #HPE


Intel Previews Xeon "Sierra Forest" 288 E-core Processor at MWC 2024 for Telecom Applications

To answer network operators' demands for energy-efficient scaling, Intel Corporation disclosed two major updates to drive footprint, power and total cost of ownership (TCO) savings across 5G core networks: the preview of its Intel Xeon next-gen processors, code-named Sierra Forest, with up to 288 Efficient-cores (E-cores), and the commercial availability of the Intel Infrastructure Power Manager (IPM) software for 5G core.

"Communication service providers require greater infrastructure efficiency as 5G core networks continue to build out. With the majority of 5G core networks deployed on Intel Xeon processors today, Intel is uniquely positioned to address these efficiency challenges. By introducing new Efficient-cores to our roadmap and with the commercial adoption of our Intel Infrastructure Power Manager software, service providers can slash TCO while achieving unmatched performance and power savings across their networks," said Alex Quach, Intel vice president and general manager of Wireline and Core Network Division. Energy consumption and reduction of the infrastructure footprint remain top challenges that network operators face in building out their wireless 5G core network.

Kioxia Joins HPE Servers on Space Launch Destined for the International Space Station

Today, KIOXIA SSDs took flight with the launch of the NG-20 mission rocket, which is delivering an updated HPE Spaceborne Computer-2, based on HPE Edgeline and ProLiant servers from Hewlett Packard Enterprise (HPE), to the International Space Station (ISS). KIOXIA SSDs provide robust flash storage in HPE Spaceborne Computer-2 to conduct scientific experiments aboard the space station.

HPE Spaceborne Computer-2, based on commercial off-the-shelf technology, provides edge computing and AI capabilities on board the research outpost as part of a greater mission to significantly advance computing power in space and reduce dependency on communications as space exploration continues to expand. Designed to perform various high-performance computing (HPC) workloads in space, including real-time image processing, deep learning, and scientific simulations, HPE Spaceborne Computer-2 can be used to compute a number of experiment types including healthcare, natural disaster recovery, 3D printing, 5G, AI, and more.

Intel Appoints Justin Hotard to Lead Data Center and AI Group

Intel Corporation today announced the appointment of Justin Hotard as executive vice president and general manager of its Data Center and AI Group (DCAI), effective Feb. 1. He joins Intel with more than 20 years of experience driving transformation and growth in computing and data center businesses, and is a leader in delivering scalable AI systems for the enterprise.

Hotard will become a member of Intel's executive leadership team and report directly to CEO Pat Gelsinger. He will be responsible for Intel's suite of data center products spanning enterprise and cloud, including its Intel Xeon processor family, graphics processing units (GPUs) and accelerators. He will also play an integral role in driving the company's mission to bring AI everywhere.

Intel's New 5th Gen "Emerald Rapids" Xeon Processors are Built with AI Acceleration in Every Core

Today at the "AI Everywhere" event, Intel launched its 5th Gen Intel Xeon processors (code-named Emerald Rapids) that deliver increased performance per watt and lower total cost of ownership (TCO) across critical workloads for artificial intelligence, high performance computing (HPC), networking, storage, database and security. This launch marks the second Xeon family upgrade in less than a year, offering customers more compute and faster memory at the same power envelope as the previous generation. The processors are software- and platform-compatible with 4th Gen Intel Xeon processors, allowing customers to upgrade and maximize the longevity of infrastructure investments while reducing costs and carbon emissions.

"Designed for AI, our 5th Gen Intel Xeon processors provide greater performance to customers deploying AI capabilities across cloud, network and edge use cases. As a result of our long-standing work with customers, partners and the developer ecosystem, we're launching 5th Gen Intel Xeon on a proven foundation that will enable rapid adoption and scale at lower TCO." -Sandra Rivera, Intel executive vice president and general manager of Data Center and AI Group.

New Kioxia RM7 Series Value SAS SSDs Debut on HPE Servers

Kioxia Corporation, a world leader in memory solutions, today announced that its lineup of KIOXIA RM7 Series Value SAS SSDs is now available in HPE ProLiant Gen11 servers from Hewlett Packard Enterprise (HPE). KIOXIA RM7 Series SSDs are the latest generation of the company's 12 Gb/s Value SAS SSDs, which provide server applications with higher performance, greater reliability and lower latency than SATA SSDs, delivering higher IOPS/W and IOPS/$.

In addition to being available in ProLiant servers, KIOXIA RM Series Value SAS SSDs are being used in the HPE Spaceborne Computer-2 (SBC-2). As part of the program, KIOXIA SSDs provide robust flash storage in HPE Edgeline and HPE ProLiant servers in a test environment to conduct scientific experiments aboard the International Space Station (ISS).

AMD Showcases Growing Momentum for AMD Powered AI Solutions from the Data Center to PCs

Today at the "Advancing AI" event, AMD was joined by industry leaders including Microsoft, Meta, Oracle, Dell Technologies, HPE, Lenovo, Supermicro, Arista, Broadcom and Cisco to showcase how these companies are working with AMD to deliver advanced AI solutions spanning from cloud to enterprise and PCs. AMD launched multiple new products at the event, including the AMD Instinct MI300 Series data center AI accelerators, ROCm 6 open software stack with significant optimizations and new features supporting Large Language Models (LLMs) and Ryzen 8040 Series processors with Ryzen AI.

"AI is the future of computing and AMD is uniquely positioned to power the end-to-end infrastructure that will define this AI era, from massive cloud installations to enterprise clusters and AI-enabled intelligent embedded devices and PCs," said AMD Chair and CEO Dr. Lisa Su. "We are seeing very strong demand for our new Instinct MI300 GPUs, which are the highest-performance accelerators in the world for generative AI. We are also building significant momentum for our data center AI solutions with the largest cloud companies, the industry's top server providers, and the most innovative AI startups ꟷ who we are working closely with to rapidly bring Instinct MI300 solutions to market that will dramatically accelerate the pace of innovation across the entire AI ecosystem."

NVIDIA Grace Hopper Superchip Powers 40+ AI Supercomputers

Dozens of new supercomputers for scientific computing will soon hop online, powered by NVIDIA's breakthrough GH200 Grace Hopper Superchip for giant-scale AI and high performance computing. The NVIDIA GH200 enables scientists and researchers to tackle the world's most challenging problems by accelerating complex AI and HPC applications running terabytes of data.

At the SC23 supercomputing show, NVIDIA today announced that the superchip is coming to more systems worldwide, including from Dell Technologies, Eviden, Hewlett Packard Enterprise (HPE), Lenovo, QCT and Supermicro. Bringing together the Arm-based NVIDIA Grace CPU and Hopper GPU architectures using NVIDIA NVLink-C2C interconnect technology, GH200 also serves as the engine behind scientific supercomputing centers across the globe. Combined, these GH200-powered centers represent some 200 exaflops of AI performance to drive scientific innovation.

NVIDIA Turbocharges Generative AI Training in MLPerf Benchmarks

NVIDIA's AI platform raised the bar for AI training and high performance computing in the latest MLPerf industry benchmarks. Among many new records and milestones, one in generative AI stands out: NVIDIA Eos - an AI supercomputer powered by a whopping 10,752 NVIDIA H100 Tensor Core GPUs and NVIDIA Quantum-2 InfiniBand networking - completed a training benchmark based on a GPT-3 model with 175 billion parameters trained on one billion tokens in just 3.9 minutes. That's a nearly 3x gain from 10.9 minutes, the record NVIDIA set when the test was introduced less than six months ago.

The benchmark uses a portion of the full GPT-3 data set behind the popular ChatGPT service; by extrapolation, Eos could now train on the full data set in just eight days, 73x faster than a prior state-of-the-art system using 512 A100 GPUs. The acceleration in training time reduces costs, saves energy and speeds time-to-market. It's heavy lifting that makes large language models widely available so every business can adopt them with tools like NVIDIA NeMo, a framework for customizing LLMs. In a new generative AI test this round, 1,024 NVIDIA Hopper architecture GPUs completed a training benchmark based on the Stable Diffusion text-to-image model in 2.5 minutes, setting a high bar on this new workload. By adopting these two tests, MLPerf reinforces its leadership as the industry standard for measuring AI performance, since generative AI is the most transformative technology of our time.
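For readers who want to check the headline numbers, here is a minimal back-of-the-envelope sketch in Python. The times and GPU counts are the figures quoted above; the linear-throughput assumption behind the extrapolation helper is ours for illustration, not MLPerf's methodology.

# Back-of-the-envelope check of the MLPerf GPT-3 figures quoted above.
prev_minutes = 10.9   # record set when the benchmark was introduced
new_minutes = 3.9     # NVIDIA Eos with 10,752 H100 GPUs

speedup = prev_minutes / new_minutes
print(f"Gain over the previous record: {speedup:.2f}x")   # ~2.79x, i.e. "nearly 3x"

def extrapolate_minutes(benchmark_minutes: float, benchmark_tokens: float, full_tokens: float) -> float:
    """Linear extrapolation, illustration only: assumes the token throughput
    measured on the 1-billion-token benchmark slice holds for a full run."""
    return benchmark_minutes * (full_tokens / benchmark_tokens)

# Calling extrapolate_minutes(3.9, 1e9, full_tokens=<full data set size>) gives
# the kind of full-data-set estimate behind the "eight days" figure quoted above.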

AMD Extends 3rd Gen EPYC CPU Lineup to Deliver New Levels of Value for Mainstream Applications

Today, AMD announced the extension of its 3rd Gen AMD EPYC processor family with six new offerings providing a robust suite of data center CPUs to meet the needs of general IT and mainstream computing for businesses seeking to leverage the economics of established platforms. The complete family of 3rd Gen AMD EPYC CPUs complements the leadership performance and efficiency of the latest 4th Gen AMD EPYC processors with impressive price-performance, modern security features and energy efficiency for less technically demanding business critical workloads.

The race to deliver AI and high performance computing is creating a technology gap for IT decision-makers seeking mainstream performance. To meet the growing demand for widely deployed, cost effective and proven mainstream solutions in the mid-market and in the channel, AMD is extending the 3rd Gen EPYC CPU offering to provide excellent value, performance, energy efficiency and security features for business-critical applications. The 3rd Gen AMD EPYC CPU portfolio enables a wide array of broadly deployed enterprise server solutions, supported by trusted channel sellers and OEMs such as Cisco, Dell Technologies, Gigabyte, HPE, Lenovo and Supermicro.

NVIDIA AI-Ready Servers From World's Leading System Manufacturers to Supercharge Generative AI for Enterprises

NVIDIA today announced the world's leading system manufacturers will deliver AI-ready servers that support VMware Private AI Foundation with NVIDIA, announced separately today, to help companies customize and deploy generative AI applications using their proprietary business data. NVIDIA AI-ready servers will include NVIDIA L40S GPUs, NVIDIA BlueField-3 DPUs and NVIDIA AI Enterprise software to enable enterprises to fine-tune generative AI foundation models and deploy generative AI applications like intelligent chatbots, search and summarization tools. These servers also provide NVIDIA-accelerated infrastructure and software to power VMware Private AI Foundation with NVIDIA.

NVIDIA L40S-powered servers from leading global system manufacturers - Dell Technologies, Hewlett Packard Enterprise and Lenovo - will be available by year-end to accelerate enterprise AI. "A new computing era has begun," said Jensen Huang, founder and CEO of NVIDIA. "Companies in every industry are racing to adopt generative AI. With our ecosystem of world-leading software and system partners, we are bringing generative AI to the world's enterprises."

NVIDIA Announces NVIDIA OVX Servers Featuring New NVIDIA L40S GPU for Generative AI and Industrial Digitalization

NVIDIA today announced NVIDIA OVX servers featuring the new NVIDIA L40S GPU, a powerful, universal data center processor designed to accelerate the most compute-intensive, complex applications, including AI training and inference, 3D design and visualization, video processing and industrial digitalization with the NVIDIA Omniverse platform. The new GPU powers accelerated computing workloads for generative AI, which is transforming workflows and services across industries, including text, image and video generation, chatbots, game development, product design and healthcare.

"As generative AI transforms every industry, enterprises are increasingly seeking large-scale compute resources in the data center," said Bob Pette, vice president of professional visualization at NVIDIA. "OVX systems with NVIDIA L40S GPUs accelerate AI, graphics and video processing workloads, and meet the demanding performance requirements of an ever-increasing set of complex and diverse applications."

NVIDIA AI Workbench Speeds Adoption of Custom Generative AI

NVIDIA today announced NVIDIA AI Workbench, a unified, easy-to-use toolkit that allows developers to quickly create, test and customize pretrained generative AI models on a PC or workstation - then scale them to virtually any data center, public cloud or NVIDIA DGX Cloud. AI Workbench removes the complexity of getting started with an enterprise AI project. Accessed through a simplified interface running on a local system, it allows developers to customize models from popular repositories like Hugging Face, GitHub and NVIDIA NGC using custom data. The models can then be shared easily across multiple platforms.

"Enterprises around the world are racing to find the right infrastructure and build generative AI models and applications," said Manuvir Das, vice president of enterprise computing at NVIDIA. "NVIDIA AI Workbench provides a simplified path for cross-organizational teams to create the AI-based applications that are increasingly becoming essential in modern business."

AMD Reports Second Quarter 2023 Financial Results, Revenue Down 18% YoY

AMD today announced revenue for the second quarter of 2023 of $5.4 billion, gross margin of 46%, operating loss of $20 million, net income of $27 million and diluted earnings per share of $0.02. On a non-GAAP basis, gross margin was 50%, operating income was $1.1 billion, net income was $948 million and diluted earnings per share was $0.58.

"We delivered strong results in the second quarter as 4th Gen EPYC and Ryzen 7000 processors ramped significantly," said AMD Chair and CEO Dr. Lisa Su. "Our AI engagements increased by more than seven times in the quarter as multiple customers initiated or expanded programs supporting future deployments of Instinct accelerators at scale. We made strong progress meeting key hardware and software milestones to address the growing customer pull for our data center AI solutions and are on-track to launch and ramp production of MI300 accelerators in the fourth quarter."

Leading Cloud Service, Semiconductor, and System Providers Unite to Form Ultra Ethernet Consortium

Announced today, Ultra Ethernet Consortium (UEC) is bringing together leading companies for industry-wide cooperation to build a complete Ethernet-based communication stack architecture for high-performance networking. Artificial Intelligence (AI) and High-Performance Computing (HPC) workloads are rapidly evolving and require best-in-class functionality, performance, interoperability and total cost of ownership, without sacrificing developer and end-user friendliness. The Ultra Ethernet solution stack will capitalize on Ethernet's ubiquity and flexibility for handling a wide variety of workloads while being scalable and cost-effective.

Ultra Ethernet Consortium is founded by companies with a long-standing history and experience in high-performance solutions. Each member is contributing significantly to the broader high-performance computing ecosystem in an egalitarian manner. The founding members include AMD, Arista, Broadcom, Cisco, Eviden (an Atos Business), HPE, Intel, Meta and Microsoft, who collectively have decades of experience with networking, AI, cloud and high-performance computing deployments at scale.

Two-ExaFLOP El Capitan Supercomputer Starts Installation Process with AMD Instinct MI300A

When Lawrence Livermore National Laboratory (LLNL) announced the creation of a two-ExaFLOP supercomputer named El Capitan, we heard that AMD would power it with its Instinct MI300 accelerator. Today, LLNL published a Tweet that states, "We've begun receiving & installing components for El Capitan, @NNSANews' first #exascale #supercomputer. While we're still a ways from deploying it for national security purposes in 2024, it's exciting to see years of work becoming reality." As published images show, HPE racks filled with AMD Instinct MI300 accelerators are now arriving at LLNL's facility, and the supercomputer is expected to go operational in 2024. This could mean that the November 2023 TOP500 list update won't feature El Capitan, as bringing the system up would be very hard to achieve in the four months remaining until then.

The El Capitan supercomputer is expected to run on the AMD Instinct MI300A accelerator, which features 24 Zen 4 cores, the CDNA 3 architecture, and 128 GB of HBM3 memory. Four of these accelerators are paired together inside each HPE node, which also receives water-cooling treatment. While we don't have many further details on El Capitan's memory and storage, we know that the system will exceed two ExaFLOPS at peak and will consume close to 40 MW of power.
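As a rough, hedged sanity check of the figures quoted above, the following Python sketch uses only the loose numbers from this post (theoretical peak and total power), so the efficiency result is an approximation, not a measured HPL/Green500 value.

# Rough arithmetic on the El Capitan figures quoted above (illustration only).
peak_flops = 2e18        # "will exceed two ExaFLOPS at peak"
power_watts = 40e6       # "close to 40 MW"
hbm_per_apu_gb = 128     # HBM3 per MI300A
apus_per_node = 4        # four-accelerator configuration per HPE node

implied_efficiency = peak_flops / power_watts / 1e9   # GFLOPS per watt
print(f"Implied peak efficiency: ~{implied_efficiency:.0f} GFLOPS/W")   # ~50

print(f"Unified HBM3 per node: {hbm_per_apu_gb * apus_per_node} GB")    # 512 GB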

Intel & HPE Declare Aurora Supercomputer Blade Installation Complete

What's New: The Aurora supercomputer at Argonne National Laboratory is now fully equipped with all 10,624 compute blades, boasting 63,744 Intel Data Center GPU Max Series GPUs and 21,248 Intel Xeon CPU Max Series processors.

"Aurora is the first deployment of Intel's Max Series GPU, the biggest Xeon Max CPU-based system, and the largest GPU cluster in the world. We're proud to be part of this historic system and excited for the groundbreaking AI, science and engineering Aurora will enable." -Jeff McVeigh, Intel corporate vice president and general manager of the Super Compute Group

What Aurora Is: A collaboration of Intel, Hewlett Packard Enterprise (HPE) and the Department of Energy (DOE), the Aurora supercomputer is designed to unlock the potential of the three pillars of high performance computing (HPC): simulations, data analytics and artificial intelligence (AI) on an extremely large scale. The system incorporates more than 1,024 storage nodes (using DAOS, Intel's distributed asynchronous object storage), providing 220 petabytes (PB) of capacity at 31 TB/s of total bandwidth, and leverages the HPE Slingshot high-performance fabric. Later this year, Aurora is expected to be the world's first supercomputer to achieve a theoretical peak performance of more than 2 exaflops (an exaflop is 10^18, or a billion billion, operations per second) when it enters the TOP500 list.
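A minimal sketch of the per-device number implied by these figures, under our own simplifying assumption that the roughly 2-exaflop theoretical peak comes almost entirely from the 63,744 GPUs (the CPU contribution is ignored), so the result is only a lower bound per GPU.

# Implied per-GPU peak for Aurora, using the counts quoted above.
peak_flops = 2e18          # "more than 2 exaflops" -> treat 2 * 10**18 as a floor
gpu_count = 63_744         # Intel Data Center GPU Max Series devices

per_gpu_tflops = peak_flops / gpu_count / 1e12
print(f"Implied peak per GPU (lower bound): ~{per_gpu_tflops:.1f} TFLOP/s")   # ~31.4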

AMD EPYC Embedded Series Processors Power New HPE Alletra Storage MP Solution

AMD today announced that its AMD EPYC Embedded Series processors are powering Hewlett Packard Enterprise's new modular, multi-protocol storage solution, HPE Alletra Storage MP. AMD EPYC Embedded processors provide the performance and energy efficiency required for enterprise-class storage systems with high availability, resilience, and industry-leading connectivity and longevity.

The HPE Alletra Storage MP supports a disaggregated infrastructure with multiple storage protocols on the same hardware that can scale independently for performance and capacity. Configurable for block and file stores, HPE Alletra Storage MP gives customers the ability to deploy, manage, and orchestrate data and storage services via the HPE GreenLake edge-to-cloud platform, regardless of the workload and storage protocol. This eliminates data silos, reducing cost and complexity while improving performance.

Frontier Remains As Sole Exaflop Machine on TOP500 List

Frontier increased its HPL score from 1.02 EFlop/s in November 2022 to an impressive 1.194 EFlop/s on this list, improving after a period of stagnation between June 2022 and November 2022. Considering exascale was only a goal to aspire to just a few years ago, a roughly 17% increase here is an enormous success. Additionally, Frontier earned a score of 9.95 EFlop/s on the HPL-MxP benchmark, which measures performance for mixed-precision calculation. This is also an increase over the 7.94 EFlop/s that the system achieved on the previous list and nearly 10 times the machine's HPL score. Frontier is based on the HPE Cray EX235a architecture and utilizes AMD EPYC 64C 2 GHz processors. It has 8,699,904 cores and an incredible energy efficiency rating of 52.59 gigaflops/watt, and it relies on gigabit ethernet for data transfer.
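The percentage gain and the power draw implied by these numbers can be checked with a few lines of Python; the power estimate assumes the efficiency rating was measured on the same HPL run, which is the usual TOP500/Green500 convention but an assumption on our part.

# Quick arithmetic on the Frontier figures quoted above.
hpl_now = 1.194e18        # current HPL score, in FLOP/s
hpl_prev = 1.02e18        # November 2022 HPL score
efficiency = 52.59e9      # 52.59 gigaflops/watt, in FLOP/s per watt

print(f"HPL improvement: {(hpl_now / hpl_prev - 1) * 100:.1f}%")        # ~17.1%
print(f"Implied power during HPL: {hpl_now / efficiency / 1e6:.1f} MW") # ~22.7 MW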

Ampere Computing Unveils New AmpereOne Processor Family with 192 Custom Cores

Ampere Computing today announced a new AmpereOne Family of processors with up to 192 single-threaded Ampere cores - the highest core count in the industry. This is the first product from Ampere based on the company's new custom core, built from the ground up and leveraging the company's internal IP. CEO Renée James, who founded Ampere Computing to offer a modern alternative to the industry with processors designed specifically for both efficiency and performance in the Cloud, said there was a fundamental shift happening that required a new approach.

"Every few decades of compute there has emerged a driving application or use of performance that sets a new bar of what is required of performance," James said. "The current driving uses are AI and connected everything combined with our continued use and desire for streaming media. We cannot continue to use power as a proxy for performance in the data center. At Ampere, we design our products to maximize performance at a sustainable power, so we can continue to drive the future of the industry."

KIOXIA First to Launch Data Center NVMe E3.S SSDs on Hewlett Packard Enterprise Systems

KIOXIA America, Inc. today announced that its lineup of CD7 Series Enterprise and Datacenter Standard Form Factor (EDSFF) E3.S NVMe SSDs are the first to ship on servers and storage from Hewlett Packard Enterprise (HPE). The industry's first EDSFF drives designed with PCIe 5.0 technology, KIOXIA CD7 E3.S SSDs increase flash storage density per drive for optimized power efficiency and rack consolidation. HPE ProLiant Gen11 servers, HPE Alletra 4000 data storage servers and the HPE Synergy 480 Gen11 Compute Module support the latest PCIe 5.0 interface, which enables up to twice the performance of PCIe 4.0, and can be optionally equipped with EDSFF E3.S drive bays.

As a natural evolution of the 2.5-inch form factor, EDSFF E3.S is designed for the needs of high performance flash storage. E3.S enables denser, more efficient deployments in the same rack unit compared to 2.5-inch drives, while improving cooling and thermal characteristics and raising capacities by 1.5x to 2x.

TrendForce: YoY Growth Rate of Global Server Shipments for 2023 Has Been Lowered to 1.31%

The four major North American cloud service providers (CSPs) have cut their server procurement quantities for this year because of economic headwinds and high inflation. Server OEMs such as Dell and HPE, in turn, are observed to have scaled back the production of server motherboards at their ODM partners. Given these developments, TrendForce now projects that global server shipments will grow by just 1.31% YoY to 14.43 million units for 2023. This latest figure is a downward correction from the earlier estimation. The revisions that server OEMs have made to their shipment outlooks show that demand for end products has become much weaker than expected. They also highlight factors such as enterprise server buyers imposing stricter control over their budgets and server OEMs' inventory corrections.
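For reference, the projection implies the following 2022 baseline; this is simple arithmetic on the two figures quoted above, assuming the 1.31% growth rate applies to the 14.43 million-unit estimate exactly as stated.

# Implied 2022 shipment baseline from the TrendForce projection above.
shipments_2023 = 14.43e6   # projected 2023 units
yoy_growth = 0.0131        # 1.31% year-over-year growth

shipments_2022 = shipments_2023 / (1 + yoy_growth)
print(f"Implied 2022 shipments: ~{shipments_2022 / 1e6:.2f} million units")   # ~14.24 million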

KIOXIA and HPE Team Up to Send SSDs into Space, Bound for the International Space Station

Today, KIOXIA America, Inc. announces its proud participation in the Hewlett Packard Enterprise (HPE) Spaceborne Computer-2 (SBC-2) program. As part of the program, KIOXIA SSDs provide robust flash storage in HPE Edgeline and HPE ProLiant servers in a test environment to conduct scientific experiments aboard the International Space Station (ISS).

"By bringing KIOXIA's expertise and its SSDs, one of the industry's leading NAND flash capabilities, with HPE Spaceborne Computer-2, together we are pushing the boundaries of scientific discovery and innovation at the most extreme edge."

AMD Expected to Occupy Over 20% of Server CPU Market and Arm 8% in 2023

AMD and Arm have been gaining on Intel in the server CPU market over the past few years, and the share gains AMD made were especially large in 2022 as datacenter operators and server brands began finding solutions from the number-two maker superior to those of the long-time leader, according to Frank Kung, a DIGITIMES Research analyst focusing primarily on the server industry. Kung anticipates that AMD's share will stand well above 20% in 2023, while Arm will reach 8%.

Price is one of the three major drivers behind datacenter operators and server brands switching to AMD. Comparing server CPUs from AMD and Intel with similar core counts, clock speeds, and hardware specifications, most of the former's products carry price tags at least 30% lower than the latter's, and the difference can exceed 40%, Kung said.

ORNL's Exaflop Machine Frontier Keeps Top Spot, New Competitor Leonardo Breaks the Top10 List

The 60th edition of the TOP500 reveals that the Frontier system is still the only true exascale machine on the list.

With an HPL score of 1.102 EFlop/s, the Frontier machine at Oak Ridge National Laboratory (ORNL) did not improve upon the score it reached on the June 2022 list. That said, Frontier's near-tripling of the HPL score of the second-place winner is still a major victory for computer science. On top of that, Frontier demonstrated a score of 7.94 EFlop/s on the HPL-MxP benchmark, which measures performance for mixed-precision calculation. Frontier is based on the HPE Cray EX235a architecture and relies on AMD EPYC 64C 2 GHz processors. The system has 8,730,112 cores and a power efficiency rating of 52.23 gigaflops/watt. It also relies on gigabit ethernet for data transfer.

Hewlett Packard Enterprise Brings HPE Cray EX and HPE Cray XD Supercomputers to Enterprise Customers

Hewlett Packard Enterprise (NYSE: HPE) today announced it is making supercomputing accessible for more enterprises to harness insights, solve problems and innovate faster by delivering its world-leading, energy-efficient supercomputers in a smaller form factor and at a lower price point.

The expanded portfolio includes new HPE Cray EX and HPE Cray XD supercomputers, which are based on HPE's exascale innovation that delivers end-to-end, purpose-built technologies in compute, accelerated compute, interconnect, storage, software, and flexible power and cooling options. The supercomputers provide significant performance and AI-at-scale capabilities to tackle demanding, data-intensive workloads, speed up AI and machine learning initiatives, and accelerate innovation to deliver products and services to market faster.