News Posts matching #HPE


Hewlett Packard Enterprise Brings HPE Cray EX and HPE Cray XD Supercomputers to Enterprise Customers

Hewlett Packard Enterprise (NYSE: HPE) today announced it is making supercomputing accessible for more enterprises to harness insights, solve problems and innovate faster by delivering its world-leading, energy-efficient supercomputers in a smaller form factor and at a lower price point.

The expanded portfolio includes new HPE Cray EX and HPE Cray XD supercomputers, which are based on HPE's exascale innovation that delivers end-to-end, purpose-built technologies in compute, accelerated compute, interconnect, storage, software, and flexible power and cooling options. The supercomputers provide significant performance and AI-at-scale capabilities to tackle demanding, data-intensive workloads, speed up AI and machine learning initiatives, and accelerate innovation to deliver products and services to market faster.

AMD-Powered Frontier Supercomputer Faces Difficulties, Can't Operate a Day without Issues

When AMD announced that the company would deliver the world's fastest supercomputer, Frontier, it also took on the massive task of providing a machine capable of one ExaFLOP of sustained computing performance. While the system is finally up and running, making a machine of that size run properly is challenging. In the world of High-Performance Computing, getting the hardware is only a portion of running an HPC center. In an interview with InsideHPC, Justin Whitt, program director for the Oak Ridge Leadership Computing Facility (OLCF), provided insight into what it is like to run the world's fastest supercomputer and the kinds of issues it is facing.

The Frontier system is powered by AMD EPYC 7A53 "Trento" 64-core 2.0 GHz CPUs and Instinct MI250X GPUs. Interconnecting everything is the HPE (Cray) Slingshot 64-port switch, which is responsible for sending data in and out of the compute blades. The interview points out a rather interesting finding: it is precisely the AMD Instinct MI250X GPUs and the Slingshot interconnect that are causing hardware trouble for Frontier. "It's mostly issues of scale coupled with the breadth of applications, so the issues we're encountering mostly relate to running very, very large jobs using the entire system … and getting all the hardware to work in concert to do that," says Justin Whitt. In addition to the limits of scale, "The issues span lots of different categories, the GPUs are just one. A lot of challenges are focused around those, but that's not the majority of the challenges that we're seeing," he said. "It's a pretty good spread among common culprits of parts failures that have been a big part of it. I don't think that at this point that we have a lot of concern over the AMD products. We're dealing with a lot of the early-life kind of things we've seen with other machines that we've deployed, so it's nothing too out of the ordinary."

Arm Announces Next-Generation Neoverse Cores for High Performance Computing

The demand for data is insatiable, from 5G to the cloud to smart cities. As a society we want more autonomy, information to fuel our decisions and habits, and connection - to people, stories, and experiences.

To address these demands, the cloud infrastructure of tomorrow will need to handle the coming data explosion and the effective processing of ever more complex workloads … all while increasing power efficiency and minimizing carbon footprint. It's why the industry is increasingly looking to the performance, power efficiency, specialized processing and workload acceleration enabled by Arm Neoverse to redefine and transform the world's computing infrastructure.

AMD Pensando Distributed Services Card to Support VMware vSphere 8

AMD announced that the AMD Pensando Distributed Services Card, powered by the industry's most advanced data processing unit (DPU), will be one of the first DPU solutions to support VMware vSphere 8, available from leading server vendors including Dell Technologies, HPE and Lenovo.

As data center applications grow in scale and sophistication, the resulting workloads increase the demand on infrastructure services as well as crucial CPU resources. VMware vSphere 8 aims to reimagine IT infrastructure as a composable architecture with a goal of offloading infrastructure workloads such as networking, storage, and security from the CPU by leveraging the new vSphere Distributed Services Engine, freeing up valuable CPU cycles to be used for business functions and revenue generating applications.

HPE Announces Next-Generation ProLiant RL300 Gen11 Server with Ampere Altra 128-Core Arm Processor

Hewlett Packard Enterprise (NYSE: HPE) today announced that it is the first major server provider to deliver a new line of cloud-native compute solutions using processors from Ampere. The new HPE solutions provide service providers and enterprises embracing cloud-native development with an agile, extensible, and trusted compute foundation to drive innovation.

Available in Q3 2022, the new HPE ProLiant RL300 Gen11 server is the first in a series of HPE ProLiant RL Gen11 servers that deliver next-generation compute performance with higher power efficiency using Ampere Altra and Ampere Altra Max cloud-native processors.

Iceotope collaborates with Intel and HPE to accelerate sustainability and cut power for Edge and Data Center compute requirements by up to 30 Percent

Iceotope, the global leader in Precision Immersion Cooling, has announced that its chassis-level cooling system is being demonstrated in the Intel Booth at HPE Discover 2022, the prestigious "Edge-to-cloud Conference". Ku:l Data Center is the product of a close collaboration between Iceotope, Intel and HPE and promises a faster path to net zero operations by reducing edge and data center energy use by nearly a third. Once the sole preserve of arcane, high performance computing applications, liquid cooling is increasingly seen as essential technology for reliable and efficient operations of any IT load in any location. There is a pressing concern about sustainability impacts as distributed edge computing environments proliferate to meet the demand for data processing nearer the point of use, as well as growing facility power and cooling consumption driven by AI augmentation and hotter chips.

Working together with Intel and HPE, Iceotope benchmarked the power consumption of a sample IT installation being cooled respectively using air and precision immersion liquid cooling. The results show a substantial advantage in favour of liquid cooling, reducing overall power use across IT and cooling infrastructure.

Australia Installs First Room-Temperature Diamond Quantum Computer

Quantum computing is an emerging form of acceleration that aids classical computational methods, promising monumental speed-ups on a few select problems. Unlike classical computers, quantum systems usually require sub-ambient cooling to work. At Quantum Brilliance, an Australian-German startup, researchers have been developing quantum accelerators based on diamonds. Today, we get the world's first installation of a room-temperature, on-premises quantum computer at Australia's Pawsey Supercomputing Centre. While we don't have much information about the computational capability of the system, we know that it is paired with Setonix, Pawsey's HPE Cray EX supercomputer.

In a brief YouTube video shared by Pawsey, it is highlighted that the benefits of using quantum accelerators are real, and that the center is figuring out ways to integrate them with its hardware and software stack for better usage. Meanwhile, Quantum Brilliance's diamond accelerators remain something of a black box, as the technology is known only to the startup and its collaborating Australian universities. All we know is that the company is harnessing nitrogen-vacancy (NV) centers in diamonds, which supposedly have the longest coherence time of any room-temperature quantum state. This translates to a qubit that can operate anywhere a classical computer can.

ORNL Frontier Supercomputer Officially Becomes the First Exascale Machine

The supercomputing world has been chasing various barriers over the years, including MegaFLOP, GigaFLOP, TeraFLOP, PetaFLOP, and now ExaFLOP computing. Today, we are witnessing the first introduction of an exascale-level machine, housed at Oak Ridge National Laboratory. Called Frontier, this system is not really new; we have known about its upcoming features for months now. What is new is the fact that it has been completed and is successfully running at ORNL's facilities. Based on the HPE Cray EX235a architecture, the system uses 3rd Gen AMD EPYC 64-core processors running at 2 GHz. In total, the system has 8,730,112 cores that work in conjunction with AMD Instinct MI250X GPUs.

As of today's TOP500 supercomputer list, the system overtakes Fugaku to become the fastest supercomputer on the planet. Delivering a sustained HPL (High-Performance Linpack) score of 1.102 ExaFlop/s, it features a power efficiency rating of 52.23 GigaFLOPs/watt. In the HPL-AI metric, dedicated to measuring a system's AI capabilities, the Frontier machine can output 6.86 ExaFLOPs at reduced precision. That alone, of course, does not qualify a machine as exascale, since HPL-AI works with INT8/FP16/FP32 formats, while the official results are measured in FP64 double precision. Fugaku, the previous number one, scores about 2 ExaFLOPs in HPL-AI while delivering "only" 442 PetaFlop/s in the HPL FP64 benchmark.
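For a sense of scale, the HPL score and the efficiency rating together imply the machine's power draw during the benchmark. A rough back-of-the-envelope sketch, assuming both figures refer to the same run:

```python
# Rough check: implied power draw of Frontier during its HPL run.
# Assumes the 1.102 EFlop/s score and the 52.23 GFlops/watt efficiency
# rating were measured on the same run (a simplification).
hpl_performance = 1.102e18      # sustained FP64 performance in FLOP/s
efficiency = 52.23e9            # FLOP/s delivered per watt

power_watts = hpl_performance / efficiency
print(f"Implied power draw: {power_watts / 1e6:.1f} MW")   # roughly 21 MW
```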

HPE to Build Supercomputer Factory in Czech Republic

Hewlett Packard Enterprise (NYSE: HPE) today announced its ongoing commitment to Europe by building its first factory in the region for next-generation high performance computing (HPC) and artificial intelligence (AI) systems to accelerate delivery to customers and strengthen the region's supplier ecosystem. The new site will manufacture HPE's industry-leading systems as custom-designed solutions to advance scientific research, mature AI/ML initiatives, and bolster innovation.

The dedicated HPC factory, which will become the fourth of HPE's global HPC sites, will be located in Kutná Hora, Czech Republic, next to HPE's existing European site for manufacturing its industry-standard servers and storage solutions. Operations will begin in summer 2022.

Ayar Labs Raises $130 Million for Light-based Chip-to-Chip Communication

Ayar Labs, the leader in chip-to-chip optical connectivity, today announced that the company has secured $130 million in additional financing led by Boardman Bay Capital Management to drive the commercialization of its breakthrough optical I/O solution. Hewlett Packard Enterprise (HPE) and NVIDIA entered this investment round, joining existing strategic investors Applied Ventures LLC, GlobalFoundries, Intel Capital, and Lockheed Martin Ventures. Other new strategic and financial investors participating in the round include Agave SPV, Atreides Capital, Berkeley Frontier Fund, IAG Capital Partners, Infinitum Capital, Nautilus Venture Partners, and Tyche Partners. They join existing investors such as BlueSky Capital, Founders Fund, Playground Global, and TechU Venture Partners.

"As a successful technology-focused crossover fund operating for over a decade, Ayar Labs represents our largest private investment to date," said Will Graves, Chief Investment Officer at Boardman Bay Capital Management. "We believe that silicon photonics-based optical interconnects in the data center and telecommunications markets represent a massive new opportunity and that Ayar Labs is the leader in this emerging space with proven technology, a fantastic team, and the right ecosystem partners and strategy."

NREL Acquires Next-Generation High Performance Computing System Based on NVIDIA Next-Generation GPU

The National Renewable Energy Laboratory (NREL) has selected Hewlett Packard Enterprise (HPE) to build its third-generation, high performance computing (HPC) system, called Kestrel. Named for a falcon with keen eyesight and intelligence, Kestrel's moniker is apropos for its mission—to rapidly advance the U.S. Department of Energy's (DOE's) energy research and development (R&D) efforts to deliver transformative energy solutions to the entire United States.

Installation of the new system will begin in the fall of 2022 in NREL's Energy Systems Integration Facility (ESIF) data center. Kestrel will complement the laboratory's current supercomputer, Eagle, during the transition. When completed—in early 2023—Kestrel will accelerate energy efficiency and renewable energy research at a pace and scale more than five times greater than Eagle, with approximately 44 petaflops of computing power.

TOP500 Update Shows No Exascale Yet, Japanese Fugaku Supercomputer Still at the Top

The 58th edition of the TOP500 saw little change in the Top10. The Microsoft Azure system called Voyager-EUS2 was the only machine to shake up the top spots, claiming No. 10. Based on 48-core AMD EPYC processors running at 2.45 GHz, working together with NVIDIA A100 80 GB GPUs, Voyager-EUS2 also utilizes Mellanox HDR InfiniBand for data transfer.

While there were no other changes to the positions of the systems in the Top10, Perlmutter at NERSC improved its performance to 70.9 Pflop/s. Housed at the Lawrence Berkeley National Laboratory, Perlmutter's increased performance couldn't move it from its previously held No. 5 spot.

NVIDIA Quantum-2 Takes Supercomputing to New Heights, Into the Cloud

NVIDIA today announced NVIDIA Quantum-2, the next generation of its InfiniBand networking platform, which offers the extreme performance, broad accessibility and strong security needed by cloud computing providers and supercomputing centers.

The most advanced end-to-end networking platform ever built, NVIDIA Quantum-2 is a 400 Gbps InfiniBand networking platform that consists of the NVIDIA Quantum-2 switch, the ConnectX-7 network adapter, the BlueField-3 data processing unit (DPU) and all the software that supports the new architecture.

Worldwide Enterprise WLAN Market Continued Strong Growth in Second Quarter 2021, According to IDC

Growth rates remained strong in the enterprise segment of the wireless local area networking (WLAN) market in the second quarter of 2021 (2Q21) as the market increased 22.4% on a year-over-year basis to $1.7 billion, according to the International Data Corporation (IDC) Worldwide Quarterly Wireless LAN Tracker. In the consumer segment of the WLAN market, revenues declined 5.7% in the quarter to $2.3 billion, giving the combined enterprise and consumer WLAN markets year-over-year growth of 4.6% in 2Q21.

The growth in the enterprise-class segment of the market builds on a strong first quarter of 2021, when revenues increased 24.6% year over year. For the first half of 2021, the market increased 23.5% compared to the first two quarters of 2020. Compared to the second quarter of 2019, 2Q21 revenues increased 10.8%, indicating that demand in the enterprise WLAN market is strong.

AMD EPYC Processors Picked by Argonne National Laboratory to Prepare for Exascale Future

AMD announced that the U.S. Department of Energy's (DOE) Argonne National Laboratory (Argonne) has chosen AMD EPYC processors to power a new supercomputer, called Polaris, which will prepare researchers for the forthcoming exascale supercomputer at Argonne called Aurora. Polaris, built by Hewlett Packard Enterprise (HPE), will initially use 2nd Gen EPYC processors and later upgrade to 3rd Gen AMD EPYC processors, and will allow scientists and developers to test and optimize software codes and applications to tackle a range of AI, engineering, and scientific projects.

"AMD EPYC server processors continue to be the leading choice for modern HPC research, delivering the performance and capabilities needed to help solve the complex problems that pre-exascale and exascale computing will address," said Forrest Norrod, senior vice president and general manager, Datacenter and Embedded Solutions Business Group, AMD. "We are extremely proud to support Argonne National Laboratory and their critical research into areas including low carbon technologies, medical research, astronomy, solar power and more as we draw closer to the exascale era."

IDC Forecasts Companies to Spend Almost $342 Billion on AI Solutions in 2021

Worldwide revenues for the artificial intelligence (AI) market, including software, hardware, and services, are estimated to grow 15.2% year over year in 2021 to $341.8 billion, according to the latest release of the International Data Corporation (IDC) Worldwide Semiannual Artificial Intelligence Tracker. The market is forecast to accelerate further in 2022 with 18.8% growth and remain on track to break the $500 billion mark by 2024. Among the three technology categories, AI Software occupied 88% of the overall AI market. However, in terms of growth, AI Hardware is estimated to grow the fastest over the next several years. From 2023 onwards, AI Services is forecast to become the fastest growing category.

Within the AI Software category, AI Applications has the lion's share at nearly 50% of revenues. In terms of growth, AI Platforms is the strongest with a five-year compound annual growth rate (CAGR) of 33.2%. The slowest will be AI System Infrastructure Software with a five-year CAGR of 14.4% while accounting for roughly 35% of all AI Software revenues. Within the AI Applications market, AI ERM is expected to grow slightly stronger than AI CRM over the next five years. Meanwhile, AI Lifecycle Software is forecast to grow the fastest among the markets within AI Platforms.
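For readers unfamiliar with the metric, a compound annual growth rate (CAGR) simply applies the same growth factor every year. A minimal sketch of how the cited 33.2% five-year CAGR for AI Platforms compounds, using a hypothetical starting revenue of $100 billion:

```python
# Illustration of compound annual growth (CAGR); the starting revenue is hypothetical.
start_revenue_bn = 100.0    # hypothetical starting revenue, in billions of dollars
cagr = 0.332                # 33.2% compound annual growth rate (AI Platforms, per IDC)
years = 5

end_revenue_bn = start_revenue_bn * (1 + cagr) ** years
print(f"Revenue after {years} years: ${end_revenue_bn:.0f}B")   # about $419B, i.e. ~4.2x
```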

AMD MI200 "Aldebaran" Memory Size of 128GB Per Package Confirmed

The 128 GB per-package memory size of AMD's upcoming Instinct MI200 HPC accelerator has been confirmed in a document released by the Pawsey Supercomputing Centre, a Perth, Australia-based supercomputing facility that's popular with mineral prospecting companies located there. The centre is currently working on Setonix, a 50-petaFLOP supercomputer being put together by Hewlett Packard Enterprise, which combines over 750 next-generation "Aldebaran" GPUs (referenced only as "AMD MI-Next GPUs") and over 200,000 AMD EPYC "Milan" processor cores (the actual processor package count would be lower, and depends on the various core configs the builder is using).

The Pawsey document mentions 128 GB as the per-GPU memory, which corresponds with the rumored per-package memory of "Aldebaran." Recently imagined by Locuza_, an enthusiast who specializes in annotating logic silicon dies, "Aldebaran" is a multi-chip module of two logic dies and eight HBM2E stacks. Each of the two logic dies, or chiplets, has 8,192 CDNA2 stream processors, adding up to 16,384 on the package; and each die is wired to four HBM2E stacks over a 4096-bit memory bus. These are 128 Gbit (16 GB) stacks, so there is 64 GB of memory per logic die, and 128 GB on the package.
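The memory figures are easy to sanity-check from the stack capacity and counts given above; a quick sketch of the arithmetic:

```python
# Sanity-check of the "Aldebaran" memory figures cited in the Pawsey document.
dies_per_package = 2          # two logic dies (chiplets) per MI200 package
stacks_per_die = 4            # four HBM2E stacks wired to each logic die
stack_capacity_gbit = 128     # 128 Gbit per HBM2E stack

stack_capacity_gb = stack_capacity_gbit / 8                   # 16 GB per stack
memory_per_die_gb = stacks_per_die * stack_capacity_gb        # 64 GB per logic die
memory_per_package_gb = memory_per_die_gb * dies_per_package  # 128 GB per package
print(memory_per_die_gb, memory_per_package_gb)               # 64.0 128.0
```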

Qualcomm Introduces New 5G Distributed Unit Accelerator Card to Drive Global 5G Virtualized RAN Growth

Qualcomm Technologies, Inc. today announced the expansion of its 5G RAN Platforms portfolio with the addition of the Qualcomm 5G DU X100 Accelerator Card. The Qualcomm 5G DU X100 is designed to give operators and infrastructure vendors the ability to readily reap the benefits of high-performance, low-latency, and power-efficient 5G, while accelerating the cellular ecosystem's transition towards virtualized radio access networks.

The Qualcomm 5G DU X100 is a PCIe inline accelerator card with concurrent Sub-6 GHz and mmWave baseband support which is designed to simplify 5G deployments by offering a turnkey solution for ease of deployment with O-RAN fronthaul and 5G NR layer 1 High (L1 High) processing. The PCIe card is designed to seamlessly plug into standard Commercial-Off-The-Shelf (COTS) servers to offload CPUs from latency-sensitive and compute-intensive 5G baseband functions such as demodulation, beamforming, channel coding, and Massive MIMO computation needed for high-capacity deployments. For use in public or private networks, this accelerator card aims to give carriers the ability to increase overall network capacity and fully realize the transformative potential of 5G.

AMD Leads High Performance Computing Towards Exascale and Beyond

At this year's International Supercomputing 2021 digital event, AMD (NASDAQ: AMD) is showcasing momentum for its AMD EPYC processors and AMD Instinct accelerators across the High Performance Computing (HPC) industry. The company also outlined updates to the ROCm open software platform and introduced the AMD Instinct Education and Research (AIER) initiative. The latest Top500 list showcased the continued growth of AMD EPYC processors for HPC systems. AMD EPYC processors power nearly 5x more systems compared to the June 2020 list, and more than double the number of systems compared to November 2020. In addition, AMD EPYC processors power half of the 58 new entries on the June 2021 list.

"High performance computing is critical to addressing the world's biggest and most important challenges," said Forrest Norrod, senior vice president and general manager, data center and embedded systems group, AMD. "With our AMD EPYC processor family and Instinct accelerators, AMD continues to be the partner of choice for HPC. We are committed to enabling the performance and capabilities needed to advance scientific discoveries, break the exascale barrier, and continue driving innovation."

New Intel XPU Innovations Target HPC and AI

At the 2021 International Supercomputing Conference (ISC) Intel is showcasing how the company is extending its lead in high performance computing (HPC) with a range of technology disclosures, partnerships and customer adoptions. Intel processors are the most widely deployed compute architecture in the world's supercomputers, enabling global medical discoveries and scientific breakthroughs. Intel is announcing advances in its Xeon processor for HPC and AI as well as innovations in memory, software, exascale-class storage, and networking technologies for a range of HPC use cases.

"To maximize HPC performance we must leverage all the computer resources and technology advancements available to us," said Trish Damkroger, vice president and general manager of High Performance Computing at Intel. "Intel is the driving force behind the industry's move toward exascale computing, and the advancements we're delivering with our CPUs, XPUs, oneAPI Toolkits, exascale-class DAOS storage, and high-speed networking are pushing us closer toward that realization."

NVIDIA and Global Partners Launch New HGX A100 Systems to Accelerate Industrial AI and HPC

NVIDIA today announced it is turbocharging the NVIDIA HGX AI supercomputing platform with new technologies that fuse AI with high performance computing, making supercomputing more useful to a growing number of industries.

To accelerate the new era of industrial AI and HPC, NVIDIA has added three key technologies to its HGX platform: the NVIDIA A100 80 GB PCIe GPU, NVIDIA NDR 400G InfiniBand networking, and NVIDIA Magnum IO GPUDirect Storage software. Together, they provide the extreme performance to enable industrial HPC innovation.

AMD EPYC 7003 Processors to Power Singapore's Fastest Supercomputer

AMD announced that AMD EPYC 7003 Series processors will be used to power a new supercomputer for the National Supercomputing Centre (NSCC) Singapore, the national high-performance computing (HPC) resource center dedicated to supporting science and engineering computing needs.

The system will be based on the HPE Cray EX supercomputer and will use a combination of the EPYC 7763 and EPYC 75F3 processors. The supercomputer is planned to be fully operational by 2022 and is expected to have a peak theoretical performance of 10 petaFLOPS, 8x faster than NSCC's existing pool of HPC resources. Researchers will use the system to advance scientific research across biomedicine, genomics, diseases, climate, and more.

Global Server Shipment for 2021 Projected to Grow by More than 5% YoY, Says TrendForce

Enterprise demand for cloud services has been rising steadily in the past two years owing to the rapidly changing global markets and uncertainties brought about by the COVID-19 pandemic. TrendForce's investigations find that most enterprises have been prioritizing cloud service adoption across applications ranging from AI to other emerging technologies, as cloud services have relatively flexible costs. Case in point, demand from clients in the hyperscale data center segment constituted more than 40% of total demand for servers in 4Q20, while this figure may potentially approach 45% for 2021. For 2021, TrendForce expects global server shipments to increase by more than 5% YoY and ODM Direct server shipments to increase by more than 15% YoY.

HPE Lists 40-Core Intel Ice Lake-SP Xeon Server Processor

Hewlett Packard Enterprise, the company focused on making enterprise hardware and software, has today mistakenly listed some of Intel's upcoming 3rd generation Xeon Scalable processors. Called Ice Lake-SP, the latest server processor generation is expected to launch sometime in the coming days, with a possible launch date being the March 23rd "Intel Unleashed" webcast. The next generation of processors will finally bring a wave of technologies Intel needs in the server space: support for the PCIe 4.0 protocol for higher-speed I/O and an octa-channel DDR4 memory controller for much greater bandwidth. The CPU lineup will for the first time use Intel's advanced 10 nm node called 10 nm SuperFin.

Today, in the leaked HPE listing, we get to see some of the Xeon models Intel plans to launch. Ranging from 32-core models all the way to 40-core models, all SKUs above 28 cores are supposed to use a dual-die configuration to achieve their high core counts, as the limit of a single die is 28 cores. HPE listed a few models, with the highest-end one being the Intel Xeon Platinum XCC 8380 processor. It features 40 cores with 80 threads and a running frequency of 2.3 GHz. If you are wondering about TDP, it looks like the 10 nm SuperFin process is giving good results, as the CPU is rated at only 270 Watts.

NVIDIA Unveils AI Enterprise Software Suite to Help Every Industry Unlock the Power of AI

NVIDIA today announced NVIDIA AI Enterprise, a comprehensive software suite of enterprise-grade AI tools and frameworks optimized, certified and supported by NVIDIA, exclusively with VMware vSphere 7 Update 2, separately announced today.

Through a first-of-its-kind industry collaboration to develop an AI-Ready Enterprise platform, NVIDIA teamed with VMware to virtualize AI workloads on VMware vSphere with NVIDIA AI Enterprise. The offering gives enterprises the software required to develop a broad range of AI solutions, such as advanced diagnostics in healthcare, smart factories for manufacturing, and fraud detection in financial services.