News Posts matching #HPC


Intel's Ponte Vecchio HPC GPU Successor Rialto Bridge Gets the Axe

Late on Friday, a newsroom post by Intel interim GM Jeff McVeigh quietly revealed a roadmap update. Rialto Bridge, the process-improved version of Ponte Vecchio currently shipping under the Max Series GPU branding, has been pulled from the roadmap in favor of doubling down on the future design code-named Falcon Shores. Rialto Bridge was first announced last May at ISC 2022 as the direct successor to Ponte Vecchio, and was set to begin sampling later this year. In the same post Intel also cancelled Lancaster Sound, its Visual Cloud GPU meant to replace the Arctic Sound Flex series of GPUs, which are based on Xe cores similar to Arc Alchemist's. In its stead, the follow-up architecture Melville Sound will receive the focused development effort.

Falcon Shores is described as a new foundational chiplet architecture that will integrate more diverse compute tiles, creating what Intel originally dubbed the XPU. This next architectural step would combine what Intel is already doing with products such as Sapphire Rapids and Ponte Vecchio into one CPU+GPU package, and would offer even further flexibility to add other kinds of accelerators. With this roadmap update there is some uncertainty as to whether the XPU designation will survive the transition, as it is notably absent from the post. It is clear, though, that Falcon Shores will directly replace Ponte Vecchio as the next HPC GPU, with or without CPU tiles included.

Revenue from Enterprise SSDs Totaled Just US$3.79 Billion for 4Q22 Due to Slumping Demand and Widening Decline in SSD Contract Prices, Says TrendForce

Looking back at 2H22, as server OEMs slowed down the momentum of their product shipments, Chinese server buyers also held a conservative outlook on future demand and focused on inventory reduction. Thus, the flow of orders for enterprise SSDs remained sluggish. However, NAND Flash suppliers had to step up shipments of enterprise SSDs during 2H22 because the demand for storage components equipped in notebook (laptop) computers and smartphones had undergone very large downward corrections. Compared with other categories of NAND Flash products, enterprise SSDs represented the only significant source of bit consumption. Ultimately, due to the imbalance between supply and demand, the QoQ decline in prices of enterprise SSDs widened to 25% for 4Q22. This price plunge, in turn, caused the quarterly total revenue from enterprise SSDs to drop by 27.4% QoQ to around US$3.79 billion. TrendForce projects that the NAND Flash industry will again post a QoQ decline in the revenue from this product category for 1Q23.
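As a sanity check, the implied prior-quarter revenue follows directly from the two figures TrendForce reports (US$3.79 billion after a 27.4% QoQ decline); the calculation below uses only numbers from the text, and the rounding is ours.

```python
# Back out implied 3Q22 enterprise-SSD revenue from the reported 4Q22 figures.
q4_revenue_busd = 3.79   # US$ billion, reported for 4Q22
qoq_decline = 0.274      # 27.4% quarter-over-quarter drop

q3_revenue_busd = q4_revenue_busd / (1 - qoq_decline)
print(f"Implied 3Q22 revenue: US${q3_revenue_busd:.2f} billion")  # ≈ US$5.22 billion
```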

Ayar Labs Demonstrates Industry's First 4-Tbps Optical Solution, Paving Way for Next-Generation AI and Data Center Designs

Ayar Labs, a leader in the use of silicon photonics for chip-to-chip optical connectivity, today announced it will publicly demonstrate the industry's first 4 terabit-per-second (Tbps) bidirectional Wavelength Division Multiplexing (WDM) optical solution at the upcoming Optical Fiber Communication Conference (OFC) in San Diego on March 5-9, 2023. The company achieves this latest milestone as it works with leading high-volume manufacturing and supply partners, including GlobalFoundries, Lumentum, Macom, Sivers Photonics and others, to deliver the optical interconnects needed for data-intensive applications. Separately, the company was featured in an announcement with partner Quantifi Photonics on a CW-WDM-compliant test platform for its SuperNova light source, also at OFC.

In-package optical I/O uniquely changes the power and performance trajectories of system design by enabling compute, memory and network silicon to communicate with a fraction of the power and dramatically improved performance, latency and reach versus existing electrical I/O solutions. Delivered in a compact, co-packaged CMOS chiplet, optical I/O becomes foundational to next-generation AI, disaggregated data centers, dense 6G telecommunications systems, phased array sensory systems and more.

Supermicro Accelerates A Wide Range of IT Workloads with Powerful New Products Featuring 4th Gen Intel Xeon Scalable Processors

Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, will be showcasing its latest generation of systems that accelerate workloads for the entire Telco industry, specifically at the edge of the network. These systems are part of the newly introduced Supermicro Intel-based product line: better, faster, and greener systems based on the brand-new 4th Gen Intel Xeon Scalable processors (formerly codenamed Sapphire Rapids) that deliver up to 60% better workload-optimized performance. These new systems demonstrate up to 30X faster AI inference on large models for AI and edge workloads with NVIDIA H100 GPUs. In addition, Supermicro systems support the new Intel Data Center GPU Max Series (formerly codenamed Ponte Vecchio) across a wide range of servers. The Intel Data Center GPU Max Series contains up to 128 Xe-HPC cores and will accelerate a range of AI, HPC, and visualization workloads. Supermicro X13 AI systems will support next-generation built-in accelerators and GPUs of up to 700 W from Intel, NVIDIA, and others.

Supermicro's wide range of product families is deployed in a broad range of industries to speed up workloads and allow faster and more accurate decisions. With the addition of purpose-built servers tuned for networking workloads, such as Open RAN deployments and private 5G, the 4th Gen Intel Xeon Scalable processor vRAN Boost technology reduces power consumption while improving performance. Supermicro continues to offer a wide range of environmentally friendly servers for workloads from the edge to the data center.

AMD Envisions Stacked DRAM on top of Compute Chiplets in the Near Future

AMD in its ISSCC 2023 presentation detailed how it has advanced data-center energy efficiency and kept up with Moore's Law, even as semiconductor foundry node advances have tapered. Perhaps its most striking prediction for server processors and HPC accelerators is multi-layer stacked DRAM. The company has, for some time now, made logic products such as GPUs with stacked HBM. These have been multi-chip modules (MCMs), in which the logic die and HBM stacks sit on top of a silicon interposer. While this conserves PCB real-estate compared to discrete memory chips/modules, it is inefficient on the substrate, and the interposer is essentially a silicon die with microscopic wiring between the chips stacked on top of it.

AMD envisions that the high-density server processor of the near future will have many layers of DRAM stacked on top of logic chips. Such a method of stacking conserves both PCB and substrate real-estate, allowing chip designers to cram even more cores and memory per socket. The company also sees a greater role for in-memory compute, where simple compute and data-movement functions can be executed directly in memory, saving round-trips to the processor. Lastly, the company talked about the possibility of an on-package optical PHY, which would simplify network infrastructure.
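The appeal of in-memory compute can be sketched with a toy energy model. The per-bit energy figures below are illustrative assumptions (off-chip DRAM traffic is commonly cited at tens of pJ/bit versus roughly a pJ/bit for near-memory access), not AMD numbers:

```python
# Toy model: energy to reduce (sum) a 1 GiB buffer, with vs. without
# shipping the data to the CPU first. Energy-per-bit values are
# illustrative assumptions, not vendor figures.
BUF_BYTES = 1 << 30            # 1 GiB of data to reduce
OFFCHIP_PJ_PER_BIT = 20.0      # assumed off-chip DRAM transfer cost (pJ/bit)
ONDIE_PJ_PER_BIT = 1.0         # assumed near-memory access cost (pJ/bit)

bits = BUF_BYTES * 8
cpu_round_trip_mj = bits * OFFCHIP_PJ_PER_BIT * 1e-9   # pJ -> mJ
in_memory_mj = bits * ONDIE_PJ_PER_BIT * 1e-9

print(f"CPU round trip: {cpu_round_trip_mj:.0f} mJ")
print(f"In-memory:      {in_memory_mj:.0f} mJ")
print(f"Savings factor: {cpu_round_trip_mj / in_memory_mj:.0f}x")
```

Whatever the exact constants, the savings factor is simply the ratio of the two per-bit costs, which is why data movement dominates the argument for processing in memory.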

Server DRAM Will Overtake Mobile DRAM in Supply in 2023 and Comprise 37.6% of Annual Total DRAM Bit Output, Says TrendForce

Since 2022, DRAM suppliers have been adjusting their product mixes so as to assign more wafer input to server DRAM products while scaling back the wafer input for mobile DRAM products. This trend is driven by two reasons. First, the demand outlook is bright for the server DRAM segment. Second, the mobile DRAM segment was in significant oversupply during 2022. Moving into 2023, the projections on the growth of smartphone shipments and the increase in the average DRAM content of smartphones remain quite conservative. Therefore, DRAM suppliers intend to keep expanding the share of server DRAM in their product mixes. According to TrendForce's analysis on the distribution of the DRAM industry's total bit output for 2023, server DRAM is estimated to comprise around 37.6%, whereas mobile DRAM is estimated to comprise around 36.8%. Hence, server DRAM will formally surpass mobile DRAM in terms of the portion of the overall supply within this year.

Atos to Build Max Planck Society's new BullSequana XH3000-based Supercomputer, Powered by AMD MI300 APU

Atos today announces a contract to build and install a new high-performance computer for the Max Planck Society, a world-leading science and technology research organization. The new system will be based on Atos' latest BullSequana XH3000 platform, which is powered by AMD EPYC CPUs and Instinct accelerators. In its final configuration, the application performance will be three times higher than the current "Cobra" system, which is also based on Atos technologies.

The new supercomputer, with a total order value of over 20 million euros, will be operated by the Max Planck Computing and Data Facility (MPCDF) in Garching near Munich and will provide high-performance computing (HPC) capacity for many institutes of the Max Planck Society. Particularly demanding scientific projects, such as those in astrophysics, life science research, materials research, plasma physics, and AI will benefit from the high-performance capabilities of the new system.

NVIDIA Pairs 4th Gen Intel Xeon Scalable Processors with H100 GPUs

AI is at the heart of humanity's most transformative innovations—from developing COVID vaccines at unprecedented speeds and diagnosing cancer to powering autonomous vehicles and understanding climate change. Virtually every industry will benefit from adopting AI, but the technology has become more resource intensive as neural networks have increased in complexity. To avoid placing unsustainable demands on electricity generation to run this computing infrastructure, the underlying technology must be as efficient as possible.

Accelerated computing powered by NVIDIA GPUs and the NVIDIA AI platform offers the efficiency that enables data centers to sustainably drive the next generation of breakthroughs. And now, timed with the launch of 4th Gen Intel Xeon Scalable processors, NVIDIA and its partners have kicked off a new generation of accelerated computing systems that are built for energy-efficient AI. When combined with NVIDIA H100 Tensor Core GPUs, these systems can deliver dramatically higher performance, greater scale and higher efficiency than the prior generation, providing more computation and problem-solving per watt.

TYAN Refines Server Performance with 4th Gen Intel Xeon Scalable Processors

TYAN, an industry-leading server platform design manufacturer and a MiTAC Computing Technology Corporation subsidiary, today introduced 4th Gen Intel Xeon Scalable processor-based server platforms highlighting built-in accelerators to improve performance across the fastest-growing workloads in AI, analytics, cloud, storage, and HPC.

"Greater availability of new technology like 4th Gen Intel Xeon Scalable processors continues to drive the changes in the business landscape," said Danny Hsu, Vice President of MiTAC Computing Technology Corporation's Server Infrastructure Business Unit. "The advances in TYAN's new portfolio of server platforms, with features such as DDR5, PCIe 5.0 and Compute Express Link 1.1, are bringing high levels of compute power within reach for everyone from smaller organizations to data centers."

AIC Introduces Server Systems Powered By 4th Gen Intel Xeon Scalable Processors

AIC Inc. (hereinafter referred to as "AIC"), a leading provider of enterprise storage and server solutions, today unveiled its new server systems powered by 4th Gen Intel Xeon Scalable processors (formerly codenamed Sapphire Rapids). The new server platforms are designed to accelerate performance across the most in-demand workloads that businesses rely on, including enterprise, storage, AI and HPC.

The newly launched AIC servers, SB102-HK, SB201-HK and HP202-KT, are designed to offer superior processing performance and energy efficiency by leveraging the innovative features of 4th Gen Intel Xeon Scalable processors. With built-in accelerators, the 4th Gen Intel Xeon Scalable processors optimize the utilization of CPU core resources and feature enhanced memory bandwidth with DDR5, advanced I/O with PCIe Gen 5 and Compute Express Link (CXL) 2.0/1.1, and the ability to accelerate PyTorch real-time inference performance by up to 10x using Intel Advanced Matrix Extensions (Intel AMX) compared to the previous generation. The new AIC servers are empowered by advanced security technologies from 4th Gen Intel Xeon Scalable processors, allowing them to protect data and unlock new opportunities for business collaborations.

Lenovo Unveils Next Generation of Intel-Based Smart Infrastructure Solutions to Accelerate IT Modernization

Today, Lenovo unveiled 25 new ThinkSystem and ThinkAgile server and hyperconverged solutions powered by Intel's 4th Generation Xeon Scalable Processors as part of its recently announced Infrastructure Solutions V3 portfolio. Designed to help accelerate global IT modernization for organizations of all sizes, the integrated solutions deliver advanced performance, efficiency and management capabilities specifically optimized for complex workloads, including mission-critical, AI, HPC and containerized applications.

"In today's competitive business climate, modern infrastructure solutions that generate faster insights and more efficiently enable complex workloads from the edge to the cloud are critical across every major industry," said Kamran Amini, Vice President and General Manager of Server & Storage, Lenovo Infrastructure Solutions Group. "With the performance and management improvements of the Intel-based ThinkSystem V3 portfolio, customers can reduce their IT footprint by up to three times to achieve greater ROI and more easily transform their infrastructure with one seamless platform designed for today's AI, virtualization, multi-cloud and sustainable computing demands."

Giga Computing Announces Its GIGABYTE Server Portfolio for the 4th Gen Intel Xeon Scalable Processor

Giga Computing, an industry leader in high-performance servers and workstations, today announced the next generation of GIGABYTE servers and server motherboards for the new 4th Gen Intel Xeon Scalable processor, achieving efficient performance gains with built-in accelerators. The new processors have the most built-in accelerators of any processor on the market to help maximize performance efficiency for emerging workloads, while boosting virtualization and AI performance. Generational improvements make this platform ideal for AI, cloud computing, advanced analytics, HPC, networking, and storage applications. For these markets, Giga Computing has announced fourteen new series comprising seventy-eight configurations for customers to choose from. All these new GIGABYTE products support the full portfolio of 4th Gen Intel Xeon Scalable processors, including those with high bandwidth memory (HBM) in the Intel Xeon Max Series.

Intel to Host 4th Gen Xeon Scalable and Max Series Launch on the 10th of January

On Jan. 10, Intel will officially welcome to market the 4th Gen Intel Xeon Scalable processors and the Intel Xeon CPU Max Series, as well as the Intel Data Center GPU Max Series for high performance computing (HPC) and AI. Hosted by Sandra Rivera, executive vice president and general manager of the Datacenter and AI Group at Intel, and Lisa Spelman, corporate vice president and general manager of Intel Xeon Products, the event will highlight the value of 4th Gen Intel Xeon Scalable processors and the Intel Max Series product family, while showcasing customer, partner and ecosystem support.

The event will demonstrate how Intel is addressing critical needs in the marketplace with a focus on a workload-first approach, performance leadership in key areas such as AI, networking and HPC, the benefits of security and sustainability, and how the company is delivering significant outcomes for its customers and the industry.

Nfina Technologies Releases 3rd Gen Intel Xeon Scalable Processor-based Systems

Nfina announces the addition of three new server systems to its lineup, customized for hybrid/multi-cloud, hyperconverged HA infrastructure, HPC, backup/disaster recovery, and business storage solutions. Featuring 3rd Gen Intel Xeon Scalable Processors, Nfina-Store, and Nfina-View software, these scalable server systems fill a void in the marketplace, bringing exceptional multi-socket processing performance, easy-to-use management tools, built-in backup, and rapid disaster recovery.

"We know we must build systems for the business IT needs of today while planning for unknown future demands. Flexible infrastructure is key, optimized for hybrid/multi-cloud, backup/disaster recovery, HPC, and growing storage needs," says Warren Nicholson, President and CEO of Nfina. He continues, "Flexible infrastructure also means offering managed services like IaaS, DRaaS, etc., that provide customers with choices that fit the size of their application and budget - not a one-size-fits-all approach like many of our competitors. Our goal is to serve many different business IT applications, any size, anywhere, at any time."

AWS Updates Custom CPU Offerings with Graviton3E for HPC Workloads

Amazon Web Services' (AWS) cloud division is extensively developing custom Arm-based CPU solutions to suit its enterprise clients and is releasing new iterations of the Graviton series. Today, during the company's re:Invent week, we are getting a new CPU custom-tailored to high-performance computing (HPC) workloads: Graviton3E. Given that HPC workloads require higher bandwidth, wider datapaths, and data types spanning multiple dimensions, AWS redesigned the Graviton3 processor with enhanced vector-processing capabilities under a new name, Graviton3E. This CPU promises up to 35% higher performance in workloads that depend on heavy vector processing.
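The payoff of wider vector units can be illustrated with a simple lane model; this is a generic SIMD sketch (an assumed 8-wide unit), not Graviton3E microarchitecture detail.

```python
# Generic SIMD sketch: processing an array in 8-wide "vector" chunks
# takes 1/8th the loop iterations of a scalar loop, for the same result.
data = list(range(1000))

# Scalar: one element per iteration.
scalar_sum, scalar_iters = 0, 0
for x in data:
    scalar_sum += x
    scalar_iters += 1

# Vector: eight elements per iteration (the short tail, if any,
# is handled implicitly by Python slicing).
LANES = 8
vector_sum, vector_iters = 0, 0
for i in range(0, len(data), LANES):
    vector_sum += sum(data[i:i + LANES])  # stands in for one SIMD add
    vector_iters += 1

assert scalar_sum == vector_sum
print(scalar_iters, vector_iters)  # 1000 iterations vs. 125
```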

With the rising popularity of HPC in the cloud, AWS sees a significant market opportunity and is trying to capture it. Available in the AWS EC2 instance types, this chip will be available with up to 64 vCPU cores and 128 GiB of memory. The supported EC2 tiers that will offer this enhanced chip are C7gn and Hpc7g instances that provide 200 Gbps of dedicated network bandwidth that is optimized for traffic between instances in the same VPC. In addition, Intel-based R7iz instances are available for HPC users in the cloud, now powered by 4th generation Xeon Scalable processors codenamed Sapphire Rapids.

NVIDIA Announces Financial Results for Third Quarter Fiscal 2023

NVIDIA (NASDAQ: NVDA) today reported revenue for the third quarter ended October 30, 2022, of $5.93 billion, down 17% from a year ago and down 12% from the previous quarter. GAAP earnings per diluted share for the quarter were $0.27, down 72% from a year ago and up 4% from the previous quarter. Non-GAAP earnings per diluted share were $0.58, down 50% from a year ago and up 14% from the previous quarter.

"We are quickly adapting to the macro environment, correcting inventory levels and paving the way for new products," said Jensen Huang, founder and CEO of NVIDIA. "The ramp of our new platforms - Ada Lovelace RTX graphics, Hopper AI computing, BlueField and Quantum networking, Orin for autonomous vehicles and robotics, and Omniverse - is off to a great start and forms the foundation of our next phase of growth."

Supermicro Unveils a Broad Portfolio of Performance Optimized and Energy Efficient Systems Incorporating 4th Gen Intel Xeon Scalable Processors

Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, is unveiling at the 2022 Super Computing Conference the most extensive portfolio in the industry of servers and storage systems based on the upcoming 4th Gen Intel Xeon Scalable processor, formerly codenamed Sapphire Rapids. Supermicro continues to use its Building Block Solutions approach to deliver state-of-the-art and secure systems for the most demanding AI, Cloud, and 5G Edge requirements. The systems support high-performance CPUs and DDR5 memory with up to 2X the performance, capacities up to 512 GB DIMMs, and PCIe 5.0, which doubles I/O bandwidth. Intel Xeon CPU Max Series CPUs (formerly codenamed Sapphire Rapids HBM) with High Bandwidth Memory (HBM) are also available on a range of Supermicro X13 systems. In addition, the systems support high ambient temperature environments of up to 40°C (104°F), are designed for air and liquid cooling for optimal efficiency, and are rack-scale optimized with open industry-standard designs and improved security and manageability.

"Supermicro is once again at the forefront of delivering the broadest portfolio of systems based on the latest technology from Intel," stated Charles Liang, president and CEO of Supermicro. "Our Total IT Solutions strategy enables us to deliver a complete solution to our customers, which includes hardware, software, rack-scale testing, and liquid cooling. Our innovative platform design and architecture bring the best from the 4th Gen Intel Xeon Scalable processors, delivering maximum performance, configurability, and power savings to tackle the growing demand for performance and energy efficiency. The systems are rack-scale optimized with Supermicro's significant growth of rack-scale manufacturing of up to 3X rack capacity."

ORNL's Exaflop Machine Frontier Keeps Top Spot, New Competitor Leonardo Breaks the Top10 List

The 60th edition of the TOP500 reveals that the Frontier system is still the only true exascale machine on the list.

With an HPL score of 1.102 EFlop/s, the Frontier machine at Oak Ridge National Laboratory (ORNL) did not improve upon the score it reached on the June 2022 list. That said, Frontier's near-tripling of the HPL score of the second-place system is still a major victory for computer science. On top of that, Frontier demonstrated a score of 7.94 EFlop/s on the HPL-MxP benchmark, which measures performance for mixed-precision calculation. Frontier is based on the HPE Cray EX235a architecture and relies on AMD EPYC 64C 2 GHz processors. The system has 8,730,112 cores and a power efficiency rating of 52.23 gigaflops/watt, and it relies on HPE's Slingshot-11 interconnect for data transfer.
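The sustained power draw of Frontier's HPL run follows directly from the two figures quoted above (1.102 EFlop/s at 52.23 gigaflops/watt):

```python
# Derive Frontier's HPL power draw from the article's own numbers.
hpl_flops = 1.102e18                # 1.102 EFlop/s sustained on HPL
efficiency_flops_per_w = 52.23e9    # 52.23 gigaflops per watt

power_mw = hpl_flops / efficiency_flops_per_w / 1e6
print(f"Implied HPL power draw: {power_mw:.1f} MW")  # ≈ 21.1 MW
```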

SiPearl and AMD Join Forces to Develop European Exascale Systems

SiPearl, the company designing the high-performance, low-power microprocessor for European supercomputers, has entered into a business collaboration agreement with AMD to provide a joint offering for exascale supercomputing systems, combining SiPearl's HPC microprocessor, Rhea, with AMD Instinct accelerators.

Initially, AMD and SiPearl will jointly assess the interoperability of the AMD ROCm open software with the SiPearl Rhea microprocessor and build an optimized software solution that would strengthen the capabilities of a SiPearl microprocessor combined with an AMD Instinct accelerator. This joint work, which targets porting and optimization of the AMD HIP backend and OpenMP compilers and libraries, will enable scientific applications to benefit from both technologies.

Cerebras Unveils Andromeda, a 13.5 Million Core AI Supercomputer that Delivers Near-Perfect Linear Scaling for Large Language Models

Cerebras Systems, the pioneer in accelerating artificial intelligence (AI) compute, today unveiled Andromeda, a 13.5 million core AI supercomputer, now available and being used for commercial and academic work. Built with a cluster of 16 Cerebras CS-2 systems and leveraging Cerebras MemoryX and SwarmX technologies, Andromeda delivers more than 1 Exaflop of AI compute and 120 Petaflops of dense compute at 16-bit half precision. It is the only AI supercomputer to ever demonstrate near-perfect linear scaling on large language model workloads relying on simple data parallelism alone.

With more than 13.5 million AI-optimized compute cores and fed by 18,176 3rd Gen AMD EPYC processors, Andromeda features more cores than 1,953 Nvidia A100 GPUs and 1.6 times as many cores as the largest supercomputer in the world, Frontier, which has 8.7 million cores. Unlike any known GPU-based cluster, Andromeda delivers near-perfect scaling via simple data parallelism across GPT-class large language models, including GPT-3, GPT-J and GPT-NeoX.
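Both core-count comparisons in the paragraph above check out arithmetically; the GPU figure assumes the commonly quoted 6,912 FP32 CUDA cores per NVIDIA A100.

```python
# Check the two core-count comparisons quoted for Andromeda.
andromeda_cores = 13_500_000
a100_cuda_cores = 6_912        # FP32 CUDA cores per NVIDIA A100
frontier_cores = 8_730_112     # total cores reported for Frontier

a100_equiv = andromeda_cores / a100_cuda_cores
frontier_ratio = andromeda_cores / frontier_cores

print(f"A100 equivalents: {a100_equiv:.0f}")      # ≈ 1,953 GPUs
print(f"vs. Frontier:     {frontier_ratio:.2f}x")  # ≈ 1.55x (the article's 1.6x rounds 8.73M down to 8.7M)
```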

TYAN Showcases Upcoming 4th Gen Intel Xeon Scalable Processor Powered HPC Platforms at SC22

TYAN, an industry-leading server platform design manufacturer and a MiTAC Computing Technology Corporation subsidiary, brings its upcoming server platforms powered by 4th Gen Intel Xeon Scalable processors, optimized for the HPC and storage markets, to SC22 on November 14-17 at Booth #2000 in the Kay Bailey Hutchison Convention Center Dallas.

"Greater availability of new technology like 4th Gen Intel Xeon Scalable processors continues to drive the changes in the HPC landscape," said Danny Hsu, Vice President of MiTAC Computing Technology Corporation's Server Infrastructure Business Unit. "The advances in chip technology, coupled with the rise of cloud computing, have brought high levels of compute power within reach for smaller organizations. HPC is now affordable and accessible to a new generation of users."

AMD 4th Generation EPYC "Genoa" Processors Benchmarked

Yesterday, AMD announced the latest addition to its data center family of processors, EPYC Genoa. Named the 4th generation of EPYC processors, they feature a Zen 4 design and bring additional I/O connectivity like PCIe 5.0, DDR5, and CXL support. To shake up cloud, enterprise, and HPC offerings, AMD is manufacturing SKUs with up to 96 cores and 192 threads, an increase from the previous generation's 64C/128T designs. Today, we are learning more about the performance and power aspects of the 4th generation AMD EPYC Genoa 9654, 9554, and 9374F SKUs from third-party sources rather than official AMD presentations. Tom's Hardware published a heap of benchmarks covering rendering, compilation, encoding, parallel computing, molecular dynamics, and much more.

In the comparison tests, we have the AMD EPYC Milan 7763 and 75F3, plus the Intel Xeon Platinum 8380, Intel's current top-end offering until Sapphire Rapids arrives. Comparing 3rd-gen 64C/128T EPYC SKUs with their 4th-gen counterparts, the new generation brings about a 30% performance increase in compression and parallel-compute benchmarks. When scaling to the 96C/192T SKU, the gap widens, and AMD has a clear performance leader in the server marketplace; Tom's Hardware's full results provide more detail. As for the comparison with Intel's offerings, AMD leads the pack with a more performant single- and multi-threaded design. Of course, beating Sapphire Rapids to market is a significant win for team red, and we are still waiting to see how the 4th generation Xeon stacks up against Genoa.

Rescale Teams with NVIDIA to Unite HPC and AI for Optimized Engineering in the Cloud

Rescale, the leader in high performance computing built for the cloud to accelerate engineering innovation, today announced it is teaming with NVIDIA to integrate the NVIDIA AI platform into Rescale's HPC-as-a-Service offering. The integration is designed to advance computational engineering simulation with AI and machine learning, helping enterprises commercialize new product innovations faster, more efficiently and at less cost.

Additionally, Rescale announced the world's first Compute Recommendation Engine (CRE) to power Intelligent Computing for HPC and AI workloads. Optimizing workload performance can be prohibitively complex as organizations seek to balance decisions among architectures, geographic regions, price points, scalability, service levels, compliance, and sustainability objectives. Developed using machine learning on NVIDIA architectures with infrastructure telemetry, industry benchmarks, and full-stack metadata spanning over 100 million production HPC workloads, Rescale CRE provides customers unprecedented insight to optimize overall performance.

ASUS Announces AMD EPYC 9004-Powered Rack Servers and Liquid-Cooling Solutions

ASUS, a leading provider of server systems, server motherboards and workstations, today announced new best-in-class server solutions powered by the latest AMD EPYC 9004 Series processors. ASUS also launched superior liquid-cooling solutions that dramatically improve the data-center power-usage effectiveness (PUE).

The breakthrough thermal design in this new generation delivers superior power and thermal capabilities to support class-leading features, including up to 400-watt CPUs, up to 350-watt GPUs, and 400 Gbps networking. All ASUS liquid-cooling solutions will be demonstrated in the ASUS booth (number 3816) at SC22 from November 14-17, 2022, at Kay Bailey Hutchison Convention Center in Dallas, Texas.

Hewlett Packard Enterprise Brings HPE Cray EX and HPE Cray XD Supercomputers to Enterprise Customers

Hewlett Packard Enterprise (NYSE: HPE) today announced it is making supercomputing accessible for more enterprises to harness insights, solve problems and innovate faster by delivering its world-leading, energy-efficient supercomputers in a smaller form factor and at a lower price point.

The expanded portfolio includes new HPE Cray EX and HPE Cray XD supercomputers, which are based on HPE's exascale innovation that delivers end-to-end, purpose-built technologies in compute, accelerated compute, interconnect, storage, software, and flexible power and cooling options. The supercomputers provide significant performance and AI-at-scale capabilities to tackle demanding, data-intensive workloads, speed up AI and machine learning initiatives, and accelerate innovation to deliver products and services to market faster.