News Posts matching #HPC


GIGABYTE Enterprise Servers & Motherboards Roll Out on European E-commerce Platform

GIGABYTE Technology, a pioneer in computer hardware, has taken a significant stride in shaping its European business model. Today, GIGABYTE has broadened its e-commerce platform, shop.gigabyte.eu, by integrating enterprise server and server motherboard solutions into its product portfolio. Being at the forefront of computer hardware manufacturing, GIGABYTE recognizes that it is imperative to expand its presence in the EMEA region to maintain its leadership across all markets. With the introduction of our enterprise-level server and motherboard solutions, we are dedicated to delivering a diverse range of high-performance products directly to our B2B clients.

GIGABYTE offers a complete product portfolio that addresses all workloads from the data center to the edge, spanning traditional and emerging workloads from HPC and AI to data analytics, 5G/edge, cloud computing, and more. Our enduring partnerships with key technology leaders ensure that our new products are at the forefront of innovation and launch alongside new partner platforms. Our systems embody performance, security, scalability, and sustainability. Within the e-commerce product portfolio, we offer a selection of models from our Edge, Rack, GPU, and Storage series. Additionally, the platform provides server motherboards for custom integration. The current selection comprises a mix of solutions tailored to online sales. For more complex solutions, customers can get in touch via the integrated contact form.

Lenovo HPC Infrastructure Powers Pre-Exascale Supercomputer Marenostrum 5 to Enable New Scientific Advances and Solve Global Challenges

Lenovo (HKSE: 992) (ADR: LNVGY) has today announced that the General Purpose Partition of the MareNostrum 5, a new pre-exascale supercomputer running on Lenovo's HPC infrastructure, has been classified as the top x86 general-purpose cluster on the recently published TOP500 list of the most powerful supercomputers globally.

Officially inaugurated at Barcelona Supercomputing Center on December 21st, MareNostrum 5 has been built for the European High Performance Computing Joint Undertaking (EuroHPC JU). The pre-exascale supercomputer will bolster the EU's mission to provide Europe with the most advanced supercomputing technology and accelerate the capacity for artificial intelligence (AI) research, enabling new scientific advances that will help solve global challenges. It aims to empower a wide range of complex HPC-specific applications, from climate research and engineering to material science and earth sciences, adeptly handling tasks that extend beyond the capabilities of cloud computing.

ITRI and TSMC Collaborate on Advancing High-Speed Computing with SOT-MRAM

The Industrial Technology Research Institute (ITRI) has joined forces with Taiwan Semiconductor Manufacturing Company (TSMC) for pioneering research into the development of a spin-orbit-torque magnetic random-access memory (SOT-MRAM) array chip. This SOT-MRAM array chip showcases an innovative computing-in-memory architecture and boasts a power consumption of merely one percent of a spin-transfer torque magnetic random-access memory (STT-MRAM) product. Their collaborative efforts have resulted in a research paper on this microelectronic component, which was jointly presented at the 2023 IEEE International Electron Devices Meeting (IEDM 2023), underscoring the cutting-edge nature of their findings and their pivotal role in advancing next-generation memory technologies.

Dr. Shih-Chieh Chang, General Director of Electronic and Optoelectronic System Research Laboratories at ITRI, highlighted the collaborative achievements of both organizations. "Following the co-authored papers presented at the Symposium on VLSI Technology and Circuits last year, we have further co-developed a SOT-MRAM unit cell," said Chang. "This unit cell achieves simultaneous low power consumption and high-speed operation, reaching speeds as rapid as 10 nanoseconds. And its overall computing performance can be further enhanced when integrated with computing in memory circuit design. Looking ahead, this technology holds the potential for applications in high-performance computing (HPC), artificial intelligence (AI), automotive chips, and more."

Report: Global Semiconductor Capacity Projected to Reach Record High 30 Million Wafers Per Month in 2024

Global semiconductor capacity is expected to increase 6.4% in 2024 to top the 30 million *wafers per month (wpm) mark for the first time, after rising 5.5% to 29.6 million wpm in 2023, SEMI announced today in its latest quarterly World Fab Forecast report.

The 2024 growth will be driven by capacity increases in leading-edge logic and foundry, applications including generative AI and high-performance computing (HPC), and the recovery in end-demand for chips. The capacity expansion slowed in 2023 due to softening semiconductor market demand and the resulting inventory correction.
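As a quick sanity check, the projection can be reproduced from the quoted figures with a few lines of arithmetic (a minimal sketch; the 29.6 million wpm baseline and 6.4% growth rate are taken directly from the SEMI report as quoted above):

```python
# Reproduce SEMI's 2024 capacity projection from the quoted figures.
capacity_2023_m_wpm = 29.6   # 2023 capacity, millions of wafers per month
growth_2024 = 0.064          # projected 6.4% growth in 2024

capacity_2024_m_wpm = capacity_2023_m_wpm * (1 + growth_2024)
print(f"Projected 2024 capacity: {capacity_2024_m_wpm:.1f} million wpm")
# Lands at roughly 31.5 million wpm, clearing the 30 million mark.
```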

SUNON: Pioneering Innovative Liquid Cooling Solutions for Modern Data Centers

In the era of high-tech development and the ever-increasing demand for data processing power, data centers are consuming more energy and generating excess heat. As a global leader in thermal solutions, SUNON is at the forefront, offering a diverse range of cutting-edge liquid cooling solutions tailored to advanced data centers equipped with high-capacity CPU and GPU computing for AI, edge, and cloud servers.

SUNON's liquid cooling design services are ideally suited for modern data centers, generative AI computing, and high-performance computing (HPC) applications. These solutions are meticulously customized to fit the cooling space and server density of each data center. With their compact yet comprehensive design, they guarantee exceptional cooling efficiency and reliability, ultimately contributing to a significant reduction in a client's total cost of ownership (TCO) in the long term. In the pursuit of net-zero emissions standards, SUNON's liquid cooling solutions play a pivotal role in enhancing corporate sustainability. They offer a win-win scenario for clients seeking to transition toward greener and more digitalized operations.

TYAN Upgrades HPC, AI and Data Center Solutions with the Power of 5th Gen Intel Xeon Scalable Processors

TYAN, a leading server platform design manufacturer and a MiTAC Computing Technology Corporation subsidiary, today introduced upgraded server platforms and motherboards based on the brand-new 5th Gen Intel Xeon Scalable Processors, formerly codenamed Emerald Rapids.

The 5th Gen Intel Xeon processor scales up to 64 cores and features a larger shared cache, higher UPI and DDR5 memory speeds, and 80 lanes of PCIe 5.0. Growing and excelling with workload-optimized performance, 5th Gen Intel Xeon delivers more compute power and faster memory within the same power envelope as the previous generation. "5th Gen Intel Xeon is the second processor offering inside the 2023 Intel Xeon Scalable platform, offering improved performance and power efficiency to accelerate TCO and operational efficiency," said Eric Kuo, Vice President of Server Infrastructure Business Unit, MiTAC Computing Technology Corporation. "By harnessing the capabilities of Intel's new Xeon CPUs, TYAN's 5th-Gen Intel Xeon-supported solutions are designed to handle the intense demands of HPC, data centers, and AI workloads."

Intel's New 5th Gen "Emerald Rapids" Xeon Processors are Built with AI Acceleration in Every Core

Today at the "AI Everywhere" event, Intel launched its 5th Gen Intel Xeon processors (code-named Emerald Rapids) that deliver increased performance per watt and lower total cost of ownership (TCO) across critical workloads for artificial intelligence, high performance computing (HPC), networking, storage, database and security. This launch marks the second Xeon family upgrade in less than a year, offering customers more compute and faster memory at the same power envelope as the previous generation. The processors are software- and platform-compatible with 4th Gen Intel Xeon processors, allowing customers to upgrade and maximize the longevity of infrastructure investments while reducing costs and carbon emissions.

"Designed for AI, our 5th Gen Intel Xeon processors provide greater performance to customers deploying AI capabilities across cloud, network and edge use cases. As a result of our long-standing work with customers, partners and the developer ecosystem, we're launching 5th Gen Intel Xeon on a proven foundation that will enable rapid adoption and scale at lower TCO." -Sandra Rivera, Intel executive vice president and general manager of Data Center and AI Group.

China Continues to Enhance AI Chip Self-Sufficiency, but High-End AI Chip Development Remains Constrained

Huawei's subsidiary HiSilicon has made significant strides in the independent R&D of AI chips, launching the next-gen Ascend 910B. These chips are utilized not only in Huawei's public cloud infrastructure but also sold to other Chinese companies. This year, Baidu ordered over a thousand Ascend 910B chips from Huawei to build approximately 200 AI servers. Additionally, in August, Chinese company iFlytek, in partnership with Huawei, released the "Gemini Star Program," a hardware and software integrated device for exclusive enterprise LLMs, equipped with the Ascend 910B AI acceleration chip, according to TrendForce's research.

TrendForce conjectures that the next-generation Ascend 910B chip is likely manufactured using SMIC's N+2 process. However, the production faces two potential risks. Firstly, as Huawei recently focused on expanding its smartphone business, the N+2 process capacity at SMIC is almost entirely allocated to Huawei's smartphone products, potentially limiting future capacity for AI chips. Secondly, SMIC remains on the Entity List, possibly restricting access to advanced process equipment.

GIGABYTE Unveils Next-gen HPC & AI Servers with AMD Instinct MI300 Series Accelerators

Giga Computing, a subsidiary of GIGABYTE and an industry leader in high-performance servers and IT infrastructure, today announced the GIGABYTE G383-R80 for the AMD Instinct MI300A APU and two GIGABYTE G593 series servers for the AMD Instinct MI300X GPU and AMD EPYC 9004 Series processor. As a testament to the performance of the AMD Instinct MI300 Series family of products, the El Capitan supercomputer at Lawrence Livermore National Laboratory uses the MI300A APU to power exascale computing. These new GIGABYTE servers are an ideal platform to propel discoveries in HPC & AI at exascale.

Marrying a CPU & GPU: G383-R80
For incredible advancements in HPC, there is the GIGABYTE G383-R80, which houses four LGA6096 sockets for MI300A APUs. Each chip integrates a CPU with twenty-four AMD Zen 4 cores and a powerful GPU built with AMD CDNA 3 GPU cores, and the chiplet design shares 128 GB of unified HBM3 memory for impressive performance on large AI models. The G383 server provides ample room for networking, storage, or other accelerators, with a total of twelve PCIe Gen 5 expansion slots. At the front of the chassis, eight 2.5" Gen 5 NVMe bays handle heavy workloads such as real-time big data analytics and latency-sensitive applications in finance and telecom.

AWS and NVIDIA Partner to Deliver 65 ExaFLOP AI Supercomputer, Other Solutions

Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company (NASDAQ: AMZN), and NVIDIA (NASDAQ: NVDA) today announced an expansion of their strategic collaboration to deliver the most-advanced infrastructure, software and services to power customers' generative artificial intelligence (AI) innovations. The companies will bring together the best of NVIDIA and AWS technologies—from NVIDIA's newest multi-node systems featuring next-generation GPUs, CPUs and AI software, to AWS Nitro System advanced virtualization and security, Elastic Fabric Adapter (EFA) interconnect, and UltraCluster scalability—that are ideal for training foundation models and building generative AI applications.

The expanded collaboration builds on a longstanding relationship that has fueled the generative AI era by offering early machine learning (ML) pioneers the compute performance required to advance the state-of-the-art in these technologies.

SK hynix Showcases Next-Gen AI and HPC Solutions at SC23

SK hynix presented its leading AI and high-performance computing (HPC) solutions at Supercomputing 2023 (SC23) held in Denver, Colorado between November 12-17. Organized by the Association for Computing Machinery and IEEE Computer Society since 1988, the annual SC conference showcases the latest advancements in HPC, networking, storage, and data analysis. SK hynix marked its first appearance at the conference by introducing its groundbreaking memory solutions to the HPC community. During the six-day event, several SK hynix employees also made presentations revealing the impact of the company's memory solutions on AI and HPC.

Displaying Advanced HPC & AI Products
At SC23, SK hynix showcased its products tailored for AI and HPC to underline its leadership in the AI memory field. Among these next-generation products, HBM3E attracted attention as the HBM solution meets the industry's highest standards of speed, capacity, heat dissipation, and power efficiency. These capabilities make it particularly suitable for data-intensive AI server systems. HBM3E was presented alongside NVIDIA's H100, a high-performance GPU for AI that uses HBM3 for its memory.

Ansys Collaborates with TSMC and Microsoft to Accelerate Mechanical Stress Simulation for 3D-IC Reliability in the Cloud

Ansys has collaborated with TSMC and Microsoft to validate a joint solution for analyzing mechanical stresses in multi-die 3D-IC systems manufactured with TSMC's 3DFabric advanced packaging technologies. This collaborative solution gives customers added confidence to address novel multiphysics requirements that improve the functional reliability of advanced designs using TSMC's 3DFabric, a comprehensive family of 3D silicon stacking and advanced packaging technologies.

Ansys Mechanical is the industry-leading finite element analysis software used to simulate mechanical stresses caused by thermal gradients in 3D-ICs. The solution flow has been proven to run efficiently on Microsoft Azure, helping to ensure fast turn-around times with today's very large and complex 2.5D/3D-IC systems.

TYAN Announces New Server Line-Up Powered by 4th Gen AMD EPYC (9004/8004 Series) and AMD Ryzen (7000 Series) Processors at SC23

TYAN, an industry leader in server platform design and a subsidiary of MiTAC Computing Technology Corporation, debuts its new server line-up for 4th Gen AMD EPYC & AMD Ryzen Processors at SC23, Booth #1917, in the Colorado Convention Center, Denver, CO, November 13-16.

AMD EPYC 9004 processor features leadership performance and is optimized for a wide range of HPC, cloud-native computing and Generative AI workloads
TYAN offers server platforms supporting the AMD EPYC 9004 processors that provide up to 128 Zen 4C cores and 256 MB of L3 Cache for dynamic cloud-native applications with high performance, density, energy efficiency, and compatibility.

TYAN Unveils its Robust Immersion Cooling Solution that Delivers Significant PUE Enhancement at SC23

TYAN, an industry leader in server platform design and a subsidiary of MiTAC Computing Technology Corporation, unveils an immersion cooling solution that delivers significant PUE (Power Usage Effectiveness) enhancement and showcases its latest server platforms powered by 4th Gen Intel Xeon Scalable Processors targeting HPC, AI and Cloud Computing applications at SC23, Booth #1917.

Significant PUE Enhancement in an Immersion-Cooling Tank vs. a Conventional Air-Cooling Cabinet
The immersion cooling system demonstrated live at the TYAN booth during SC23 is a 4U hybrid single-phase tank enclosure equipped with four TYAN GC68A-B7136 cloud computing servers. Compared to a conventional air-cooled cabinet, this hybrid immersion cooling system offers a substantial improvement in PUE, making it an ideal mission-critical solution for users focused on energy savings and green operations.
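For readers unfamiliar with the metric, PUE is simply total facility power divided by IT equipment power, with 1.0 as the theoretical ideal. The sketch below illustrates the comparison with hypothetical numbers; TYAN did not publish the measured figures from its demo:

```python
# Illustrative PUE (Power Usage Effectiveness) comparison.
# PUE = total facility power / IT equipment power; 1.0 is ideal.
# The wattage figures below are hypothetical, not from TYAN's demo.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Ratio of total facility power draw to IT equipment power draw."""
    return total_facility_kw / it_load_kw

# Air-cooled cabinets are often cited in the ~1.5-1.8 range, while
# single-phase immersion systems are commonly quoted near 1.05-1.1.
air_cooled = pue(total_facility_kw=170.0, it_load_kw=100.0)  # 1.70
immersion = pue(total_facility_kw=108.0, it_load_kw=100.0)   # 1.08

print(f"Air-cooled PUE: {air_cooled:.2f}")
print(f"Immersion PUE:  {immersion:.2f}")
```

The gap between the two ratios is where the energy savings come from: every watt of cooling overhead eliminated is a watt that no longer has to be purchased or rejected as heat.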

ASRock Rack Announces Support of NVIDIA H200 GPUs and GH200 Superchips and Highlights HPC and AI Server Platforms at SC 23

ASRock Rack Inc., the leading innovative server company, today is set to showcase a comprehensive range of servers for diverse AI workloads catering to scenarios from the edge, on-premises, and to the cloud at booth #1737 at SC 23 held at the Colorado Convention Center in Denver, USA. The event is from November 13th to 16th, and ASRock Rack will feature the following significant highlights:

At SC 23, ASRock Rack will demonstrate the NVIDIA-Qualified 2U4G-GENOA/M3 and 4U8G series GPU server solutions along with the NVIDIA H100 PCIe. The ASRock Rack 4U8G and 4U10G series GPU servers are able to accommodate eight to ten 400 W dual-slot GPU cards and 24 hot-swappable 2.5" drives, designed to deliver exceptional performance for demanding AI workloads deployed in the cloud environment. The 2U4G-GENOA/M3, tailored for lighter workloads, is powered by a single AMD EPYC 9004 series processor and is able to support four 400 W dual-slot GPUs while having additional PCIe and OCP NIC 3.0 slots for expansions.

Supermicro Expands AI Solutions with the Upcoming NVIDIA HGX H200 and MGX Grace Hopper Platforms Featuring HBM3e Memory

Supermicro, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is expanding its AI reach with upcoming support for the new NVIDIA HGX H200 built with H200 Tensor Core GPUs. Supermicro's industry-leading AI platforms, including 8U and 4U Universal GPU Systems, are drop-in ready for the HGX H200 in 8-GPU and 4-GPU configurations, featuring HBM3e memory with nearly 2x the capacity and 1.4x higher bandwidth compared to the NVIDIA H100 Tensor Core GPU. In addition, the broadest portfolio of Supermicro NVIDIA MGX systems supports the upcoming NVIDIA Grace Hopper Superchip with HBM3e memory. With unprecedented performance, scalability, and reliability, Supermicro's rack-scale AI solutions accelerate the performance of computationally intensive generative AI, large language model (LLM) training, and HPC applications while meeting the evolving demands of growing model sizes. Using its building-block architecture, Supermicro can quickly bring new technology to market, enabling customers to become more productive sooner.

Supermicro is also introducing the industry's highest density server with NVIDIA HGX H100 8-GPUs systems in a liquid cooled 4U system, utilizing the latest Supermicro liquid cooling solution. The industry's most compact high performance GPU server enables data center operators to reduce footprints and energy costs while offering the highest performance AI training capacity available in a single rack. With the highest density GPU systems, organizations can reduce their TCO by leveraging cutting-edge liquid cooling solutions.

GIGABYTE Demonstrates the Future of Computing at Supercomputing 2023 with Advanced Cooling and Scaled Data Centers

Giga Computing, a subsidiary of GIGABYTE Technology and an industry leader in high-performance servers, server motherboards, and workstations, continues to lead in cooling IT hardware efficiently and in developing diverse server platforms for Arm and x86 processors, as well as AI accelerators. At SC23, GIGABYTE (booth #355) will showcase some standout platforms, including those for the NVIDIA GH200 Grace Hopper Superchip and the next-gen AMD Instinct APU. To better introduce its extensive lineup of servers, GIGABYTE will address the most important needs in supercomputing data centers, such as how to cool high-performance IT hardware efficiently and how to power AI capable of real-time analysis and fast time to results.

Advanced Cooling
For many data centers, it is becoming apparent that their cooling infrastructure must radically shift to keep pace with new IT hardware that continues to generate more heat and requires rapid heat transfer. Because of this, GIGABYTE has launched advanced cooling solutions that allow IT hardware to maintain ideal performance while being more energy-efficient and maintaining the same data center footprint. At SC23, its booth will have a single-phase immersion tank, the A1P0-EA0, which offers a one-stop immersion cooling solution. GIGABYTE is experienced in implementing immersion cooling with immersion-ready servers, immersion tanks, oil, tools, and services spanning the globe. Another cooling solution showcased at SC23 will be direct liquid cooling (DLC), and in particular, the new GIGABYTE cold plates and cooling modules for the NVIDIA Grace CPU Superchip, NVIDIA Grace Hopper Superchip, AMD EPYC 9004 processor, and 4th Gen Intel Xeon processor.

MSI Introduces New AI Server Platforms with Liquid Cooling Feature at SC23

MSI, a leading global server provider, is showcasing its latest GPU and CXL memory expansion servers powered by AMD EPYC processors and 4th Gen Intel Xeon Scalable processors, which are optimized for enterprises, organizations and data centers, at SC23, booth #1592 in the Colorado Convention Center in Denver from November 13 to 16.

"The exponential growth of human- and machine-generated data demands increased data center compute performance. To address this demand, liquid cooling has emerged as a key trend, said Danny Hsu, General Manager of Enterprise Platform Solutions. "MSI's server platforms offer a well-balanced hardware foundation for modern data centers. These platforms can be tailored to specific workloads, optimizing performance and aligning with the liquid cooling trend."

Intel Advances Scientific Research and Performance for New Wave of Supercomputers

At SC23, Intel showcased AI-accelerated high performance computing (HPC) with leadership performance for HPC and AI workloads across Intel Data Center GPU Max Series, Intel Gaudi 2 AI accelerators and Intel Xeon processors. In partnership with Argonne National Laboratory, Intel shared progress on the Aurora generative AI (genAI) project, including an update on the 1 trillion parameter GPT-3 LLM on the Aurora supercomputer that is made possible by the unique architecture of the Max Series GPU and the system capabilities of the Aurora supercomputer. Intel and Argonne demonstrated the acceleration of science with applications from the Aurora Early Science Program (ESP) and the Exascale Computing Project. The company also showed the path to Intel Gaudi 3 AI accelerators and Falcon Shores.

"Intel has always been committed to delivering innovative technology solutions to meet the needs of the HPC and AI community. The great performance of our Xeon CPUs along with our Max GPUs and CPUs help propel research and science. That coupled with our Gaudi accelerators demonstrate our full breadth of technology to provide our customers with compelling choices to suit their diverse workloads," said Deepak Patil, Intel corporate vice president and general manager of Data Center AI Solutions.

NVIDIA Grace Hopper Superchip Powers 40+ AI Supercomputers

Dozens of new supercomputers for scientific computing will soon hop online, powered by NVIDIA's breakthrough GH200 Grace Hopper Superchip for giant-scale AI and high performance computing. The NVIDIA GH200 enables scientists and researchers to tackle the world's most challenging problems by accelerating complex AI and HPC applications running terabytes of data.

At the SC23 supercomputing show, NVIDIA today announced that the superchip is coming to more systems worldwide, including from Dell Technologies, Eviden, Hewlett Packard Enterprise (HPE), Lenovo, QCT and Supermicro. Bringing together the Arm-based NVIDIA Grace CPU and Hopper GPU architectures using NVIDIA NVLink-C2C interconnect technology, GH200 also serves as the engine behind scientific supercomputing centers across the globe. Combined, these GH200-powered centers represent some 200 exaflops of AI performance to drive scientific innovation.

Ayar Labs Showcases 4 Tbps Optically-enabled Intel FPGA at Supercomputing 2023

Ayar Labs, a leader in silicon photonics for chip-to-chip connectivity, will showcase its in-package optical I/O solution integrated with Intel's industry-leading Agilex Field-Programmable Gate Array (FPGA) technology. In demonstrating 5x current industry bandwidth at 5x lower power and 20x lower latency, the optical FPGA, packaged in a common PCIe card form factor, has the potential to transform the high performance computing (HPC) landscape for data-intensive workloads such as generative artificial intelligence (AI) and machine learning, and to support novel disaggregated compute and memory architectures.

"We're on the cusp of a new era in high performance computing as optical I/O becomes a 'must have' building block for meeting the exponentially growing, data-intensive demands of emerging technologies like generative AI," said Charles Wuischpard, CEO of Ayar Labs. "Showcasing the integration of Ayar Labs' silicon photonics and Intel's cutting-edge FPGA technology at Supercomputing is a concrete demonstration that optical I/O has the maturity and manufacturability needed to meet these critical demands."

NVIDIA Turbocharges Generative AI Training in MLPerf Benchmarks

NVIDIA's AI platform raised the bar for AI training and high performance computing in the latest MLPerf industry benchmarks. Among many new records and milestones, one in generative AI stands out: NVIDIA Eos - an AI supercomputer powered by a whopping 10,752 NVIDIA H100 Tensor Core GPUs and NVIDIA Quantum-2 InfiniBand networking - completed a training benchmark based on a GPT-3 model with 175 billion parameters trained on one billion tokens in just 3.9 minutes. That's a nearly 3x gain from 10.9 minutes, the record NVIDIA set when the test was introduced less than six months ago.

The benchmark uses a portion of the full GPT-3 data set behind the popular ChatGPT service that, by extrapolation, Eos could now train in just eight days, 73x faster than a prior state-of-the-art system using 512 A100 GPUs. The acceleration in training time reduces costs, saves energy and speeds time-to-market. It's heavy lifting that makes large language models widely available so every business can adopt them with tools like NVIDIA NeMo, a framework for customizing LLMs. In a new generative AI test this round, 1,024 NVIDIA Hopper architecture GPUs completed a training benchmark based on the Stable Diffusion text-to-image model in 2.5 minutes, setting a high bar on this new workload. By adopting these two tests, MLPerf reinforces its leadership as the industry standard for measuring AI performance, since generative AI is the most transformative technology of our time.
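The speedup figures quoted above are easy to cross-check; the sketch below uses only the numbers from the text, not raw MLPerf submission data:

```python
# Cross-check the quoted MLPerf training speedups.
prev_record_min = 10.9   # record when the GPT-3 test was introduced
new_record_min = 3.9     # NVIDIA Eos result this round

gain = prev_record_min / new_record_min
print(f"Gain over prior record: {gain:.1f}x")  # ~2.8x, i.e. "nearly 3x"

# Full-dataset extrapolation: Eos at ~8 days vs. a prior 512x A100
# system quoted as 73x slower.
eos_days = 8
prior_days = eos_days * 73
print(f"Prior system, extrapolated: {prior_days} days "
      f"(~{prior_days / 365:.1f} years)")
```

The extrapolated figure of roughly a year and a half for the older system puts the single-benchmark minutes into perspective.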

AMD Extends 3rd Gen EPYC CPU Lineup to Deliver New Levels of Value for Mainstream Applications

Today, AMD announced the extension of its 3rd Gen AMD EPYC processor family with six new offerings providing a robust suite of data center CPUs to meet the needs of general IT and mainstream computing for businesses seeking to leverage the economics of established platforms. The complete family of 3rd Gen AMD EPYC CPUs complements the leadership performance and efficiency of the latest 4th Gen AMD EPYC processors with impressive price-performance, modern security features and energy efficiency for less technically demanding business critical workloads.

The race to deliver AI and high performance computing is creating a technology gap for IT decision-makers seeking mainstream performance. To meet the growing demand for widely deployed, cost effective and proven mainstream solutions in the mid-market and in the channel, AMD is extending the 3rd Gen EPYC CPU offering to provide excellent value, performance, energy efficiency and security features for business-critical applications. The 3rd Gen AMD EPYC CPU portfolio enables a wide array of broadly deployed enterprise server solutions, supported by trusted channel sellers and OEMs such as Cisco, Dell Technologies, Gigabyte, HPE, Lenovo and Supermicro.

EK Unveils Enterprise-Grade Liquid Cooling Loops for Servers and Workstations

EK, the leading premium liquid cooling gear manufacturer, will soon provide a one-stop-shop purchasing option where System Integrators, Data Centers, AI/ML-focused companies, and similar customers can get full enterprise-grade custom liquid-cooling loops for their systems. The products belong to the EK-Pro line of professional liquid cooling solutions tailor-made for HPC systems and high-compute density applications. This means that System Integrators and Data Centers will be able to purchase full liquid cooling loops for their HPC servers and workstations and enjoy all the benefits that water-cooled HPC systems can provide.

The Advantages of Liquid Cooling for HPC Systems
High Performance Computing facilities use servers and workstations that leverage powerful GPUs, multithreaded CPUs, petabytes of RAM, and extensive storage space. All this computing power, despite modern advancements in manufacturing and power efficiency, generates tremendous amounts of heat. In fact, it is estimated that up to 40% of the total energy requirements of these facilities is spent on keeping the IT equipment within acceptable operating temperatures. Plus, the components that use conventional air cooling tend to underperform and have shorter lifespans.

Intel, Dell Technologies and University of Cambridge Announce Deployment of Dawn Supercomputer

Dell Technologies, Intel and the University of Cambridge announce the deployment of the co-designed Dawn Phase 1 supercomputer. Leading technical teams built the U.K.'s fastest AI supercomputer that harnesses the power of both artificial intelligence (AI) and high performance computing (HPC) to solve some of the world's most pressing challenges. This sets a clear way forward for future U.K. technology leadership and inward investment into the U.K. technology sector. Dawn kickstarts the recently launched U.K. AI Research Resource (AIRR), which will explore the viability of associated systems and architectures. Dawn brings the U.K. closer to reaching the compute threshold of a quintillion (10^18) floating point operations per second - one exaflop, better known as exascale. For perspective: Every person on earth would have to make calculations 24 hours a day for more than four years to equal a second's worth of processing power in an exascale system.
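The every-person-on-earth comparison checks out arithmetically. A quick sketch, assuming a world population of roughly 8 billion, each performing one calculation per second:

```python
# Verify the exascale analogy: how long would all of humanity need,
# at one calculation per person per second, to match a single second
# of an exaflop (1e18 operations/second) machine?
EXAFLOP_OPS = 1e18   # operations in one exascale-second
population = 8e9     # assumed world population

seconds_needed = EXAFLOP_OPS / population          # 1.25e8 seconds
years_needed = seconds_needed / (365 * 24 * 3600)  # just under 4 years
print(f"{years_needed:.1f} years")
```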

"Dawn considerably strengthens the scientific and AI compute capability available in the U.K., and it's on the ground, operational today at the Cambridge Open Zettascale Lab. Dell PowerEdge XE9640 servers offer a no-compromises platform to host the Intel Data Center GPU Max Series accelerator, which opens up the ecosystem to choice through oneAPI. I'm very excited to see the sorts of early science this machine can deliver and continue to strengthen the Open Zettascale Lab partnership between Dell Technologies, Intel and the University of Cambridge, and further broaden that to the U.K. scientific and AI community," said Adam Roe, EMEA HPC technical director at Intel.