News Posts matching #EPYC


MSI Powers the Future of Cloud Computing at CloudFest 2025

MSI, a leading global provider of high-performance server solutions, unveiled its next-generation server platforms—ORv3 Servers, DC-MHS Servers, and NVIDIA MGX AI Servers—at CloudFest 2025, held from March 18-20 at booth H02. The ORv3 Servers focus on modularity and standardization to enable seamless integration and rapid scalability for hyperscale growth. Complementing this, the DC-MHS Servers emphasize modular flexibility, allowing quick reconfiguration to adapt to diverse data center requirements while maximizing rack density for sustainable operations. Together with NVIDIA MGX AI Servers, which deliver exceptional performance for AI and HPC workloads, MSI's comprehensive solutions empower enterprises and hyperscalers to redefine cloud infrastructure with unmatched flexibility and performance.

"We're excited to present MSI's vision for the future of cloud infrastructure," said Danny Hsu, General Manager of MSI's Enterprise Platform Solutions. "Our next-generation server platforms address the critical needs of scalability, efficiency, and sustainability. By offering modular flexibility, seamless integration, and exceptional performance, we empower businesses, hyperscalers, and enterprise data centers to innovate, scale, and lead in this cloud-powered era."

GIGABYTE Showcases Cutting-Edge AI and Cloud Computing Solutions at CloudFest 2025

Giga Computing, a subsidiary of GIGABYTE, a global leader in IT technology solutions, is thrilled to announce its participation at CloudFest 2025, the world's premier cloud, hosting, and internet infrastructure event. As a key exhibitor, Giga Computing will highlight its latest innovations in AI, cloud computing, and edge solutions at the GIGABYTE booth. In line with its commitment to shaping the future of AI development and deployment, the GIGABYTE booth will showcase its industry-leading hardware and platforms optimized for AI workloads, cloud applications, and edge computing. As cloud adoption continues to accelerate, Giga Computing solutions are designed to empower businesses with unparalleled performance, scalability, and efficiency.

At CloudFest 2025, Giga Computing invites attendees to visit booth #E03 to experience firsthand its cutting-edge cloud computing solutions. From state-of-the-art hardware to innovative total solutions, a comprehensive suite of products and services designed to meet the evolving needs of the cloud industry is being showcased.

ASUS Showcases Servers Based on Intel Xeon 6, Intel Gaudi 3 at CloudFest 2025

ASUS today announced its showcase of comprehensive AI infrastructure solutions at CloudFest 2025, bringing together cutting-edge hardware powered by Intel Xeon 6 processors, NVIDIA GPUs and AMD EPYC processors. The company will also highlight its integrated software platforms, reinforcing its position as a total AI solution provider for enterprises seeking seamless AI deployments from edge to cloud.

Intel Xeon 6-based AI solutions and Gaudi 3 acceleration for generative AI inferencing and fine-tuning
ASUS Intel Xeon 6-based servers leverage the Data Center Modular Hardware System (DC-MHS) architecture, providing unparalleled scalability, cost-efficiency, and simplified maintenance. ASUS will showcase a comprehensive family of Intel Xeon 6-based servers at CloudFest 2025, including the RS700-E12, RS720Q-E12, and ESC8000-E12P-series servers. The ESC8000-E12P-series servers will debut the Intel Gaudi 3 AI accelerator PCIe card. This lineup underscores the ASUS commitment to delivering comprehensive AI solutions that integrate cutting-edge hardware with enterprise-grade software platforms for seamless, scalable AI deployments, highlighting Intel's latest innovations for high-performance AI training, inference, and cloud-native workloads.

AMD Recommends EPYC Processors for Everyday AI Server Tasks

Ask a typical IT professional today whether they're leveraging AI, and there's a good chance they'll say yes; after all, they have reputations to protect! Kidding aside, many will report that their teams use web-based tools like ChatGPT, or internal chatbots that serve employees on the intranet, but that beyond this, not much AI is really being implemented at the infrastructure level. As it turns out, the true answer is a bit different. AI tools and techniques have embedded themselves firmly into standard enterprise workloads and are a more common, everyday phenomenon than even many IT people realize. Assembly line operations now include computer vision-powered inspections. Supply chains use AI for demand forecasting to make business move faster. And of course, AI note-taking and meeting summaries are embedded in virtually every variant of collaboration and meeting software.

Increasingly, critical enterprise software tools incorporate built-in recommendation systems, virtual agents or some other form of AI-enabled assistance. AI is truly becoming a pervasive, complementary tool for everyday business. At the same time, today's enterprises are navigating a hybrid landscape where traditional, mission-critical workloads coexist with innovative AI-driven tasks. This "mixed enterprise and AI" workload environment calls for infrastructure that can handle both types of processing seamlessly. Robust, general-purpose CPUs like the AMD EPYC processors are designed to be powerful, secure, and flexible enough to address this need. They handle everyday tasks—running databases, web servers, ERP systems—and offer strong security features crucial for enterprise operations augmented with AI workloads. In essence, modern enterprise infrastructure is about creating a balanced ecosystem. AMD EPYC CPUs play a pivotal role in creating this balance, delivering high performance, efficiency, and security features that underpin both traditional enterprise workloads and advanced AI operations.

Advantech Launches Next-Gen Edge AI Solutions Powered by the AMD Compute Portfolio

Advantech, a global leader in intelligent IoT systems and embedded platforms, is excited to introduce its latest AIR series Edge AI systems, powered by the comprehensive AMD compute portfolio. These next-generation solutions leverage AMD Ryzen and EPYC processors alongside Instinct MI210 accelerators and Radeon PRO GPUs, delivering exceptional AI computing performance for demanding edge applications.

"Advantech and AMD continue to strengthen our collaboration in the Edge AI era, integrating advanced CPU platforms with high-performance AI accelerators and GPU solutions," said Aaron Su, Vice President of Advantech Embedded IoT Group. "This joint effort enables cutting-edge computing power to meet the demands of rapidly evolving embedded AI applications."

GIGABYTE Showcases Future-Ready AI and HPC Technologies for High-Efficiency Computing at SCA 2025

Giga Computing, a subsidiary of GIGABYTE and a pioneer in AI-driven enterprise computing, is set to make a significant impact at Supercomputing Asia 2025 (SCA25) in Singapore (March 11-13). At booth #D5, GIGABYTE showcases its latest advancements in liquid cooling and its solutions for AI training and high-performance computing (HPC). The booth highlights GIGABYTE's innovative technology and comprehensive direct liquid cooling (DLC) strategies, reinforcing its commitment to energy-efficient, high-performance computing.

Revolutionizing AI Training with DLC
A key highlight of GIGABYTE's showcase is the NVIDIA HGX H200 platform, a next-generation solution for AI workloads. GIGABYTE is presenting both its liquid-cooled G4L3-SD1 server and its air-cooled G893 series, providing businesses with advanced cooling solutions tailored for high-performance demands. The G4L3-SD1 server, equipped with CoolIT Systems' cold plates, effectively cools Intel Xeon CPUs and eight NVIDIA H200 GPUs, ensuring optimal performance with enhanced energy efficiency.

AMD Launches the EPYC Embedded 9005 "Turin" Family of Server Processors

AMD today launched the EPYC Embedded 9005 line of server processors in an embedded form-factor. These are non-socketed variants of the EPYC 9005 "Turin" server processors, intended for servers and other enterprise applications where processor replacement or upgradability is not a consideration. The EPYC Embedded 9005 "Turin" chips are otherwise identical to the regular socketed EPYC 9005 series: they are based on a BGA version of the "Turin" chiplet-based processor and powered by the "Zen 5" microarchitecture. Besides the BGA package, the EPYC Embedded 9005 series comes with a few features relevant to its form-factor and target use-cases.

To begin with, the EPYC Embedded 9005 "Turin" series comes with NTB (non-transparent bridging), a technology that enables high-performance data transfer between two processor packages across different memory domains. NTB doesn't use Infinity Fabric or even CXL, but a regular PCI-Express 5.0 x16 connection. It isn't intended to provide cache coherence, but to absorb faults across various memory domains. Next up, the series supports DRAM flush for enhanced power-loss mitigation. Upon detecting a power loss, the processor immediately dumps memory onto NVMe storage, before the machine turns off. On restart, the BIOS copies this memory dump from the NVMe SSD back to DRAM. Thirdly, the processors in the series support dual SPI flash interfaces, which enables system architects to embed lightweight operating systems directly onto a 64 MB SPI flash ROM, besides the primary SPI flash that stores the system BIOS. This lightweight OS can act like a bootloader for operating systems in other local storage devices.
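The DRAM-flush flow described above (dump memory to NVMe on a power-loss event, restore it at the next boot) can be sketched as a toy simulation. The function names and data below are illustrative stand-ins, not AMD firmware APIs:

```python
# Hypothetical illustration of the EPYC Embedded 9005 DRAM-flush sequence:
# on power loss the processor persists DRAM to NVMe; on restart the BIOS
# copies the image back into DRAM before the OS resumes.

def dump_dram_to_nvme(dram: bytearray) -> bytes:
    """Stand-in for the processor's DMA write of DRAM contents to the SSD."""
    return bytes(dram)

def restore_dram_from_nvme(nvme_image: bytes) -> bytearray:
    """Stand-in for the BIOS copying the NVMe image back into DRAM."""
    return bytearray(nvme_image)

# In-memory state before the simulated power loss
dram = bytearray(b"in-flight transaction state")

nvme_image = dump_dram_to_nvme(dram)        # power-loss event detected
dram = bytearray(len(dram))                 # power is lost; DRAM contents vanish
dram = restore_dram_from_nvme(nvme_image)   # BIOS restore on the next boot

print(dram.decode())  # the pre-power-loss state is back
```

The value of the feature is exactly this round trip: software sees the same memory image after an unplanned power cut that it saw before it.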

AMD Discusses EPYC's "No Compromise" Driving of Performance and Efficiency

One of the main pillars that vendors of Arm-based processors often cite as a competitive advantage versus x86 processors is a keen focus on energy efficiency and predictability of performance. In the quest for higher efficiency and performance, Arm vendors have largely designed out the ability to operate on multiple threads concurrently—something that most enterprise-class CPUs have enabled for years under the technology description of "SMT"—which was also created in the name of enabling performance and efficiency benefits.

Arm vendors often claim that SMT brings security risks, creates performance unpredictability from shared resource contention and drives added cost and energy needed to implement SMT. Interestingly, Arm does support multi-threading in its Neoverse E1-class processor family for embedded uses such as automotive. Given these incongruities, this blog intends to provide a bit more clarity to help customers assess what attributes of performance and efficiency really bring them value for their critical workloads.

AMD to Discuss Advancing of AI "From the Enterprise to the Edge" at MWC 2025

GSMA MWC Barcelona runs from March 3 to 6, 2025 at the Fira Barcelona Gran Via in Barcelona, Spain. AMD is proud to participate in forward-thinking discussions and demos around AI, edge and cloud computing, the long-term revolutionary potential of moonshot technologies like quantum processing, and more. Check out the AMD hospitality suite in Hall 2 (Stand 2M61) and explore our demos and system design wins. Attendees are welcome to stop by informally or schedule a time slot with us.

As modern networks evolve, high-performance computing, energy efficiency, and AI acceleration are becoming just as critical as connectivity itself. AMD is at the forefront of this transformation, delivering solutions that power next-generation cloud, AI, and networking infrastructure. Our demos this year showcase AMD EPYC, AMD Instinct, and AMD Ryzen AI processors, as well as AMD Versal adaptive SoC and Zynq UltraScale+ RFSoC devices.

AMD & Nutanix Solutions Discuss Energy Efficient EPYC 9004 CPU Family

AMD and Nutanix have jointly developed virtualization/HCI solutions since 2019, working with major OEMs including Dell, HP and Lenovo, systems integrators and other resellers and partners. You can learn more about AMD-Nutanix solutions here.

AMD EPYC Processors
The EPYC 9004 family of high-performance processors provides up to 128 cores per processor to help meet the demands of a wide range of workloads and use cases. High core densities let you reduce the number of servers you need by as much as five to one when retiring older, inefficient servers and replacing them with new ones. Systems based on AMD processors can also be more energy efficient than many competing processor-based systems. For example, running 2000 VMs on 11 2P AMD EPYC 9654 processor-powered servers will use up to 29% less power annually than the 17 2P Intel Xeon Platinum 8490H processor-based servers required to deliver the same performance, while helping reduce CAPEX by up to 46%.
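The energy claim above is straightforward arithmetic once per-server wall power is assumed. Only the server counts (11 vs. 17) come from the text; the wattage figures below are hypothetical placeholders chosen for illustration, under which the stated 29% saving falls out:

```python
# Fleet-level energy comparison for the 2000-VM example above.
# Server counts are from the text; per-server wall power is assumed.

HOURS_PER_YEAR = 8760

def annual_kwh(servers: int, watts_per_server: float) -> float:
    """Annual energy consumption of a fleet, in kWh."""
    return servers * watts_per_server * HOURS_PER_YEAR / 1000

amd_kwh   = annual_kwh(11, 1100.0)  # 11 EPYC 9654 servers (assumed 1100 W each)
intel_kwh = annual_kwh(17, 1000.0)  # 17 Xeon 8490H servers (assumed 1000 W each)

savings = 1 - amd_kwh / intel_kwh
print(f"AMD fleet uses {savings:.0%} less energy per year")  # 29% with these inputs
```

Real savings depend entirely on measured per-server power under the actual workload; the point is that fewer, denser servers shrink the fleet's total draw even when each individual server draws more.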

Intel Xeon Server Processor Shipments Fall to a 13-Year Low

Intel's data center business has declined considerably in recent years. Once the go-to choice for data center buildouts, Xeon processor shipments have now reached a 13-year low. According to SemiAnalysis analyst Sravan Kundojjala on X, volumes have fallen to less than 50% of the peak observed in 2021. In a chart indexed to 2011 CPU volume, the analysis, gathered from server volumes and 10-K filings, shows the decline Intel has experienced since that peak. The main cause of the contraction is Intel's competitors gaining massive traction. AMD, with its EPYC CPUs, has been Intel's primary competitor, pushing the boundaries of CPU core count per socket and performance per watt, all at an attractive price point.

During a recent earnings call, Intel's interim co-CEO leadership admitted that Intel is still behind the competition on performance, even with Granite Rapids and Clearwater Forest, which were promised to be its advantage in the data center. "So I think it would not be unfathomable that I would put a data center product outside if that meant that I hit the right product, the right market window as well as the right performance for my customers," said Intel co-CEO Michelle Johnston Holthaus, adding that "Intel Foundry will need to earn my business every day, just as I need to earn the business of my customers." This confirms that the company is now dedicated to restoring its product leadership, even if that means going outside its internal foundry. It will take time for Intel's CPU shipment volumes to recover, and with AMD executing well in the data center, the battle is becoming highly intense.

AMD "Zen 1" to "Zen 4" Processors Affected by Microcode Signature Verification Vulnerability

Google's Security Research team has just published its latest research on a fundamental flaw in the microcode patch verification system that affects AMD processors from the "Zen 1" through "Zen 4" generations. The vulnerability stems from an inadequate hash function implementation in the CPU's signature validation process for microcode updates. It enables attackers with local administrator privileges (ring 0 from outside a VM) to inject malicious microcode patches, potentially compromising AMD SEV-SNP-protected confidential computing workloads and Dynamic Root of Trust Measurement systems. Google disclosed this high-severity issue to AMD on September 25, 2024, leading to AMD's release of an embargoed fix to customers on December 17, 2024, with public disclosure following on February 3, 2025. However, due to the complexity of supply chain dependencies and remediation requirements, comprehensive technical details are being withheld until March 5, 2025, allowing organizations time to implement necessary security measures and re-establish trust in their confidential compute environments.

AMD has released comprehensive mitigation measures through AGESA firmware updates across its entire EPYC server processor lineup, from the first-generation Naples to the latest Genoa-X and Bergamo architectures. The security patch, designated as CVE-2024-56161 with a high severity rating of 7.2, introduces critical microcode updates: Naples B2 processors require uCode version 0x08001278, Rome B0 systems need 0x0830107D, while Milan and Milan-X variants mandate versions 0x0A0011DB and 0x0A001244 respectively. For the latest Genoa-based systems, including Genoa-X and Bergamo/Siena variants, the required microcode versions are 0x0A101154, 0x0A10124F, and 0x0AA00219. These updates implement robust protections across all SEV security features - including SEV, SEV-ES, and SEV-SNP - while introducing new restrictions on microcode hot-loading capabilities to prevent future exploitation attempts.
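The per-platform minimum microcode levels listed above can be collected into a simple lookup table. The check helper below is a hypothetical sketch for comparing a reported patch level against those published minimums, not an AMD-provided tool:

```python
# Minimum microcode patch levels for CVE-2024-56161, as listed above.
# is_patched() is an illustrative helper, not an official AMD utility.

MIN_UCODE = {
    "Naples B2":     0x08001278,
    "Rome B0":       0x0830107D,
    "Milan":         0x0A0011DB,
    "Milan-X":       0x0A001244,
    "Genoa":         0x0A101154,
    "Genoa-X":       0x0A10124F,
    "Bergamo/Siena": 0x0AA00219,
}

def is_patched(platform: str, reported_level: int) -> bool:
    """True if the reported microcode level meets the published minimum."""
    return reported_level >= MIN_UCODE[platform]

print(is_patched("Milan", 0x0A0011DB))    # True: exactly at the minimum
print(is_patched("Rome B0", 0x0830107C))  # False: one revision short
```

On Linux, the currently loaded level is reported as the `microcode` field in /proc/cpuinfo, which is what a fleet audit would feed into a check like this.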

AMD Believes EPYC CPUs & Instinct GPUs Will Accelerate AI Advancements

If you're looking for innovative use of AI technology, look to the cloud. Gartner reports, "73% of respondents to the 2024 Gartner CIO and Tech Executive Survey have increased funding for AI." And IDC says that AI "will have a cumulative global economic impact of $19.9 trillion through 2030." But end users aren't running most of those AI workloads on their own hardware. Instead, they are largely relying on cloud service providers and large technology companies to provide the infrastructure for their AI efforts. This approach makes sense since most organizations are already heavily reliant on the cloud. According to O'Reilly, more than 90% of companies are using public cloud services. And they aren't moving just a few workloads to the cloud. That same report shows a 175% growth in cloud-native interest, indicating that companies are committing heavily to the cloud.

As a result of this demand for infrastructure to power AI initiatives, cloud service providers are finding it necessary to rapidly scale up their data centers. IDC predicts: "the surging demand for AI workloads will lead to a significant increase in datacenter capacity, energy consumption, and carbon emissions, with AI datacenter capacity projected to have a compound annual growth rate (CAGR) of 40.5% through 2027." While this surge creates massive opportunities for service providers, it also introduces some challenges. Providing the computing power necessary to support AI initiatives at scale, reliably and cost-effectively is difficult. Many providers have found that deploying AMD EPYC CPUs and Instinct GPUs can help them overcome those challenges. Here's a quick look at three service providers who are using AMD chips to accelerate AI advancements.

Intel Cuts Xeon 6 Prices up to 30% to Battle AMD in the Data Center

Intel has implemented substantial price cuts across its Xeon 6 "Granite Rapids" server processor lineup, marking a significant shift in its data center strategy. The reductions, quietly introduced and reflected in Intel's ARK database, come just four months after the processors' September launch. The most dramatic cut affects Intel's flagship 128-core Xeon 6980P, whose price dropped 30%, from $17,800 to $12,460. This aggressive pricing positions the processor below AMD's competing 128-core EPYC "Turin" 9755 on both absolute and per-core pricing, intensifying the rivalry between the two semiconductor giants. AMD's 128-core SKU is now pricier at $12,984, with higher-core-count SKUs reaching up to $14,813 for the 192-core EPYC 9965 based on the Zen 5c core. Intel is expected to release 288-core "Sierra Forest" Xeon SKUs this quarter, at which point the updated pricing structure can be compared against AMD's.

Additionally, Intel's price adjustments extend beyond the flagship model, with three of the five Granite Rapids processors receiving substantial reductions. The 96-core Xeon 6972P and 6952P models have been marked down by 13% and 20% respectively. These cuts make Intel's offerings particularly attractive to cloud providers who prioritize core density and cost efficiency. However, Intel's competitive pricing comes with trade-offs. The higher power consumption of Intel's processors—exemplified by the 96-core Xeon 6972P's 500 W requirement, which exceeds AMD's comparable model by 100 W—could offset the initial savings through increased operational costs. Ultimately, most of the data center buildout will be won by whoever can serve the most CPU volume shipped (read wafer production capacity) and the best TCO/ROI balance, including power consumption and performance.
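The power trade-off in the last paragraph can be made concrete with a back-of-the-envelope calculation. The 100 W delta comes from the text; the electricity price, 24/7 duty cycle, and the purchase-price advantage used below (the flagship comparison from the previous paragraph) are assumptions for illustration:

```python
# Rough check: how quickly does a 100 W power penalty erode a purchase-price
# advantage? Electricity price and duty cycle are assumed values.

PRICE_PER_KWH = 0.12    # assumed data-center electricity price, USD/kWh
HOURS_PER_YEAR = 8760   # assumed 24/7 operation

extra_watts = 100       # Xeon 6972P draws ~100 W more than AMD's comparable SKU
extra_cost_per_year = extra_watts / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH
print(f"Extra energy cost: ${extra_cost_per_year:.2f}/year")

# Years for the energy penalty to consume a given price advantage
price_advantage = 524.0  # e.g. EPYC 9755 at $12,984 vs Xeon 6980P at $12,460
years_to_break_even = price_advantage / extra_cost_per_year
print(f"Break-even after {years_to_break_even:.1f} years")
```

With these inputs the penalty is roughly $105 per server per year, so over a typical multi-year refresh cycle the operational cost can indeed offset a few hundred dollars of purchase-price savings, which is the trade-off the paragraph describes. Cooling overhead (PUE) would make the penalty larger still.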

Supermicro Empowers AI-driven Capabilities for Enterprise, Retail, and Edge Server Solutions

Supermicro, Inc. (SMCI), a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, is showcasing the latest solutions for the retail industry in collaboration with NVIDIA at the National Retail Federation (NRF) annual show. As generative AI (GenAI) grows in capability and becomes more easily accessible, retailers are leveraging NVIDIA NIM microservices, part of the NVIDIA AI Enterprise software platform, for a broad spectrum of applications.

"Supermicro's innovative server, storage, and edge computing solutions improve retail operations, store security, and operational efficiency," said Charles Liang, president and CEO of Supermicro. "At NRF, Supermicro is excited to introduce retailers to AI's transformative potential and to revolutionize the customer's experience. Our systems here will help resolve day-to-day concerns and elevate the overall buying experience."

Gigabyte Expands Its Accelerated Computing Portfolio with New Servers Using the NVIDIA HGX B200 Platform

Giga Computing, a subsidiary of GIGABYTE and an industry leader in generative AI servers and advanced cooling technologies, announced new GIGABYTE G893 series servers using the NVIDIA HGX B200 platform. The launch of these flagship 8U air-cooled servers, the G893-SD1-AAX5 and G893-ZD1-AAX5, signifies a new architecture and platform change for GIGABYTE in the demanding world of high-performance computing and AI, setting new standards for speed, scalability, and versatility.

These servers join GIGABYTE's accelerated computing portfolio alongside the NVIDIA GB200 NVL72 platform, which is a rack-scale design that connects 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell GPUs. At CES 2025 (January 7-10), the GIGABYTE booth will display the NVIDIA GB200 NVL72, and attendees can engage in discussions about the benefits of GIGABYTE platforms with the NVIDIA Blackwell architecture.

Linux Kernel Patch Fixes Minutes-Long Boot Times on AMD "Zen 1" and "Zen 2" Processors

A significant fix has been submitted to the Linux kernel 6.13-rc1 that addresses prolonged boot times affecting older AMD processors, specifically targeting "Zen 1" and "Zen 2" architectures. The issue, which has been present for approximately 18 months, could cause boot delays ranging from several seconds to multiple minutes in extreme cases. The problem was discovered by a Nokia engineer who reported inconsistent boot delays across multiple AMD EPYC servers. The most severe instances showed the initial unpacking process taking several minutes longer than expected, though not all boots were affected. Investigation revealed that the root cause stemmed from a kernel modification implemented in June 2023, specifically related to CPU microcode update handling.

The technical issue was identified as a missing step in the boot process: Zen 1 and Zen 2 processors require the patch buffer mapping to be flushed from the Translation Lookaside Buffer (TLB) after applying CPU microcode updates during startup. The fix, submitted as part of the "x86/urgent" material ahead of the Linux 6.13-rc1 release, implements the necessary TLB flush for affected AMD Ryzen and EPYC systems. This addition eliminates what developers described as "unnecessary and unnatural delays" in the boot process. While the solution will be included in the upcoming Linux 6.13 kernel release, plans are in place to back-port the fix to stable kernel versions to help cover most Linux users on older Zen architectures.

Advantech Unveils AMD-Powered Network Appliances

To address the growing demands for agile embedded networking, intelligent edge, and secure communication, Advantech, a leading provider of network security solutions, has launched a new series of x86 network appliances: FWA-6183, FWA-5082, and FWA-1081. Powered by AMD EPYC 9004 and 8004, and AMD Ryzen V3000 series processors, this series delivers advanced computing performance, high bandwidth, and lower TDP. These appliances are optimized for a wide range of workloads, from SME to large-scale enterprise network security applications, including edge computing, WAN optimization, DPI/IPS/IDS, SD-WAN/SASE, and NGFW/UTM.

Key AMD Embedded Network Advantages
AMD EPYC & Ryzen Series Processors:
  • Breakthrough Performance: up to 96 cores/192 threads, ensuring scalable processing power
  • Expansive I/O Options: PCIe Gen 5 with up to 128 lanes for high bandwidth and maximum I/O flexibility
  • Optimized Power Efficiency

AMD Custom Makes CPUs for Azure: 88 "Zen 4" Cores and HBM3 Memory

Microsoft has announced its new Azure HBv5 virtual machines, featuring unique custom hardware made by AMD. CEO Satya Nadella made the announcement during Microsoft Ignite, introducing a custom-designed AMD processor solution that achieves remarkable performance metrics. The new HBv5 virtual machines deliver an extraordinary 6.9 TB/s of memory bandwidth, utilizing four specialized AMD processors equipped with HBM3 technology. This represents an eightfold improvement over existing cloud alternatives and a staggering 20-fold increase compared to previous Azure HBv3 configurations. Each HBv5 virtual machine boasts impressive specifications, including up to 352 AMD EPYC "Zen 4" CPU cores capable of reaching 4 GHz peak frequencies. The system provides users with 400-450 GB of HBM3 RAM and features double the Infinity Fabric bandwidth of any previous AMD EPYC server platform. Given that each VM has four CPUs, this yields 88 "Zen 4" cores per CPU socket, with 9 GB of memory per core.
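The per-socket figures follow from the VM-level numbers by simple division; all inputs below come from the text above:

```python
# Derive the per-socket figures for Azure HBv5 from the VM-level specs.
vm_cores = 352          # EPYC "Zen 4" cores per HBv5 VM
sockets_per_vm = 4      # four custom AMD processors per VM
vm_bandwidth_tbs = 6.9  # aggregate HBM3 memory bandwidth, TB/s

print(vm_cores // sockets_per_vm)                    # 88 cores per socket
print(round(vm_bandwidth_tbs / sockets_per_vm, 3))   # 1.725 TB/s per socket
```

That ~1.7 TB/s per socket is several times what a DDR5-based EPYC socket delivers, which is why Microsoft positions HBv5 at memory-bandwidth-bound HPC workloads.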

The architecture includes 800 Gb/s of NVIDIA Quantum-2 InfiniBand connectivity and 14 TB of local NVMe SSD storage. The development marks a strategic shift in addressing memory performance limitations, which Microsoft identifies as a critical bottleneck in HPC applications. This custom design particularly benefits sectors requiring intensive computational resources, including automotive design, aerospace simulation, weather modeling, and energy research. While the CPU appears custom-designed for Microsoft's needs, it bears similarities to previously rumored AMD processors, suggesting a possible connection to the speculated MI300C chip architecture. The system's design choices, including disabled SMT and single-tenant configuration, clearly focus on optimizing performance for specific HPC workloads. As readers may recall, Intel also made customized Xeons for AWS and its needs, which is normal in the hyperscaler space, given that hyperscalers drive most of the revenue.

MiTAC Unveils New AI/HPC-Optimized Servers With Advanced CPU and GPU Integration

MiTAC Computing Technology Corporation, an industry-leading server platform design manufacturer and a subsidiary of MiTAC Holdings Corporation (TSE:3706), is unveiling its new server lineup at SC24, booth #2543, in Atlanta, Georgia. MiTAC Computing's servers integrate the latest AMD EPYC 9005 Series CPUs, AMD Instinct MI325X GPU accelerators, Intel Xeon 6 processors, and professional GPUs to deliver enhanced performance optimized for HPC and AI workloads.

Leading Performance and Density for AI-Driven Data Center Workloads
MiTAC Computing's new servers, powered by AMD EPYC 9005 Series CPUs, are optimized for high-performance AI workloads. At SC24, MiTAC highlights two standout AI/HPC products: the 8U dual-socket MiTAC G8825Z5, featuring AMD Instinct MI325X GPU accelerators, up to 6 TB of DDR5 6000 memory, and eight hot-swap U.2 drive trays, ideal for large-scale AI/HPC setups; and the 2U dual-socket MiTAC TYAN TN85-B8261, designed for HPC and deep learning applications with support for up to four dual-slot GPUs, twenty-four DDR5 RDIMM slots, and eight hot-swap NVMe U.2 drives. For mainstream cloud applications, MiTAC offers the 1U single-socket MiTAC TYAN GC68C-B8056, with twenty-four DDR5 DIMM slots and twelve tool-less 2.5-inch NVMe U.2 hot-swap bays. Also featured is the 2U single-socket MiTAC TYAN TS70A-B8056, designed for high-IOPS NVMe storage, and the 2U 4-node single-socket MiTAC M2810Z5, supporting up to 3,072 GB of DDR5 6000 RDIMM memory and four easy-swap E1.S drives per node.

MSI Presents New AMD EPYC 9005 Series CPU-Based Server Platforms at SC24

MSI, a leading global provider of high-performance server solutions, is excited to unveil its latest AMD EPYC 9005 Series CPU-based server boards and platforms at SC24 (SuperComputing 2024), Booth #3655, from November 19-21. Built on the OCP Modular Hardware System (DC-MHS) architecture, these new platforms deliver high-density, AI-ready solutions, including multi-node, enterprise, CXL memory expansion, and GPU servers, designed to meet the intensive demands of modern data centers.

"As AI continues to reshape the landscape of data center infrastructure, MSI's servers, powered by the AMD EPYC 9005 Series processors, offer unmatched density, energy efficiency, and cost optimization—making them ideal for modern data centers," said Danny Hsu, General Manager of Enterprise Platform Solutions. "Our servers optimize thermal management and performance for virtualized and containerized environments, positioning MSI at the forefront of AI and cloud-based workloads."

Arctic Unveils Freezer 4U-SP5 Cooler for AMD Socket SP5

ARCTIC is expanding its range of high-performance coolers for servers and workstations. With the Freezer 4U-SP5, ARCTIC is introducing a specialist that was specifically developed for SP5 EPYC processors and offers optimal cooling for the highest demands. With a maximum power consumption of only 6.96 W, the Freezer 4U-SP5 cools even the most powerful AMD processors with 128 cores and 360 W TDP extremely efficiently.

The targeted arrangement of the heat pipes at the high-temperature zones of the PCB edges enables rapid heat dissipation exactly where it is needed. The Freezer 4U-SP5's fans work in a push-pull configuration and can be controlled via PWM, which ensures performance-optimized operation. The long-lasting, particularly low-wear double ball bearings make it ideal for 24/7 continuous operation. With a height of only 145 mm, the Freezer 4U-SP5 fits into common server cases from 4U/4HE and can also be used on dual-CPU mainboards.

ASRock Rack Brings End-to-End AI and HPC Server Portfolio to SC24

ASRock Rack Inc., a leading innovative server company, today announces its presence at SC24, held at the Georgia World Congress Center in Atlanta from November 18-21. At booth #3609, ASRock Rack will showcase a comprehensive high-performance portfolio of server boards, systems, and rack solutions with NVIDIA accelerated computing platforms, helping address the needs of enterprises, organizations, and data centers.

Artificial intelligence (AI) and high-performance computing (HPC) continue to reshape technology. ASRock Rack is presenting a complete suite of solutions spanning edge, on-premise, and cloud environments, engineered to meet the demands of AI and HPC. The 2U short-depth MECAI, incorporating the NVIDIA GH200 Grace Hopper Superchip, is developed to supercharge accelerated computing and generative AI in space-constrained environments. The 4U10G-TURIN2 and 4UXGM-GNR2, supporting ten and eight NVIDIA H200 NVL PCIe GPUs respectively, aim to help enterprises and researchers tackle every AI and HPC challenge with enhanced performance and greater energy efficiency. NVIDIA H200 NVL is ideal for lower-power, air-cooled enterprise rack designs that require flexible configurations, delivering acceleration for AI and HPC workloads regardless of size.

AMD Powers El Capitan: The World's Fastest Supercomputer

Today, AMD showcased its ongoing high performance computing (HPC) leadership at Supercomputing 2024 by powering the world's fastest supercomputer for the sixth straight Top 500 list.

The El Capitan supercomputer, housed at Lawrence Livermore National Laboratory (LLNL), powered by AMD Instinct MI300A APUs and built by Hewlett Packard Enterprise (HPE), is now the fastest supercomputer in the world with a High-Performance Linpack (HPL) score of 1.742 exaflops on the latest Top 500 list. El Capitan and the Frontier system at Oak Ridge National Laboratory also placed 18th and 22nd, respectively, on the Green 500 list, showcasing the impressive capabilities of AMD EPYC processors and AMD Instinct GPUs to drive leadership performance and energy efficiency for HPC workloads.

ASUS Presents All-New Storage-Server Solutions to Unleash AI Potential at SC24

ASUS today announced its groundbreaking next-generation infrastructure solutions at SC24, featuring a comprehensive lineup powered by AMD and Intel, as well as liquid-cooling solutions designed to accelerate the future of AI. By continuously pushing the limits of innovation, ASUS simplifies the complexities of AI and high-performance computing (HPC) through adaptive server solutions paired with expert cooling and software-development services, tailored for the exascale era and beyond. As a total-solution provider with a distinguished history in pioneering AI supercomputing, ASUS is committed to delivering exceptional value to its customers.

Comprehensive line-up for AI and HPC success
To fuel enterprise digital transformation through HPC and AI-driven architecture, ASUS provides a full lineup of server systems powered by AMD and Intel. Startups, research institutions, large enterprises, and government organizations can all find adaptive solutions to unlock value from big data and accelerate business agility.