News Posts matching #Xeon

GIGABYTE Showcases Cutting-Edge AI and Cloud Computing Solutions at CloudFest 2025

Giga Computing, a subsidiary of GIGABYTE, a global leader in IT technology solutions, is thrilled to announce its participation at CloudFest 2025, the world's premier cloud, hosting, and internet infrastructure event. As a key exhibitor, Giga Computing will highlight its latest innovations in AI, cloud computing, and edge solutions at the GIGABYTE booth. In line with its commitment to shaping the future of AI development and deployment, the GIGABYTE booth will showcase its industry-leading hardware and platforms optimized for AI workloads, cloud applications, and edge computing. As cloud adoption continues to accelerate, Giga Computing solutions are designed to empower businesses with unparalleled performance, scalability, and efficiency.

At CloudFest 2025, Giga Computing invites attendees to visit booth #E03 to experience its cutting-edge cloud computing solutions firsthand. From state-of-the-art hardware to innovative total solutions, it is showcasing a comprehensive suite of products and services designed to meet the evolving needs of the cloud industry.

ASUS Showcases Servers Based on Intel Xeon 6, Intel Gaudi 3 at CloudFest 2025

ASUS today announced its showcase of comprehensive AI infrastructure solutions at CloudFest 2025, bringing together cutting-edge hardware powered by Intel Xeon 6 processors, NVIDIA GPUs and AMD EPYC processors. The company will also highlight its integrated software platforms, reinforcing its position as a total AI solution provider for enterprises seeking seamless AI deployments from edge to cloud.

Intel Xeon 6-based AI solutions and Gaudi 3 acceleration for generative AI inferencing and fine-tuning
ASUS Intel Xeon 6-based servers leverage the Data Center Modular Hardware System (DC-MHS) architecture, providing unparalleled scalability, cost-efficiency and simplified maintenance. ASUS will showcase a comprehensive family of Intel Xeon 6-based servers at CloudFest 2025, including the RS700-E12, RS720Q-E12 and ESC8000-E12P-series servers. The ESC8000-E12P-series servers will debut the Intel Gaudi 3 AI accelerator PCIe card. This lineup underscores the ASUS commitment to delivering comprehensive AI solutions that integrate cutting-edge hardware with enterprise-grade software platforms for seamless, scalable AI deployments, highlighting Intel's latest innovations for high-performance AI training, inference, and cloud-native workloads.

GIGABYTE Showcases Future-Ready AI and HPC Technologies for High-Efficiency Computing at SCA 2025

Giga Computing, a subsidiary of GIGABYTE and a pioneer in AI-driven enterprise computing, is set to make a significant impact at Supercomputing Asia 2025 (SCA25) in Singapore (March 11-13). At booth #D5, GIGABYTE is showcasing its latest advancements in liquid cooling and its solutions for AI training and high-performance computing (HPC). The booth highlights GIGABYTE's innovative technology and comprehensive direct liquid cooling (DLC) strategies, reinforcing its commitment to energy-efficient, high-performance computing.

Revolutionizing AI Training with DLC
A key highlight of GIGABYTE's showcase is the NVIDIA HGX H200 platform, a next-generation solution for AI workloads. GIGABYTE is presenting both its liquid-cooled G4L3-SD1 server and its air-cooled G893 series, providing businesses with advanced cooling solutions tailored for high-performance demands. The G4L3-SD1 server, equipped with CoolIT Systems' cold plates, effectively cools Intel Xeon CPUs and eight NVIDIA H200 GPUs, ensuring optimal performance with enhanced energy efficiency.

Intel Showcases Foundational Network Infrastructure with Xeon 6 at MWC 2025

The telecommunications industry is undergoing a major transformation as AI and 5G technologies reshape networks and connectivity. While operators are eager to modernize infrastructure, challenges remain, such as high capital expenditures, security concerns and integration with legacy systems. At MWC 2025, Intel - alongside more than 50 partners and customers - will showcase groundbreaking solutions that deliver high-capacity, high-efficiency performance with built-in AI integration, eliminating the need for costly additional hardware and delivering optimized total cost of ownership (TCO).

"By leveraging cloud technologies and fostering close collaborations with partners, we are helping operators virtualize both 5G core and radio access networks - proving that the most demanding, mission-critical workloads can run efficiently on general-purpose silicon," said Sachin Katti, senior vice president and general manager of the Network and Edge Group at Intel Corporation. "Through our Xeon 6 processors, we are enabling the future of AI-powered network modernization."

ASUS Unveils All-New Intel Xeon 6 Server Lineup

ASUS today announced an all-new series of servers powered by the latest Intel Xeon 6 processors, including the Xeon 6900-series, 6500P/6700P-series and 6300-series processors. These powerhouse processors deliver exceptional performance, efficiency and scalability, featuring up to 128 Performance-cores (P-cores) or 288 Efficient-cores (E-cores) per socket, along with native support for PCI Express 5.0 (PCIe 5.0) and DDR5 memory speeds of 6400 MT/s. The latest ASUS server solutions also incorporate an updated BMC module based on the ASPEED 2600 chipset, providing improved manageability, security and compatibility with a wide range of remote management software. The announcement coincides with the unveiling of the latest Intel Xeon 6 processors.
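
As a rough, illustrative sense of what the quoted DDR5-6400 speed implies, here is a minimal Python sketch; the 64-bit data width per channel is standard DDR5, while the 12-channel count per socket used for the 6900-series is an assumption for illustration only.

  # Rough theoretical DDR5-6400 bandwidth, per channel and per socket.
  # The 64-bit data bus per channel is standard DDR5; the 12-channel
  # count for a Xeon 6900-series socket is assumed here for illustration.
  mt_per_s = 6400            # DDR5-6400: million transfers per second
  bytes_per_transfer = 8     # 64-bit data bus per channel
  channels = 12              # assumed channel count per socket

  gb_per_channel = mt_per_s * bytes_per_transfer / 1000        # 51.2 GB/s
  print(f"Per channel: {gb_per_channel:.1f} GB/s")
  print(f"Per socket : {gb_per_channel * channels:.1f} GB/s")  # ~614 GB/s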

Redefining efficiency and scalability
Intel Xeon 6 processors are engineered to meet the needs of modern data centers, AI-driven workloads and enterprise computing. Offering a choice between P-core and E-core architectures, these processors provide flexibility for businesses to optimize performance and energy efficiency based on specific workloads.

GIGABYTE Launches New Servers Using Intel Xeon 6700 & 6500-series Processors and Provides Updates for Servers Using Xeon 6300-series

Giga Computing, a subsidiary of GIGABYTE and an industry leader in generative AI servers and advanced cooling technologies, today announced new GIGABYTE rack servers that are optimized for Intel Xeon 6700/6500-series processors with up to 136 PCIe 5.0 lanes. Additionally, GIGABYTE enterprise products now support the new Intel Xeon 6700/6500-series processors with P-cores as well as the 6300-series.

New Servers Supporting Platform with Up to 136 PCIe Lanes
The new Intel Xeon 6700 and 6500-series include processor SKUs designed to support either up to 136 PCIe 5.0 lanes (referred to as R1S) or 88 PCIe lanes. Optimized for R1S processors, the new GIGABYTE rack servers (R264-SG2, R264-SG3, R264-SG5, R164-SG5, R164-SG6) use the additional PCIe lanes for diverse storage options, dual-slot GPUs, and additional expansion slots, including support for OCP NIC 3.0. These servers will be deployed in applications such as storage, telecom, edge, and more. The new servers support Intel Xeon 6 processors using the LGA 4710 socket, but they are further optimized to take advantage of the additional PCIe lanes that R1S SKUs offer over other Intel Xeon 6 processors. A server that exemplifies this advantage, the R264-SG5, supports up to twenty-eight E3.S Gen 5 NVMe drives while also supporting a dual-slot (Gen 5 x16) GPU.

Lenovo Delivers Unmatched Flexibility, Performance and Design with New ThinkSystem V4 Servers Powered by Intel Xeon 6 Processors

Today, Lenovo announced three new infrastructure solutions, powered by Intel Xeon 6 processors, designed to modernize and elevate data centers of any size into AI-enabled powerhouses. The solutions include next-generation Lenovo ThinkSystem V4 servers that deliver breakthrough performance and exceptional versatility to handle any workload while enabling powerful AI capabilities in compact, high-density designs. Whether deploying at the edge, co-locating or leveraging a hybrid cloud, Lenovo is delivering the right mix of solutions to seamlessly unlock intelligence and bring AI wherever it is needed.

The new Lenovo ThinkSystem servers are purpose-built to run the widest range of workloads, including the most compute-intensive - from algorithmic trading to web serving, astrophysics to email, and CRM to CAE. Organizations can streamline management and boost productivity with the new systems, achieving up to 6.1x higher compute performance than previous-generation CPUs with Intel Xeon 6 P-cores, and up to 2x the memory bandwidth when using new MRDIMM technology, to scale and accelerate AI everywhere.

Intel Unveils High-Performance, Power-Efficient Ethernet Solutions

Intel today launched two new Ethernet product lines - the Intel Ethernet E830 Controllers and Network Adapters, and the Intel Ethernet E610 Controllers and Network Adapters - designed to meet the growing demands of enterprise, telecommunications, cloud, edge, high performance computing (HPC) and artificial intelligence (AI) applications. These next-generation solutions provide robust, high-performance connectivity while enhancing energy efficiency and security, and lowering total cost of ownership (TCO).

"In today's interconnected world, networking is essential to the success of business and technology transformation. With the launch of the Intel Ethernet E830 and E610 products, we are helping customers meet the growing demand for high-performance, energy-efficient solutions that optimize network infrastructures, lower operational costs and enhance TCO." -Bob Ghaffari, Intel vice president, Network and Edge Group

OnLogic Reveals the Axial AX300 Edge Server

OnLogic, a leading provider of edge computing solutions, has launched the Axial AX300, a highly customizable and powerful edge server. The AX300 is engineered to help businesses of any size better leverage their on-site data and unlock the potential of AI by placing powerful computing capabilities on-site.

The Axial AX300 empowers organizations to seamlessly move computing resources closer to the data source, providing significant advantages in performance, latency, operational efficiency, and total cost of ownership over cloud-based data management. With its robust design, flexible configuration options, and advanced security features, the Axial AX300 is the ideal platform for a wide range of highly-impactful edge computing applications, including:
  • AI/ML inference and training: Leveraging the power of AI/ML at the edge for real-time insights, predictive maintenance, and improved decision-making.
  • Data analytics: Processing and analyzing data generated by IoT devices and sensors in real-time to improve operational efficiency.
  • Virtualization: Consolidating multiple workloads onto a single server, optimizing resource utilization and simplifying deployment and management.

Intel Xeon "Granite Rapids-W" Mainstream & Expert HEDT CPUs Leaked

Unannounced Intel processor families have emerged online over the past week—one source, Jaykihn, has unearthed a treasure trove of speculative mobile SKUs. Today's discovery pushes into the enterprise market segment, focusing on Team Blue's "Granite Rapids-W" (GNR-W) platform, likely equipped with Redwood Cove cores. The latest leak suggests an upcoming emergence of "Mainstream" and "Expert" workstation-oriented product tiers, allegedly paired with the company's rumored W890 motherboard chipset. Past generations of "Xeon W" HEDT processor families have rolled out with entry-level and high-end offerings.

The leaker reckons that the mainstream Intel Granite Rapids-W "Xeon W" lineup will arrive with 4-channel memory support and 80 PCIe Gen 5 lanes. Team Blue's higher-end "Expert" tier is anticipated with 128 PCIe Gen 5 lanes and 8-channel memory support. In addition to leaking processor information, Jaykihn outlined basic details regarding the W890 chipset—they believe that board designs will have access to 24 PCIe Gen 4 lanes, as well as 8 Gen 4 lanes for Intel's proprietary DMI link between the CPU and the chipset. The leaker did not divulge details of upcoming socket types—Team Blue is notorious for its elaborate rollouts of multiple LGA platforms.

Intel Xeon Server Processor Shipments Fall to a 13-Year Low

Intel's data center business has experienced significant decline in recent years. Once the go-to choice for data center buildouts, Xeon processor shipments have now reached a 13-year low. According to SemiAnalysis analyst Sravan Kundojjala on X, volumes have fallen to less than 50% of the peak CPU sales observed in 2021. In a chart indexed to 2011 CPU volume, the analysis, gathered from server volumes and 10-K filings, shows the decline Intel has experienced in recent years. Following the 2021 peak, the volume of shipped CPUs has remained in free fall, reaching less than 50% of its once-dominant position. The main cause of this volume contraction is attributed to Intel's competitors gaining massive traction. AMD, with its EPYC CPUs, has been Intel's primary competitor, pushing the boundaries of CPU core count per socket and performance per watt, all at an attractive price point.

During a recent earnings call, Intel's interim co-CEO leadership admitted that Intel is still behind the competition with regard to performance, even with Granite Rapids and Clearwater Forest, which promised to be its advantage in the data center. "So I think it would not be unfathomable that I would put a data center product outside if that meant that I hit the right product, the right market window as well as the right performance for my customers," said Intel co-CEO Michelle Johnston Holthaus, adding that "Intel Foundry will need to earn my business every day, just as I need to earn the business of my customers." This confirms that the company is now dedicated to restoring its product leadership, even if its internal foundry is struggling. It will take some time before Intel's CPU shipment volumes recover, and with AMD executing well in the data center, the battle is becoming highly intense.

Intel Pushes "Clearwater Forest" Xeon CPU Series Launch into 2026

Intel has officially announced that its "Clearwater Forest" Xeon processor family will arrive sometime in the first half of 2026. During a recent earnings call, interim co-CEO Michelle Johnston Holthaus discussed Team Blue's product roadmap for 2025 and beyond: "this year is all about improving Intel Xeon's competitive position as we fight harder to close the gap to the competition. The ramp of Granite Rapids has been a good first step. We are also making good progress on Clearwater Forest, our first Intel 18A server product that we plan to launch in the first half of next year." Press outlets have (correctly) pointed out that Intel's "Clearwater Forest" Xeon processors were originally slated for release in 2025, so the company's executive branch has seemingly admitted, in a low-key manner, that the next-gen series is delayed. Industry whispers from last autumn posited that Team Blue's foundries were struggling with their proprietary 18A (1.8 nm) node process—at the time, watchdogs predicted a postponement of "Clearwater Forest" server processors.

The original timetable had "Clearwater Forest" server CPUs arriving not long after the launch of Intel's latest line of "Sierra Forest" products—288-core models from the Xeon 6-series. The delay into 2026 could be beneficial—The Register proposes that "Xeons bristling with E-cores" have not found a large enough audience. Holthaus disclosed a similar sentiment (in the Q4 earnings call): "what we've seen is that's more of a niche market, and we haven't seen volume materialize there as fast as we expected." Despite rumors swirling around complications affecting chip manufacturing volumes, Intel's temporary co-leaders believe that things are going well. David Zinsner—Team Blue's CFO—stated: "18A has been an area of good progress...Like any new process, there have been ups and downs along the way, but overall, we are confident that we are delivering a competitive process." His colleague added: "as the first volume customer of Intel 18A, I see the progress that Intel Foundry is making on performance and yield, and I look forward to being in production in the second half, as we demonstrate the benefits of our world-class design."

Intel Cuts Xeon 6 Prices up to 30% to Battle AMD in the Data Center

Intel has implemented substantial price cuts across its Xeon 6 "Granite Rapids" server processor lineup, marking a significant shift in its data center strategy. The reductions, quietly introduced and reflected in Intel's ARK database, come just four months after the processors' September launch. The most dramatic cut affects Intel's flagship 128-core Xeon 6980P, which saw its price drop by 30%, from $17,800 to $12,460. This aggressive pricing positions the processor below AMD's competing 128-core EPYC "Turin" 9755 in both absolute and per-core pricing, intensifying the rivalry between the two semiconductor giants. AMD's 128-core SKU is now pricier at $12,984, with higher core-count SKUs reaching up to $14,813 for the 192-core EPYC 9965 based on Zen 5c cores. Intel is expected to release 288-core "Sierra Forest" Xeon SKUs this quarter, at which point we can revisit the pricing structure and compare it to AMD's.
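
As a quick back-of-the-envelope check of the figures above, here is a minimal Python sketch using only the list prices and core counts quoted in this post:

  # Back-of-the-envelope check of the pricing figures quoted above.
  xeon_6980p_list, cut = 17_800, 0.30
  xeon_6980p_new = xeon_6980p_list * (1 - cut)   # 12,460 USD
  epyc_9755, epyc_9965 = 12_984, 14_813          # list prices cited above

  def per_core(price, cores):
      return price / cores

  print(f"Xeon 6980P after cut : ${xeon_6980p_new:,.0f} "
        f"({per_core(xeon_6980p_new, 128):.2f} USD/core)")
  print(f"EPYC 9755 (128 cores): ${epyc_9755:,} "
        f"({per_core(epyc_9755, 128):.2f} USD/core)")
  print(f"EPYC 9965 (192 cores): ${epyc_9965:,} "
        f"({per_core(epyc_9965, 192):.2f} USD/core)")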

Additionally, Intel's price adjustments extend beyond the flagship model, with three of the five Granite Rapids processors receiving substantial reductions. The 96-core Xeon 6972P and 6952P models have been marked down by 13% and 20%, respectively. These cuts make Intel's offerings particularly attractive to cloud providers who prioritize core density and cost efficiency. However, Intel's competitive pricing comes with trade-offs. The higher power consumption of Intel's processors—exemplified by the 96-core Xeon 6972P's 500 W rating, which exceeds AMD's comparable model by 100 W—could offset the initial savings through increased operational costs. Ultimately, most of the data center buildout will be won by whoever can ship the most CPU volume (read: wafer production capacity) and offer the best TCO/ROI balance, including power consumption and performance.
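
For a sense of scale, here is a purely illustrative Python sketch of how that 100 W delta can add up over time; the utilization, PUE, and electricity price below are assumptions made for the example, not figures from this post:

  # Illustrative only: how a 100 W TDP delta can erode an up-front price
  # advantage. Utilization, PUE and electricity price are assumptions.
  extra_watts = 100              # Xeon 6972P (500 W) vs. comparable AMD part
  hours_per_year = 24 * 365      # assumes continuous, full-time operation
  pue = 1.4                      # assumed data-center power usage effectiveness
  usd_per_kwh = 0.10             # assumed electricity price

  extra_kwh = extra_watts / 1000 * hours_per_year * pue
  print(f"Extra energy per year: {extra_kwh:,.0f} kWh per socket")
  print(f"Extra cost per year  : ${extra_kwh * usd_per_kwh:,.0f} per socket")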

Supermicro Empowers AI-driven Capabilities for Enterprise, Retail, and Edge Server Solutions

Supermicro, Inc. (SMCI), a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, is showcasing the latest solutions for the retail industry in collaboration with NVIDIA at the National Retail Federation (NRF) annual show. As generative AI (GenAI) grows in capability and becomes more easily accessible, retailers are leveraging NVIDIA NIM microservices, part of the NVIDIA AI Enterprise software platform, for a broad spectrum of applications.

"Supermicro's innovative server, storage, and edge computing solutions improve retail operations, store security, and operational efficiency," said Charles Liang, president and CEO of Supermicro. "At NRF, Supermicro is excited to introduce retailers to AI's transformative potential and to revolutionize the customer's experience. Our systems here will help resolve day-to-day concerns and elevate the overall buying experience."

Infortrend Unveils New High-Density Storage Solution

Infortrend Technology, Inc., the industry-leading enterprise storage provider, introduces the latest additions to its high-density 4U 90-bay HDD solutions: unified storage EonStor GS 5090 and expansion enclosure JB 4090. The large capacity and high throughput performance of the solutions make them ideal for applications like High-Performance Computing and Media & Entertainment.

EonStor GS 5090 is a high-availability unified storage solution with a dual redundant controller design, ensuring uninterrupted operation. With its new hardware powered by Intel Xeon Scalable processors, the GS 5090 delivers ultra-high performance, achieving 45 GB/s read and 20 GB/s write—around three times the performance of the previous top-performing model, the GS 4090. The GS 5090 introduces ultra-fast 200 GbE connectivity, setting a new benchmark for data transfer speed while simplifying deployment and management to meet the demands of today's high-performance enterprise workloads. It also supports SAS 24G expansion, doubling throughput compared to SAS 12G. Additionally, the GS 5090 features four dedicated U.2 NVMe SSD slots for cache, providing fast data access while fully leveraging the ample capacity of its 90-bay storage.

Qualcomm Pushes for Data Center CPUs, Hires Ex-Intel Chief Xeon Architect

Qualcomm is becoming serious about its server CPU ambitions. Today, we have learned that Sailesh Kottapalli, Intel's former chief architect for Xeon server processors, has joined Qualcomm as Senior Vice President after 28 years at Intel. Kottapalli, who announced his departure on LinkedIn Monday, previously led the development of multiple Xeon and Itanium processors at Intel. Qualcomm's data center team is currently working on reference platforms based on their Snapdragon technology. The company already sells AI accelerator chips under the Qualcomm Cloud AI brand, supported by major providers including AWS, HPE, and Lenovo.

This marks Qualcomm's second attempt at entering the server CPU market, following an unsuccessful Centriq effort that ended in 2018. The company is now leveraging technology from its $1.4 billion Nuvia acquisition in 2021, though this has led to ongoing legal disputes with Arm over licensing terms. While Qualcomm hasn't officially detailed Kottapalli's role, the company confirmed in legal filings its intentions to continue developing data center CPUs, as originally planned by Nuvia.

ENERMAX Launches LIQTECH XTR, Workstation-Level CPU AIO Cooler

ENERMAX is proud to launch its new workstation-level CPU AIO cooler, LIQTECH XTR, specifically designed for AMD Ryzen Threadripper and Intel Xeon W series processors with a cooling capacity of over 550 W. The LIQTECH XTR AIO cooler debuted at Computex 2024 with a live demo, achieving a cooling capacity of over 550 W. The LIQTECH XTR 360 mm CPU cooler features an enlarged cold plate design on its water block, fully covering the IHS area of both AMD Threadripper and Intel Xeon W series processors. Thanks to ENERMAX's high-efficiency EP1 pump with a 450 L/h flow rate and 3,000 RPM radiator fans, LIQTECH XTR can provide more than 550 W of cooling capacity on both AMD and Intel platforms.

The LIQTECH XTR AIO cooler is also the first ENERMAX CPU cooler to implement a magnetic, fully rotatable, real-time status digital display. Users can manually rotate the LED screen on the water block to set their preferred display orientation. ENERMAX's companion tuner software can additionally provide real-time data readouts on the water block. ENERMAX's LIQTECH XTR AIO cooler is the perfect cooling solution for workstations that need to perform heavy-duty tasks such as 3D rendering, product visualization and simulation, artificial intelligence (AI), and machine learning acceleration. It fully supports AMD sockets sTR5/SP6/sWRX8/sTRX4/TR4/SP3 and Intel socket LGA 4677.

$30,000 Music Streaming Server is the Next Audiophile Dream Device

Taiko Audio, a Dutch high-end audio manufacturer, has unveiled what might be the most over-engineered music server ever created—the Extreme Server. With a starting price of €28,000 (US$29,600), this meticulously crafted device embodies either the pinnacle of audio engineering or the epitome of audiophile excess. The Extreme's most distinctive feature is its unique dual-processor architecture, using two Intel Xeon Scalable 10-core CPUs. This unusual configuration isn't just for show—Taiko claims it solves a specific audiophile dilemma: the impact of Roon's music management interface on sound quality. By dedicating the two processors to Roon and the Windows 10 Enterprise LTSC 2019 interface, Taiko has made Roon's processing "virtually inaudible", addressing a concern most music listeners probably never knew existed.

Perhaps the most striking technical achievement is the server's cooling system, or rather, its complete absence of conventional cooling. Taiko designed a custom 240 W passive cooling solution with absolutely no fans or moving parts. The company machined the CPU interface to a mind-boggling precision of 5 microns (0.005 mm) and opted for solid copper heat sinks instead of aluminium, claiming this will extend component life by 4 to 12 years. The attention to detail extends to the memory configuration, where Taiko takes an unconventional approach. The server uses twelve 4 GB custom-made industrial memory modules, each factory pre-selected with components matched to within 1% tolerance. According to Taiko, this reduces the refresh rate burst current by almost 50% and allows for lower operating temperatures. The PSU that powers the PC is a custom 400 W linear power supply, an in-house development designed specifically for the Extreme's unique needs. It combines premium Mundorf and Duelund capacitors for sonic neutrality, Lundahl chokes selected by ear, and extensive vibrational damping using Panzerholz (a compressed wood composite) for durability, low temperature operation, longevity, and exceptional sound quality.

ASUS Presents All-New Storage-Server Solutions to Unleash AI Potential at SC24

ASUS today announced its groundbreaking next-generation infrastructure solutions at SC24, featuring a comprehensive lineup powered by AMD and Intel, as well as liquid-cooling solutions designed to accelerate the future of AI. By continuously pushing the limits of innovation, ASUS simplifies the complexities of AI and high-performance computing (HPC) through adaptive server solutions paired with expert cooling and software-development services, tailored for the exascale era and beyond. As a total-solution provider with a distinguished history in pioneering AI supercomputing, ASUS is committed to delivering exceptional value to its customers.

Comprehensive Lineup for AI and HPC Success
To fuel enterprise digital transformation through HPC and AI-driven architecture, ASUS provides a full lineup of server systems powered by AMD and Intel. Startups, research institutions, large enterprises and government organizations can all find adaptive solutions to unlock value from big data and accelerate business agility.

TOP500: El Capitan Achieves Top Spot, Frontier and Aurora Follow Behind

The 64th edition of the TOP500 reveals that El Capitan has achieved the top spot and is officially the third system to reach exascale computing, after Frontier and Aurora. Both of those systems have since moved down to the No. 2 and No. 3 spots, respectively. Additionally, new systems have found their way into the Top 10.

The new El Capitan system at the Lawrence Livermore National Laboratory in California, U.S.A., has debuted as the most powerful system on the list with an HPL score of 1.742 EFlop/s. It has 11,039,616 combined CPU and GPU cores and is based on AMD 4th generation EPYC processors with 24 cores at 1.8 GHz and AMD Instinct MI300A accelerators. El Capitan relies on a Cray Slingshot 11 network for data transfer and achieves an energy efficiency of 58.89 GigaFLOPS/watt. This power efficiency rating helped El Capitan achieve No. 18 on the GREEN500 list as well.
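
As a back-of-the-envelope aside, the two figures quoted above are enough to estimate the system's power draw during the HPL run. The Python sketch below derives an implied estimate only; it is not a reported measurement:

  # Back-of-the-envelope power estimate for El Capitan, derived only
  # from the two figures quoted above (HPL score and GREEN500 efficiency).
  hpl_eflops = 1.742             # EFlop/s
  gflops_per_watt = 58.89        # GigaFLOPS/watt

  hpl_gflops = hpl_eflops * 1e9                # 1 EFlop/s = 1e9 GFlop/s
  watts = hpl_gflops / gflops_per_watt
  print(f"Implied HPL power draw: {watts / 1e6:.1f} MW")   # ~29.6 MW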

Lenovo Shows 16 TB Memory Cluster with CXL in 128x 128 GB Configuration

Expanding a system's computing capability with an additional accelerator like a GPU is common. However, expanding a system's memory capacity with room for more DIMMs is something new. Thanks to ServeTheHome, we see that at the OCP Summit 2024, Lenovo showcased its ThinkSystem SR860 V3 server, leveraging CXL technology and Astera Labs Leo memory controllers to accommodate a staggering 16 TB of DDR5 memory across 128 DIMM slots. Traditional four-socket servers face limitations due to the memory channels supported by Intel Xeon processors. With each CPU supporting up to 16 DDR5 DIMMs, a four-socket configuration maxes out at 64 DIMMs, equating to 8 TB when using 128 GB RDIMMs. Lenovo's new approach expands this ceiling significantly by incorporating an additional 64 DIMM slots through CXL memory expansion; the arithmetic is sketched below.
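
A minimal Python sketch of that capacity math, using only the DIMM counts and module size quoted above:

  # The arithmetic behind the 8 TB and 16 TB figures quoted above.
  sockets, dimms_per_cpu, dimm_gb = 4, 16, 128

  direct_dimms = sockets * dimms_per_cpu       # 64 directly attached DIMMs
  cxl_dimms = 64                               # added via CXL expansion
  total_dimms = direct_dimms + cxl_dimms       # 128 DIMM slots in total

  print(f"Direct-attached: {direct_dimms * dimm_gb / 1024:.0f} TB")  # 8 TB
  print(f"With CXL       : {total_dimms * dimm_gb / 1024:.0f} TB")   # 16 TB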

The ThinkSystem SR860 V3 integrates Astera Labs Leo controllers to enable the CXL-connected DIMMs. These controllers manage up to four DDR5 DIMMs each, resulting in a layered memory design. The chassis base houses four Xeon processors, each linked to 16 directly connected DIMMs, while the upper section—called the "memory forest"—houses the additional CXL-enabled DIMMs. Beyond memory capabilities, the server supports up to four double-width GPUs, also making it a solution for high-performance computing and AI workloads. This design caters to scale-up applications requiring vast memory resources, such as large-scale database management, and allows data to stay in memory instead of waiting on storage. CXL-based memory architectures are expected to become more common next year. Future developments may see even larger systems with shared memory pools, enabling dynamic allocation across multiple servers. For more pictures and a video walkthrough, check out ServeTheHome's post.

ADATA XPG Debuts First AICORE DDR5 Overclocked R-DIMM for High-end Workstations

XPG, the gaming brand of ADATA Technology (the world's leading brand for memory modules and flash memory) and a fast-growing provider of systems, components, and peripherals for gamers, esports pros, and tech enthusiasts, has expanded its product line and officially entered the workstation field. Today, XPG announced the launch of the first AICORE Overclocked DDR5 R-DIMM, with a maximum speed of 8,000 MT/s and a capacity of 32 GB, making challenging large-capacity workstation memory expansions a breeze. AICORE is built to improve overall system performance, process complex data, handle multitasking more quickly, and maximize the efficiency of AI computing.

Born for High-Speed Computing and AI Development
R-DIMM adopts a Register Clock Driver (RCD) and is characterized by high speed, low latency, and enhanced operational stability. AICORE Overclocked R-DIMM memory is designed for high speed and stability, making it perfect for large-scale data processing, AI generation, 3D rendering and graphics, video post-production editing, and multitasking. AICORE is designed for securities-market applications that emphasize real-time data analysis, as well as for data science professionals and expert image creators looking to accelerate their work and quickly complete large-scale projects. AICORE Overclocked DDR5 R-DIMM memory has a maximum speed of 8,000 MT/s, which is 1.6 times faster than standard R-DIMM memory, delivering more powerful performance to meet the rigorous speed requirements of high-end systems.

MSI Showcases Innovation at 2024 OCP Global Summit, Highlighting DC-MHS, CXL Memory Expansion, and MGX-enabled AI Servers

MSI, a leading global provider of high-performance server solutions, is excited to showcase its comprehensive lineup of motherboards and servers based on the OCP Data Center Modular Hardware System (DC-MHS) architecture at the OCP Global Summit from October 15-17 at booth A6. These cutting-edge solutions represent a breakthrough in server design, enabling flexible deployments for cloud and high-density data centers. Featured innovations include CXL memory expansion servers and AI-optimized servers, demonstrating MSI's leadership in pushing the boundaries of AI performance and computing power.

DC-MHS Series Motherboards and Servers: Enabling Flexible Deployment in Data Centers
"The rapidly evolving IT landscape requires cloud service providers, large-scale data center operators, and enterprises to handle expanding workloads and future growth with more flexible and powerful infrastructure. MSI's new rage of DC-MHS-based solutions provides the needed flexibility and efficiency for modern data center environments," said Danny Hsu, General Manager of Enterprise Platform Solutions.

Flex Announces Liquid-Cooled Rack and Power Solutions for AI Data Centers at 2024 OCP Global Summit

Flex today announced new reference platforms for liquid-cooled servers, racks, and power products that will enable customers to sustainably accelerate data center growth. These innovations build on Flex's ability to address technical challenges associated with power, heat generation, and scale to support artificial intelligence (AI) and high-performance computing (HPC) workloads.

"Flex delivers integrated data center IT and power infrastructure solutions that address the growing power and compute demands in the AI era," said Michael Hartung, president and chief commercial officer, Flex. "We are expanding our unique portfolio of advanced manufacturing capabilities, innovative products, and lifecycle services, enabling customers to deploy IT and power infrastructure at scale and drive AI data center expansion."