News Posts matching #Server

HighPoint M.2/E1.S NVMe RAID AICs Deliver Unparalleled Speed and Scalability for GPU Server Applications

HighPoint Technologies, a leader in advanced PCIe Switch and RAID AIC, Adapter, and Storage Enclosure solutions, has announced an extensive line of M.2 and E1.S RAID AICs engineered for high-performance GPU-centric workloads. Designed for enterprise- and datacenter-class computing environments, HighPoint NVMe RAID AICs deliver class-leading performance and unmatched scalability, enabling modern x86 and Arm platforms to support 4 to 16 NVMe SSDs via a single PCIe Gen 4 or Gen 5 x16 slot. State-of-the-art PCIe switching architecture and flexible RAID technology let administrators tailor M.2 and E1.S storage configurations to a broad range of data-intensive applications, and seamlessly scale or expand existing storage configurations to meet the needs of evolving workflows.

Unprecedented Storage Density
HighPoint NVMe AICs have established a new milestone for M.2 NVMe storage. HighPoint's revolutionary Dual-Width AIC architecture enables a single PCIe Gen 4 or Gen 5 x16 slot to directly host up to 16 M.2 NVMe SSDs and 128 TB of storage capacity at speeds of up to 28 GB/s; a truly unprecedented advancement in compact, single-device storage expansion. State-of-the-art PCIe switching technology and advanced cooling systems maximize transfer throughput and keep M.2 configurations operating at peak efficiency by stopping the performance-sapping threat of thermal throttling in its tracks.
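The headline numbers can be sanity-checked with a little arithmetic. The sketch below is an illustration, not vendor data: it computes the theoretical one-direction bandwidth of a PCIe link from the standard per-lane line rates and 128b/130b encoding, showing that the quoted 28 GB/s sits plausibly below a Gen 4 x16 slot's roughly 31.5 GB/s ceiling, and that 16 SSDs at 8 TB each yields the quoted 128 TB.

```python
# Rough sanity check (assumption: standard PCIe line rates and 128b/130b
# encoding; real-world throughput is lower due to protocol overhead).

def pcie_bandwidth_gbps(gen: int, lanes: int) -> float:
    """Theoretical one-direction PCIe bandwidth in GB/s."""
    line_rate_gts = {3: 8.0, 4: 16.0, 5: 32.0}[gen]  # GT/s per lane
    # Gen 3+ uses 128b/130b encoding: 128 payload bits per 130 bits on the wire.
    return line_rate_gts * lanes * (128 / 130) / 8   # bits -> bytes

gen4_x16 = pcie_bandwidth_gbps(4, 16)   # ~31.5 GB/s theoretical
gen5_x16 = pcie_bandwidth_gbps(5, 16)   # ~63.0 GB/s theoretical

# The quoted 28 GB/s fits within a Gen 4 x16 link's theoretical ceiling.
print(f"Gen4 x16: {gen4_x16:.1f} GB/s, Gen5 x16: {gen5_x16:.1f} GB/s")

# Capacity: 16 M.2 SSDs at 8 TB each gives the quoted 128 TB.
assert 16 * 8 == 128
```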

Advantech Introduces Its GPU Server SKY-602E3 With NVIDIA H200 NVL

Advantech, a leading global provider of industrial edge AI solutions, is excited to introduce its SKY-602E3 GPU server equipped with the NVIDIA H200 NVL platform. This powerful combination is set to accelerate offline LLM deployment for manufacturing, providing unprecedented levels of performance and efficiency. The NVIDIA H200 NVL, which requires 600 W of passive cooling, is fully supported by the compact and efficient SKY-602E3 GPU server, making it an ideal solution for demanding edge AI applications.

Core of Factory LLM Deployment: AI Vision
The SKY-602E3 GPU server excels in supporting large language models (LLMs) for AI inference and training. It features four PCIe 5.0 x16 slots, delivering high bandwidth for intensive tasks, and four PCIe 5.0 x8 slots, providing enhanced flexibility for GPU and frame grabber card expansion. The half-width design of the SKY-602E3 makes it an excellent choice for workstation environments. Additionally, the server can be equipped with the NVIDIA H200 NVL platform, which offers 1.7x more performance than the NVIDIA H100 NVL, freeing up additional PCIe slots for other expansion needs.

$30,000 Music Streaming Server is the Next Audiophile Dream Device

Taiko Audio, a Dutch high-end audio manufacturer, has unveiled what might be the most over-engineered music server ever created—the Extreme Server. With a starting price of €28,000 (US$29,600), this meticulously crafted device embodies either the pinnacle of audio engineering or the epitome of audiophile excess. The Extreme's most distinctive feature is its unique dual-processor architecture, using two Intel Xeon Scalable 10-core CPUs. This unusual configuration isn't just for show—Taiko claims it solves a specific audiophile dilemma: the impact of Roon's music management interface on sound quality. By dedicating two processors to Roon and Windows 10 Enterprise LTSC 2019 interface, they've made Roon's processing "virtually inaudible", addressing a concern most music listeners probably never knew existed.

Perhaps the most striking technical achievement is the server's cooling system, or rather, its complete absence of conventional cooling. Taiko designed a custom 240 W passive cooling solution with absolutely no fans or moving parts. The company machined the CPU interface to a mind-boggling precision of 5 microns (0.005 mm) and opted for solid copper heat sinks instead of aluminium, claiming this will extend component life by 4 to 12 years. The attention to detail extends to the memory configuration, where Taiko takes an unconventional approach. The server uses twelve 4 GB custom-made industrial memory modules, each factory pre-selected with components matched to within 1% tolerance. According to Taiko, this reduces the refresh rate burst current by almost 50% and allows for lower operating temperatures. Powering it all is a custom 400 W linear PSU, an in-house development designed specifically for the Extreme's unique needs. It combines premium Mundorf and Duelund capacitors for sonic neutrality, Lundahl chokes selected by ear, and extensive vibrational damping using Panzerholz (a compressed wood composite) for durability, low-temperature operation, longevity, and exceptional sound quality.

Path of Exile 2 Becomes Victim of Its Own Success As 450,000+ Players Overwhelm Servers

Path of Exile 2 released in Early Access today on Steam and consoles, and, despite the game's $29.99 Early Access pricing, it has already amassed a peak of over 458,920 concurrent players on Steam alone. While this is undoubtedly good news for the developer and publisher, the increased server load has apparently already caused problems, resulting in excessive queue times to get into game sessions. At the time of writing, the game has been available for a little over four hours, and the player count is only now beginning to plateau.

According to the Path of Exile X account, the development team has been hard at work trying to stem the bleeding, as it were. So far, the Path of Exile website has been down several times due to the high traffic, preventing players from claiming their Steam keys. Additionally, and somewhat hilariously, the outage has also affected the "Early Access Live Updates" site that was meant to be a resource for gamers to keep track of the live service team's work on handling the high launch-day volumes.

Amazon AWS Announces General Availability of Trainium2 Instances, Reveals Details of Next Gen Trainium3 Chip

At AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company, today announced the general availability of AWS Trainium2-powered Amazon Elastic Compute Cloud (Amazon EC2) instances, introduced new Trn2 UltraServers that enable customers to train and deploy today's latest AI models as well as future large language models (LLMs) and foundation models (FMs) with exceptional performance and cost efficiency, and unveiled next-generation Trainium3 chips.

"Trainium2 is purpose built to support the largest, most cutting-edge generative AI workloads, for both training and inference, and to deliver the best price performance on AWS," said David Brown, vice president of Compute and Networking at AWS. "With models approaching trillions of parameters, we understand customers also need a novel approach to train and run these massive workloads. New Trn2 UltraServers offer the fastest training and inference performance on AWS and help organizations of all sizes to train and deploy the world's largest models faster and at a lower cost."

Dell Technologies Delivers Third Quarter Fiscal 2025 Financial Results

Dell Technologies announces financial results for its fiscal 2025 third quarter. Revenue was $24.4 billion, up 10% year over year. Operating income was $1.7 billion and non-GAAP operating income was $2.2 billion, both up 12% year over year. Diluted earnings per share was $1.58, and non-GAAP diluted earnings per share was $2.15, up 16% and 14% year over year, respectively.

"We continued to build on our AI leadership and momentum, delivering combined ISG and CSG revenue of $23.5 billion, up 13% year over year," said Yvonne McGill, chief financial officer, Dell Technologies. "Our continued focus on profitability resulted in EPS growth that outpaced revenue growth, and we again delivered strong cash performance."

Aetina Debuts at SC24 With NVIDIA MGX Server for Enterprise Edge AI

Aetina, a subsidiary of the Innodisk Group and an expert in edge AI solutions, is pleased to announce its debut at Supercomputing (SC24) in Atlanta, Georgia, showcasing the innovative SuperEdge NVIDIA MGX short-depth edge AI server, the AEX-2UA1. By integrating an enterprise-class on-premises large language model (LLM) with the advanced retrieval-augmented generation (RAG) technique, the Aetina NVIDIA MGX short-depth server demonstrates exceptional enterprise edge AI performance, setting a new benchmark in edge AI innovation. The server is powered by the latest Intel Xeon 6 processor and dual high-end double-width NVIDIA GPUs, delivering ultimate AI computing power in a compact 2U form factor and accelerating Gen AI at the edge.

The SuperEdge NVIDIA MGX server expands Aetina's product portfolio from specialized edge devices to comprehensive AI server solutions, propelling a key milestone in Innodisk Group's AI roadmap, from sensors and storage to AI software, computing platforms, and now AI edge servers.

MiTAC Unveils New AI/HPC-Optimized Servers With Advanced CPU and GPU Integration

MiTAC Computing Technology Corporation, an industry-leading server platform design manufacturer and a subsidiary of MiTAC Holdings Corporation (TSE:3706), is unveiling its new server lineup at SC24, booth #2543, in Atlanta, Georgia. MiTAC Computing's servers integrate the latest AMD EPYC 9005 Series CPUs, AMD Instinct MI325X GPU accelerators, Intel Xeon 6 processors, and professional GPUs to deliver enhanced performance optimized for HPC and AI workloads.

Leading Performance and Density for AI-Driven Data Center Workloads
MiTAC Computing's new servers, powered by AMD EPYC 9005 Series CPUs, are optimized for high-performance AI workloads. At SC24, MiTAC highlights the following systems:
  • MiTAC G8825Z5 (8U, dual-socket): features AMD Instinct MI325X GPU accelerators, up to 6 TB of DDR5-6000 memory, and eight hot-swap U.2 drive trays; ideal for large-scale AI/HPC setups.
  • MiTAC TYAN TN85-B8261 (2U, dual-socket): designed for HPC and deep learning applications, with support for up to four dual-slot GPUs, twenty-four DDR5 RDIMM slots, and eight hot-swap NVMe U.2 drives.
  • MiTAC TYAN GC68C-B8056 (1U, single-socket): for mainstream cloud applications, with twenty-four DDR5 DIMM slots and twelve tool-less 2.5-inch NVMe U.2 hot-swap bays.
  • MiTAC TYAN TS70A-B8056 (2U, single-socket): designed for high-IOPS NVMe storage.
  • MiTAC M2810Z5 (2U, 4-node, single-socket): supports up to 3,072 GB of DDR5-6000 RDIMM memory and four easy-swap E1.S drives per node.

Hypertec Introduces the World's Most Advanced Immersion-Born GPU Server

Hypertec proudly announces the launch of its latest breakthrough product, the TRIDENT iG series, an immersion-born GPU server line that brings extreme density, sustainability, and performance to the AI and HPC community. Purpose-built for the most demanding AI applications, this cutting-edge server is optimized for generative AI, machine learning (ML), deep learning (DL), large language model (LLM) training, inference, and beyond. With up to six of the latest NVIDIA GPUs in a 2U form factor, a staggering 8 TB of memory with enhanced RDMA capabilities, and groundbreaking density supporting up to 200 GPUs per immersion tank, the TRIDENT iG server line is a game-changer for AI infrastructure.

Additionally, the server's innovative design features a single or dual root complex, enabling greater flexibility and efficiency for GPU usage in complex workloads.

IBM Expands Its AI Accelerator Offerings; Announces Collaboration With AMD

IBM and AMD have announced a collaboration to deploy AMD Instinct MI300X accelerators as a service on IBM Cloud. This offering, expected to be available in the first half of 2025, aims to enhance performance and power efficiency for Gen AI models and high-performance computing (HPC) applications for enterprise clients. The collaboration will also enable support for AMD Instinct MI300X accelerators within IBM's watsonx AI and data platform, as well as Red Hat Enterprise Linux AI inferencing support.

"As enterprises continue adopting larger AI models and datasets, it is critical that the accelerators within the system can process compute-intensive workloads with high performance and flexibility to scale," said Philip Guido, executive vice president and chief commercial officer, AMD. "AMD Instinct accelerators combined with AMD ROCm software offer wide support including IBM watsonx AI, Red Hat Enterprise Linux AI and Red Hat OpenShift AI platforms to build leading frameworks using these powerful open ecosystem tools. Our collaboration with IBM Cloud will aim to allow customers to execute and scale Gen AI inferencing without hindering cost, performance or efficiency."

NVIDIA "Blackwell" NVL72 Servers Reportedly Require Redesign Amid Overheating Problems

According to The Information, NVIDIA's latest "Blackwell" processors are reportedly encountering significant thermal management issues in high-density server configurations, potentially affecting deployment timelines for major tech companies. The challenges emerge specifically in NVL72 GB200 racks housing 72 GB200 processors, which can consume up to 120 kilowatts of power per rack while weighing a "mere" 3,000 pounds (about 1.5 tons). These thermal concerns have prompted NVIDIA to revisit and modify its server rack designs multiple times to prevent performance degradation and potential hardware damage. Hyperscalers like Google, Meta, and Microsoft, which rely heavily on NVIDIA GPUs for training their advanced language models, have allegedly expressed concerns about possible delays to their data center deployment schedules.

The thermal management issues follow earlier setbacks related to a design flaw in the Blackwell production process. The problem stemmed from the complex CoWoS-L packaging technology, which connects dual chiplets using an RDL interposer and LSI bridges. Thermal expansion mismatches between various components led to warping issues, requiring modifications to the GPU's metal layers and bump structures. A company spokesperson characterized these modifications as part of the standard development process, noting that a new photomask resolved the issue. The Information states that mass production of the revised Blackwell GPUs began in late October, with shipments expected to commence in late January. However, these timelines are unconfirmed by NVIDIA, and some server makers, like Dell, have confirmed that GB200 NVL72 liquid-cooled systems are shipping now, not in January, with GPU cloud provider CoreWeave as a customer. The original report may be based on older information, as Dell is one of NVIDIA's most significant partners and among the first in the supply chain to gain access to new GPU batches.

ASUS Presents All-New Storage-Server Solutions to Unleash AI Potential at SC24

ASUS today announced its groundbreaking next-generation infrastructure solutions at SC24, featuring a comprehensive lineup powered by AMD and Intel, as well as liquid-cooling solutions designed to accelerate the future of AI. By continuously pushing the limits of innovation, ASUS simplifies the complexities of AI and high-performance computing (HPC) through adaptive server solutions paired with expert cooling and software-development services, tailored for the exascale era and beyond. As a total-solution provider with a distinguished history in pioneering AI supercomputing, ASUS is committed to delivering exceptional value to its customers.

Comprehensive line-up for AI and HPC success
To fuel enterprise digital transformation through HPC and AI-driven architecture, ASUS provides a full lineup of server systems powered by AMD and Intel. Startups, research institutions, large enterprises, and government organizations can all find adaptive solutions to unlock value from big data and accelerate business agility.

HPE Expands Direct Liquid-Cooled Supercomputing Solutions With Two AI Systems for Service Providers and Large Enterprises

Today, Hewlett Packard Enterprise announces its new high performance computing (HPC) and artificial intelligence (AI) infrastructure portfolio that includes leadership-class HPE Cray Supercomputing EX solutions and two systems optimized for large language model (LLM) training, natural language processing (NLP) and multi-modal model training. The new supercomputing solutions are designed to help global customers fast-track scientific research and invention.

"Service providers and nations investing in sovereign AI initiatives are increasingly turning to high-performance computing as the critical backbone enabling large-scale AI training that accelerates discovery and innovation," said Trish Damkroger, senior vice president and general manager, HPC & AI Infrastructure Solutions at HPE. "Our customers turn to us to fast-track their AI system deployment to realize value faster and more efficiently by leveraging our world-leading HPC solutions and decades of experience in delivering, deploying and servicing fully-integrated systems."

IBM Launches Its Most Advanced Quantum Computers, Fueling New Scientific Value and Progress towards Quantum Advantage

Today at its inaugural IBM Quantum Developer Conference, IBM announced quantum hardware and software advancements to execute complex algorithms on IBM quantum computers with record levels of scale, speed, and accuracy.

IBM Quantum Heron, the company's most performant quantum processor to date and available in IBM's global quantum data centers, can now leverage Qiskit to accurately run certain classes of quantum circuits with up to 5,000 two-qubit gate operations. Users can apply these capabilities to expand explorations of how quantum computers can tackle scientific problems across materials, chemistry, life sciences, high-energy physics, and more.

Innodisk Introduces E1.S Edge Server SSD for Edge Computing and AI Applications

Innodisk, a leading global AI solution provider, has introduced its new E1.S SSD, which is specifically designed to meet the demands of growing edge computing applications. The E1.S edge server SSD offers exceptional performance, reliability, and thermal management capabilities to address the critical needs of modern data-intensive environments and bridge the gap between traditional industrial SSDs and data center SSDs.

As AI and 5G technologies rapidly evolve, the demands on data processing and storage continue to grow. The E1.S SSD addresses the challenge of balancing heat dissipation and performance, which has become a major concern for today's SSDs. Traditional industrial and data center SSDs often struggle to meet the needs of edge applications. Innodisk's E1.S eliminates these bottlenecks with its Enterprise and Datacenter Standard Form Factor (EDSFF) design and offers a superior alternative to U.2 and M.2 SSDs.

Arctic Intros Freezer 4U-M Rev. 2 Server CPU Cooler With Support for Ampere Altra Series

Even more versatile in its second revision: developed on the basis of its proven predecessor, the new version of the Freezer 4U-M offers optimised cooling performance not only for powerful server CPUs from AMD and Intel, but also for the Arm processors of the Ampere Altra series.

Multi-compatible with additional flexibility
The 2nd revision of the Freezer 4U-M also impresses with its case and socket compatibility. In addition, it has been specially adapted to support Ampere Altra processors with 32 to 128 cores.

TerraMaster Launches Five New BBS Integrated Backup Servers

In an era where data has become a core asset for modern enterprises, TerraMaster, a global leader in data storage and management solutions, has announced the official launch of five high-performance integrated backup servers: T9-500 Pro, T12-500 Pro, U4-500, U8-500 Plus, and U12-500 Plus. This release not only enriches TerraMaster's enterprise-level product line but also provides enterprise users with an integrated, efficient, and secure data backup solution, from hardware to software, by pairing these devices with the company's proprietary BBS Business Backup Suite.

Key Features of the New Integrated Backup Servers
  • T9-500 Pro & T12-500 Pro: As new members of TerraMaster's high-end series, these products feature compact designs and are easy to manage. With powerful processors, large memory capacities, and dual 10GbE network interfaces, they ensure high-efficiency data backup tasks, catering to the large-scale data storage and backup needs of small and medium-sized enterprises.
  • U4-500: Designed for SOHO, small offices, and remote work scenarios, the U4-500 features a compact 4-bay design and convenient network connectivity, making it an ideal data backup solution. Its user-friendly management interface allows for easy deployment and maintenance.
  • U8-500 Plus & U12-500 Plus: These two rackmount 8-bay and 12-bay upgraded models feature fully optimized designs, high-performance processors, and standard dual 10GbE high-speed interfaces. They not only improve data processing speeds but also enhance data security, making them particularly suitable for small and medium-sized enterprises that need to handle large volumes of data backup and recovery.

AMD Captures 28.7% Desktop Market Share in Q3 2024, Intel Maintains Lead

According to the market research firm Mercury Research, the desktop CPU market has witnessed a remarkable transformation, with AMD seizing a substantial 28.7% market share in Q3 2024; a giant leap since the launch of the original Zen architecture in 2017. This 5.7-percentage-point surge from the previous quarter is a testament to the company's continuous innovation against the long-standing industry leader, Intel. AMD's year-over-year growth of nearly ten percentage points, fueled by the success of its Ryzen 7000 and 9000 series processors, contrasts starkly with Intel's Raptor Lake processors, which encountered technical hurdles such as stability issues. AMD's revenue share soared by 8.5 percentage points, indicating robust performance in premium processor segments. Intel, whose desktop market share declined to 71.3%, attributes the shift to inventory adjustments rather than competitive pressure and still holds the majority.

AMD's success story extends beyond desktops, with the company claiming 22.3% of the laptop processor market and 24.2% of the server segment. A significant milestone was reached as AMD's data center division generated $3.549 billion in quarterly revenue, a new record for a company that had no considerable data center presence just a decade ago. Driven by strong EPYC processor sales to hyperscalers and cloud providers, along with the Instinct MI300X for AI applications, AMD is rapidly accelerating its data center deployments. Despite these shifts, Intel continues to hold a dominant position in client computing, with 76.1% of the overall PC market, helped by its strong corporate relationships and extensive manufacturing infrastructure. OEM partners like Dell, HP, and Lenovo rely heavily on Intel for their CPU choices, equipping institutions such as schools, universities, and government agencies.
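The reported figures are internally consistent, which a quick check confirms. The sketch below is pure arithmetic on the numbers quoted above: it recovers AMD's prior-quarter desktop share and Intel's complementary Q3 share.

```python
# The quarter-over-quarter figures reported above imply AMD's prior-quarter
# desktop share (all values in percentage points).

amd_q3_desktop = 28.7
qoq_gain = 5.7
amd_q2_desktop = amd_q3_desktop - qoq_gain   # ~23.0% in Q2 2024

# Desktop x86 is effectively a two-vendor market, so Intel holds the rest.
intel_q3_desktop = 100 - amd_q3_desktop      # 71.3%, matching the report

print(f"AMD Q2: {amd_q2_desktop:.1f}%, Intel Q3: {intel_q3_desktop:.1f}%")
```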

Samsung Electronics Announces Results for Third Quarter of 2024, 7 Percent Revenue Increase

Samsung Electronics today reported financial results for the third quarter ended Sept. 30, 2024. The Company posted KRW 79.1 trillion in consolidated revenue, an increase of 7% from the previous quarter, on the back of the launch effects of new smartphone models and increased sales of high-end memory products. Operating profit declined to KRW 9.18 trillion, largely due to one-off costs, including the provision of incentives in the Device Solutions (DS) Division. The strength of the Korean won against the U.S. dollar resulted in a negative impact on company-wide operating profit of about KRW 0.5 trillion compared to the previous quarter.

In the fourth quarter, while memory demand for mobile and PC may encounter softness, growth in AI will keep demand at robust levels. Against this backdrop, the Company will concentrate on driving sales of High Bandwidth Memory (HBM) and high-density products. The Foundry Business aims to increase order volumes by enhancing advanced process technologies. Samsung Display Corporation (SDC) expects demand for flagship products from major customers to continue, while maintaining a conservative outlook on its performance. The Device eXperience (DX) Division will continue to focus on premium products, but sales are expected to decline slightly compared to the previous quarter.

Microsoft Announces its FY25 Q1 Earnings Release

Microsoft Corp. today announced the following results for the quarter ended September 30, 2024, as compared to the corresponding period of last fiscal year:
  • Revenue was $65.6 billion and increased 16%
  • Operating income was $30.6 billion and increased 14%
  • Net income was $24.7 billion and increased 11% (up 10% in constant currency)
  • Diluted earnings per share was $3.30 and increased 10%
"AI-driven transformation is changing work, work artifacts, and workflow across every role, function, and business process," said Satya Nadella, chairman and chief executive officer of Microsoft. "We are expanding our opportunity and winning new customers as we help them apply our AI platforms and tools to drive new growth and operating leverage."

Cisco Unveils Plug-and-Play AI Solutions Powered by NVIDIA H100 and H200 Tensor Core GPUs

Today, Cisco announced new additions to its data center infrastructure portfolio: an AI server family purpose-built for GPU-intensive AI workloads with NVIDIA accelerated computing, and AI PODs to simplify and de-risk AI infrastructure investment. They give organizations an adaptable and scalable path to AI, supported by Cisco's industry-leading networking capabilities.

"Enterprise customers are under pressure to deploy AI workloads, especially as we move toward agentic workflows and AI begins solving problems on its own," said Jeetu Patel, Chief Product Officer, Cisco. "Cisco innovations like AI PODs and the GPU server strengthen the security, compliance, and processing power of those workloads as customers navigate their AI journeys from inferencing to training."

Lenovo Announces New Liquid Cooled Servers for Intel Xeon and NVIDIA Blackwell Platforms

At Lenovo Tech World 2024, we announced new Supercomputing servers for HPC and AI workloads. These new water-cooled servers use the latest processor and accelerator technology from Intel and NVIDIA.

ThinkSystem SC750 V4
Engineered for large-scale cloud infrastructures and High Performance Computing (HPC), the Lenovo ThinkSystem SC750 V4 Neptune excels in intensive simulations and complex modeling. It's designed to handle technical computing, grid deployments, and analytics workloads in various fields such as research, life sciences, energy, engineering, and financial simulation.

Jabil Intros New Servers Powered by AMD 5th Gen EPYC and Intel Xeon 6 Processors

Jabil Inc. announced today that it is expanding its server portfolio with the J421E-S and J422-S servers, powered by AMD 5th Generation EPYC and Intel Xeon 6 processors. These servers are purpose-built for scalability in a variety of cloud data center applications, including AI, high-performance computing (HPC), fintech, networking, storage, databases, and security — representing the latest generation of server innovation from Jabil.

Built with customization and innovation in mind, the design-ready J422-S and J421E-S servers will allow engineering teams to meet customers' specific requirements. By fine-tuning Jabil's custom BIOS and BMC firmware, Jabil can create a competitive advantage for customers by developing the server configuration needed for higher performance, data management, and security. The server platforms are now available for sampling and will be in production by the first half of 2025.

HPE Announces Industry's First 100% Fanless Direct Liquid Cooling Systems Architecture

Hewlett Packard Enterprise announced the industry's first 100% fanless direct liquid cooling systems architecture to enhance the energy and cost efficiency of large-scale AI deployments. The company introduced the innovation at its AI Day, held for members of the financial community at one of its state-of-the-art AI systems manufacturing facilities. During the event, the company showcased its expertise and leadership in AI across enterprises, sovereign governments, service providers and model builders.

Industry's first 100% fanless direct liquid cooling system
While efficiency has improved in next-generation accelerators, power consumption is continuing to intensify with AI adoption, outstripping traditional cooling techniques.

NVIDIA Might Consider Major Design Shift for Future 300 GPU Series

NVIDIA is reportedly considering a significant design change for its GPU products, shifting from the current on-board solution to an independent GPU socket design following the GB200 shipment in Q4, according to reports from MoneyDJ and the Economic Daily News quoted by TrendForce. The move is not new to the industry; AMD already introduced a socket design in 2023 with its MI300A series via dedicated Supermicro servers. The B300 series, expected to become NVIDIA's mainstream product in the second half of 2025, is rumored to be the main beneficiary of this design change, which could improve yield rates, though it may come with some performance trade-offs.

According to the Economic Daily News, the socket design will simplify after-sales service and server board maintenance, allowing users to replace or upgrade GPUs quickly. The report also pointed out that, based on the slot design, boards will contain up to four NVIDIA GPUs and a CPU, with each GPU having its own dedicated slot. This will benefit Taiwanese manufacturers like Foxconn and LOTES, which will supply the various components and connectors. The move seems logical: with the current on-board design, a single faulty GPU requires replacing the entire motherboard, leading to significant downtime and high operational and maintenance costs.