News Posts matching #Data Center


Qualcomm Pushes for Data Center CPUs, Hires Ex-Intel Chief Xeon Architect

Qualcomm is becoming serious about its server CPU ambitions. Today, we have learned that Sailesh Kottapalli, Intel's former chief architect for Xeon server processors, has joined Qualcomm as Senior Vice President after 28 years at Intel. Kottapalli, who announced his departure on LinkedIn on Monday, previously led the development of multiple Xeon and Itanium processors. Qualcomm's data center team is currently working on reference platforms based on its Snapdragon technology. The company already sells AI accelerator chips under the Qualcomm Cloud AI brand, supported by major providers including AWS, HPE, and Lenovo.

This marks Qualcomm's second attempt at entering the server CPU market, following an unsuccessful Centriq effort that ended in 2018. The company is now leveraging technology from its $1.4 billion Nuvia acquisition in 2021, though this has led to ongoing legal disputes with Arm over licensing terms. While Qualcomm hasn't officially detailed Kottapalli's role, the company confirmed in legal filings its intentions to continue developing data center CPUs, as originally planned by Nuvia.

Transcend Unveils Enterprise SSD to Boost Data Center Performance and Security

Transcend Information Inc. (Transcend), a global leader in storage solutions, introduces the new ETD210T enterprise 2.5-inch SSD, designed to meet the read-intensive needs of business users. Featuring enterprise-grade TLC (eTLC) NAND flash and a SATA III 6 Gb/s interface, the ETD210T includes a built-in DRAM cache to deliver fast data transfer, exceptional Quality of Service (QoS), ultra-low latency, and superior endurance. Ideal for read-intensive and high-capacity storage workloads in cloud and data center applications, it provides a reliable and efficient storage solution for enterprise computing.

Designed for Enterprises, Optimized for Data Centers
The ETD210T supports various enterprise applications, including data centers, virtualized servers, and large-scale data processing. Equipped with high-endurance eTLC NAND flash, it delivers exceptional read and write performance. Its endurance rating of 1 DWPD (drive write per day) meets the requirements of most enterprise-class applications, while its read and write speeds of up to 530 MB/s and 510 MB/s, respectively, address the need for highly efficient storage.
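As a rough illustration of what a 1 DWPD rating implies for total endurance, the sketch below converts DWPD into total bytes written over a warranty period. The drive capacity and warranty length used here are hypothetical examples; the announcement does not state them.

```python
# Total bytes written (TBW) implied by a DWPD (drive writes per day) rating.
# TBW = DWPD x capacity x days under warranty.
# Capacity and warranty length below are assumed for illustration only.

def tbw_terabytes(dwpd: float, capacity_tb: float, warranty_years: int) -> float:
    """Return total terabytes that can be written over the warranty period."""
    return dwpd * capacity_tb * warranty_years * 365

# Example: a hypothetical 1.92 TB drive rated at 1 DWPD over a 5-year warranty.
print(tbw_terabytes(1, 1.92, 5))  # 3504.0 TB written
```

At 1 DWPD, the drive can absorb one full-capacity write every day for the warranty period, which is the sweet spot for read-intensive enterprise workloads like those named above.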

SK hynix Develops PS1012 U.2 High Capacity SSD for AI Data Centers

SK hynix Inc. announced today that it has completed development of its high-capacity SSD product, PS1012 U.2, designed for AI data centers. As the era of AI accelerates, the demand for high-performance enterprise SSDs (eSSD) is rapidly increasing, and QLC technology, which enables high capacity, has become the industry standard. In line with this trend, SK hynix has developed a 61 TB product using this technology and introduced it to the market.

SK hynix has been leading the SSD market for AI data centers with Solidigm, a subsidiary that was the first in the world to commercialize QLC-based eSSDs. With the development of the PS1012, the company expects to build a balanced SSD portfolio, thereby maximizing synergy between the two companies. Using 5th generation (Gen 5) PCIe, the PS1012 doubles its bandwidth compared to Gen 4-based products. As a result, the data transfer speed reaches 32 GT/s (gigatransfers per second), with sequential read performance of 13 GB/s (gigabytes per second), twice that of previous-generation products.
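The quoted 32 GT/s is the per-lane transfer rate of PCIe Gen 5; usable throughput follows from the 128b/130b line encoding and the link width. A quick sanity check, assuming a typical x4 link for a U.2 eSSD (the lane count is our assumption, not stated above):

```python
# Theoretical per-direction throughput of a PCIe link.
# PCIe Gen 5 runs at 32 GT/s per lane with 128b/130b encoding.

def pcie_gbps(transfer_rate_gt: float, lanes: int) -> float:
    """Approximate usable bandwidth in GB/s (decimal units)."""
    encoding = 128 / 130  # 128b/130b line-code overhead
    return transfer_rate_gt * encoding * lanes / 8  # 8 bits per byte

# An assumed x4 Gen 5 link tops out just under 16 GB/s, so the quoted
# 13 GB/s sequential read sits comfortably within the link budget.
print(round(pcie_gbps(32, 4), 2))  # ~15.75
```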

IBM Develops Co-Packaged Optical Interconnect for Data Center

IBM Research has unveiled a significant advancement in optical interconnect technology for advanced data center communications. The breakthrough centers on a novel co-packaged optics (CPO) system featuring a sophisticated Polymer Optical Waveguide (PWG) design, marking a potential shift from traditional copper-based interconnects. The innovation introduces a Photonic Integrated Circuit (PIC) measuring 8 x 10 mm, mounted on a 17 x 17 mm substrate, capable of converting electrical signals to optical ones and vice versa. The system's waveguide, spanning 12 mm in width, efficiently channels light waves through precisely engineered pathways, with channels converging from 250 to 50 micrometers.

While current copper-based solutions like NVIDIA's NVLink offer impressive 1.8 TB/s bandwidth rates, and Intel's Optical Compute Interconnect achieves 4 TBit/s bidirectional throughput, IBM's technology focuses on scalability and efficiency. The company plans to implement 12 carrier waves initially, with the potential to accommodate up to 32 waves by reducing spacing to 18 micrometers. Furthermore, the design allows for vertical stacking of up to four PWGs, potentially enabling 128 transmission channels. The technology has undergone rigorous JEDEC-standard testing, including 1,000 cycles of thermal stress between -40°C and 125°C, and extended exposure to extreme conditions including 85% humidity at 85°C. The components have also proven reliable during thousand-hour storage tests at various temperature extremes. The bandwidth of the CPO is currently unknown, but we expect it to surpass current solutions.
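The channel counts reported above compose multiplicatively: carrier waves per waveguide times the number of stacked waveguides. A small check of the figures as reported:

```python
# Channel scaling for IBM's co-packaged optics, using the reported figures.
initial_waves = 12   # carrier waves planned initially
max_waves = 32       # achievable by reducing spacing to 18 micrometers
stacked_pwgs = 4     # polymer optical waveguides stacked vertically

# 32 waves per waveguide x 4 stacked waveguides = 128 transmission
# channels, matching the figure quoted in the article.
print(max_waves * stacked_pwgs)  # 128
```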

JPR: Q3'24 PC Graphics AIB Shipments Decreased 14.5% Compared to the Last Quarter

According to a new research report from the analyst firm Jon Peddie Research, shipments in the global PC-based graphics add-in board (AIB) market reached 8.1 million units in Q3'24, and desktop PC CPU shipments increased to 20.1 million units. Overall, AIBs will have a compound annual growth rate of -6.0% from 2024 to 2028 and reach an installed base of 119 million units at the end of the forecast period. Over the next five years, the penetration of AIBs in desktop PCs will be 83%.

As indicated in the following chart, AMD's overall AIB market share decreased 2.0% from last quarter, and NVIDIA's market share increased by 2.0%. These slight flips of market share in a down quarter don't mean much except to the winner. The overall market dynamics haven't changed.
  • The AIB overall attach rate in desktop PCs for the quarter decreased to 141%, down 26.9% from last quarter.
  • The desktop PC CPU market decreased 3.4% year over year and increased 42.2% quarter to quarter, which influenced the attach rate of AIBs.
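The headline's 14.5% quarter-to-quarter decline lets us back out the prior quarter's AIB volume from the 8.1 million units reported, as a quick check on the figures:

```python
# Back out Q2'24 AIB shipments from the reported Q3'24 volume
# and the 14.5% quarter-to-quarter decline in the headline.
q3_shipments_m = 8.1    # million units shipped in Q3'24
qoq_change = -0.145     # -14.5% versus Q2'24

q2_shipments_m = q3_shipments_m / (1 + qoq_change)
print(round(q2_shipments_m, 2))  # ~9.47 million units in Q2'24
```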

Intel at CES 2025: Pioneering AI-Driven Innovation in Work and Mobility

AI is fundamentally changing technology, from the PC to the edge and the cloud, and redefining the way we work, create, and collaborate. Next-generation technologies will empower workers through powerful new tools that enhance productivity and make intelligent, personalized computing more accessible than ever. At CES 2025, Intel will show how it's advancing what's possible on the AI PC, for on-the-go professionals and enthusiasts alike, by designing for the needs and experiences of tomorrow with breakthrough efficiency, no-compromise compatibility, and an unmatched software ecosystem. Intel will also showcase its latest automotive innovations as the industry embraces software-defined connected electric vehicles powered by AI.

MiTAC Unveils New AI/HPC-Optimized Servers With Advanced CPU and GPU Integration

MiTAC Computing Technology Corporation, an industry-leading server platform design manufacturer and a subsidiary of MiTAC Holdings Corporation (TSE:3706), is unveiling its new server lineup at SC24, booth #2543, in Atlanta, Georgia. MiTAC Computing's servers integrate the latest AMD EPYC 9005 Series CPUs, AMD Instinct MI325X GPU accelerators, Intel Xeon 6 processors, and professional GPUs to deliver enhanced performance optimized for HPC and AI workloads.

Leading Performance and Density for AI-Driven Data Center Workloads
MiTAC Computing's new servers, powered by AMD EPYC 9005 Series CPUs, are optimized for high-performance AI workloads. At SC24, MiTAC highlights two standout AI/HPC products: the 8U dual-socket MiTAC G8825Z5, featuring AMD Instinct MI325X GPU accelerators, up to 6 TB of DDR5 6000 memory, and eight hot-swap U.2 drive trays, ideal for large-scale AI/HPC setups; and the 2U dual-socket MiTAC TYAN TN85-B8261, designed for HPC and deep learning applications with support for up to four dual-slot GPUs, twenty-four DDR5 RDIMM slots, and eight hot-swap NVMe U.2 drives. For mainstream cloud applications, MiTAC offers the 1U single-socket MiTAC TYAN GC68C-B8056, with twenty-four DDR5 DIMM slots and twelve tool-less 2.5-inch NVMe U.2 hot-swap bays. Also featured are the 2U single-socket MiTAC TYAN TS70A-B8056, designed for high-IOPS NVMe storage, and the 2U 4-node single-socket MiTAC M2810Z5, supporting up to 3,072 GB of DDR5 6000 RDIMM memory and four easy-swap E1.S drives per node.

MSI Presents New AMD EPYC 9005 Series CPU-Based Server Platforms at SC24

MSI, a leading global provider of high-performance server solutions, is excited to unveil its latest AMD EPYC 9005 Series CPU-based server boards and platforms at SC24 (SuperComputing 2024), Booth #3655, from November 19-21. Built on the OCP Modular Hardware System (DC-MHS) architecture, these new platforms deliver high-density, AI-ready solutions, including multi-node, enterprise, CXL memory expansion, and GPU servers, designed to meet the intensive demands of modern data centers.

"As AI continues to reshape the landscape of data center infrastructure, MSI's servers, powered by the AMD EPYC 9005 Series processors, offer unmatched density, energy efficiency, and cost optimization—making them ideal for modern data centers," said Danny Hsu, General Manager of Enterprise Platform Solutions. "Our servers optimize thermal management and performance for virtualized and containerized environments, positioning MSI at the forefront of AI and cloud-based workloads."

Solidigm Launches D5-P5336 PCIe Data Center SSDs With 122 TB Capacity

Solidigm, a leading provider of innovative NAND flash memory solutions, announced today the introduction of the world's highest-capacity PCIe solid-state drive (SSD): the 122 TB (terabyte) Solidigm D5-P5336 data center SSD. The D5-P5336 doubles the storage space of Solidigm's earlier 61.44 TB version of the drive and is the world's first SSD with unlimited random write endurance for five years, offering an ideal solution for AI and data-intensive workloads. Just how much storage is 122.88 TB? Roughly enough for 4K-quality copies of every movie theatrically released in the 1990s, 2.6 times over.
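The "2.6 times over" comparison implies a size for the underlying data set, which we can back out from the stated capacity (a back-of-envelope check; the per-movie size is not stated in the announcement):

```python
# What "every 1990s theatrical release, 2.6 times over" implies
# about the size of one complete copy of that catalog.
drive_tb = 122.88   # stated drive capacity, TB
copies = 2.6        # stated multiple

one_copy_tb = drive_tb / copies
print(round(one_copy_tb, 1))  # ~47.3 TB for a single copy of the catalog
```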

Data storage power, thermal, and space constraints are intensifying as AI adoption increases. Power- and space-efficient, the new 122 TB D5-P5336 delivers industry-leading storage efficiency from the core data center to the edge. Data center operators can confidently deploy the 122 TB D5-P5336 from Solidigm, the proven QLC (quad-level cell) density leader, with more than 100 EB (exabytes) of QLC-based product shipped since 2018.

Innodisk Introduces E1.S Edge Server SSD for Edge Computing and AI Applications

Innodisk, a leading global AI solution provider, has introduced its new E1.S SSD, which is specifically designed to meet the demands of growing edge computing applications. The E1.S edge server SSD offers exceptional performance, reliability, and thermal management capabilities to address the critical needs of modern data-intensive environments and bridge the gap between traditional industrial SSDs and data center SSDs.

As AI and 5G technologies rapidly evolve, the demands on data processing and storage continue to grow. The E1.S SSD addresses the challenges of balancing heat dissipation and performance, which has become a major concern for today's SSDs. Traditional industrial and data center SSDs often struggle to meet the needs of edge applications. Innodisk's E1.S eliminates these bottlenecks with its Enterprise and Data Center Standard Form Factor (EDSFF) design and offers a superior alternative to U.2 and M.2 SSDs.

Micron Launches 6550 ION 60TB PCIe Gen5 NVMe SSD Series

Micron Technology, Inc., today announced it has begun qualification of the 6550 ION NVMe SSD with customers. The Micron 6550 ION is the world's fastest 60 TB data center SSD and the industry's first E3.S and PCIe Gen 5 60 TB SSD. It follows the success of the award-winning 6500 ION and is engineered to provide best-in-class performance, energy efficiency, endurance, security, and rack density for exascale data center deployments. The 6550 ION excels in high-capacity NVMe workloads such as networked AI data lakes, ingest, data preparation and checkpointing, file and object storage, public cloud storage, analytic databases, and content delivery.

"The Micron 6550 ION achieves a remarkable 12 GB/s while using just 20 watts of power, setting a new standard in data center performance and energy efficiency," said Alvaro Toledo, vice president and general manager of Micron's Data Center Storage Group. "Featuring a first-to-market 60 TB capacity in an E3.S form factor and up to 20% better energy efficiency than competitive drives, the Micron 6550 ION is a game-changer for high-capacity storage solutions to address the insatiable capacity and power demands of AI workloads."
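The headline efficiency claim reduces to a simple ratio, throughput per watt, which is how such comparisons are usually framed. A quick calculation from the quoted figures:

```python
# Throughput-per-watt from the quoted 6550 ION figures.
throughput_gbs = 12.0   # GB/s, as quoted
power_w = 20.0          # watts, as quoted

efficiency = throughput_gbs / power_w
print(efficiency)  # 0.6 GB/s per watt
```

The "up to 20% better energy efficiency" claim would then correspond to a competing drive delivering roughly 0.5 GB/s per watt, though no competitor figures are given in the quote.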

AMD Reports Third Quarter 2024 Financial Results, Revenue Up 18 Percent YoY

AMD today announced revenue for the third quarter of 2024 of $6.8 billion, gross margin of 50%, operating income of $724 million, net income of $771 million and diluted earnings per share of $0.47. On a non-GAAP basis, gross margin was 54%, operating income was $1.7 billion, net income was $1.5 billion and diluted earnings per share was $0.92.

"We delivered strong third quarter financial results with record revenue led by higher sales of EPYC and Instinct data center products and robust demand for our Ryzen PC processors," said AMD Chair and CEO Dr. Lisa Su. "Looking forward, we see significant growth opportunities across our data center, client and embedded businesses driven by the insatiable demand for more compute."

Meta Shows Open-Architecture NVIDIA "Blackwell" GB200 System for Data Center

During the Open Compute Project (OCP) Summit 2024, Meta, one of the prime members of the OCP project, showed its NVIDIA "Blackwell" GB200 systems for its massive data centers. We previously covered Microsoft's Azure server rack with GB200 GPUs, which dedicates one-third of the rack space to computing and two-thirds to cooling. A few days later, Google showed off its smaller GB200 system, and today, Meta is showing off its GB200 system, the smallest of the bunch. To train a dense transformer large language model with 405B parameters and a context window of up to 128k tokens, like Llama 3.1 405B, Meta had to redesign its data center infrastructure to run a distributed training job across two 24,000-GPU clusters. That is 48,000 GPUs used for training a single AI model.

Called "Catalina," Meta's system is built on the NVIDIA Blackwell platform, emphasizing modularity and adaptability while incorporating the latest NVIDIA GB200 Grace Blackwell Superchip. To address the escalating power requirements of GPUs, Catalina introduces the Orv3, a high-power rack capable of delivering up to 140 kW. The comprehensive liquid-cooled setup encompasses a power shelf supporting various components, including a compute tray, switch tray, the Orv3 HPR, a Wedge 400 fabric switch with 12.8 Tbps switching capacity, a management switch, battery backup, and a rack management controller. Interestingly, Meta has also upgraded its "Grand Teton" system for internal usage, such as deep learning recommendation models (DLRMs) and content understanding, with AMD Instinct MI300X accelerators. Those run inference on internal models, and the MI300X appears to provide the best performance per dollar for inference. According to Meta, the computational demand stemming from AI will continue to increase exponentially, so more NVIDIA and AMD GPUs are needed, and we can't wait to see what the company builds.

SK hynix Showcases Memory Solutions at the 2024 OCP Global Summit

SK hynix is showcasing its leading AI and data center memory products at the 2024 Open Compute Project (OCP) Global Summit held October 15-17 in San Jose, California. The annual summit brings together industry leaders to discuss advancements in open source hardware and data center technologies. This year, the event's theme is "From Ideas to Impact," which aims to foster the realization of theoretical concepts into real-world technologies.

In addition to presenting its advanced memory products at the summit, SK hynix is also strengthening key industry partnerships and sharing its AI memory expertise through insightful presentations. This year, the company is holding eight sessions—up from five in 2023—on topics including HBM and CMS.

MSI Showcases Innovation at 2024 OCP Global Summit, Highlighting DC-MHS, CXL Memory Expansion, and MGX-enabled AI Servers

MSI, a leading global provider of high-performance server solutions, is excited to showcase its comprehensive lineup of motherboards and servers based on the OCP Modular Hardware System (DC-MHS) architecture at the OCP Global Summit from October 15-17 at booth A6. These cutting-edge solutions represent a breakthrough in server designs, enabling flexible deployments for cloud and high-density data centers. Featured innovations include CXL memory expansion servers and AI-optimized servers, demonstrating MSI's leadership in pushing the boundaries of AI performance and computing power.

DC-MHS Series Motherboards and Servers: Enabling Flexible Deployment in Data Centers
"The rapidly evolving IT landscape requires cloud service providers, large-scale data center operators, and enterprises to handle expanding workloads and future growth with more flexible and powerful infrastructure. MSI's new range of DC-MHS-based solutions provides the needed flexibility and efficiency for modern data center environments," said Danny Hsu, General Manager of Enterprise Platform Solutions.

Supermicro's Liquid-Cooled SuperClusters for AI Data Centers Powered by NVIDIA GB200 NVL72 and NVIDIA HGX B200 Systems

Supermicro, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is accelerating the industry's transition to liquid-cooled data centers with the NVIDIA Blackwell platform to deliver a new paradigm of energy-efficiency for the rapidly heightened energy demand of new AI infrastructures. Supermicro's industry-leading end-to-end liquid-cooling solutions are powered by the NVIDIA GB200 NVL72 platform for exascale computing in a single rack and have started sampling to select customers for full-scale production in late Q4. In addition, the recently announced Supermicro X14 and H14 4U liquid-cooled systems and 10U air-cooled systems are production-ready for the NVIDIA HGX B200 8-GPU system.

"We're driving the future of sustainable AI computing, and our liquid-cooled AI solutions are rapidly being adopted by some of the most ambitious AI Infrastructure projects in the world with over 2000 liquid-cooled racks shipped since June 2024," said Charles Liang, president and CEO of Supermicro. "Supermicro's end-to-end liquid-cooling solution, with the NVIDIA Blackwell platform, unlocks the computational power, cost-effectiveness, and energy-efficiency of the next generation of GPUs, such as those that are part of the NVIDIA GB200 NVL72, an exascale computer contained in a single rack. Supermicro's extensive experience in deploying liquid-cooled AI infrastructure, along with comprehensive on-site services, management software, and global manufacturing capacity, provides customers a distinct advantage in transforming data centers with the most powerful and sustainable AI solutions."

Flex Announces Liquid-Cooled Rack and Power Solutions for AI Data Centers at 2024 OCP Global Summit

Flex today announced new reference platforms for liquid-cooled servers, rack, and power products that will enable customers to sustainably accelerate data center growth. These innovations build on Flex's ability to address technical challenges associated with power, heat generation, and scale to support artificial intelligence (AI) and high-performance computing (HPC) workloads.

"Flex delivers integrated data center IT and power infrastructure solutions that address the growing power and compute demands in the AI era," said Michael Hartung, president and chief commercial officer, Flex. "We are expanding our unique portfolio of advanced manufacturing capabilities, innovative products, and lifecycle services, enabling customers to deploy IT and power infrastructure at scale and drive AI data center expansion."

Rittal Unveils Modular Cooling Distribution Unit With Over 1 MW Capacity

In close cooperation with hyperscalers and server OEMs, Rittal has developed a modular cooling distribution unit (CDU) that delivers a cooling capacity of over 1 MW. It will be the centerpiece exhibit at Rittal's booth A24 at the 2024 OCP Global Summit. The CDU uses water-based direct liquid cooling, and is thus an example of the new IT infrastructure technologies that enable AI applications.

New technology, familiar handling?
"To put the technology into practice, it is not enough to simply provide the cooling capacity and integrate the solution into the facility - which also still poses challenges," says Lars Platzhoff, Head of Rittal's Business Unit Cooling Solutions: "Despite the new technology, the solutions must remain manageable by the data center team as part of the usual service. At best, this should be taken into account already at the design stage."

Western Digital Enterprise SSDs Certified to Support NVIDIA GB200 NVL72 System for Compute-Intensive AI Environments

Western Digital Corp. today announced that its PCIe Gen 5 DC SN861 E.1S enterprise-class NVMe SSDs have been certified to support the NVIDIA GB200 NVL72 rack-scale system.

The rapid rise of AI, ML, and large language models (LLMs) is creating a challenge for companies with two opposing forces. Data generation and consumption are accelerating, while organizations face pressure to quickly derive value from this data. Performance, scalability, and efficiency are essential for AI technology stacks as storage demands rise. Certified to be compatible with the GB200 NVL72 system, Western Digital's enterprise SSD addresses the growing needs of the AI market for high-speed accelerated computing combined with low latency to serve compute-intensive AI environments.

Advantech Announces CXL 2.0 Memory to Boost Data Center Efficiency

Advantech, a global leader in embedded computing, is excited to announce the release of the SQRAM CXL 2.0 Type 3 Memory Module. Compute Express Link (CXL) 2.0 is the next evolution in memory technology, providing memory expansion with a high-speed, low-latency interconnect designed to meet the demands of large AI training and HPC clusters. CXL 2.0 builds on the foundation of the original CXL specification, introducing advanced features such as memory sharing and expansion, enabling more efficient utilization of resources across heterogeneous computing environments.

Memory Expansion via E3.S 2T Form Factor
Traditional memory architectures are often limited by fixed allocations, which can result in underutilized resources and bottlenecks in data-intensive workloads. With the E3.S form factor, based on the EDSFF standard, the CXL 2.0 Memory Module overcomes these limitations, allowing for dynamic resource management. This not only improves performance but reduces costs by maximizing existing resources.

HPE Announces Industry's First 100% Fanless Direct Liquid Cooling Systems Architecture

Hewlett Packard Enterprise announced the industry's first 100% fanless direct liquid cooling systems architecture to enhance the energy and cost efficiency of large-scale AI deployments. The company introduced the innovation at its AI Day, held for members of the financial community at one of its state-of-the-art AI systems manufacturing facilities. During the event, the company showcased its expertise and leadership in AI across enterprises, sovereign governments, service providers and model builders.

Industry's first 100% fanless direct liquid cooling system
While efficiency has improved in next-generation accelerators, power consumption is continuing to intensify with AI adoption, outstripping traditional cooling techniques.

Lenovo Accelerates Business Transformation with New ThinkSystem Servers Engineered for Optimal AI and Powered by AMD

Today, Lenovo announced its industry-leading ThinkSystem infrastructure solutions powered by AMD EPYC 9005 Series processors, as well as AMD Instinct MI325X accelerators. Backed by 225 of AMD's world-record performance benchmarks, the Lenovo ThinkSystem servers deliver an unparalleled combination of AMD technology-based performance and efficiency to tackle today's most demanding edge-to-cloud workloads, including AI training, inferencing and modeling.

"Lenovo is helping organizations of all sizes and across various industries achieve AI-powered business transformations," said Vlad Rozanovich, Senior Vice President, Lenovo Infrastructure Solutions Group. "Not only do we deliver unmatched performance, we offer the right mix of solutions to change the economics of AI and give customers faster time-to-value and improved total value of ownership."

HPE Launches HPE ProLiant Compute XD685 Servers Powered by 5th Gen AMD EPYC Processors and AMD Instinct MI325X Accelerators

Hewlett Packard Enterprise today announced the HPE ProLiant Compute XD685 for complex AI model training tasks, powered by 5th Gen AMD EPYC processors and AMD Instinct MI325X accelerators. The new HPE system is optimized to quickly deploy high-performing, secure and energy-efficient AI clusters for use in large language model training, natural language processing and multi-modal training.

The race is on to unlock the promise of AI and its potential to dramatically advance outcomes in workforce productivity, healthcare, climate sciences and much more. To capture this potential, AI service providers, governments and large model builders require flexible, high-performance solutions that can be brought to market quickly.

Supermicro Introduces New Servers and GPU Accelerated Systems with AMD EPYC 9005 Series CPUs and AMD Instinct MI325X GPUs

Supermicro, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, announces the launch of a new series of servers, GPU-accelerated systems, and storage servers featuring the AMD EPYC 9005 Series processors and AMD Instinct MI325X GPUs. The new H14 product line represents one of the most extensive server families in the industry, including Supermicro's Hyper systems, the Twin multi-node servers, and AI inferencing GPU systems, all available with air or liquid cooling options. The new "Zen 5" processor core architecture implements full data path AVX-512 vector instructions for CPU-based AI inference and provides 17% better instructions per cycle (IPC) than the previous 4th generation EPYC processor, enabling more performance per core.

Supermicro's new H14 family uses the latest 5th Gen AMD EPYC processors which enable up to 192 cores per CPU with up to 500 W TDP (thermal design power). Supermicro has designed new H14 systems including the Hyper and FlexTwin systems which can accommodate the higher thermal requirements. The H14 family also includes three systems for AI training and inference workloads supporting up to 10 GPUs which feature the AMD EPYC 9005 Series CPU as the host processor and two which support the AMD Instinct MI325X GPU.

MSI Launches AMD EPYC 9005 Series CPU-Based Server Solutions

MSI, a leading global provider of high-performance server solutions, today introduced its latest AMD EPYC 9005 Series CPU-based server boards and platforms, engineered to tackle the most demanding data center workloads with leadership performance and efficiency.

Featuring AMD EPYC 9005 Series processors with up to 192 cores and 384 threads, MSI's new server platforms deliver breakthrough compute power, unparalleled density, and exceptional energy efficiency, making them ideal for handling AI-enabled, cloud-native, and business-critical workloads in modern data centers.
Jan 17th, 2025 17:20 EST
