News Posts matching #Data Center


Senao Networks Unveils AI-Driven Computing at MWC Barcelona 2025

Senao Networks Inc. (SNI), a global leader in AI computing and networking solutions, will be exhibiting at Mobile World Congress (MWC) 2025 in Barcelona. At the event, SNI will showcase its latest AI-driven innovations, including AI Servers, AI Cameras, AIPCs, Cloud Solutions, and Titanium Power Supply, reinforcing its vision of "AI Everywhere."

Senao Networks continues to advance AI computing with new products designed to enhance security, efficiency, and connectivity.

Arm to Develop In-House Server CPUs, Signs Meta as First Customer

Reports from the Financial Times suggest Arm plans to create its own CPU, set to hit the market in 2025, with Meta Platforms said to be one of the first customers. The chip is reportedly a CPU for data center servers, with TSMC handling manufacturing. When the Financial Times asked for comment, SoftBank (Arm's majority owner) and Meta declined, and Arm gave no statement. A Nikkei report from May 2024 suggested that a prototype AI processor would be completed by spring 2025 and go on sale by fall 2025, so the latest Financial Times report reads as confirmation of those earlier rumors.

Right now, Arm makes money by letting others use its instruction set and core designs to make their own chips. This new move could put Arm in competition with its current customers. Industry sources say Arm is trying to win business from Qualcomm, with rumors that Arm has been bringing in executives from companies it works with to help develop this chip. While Qualcomm had talked in the past about supplying Meta a data center CPU using Arm's designs, it looks like Arm has won at least some of that deal. However, no technical or specification details are currently available for Arm's first in-house server CPU.

Intel's Head of Data Center and AI Division Exits to Lead Nokia

Intel experienced another leadership setback on Monday when Justin Hotard, who led its Data Center and AI (DCAI) division, said he was leaving to become Nokia's CEO. Hotard joined Intel in early 2024 and worked there for just over a year. He will take over from Pekka Lundmark at Nokia on April 1. In his short time at Intel, Hotard oversaw the release of Intel's Sierra Forest E-core and Granite Rapids P-core Xeon 6 platforms, which helped Intel catch up to AMD in core count for the first time since 2017. Intel has temporarily appointed Karin Eibschitz Segal, an 18-year company veteran and co-CEO of Intel Israel, as the interim chief of DCAI.

However, Justin Hotard's exit comes as the DCAI division faces several problems. Not long ago, Intel said it would push back the launch of its next-generation Clearwater Forest Xeons to the first half of 2026, citing low demand. The company also scrapped its Falcon Shores accelerators to focus on a future rack-scale platform called Jaguar Shores. These setbacks came after Intel Fellow Sailesh Kottapalli left for Qualcomm last month. Kottapalli had worked at Intel for 28 years and played a key role in developing many Xeon server processors.

Supermicro Ramps Full Production of NVIDIA Blackwell Rack-Scale Solutions With NVIDIA HGX B200

Supermicro, Inc., a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, is announcing full production availability of its end-to-end AI data center Building Block Solutions accelerated by the NVIDIA Blackwell platform. The Supermicro Building Block portfolio provides the core infrastructure elements necessary to scale Blackwell solutions with exceptional time to deployment. The portfolio includes a broad range of air-cooled and liquid-cooled systems with multiple CPU options, with thermal designs supporting traditional air cooling as well as liquid-to-liquid (L2L) and liquid-to-air (L2A) cooling. In addition, a full data center management software suite, rack-level integration (including full network switching and cabling), and cluster-level L12 solution validation can be delivered as a turn-key offering with global delivery, professional support, and service.

"In this transformative moment of AI, where scaling laws are pushing the limits of data center capabilities, our latest NVIDIA Blackwell-powered solutions, developed through close collaboration with NVIDIA, deliver outstanding computational power," said Charles Liang, president and CEO of Supermicro. "Supermicro's NVIDIA Blackwell GPU offerings in plug-and-play scalable units with advanced liquid cooling and air cooling are empowering customers to deploy an infrastructure that supports increasingly complex AI workloads while maintaining exceptional efficiency. This reinforces our commitment to providing sustainable, cutting-edge solutions that accelerate AI innovation."

Intel Cuts Xeon 6 Prices up to 30% to Battle AMD in the Data Center

Intel has implemented substantial price cuts across its Xeon 6 "Granite Rapids" server processor lineup, marking a significant shift in its data center strategy. The reductions, quietly introduced and reflected in Intel's ARK database, come just four months after the processors' September launch. The most dramatic cut affects Intel's flagship 128-core Xeon 6980P, which saw its price drop 30%, from $17,800 to $12,460. This aggressive pricing positions the processor below AMD's competing 128-core EPYC "Turin" 9755 in both absolute and per-core pricing, intensifying the rivalry between the two semiconductor giants. AMD's 128-core SKU is now pricier at $12,984, with higher-core-count SKUs reaching up to $14,813 for the 192-core EPYC 9965 based on Zen 5c cores. Intel is expected to release 288-core "Sierra Forest" Xeon SKUs this quarter, at which point the updated pricing structure can be compared against AMD's lineup.
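The per-core claim above can be checked directly from the quoted list prices. The sketch below uses only the figures in this paragraph:

```python
# Per-core price comparison using the list prices quoted in the article.
skus = {
    "Intel Xeon 6980P": {"cores": 128, "price": 12_460},  # after the 30% cut
    "AMD EPYC 9755":    {"cores": 128, "price": 12_984},
    "AMD EPYC 9965":    {"cores": 192, "price": 14_813},
}

for name, s in skus.items():
    per_core = s["price"] / s["cores"]
    print(f"{name}: ${s['price']:,} total, ${per_core:,.2f} per core")
```

Note that on these list prices the 192-core EPYC 9965 is actually the cheapest per core; Intel's per-core advantage holds specifically in the 128-core match-up.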

Additionally, Intel's price adjustments extend beyond the flagship model, with three of the five Granite Rapids processors receiving substantial reductions. The 96-core Xeon 6972P and 6952P models have been marked down by 13% and 20%, respectively. These cuts make Intel's offerings particularly attractive to cloud providers who prioritize core density and cost efficiency. However, Intel's competitive pricing comes with trade-offs. The higher power consumption of Intel's processors, exemplified by the 96-core Xeon 6972P's 500 W rating, which exceeds AMD's comparable model by 100 W, could offset the initial savings through increased operational costs. Ultimately, most of the data center buildout will be won by whoever can ship the most CPU volume (read: wafer production capacity) and offer the best TCO/ROI balance, including power consumption and performance.
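The operational-cost argument can be made concrete with a back-of-envelope sketch. The electricity rate and the five-year, 24/7 duty cycle below are illustrative assumptions, not figures from the article:

```python
# Rough energy cost of the 100 W TDP gap between the 96-core parts.
delta_watts = 100          # Xeon 6972P (500 W) vs. AMD's comparable model
hours = 5 * 365 * 24       # assumed five-year, 24/7 service life
price_per_kwh = 0.10       # assumed flat USD rate; ignores cooling overhead

extra_kwh = delta_watts * hours / 1000
extra_cost = extra_kwh * price_per_kwh
print(f"{extra_kwh:,.0f} kWh extra, roughly ${extra_cost:,.0f} per socket")
```

Cooling overhead typically multiplies this figure, so the real TCO penalty is larger than the raw energy cost alone.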

Numem to Showcase Next-Gen Memory Solutions at the Upcoming Chiplet Summit

Numem, an innovator focused on accelerating memory for AI workloads, will be at the upcoming Chiplet Summit to showcase its high-performance solutions. By accelerating the delivery of data via new memory subsystem designs, Numem solutions are re-architecting the hierarchy of AI memory tiers to eliminate the bottlenecks that negatively impact power and performance.

The rapid growth of AI workloads and AI processors/GPUs is exacerbating the memory bottleneck caused by the slowing performance improvements and scalability of SRAM and DRAM, presenting a major obstacle to maximizing system performance. To overcome this, there is a pressing need for intelligent memory solutions that offer higher power efficiency and greater bandwidth, coupled with a reevaluation of traditional memory architectures.

Qualcomm Pushes for Data Center CPUs, Hires Ex-Intel Chief Xeon Architect

Qualcomm is becoming serious about its server CPU ambitions. Today, we have learned that Sailesh Kottapalli, Intel's former chief architect for Xeon server processors, has joined Qualcomm as Senior Vice President after 28 years at Intel. Kottapalli, who announced his departure on LinkedIn Monday, previously led the development of multiple Xeon and Itanium processors at Intel. Qualcomm's data center team is currently working on reference platforms based on their Snapdragon technology. The company already sells AI accelerator chips under the Qualcomm Cloud AI brand, supported by major providers including AWS, HPE, and Lenovo.

This marks Qualcomm's second attempt at entering the server CPU market, following an unsuccessful Centriq effort that ended in 2018. The company is now leveraging technology from its $1.4 billion Nuvia acquisition in 2021, though this has led to ongoing legal disputes with Arm over licensing terms. While Qualcomm hasn't officially detailed Kottapalli's role, the company confirmed in legal filings its intentions to continue developing data center CPUs, as originally planned by Nuvia.

Transcend Unveils Enterprise SSD to Boost Data Center Performance and Security

Transcend Information Inc. (Transcend), a global leader in storage solutions, introduces the new ETD210T enterprise 2.5-inch SSD, designed to meet the read-intensive needs of business users. Featuring enterprise-grade TLC (eTLC) NAND flash and a SATA III 6 Gb/s interface, the ETD210T includes a built-in DRAM cache to deliver fast data transfer, exceptional Quality of Service (QoS), ultra-low latency, and superior endurance. Ideal for read-intensive and high-capacity storage workloads in cloud and data center applications, it provides a reliable and efficient storage solution for enterprise computing.

Designed for Enterprises, Optimized for Data Centers
The ETD210T supports various enterprise applications, including data centers, virtualized servers, and large-scale data processing. Equipped with high-endurance eTLC NAND flash, it delivers exceptional read and write performance. Its endurance rating of DWPD = 1 meets the requirements of most enterprise-class applications, while its read and write speeds of up to 530 MB/s and 510 MB/s, respectively, address the need for highly efficient storage.
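A DWPD rating converts to total bytes written (TBW) as capacity × DWPD × warranty days. The 1.92 TB capacity and five-year warranty in the sketch below are illustrative assumptions, since the announcement does not list the drive's capacity points:

```python
# TBW implied by a DWPD = 1 rating (capacity and warranty are assumed values).
capacity_tb = 1.92       # hypothetical capacity point
dwpd = 1                 # drive writes per day, from the article
warranty_years = 5       # assumed warranty period

tbw = capacity_tb * dwpd * 365 * warranty_years
print(f"Total bytes written over warranty: {tbw:,.0f} TB")
```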

SK hynix Develops PS1012 U.2 High Capacity SSD for AI Data Centers

SK hynix Inc. announced today that it has completed development of its high-capacity SSD product, PS1012 U.2, designed for AI data centers. As the era of AI accelerates, the demand for high-performance enterprise SSDs (eSSD) is rapidly increasing, and QLC technology, which enables high capacity, has become the industry standard. In line with this trend, SK hynix has developed a 61 TB product using this technology and introduced it to the market.

SK hynix has been leading the SSD market for AI data centers together with Solidigm, a subsidiary that was the first in the world to commercialize QLC-based eSSDs. With the development of the PS1012, the company expects to build a balanced SSD portfolio, thereby maximizing synergy between the two companies. Using the latest 5th-generation (Gen 5) PCIe interface, the PS1012 doubles its bandwidth compared to Gen 4-based products. As a result, the data transfer speed reaches 32 GT/s (gigatransfers per second), with sequential read performance of 13 GB/s (gigabytes per second), twice that of previous-generation products.
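As a sanity check on those numbers: U.2 drives typically attach over four PCIe lanes (the announcement does not state the link width, so x4 is an assumption here), and a Gen 5 x4 link comfortably covers the quoted 13 GB/s sequential read:

```python
# Raw PCIe Gen 5 x4 link bandwidth vs. the drive's quoted sequential read.
gt_per_s = 32            # PCIe 5.0 raw rate per lane, in GT/s
encoding = 128 / 130     # 128b/130b line encoding used by PCIe 3.0 and later
lanes = 4                # assumed U.2 link width

lane_gb_s = gt_per_s * encoding / 8   # GB/s per lane, before protocol overhead
link_gb_s = lane_gb_s * lanes
print(f"~{link_gb_s:.2f} GB/s raw link vs. 13 GB/s sequential read")
```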

IBM Develops Co-Packaged Optical Interconnect for Data Center

IBM Research has unveiled a significant advancement in optical interconnect technology for advanced data center communications. The breakthrough centers on a novel co-packaged optics (CPO) system featuring a sophisticated Polymer Optical Waveguide (PWG) design, marking a potential shift from traditional copper-based interconnects. The innovation introduces a Photonic Integrated Circuit (PIC) measuring 8 × 10 mm, mounted on a 17 × 17 mm substrate, capable of converting electrical signals to optical ones and vice versa. The system's waveguide, spanning 12 mm in width, efficiently channels light waves through precisely engineered pathways, with channels converging from 250 to 50 micrometers.

While current copper-based solutions like NVIDIA's NVLink offer impressive 1.8 TB/s bandwidth rates, and Intel's Optical Compute Interconnect achieves 4 Tbit/s bidirectional throughput, IBM's technology focuses on scalability and efficiency. The company plans to implement 12 carrier waves initially, with the potential to accommodate up to 32 waves by reducing spacing to 18 micrometers. Furthermore, the design allows for vertical stacking of up to four PWGs, potentially enabling 128 transmission channels. The technology has undergone rigorous JEDEC-standard testing, including 1,000 cycles of thermal stress between -40°C and 125°C, and extended exposure to extreme conditions, including 85% humidity at 85°C. The components have also proven reliable during thousand-hour storage tests at various temperature extremes. The bandwidth of the CPO is currently unknown, but we expect it to surpass current solutions.

JPR: Q3'24 PC Graphics AiB Shipments Decreased 14.5% Compared to the Last Quarter

According to a new research report from the analyst firm Jon Peddie Research, shipments in the global PC-based graphics add-in board (AIB) market reached 8.1 million units in Q3'24, while desktop PC CPU shipments increased to 20.1 million units. Overall, AIBs will see a compound annual growth rate of -6.0% from 2024 to 2028, reaching an installed base of 119 million units at the end of the forecast period. Over the next five years, the penetration of AIBs in desktop PCs will be 83%.

As indicated in the following chart, AMD's overall AIB market share decreased 2.0% from last quarter, and NVIDIA's market share increased by 2.0%. These slight flips of market share in a down quarter don't mean much except to the winner. The overall market dynamics haven't changed.
  • The overall AIB attach rate in desktop PCs for the quarter decreased to 141%, down 26.9% from last quarter.
  • The desktop PC CPU market decreased 3.4% year over year and increased 42.2% quarter over quarter, which influenced the attach rate of AIBs.

Intel at CES 2025: Pioneering AI-Driven Innovation in Work and Mobility

AI is fundamentally changing technology, from the PC to the edge and cloud, and redefining the way we work, create, and collaborate. Next-generation technologies will empower workers through powerful new tools that enhance productivity and make intelligent, personalized computing more accessible than ever. At CES 2025, Intel will show how it's advancing what's possible on the AI PC for on-the-go professionals and enthusiasts by designing for the needs and experiences of tomorrow with breakthrough efficiency, no-compromise compatibility, and an unmatched software ecosystem. Intel will also showcase the latest automotive innovations as the industry embraces software-defined connected electric vehicles powered by AI.

MiTAC Unveils New AI/HPC-Optimized Servers With Advanced CPU and GPU Integration

MiTAC Computing Technology Corporation, an industry-leading server platform design manufacturer and a subsidiary of MiTAC Holdings Corporation (TSE:3706), is unveiling its new server lineup at SC24, booth #2543, in Atlanta, Georgia. MiTAC Computing's servers integrate the latest AMD EPYC 9005 Series CPUs, AMD Instinct MI325X GPU accelerators, Intel Xeon 6 processors, and professional GPUs to deliver enhanced performance optimized for HPC and AI workloads.

Leading Performance and Density for AI-Driven Data Center Workloads
MiTAC Computing's new servers, powered by AMD EPYC 9005 Series CPUs, are optimized for high-performance AI workloads. At SC24, MiTAC highlights two standout AI/HPC products: the 8U dual-socket MiTAC G8825Z5, featuring AMD Instinct MI325X GPU accelerators, up to 6 TB of DDR5 6000 memory, and eight hot-swap U.2 drive trays, ideal for large-scale AI/HPC setups; and the 2U dual-socket MiTAC TYAN TN85-B8261, designed for HPC and deep learning applications with support for up to four dual-slot GPUs, twenty-four DDR5 RDIMM slots, and eight hot-swap NVMe U.2 drives. For mainstream cloud applications, MiTAC offers the 1U single-socket MiTAC TYAN GC68C-B8056, with twenty-four DDR5 DIMM slots and twelve tool-less 2.5-inch NVMe U.2 hot-swap bays. Also featured is the 2U single-socket MiTAC TYAN TS70A-B8056, designed for high-IOPS NVMe storage, and the 2U 4-node single-socket MiTAC M2810Z5, supporting up to 3,072 GB of DDR5 6000 RDIMM memory and four easy-swap E1.S drives per node.

MSI Presents New AMD EPYC 9005 Series CPU-Based Server Platforms at SC24

MSI, a leading global provider of high-performance server solutions, is excited to unveil its latest AMD EPYC 9005 Series CPU-based server boards and platforms at SC24 (SuperComputing 2024), Booth #3655, from November 19-21. Built on the OCP Modular Hardware System (DC-MHS) architecture, these new platforms deliver high-density, AI-ready solutions, including multi-node, enterprise, CXL memory expansion, and GPU servers, designed to meet the intensive demands of modern data centers.

"As AI continues to reshape the landscape of data center infrastructure, MSI's servers, powered by the AMD EPYC 9005 Series processors, offer unmatched density, energy efficiency, and cost optimization—making them ideal for modern data centers," said Danny Hsu, General Manager of Enterprise Platform Solutions. "Our servers optimize thermal management and performance for virtualized and containerized environments, positioning MSI at the forefront of AI and cloud-based workloads."

Solidigm Launches D5-P5336 PCIe Data Center SSDs With 122 TB Capacity

Solidigm, a leading provider of innovative NAND flash memory solutions, announced today the introduction of the world's highest capacity PCIe solid-state drive (SSD): the 122 TB (terabyte) Solidigm D5-P5336 data center SSD. The D5-P5336 doubles the storage space of Solidigm's earlier 61.44 TB version of the drive and is the world's first SSD with unlimited Random Write endurance for five years—offering an ideal solution for AI and data-intensive workloads. Just how much storage is 122.88 TB? Roughly enough for 4K-quality copies of every movie theatrically released in the 1990s, 2.6 times over.

Data storage power, thermal, and space constraints are tightening as AI adoption increases. Power- and space-efficient, the new 122 TB D5-P5336 delivers industry-leading storage efficiency from the core data center to the edge. Data center operators can confidently deploy the 122 TB D5-P5336 from Solidigm, the proven QLC (quad-level cell) density leader with more than 100 EB (exabytes) of QLC-based product shipped since 2018.

Innodisk Introduces E1.S Edge Server SSD for Edge Computing and AI Applications

Innodisk, a leading global AI solution provider, has introduced its new E1.S SSD, which is specifically designed to meet the demands of growing edge computing applications. The E1.S edge server SSD offers exceptional performance, reliability, and thermal management capabilities to address the critical needs of modern data-intensive environments and bridge the gap between traditional industrial SSDs and data center SSDs.

As AI and 5G technologies rapidly evolve, the demands on data processing and storage continue to grow. The E1.S SSD addresses the challenges of balancing heat dissipation and performance, which has become a major concern for today's SSDs. Traditional industrial and data center SSDs often struggle to meet the needs of edge applications. Innodisk's E1.S eliminates these bottlenecks with its Enterprise and Data Center Standard Form Factor (EDSFF) design and offers a superior alternative to U.2 and M.2 SSDs.

Micron Launches 6550 ION 60TB PCIe Gen5 NVMe SSD Series

Micron Technology, Inc., today announced it has begun qualification of the 6550 ION NVMe SSD with customers. The Micron 6550 ION is the world's fastest 60 TB data center SSD and the industry's first E3.S and PCIe Gen 5 60 TB SSD. It follows the success of the award-winning 6500 ION and is engineered to provide best-in-class performance, energy efficiency, endurance, security, and rack density for exascale data center deployments. The 6550 ION excels in high-capacity NVMe workloads such as networked AI data lakes, ingest, data preparation and checkpointing, file and object storage, public cloud storage, analytic databases, and content delivery.

"The Micron 6550 ION achieves a remarkable 12 GB/s while using just 20 watts of power, setting a new standard in data center performance and energy efficiency," said Alvaro Toledo, vice president and general manager of Micron's Data Center Storage Group. "Featuring a first-to-market 60 TB capacity in an E3.S form factor and up to 20% better energy efficiency than competitive drives, the Micron 6550 ION is a game-changer for high-capacity storage solutions to address the insatiable capacity and power demands of AI workloads."
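The efficiency claim in the quote reduces to throughput per watt, computed directly from the stated figures:

```python
# Throughput per watt from the quoted 12 GB/s at 20 W.
throughput_gb_s = 12.0   # sequential throughput, GB/s
power_w = 20.0           # active power draw, watts

print(f"{throughput_gb_s / power_w:.2f} GB/s per watt")
```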

AMD Reports Third Quarter 2024 Financial Results, Revenue Up 18 Percent YoY

AMD today announced revenue for the third quarter of 2024 of $6.8 billion, gross margin of 50%, operating income of $724 million, net income of $771 million and diluted earnings per share of $0.47. On a non-GAAP basis, gross margin was 54%, operating income was $1.7 billion, net income was $1.5 billion and diluted earnings per share was $0.92.

"We delivered strong third quarter financial results with record revenue led by higher sales of EPYC and Instinct data center products and robust demand for our Ryzen PC processors," said AMD Chair and CEO Dr. Lisa Su. "Looking forward, we see significant growth opportunities across our data center, client and embedded businesses driven by the insatiable demand for more compute."

Meta Shows Open-Architecture NVIDIA "Blackwell" GB200 System for Data Center

During the Open Compute Project (OCP) Summit 2024, Meta, one of the prime members of the OCP project, showed its NVIDIA "Blackwell" GB200 systems for its massive data centers. We previously covered Microsoft's Azure server rack with GB200 GPUs featuring one-third of the rack space for computing and two-thirds for cooling. A few days later, Google showed off its smaller GB200 system, and today, Meta is showing off its GB200 system—the smallest of the bunch. To train a dense transformer large language model with 405B parameters and a context window of up to 128k tokens, like the Llama 3.1 405B, Meta must redesign its data center infrastructure to run a distributed training job on two 24,000 GPU clusters. That is 48,000 GPUs used for training a single AI model.

Called "Catalina," it is built on the NVIDIA Blackwell platform, emphasizing modularity and adaptability while incorporating the latest NVIDIA GB200 Grace Blackwell Superchip. To address the escalating power requirements of GPUs, Catalina introduces the Orv3, a high-power rack capable of delivering up to 140 kW. The comprehensive liquid-cooled setup encompasses a power shelf supporting various components, including a compute tray, switch tray, the Orv3 HPR, a Wedge 400 fabric switch with 12.8 Tbps switching capacity, a management switch, battery backup, and a rack management controller. Interestingly, Meta also upgraded its "Grand Teton" system for internal usage, such as deep learning recommendation models (DLRMs) and content understanding, with AMD Instinct MI300X accelerators. Those are used for inference on internal models, where the MI300X appears to provide the best performance per dollar. According to Meta, the computational demand stemming from AI will continue to increase exponentially, so more NVIDIA and AMD GPUs are needed, and we can't wait to see what the company builds.

SK hynix Showcases Memory Solutions at the 2024 OCP Global Summit

SK hynix is showcasing its leading AI and data center memory products at the 2024 Open Compute Project (OCP) Global Summit held October 15-17 in San Jose, California. The annual summit brings together industry leaders to discuss advancements in open source hardware and data center technologies. This year, the event's theme is "From Ideas to Impact," which aims to foster the realization of theoretical concepts into real-world technologies.

In addition to presenting its advanced memory products at the summit, SK hynix is also strengthening key industry partnerships and sharing its AI memory expertise through insightful presentations. This year, the company is holding eight sessions—up from five in 2023—on topics including HBM and CMS.

MSI Showcases Innovation at 2024 OCP Global Summit, Highlighting DC-MHS, CXL Memory Expansion, and MGX-enabled AI Servers

MSI, a leading global provider of high-performance server solutions, is excited to showcase its comprehensive lineup of motherboards and servers based on the OCP Modular Hardware System (DC-MHS) architecture at the OCP Global Summit from October 15-17 at booth A6. These cutting-edge solutions represent a breakthrough in server designs, enabling flexible deployments for cloud and high-density data centers. Featured innovations include CXL memory expansion servers and AI-optimized servers, demonstrating MSI's leadership in pushing the boundaries of AI performance and computing power.

DC-MHS Series Motherboards and Servers: Enabling Flexible Deployment in Data Centers
"The rapidly evolving IT landscape requires cloud service providers, large-scale data center operators, and enterprises to handle expanding workloads and future growth with more flexible and powerful infrastructure. MSI's new range of DC-MHS-based solutions provides the needed flexibility and efficiency for modern data center environments," said Danny Hsu, General Manager of Enterprise Platform Solutions.

Supermicro's Liquid-Cooled SuperClusters for AI Data Centers Powered by NVIDIA GB200 NVL72 and NVIDIA HGX B200 Systems

Supermicro, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is accelerating the industry's transition to liquid-cooled data centers with the NVIDIA Blackwell platform, delivering a new paradigm of energy efficiency for the rapidly growing energy demands of new AI infrastructure. Supermicro's industry-leading end-to-end liquid-cooling solutions are powered by the NVIDIA GB200 NVL72 platform for exascale computing in a single rack and have started sampling to select customers, with full-scale production beginning in late Q4. In addition, the recently announced Supermicro X14 and H14 4U liquid-cooled systems and 10U air-cooled systems are production-ready for the NVIDIA HGX B200 8-GPU system.

"We're driving the future of sustainable AI computing, and our liquid-cooled AI solutions are rapidly being adopted by some of the most ambitious AI Infrastructure projects in the world with over 2000 liquid-cooled racks shipped since June 2024," said Charles Liang, president and CEO of Supermicro. "Supermicro's end-to-end liquid-cooling solution, with the NVIDIA Blackwell platform, unlocks the computational power, cost-effectiveness, and energy-efficiency of the next generation of GPUs, such as those that are part of the NVIDIA GB200 NVL72, an exascale computer contained in a single rack. Supermicro's extensive experience in deploying liquid-cooled AI infrastructure, along with comprehensive on-site services, management software, and global manufacturing capacity, provides customers a distinct advantage in transforming data centers with the most powerful and sustainable AI solutions."

Flex Announces Liquid-Cooled Rack and Power Solutions for AI Data Centers at 2024 OCP Global Summit

Flex today announced new reference platforms for liquid-cooled servers, rack, and power products that will enable customers to sustainably accelerate data center growth. These innovations build on Flex's ability to address technical challenges associated with power, heat generation, and scale to support artificial intelligence (AI) and high-performance computing (HPC) workloads.

"Flex delivers integrated data center IT and power infrastructure solutions that address the growing power and compute demands in the AI era," said Michael Hartung, president and chief commercial officer, Flex. "We are expanding our unique portfolio of advanced manufacturing capabilities, innovative products, and lifecycle services, enabling customers to deploy IT and power infrastructure at scale and drive AI data center expansion."

Rittal Unveils Modular Cooling Distribution Unit With Over 1 MW Capacity

In close cooperation with hyperscalers and server OEMs, Rittal has developed a modular cooling distribution unit (CDU) that delivers a cooling capacity of over 1 MW. It will be the centerpiece exhibit at Rittal's booth A24 at the 2024 OCP Global Summit. The CDU uses direct liquid cooling based on water, and is thus an example of the new IT infrastructure technologies that enable AI applications.

New technology, familiar handling?
"To put the technology into practice, it is not enough to simply provide the cooling capacity and integrate the solution into the facility - which itself still poses challenges," says Lars Platzhoff, Head of Rittal's Business Unit Cooling Solutions. "Despite the new technology, the solutions must remain manageable by the data center team as part of the usual service. Ideally, this should be taken into account already at the design stage."

Western Digital Enterprise SSDs Certified to Support NVIDIA GB200 NVL72 System for Compute-Intensive AI Environments

Western Digital Corp. today announced that its PCIe Gen 5 DC SN861 E.1S enterprise-class NVMe SSDs have been certified to support the NVIDIA GB200 NVL72 rack-scale system.

The rapid rise of AI, ML, and large language models (LLMs) is creating a challenge for companies caught between two opposing forces: data generation and consumption are accelerating, while organizations face pressure to quickly derive value from this data. Performance, scalability, and efficiency are essential for AI technology stacks as storage demands rise. Certified compatible with the GB200 NVL72 system, Western Digital's enterprise SSD addresses the AI market's growing need for high-speed accelerated computing combined with low latency to serve compute-intensive AI environments.
Feb 19th, 2025 20:45 EST