News Posts matching #Data Center


Supermicro Adds Portfolio for Next Wave of AI with NVIDIA Blackwell Ultra Solutions

Supermicro, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is announcing new systems and rack solutions powered by NVIDIA's Blackwell Ultra platform, featuring the NVIDIA HGX B300 NVL16 and NVIDIA GB300 NVL72 platforms. The new AI solutions from Supermicro and NVIDIA strengthen the companies' AI leadership by delivering breakthrough performance for the most compute-intensive AI workloads, including AI reasoning, agentic AI, and video inference applications.

"At Supermicro, we are excited to continue our long-standing partnership with NVIDIA to bring the latest AI technology to market with the NVIDIA Blackwell Ultra Platforms," said Charles Liang, president and CEO, Supermicro. "Our Data Center Building Block Solutions approach has streamlined the development of new air and liquid-cooled systems, optimized to the thermals and internal topology of the NVIDIA HGX B300 NVL16 and GB300 NVL72. Our advanced liquid-cooling solution delivers exceptional thermal efficiency, operating with 40℃ warm water in our 8-node rack configuration, or 35℃ warm water in double-density 16-node rack configuration, leveraging our latest CDUs. This innovative solution reduces power consumption by up to 40% while conserving water resources, providing both environmental and operational cost benefits for enterprise data centers."

Dell Technologies Accelerates Enterprise AI Innovation from PC to Data Center with NVIDIA 

Marking one year since the launch of the Dell AI Factory with NVIDIA, Dell Technologies (NYSE: DELL) announces new AI PCs, infrastructure, software and services advancements to accelerate enterprise AI innovation at any scale. Successful AI deployments are vital for enterprises to remain competitive, but challenges like system integration and skill gaps can delay the value enterprises realize from AI. More than 75% of organizations want their infrastructure providers to deliver capabilities across all aspects of the AI adoption journey, driving customer demand for simplified AI deployments that can scale.

As the top provider of AI-centric infrastructure, Dell Technologies, in collaboration with NVIDIA, provides a consistent experience across AI infrastructure, software, and services, offering customers a one-stop shop to scale AI initiatives from deskside to large-scale data center deployments.

Micron Innovates From the Data Center to the Edge With NVIDIA

Secular growth of AI is built on the foundation of high-performance, high-bandwidth memory solutions. These high-performing memory solutions are critical to unlock the capabilities of GPUs and processors. Micron Technology, Inc., today announced it is the world's first and only memory company shipping both HBM3E and SOCAMM (small outline compression attached memory module) products for AI servers in the data center. This extends Micron's industry leadership in designing and delivering low-power DDR (LPDDR) for data center applications.

Micron's SOCAMM, a modular LPDDR5X memory solution, was developed in collaboration with NVIDIA to support the NVIDIA GB300 Grace Blackwell Ultra Superchip. The Micron HBM3E 12H 36 GB is also designed into the NVIDIA HGX B300 NVL16 and GB300 NVL72 platforms, while the HBM3E 8H 24 GB is available for the NVIDIA HGX B200 and GB200 NVL72 platforms. The deployment of Micron HBM3E products in NVIDIA Hopper and NVIDIA Blackwell systems underscores Micron's critical role in accelerating AI workloads.

GIGABYTE Showcases Cutting-Edge AI and Cloud Computing Solutions at CloudFest 2025

Giga Computing, a subsidiary of GIGABYTE, a global leader in IT technology solutions, is thrilled to announce its participation at CloudFest 2025, the world's premier cloud, hosting, and internet infrastructure event. As a key exhibitor, Giga Computing will highlight its latest innovations in AI, cloud computing, and edge solutions at the GIGABYTE booth. In line with its commitment to shaping the future of AI development and deployment, the GIGABYTE booth will showcase its industry-leading hardware and platforms optimized for AI workloads, cloud applications, and edge computing. As cloud adoption continues to accelerate, Giga Computing solutions are designed to empower businesses with unparalleled performance, scalability, and efficiency.

At CloudFest 2025, Giga Computing invites attendees to visit booth #E03 to experience firsthand its cutting-edge cloud computing solutions. From state-of-the-art hardware to innovative total solutions, a comprehensive suite of products and services designed to meet the evolving needs of the cloud industry is being showcased.

Meta Reportedly Reaches Test Phase with First In-house AI Training Chip

According to a Reuters technology report, Meta's engineering department is testing its "first in-house chip for training artificial intelligence systems." Two inside sources describe this as a significant development milestone, involving a small-scale deployment of early samples. The owner of Facebook could ramp up production if the initial batches pass muster. Despite recently showcasing an open-architecture NVIDIA "Blackwell" GB200 system for enterprise, Meta leadership is reportedly pursuing proprietary solutions. Multiple big players in the field of artificial intelligence are attempting to break away from a total reliance on Team Green. Last month, press outlets concentrated on OpenAI's alleged finalization of an in-house design, with rumored involvement from Broadcom and TSMC.

One of the Reuters industry moles believes that Meta has signed up with TSMC; supposedly, the Taiwanese foundry was responsible for the production of test batches. Tom's Hardware reckons that Meta and Broadcom worked together on the tape-out of the social media giant's "first AI training accelerator." Development of the company's "Meta Training and Inference Accelerator" (MTIA) series has stretched back a couple of years. According to Reuters, the multi-part project "had a wobbly start for years, and at one point scrapped a chip at a similar phase of development...Meta last year, started using an MTIA chip to perform inference, or the process involved in running an AI system as users interact with it, for the recommendation systems that determine which content shows up on Facebook and Instagram news feeds." Leadership is reportedly aiming to get custom silicon solutions up and running for AI training by next year. Past examples of MTIA hardware were deployed with open-source RISC-V cores (for inference tasks), but it is not clear whether this architecture will form the basis of Meta's latest AI chip design.

Giga Computing, SK Telecom, and SK Enmove to Collaborate on AI Data Center Liquid Cooling Technology

Giga Computing, a subsidiary of GIGABYTE Technology, has signed a Memorandum of Understanding (MoU) with SK Telecom and SK Enmove to collaborate on advancing AI Data Center (AIDC) and high-performance computing (HPC) while accelerating the adoption of liquid cooling technology in next-generation data centers.
This strategic partnership sets the stage to nurture and develop high-performance, energy-efficient, and sustainable data center solutions.

Driving AI and Cooling Technology Innovation Together
The collaboration covers high-performance AI servers, liquid cooling technologies, and modular AI clusters to support SK's various business units, including:
  • SK Telecom: Strengthening AIDC infrastructure to support next-generation data centers
  • SK Enmove: Advancing liquid cooling technologies to improve energy efficiency and sustainability in data centers

Jio Platforms Limited Along with AMD, Cisco, and Nokia Unveil Plans for Open Telecom AI Platform at MWC 2025

Jio Platforms Limited (JPL), together with AMD, Cisco, and Nokia, announced at Mobile World Congress 2025 plans to form an innovative, new Open Telecom AI Platform. Designed to support today's operators and service providers with real-world, AI-driven solutions, the Telecom AI Platform is set to drive unprecedented efficiency, security, capabilities, and new revenue opportunities for the service provider industry.

End-to-end Network Intelligence
Fueled by the collective expertise of world leaders from across domains including RAN, Routing, AI Data Center, Security and Telecom, the Telecom AI Platform will create a new central intelligence layer for telecom and digital services. This multi-domain intelligence framework will integrate AI and automation into every layer of network operations.

STMicroelectronics Enhances Optical Interconnects for Faster AI and Cloud Datacenters

STMicroelectronics, a global semiconductor leader serving customers across the spectrum of electronics applications, is unveiling its next generation of proprietary technologies for higher-performing optical interconnect in datacenters and AI clusters. With the exponential growth of AI computing needs, challenges arise in performance and energy efficiency across computing, memory, power supply, and the interconnections linking them. ST is helping hyperscalers and leading optical module providers overcome those challenges with new silicon photonics and next-gen BiCMOS technologies, scheduled to ramp up from the second half of 2025 for 800 Gb/s and 1.6 Tb/s optical modules.

At the heart of interconnections in a datacenter are thousands, or even hundreds of thousands, of optical transceivers. These devices convert optical signals into electrical signals and vice versa to allow data to flow between graphics processing unit (GPU) computing resources, switches, and storage. Inside these transceivers, ST's new, proprietary silicon photonics (SiPho) technology will give customers the ability to integrate multiple complex components into a single chip, while ST's next-gen, proprietary BiCMOS technology brings ultra-high-speed, low-power optical connectivity, both of which are key to sustaining AI growth.

Senao Networks Unveils AI Driven Computing at MWC Barcelona 2025

Senao Networks Inc. (SNI), a global leader in AI computing and networking solutions, will be exhibiting at Mobile World Congress (MWC) 2025 in Barcelona. At the event, SNI will showcase its latest AI-driven innovations, including AI Servers, AI Cameras, AI PCs, Cloud Solutions, and Titanium Power Supplies, reinforcing its vision of "AI Everywhere."

Senao Networks continues to advance AI computing with new products designed to enhance security, efficiency, and connectivity.

Arm to Develop In-House Server CPUs, Signs Meta as First Customer

Reports from the Financial Times suggest Arm plans to create its own CPU, set to hit the market in 2025, with Meta Platforms said to be one of the first customers. The chip is said to be a CPU for data center servers, with TSMC handling the manufacturing. However, when the Financial Times asked about this, SoftBank (the majority owner of Arm) and Meta stayed quiet, while Arm didn't give a statement. A Nikkei report from May 2024 suggested that a prototype AI processor chip would be completed by spring 2025 and available for sale by fall 2025, so the latest information from the Financial Times report feels like a confirmation of previous rumors.

Right now, Arm makes money by letting others use its instruction set and core designs to make their own chips. This new move could mean Arm will compete with its current customers. Sources in the industry say Arm is trying to win business from Qualcomm, with rumors that Arm has been bringing in executives from companies it works with to help develop this chip. While Qualcomm had talked in the past about giving Meta a data center CPU using Arm's design, it looks like Arm has won at least some of that deal. However, no technical or specification details are currently available for Arm's first in-house server CPU.

Intel's Head of Data Center and AI Division Exits to Lead Nokia

Intel experienced another leadership setback on Monday when Justin Hotard, who led its Data Center and AI (DCAI) division, said he was leaving to become Nokia's CEO. Hotard joined Intel in early 2024 and worked there for just over a year. He will take over from Pekka Lundmark at Nokia on April 1. In his short time at Intel, Hotard oversaw the release of Intel's Sierra Forest E-core and Granite Rapids P-core Xeon 6 platforms, which helped Intel catch up to AMD in core count for the first time since 2017. Intel has temporarily appointed Karin Eibschitz Segal, an 18-year company veteran and co-CEO of Intel Israel, as the interim chief of DCAI.

However, Justin Hotard's exit comes as the DCAI division faces several problems. Not long ago, Intel said it would push back the launch of its next-generation Clearwater Forest Xeons to the first half of 2026, blaming low demand. The company also scrapped its Falcon Shores accelerators to focus on a future rack-scale platform called Jaguar Shores. These setbacks came after Intel Fellow Sailesh Kottapalli left for Qualcomm last month. Kottapalli had worked at Intel for 28 years and played a key role in developing many Xeon server processors.

Supermicro Ramps Full Production of NVIDIA Blackwell Rack-Scale Solutions With NVIDIA HGX B200

Supermicro, Inc., a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, is announcing full production availability of its end-to-end AI data center Building Block Solutions accelerated by the NVIDIA Blackwell platform. The Supermicro Building Block portfolio provides the core infrastructure elements necessary to scale Blackwell solutions with exceptional time to deployment. The portfolio includes a broad range of air-cooled and liquid-cooled systems with multiple CPU options, featuring superior thermal designs that support traditional air cooling, liquid-to-liquid (L2L), and liquid-to-air (L2A) cooling. In addition, a full data center management software suite, rack-level integration (including full network switching and cabling), and cluster-level L12 solution validation can be delivered as a turn-key offering with global delivery, professional support, and service.

"In this transformative moment of AI, where scaling laws are pushing the limits of data center capabilities, our latest NVIDIA Blackwell-powered solutions, developed through close collaboration with NVIDIA, deliver outstanding computational power," said Charles Liang, president and CEO of Supermicro. "Supermicro's NVIDIA Blackwell GPU offerings in plug-and-play scalable units with advanced liquid cooling and air cooling are empowering customers to deploy an infrastructure that supports increasingly complex AI workloads while maintaining exceptional efficiency. This reinforces our commitment to providing sustainable, cutting-edge solutions that accelerate AI innovation."

Intel Cuts Xeon 6 Prices up to 30% to Battle AMD in the Data Center

Intel has implemented substantial price cuts across its Xeon 6 "Granite Rapids" server processor lineup, marking a significant shift in its data center strategy. The reductions, quietly introduced and reflected in Intel's ARK database, come just four months after the processors' September launch. The most dramatic cut affects Intel's flagship 128-core Xeon 6980P, whose price drops by 30%, from $17,800 to $12,460. This aggressive pricing positions the processor below AMD's competing 128-core EPYC "Turin" 9755 in both absolute and per-core pricing, intensifying the rivalry between the two semiconductor giants. AMD's 128-core SKU is now pricier at $12,984, with higher-core-count SKUs reaching up to $14,813 for the 192-core EPYC 9965 based on Zen 5c cores. Intel is expected to release 288-core "Sierra Forest" Xeon SKUs this quarter, at which point we can expect an updated pricing structure and compare it to AMD's.
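For readers who want to sanity-check the comparison, the quoted list prices and core counts reduce to straightforward per-core arithmetic. The short Python sketch below uses only the figures named above; the per-core values are derived here, not quoted by Intel or AMD.

# Back-of-the-envelope check of the pricing quoted above.
parts = {
    "Intel Xeon 6980P": {"cores": 128, "price": 12460},  # after the 30% cut from $17,800
    "AMD EPYC 9755":    {"cores": 128, "price": 12984},
    "AMD EPYC 9965":    {"cores": 192, "price": 14813},  # Zen 5c based
}
print(f"30% off $17,800 -> ${17800 * 0.70:,.0f}")  # matches the new $12,460 list price
for name, p in parts.items():
    print(f"{name}: ${p['price']:,} list, ${p['price'] / p['cores']:.2f} per core")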

Additionally, Intel's price adjustments extend beyond the flagship model, with three of the five Granite Rapids processors receiving substantial reductions. The 96-core Xeon 6972P and 6952P models have been marked down by 13% and 20%, respectively. These cuts make Intel's offerings particularly attractive to cloud providers who prioritize core density and cost efficiency. However, Intel's competitive pricing comes with trade-offs: the higher power consumption of Intel's processors, exemplified by the 96-core Xeon 6972P's 500 W requirement, which exceeds AMD's comparable model by 100 W, could offset the initial savings through increased operational costs. Ultimately, most of the data center buildout will be won by whoever can ship the most CPU volume (read: wafer production capacity) while offering the best TCO/ROI balance, including power consumption and performance.
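How quickly a 100 W per-socket delta erodes a purchase-price advantage depends on electricity price, cooling overhead, and service life, none of which the comparison above specifies. The Python sketch below is illustrative only: the 100 W figure comes from the article, while the electricity price, PUE, and service life are assumptions.

# Rough estimate of the operating-cost impact of a 100 W per-socket power delta.
delta_watts    = 100         # Xeon 6972P (500 W) vs. the comparable AMD part (article figure)
hours_per_year = 24 * 365
price_per_kwh  = 0.12        # assumed USD per kWh
pue            = 1.4         # assumed data center power usage effectiveness
years          = 4           # assumed service life

extra_kwh  = delta_watts / 1000 * hours_per_year * pue * years
extra_cost = extra_kwh * price_per_kwh
print(f"~{extra_kwh:,.0f} kWh extra, roughly ${extra_cost:,.0f} per socket over {years} years")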

Numem to Showcase Next-Gen Memory Solutions at the Upcoming Chiplet Summit

Numem, an innovator focused on accelerating memory for AI workloads, will be at the upcoming Chiplet Summit to showcase its high-performance solutions. By accelerating the delivery of data via new memory subsystem designs, Numem solutions are re-architecting the hierarchy of AI memory tiers to eliminate the bottlenecks that negatively impact power and performance.

The rapid growth of AI workloads and AI processors/GPUs is exacerbating the memory bottleneck caused by the slowing performance improvements and scalability of SRAM and DRAM, presenting a major obstacle to maximizing system performance. To overcome this, there is a pressing need for intelligent memory solutions that offer higher power efficiency and greater bandwidth, coupled with a reevaluation of traditional memory architectures.

Qualcomm Pushes for Data Center CPUs, Hires Ex-Intel Chief Xeon Architect

Qualcomm is becoming serious about its server CPU ambitions. Today, we have learned that Sailesh Kottapalli, Intel's former chief architect for Xeon server processors, has joined Qualcomm as Senior Vice President after 28 years at Intel. Kottapalli, who announced his departure on LinkedIn Monday, previously led the development of multiple Xeon and Itanium processors at Intel. Qualcomm's data center team is currently working on reference platforms based on their Snapdragon technology. The company already sells AI accelerator chips under the Qualcomm Cloud AI brand, supported by major providers including AWS, HPE, and Lenovo.

This marks Qualcomm's second attempt at entering the server CPU market, following an unsuccessful Centriq effort that ended in 2018. The company is now leveraging technology from its $1.4 billion Nuvia acquisition in 2021, though this has led to ongoing legal disputes with Arm over licensing terms. While Qualcomm hasn't officially detailed Kottapalli's role, the company confirmed in legal filings its intentions to continue developing data center CPUs, as originally planned by Nuvia.

Transcend Unveils Enterprise SSD to Boost Data Center Performance and Security

Transcend Information Inc. (Transcend), a global leader in storage solutions, introduces the new ETD210T enterprise 2.5-inch SSD, designed to meet the read-intensive needs of business users. Featuring enterprise-grade TLC (eTLC) NAND flash and a SATA III 6 Gb/s interface, the ETD210T includes a built-in DRAM cache to deliver fast data transfer, exceptional Quality of Service (QoS), ultra-low latency, and superior endurance. Ideal for read-intensive and high-capacity storage workloads in cloud and data center applications, it provides a reliable and efficient storage solution for enterprise computing.

Designed for Enterprises, Optimized for Data Centers
The ETD210T supports various enterprise applications, including data centers, virtualized servers, and large-scale data processing. Equipped with high-endurance eTLC NAND flash, it delivers exceptional read and write performance. Its endurance rating of 1 DWPD (drive writes per day) meets the requirements of most enterprise-class applications, while its read and write speeds of up to 530 MB/s and 510 MB/s, respectively, address the need for highly efficient storage.
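Transcend does not quote a total-bytes-written figure, but a 1 DWPD rating converts into one once a capacity and warranty period are fixed. The sketch below assumes a hypothetical 960 GB SKU and a three-year warranty purely for illustration; neither figure comes from the announcement.

# Converting a 1 DWPD endurance rating into total bytes written.
capacity_tb    = 0.96   # assumed 960 GB SKU (hypothetical)
dwpd           = 1      # drive writes per day, from the article
warranty_years = 3      # assumed warranty period

tbw = capacity_tb * dwpd * 365 * warranty_years
print(f"~{tbw:,.0f} TB written over {warranty_years} years at {dwpd} DWPD")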

SK hynix Develops PS1012 U.2 High Capacity SSD for AI Data Centers

SK hynix Inc. announced today that it has completed development of its high-capacity SSD product, PS1012 U.2, designed for AI data centers. As the era of AI accelerates, the demand for high-performance enterprise SSDs (eSSD) is rapidly increasing, and QLC technology, which enables high capacity, has become the industry standard. In line with this trend, SK hynix has developed a 61 TB product using this technology and introduced it to the market.

SK hynix has been leading the SSD market for AI data centers with Solidigm, a subsidiary that commercialized QLC-based eSSDs for the first time in the world. With the development of the PS1012, the company expects to build a balanced SSD portfolio, thereby maximizing synergy between the two companies. Built on the latest 5th-generation (Gen 5) PCIe interface, the PS1012 doubles its bandwidth compared to 4th-generation-based products. As a result, the data transfer speed reaches 32 GT/s (gigatransfers per second), with a sequential read performance of 13 GB/s (gigabytes per second), twice that of previous-generation products.
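As a quick plausibility check, the quoted 13 GB/s sequential read sits comfortably under the theoretical ceiling of a PCIe Gen 5 link, assuming the usual four-lane (x4) connection for a U.2 drive (the lane count is an assumption; the article does not state it).

# Theoretical PCIe Gen 5 x4 link bandwidth vs. the quoted sequential read speed.
gt_per_s = 32          # per-lane transfer rate, from the article
lanes    = 4           # typical U.2 link width (assumption)
encoding = 128 / 130   # PCIe 128b/130b line-encoding overhead

link_gb_s = gt_per_s * lanes * encoding / 8
print(f"Theoretical x4 ceiling: ~{link_gb_s:.2f} GB/s (quoted sequential read: 13 GB/s)")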

IBM Develops Co-Packaged Optical Interconnect for Data Center

IBM Research has unveiled a significant advancement in optical interconnect technology for advanced data center communications. The breakthrough centers on a novel co-packaged optics (CPO) system featuring a sophisticated Polymer Optical Waveguide (PWG) design, marking a potential shift from traditional copper-based interconnects. The innovation introduces a Photonic Integrated Circuit (PIC) measuring 8 x 10 mm, mounted on a 17 x 17 mm substrate, capable of converting electrical signals to optical ones and vice versa. The system's waveguide, spanning 12 mm in width, efficiently channels light waves through precisely engineered pathways, with channels converging from 250 to 50 micrometers.

While current copper-based solutions like NVIDIA's NVLink offer impressive 1.8 TB/s bandwidth rates, and Intel's Optical Compute Interconnect achieves 4 Tbit/s bidirectional throughput, IBM's technology focuses on scalability and efficiency. The company plans to implement 12 carrier waves initially, with the potential to accommodate up to 32 waves by reducing spacing to 18 micrometers. Furthermore, the design allows for vertical stacking of up to four PWGs, potentially enabling 128 transmission channels. The technology has undergone rigorous JEDEC-standard testing, including 1,000 cycles of thermal stress between -40°C and 125°C, and extended exposure to extreme conditions including 85% humidity at 85°C. The components have also proven reliable during thousand-hour storage tests at various temperature extremes. The bandwidth of the CPO is currently unknown, but we expect it to surpass current solutions.
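The channel-count scaling IBM describes is simple multiplication of carrier waves per waveguide by the number of stacked waveguides, as this short sketch shows; all inputs are the figures quoted above.

# Channel scaling for the co-packaged optics design described above.
waves_initial = 12   # carrier waves planned initially
waves_max     = 32   # achievable by tightening spacing to 18 micrometers
stacked_pwgs  = 4    # maximum vertically stacked PWGs

print(f"Initial: {waves_initial * stacked_pwgs} channels with {stacked_pwgs} stacked PWGs")
print(f"Maximum: {waves_max * stacked_pwgs} channels")  # 128, matching the article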

JPR: Q3'24 PC Graphics AiB Shipments Decreased 14.5% Compared to the Last Quarter

According to a new research report from the analyst firm Jon Peddie Research, shipments in the global PC-based graphics add-in board market reached 8.1 million units in Q3'24, while desktop PC CPU shipments increased to 20.1 million units. Overall, AIBs will have a compound annual growth rate of -6.0% from 2024 to 2028 and reach an installed base of 119 million units at the end of the forecast period. Over the next five years, the penetration of AIBs in desktop PCs will be 83%.

As indicated in the report's chart, AMD's overall AIB market share decreased by 2.0% from last quarter, and NVIDIA's market share increased by 2.0%. These slight flips of market share in a down quarter don't mean much except to the winner. The overall market dynamics haven't changed.
  • The AIB overall attach rate in desktop PCs for the quarter decreased to 141%, down 26.9% from last quarter.
  • The desktop PC CPU market decreased 3.4% year over year and increased 42.2% quarter over quarter, which influenced the attach rate of AIBs.

Intel at CES 2025: Pioneering AI-Driven Innovation in Work and Mobility

AI is fundamentally changing technology, from the PC to the edge and cloud, and redefining the way we work, create, and collaborate. Next-generation technologies will empower workers through powerful new tools that enhance productivity and make intelligent, personalized computing more accessible than ever. At CES 2025, Intel will show how it's advancing what's possible on the AI PC, for on-the-go professionals and enthusiasts, by designing for the needs and experiences of tomorrow with breakthrough efficiency, no-compromise compatibility, and an unmatched software ecosystem. Intel will also showcase the latest automotive innovations as the industry embraces software-defined connected electric vehicles powered by AI.

MiTAC Unveils New AI/HPC-Optimized Servers With Advanced CPU and GPU Integration

MiTAC Computing Technology Corporation, an industry-leading server platform design manufacturer and a subsidiary of MiTAC Holdings Corporation (TSE:3706), is unveiling its new server lineup at SC24, booth #2543, in Atlanta, Georgia. MiTAC Computing's servers integrate the latest AMD EPYC 9005 Series CPUs, AMD Instinct MI325X GPU accelerators, Intel Xeon 6 processors, and professional GPUs to deliver enhanced performance optimized for HPC and AI workloads.

Leading Performance and Density for AI-Driven Data Center Workloads
MiTAC Computing's new servers, powered by AMD EPYC 9005 Series CPUs, are optimized for high-performance AI workloads. At SC24, MiTAC highlights two standout AI/HPC products: the 8U dual-socket MiTAC G8825Z5, featuring AMD Instinct MI325X GPU accelerators, up to 6 TB of DDR5 6000 memory, and eight hot-swap U.2 drive trays, ideal for large-scale AI/HPC setups; and the 2U dual-socket MiTAC TYAN TN85-B8261, designed for HPC and deep learning applications with support for up to four dual-slot GPUs, twenty-four DDR5 RDIMM slots, and eight hot-swap NVMe U.2 drives. For mainstream cloud applications, MiTAC offers the 1U single-socket MiTAC TYAN GC68C-B8056, with twenty-four DDR5 DIMM slots and twelve tool-less 2.5-inch NVMe U.2 hot-swap bays. Also featured is the 2U single-socket MiTAC TYAN TS70A-B8056, designed for high-IOPS NVMe storage, and the 2U 4-node single-socket MiTAC M2810Z5, supporting up to 3,072 GB of DDR5 6000 RDIMM memory and four easy-swap E1.S drives per node.

MSI Presents New AMD EPYC 9005 Series CPU-Based Server Platforms at SC24

MSI, a leading global provider of high-performance server solutions, is excited to unveil its latest AMD EPYC 9005 Series CPU-based server boards and platforms at SC24 (SuperComputing 2024), Booth #3655, from November 19-21. Built on the OCP Modular Hardware System (DC-MHS) architecture, these new platforms deliver high-density, AI-ready solutions, including multi-node, enterprise, CXL memory expansion, and GPU servers, designed to meet the intensive demands of modern data centers.

"As AI continues to reshape the landscape of data center infrastructure, MSI's servers, powered by the AMD EPYC 9005 Series processors, offer unmatched density, energy efficiency, and cost optimization—making them ideal for modern data centers," said Danny Hsu, General Manager of Enterprise Platform Solutions. "Our servers optimize thermal management and performance for virtualized and containerized environments, positioning MSI at the forefront of AI and cloud-based workloads."

Solidigm Launches D5-P5336 PCIe Data Center SSDs With 122 TB Capacity

Solidigm, a leading provider of innovative NAND flash memory solutions, announced today the introduction of the world's highest capacity PCIe solid-state drive (SSD): the 122 TB (terabyte) Solidigm D5-P5336 data center SSD. The D5-P5336 doubles the storage space of Solidigm's earlier 61.44 TB version of the drive and is the world's first SSD with unlimited Random Write endurance for five years—offering an ideal solution for AI and data-intensive workloads. Just how much storage is 122.88 TB? Roughly enough for 4K-quality copies of every movie theatrically released in the 1990s, 2.6 times over.
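Solidigm's movie comparison can pencil out under plausible, though unstated, assumptions; the release count and per-copy size in the sketch below are illustrative guesses, not figures from the company.

# One way the "2.6 times over" claim can pencil out (all inputs assumed except capacity).
movies_1990s = 1900    # assumed number of 1990s theatrical releases
gb_per_movie = 25      # assumed size of a 4K-quality copy, in GB
drive_tb     = 122.88  # drive capacity, from the article

library_tb = movies_1990s * gb_per_movie / 1000
print(f"Library: ~{library_tb:.1f} TB, fits ~{drive_tb / library_tb:.1f} times on one drive")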

Data storage power, thermal, and space constraints are intensifying as AI adoption increases. Power- and space-efficient, the new 122 TB D5-P5336 delivers industry-leading storage efficiency from the core data center to the edge. Data center operators can deploy the 122 TB D5-P5336 with confidence from Solidigm, the proven QLC (quad-level cell) density leader with more than 100 EB (exabytes) of QLC-based product shipped since 2018.

Innodisk Introduces E1.S Edge Server SSD for Edge Computing and AI Applications

Innodisk, a leading global AI solution provider, has introduced its new E1.S SSD, which is specifically designed to meet the demands of growing edge computing applications. The E1.S edge server SSD offers exceptional performance, reliability, and thermal management capabilities to address the critical needs of modern data-intensive environments and bridge the gap between traditional industrial SSDs and data center SSDs.

As AI and 5G technologies rapidly evolve, the demands on data processing and storage continue to grow. The E1.S SSD addresses the challenges of balancing heat dissipation and performance, which has become a major concern for today's SSDs. Traditional industrial and data center SSDs often struggle to meet the needs of edge applications. Innodisk's E1.S eliminates these bottlenecks with its Enterprise and Data Center Standard Form Factor (EDSFF) design and offers a superior alternative to U.2 and M.2 SSDs.

Micron Launches 6550 ION 60TB PCIe Gen5 NVMe SSD Series

Micron Technology, Inc., today announced it has begun qualification of the 6550 ION NVMe SSD with customers. The Micron 6550 ION is the world's fastest 60 TB data center SSD and the industry's first E3.S and PCIe Gen 5 60 TB SSD. It follows the success of the award-winning 6500 ION and is engineered to provide best-in-class performance, energy efficiency, endurance, security, and rack density for exascale data center deployments. The 6550 ION excels in high-capacity NVMe workloads such as networked AI data lakes, ingest, data preparation and checkpointing, file and object storage, public cloud storage, analytic databases, and content delivery.

"The Micron 6550 ION achieves a remarkable 12 GB/s while using just 20 watts of power, setting a new standard in data center performance and energy efficiency," said Alvaro Toledo, vice president and general manager of Micron's Data Center Storage Group. "Featuring a first-to-market 60 TB capacity in an E3.S form factor and up to 20% better energy efficiency than competitive drives, the Micron 6550 ION is a game-changer for high-capacity storage solutions to address the insatiable capacity and power demands of AI workloads."