News Posts matching #InfiniBand


NVIDIA Blackwell Delivers Breakthrough Performance in Latest MLPerf Training Results

NVIDIA is working with companies worldwide to build out AI factories—speeding the training and deployment of next-generation AI applications that use the latest advancements in training and inference. The NVIDIA Blackwell architecture is built to meet the heightened performance requirements of these new applications. In the latest round of MLPerf Training—the 12th since the benchmark's introduction in 2018—the NVIDIA AI platform delivered the highest performance at scale on every benchmark and powered every result submitted on the benchmark's toughest large language model (LLM)-focused test: Llama 3.1 405B pretraining.

The NVIDIA platform was the only one that submitted results on every MLPerf Training v5.0 benchmark—underscoring its exceptional performance and versatility across a wide array of AI workloads, spanning LLMs, recommendation systems, multimodal LLMs, object detection and graph neural networks. The at-scale submissions used two AI supercomputers powered by the NVIDIA Blackwell platform: Tyche, built using NVIDIA GB200 NVL72 rack-scale systems, and Nyx, based on NVIDIA DGX B200 systems. In addition, NVIDIA collaborated with CoreWeave and IBM to submit GB200 NVL72 results using a total of 2,496 Blackwell GPUs and 1,248 NVIDIA Grace CPUs.

Doudna Supercomputer Will Be Powered by NVIDIA's Next-gen Vera Rubin Platform

Ready for a front-row seat to the next scientific revolution? That's the idea behind Doudna—a groundbreaking supercomputer announced today at Lawrence Berkeley National Laboratory in Berkeley, California. The system represents a major national investment in advancing U.S. high-performance computing (HPC) leadership, ensuring U.S. researchers have access to cutting-edge tools to address global challenges. "It will advance scientific discovery from chemistry to physics to biology and all powered by—unleashing this power—of artificial intelligence," U.S. Energy Secretary Chris Wright said at today's event.

Also known as NERSC-10, Doudna is named for Nobel laureate and CRISPR pioneer Jennifer Doudna. The next-generation system announced today is designed not just for speed but for impact. Powered by Dell Technologies infrastructure with the NVIDIA Vera Rubin architecture, and set to launch in 2026, Doudna is tailored for real-time discovery across the U.S. Department of Energy's most urgent scientific missions. It's poised to catapult American researchers to the forefront of critical scientific breakthroughs, fostering innovation and securing the nation's competitive edge in key technological fields.

NVIDIA & Microsoft Accelerate Agentic AI Innovation - From Cloud to PC

Agentic AI is redefining scientific discovery and unlocking research breakthroughs and innovations across industries. Through deepened collaboration, NVIDIA and Microsoft are delivering advancements that accelerate agentic AI-powered applications from the cloud to the PC. At Microsoft Build, Microsoft unveiled Microsoft Discovery, an extensible platform built to empower researchers to transform the entire discovery process with agentic AI. This will help research and development departments across various industries accelerate the time to market for new products, as well as speed and expand the end-to-end discovery process for all scientists.

Microsoft Discovery will integrate the NVIDIA ALCHEMI NIM microservice, which optimizes AI inference for chemical simulations, to accelerate materials science research with property prediction and candidate recommendation. The platform will also integrate NVIDIA BioNeMo NIM microservices, tapping into pretrained AI workflows to speed up AI model development for drug discovery. These integrations equip researchers with accelerated performance for faster scientific discoveries. In testing, researchers at Microsoft used Microsoft Discovery to identify a novel coolant prototype with promising properties for immersion cooling in data centers in under 200 hours, rather than the months or years required by traditional methods.

Dell Technologies Unveils Next Generation Enterprise AI Solutions with NVIDIA

The world's top provider of AI-centric infrastructure, Dell Technologies, announces innovations across the Dell AI Factory with NVIDIA - all designed to help enterprises accelerate AI adoption and achieve faster time to value.

Why it matters
As enterprises make AI central to their strategy and progress from experimentation to implementation, their demand for accessible AI skills and technologies grows exponentially. Dell and NVIDIA continue the rapid pace of innovation with updates to the Dell AI Factory with NVIDIA, including robust AI infrastructure, solutions and services that streamline the path to full-scale implementation.

MiTAC Computing Launches the Latest Scale-out AI Server G4527G6, Built on NVIDIA MGX, at COMPUTEX 2025

MiTAC Computing Technology Corporation, a leading server platform designer and manufacturer and a subsidiary of MiTAC Holdings Corporation (TSE:3706), will present its latest innovations in AI infrastructure at COMPUTEX 2025. At booth M1110, MiTAC Computing will display its next-level AI server platform, the MiTAC G4527G6, fully optimized for the NVIDIA MGX architecture, which supports NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs and the NVIDIA H200 NVL platform to address the evolving demands of enterprise AI workloads.

Next-Gen AI with High-Performance Computing
With the increasing adoption of generative AI and accelerated computing, MiTAC Computing introduces the latest NVIDIA MGX-based server solution, the MiTAC G4527G6, designed to support complex AI and high-performance computing (HPC) workloads. Built on Intel Xeon 6 processors, the G4527G6 accommodates up to eight NVIDIA GPUs, 8 TB of DDR5-6400 memory, sixteen hot-swappable E1.s drives, and an NVIDIA BlueField-3 DPU for efficient north-south connectivity. Crucially, it integrates four next-generation NVIDIA ConnectX-8 SuperNICs, delivering up to 800 gigabits per second (Gb/s) of NVIDIA InfiniBand and Ethernet networking—significantly enhancing system performance for AI factories and cloud data center environments.

NVIDIA Discusses the Revenue-Generating Potential of AI Factories

AI is creating value for everyone—from researchers in drug discovery to quantitative analysts navigating financial market changes. The faster an AI system can produce tokens, the units of data that make up a model's outputs, the greater its impact. That's why AI factories are key, providing the most efficient path from "time to first token" to "time to first value." AI factories are redefining the economics of modern infrastructure. They produce intelligence by transforming data into valuable outputs—whether tokens, predictions, images, proteins or other forms—at massive scale.

They help enhance three key aspects of the AI journey—data ingestion, model training and high-volume inference. AI factories are being built to generate tokens faster and more accurately, using three critical technology stacks: AI models, accelerated computing infrastructure and enterprise-grade software. Read on to learn how AI factories are helping enterprises and organizations around the world convert the most valuable digital commodity—data—into revenue potential.

NVIDIA Wins Multiple COMPUTEX Best Choice Awards

NVIDIA today received multiple accolades at COMPUTEX's Best Choice Awards, in recognition of innovation across the company. The NVIDIA GeForce RTX 5090 GPU won the Gaming and Entertainment category award; the NVIDIA Quantum-X Photonics InfiniBand switch system won the Networking and Communication category award; NVIDIA DGX Spark won the Computer and System category award; and the NVIDIA GB200 NVL72 system and NVIDIA Cosmos world foundation model development platform won Golden Awards. The awards recognize the outstanding functionality, innovation and market promise of technologies in each category. Jensen Huang, founder and CEO of NVIDIA, will deliver a keynote at COMPUTEX on Monday, May 19, at 11 a.m. Taiwan time.

GB200 NVL72 and NVIDIA Cosmos Go Gold
NVIDIA GB200 NVL72 and NVIDIA Cosmos each won Golden Awards. The NVIDIA GB200 NVL72 system connects 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell GPUs in a rack-scale design. It delivers 1.4 exaflops of AI performance and 30 terabytes of fast memory, as well as 30x faster real-time trillion-parameter large language model inference with 25x energy efficiency compared with the NVIDIA H100 GPU. By design, the GB200 NVL72 accelerates the most compute-intensive AI and high-performance computing workloads, including AI training and data processing for engineering design and simulation. NVIDIA Cosmos accelerates physical AI development by enabling developers to build and deploy world foundation models with unprecedented speed and scale.

Oracle Cloud Infrastructure Bolstered by Thousands of NVIDIA Blackwell GPUs

Oracle has stood up and optimized its first wave of liquid-cooled NVIDIA GB200 NVL72 racks in its data centers. Thousands of NVIDIA Blackwell GPUs are now deployed and ready for customer use on NVIDIA DGX Cloud and Oracle Cloud Infrastructure (OCI) to develop and run next-generation reasoning models and AI agents. Oracle's state-of-the-art GB200 deployment includes high-speed NVIDIA Quantum-2 InfiniBand and NVIDIA Spectrum-X Ethernet networking to enable scalable, low-latency performance, as well as a full stack of software and database integrations from NVIDIA and OCI.

OCI, one of the world's largest and fastest-growing cloud service providers, is among the first to deploy NVIDIA GB200 NVL72 systems. The company has ambitious plans to build one of the world's largest Blackwell clusters. OCI Superclusters will scale beyond 100,000 NVIDIA Blackwell GPUs to meet the world's skyrocketing need for inference tokens and accelerated computing. The torrid pace of AI innovation continues as several companies including OpenAI have released new reasoning models in the past few weeks.

Thousands of NVIDIA Grace Blackwell GPUs Now Live at CoreWeave

CoreWeave today became one of the first cloud providers to bring NVIDIA GB200 NVL72 systems online for customers at scale, and AI frontier companies Cohere, IBM and Mistral AI are already using them to train and deploy next-generation AI models and applications. CoreWeave, the first cloud provider to make NVIDIA Grace Blackwell generally available, has already shown incredible results in MLPerf benchmarks with NVIDIA GB200 NVL72 - a powerful rack-scale accelerated computing platform designed for reasoning and AI agents. Now, CoreWeave customers are gaining access to thousands of NVIDIA Blackwell GPUs.

"We work closely with NVIDIA to quickly deliver to customers the latest and most powerful solutions for training AI models and serving inference," said Mike Intrator, CEO of CoreWeave. "With new Grace Blackwell rack-scale systems in hand, many of our customers will be the first to see the benefits and performance of AI innovators operating at scale."

Supermicro Adds Portfolio for Next Wave of AI with NVIDIA Blackwell Ultra Solutions

Supermicro, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is announcing new systems and rack solutions powered by the NVIDIA Blackwell Ultra platform, featuring the NVIDIA HGX B300 NVL16 and NVIDIA GB300 NVL72 platforms. Supermicro and NVIDIA's new AI solutions strengthen leadership in AI by delivering breakthrough performance for the most compute-intensive AI workloads, including AI reasoning, agentic AI, and video inference applications.

"At Supermicro, we are excited to continue our long-standing partnership with NVIDIA to bring the latest AI technology to market with the NVIDIA Blackwell Ultra Platforms," said Charles Liang, president and CEO, Supermicro. "Our Data Center Building Block Solutions approach has streamlined the development of new air and liquid-cooled systems, optimized to the thermals and internal topology of the NVIDIA HGX B300 NVL16 and GB300 NVL72. Our advanced liquid-cooling solution delivers exceptional thermal efficiency, operating with 40℃ warm water in our 8-node rack configuration, or 35℃ warm water in double-density 16-node rack configuration, leveraging our latest CDUs. This innovative solution reduces power consumption by up to 40% while conserving water resources, providing both environmental and operational cost benefits for enterprise data centers."

Supermicro Expands Enterprise AI Portfolio With Support for Upcoming NVIDIA RTX PRO 6000 Blackwell Server Edition and NVIDIA H200 NVL Platform

Supermicro, Inc., a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, today announced support for the new NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs on a range of workload-optimized GPU servers and workstations. Specifically optimized for the NVIDIA Blackwell generation of PCIe GPUs, the broad range of Supermicro servers will enable more enterprises to leverage accelerated computing for LLM-inference and fine-tuning, agentic AI, visualization, graphics & rendering, and virtualization. Many Supermicro GPU-optimized systems are NVIDIA Certified, guaranteeing compatibility and support for NVIDIA AI Enterprise to simplify the process of developing and deploying production AI.

"Supermicro leads the industry with its broad portfolio of application optimized GPU servers that can be deployed in a wide range of enterprise environments with very short lead times," said Charles Liang, president and CEO of Supermicro. "Our support for the NVIDIA RTX PRO 6000 Blackwell Server Edition GPU adds yet another dimension of performance and flexibility for customers looking to deploy the latest in accelerated computing capabilities from the data center to the intelligent edge. Supermicro's broad range of PCIe GPU-optimized products also support NVIDIA H200 NVL in 2-way and 4-way NVIDIA NVLink configurations to maximize inference performance for today's state-of-the-art AI models, as well as accelerating HPC workloads."

NVIDIA Commercializes Silicon Photonics with InfiniBand and Ethernet Switches

NVIDIA has developed co-packaged optics (CPO) technology with TSMC for its upcoming Quantum-X InfiniBand and Spectrum-X Ethernet switches, integrating silicon photonics directly onto switch ASICs. The approach reduces power consumption by 3.5x and decreases signal loss from 22 dB to 4 dB compared with traditional pluggable optics, addressing critical power and connectivity limitations in large-scale GPU deployments, especially in 10,000+ GPU systems. The architecture incorporates continuous-wave laser sources within the switch chassis, consuming 2 W per port, compared to the 10 W required by conventional externally modulated lasers in pluggable modules. This configuration, combined with integrated optical engines that use 7 W versus 20 W for traditional digital signal processors, reduces total optical interconnect power from approximately 72 MW to 21.6 MW in a 400,000-GPU data center scenario.
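A quick back-of-the-envelope check shows how those per-port numbers add up to the quoted data-center totals; the six-ports-per-GPU figure below is an inference from the totals, not something stated in the announcement:

```python
# Back-of-the-envelope check of the co-packaged optics (CPO) power figures
# quoted above. Per-port wattages are from the article; the six optical
# ports per GPU is an assumption inferred from the 72 MW / 21.6 MW totals.
PLUGGABLE_LASER_W = 10   # externally modulated laser in a pluggable module
PLUGGABLE_DSP_W = 20     # traditional digital signal processor
CPO_LASER_W = 2          # continuous-wave laser source in the switch chassis
CPO_ENGINE_W = 7         # integrated optical engine

GPUS = 400_000
PORTS_PER_GPU = 6        # assumed, to reconcile the quoted MW totals

pluggable_mw = (PLUGGABLE_LASER_W + PLUGGABLE_DSP_W) * PORTS_PER_GPU * GPUS / 1e6
cpo_mw = (CPO_LASER_W + CPO_ENGINE_W) * PORTS_PER_GPU * GPUS / 1e6

print(f"pluggable optics: {pluggable_mw:.1f} MW")      # 72.0 MW
print(f"co-packaged optics: {cpo_mw:.1f} MW")          # 21.6 MW
print(f"reduction: {pluggable_mw / cpo_mw:.2f}x")      # ~3.33x, quoted as ~3.5x
```

The per-port figures imply roughly a 3.3x power reduction, consistent with the ~3.5x headline claim.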

Specifications for the Quantum 3450-LD InfiniBand model include 144 ports running at 800 Gb/s, delivering 115 Tb/s of aggregate bandwidth using four Quantum-X CPO sockets in a liquid-cooled chassis. The Spectrum-X lineup features the SN6810 with 128 ports at 800 Gb/s (102.4 Tb/s) and the higher-density SN6800 providing 512 ports at 800 Gb/s for 409.6 Tb/s total throughput. The Quantum-X InfiniBand implementation uses a monolithic switch ASIC with six CPO modules supporting 36 ports at 800 Gb/s, while the Spectrum-X Ethernet design employs a multi-chip approach with a central packet processing engine surrounded by eight SerDes chiplets. Both architectures utilize 224 Gb/s signaling per lane with four lanes per port. NVIDIA's Quantum-X switches are scheduled for availability in H2 2025, with Spectrum-X models following in H2 2026.
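The aggregate figures follow directly from port count times port rate, and the per-lane signaling leaves room for encoding overhead; a short sanity check:

```python
# Sanity-check the aggregate-bandwidth figures quoted for each switch model.
switches = {
    "Quantum 3450-LD": (144, 800),    # (ports, Gb/s per port)
    "Spectrum-X SN6810": (128, 800),
    "Spectrum-X SN6800": (512, 800),
}
for name, (ports, rate) in switches.items():
    print(f"{name}: {ports * rate / 1000:.1f} Tb/s aggregate")
# -> 115.2 (quoted as 115), 102.4 and 409.6 Tb/s

# Per-port lane math: four lanes at 224 Gb/s of raw signaling carry one
# 800 Gb/s port, leaving 4 x 224 - 800 = 96 Gb/s for encoding/FEC overhead.
print(f"raw signaling per port: {4 * 224} Gb/s")
```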

ASRock Rack Debuts High-Performance AI Server Solutions at GTC 2025

ASRock Rack Inc., the leading innovative server company, will showcase its high-performance AI server lineup at GTC 2025 in San Jose, California, from March 18-21. The featured solutions include next-gen AI servers based on the NVIDIA Blackwell Ultra platform and RTX PRO 6000 Blackwell Server Edition, and the debut of the liquid-cooled 4U8X-GNR2.

Agentic AI (autonomous systems that perceive, decide, and act) is gaining traction across industries, from healthcare to robotics. The growing demand for real-time interactions and adaptive learning is driving the need for accelerated computing servers with exceptional computational power. At GTC today, ASRock Rack showcases servers based on NVIDIA Blackwell Ultra, the latest addition to the NVIDIA Blackwell accelerated computing platform, offering optimized compute and increased memory, leading the way for a new era of AI reasoning, agentic AI, and physical AI applications.

CoreWeave Launches Debut Wave of NVIDIA GB200 NVL72-based Cloud Instances

AI reasoning models and agents are set to transform industries, but delivering their full potential at scale requires massive compute and optimized software. The "reasoning" process involves multiple models, generating many additional tokens, and demands infrastructure with a combination of high-speed communication, memory and compute to ensure real-time, high-quality results. To meet this demand, CoreWeave has launched NVIDIA GB200 NVL72-based instances, becoming the first cloud service provider to make the NVIDIA Blackwell platform generally available. With rack-scale NVIDIA NVLink across 72 NVIDIA Blackwell GPUs and 36 NVIDIA Grace CPUs, scaling to up to 110,000 GPUs with NVIDIA Quantum-2 InfiniBand networking, these instances provide the scale and performance needed to build and deploy the next generation of AI reasoning models and agents.

NVIDIA GB200 NVL72 on CoreWeave
NVIDIA GB200 NVL72 is a liquid-cooled, rack-scale solution with a 72-GPU NVLink domain, which enables all 72 GPUs to act as a single massive GPU. NVIDIA Blackwell features many technological breakthroughs that accelerate inference token generation, boosting performance while reducing service costs. For example, fifth-generation NVLink enables 130 TB/s of GPU bandwidth in one 72-GPU NVLink domain, and the second-generation Transformer Engine enables FP4 for faster AI performance while maintaining high accuracy. CoreWeave's portfolio of managed cloud services is purpose-built for Blackwell. CoreWeave Kubernetes Service optimizes workload orchestration by exposing NVLink domain IDs, ensuring efficient scheduling within the same rack. Slurm on Kubernetes (SUNK) supports the topology block plug-in, enabling intelligent workload distribution across GB200 NVL72 racks. In addition, CoreWeave's Observability Platform provides real-time insights into NVLink performance, GPU utilization and temperatures.
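A minimal sketch of what NVLink-domain-aware placement amounts to in practice, using hypothetical node records and field names (the article does not describe CoreWeave's actual scheduler API), plus the per-GPU bandwidth implied by the 130 TB/s domain figure:

```python
# Minimal sketch of NVLink-domain-aware scheduling: group candidate nodes by
# NVLink domain ID and place a job only on nodes that share one domain, so
# all of its GPUs land inside the same GB200 NVL72 rack. Node records and
# field names here are hypothetical; the article does not describe
# CoreWeave's actual scheduler API.
from collections import defaultdict

nodes = [
    {"name": "node-a", "nvlink_domain": "rack-01", "free_gpus": 4},
    {"name": "node-b", "nvlink_domain": "rack-01", "free_gpus": 4},
    {"name": "node-c", "nvlink_domain": "rack-02", "free_gpus": 8},
]

def place(job_gpus: int):
    """Return the first NVLink domain with enough free GPUs for the job."""
    by_domain = defaultdict(list)
    for node in nodes:
        by_domain[node["nvlink_domain"]].append(node)
    for domain, members in by_domain.items():
        if sum(n["free_gpus"] for n in members) >= job_gpus:
            return domain, [n["name"] for n in members]
    return None, []

print(place(8))   # ('rack-01', ['node-a', 'node-b']), one NVLink domain

# Scale context: 130 TB/s across a 72-GPU domain is ~1.8 TB/s per GPU.
print(f"per-GPU NVLink bandwidth: {130 / 72:.2f} TB/s")
```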

NVIDIA's Next-Gen "Rubin" AI GPU Development 6 Months Ahead of Schedule: Report

The "Rubin" architecture succeeds NVIDIA's current "Blackwell," which powers the company's AI GPUs, as well as the upcoming GeForce RTX 50-series gaming GPUs. NVIDIA will likely not build gaming GPUs with "Rubin," just like it didn't with "Hopper," and for the most part, "Volta." NVIDIA's AI GPU product roadmap put out at SC'24 puts "Blackwell" firmly in charge of the company's AI GPU product stack throughout 2025, with "Rubin" only succeeding it in the following year, for a two-year run in the market, being capped off with a "Rubin Ultra" larger GPU slated for 2027. A new report by United Daily News (UDN), a Taiwan-based publication, says that the development of "Rubin" is running 6 months ahead of schedule.

Being six months ahead of schedule doesn't necessarily mean the product will launch sooner. It gives NVIDIA headroom to have "Rubin" more thoroughly evaluated by the industry and to make last-minute changes to the product if needed, or even to advance the launch if it wants to. The first AI GPU powered by "Rubin" will feature 8-high HBM4 memory stacks. The company will also introduce the "Vera" CPU, the long-awaited successor to "Grace," along with the X1600 InfiniBand/Ethernet network processor. According to NVIDIA's SC'24 roadmap, these three would've seen a 2026 launch. Then in 2027, the company would follow up with an even larger AI GPU based on the same "Rubin" architecture, codenamed "Rubin Ultra," featuring 12-high HBM4 stacks. NVIDIA's current B200 "Blackwell" is a tile-based GPU, with two dies that have full cache coherence. "Rubin" is rumored to feature four tiles.

Marvell Unveils Industry's First 3nm 1.6 Tbps PAM4 Interconnect Platform to Scale Accelerated Infrastructure

Marvell Technology, Inc., a leader in data infrastructure semiconductor solutions, today introduced Marvell Ara, the industry's first 3 nm 1.6 Tbps PAM4 interconnect platform featuring 200 Gbps electrical and optical interfaces. Building on the success of the Nova 2 DSP, the industry's first 5 nm 1.6 Tbps PAM4 DSP with 200 Gbps electrical and optical interfaces, Ara leverages the comprehensive Marvell 3 nm platform with industry-leading 200 Gbps SerDes and integrated optical modulator drivers, to reduce 1.6 Tbps optical module power by over 20%. The energy efficiency improvement reduces operational costs and enables new AI server and networking architectures to address the need for higher bandwidth and performance for AI workloads, within the significant power constraints of the data center.

Ara, the industry's first 3 nm PAM4 optical DSP, builds on six generations of Marvell leadership in PAM4 optical DSP technology. It integrates eight 200 Gbps electrical lanes to the host and eight 200 Gbps optical lanes, enabling 1.6 Tbps in a compact, standardized module form factor. Leveraging 3 nm technology and laser driver integration, Ara reduces module design complexity, power consumption and cost, setting a new benchmark for next-generation AI and cloud infrastructure.
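The module throughput is simply lanes times lane rate; a tiny check of the figures above:

```python
# Lane arithmetic behind the Ara figures above: eight 200 Gbps electrical
# lanes paired with eight 200 Gbps optical lanes yield a 1.6 Tbps module.
lanes = 8
lane_rate_gbps = 200
print(f"host-side throughput: {lanes * lane_rate_gbps / 1000} Tbps")   # 1.6
print(f"line-side throughput: {lanes * lane_rate_gbps / 1000} Tbps")   # 1.6
# The >20% module-power reduction is quoted relative to prior 5 nm designs;
# no absolute wattage is given in the announcement, so none is assumed here.
```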

NVIDIA and Microsoft Showcase Blackwell Preview, Omniverse Industrial AI and RTX AI PCs at Microsoft Ignite

NVIDIA and Microsoft today unveiled product integrations designed to advance full-stack NVIDIA AI development on Microsoft platforms and applications. At Microsoft Ignite, Microsoft announced the launch of the first cloud private preview of the Azure ND GB200 v6 VM series, based on the NVIDIA Blackwell platform. The Azure ND GB200 v6 is a new AI-optimized virtual machine (VM) series that combines the NVIDIA GB200 NVL72 rack design with NVIDIA Quantum InfiniBand networking.

In addition, Microsoft revealed that Azure Container Apps now supports NVIDIA GPUs, enabling simplified and scalable AI deployment. Plus, the NVIDIA AI platform on Azure includes new reference workflows for industrial AI and an NVIDIA Omniverse Blueprint for creating immersive, AI-powered visuals. At Ignite, NVIDIA also announced multimodal small language models (SLMs) for RTX AI PCs and workstations, enhancing digital human interactions and virtual assistants with greater realism.

TOP500: El Capitan Achieves Top Spot, Frontier and Aurora Follow Behind

The 64th edition of the TOP500 reveals that El Capitan has achieved the top spot and is officially the third system to reach exascale computing, after Frontier and Aurora. Those two systems have since moved down to the No. 2 and No. 3 spots, respectively. Additionally, new systems have found their way onto the Top 10.

The new El Capitan system at the Lawrence Livermore National Laboratory in California, U.S.A., has debuted as the most powerful system on the list with an HPL score of 1.742 EFlop/s. It has 11,039,616 combined CPU and GPU cores and is based on AMD 4th generation EPYC processors with 24 cores at 1.8 GHz and AMD Instinct MI300A accelerators. El Capitan relies on a Cray Slingshot 11 network for data transfer and achieves an energy efficiency of 58.89 GigaFLOPS/watt. This power efficiency rating helped El Capitan achieve No. 18 on the GREEN500 list as well.
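Taken together, the HPL score and the GREEN500 efficiency imply the machine's approximate power draw during the benchmark run; a rough estimate, assuming the efficiency figure applies to the full HPL run:

```python
# Estimate El Capitan's approximate power draw during the HPL run from the
# quoted score and GREEN500 energy-efficiency figures.
hpl_eflops = 1.742            # HPL score, EFlop/s
eff_gflops_per_w = 58.89      # GREEN500 efficiency, GigaFLOPS/watt

watts = hpl_eflops * 1e9 / eff_gflops_per_w   # EFlop/s -> GFlop/s, / (GFlop/s per W)
print(f"implied HPL power draw: ~{watts / 1e6:.1f} MW")   # ~29.6 MW
```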

NVIDIA B200 "Blackwell" Records 2.2x Performance Improvement Over its "Hopper" Predecessor

We know that NVIDIA's latest "Blackwell" GPUs are fast, but how much faster are they than the previous-generation "Hopper"? Thanks to the latest MLPerf Training v4.1 results, NVIDIA's HGX B200 Blackwell platform has demonstrated massive performance gains, measuring up to a 2.2x improvement per GPU compared to the HGX H200 Hopper platform. The latest results, verified by MLCommons, reveal impressive achievements in large language model (LLM) training. The Blackwell architecture, featuring HBM3e high-bandwidth memory and fifth-generation NVLink interconnect technology, achieved double the performance per GPU for GPT-3 pre-training and a 2.2x boost for Llama 2 70B fine-tuning compared to the previous Hopper generation. Each benchmark system incorporated eight Blackwell GPUs operating at a 1,000 W TDP, connected via NVLink Switch for scale-up.

The network infrastructure utilized NVIDIA ConnectX-7 SuperNICs and Quantum-2 InfiniBand switches, enabling high-speed node-to-node communication for distributed training workloads. While previous Hopper-based systems required 256 GPUs to optimize performance for the GPT-3 175B benchmark, Blackwell accomplished the same task with just 64 GPUs, leveraging its larger HBM3e memory capacity and bandwidth. One thing to look out for is the upcoming GB200 NVL72 system, which promises even more significant gains beyond the 2.2x figure. It features expanded NVLink domains, higher memory bandwidth, and tight integration with NVIDIA Grace CPUs, complemented by ConnectX-8 SuperNIC and Quantum-X800 switch technologies. With faster switching and improved data movement through Grace-Blackwell integration, we could see further software optimization from NVIDIA to push the performance envelope.
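For reference, the headline numbers from this round, gathered in one short recap:

```python
# The MLPerf Training v4.1 deltas quoted above, gathered in one place.
# Figures are from the article; "per GPU" is relative to HGX H200 Hopper.
per_gpu_gain = {
    "GPT-3 175B pre-training": 2.0,
    "Llama 2 70B fine-tuning": 2.2,
}
for workload, gain in per_gpu_gain.items():
    print(f"{workload}: {gain}x per GPU")

# Footprint: the GPT-3 175B benchmark that needed 256 Hopper GPUs ran on
# 64 Blackwell GPUs, a 4x reduction enabled by larger per-GPU HBM3e
# capacity and bandwidth.
hopper_gpus, blackwell_gpus = 256, 64
print(f"GPU-count reduction: {hopper_gpus // blackwell_gpus}x "
      f"({hopper_gpus} -> {blackwell_gpus})")
```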

NVIDIA Ethernet Networking Accelerates World's Largest AI Supercomputer, Built by xAI

NVIDIA today announced that xAI's Colossus supercomputer cluster in Memphis, Tennessee, comprising 100,000 NVIDIA Hopper GPUs, achieved its massive scale by using the NVIDIA Spectrum-X Ethernet networking platform for its Remote Direct Memory Access (RDMA) network. Spectrum-X is designed to deliver superior performance to multi-tenant, hyperscale AI factories using standards-based Ethernet.

Colossus, the world's largest AI supercomputer, is being used to train xAI's Grok family of large language models, with chatbots offered as a feature for X Premium subscribers. xAI is in the process of doubling the size of Colossus to a combined total of 200,000 NVIDIA Hopper GPUs.

Supermicro Adds New Petascale JBOF All-Flash Storage Solution Integrating NVIDIA BlueField-3 DPU for AI Data Pipeline Acceleration

Supermicro, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is launching a new optimized storage system for high-performance AI training, inference, and HPC workloads. This JBOF (Just a Bunch of Flash) system utilizes up to four NVIDIA BlueField-3 data processing units (DPUs) in a 2U form factor to run software-defined storage workloads. Each BlueField-3 DPU features 400 Gb Ethernet or InfiniBand networking and hardware acceleration for compute-intensive storage and networking workloads such as encryption, compression and erasure coding, as well as AI storage expansion. The state-of-the-art, dual-port JBOF architecture enables active-active clustering, ensuring high availability for scale-up mission-critical storage applications as well as scale-out storage such as object storage and parallel file systems.

"Supermicro's new high performance JBOF Storage System is designed using our Building Block approach which enables support for either E3.S or U.2 form-factor SSDs and the latest PCIe Gen 5 connectivity for the SSDs and the DPU networking and storage platform," said Charles Liang, president and CEO of Supermicro. "Supermicro's system design supports 24 or 36 SSD's enabling up to 1.105PB of raw capacity using 30.71 TB SSDs. Our balanced network and storage I/O design can saturate the full 400 Gb/s BlueField-3 line-rate realizing more than 250 GB/s bandwidth of the Gen 5 SSDs."

Marvell Demonstrates Industry-Leading 3 nm PCIe Gen 7 Connectivity

Marvell Technology, Inc., a leader in data infrastructure semiconductor solutions, demonstrates its industry-leading 3 nm PCIe Gen 7 connectivity at the OCP Global Summit, October 15-17, at the San Jose Convention Center. PCIe Gen 7 doubles data transfer speeds, enabling continued scaling of compute fabrics inside accelerated server platforms, general-purpose servers, CXL systems and disaggregated infrastructure. Building on its widely deployed PAM4 technology and utilizing its industry-leading accelerated infrastructure silicon platform, Marvell has developed the industry's most comprehensive interconnect portfolio addressing all high-bandwidth optical and copper connections in AI data centers. This innovative portfolio empowers cloud data center operators to optimize their infrastructure for their specific architectures and workloads to meet the exponential demands of AI.

Marvell pioneered PAM4 technology over a decade ago and leads the industry in PAM4 interconnect shipments. Today, most of the optical interconnects used in data center backend and frontend networks are based on PAM4 technology. Compared with PCIe Gen 5, which used NRZ modulation, PCIe Gen 6 and Gen 7 require PAM4 modulation. With its recent PCIe Gen 6 retimer announcement and this PCIe Gen 7 demonstration, Marvell extends its industry-leading PAM4-based optical and copper interconnect portfolio beyond Ethernet and InfiniBand into copper and optical PCIe, CXL and proprietary compute fabric links.
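The NRZ-to-PAM4 distinction is the key technical point here: PAM4 signals four voltage levels per symbol, carrying 2 bits where NRZ carries 1, which doubles throughput at an unchanged symbol rate. A small illustration using the standard PCIe per-lane rates:

```python
# Why the NRZ-to-PAM4 transition matters: PAM4 encodes 2 bits per symbol
# (four voltage levels) versus NRZ's 1 bit (two levels), doubling the data
# rate at the same symbol (baud) rate. Standard PCIe per-lane rates below.
import math

def bits_per_symbol(levels: int) -> int:
    return int(math.log2(levels))

print(f"NRZ  @ 32 Gbaud: {32 * bits_per_symbol(2)} Gb/s per lane (PCIe Gen 5)")
print(f"PAM4 @ 32 Gbaud: {32 * bits_per_symbol(4)} Gb/s per lane (PCIe Gen 6)")
print(f"PAM4 @ 64 Gbaud: {64 * bits_per_symbol(4)} Gb/s per lane (PCIe Gen 7)")
```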

Supermicro Adds New Max-Performance Intel-Based X14 Servers

Supermicro, Inc., a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, today adds new maximum-performance GPU, multi-node, and rackmount systems to the X14 portfolio, based on the Intel Xeon 6900 Series Processors with P-Cores (formerly codenamed Granite Rapids-AP). The new industry-leading selection of workload-optimized servers addresses the needs of modern data centers, enterprises, and service providers. Joining the efficiency-optimized X14 servers leveraging the Xeon 6700 Series Processors with E-cores launched in June 2024, today's additions bring maximum compute density and power to the Supermicro X14 lineup, creating the industry's broadest range of optimized servers supporting workloads from demanding AI, HPC, media, and virtualization to energy-efficient edge, scale-out cloud-native, and microservices applications.

"Supermicro X14 systems have been completely re-engineered to support the latest technologies including next-generation CPUs, GPUs, highest bandwidth and lowest latency with MRDIMMs, PCIe 5.0, and EDSFF E1.S and E3.S storage," said Charles Liang, president and CEO of Supermicro. "Not only can we now offer more than 15 families, but we can also use these designs to create customized solutions with complete rack integration services and our in-house developed liquid cooling solutions."

Oracle Offers First Zettascale Cloud Computing Cluster

Oracle today announced the first zettascale cloud computing clusters accelerated by the NVIDIA Blackwell platform. Oracle Cloud Infrastructure (OCI) is now taking orders for the largest AI supercomputer in the cloud—available with up to 131,072 NVIDIA Blackwell GPUs.

"We have one of the broadest AI infrastructure offerings and are supporting customers that are running some of the most demanding AI workloads in the cloud," said Mahesh Thiagarajan, executive vice president, Oracle Cloud Infrastructure. "With Oracle's distributed cloud, customers have the flexibility to deploy cloud and AI services wherever they choose while preserving the highest levels of data and AI sovereignty."

Marvell Expands Connectivity Portfolio With New PCIe Gen 6 Retimer Product Line

Marvell Technology, a leader in data infrastructure semiconductor solutions, today expanded its connectivity portfolio with the launch of the new Alaska P PCIe retimer product line built to scale data center compute fabrics inside accelerated servers, general-purpose servers, CXL systems and disaggregated infrastructure. The first two products, 8- and 16-lane PCIe Gen 6 retimers, connect AI accelerators, GPUs, CPUs and other components inside server systems.

Artificial intelligence (AI) and machine learning (ML) applications are driving data flows and connections inside server systems at significantly higher bandwidth, necessitating PCIe retimers to meet the required connection distances at faster speeds. PCIe is the industry standard for inside-server-system connections between AI accelerators, GPUs, CPUs and other server components. AI models are doubling their computation requirements every six months and are now the primary driver of the PCIe roadmap, with PCIe Gen 6 becoming a requirement.