News Posts matching #B200


Giga Computing Unveils Liquid and Air-Cooled GIGABYTE AI Servers Accelerated by NVIDIA HGX B200 Platform

Giga Computing, an industry innovator and leader in enterprise hardware and advanced cooling solutions, today announced four new GIGABYTE servers built on the NVIDIA HGX B200 platform. This expansion of the GIGABYTE GPU server portfolio brings greater thermal design flexibility and support for the latest processors, including the new AI-optimized Intel Xeon 6 CPUs, giving customers more options as they tailor their systems for workloads and efficiency.

NVIDIA HGX B200 propels the data center into a new era of accelerated computing and generative AI. Built on NVIDIA Blackwell GPUs, the HGX B200 platform delivers up to 15X faster real-time inference on trillion-parameter models.

Europe Builds AI Infrastructure With NVIDIA to Fuel Region's Next Industrial Transformation

NVIDIA today announced it is working with European nations, and technology and industry leaders, to build NVIDIA Blackwell AI infrastructure that will strengthen digital sovereignty, support economic growth and position the continent as a leader in the AI industrial revolution. France, Italy, Spain and the U.K. are among the nations building domestic AI infrastructure with an ecosystem of technology and cloud providers, including Domyn, Mistral AI, Nebius and Nscale, and telecommunications providers, including Orange, Swisscom, Telefónica and Telenor.

These deployments will deliver more than 3,000 exaflops of NVIDIA Blackwell compute for sovereign AI, enabling European enterprises, startups and public sector organizations to securely develop, train and deploy agentic and physical AI applications. NVIDIA is establishing and expanding AI technology centers in Germany, Sweden, Italy, Spain, the U.K. and Finland. These centers build on NVIDIA's history of collaborating with academic institutions and industry through the NVIDIA AI Technology Center program and NVIDIA Deep Learning Institute to develop the AI workforce and advance scientific discovery throughout the region.

NVIDIA Blackwell Delivers Breakthrough Performance in Latest MLPerf Training Results

NVIDIA is working with companies worldwide to build out AI factories—speeding the training and deployment of next-generation AI applications that use the latest advancements in training and inference. The NVIDIA Blackwell architecture is built to meet the heightened performance requirements of these new applications. In the latest round of MLPerf Training—the 12th since the benchmark's introduction in 2018—the NVIDIA AI platform delivered the highest performance at scale on every benchmark and powered every result submitted on the benchmark's toughest large language model (LLM)-focused test: Llama 3.1 405B pretraining.

The NVIDIA platform was the only one that submitted results on every MLPerf Training v5.0 benchmark—underscoring its exceptional performance and versatility across a wide array of AI workloads, spanning LLMs, recommendation systems, multimodal LLMs, object detection and graph neural networks. The at-scale submissions used two AI supercomputers powered by the NVIDIA Blackwell platform: Tyche, built using NVIDIA GB200 NVL72 rack-scale systems, and Nyx, based on NVIDIA DGX B200 systems. In addition, NVIDIA collaborated with CoreWeave and IBM to submit GB200 NVL72 results using a total of 2,496 Blackwell GPUs and 1,248 NVIDIA Grace CPUs.

NVIDIA Plans 800 V Power Infrastructure to Drive 1 MW AI Racks

AI infrastructure buildout is pushing data center designs beyond the limits of conventional power delivery. Traditional in-rack 54 V DC distribution was designed for racks drawing tens of kilowatts and cannot scale to the megawatt requirements of next-generation AI facilities. At GTC and Computex 2025, NVIDIA introduced a comprehensive solution: an end-to-end 800-volt high-voltage DC (HVDC) infrastructure that will support 1-megawatt AI racks and beyond, with deployments planned to begin in 2027. Cooling and cabling already place immense strain on rack designs. NVIDIA's current GB200 and GB300 NVL72 systems can draw up to 132 kW per rack, significantly more than the 50 to 80 kW that most data halls were built to handle. If rack power rises to the 700 kW to 1 MW range under 54 V distribution, roughly 64 U of chassis space, almost the entire rack, would be devoted solely to copper busbars, amounting to about 200 kg of copper per rack. For a 1 GW installation, that adds up to nearly half a million metric tons of copper.
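A quick back-of-the-envelope calculation (a sketch, not NVIDIA's published engineering figures) shows why 54 V distribution breaks down at megawatt scale: busbar current grows as I = P/V, and conduction loss grows as I²R, so lower voltage means disproportionately more copper.

```python
# Illustrative arithmetic: current and conduction loss for a 1 MW rack
# fed at 54 V versus 800 V. Numbers are physics, not NVIDIA spec sheets.

def bus_current(power_w: float, voltage_v: float) -> float:
    """Current the busbar must carry: I = P / V."""
    return power_w / voltage_v

rack_power = 1_000_000  # 1 MW target rack
i_54v = bus_current(rack_power, 54)    # ~18,519 A
i_800v = bus_current(rack_power, 800)  # 1,250 A

# Conduction loss is I^2 * R, so at equal conductor resistance the
# 54 V bus dissipates (800/54)^2, roughly 219x, more heat in the copper.
loss_ratio = (i_54v / i_800v) ** 2

print(f"54 V bus current:  {i_54v:,.0f} A")
print(f"800 V bus current: {i_800v:,.0f} A")
print(f"I^2R loss ratio at equal resistance: {loss_ratio:.0f}x")
```

Carrying nearly 19,000 A without excessive loss is what forces the enormous busbar cross-sections (and copper mass) the article describes.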

NVIDIA's 800 V HVDC architecture eliminates multiple AC-to-DC and DC-to-DC conversion stages by consolidating them into a single grid-edge rectifier. From a 13.8 kV AC feed, power is converted directly to 800 V DC and then routed through row-level busways to each rack. Compact DC-DC modules in the rack step down the voltage for the GPUs. Fewer power supply units mean fewer fans, lower heat output, and a simpler electrical footprint. Beyond scalability, 800 V HVDC offers up to 5 percent gains in end-to-end efficiency and a 45 percent reduction in copper usage, lowering both electricity costs and infrastructure buildout costs. To drive industry adoption, NVIDIA has partnered with leaders across the power ecosystem. Silicon and power-electronics specialists such as Infineon, MPS, Navitas, ROHM, STMicroelectronics, and Texas Instruments are contributing components. System integrators, including Delta, Flex Power, Lead Wealth, LiteOn, and Megmeet, are developing power shelves. Data-center infrastructure companies Eaton, Schneider Electric, and Vertiv are standardizing protective devices at every boundary from the power room to the rack. The accompanying image compares the traditional rack architecture (top) with the newly proposed variants (middle and bottom), and photos from HardwareLuxx show how the hardware looks in reality.
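Because each conversion stage loses a few percent, end-to-end efficiency is the product of the per-stage efficiencies, so removing stages compounds. The per-stage values below are assumed for illustration only, not published NVIDIA numbers; they merely show how a shorter chain can yield a gain of the magnitude quoted above.

```python
# Illustrative only: chained conversion efficiency. Stage efficiencies
# are assumptions chosen to show the compounding effect, not real data.
from math import prod

traditional = [0.98, 0.975, 0.98, 0.975]  # several AC-DC / DC-DC hops
hvdc = [0.99, 0.985, 0.98]                # grid-edge rectifier + in-rack DC-DC

eff_trad = prod(traditional)
eff_hvdc = prod(hvdc)

print(f"traditional chain: {eff_trad:.1%}")
print(f"800 V HVDC chain:  {eff_hvdc:.1%}")
print(f"end-to-end gain:   {eff_hvdc - eff_trad:.1%}")
```

With these assumed values the shorter chain comes out roughly 4 percentage points ahead, in the neighborhood of the up-to-5-percent figure NVIDIA cites.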

Supermicro Unveils DLC-2, the Next Generation Direct Liquid-Cooling Solutions for Data Centers

Super Micro Computer, Inc. (SMCI), a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, is announcing several improvements to its Direct Liquid Cooling (DLC) solution that incorporate new technologies for cooling various server components, accommodate warmer liquid inflow temperatures, and introduce innovative mechanical designs that enhance AI performance per watt. The Supermicro DLC-2 solution reduces data center power consumption by up to 40% compared to air-cooled installations. These advanced technologies enable faster deployment and reduced time-to-online for cutting-edge liquid-cooled AI infrastructure. Additionally, the total cost of ownership decreases by up to 20%. The comprehensive cold plate coverage of components allows for lower fan speeds and fewer required fans, significantly reducing data center noise levels to approximately 50 dB.

"With the expected demand for liquid-cooled data centers rising to 30% of all installations, we realized that current technologies were insufficient to cool these new AI-optimized systems," said Charles Liang, president and CEO of Supermicro. "Supermicro continues to remain committed to innovation, green computing, and improving the future of AI, by significantly reducing data center power and water consumption, noise, and space. Our latest liquid-cooling innovation, DLC-2, saves data center electricity costs by up to 40%."

Report: Customers Show Little Interest in AMD Instinct MI325X Accelerators

AMD's Instinct MI325X accelerator has struggled to gain traction with large customers, according to extensive data from SemiAnalysis. Launched in Q2 2025, the MI325X arrived roughly nine months after NVIDIA's H200 and concurrently with NVIDIA's "Blackwell" mass-production roll-out. That timing proved unfavorable, as many buyers opted instead for Blackwell's superior cost-per-performance ratio. Early interest from Microsoft in 2024 failed to translate into repeat orders. After the initial test purchases, Microsoft did not place any further commitments. In response, AMD reduced its margin expectations in an effort to attract other major clients. Oracle and a handful of additional hyperscalers have since expressed renewed interest, but these purchases remain modest compared with NVIDIA's volume.

A fundamental limitation of the MI325X is its eight-GPU scale-up capacity. By contrast, NVIDIA's rack-scale GB200 NVL72 supports up to 72 GPUs in a single cluster. For large-scale AI inference and frontier-level reasoning workloads, that difference is decisive. AMD positioned the MI325X against NVIDIA's air-cooled HGX B200 NVL8 and HGX B300 NVL16 modules. Even in that non-rack-scale segment, NVIDIA maintains an advantage in both raw performance and total-cost-of-ownership efficiency. Nonetheless, there remains potential for the MI325X in smaller-scale deployments that do not require extensive GPU clusters: eight-GPU systems are sufficient for smaller-model inference, where ample memory bandwidth and capacity are the primary needs. As AMD continues to improve its software ecosystem and maintain competitive pricing, AI labs developing mid-sized AI models may find the MI325X appealing.

Cadence Unveils Millennium M2000 Supercomputer with NVIDIA Blackwell for AI-Driven Silicon, Systems and Drug Design

At its annual flagship user event, CadenceLIVE Silicon Valley 2025, Cadence (Nasdaq: CDNS) today announced a major expansion of its Cadence Millennium Enterprise Platform with the introduction of the new Millennium M2000 Supercomputer featuring NVIDIA Blackwell systems. The new supercomputer delivers AI-accelerated simulation at unprecedented speed and scale across engineering and drug design workloads.

The new supercomputer integrates Cadence's industry-leading solvers with NVIDIA HGX B200 systems, NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs and NVIDIA CUDA-X libraries and solver software. This powerful combination delivers dramatic reductions in simulation run times and up to 80X higher performance versus CPU-based systems for electronic design automation (EDA), system design and analysis (SDA), and drug discovery applications. The supercomputer provides a tightly co-optimized hardware-software stack that enables breakthrough performance with up to 20X lower power across multiple disciplines, accelerating the build-out of AI infrastructure, advancing physical AI machine design and pushing the frontiers of drug design.

Giga Computing Showcases Next-Gen OCP Solutions at OCP EMEA Regional Summit 2025

Giga Computing, a subsidiary of GIGABYTE and an industry leader in high-performance computing and server solutions, proudly announces its participation in the OCP EMEA Regional Summit 2025, taking place in Dublin, Ireland. As an active contributor to the Open Compute Project (OCP), Giga Computing will showcase its latest data center solutions tailored to meet the demands of hyperscale infrastructure, high-density storage, and AI-centric workloads.

The OCP EMEA Summit serves as a platform where global technical leaders come together to address critical challenges in data center sustainability, energy efficiency, and heat reuse across the region. The summit focuses on how innovations pioneered by hyperscale data center operators can help tackle these issues and drive meaningful change. Additionally, the event spotlights real-world deployments of OCP-recognized equipment in the EMEA region.

NVIDIA Will Bring Agentic AI Reasoning to Enterprises with Google Cloud

NVIDIA is collaborating with Google Cloud to bring agentic AI to enterprises seeking to locally harness the Google Gemini family of AI models using the NVIDIA Blackwell HGX and DGX platforms and NVIDIA Confidential Computing for data safety. With the NVIDIA Blackwell platform on Google Distributed Cloud, on-premises data centers can stay aligned with regulatory requirements and data sovereignty laws by locking down access to sensitive information, such as patient records, financial transactions and classified government information. NVIDIA Confidential Computing also secures sensitive code in the Gemini models from unauthorized access and data leaks.

"By bringing our Gemini models on premises with NVIDIA Blackwell's breakthrough performance and confidential computing capabilities, we're enabling enterprises to unlock the full potential of agentic AI," said Sachin Gupta, vice president and general manager of infrastructure and solutions at Google Cloud. "This collaboration helps ensure customers can innovate securely without compromising on performance or operational ease." Confidential computing with NVIDIA Blackwell provides enterprises with the technical assurance that their user prompts to the Gemini models' application programming interface—as well as the data they used for fine-tuning—remain secure and cannot be viewed or modified. At the same time, model owners can protect against unauthorized access or tampering, providing dual-layer protection that enables enterprises to innovate with Gemini models while maintaining data privacy.

Industry's First-to-Market Supermicro NVIDIA HGX B200 Systems Demonstrate AI Performance Leadership

Super Micro Computer, Inc. (SMCI), a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, has announced first-to-market, industry-leading performance on several MLPerf Inference v5.0 benchmarks using the NVIDIA HGX B200 8-GPU platform. The 4U liquid-cooled and 10U air-cooled systems achieved the best performance in select benchmarks. Supermicro demonstrated more than 3 times the token-per-second (token/s) generation for the Llama2-70B and Llama3.1-405B benchmarks compared to H200 8-GPU systems. "Supermicro remains a leader in the AI industry, as evidenced by the first new benchmarks released by MLCommons in 2025," said Charles Liang, president and CEO of Supermicro. "Our building block architecture enables us to be first-to-market with a diverse range of systems optimized for various workloads. We continue to collaborate closely with NVIDIA to fine-tune our systems and secure a leadership position in AI workloads."

Supermicro is the only system vendor publishing record MLPerf inference performance (on select benchmarks) for both the air-cooled and liquid-cooled NVIDIA HGX B200 8-GPU systems. Both air-cooled and liquid-cooled systems were operational before the MLCommons benchmark start date, and Supermicro engineers optimized the systems and software, as allowed by the MLCommons rules, to showcase the impressive performance. Within the operating margin, the Supermicro air-cooled B200 system exhibited the same level of performance as the liquid-cooled B200 system. Supermicro was already delivering these systems to customers while the benchmarks were being conducted. MLCommons requires that all results be reproducible, that the products be available, and that the results can be audited by other MLCommons members.

Supermicro Adds Portfolio for Next Wave of AI with NVIDIA Blackwell Ultra Solutions

Supermicro, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is announcing new systems and rack solutions powered by the NVIDIA Blackwell Ultra platform, featuring the NVIDIA HGX B300 NVL16 and NVIDIA GB300 NVL72 platforms. These new AI solutions from Supermicro and NVIDIA strengthen leadership in AI by delivering breakthrough performance for the most compute-intensive AI workloads, including AI reasoning, agentic AI, and video inference applications.

"At Supermicro, we are excited to continue our long-standing partnership with NVIDIA to bring the latest AI technology to market with the NVIDIA Blackwell Ultra Platforms," said Charles Liang, president and CEO, Supermicro. "Our Data Center Building Block Solutions approach has streamlined the development of new air and liquid-cooled systems, optimized to the thermals and internal topology of the NVIDIA HGX B300 NVL16 and GB300 NVL72. Our advanced liquid-cooling solution delivers exceptional thermal efficiency, operating with 40℃ warm water in our 8-node rack configuration, or 35℃ warm water in double-density 16-node rack configuration, leveraging our latest CDUs. This innovative solution reduces power consumption by up to 40% while conserving water resources, providing both environmental and operational cost benefits for enterprise data centers."

Dell Technologies Accelerates Enterprise AI Innovation from PC to Data Center with NVIDIA 

Marking one year since the launch of the Dell AI Factory with NVIDIA, Dell Technologies (NYSE: DELL) announces new AI PCs, infrastructure, software and services advancements to accelerate enterprise AI innovation at any scale. Successful AI deployments are vital for enterprises to remain competitive, but challenges like system integration and skill gaps can delay the value enterprises realize from AI. More than 75% of organizations want their infrastructure providers to deliver capabilities across all aspects of the AI adoption journey, driving customer demand for simplified AI deployments that can scale.

As the top provider of AI-centric infrastructure, Dell Technologies - in collaboration with NVIDIA - provides a consistent experience across AI infrastructure, software and services, offering customers a one-stop shop to scale AI initiatives from deskside to large-scale data center deployments.

NVIDIA to Build Accelerated Quantum Computing Research Center

NVIDIA today announced it is building a Boston-based research center to provide cutting-edge technologies to advance quantum computing. The NVIDIA Accelerated Quantum Research Center, or NVAQC, will integrate leading quantum hardware with AI supercomputers, enabling what is known as accelerated quantum supercomputing. The NVAQC will help solve quantum computing's most challenging problems, ranging from qubit noise to transforming experimental quantum processors into practical devices.

Leading quantum computing innovators, including Quantinuum, Quantum Machines and QuEra Computing, will tap into the NVAQC to drive advancements through collaborations with researchers from leading universities, such as the Harvard Quantum Initiative in Science and Engineering (HQI) and the Engineering Quantum Systems (EQuS) group at the Massachusetts Institute of Technology (MIT).

Micron Innovates From the Data Center to the Edge With NVIDIA

Secular growth of AI is built on the foundation of high-performance, high-bandwidth memory solutions. These high-performing memory solutions are critical to unlock the capabilities of GPUs and processors. Micron Technology, Inc., today announced it is the world's first and only memory company shipping both HBM3E and SOCAMM (small outline compression attached memory module) products for AI servers in the data center. This extends Micron's industry leadership in designing and delivering low-power DDR (LPDDR) for data center applications.

Micron's SOCAMM, a modular LPDDR5X memory solution, was developed in collaboration with NVIDIA to support the NVIDIA GB300 Grace Blackwell Ultra Superchip. The Micron HBM3E 12H 36 GB is also designed into the NVIDIA HGX B300 NVL16 and GB300 NVL72 platforms, while the HBM3E 8H 24 GB is available for the NVIDIA HGX B200 and GB200 NVL72 platforms. The deployment of Micron HBM3E products in NVIDIA Hopper and NVIDIA Blackwell systems underscores Micron's critical role in accelerating AI workloads.

NVIDIA Accelerates Science and Engineering With CUDA-X Libraries Powered by GH200 and GB200 Superchips

Scientists and engineers of all kinds are equipped to solve tough problems a lot faster with NVIDIA CUDA-X libraries powered by NVIDIA GB200 and GH200 superchips. As announced today at the NVIDIA GTC global AI conference, developers can now take advantage of tighter automatic integration and coordination between CPU and GPU resources, enabled by CUDA-X working with these latest superchip architectures, resulting in up to 11x speedups for computational engineering tools and 5x larger calculations compared with traditional accelerated computing architectures.

This greatly accelerates and improves workflows in engineering simulation, design optimization and more, helping scientists and researchers reach groundbreaking results faster. NVIDIA released CUDA in 2006, opening up a world of applications to the power of accelerated computing. Since then, NVIDIA has built more than 900 domain-specific NVIDIA CUDA-X libraries and AI models, making it easier to adopt accelerated computing and driving incredible scientific breakthroughs. Now, CUDA-X brings accelerated computing to a broad new set of engineering disciplines, including astronomy, particle physics, quantum physics, automotive, aerospace and semiconductor design.

Global Top 10 IC Design Houses See 49% YoY Growth in 2024, NVIDIA Commands Half the Market

TrendForce reveals that the combined revenue of the world's top 10 IC design houses reached approximately US$249.8 billion in 2024, marking a 49% YoY increase. The booming AI industry has fueled growth across the semiconductor sector, with NVIDIA leading the charge, posting an astonishing 125% revenue growth, widening its lead over competitors, and solidifying its dominance in the IC industry.

Looking ahead to 2025, advancements in semiconductor manufacturing will further enhance AI computing power, with LLMs continuing to emerge. Open-source models like DeepSeek could lower AI adoption costs, accelerating AI penetration from servers to personal devices. This shift positions edge AI devices as the next major growth driver for the semiconductor industry.

NVIDIA Confirms: "Blackwell Ultra" Coming This Year, "Vera Rubin" in 2026

During its latest FY2024 earnings call, NVIDIA CEO Jensen Huang gave a few predictions about future products. The upcoming Blackwell B300 series, codenamed "Blackwell Ultra," is scheduled for release in the second half of 2025 and will feature significant performance enhancements over the B200 series. These GPUs will incorporate eight stacks of 12-Hi HBM3E memory, providing up to 288 GB of onboard memory, paired with the Mellanox Spectrum Ultra X800 Ethernet switch, which offers 512 ports. Earlier rumors suggested that this is a 1,400 W TBP chip, meaning that NVIDIA is packing a lot of compute in there, with a potential 50% performance increase compared to current-generation products. NVIDIA has not officially confirmed these figures, but rough estimates of core count and memory bandwidth increases make them plausible.
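The quoted 288 GB figure follows directly from the stack geometry, assuming 24 Gb (3 GB) DRAM dies per layer, which is the die density these capacities imply:

```python
# Sanity check of the quoted HBM3E capacities, assuming 24 Gb (3 GB)
# DRAM dies per stack layer (an assumption, but the one the totals imply).

def hbm_capacity_gb(stacks: int, layers: int, gb_per_die: int = 3) -> int:
    return stacks * layers * gb_per_die

blackwell_ultra = hbm_capacity_gb(stacks=8, layers=12)  # 12-Hi stacks
b200 = hbm_capacity_gb(stacks=8, layers=8)              # 8-Hi stacks

print(blackwell_ultra)  # 288, matching the Blackwell Ultra figure
print(b200)             # 192, matching the current-generation B200
```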

Looking beyond Blackwell, NVIDIA is preparing to unveil its next-generation "Rubin" architecture, which promises to deliver what Huang described as a "big, big, huge step up" in AI compute capabilities. The Rubin platform, targeted for 2026, will integrate eight stacks of HBM4(E) memory, "Vera" CPUs, NVLink 6 switches delivering 3600 GB/s bandwidth, CX9 network cards supporting 1600 Gb/s, and X1600 switches—creating a comprehensive ecosystem for advanced AI workloads. More surprisingly, Huang indicated that NVIDIA will discuss post-Rubin developments at the upcoming GPU Technology Conference in March. This could include details on Rubin Ultra, projected for 2027, which may incorporate 12 stacks of HBM4E using 5.5-reticle-size CoWoS interposers and 100 mm × 100 mm TSMC substrates, representing another significant architectural leap forward in the company's accelerating AI infrastructure roadmap. While these may seem distant, NVIDIA is battling supply chain constraints to deliver these GPUs to its customers due to the massive demand for its solutions.

GIGABYTE Showcases Comprehensive AI Computing Portfolio at MWC 2025

GIGABYTE, a global leader in computing innovation and technology, will showcase its full-spectrum AI computing solutions that bridge development to deployment at MWC 2025, taking place from March 3-6.

"AI+" and "Enterprise-Reinvented" are two of the themes for MWC. As enterprises accelerate their digital transformation and intelligent upgrades, the transition of AI applications from experimental development to democratized commercial deployment has become a critical turning point in the industry. Continuing its "ACCEVOLUTION" initiative, GIGABYTE provides comprehensive infrastructure products and solutions, spanning cloud-based supercomputing centers to edge computing terminals, aiming to accelerate the next evolution and empower industries to scale AI applications efficiently.

HPE Announces First Shipment of NVIDIA "Grace Blackwell" System

Hewlett Packard Enterprise announced today that it has shipped its first NVIDIA Blackwell family-based solution, the NVIDIA GB200 NVL72. This rack-scale system by HPE is designed to help service providers and large enterprises quickly deploy very large, complex AI clusters with advanced, direct liquid cooling solutions to optimize efficiency and performance. "AI service providers and large enterprise model builders are under tremendous pressure to offer scalability, extreme performance, and fast time-to-deployment," said Trish Damkroger, senior vice president and general manager of HPC & AI Infrastructure Solutions, HPE. "As builders of the world's top three fastest systems with direct liquid cooling, HPE offers customers lower cost per token training and best-in-class performance with industry-leading services expertise."

The NVIDIA GB200 NVL72 features shared-memory, low-latency architecture with the latest GPU technology designed for extremely large AI models of over a trillion parameters, in one memory space. GB200 NVL72 offers seamless integration of NVIDIA CPUs, GPUs, compute and switch trays, networking, and software, bringing together extreme performance to address heavily parallelizable workloads, like generative AI (GenAI) model training and inferencing, along with NVIDIA software applications. "Engineers, scientists and researchers need cutting-edge liquid cooling technology to keep up with increasing power and compute requirements," said Bob Pette, vice president of enterprise platforms at NVIDIA. "Building on continued collaboration between HPE and NVIDIA, HPE's first shipment of NVIDIA GB200 NVL72 will help service providers and large enterprises efficiently build, deploy and scale large AI clusters."

CoreWeave Launches Debut Wave of NVIDIA GB200 NVL72-based Cloud Instances

AI reasoning models and agents are set to transform industries, but delivering their full potential at scale requires massive compute and optimized software. The "reasoning" process involves multiple models, generating many additional tokens, and demands infrastructure with a combination of high-speed communication, memory and compute to ensure real-time, high-quality results. To meet this demand, CoreWeave has launched NVIDIA GB200 NVL72-based instances, becoming the first cloud service provider to make the NVIDIA Blackwell platform generally available. With rack-scale NVIDIA NVLink across 72 NVIDIA Blackwell GPUs and 36 NVIDIA Grace CPUs, scaling to up to 110,000 GPUs with NVIDIA Quantum-2 InfiniBand networking, these instances provide the scale and performance needed to build and deploy the next generation of AI reasoning models and agents.

NVIDIA GB200 NVL72 on CoreWeave
NVIDIA GB200 NVL72 is a liquid-cooled, rack-scale solution with a 72-GPU NVLink domain, which enables the six dozen GPUs to act as a single massive GPU. NVIDIA Blackwell features many technological breakthroughs that accelerate inference token generation, boosting performance while reducing service costs. For example, fifth-generation NVLink enables 130 TB/s of GPU bandwidth in one 72-GPU NVLink domain, and the second-generation Transformer Engine enables FP4 for faster AI performance while maintaining high accuracy. CoreWeave's portfolio of managed cloud services is purpose-built for Blackwell. CoreWeave Kubernetes Service optimizes workload orchestration by exposing NVLink domain IDs, ensuring efficient scheduling within the same rack. Slurm on Kubernetes (SUNK) supports the topology block plug-in, enabling intelligent workload distribution across GB200 NVL72 racks. In addition, CoreWeave's Observability Platform provides real-time insights into NVLink performance, GPU utilization and temperatures.
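CoreWeave's scheduler internals are not public; the sketch below is a hypothetical illustration of the idea described above, placing a job's GPUs inside a single NVLink domain (one NVL72 rack) so they can communicate at NVLink rather than network speed. All names, IDs, and the fleet data are illustrative.

```python
# Hypothetical sketch of NVLink-domain-aware placement. A scheduler that
# knows each GPU's NVLink domain ID can keep a job inside one 72-GPU rack.
from collections import defaultdict

def place_job(gpus, gpus_needed):
    """gpus: list of (gpu_id, nvlink_domain_id) for free GPUs.
    Returns GPU IDs from a single NVLink domain, or None if no
    domain has enough free GPUs for the job."""
    by_domain = defaultdict(list)
    for gpu_id, domain in gpus:
        by_domain[domain].append(gpu_id)
    for domain, members in by_domain.items():
        if len(members) >= gpus_needed:
            return members[:gpus_needed]
    return None  # would otherwise have to span racks

# Toy fleet: two domains (racks) with 4 free GPUs each.
fleet = [(i, i // 4) for i in range(8)]
print(place_job(fleet, 3))  # [0, 1, 2] -- all from domain 0
print(place_job(fleet, 5))  # None -- no single domain has 5 free GPUs
```

A real scheduler would also weigh fragmentation and topology blocks, but the core constraint is the same: never split a tightly coupled job across NVLink domains if a single domain can hold it.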

Supermicro Ramps Full Production of NVIDIA Blackwell Rack-Scale Solutions With NVIDIA HGX B200

Supermicro, Inc., a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, is announcing full production availability of its end-to-end AI data center Building Block Solutions accelerated by the NVIDIA Blackwell platform. The Supermicro Building Block portfolio provides the core infrastructure elements necessary to scale Blackwell solutions with exceptional time to deployment. The portfolio includes a broad range of air-cooled and liquid-cooled systems with multiple CPU options, featuring superior thermal designs that support traditional air cooling, liquid-to-liquid (L2L), and liquid-to-air (L2A) cooling. In addition, a full data center management software suite and rack-level integration, including complete network switching, cabling, and cluster-level L12 solution validation, can be delivered as a turn-key offering with global delivery, professional support, and service.

"In this transformative moment of AI, where scaling laws are pushing the limits of data center capabilities, our latest NVIDIA Blackwell-powered solutions, developed through close collaboration with NVIDIA, deliver outstanding computational power," said Charles Liang, president and CEO of Supermicro. "Supermicro's NVIDIA Blackwell GPU offerings in plug-and-play scalable units with advanced liquid cooling and air cooling are empowering customers to deploy an infrastructure that supports increasingly complex AI workloads while maintaining exceptional efficiency. This reinforces our commitment to providing sustainable, cutting-edge solutions that accelerate AI innovation."

ASUS AI POD With NVIDIA GB200 NVL72 Platform Ready to Ramp-Up Production for Scheduled Shipment in March

ASUS is proud to announce that ASUS AI POD, featuring the NVIDIA GB200 NVL72 platform, is ready to ramp up production for a scheduled shipping date of March 2025. ASUS remains dedicated to providing comprehensive end-to-end solutions and software services, encompassing everything from AI supercomputing to cloud services. With a strong focus on fostering AI adoption across industries, ASUS is positioned to empower clients in accelerating their time to market by offering a full spectrum of solutions.

Proof of concept, funded by ASUS
Honoring the commitment to delivering exceptional value to clients, ASUS is set to launch a proof of concept (POC) for the groundbreaking ASUS AI POD, powered by the NVIDIA Blackwell platform. This exclusive opportunity is now open to a select group of innovators who are eager to harness the full potential of AI computing. Innovators and enterprises can experience firsthand what AI and deep learning solutions can deliver at exceptional scale. To take advantage of this limited-time offer, please complete this survey at: forms.office.com/r/FrAbm5BfH2. The expert ASUS team of NVIDIA GB200 specialists will guide users through the next steps.

NVIDIA Revises "Blackwell" Architecture Production Roadmap for More Complex Packaging

According to a well-known industry analyst, Ming-Chi Kuo, NVIDIA has restructured its "Blackwell" architecture roadmap, emphasizing dual-die designs using CoWoS-L packaging technology. The new roadmap eliminates several single-die products that would have used CoWoS-S packaging, changing NVIDIA's manufacturing strategy. The 200 Series will exclusively use dual-die designs with CoWoS-L packaging, featuring the GB200 NVL72 and HGX B200 systems. Notably absent is the previously expected B200A single-die variant. The 300 Series will include both dual-die and single-die options, though NVIDIA and cloud providers are prioritizing the GB200 NVL72 dual-die system. Starting Q1 2025, NVIDIA will reduce H series production, which uses CoWoS-S packaging, while ramping up 200 Series production. This transition indicates significantly decreased demand for CoWoS-S capacity through 2025.

While B300 systems using single-die CoWoS-S are planned for 2026 mass production, the current focus remains on dual-die CoWoS-L products. From TSMC's perspective, the transition between Blackwell generations requires minimal process adjustments, as both use similar front-end-of-line processes with only back-end-of-line modifications needed. Supply chain partners heavily dependent on CoWoS-S production face significant impact, reflected in recent stock price corrections. However, NVIDIA maintains this change reflects product strategy evolution rather than market demand weakness. TSMC continues expanding CoWoS-R capacity while slowing CoWoS-S expansion, viewing AI and high-performance computing as sustained growth drivers despite these packaging technology transitions.

Gigabyte Expands Its Accelerated Computing Portfolio with New Servers Using the NVIDIA HGX B200 Platform

Giga Computing, a subsidiary of GIGABYTE and an industry leader in generative AI servers and advanced cooling technologies, announced new GIGABYTE G893 series servers using the NVIDIA HGX B200 platform. The launch of these flagship 8U air-cooled servers, the G893-SD1-AAX5 and G893-ZD1-AAX5, signifies a new architecture and platform change for GIGABYTE in the demanding world of high-performance computing and AI, setting new standards for speed, scalability, and versatility.

These servers join GIGABYTE's accelerated computing portfolio alongside the NVIDIA GB200 NVL72 platform, which is a rack-scale design that connects 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell GPUs. At CES 2025 (January 7-10), the GIGABYTE booth will display the NVIDIA GB200 NVL72, and attendees can engage in discussions about the benefits of GIGABYTE platforms with the NVIDIA Blackwell architecture.

NVIDIA GB300 "Blackwell Ultra" Will Feature 288 GB HBM3E Memory, 1400 W TDP

The NVIDIA "Blackwell" series is barely out, with B100, B200, and GB200 chips shipping to OEMs and hyperscalers, but the company is already firming up its "Blackwell Ultra" plans with the upcoming GB300 AI server. According to UDN, the next-generation NVIDIA system will be powered by the B300 GPU chip, operating at 1400 W and delivering a remarkable 1.5x improvement in FP4 performance per card compared to its B200 predecessor. One of the most notable upgrades is the memory configuration, with each GPU now sporting 288 GB of HBM3e memory, a substantial increase from the GB200's 192 GB. The new design implements 12-layer HBM3e stacks, advancing from the GB200's 8-layer configuration. The system's cooling infrastructure has been completely reimagined, incorporating advanced water cooling plates and enhanced quick disconnects in the liquid cooling system.
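The reported capacity jump lines up neatly with the taller stacks: a quick sanity check (using only the figures cited above) shows that 288 GB over 192 GB is exactly the same 1.5x ratio as 12 layers over 8, implying per-die density is unchanged between generations.

```python
# Sanity check of the reported HBM3e figures: the capacity increase
# should match the stack-height increase if per-die density is the same.

gb200_capacity_gb, gb200_stack_layers = 192, 8    # GB200 per-GPU memory, layers per stack
gb300_capacity_gb, gb300_stack_layers = 288, 12   # GB300 (Blackwell Ultra) reported figures

capacity_ratio = gb300_capacity_gb / gb200_capacity_gb
layer_ratio = gb300_stack_layers / gb200_stack_layers

print(f"Capacity uplift: {capacity_ratio}x")  # 1.5x
print(f"Stack-height uplift: {layer_ratio}x") # 1.5x -> per-die density unchanged
```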

Networking capabilities have also seen a substantial upgrade, with ConnectX-8 network cards replacing the previous ConnectX-7 generation, while optical modules have been upgraded from 800G to 1.6T for faster data transmission. Regarding power management and reliability, the GB300 NVL72 cabinet will standardize capacitor tray implementation, with an optional Battery Backup Unit (BBU) system. Each BBU module costs approximately $300 to manufacture, with a complete GB300 system's BBU configuration totaling around $1,500. The system's supercapacitor requirements are equally substantial: each NVL72 rack requires over 300 units, priced between $20 and $25 per unit during production due to their high-power design. The GB300, pairing a Grace CPU with the Blackwell Ultra GPU, also introduces LPCAMM on its compute boards, suggesting that the LPCAMM memory standard is about to spread beyond laptops and desktops into servers. We will have to wait for the official launch to see the exact LPCAMM memory configurations.