News Posts matching #B300


Insiders Predict Introduction of NVIDIA "Blackwell Ultra" GB300 AI Series at GTC, with Fully Liquid-cooled Clusters

Supply chain insiders believe that NVIDIA's "Blackwell Ultra" GB300 AI chip design will get a formal introduction at next week's GTC 2025 conference, with Jensen Huang's keynote scheduled for Tuesday, March 18. Team Green's chief has already revealed a couple of Blackwell B300 series details to investors; a recent earnings call touched on a second-half 2025 launch window. Industry moles have put a spotlight on the GB300 GPU's alleged energy-hungry nature: according to inside tracks, power consumption has "significantly" increased compared to its slightly older equivalent, NVIDIA's less refined "Blackwell" GB200 design.

A Taiwan Economic Daily (UDN) news article predicts an upcoming "second cooling revolution," due to reports of "Blackwell Ultra" parts demanding more capable heat dissipation solutions. Supply chain leakers have suggested effective countermeasures in the form of fully liquid-cooled systems: "not only will more water cooling plates be introduced, but the use of water cooling quick connectors will increase four times compared to GB200." The pre-Christmas 2024 news cycle proposed a 1,400 W TDP rating. Involved "Taiwanese cooling giants" are expected to pull in tidy sums from supplying optimal heat-dissipating gear, with local "water-cooling quick-connector" manufacturers also tipped to benefit greatly. The UDN report pulled quotes from a variety of regional cooling specialists; the consensus is that involved partners are struggling to keep up with demand across the GB200 and GB300 product lines.

NVIDIA Confirms: "Blackwell Ultra" Coming This Year, "Vera Rubin" in 2026

During its latest earnings call, NVIDIA CEO Jensen Huang offered a few predictions about future products. The upcoming Blackwell B300 series, codenamed "Blackwell Ultra," is scheduled for release in the second half of 2025 and will feature significant performance enhancements over the B200 series. These GPUs will incorporate eight stacks of 12-Hi HBM3E memory, providing up to 288 GB of onboard memory, paired with the Mellanox Spectrum Ultra X800 Ethernet switch, which offers 512 ports. Earlier rumors suggested that this is a 1,400 W TBP chip, meaning that NVIDIA is packing a lot of compute into it, with a potential 50% performance increase over current-generation products. NVIDIA has not officially confirmed these figures, but rough estimates of the core count and memory bandwidth increases make them plausible.
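The 288 GB figure lines up with the stated stack configuration. A minimal sanity check, assuming 3 GB (24 Gb) per HBM3E die, a capacity not stated in the article but typical of current 12-Hi stacks:

```python
# Rough check of the reported "Blackwell Ultra" memory capacity.
STACKS = 8           # HBM3E stacks per GPU, per the earnings call
DIES_PER_STACK = 12  # 12-Hi stacking
GB_PER_DIE = 3       # assumed 24 Gb (3 GB) HBM3E dies

capacity_gb = STACKS * DIES_PER_STACK * GB_PER_DIE
print(capacity_gb)   # 288, matching the quoted onboard memory

# The B200's 192 GB follows from 8-Hi stacks of the same dies:
b200_gb = STACKS * 8 * GB_PER_DIE
print(capacity_gb / b200_gb)   # 1.5, the capacity uplift over B200
```

The 1.5x capacity uplift falls out of moving from 8-Hi to 12-Hi stacks alone, with the stack count unchanged.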

Looking beyond Blackwell, NVIDIA is preparing to unveil its next-generation "Rubin" architecture, which promises to deliver what Huang described as a "big, big, huge step up" in AI compute capabilities. The Rubin platform, targeted for 2026, will integrate eight stacks of HBM4(E) memory, "Vera" CPUs, NVLink 6 switches delivering 3600 GB/s bandwidth, CX9 network cards supporting 1600 Gb/s, and X1600 switches—creating a comprehensive ecosystem for advanced AI workloads. More surprisingly, Huang indicated that NVIDIA will discuss post-Rubin developments at the upcoming GPU Technology Conference in March. This could include details on Rubin Ultra, projected for 2027, which may incorporate 12 stacks of HBM4E using 5.5-reticle-size CoWoS interposers and 100 mm × 100 mm TSMC substrates, representing another significant architectural leap forward in the company's accelerating AI infrastructure roadmap. While these may seem distant, NVIDIA is battling supply chain constraints to deliver these GPUs to its customers due to the massive demand for its solutions.
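The quoted Rubin interconnect figures amount to a straight doubling over the current generation. A quick sketch, assuming NVLink 5's 1,800 GB/s per GPU and ConnectX-8's 800 Gb/s as baselines (neither baseline is stated in the article):

```python
# Comparison of quoted "Rubin" interconnect figures against assumed
# current-generation baselines.
NVLINK6_GBPS = 3600   # GB/s, quoted for Rubin's NVLink 6 switches
NVLINK5_GBPS = 1800   # GB/s, assumed Blackwell-generation baseline
nvlink_uplift = NVLINK6_GBPS / NVLINK5_GBPS
print(f"NVLink bandwidth uplift: {nvlink_uplift}x")   # 2.0x

CX9_GBIT = 1600       # Gb/s, quoted for Rubin's CX9 network cards
CX8_GBIT = 800        # Gb/s, assumed ConnectX-8 baseline
nic_uplift = CX9_GBIT / CX8_GBIT
print(f"NIC throughput uplift: {nic_uplift}x")        # 2.0x
```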

NVIDIA Revises "Blackwell" Architecture Production Roadmap for More Complex Packaging

According to a well-known industry analyst, Ming-Chi Kuo, NVIDIA has restructured its "Blackwell" architecture roadmap, emphasizing dual-die designs using CoWoS-L packaging technology. The new roadmap eliminates several single-die products that would have used CoWoS-S packaging, changing NVIDIA's manufacturing strategy. The 200 Series will exclusively use dual-die designs with CoWoS-L packaging, featuring the GB200 NVL72 and HGX B200 systems. Notably absent is the previously expected B200A single-die variant. The 300 Series will include both dual-die and single-die options, though NVIDIA and cloud providers are prioritizing the GB200 NVL72 dual-die system. Starting Q1 2025, NVIDIA will reduce H series production, which uses CoWoS-S packaging, while ramping up 200 Series production. This transition indicates significantly decreased demand for CoWoS-S capacity through 2025.

While B300 systems using single-die CoWoS-S are planned for 2026 mass production, the current focus remains on dual-die CoWoS-L products. From TSMC's perspective, the transition between Blackwell generations requires minimal process adjustments, as both use similar front-end-of-line processes with only back-end-of-line modifications needed. Supply chain partners heavily dependent on CoWoS-S production face significant impact, reflected in recent stock price corrections. However, NVIDIA maintains this change reflects product strategy evolution rather than market demand weakness. TSMC continues expanding CoWoS-R capacity while slowing CoWoS-S expansion, viewing AI and high-performance computing as sustained growth drivers despite these packaging technology transitions.

NVIDIA GB300 "Blackwell Ultra" Will Feature 288 GB HBM3E Memory, 1400 W TDP

NVIDIA's "Blackwell" series is barely out, with B100, B200, and GB200 chips shipping to OEMs and hyperscalers, but the company is already firming up its upgraded "Blackwell Ultra" plans with the upcoming GB300 AI server. According to UDN, the next-generation NVIDIA system will be powered by the B300 GPU, operating at 1,400 W and delivering a remarkable 1.5x improvement in FP4 performance per card compared to its B200 predecessor. One of the most notable upgrades is the memory configuration, with each GPU now sporting 288 GB of HBM3E memory, a substantial increase from the GB200's 192 GB. The new design implements a 12-Hi stack architecture, advancing from the GB200's 8-Hi configuration. The system's cooling infrastructure has been completely reimagined, incorporating advanced water cooling plates and enhanced quick disconnects in the liquid cooling system.

Networking capabilities have also seen a substantial upgrade, with ConnectX-8 network cards replacing the previous ConnectX-7 generation, while optical modules have been upgraded from 800G to 1.6T for faster data transmission. Regarding power management and reliability, the GB300 NVL72 cabinet will standardize the capacitor tray, with an optional Battery Backup Unit (BBU) system. Each BBU module costs approximately $300 to manufacture, with a complete GB300 system's BBU configuration totaling around $1,500. The system's supercapacitor requirements are equally substantial: each NVL72 rack requires over 300 units, priced between $20 and $25 per unit during production owing to their high-power design. The GB300, pairing a Grace CPU with the Blackwell Ultra GPU, also introduces LPCAMM on its compute boards, suggesting that the LPCAMM memory standard is about to spread from laptops and desktops to servers. We will have to wait for the official launch to see the final LPCAMM memory configurations.
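The reported per-unit costs can be turned into a rough per-rack bill for the backup-power components. A back-of-envelope sketch, where the BBU module count (five) is inferred from the ~$1,500 system total at ~$300 per module rather than stated directly:

```python
# Back-of-envelope costing of GB300 NVL72 backup-power components,
# using the per-unit figures from the UDN report.
BBU_UNIT_COST = 300    # USD per Battery Backup Unit module
BBU_SYSTEM_TOTAL = 1500  # USD, reported per-system BBU cost
bbu_modules = BBU_SYSTEM_TOTAL // BBU_UNIT_COST
print(bbu_modules)     # 5 modules per system, inferred from the totals

SUPERCAP_COUNT = 300   # minimum supercapacitors per NVL72 rack
low, high = SUPERCAP_COUNT * 20, SUPERCAP_COUNT * 25  # $20-25 per unit
print(low, high)       # $6,000 to $7,500 per rack in supercapacitors alone
```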

NVIDIA Might Consider Major Design Shift for Future 300 GPU Series

NVIDIA is reportedly considering a significant design change for its GPU products, shifting from the current on-board solution to an independent GPU socket design following the GB200 shipment in Q4, according to reports from MoneyDJ and the Economic Daily News quoted by TrendForce. This move is not new to the industry: AMD already introduced a socketed design in 2023 with its MI300A series, via dedicated Supermicro servers. The B300 series, expected to become NVIDIA's mainstream product in the second half of 2025, is rumored to be the main beneficiary of this design change, which could improve yield rates, though it may come with some performance trade-offs.

According to the Economic Daily News, the socket design will simplify after-sales service and server board maintenance, allowing users to replace or upgrade GPUs quickly. The report also notes that under the socketed design, boards will contain up to four NVIDIA GPUs and a CPU, with each GPU having its own dedicated socket. This will benefit Taiwanese manufacturers like Foxconn and LOTES, who will supply the various components and connectors. The move seems logical: with the current on-board design, once a GPU becomes faulty, the entire motherboard needs to be replaced, leading to significant downtime and high operational and maintenance costs.