News Posts matching #B100


NVIDIA "Blackwell" GPUs are Sold Out for 12 Months, Customers Ordering in 100K GPU Quantities

NVIDIA's "Blackwell" series of GPUs, including B100, B200, and GB200, are reportedly sold out for 12 months or an entire year. This directly means that if a new customer is willing to order a new Blackwell GPU now, there is a 12-month waitlist to get that GPU. Analyst from Morgan Stanley Joe Moore confirmed that in a meeting with NVIDIA and its investors, NVIDIA executives confirmed that the demand for "Blackwell" is so great that there is a 12-month backlog to fulfill first before shipping to anyone else. We expect that this includes customers like Amazon, META, Microsoft, Google, Oracle, and others, who are ordering GPUs in insane quantities to keep up with the demand from their customers.

The previous generation of "Hopper" GPUs was ordered in the tens of thousands, while this "Blackwell" generation is being ordered in the hundreds of thousands at a time. For NVIDIA, that is excellent news, as the demand is expected to continue. The main bottleneck standing between customers and their GPUs is TSMC, which is manufacturing them as fast as it can. NVIDIA is one of TSMC's largest customers, so its wafer allocation at TSMC's facilities is only expected to grow. We are now officially in the era of the million-GPU data center, and the only open question is when this massive growth stops, or whether it will stop at all in the near future.

NVIDIA Resolves "Blackwell" Yield Issues with New Photomask

During its Q2 2024 earnings call, NVIDIA confirmed that its upcoming Blackwell-based products are facing low-yield challenges. However, the company announced that it has implemented design changes to improve the production yields of its B100 and B200 processors. Despite these setbacks, NVIDIA remains optimistic about its production timeline. The tech giant plans to commence the production ramp of Blackwell GPUs in Q4 2024, with expected shipments worth several billion dollars by the end of the year. In an official statement, NVIDIA explained, "We executed a change to the Blackwell GPU mask to improve production yield." The company also reaffirmed that it had successfully sampled Blackwell GPUs with customers in the second quarter.

However, NVIDIA acknowledged that meeting demand required producing "low-yielding Blackwell material," which impacted its gross margins. During the earnings call, NVIDIA CEO Jensen Huang assured investors that B100 and B200 supply will be there, expressing confidence in the company's ability to mass-produce these chips starting in the fourth quarter. The Blackwell B100 and B200 GPUs use TSMC's CoWoS-L packaging technology, and their complex design prompted rumors of yield problems. Reports suggest that the initial challenges arose from mismatched coefficients of thermal expansion among the various components, leading to warping and system failures. The company now says a new GPU photomask fixed these problems and brought yields back to normal levels.

NVIDIA's New B200A Targets OEM Customers; High-End GPU Shipments Expected to Grow 55% in 2025

Despite recent rumors speculating about NVIDIA's supposed cancellation of the B100 in favor of the B200A, TrendForce reports that NVIDIA is still on track to launch both the B100 and B200 in 2H24, aimed at CSP customers. Additionally, a scaled-down B200A is planned for other enterprise clients, focusing on edge AI applications.

TrendForce reports that NVIDIA will prioritize the B100 and B200 for CSP customers with higher demand due to the tight production capacity of CoWoS-L. Shipments are expected to commence after 3Q24. In light of yield and mass production challenges with CoWoS-L, NVIDIA is also planning the B200A for other enterprise clients, utilizing CoWoS-S packaging technology.

Global AI Server Demand Surge Expected to Drive 2024 Market Value to US$187 Billion; Represents 65% of Server Market

TrendForce's latest industry report on AI servers reveals that high demand for advanced AI servers from major CSPs and brand clients is expected to continue in 2024. Meanwhile, gradual production expansion by TSMC, SK hynix, Samsung, and Micron has significantly eased shortages in 2Q24. Consequently, the lead time for NVIDIA's flagship H100 solution has decreased from the previous 40-50 weeks to less than 16 weeks.

TrendForce estimates that AI server shipments in the second quarter will increase by nearly 20% QoQ, and has revised the annual shipment forecast up to 1.67 million units—marking a 41.5% YoY growth.

Blackwell Shipments Imminent, Total CoWoS Capacity Expected to Surge by Over 70% in 2025

TrendForce reports that NVIDIA's Hopper H100 began to see a reduction in shortages in 1Q24. The new H200 from the same platform is expected to gradually ramp in Q2, with the Blackwell platform entering the market in Q3 and expanding to data center customers in Q4. However, this year will still primarily focus on the Hopper platform, which includes the H100 and H200 product lines. The Blackwell platform—based on how far supply chain integration has progressed—is expected to start ramping up in Q4, accounting for less than 10% of the total high-end GPU market.

The die size of Blackwell platform chips like the B100 is twice that of the H100. As Blackwell becomes mainstream in 2025, the total capacity of TSMC's CoWoS is projected to grow by 150% in 2024 and by over 70% in 2025, with NVIDIA's demand occupying nearly half of this capacity. For HBM, the NVIDIA GPU platform's evolution sees the H100 primarily using 80 GB of HBM3, while the 2025 B200 will feature 288 GB of HBM3e—a 3-4 fold increase in capacity per chip. The three major manufacturers' expansion plans indicate that HBM production volume will likely double by 2025.
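
Those headline figures are easy to sanity-check. Below is a minimal back-of-the-envelope calculation in Python; the capacity numbers come from the report above, while the per-stack breakdown is an assumption for illustration only.

```python
# Back-of-the-envelope check of the per-chip HBM capacity jump cited above.
h100_hbm_gb = 80    # H100 with HBM3 (per the report)
b200_hbm_gb = 288   # 2025 B200 with HBM3e (per the report)

growth = b200_hbm_gb / h100_hbm_gb
print(f"Per-chip HBM capacity growth: {growth:.1f}x")  # 3.6x, i.e. "3-4 fold"

# One plausible configuration (an assumption, not stated in the report):
# eight 12-Hi HBM3e stacks of 36 GB each.
stacks, gb_per_stack = 8, 36
assert stacks * gb_per_stack == b200_hbm_gb
```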

NVIDIA Blackwell GB200 Superchip to Cost up to 70,000 US Dollars

According to analysts at HSBC, NVIDIA's upcoming Blackwell GPUs for AI workloads are expected to carry premium pricing significantly higher than the company's current Hopper-based processors. The analysts estimate that NVIDIA's "entry-level" Blackwell GPU, the B100, will have an average selling price between $30,000 and $35,000 per chip—already on par with the flagship H100 GPU of the previous Hopper generation. But the real premium lies with the top-end GB200 "superchip," which combines a Grace CPU with two enhanced B200 GPUs. HSBC analysts peg pricing for this monster chip at a staggering $60,000 to $70,000 per unit. NVIDIA may opt to primarily sell complete servers powered by Blackwell rather than individual chips. The estimates suggest a fully-loaded GB200 NVL72 server—36 GB200 Superchips, for 72 Blackwell GPUs in total—could fetch around $3 million.

The sky-high pricing continues NVIDIA's aggressive strategy of charging a premium for its leading AI and accelerator hardware. With rivals like AMD and Intel still lagging in this space, NVIDIA can essentially name its price for now. The premium pricing reflects the massive performance uplift promised by Blackwell: a single GB200 Superchip is rated for five petaFLOPS of TF32 AI compute with sparsity, a 5x increase over the H100's one petaFLOP. Of course, actual street pricing will depend on volume and negotiating power. Hyperscalers like Amazon and Microsoft may secure significant discounts, while smaller players could pay even more than these eye-watering analyst projections. NVIDIA is betting that the industry's insatiable demand for AI compute will make these premium price tags palatable, at least for a while. But it's also raising the stakes for competitors to catch up quickly before losing too much ground.

NVIDIA "Blackwell" Successor Codenamed "Rubin," Coming in Late-2025

NVIDIA has barely started shipping its "Blackwell" line of AI GPUs, and its next-generation architecture is already on the horizon. Codenamed "Rubin," after astronomer Vera Rubin, the new architecture will power NVIDIA's future AI GPUs with generational jumps in performance but, more importantly, a design focus on lowering the power draw. This is becoming especially important as NVIDIA's current architectures already approach the kilowatt range and cannot scale boundlessly. TF International Securities analyst Ming-Chi Kuo says that NVIDIA's first AI GPU based on "Rubin," the R100 (not to be confused with the ATI GPU from many moons ago), is expected to enter mass production in Q4 2025, which means it could be unveiled and demonstrated before then, with select customers getting access to the silicon even earlier for evaluation.

The R100, according to Ming-Chi Kuo, is expected to leverage TSMC's 3 nm EUV FinFET process, specifically the TSMC-N3 node; by comparison, the current "Blackwell" B100 uses TSMC-N4P. It will be a chiplet GPU with a 4x-reticle design (up from Blackwell's 3.3x reticle) and will use TSMC's CoWoS-L packaging, just like the B100. The silicon is expected to be among the first to use HBM4 stacked memory, featuring 8 stacks of an as-yet-unknown height. The Grace Rubin GR200 CPU+GPU combo could feature a refreshed "Grace" CPU built on the 3 nm node, likely an optical shrink meant to reduce power. A Q4 2025 mass-production target would mean that customers start receiving the chips by early 2026.

Demand for NVIDIA's Blackwell Platform Expected to Boost TSMC's CoWoS Total Capacity by Over 150% in 2024

NVIDIA's next-gen Blackwell platform, which includes B-series GPUs and integrates NVIDIA's own Grace Arm CPU in models such as the GB200, represents a significant development. TrendForce points out that the GB200 and its predecessor, the GH200, both feature a combined CPU+GPU solution, primarily equipped with the NVIDIA Grace CPU and H200 GPU. However, the GH200 accounted for only approximately 5% of NVIDIA's high-end GPU shipments. The supply chain has high expectations for the GB200, with projections suggesting that its shipments could exceed a million units in 2025, potentially making up 40-50% of NVIDIA's high-end GPU market.

Although NVIDIA plans to launch products such as the GB200 and B100 in the second half of this year, upstream wafer packaging will need to adopt more complex and high-precision CoWoS-L technology, making the validation and testing process time-consuming. Additionally, more time will be required to optimize the B-series for AI server systems in aspects such as network communication and cooling performance. It is anticipated that the GB200 and B100 products will not see significant production volumes until 4Q24 or 1Q25.

U.S. Updates Advanced Semiconductor Ban, Actual Impact on the Industry Will Be Insignificant

On March 29th, the United States announced another round of updates to its export controls, targeting advanced computing, supercomputers, semiconductor end-uses, and semiconductor manufacturing products. These new regulations, which took effect on April 4th, are designed to prevent certain countries and businesses from circumventing U.S. restrictions to access sensitive chip technologies and equipment. Despite these tighter controls, TrendForce believes the practical impact on the industry will be minimal.

The latest updates aim to refine the language and parameters of previous regulations, tightening the criteria for exports to Macau and D:5 countries (China, North Korea, Russia, Iran, etc.). They require a detailed examination of all technology products' Total Processing Performance (TPP) and Performance Density (PD). If a product exceeds certain computing power thresholds, it must undergo a case-by-case review. Nevertheless, a new provision, Advanced Computing Authorized (ACA), allows for specific exports and re-exports among selected countries, including the transshipment of particular products between Macau and D:5 countries.
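
As a rough illustration of how such a screening threshold works in practice, here is a minimal sketch in Python. The formula follows the commonly cited BIS definition of TPP (peak TOPS multiplied by operand bit width); the concrete threshold and the example chip figures are illustrative assumptions, not values taken from the regulation text above.

```python
# Minimal sketch of the TPP / performance-density screening described above.
# TPP = peak operations per second (in TOPS) x operand bit width, per the
# commonly cited BIS definition. The threshold and chip figures below are
# illustrative assumptions, not regulation text.

def tpp(peak_tops: float, bit_width: int) -> float:
    """Total Processing Performance for one operation type."""
    return peak_tops * bit_width

def performance_density(total_tpp: float, die_area_mm2: float) -> float:
    """Performance Density: TPP per square millimeter of die."""
    return total_tpp / die_area_mm2

# Hypothetical accelerator: 312 TFLOPS of dense FP16 on an 826 mm^2 die.
chip_tpp = tpp(peak_tops=312, bit_width=16)                # 4992
chip_pd = performance_density(chip_tpp, die_area_mm2=826)  # ~6.04

TPP_REVIEW_THRESHOLD = 4800  # illustrative, not the actual legal threshold
if chip_tpp >= TPP_REVIEW_THRESHOLD:
    print(f"TPP {chip_tpp:.0f}, PD {chip_pd:.2f}: case-by-case review")
```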

Nvidia CEO Reiterates Solid Partnership with TSMC

One key takeaway from the ongoing GTC is that Nvidia's AI empire has taken shape through strong partnerships with TSMC and other Taiwanese makers, such as the major server ODMs.

According to a news report from the technology-focused media outlet DIGITIMES Asia, during his keynote at GTC on March 18, Huang underscored his company's partnerships with TSMC, as well as the supply chain in Taiwan. Speaking to the press later, Huang said Nvidia will have very strong demand for CoWoS, the advanced packaging service TSMC offers.

Samsung Prepares Mach-1 Chip to Rival NVIDIA in AI Inference

During its 55th annual shareholders' meeting, Samsung Electronics announced its entry into the AI processor market with the upcoming launch of its Mach-1 AI accelerator chips in early 2025. The South Korean tech giant revealed its plans to compete with established players like NVIDIA in the rapidly growing AI hardware sector. The Mach-1 generation of chips is an application-specific integrated circuit (ASIC) design equipped with LPDDR memory, envisioned to excel in edge computing applications. While Samsung does not aim to directly rival NVIDIA's ultra-high-end AI solutions like the H100, B100, or B200, the company's strategy focuses on carving out a niche in the market by offering unique features and performance enhancements at the edge, where low power draw and efficient compute matter most.

According to SeDaily, the Mach-1 chips boast a groundbreaking feature that reduces memory bandwidth requirements for inference to approximately 0.125x that of existing designs—an 87.5% reduction. This innovation could give Samsung a competitive edge in efficiency and cost-effectiveness. As the demand for AI-powered devices and services continues to soar, Samsung's foray into the AI chip market is expected to intensify competition and drive innovation in the industry. While NVIDIA currently holds a dominant position, Samsung's cutting-edge technology and access to advanced semiconductor manufacturing nodes could make it a formidable contender. The Mach-1 has been field-verified on an FPGA, while the final design is currently going through physical design for the SoC, which includes placement, routing, and other layout optimizations.
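
To make that bandwidth claim concrete: 0.125x is one eighth of the baseline, which is exactly an 87.5% cut. A quick Python check, where the baseline figure is a hypothetical placeholder rather than a number Samsung has disclosed:

```python
# Sanity check of the claimed inference-bandwidth reduction (0.125x).
baseline_gb_s = 800.0                  # hypothetical baseline bandwidth
mach1_gb_s = baseline_gb_s * 0.125     # claimed requirement: one eighth
reduction = 1 - mach1_gb_s / baseline_gb_s
print(f"Reduction: {reduction:.1%}")   # 87.5%, matching the claim
```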

Jensen Huang Discloses NVIDIA Blackwell GPU Pricing: $30,000 to $40,000

Jensen Huang has been talking to media outlets following the conclusion of his keynote presentation at NVIDIA's GTC 2024 conference—an "exclusive" CNBC TV interview with the Team Green boss has caused a stir in tech circles. Jim Cramer's long-running "Squawk on the Street" trade segment hosted Huang for just under five minutes—CNBC's presenter labelled the latest edition of GTC the "Woodstock of AI." NVIDIA's leader reckoned that around $1 trillion of industry was in attendance at this year's event—folks turned up to witness the unveiling of "Blackwell" B200 and GB200 AI GPUs. In the interview, Huang estimated that his company had invested around $10 billion into the research and development of its latest architecture: "we had to invent some new technology to make it possible."

Industry watchers have seized on a major revelation disclosed during the televised CNBC report—Huang revealed that his next-gen AI GPUs "will cost between $30,000 and $40,000 per unit." NVIDIA (and its rivals) are not known to publicly announce price ranges for AI and HPC chips—leaks from hardware partners and individuals within industry supply chains are the "usual" sources. An investment bank has already delved into alleged Blackwell production costs—as shared by Tae Kim/firstadopter: "Raymond James estimates it will cost NVIDIA more than $6000 to make a B200 and they will price the GPU at a 50-60% premium to H100...(the bank) estimates it costs NVIDIA $3320 to make the H100, which is then sold to customers for $25,000 to $30,000." Huang's disclosure should be treated as an approximation, since his company (normally) deals with the supply of basic building blocks.

Dell Expands Generative AI Solutions Portfolio, Selects NVIDIA Blackwell GPUs

Dell Technologies is strengthening its collaboration with NVIDIA to help enterprises adopt AI technologies. By expanding the Dell Generative AI Solutions portfolio, including with the new Dell AI Factory with NVIDIA, organizations can accelerate integration of their data, AI tools and on-premises infrastructure to maximize their generative AI (GenAI) investments. "Our enterprise customers are looking for an easy way to implement AI solutions—that is exactly what Dell Technologies and NVIDIA are delivering," said Michael Dell, founder and CEO, Dell Technologies. "Through our combined efforts, organizations can seamlessly integrate data with their own use cases and streamline the development of customized GenAI models."

"AI factories are central to creating intelligence on an industrial scale," said Jensen Huang, founder and CEO, NVIDIA. "Together, NVIDIA and Dell are helping enterprises create AI factories to turn their proprietary data into powerful insights."

Unwrapping the NVIDIA B200 and GB200 AI GPU Announcements

NVIDIA on Monday, at the 2024 GTC conference, unveiled the "Blackwell" B200 and GB200 AI GPUs. These are designed to offer an incredible 5x gain in AI inferencing performance over the current-gen "Hopper" H100, and come with four times the on-package memory. The B200 "Blackwell" is the largest chip physically possible with existing foundry tech, according to its makers. It packs an astonishing 208 billion transistors and is made up of two chiplets, each of which is itself the largest chip that can currently be manufactured.

Each chiplet is built on the TSMC N4P foundry node, the most advanced 4 nm-class node from the Taiwanese foundry, and carries 104 billion transistors. The two chiplets are linked by a 10 TB/s custom interconnect, with enough bandwidth and low enough latency for the two to maintain cache coherency (i.e., each can address the other's memory as if it were its own). Each "Blackwell" chiplet has a 4096-bit memory bus wired to 96 GB of HBM3E spread across four 24 GB stacks, totaling 192 GB for the B200 package. The GPU has a staggering 8 TB/s of memory bandwidth on tap. The B200 package features a 1.8 TB/s NVLink interface for host connectivity and for connecting to another B200 chip.
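
Those memory numbers are internally consistent, as a short Python check shows; the per-pin data rate at the end is derived from the stated figures rather than quoted by NVIDIA:

```python
# Cross-checking the B200 memory figures quoted above.
stacks, gb_per_stack = 8, 24    # four 24 GB stacks per chiplet, two chiplets
capacity_gb = stacks * gb_per_stack
assert capacity_gb == 192       # matches the stated package total

bus_bits = 2 * 4096             # one 4096-bit memory bus per chiplet
bandwidth_tb_s = 8.0            # stated package memory bandwidth

# Implied per-pin data rate (an inference from the stated numbers):
pin_rate_gbit_s = bandwidth_tb_s * 8 * 1000 / bus_bits
print(f"~{pin_rate_gbit_s:.2f} Gbit/s per pin")  # ~7.81, typical for HBM3E
```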

TSMC Reportedly Investing $16 Billion into New CoWoS Facilities

TSMC is experiencing unprecedented demand from AI chip customers—unnamed parties have (fancifully) requested the construction of entirely new fabrication facilities. Taiwan's leading semiconductor contract manufacturer seems to be concentrating on "sensible" expansions, mainly in the area of CoWoS packaging output—according to an Economic Daily report, company leadership and local government were negotiating over the construction of four new advanced packaging plants. Insiders propose that plans have been revised—an investment in excess of NT$500 billion ($16 billion) will enable the founding of six new CoWoS-focused facilities. TSMC is expected to make an official announcement next month—industry moles reckon that construction work will start in April. Two (of the six total) advanced packaging plants could become fully operational before the conclusion of 2024.

Lately, TSMC has initiated an ambitious recruitment drive targeting around 6,000 new workers, with a touring recruitment team tasked with attracting "talents with high enthusiasm for semiconductors." The majority of new recruits are likely heading to new or expanded Taiwan-based facilities. The Economic Daily report proposes that Chiayi City's technological hub will play host to TSMC's new CoWoS packaging plants. A DigiTimes Asia news piece (from January) posited that TSMC leadership anticipates CoWoS output reaching 44,000 units by the end of 2024. This predicted tally could grow, thanks to the (rumored) activation of additional factories. CoWoS packaging is considered a vital aspect of AI accelerators—insiders believe that TSMC's latest investment will boost production of NVIDIA H100 GPUs, and the combined output of six new CoWoS plants will assist greatly in the creation of next-gen B100 chips.

NVIDIA B100 "Blackwell" AI GPU Technical Details Leak Out

Jensen Huang's opening GTC 2024 keynote is scheduled for tomorrow afternoon (13:00 Pacific time)—many industry experts believe that the NVIDIA boss will take the stage and formally introduce his company's B100 "Blackwell" GPU architecture. An enlightened few have been treated to preview (AI and HPC) units—including Dell's COO, Jeff Clarke—but few pre-introduction leaks have emerged. Team Green is likely enforcing strict conditions upon a fortunate selection of trusted evaluators within its pool of ecosystem partners and customers.

Today, a brave soul has broken that silence—tech tipster, AGF/XpeaGPU, fears repercussions from the leather-jacketed one. They revealed a handful of technical details, a day prior to Team Green's highly anticipated unveiling: "I don't want to spoil NVIDIA B100 launch tomorrow, but this thing is a monster. 2 dies on (TSMC) CoWoS-L, 8x8-Hi HBM3E stacks for 192 GB of memory." They also crystal balled an inevitable follow-up card: "one year later, B200 goes with 12-Hi stacks and will offer a beefy 288 GB. And the performance! It's... oh no Jensen is there... me run away!" Reuters has also joined in on the fun, with some predictions and insider information: "NVIDIA is unlikely to give specific pricing, but the B100 is likely to cost more than its predecessor, which sells for upwards of $20,000." Enterprise products are expected to arrive first—possibly later this year—followed by gaming variants, maybe months later.
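
The leaked memory configurations line up arithmetically with 24 Gbit (3 GB) HBM3E DRAM dies—a quick check, where the per-die capacity is an assumption consistent with shipping HBM3E rather than part of the leak:

```python
# Checking the leaked HBM3E configurations against 24 Gbit (3 GB) DRAM dies.
GB_PER_DIE = 3  # assumption: 24 Gbit HBM3E dies

b100_gb = 8 * 8 * GB_PER_DIE     # eight 8-Hi stacks
b200_gb = 8 * 12 * GB_PER_DIE    # eight 12-Hi stacks (rumored follow-up)
print(b100_gb, b200_gb)          # 192 and 288 GB, matching the leaked figures
```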

HBM3 Initially Exclusively Supplied by SK Hynix, Samsung Rallies Fast After AMD Validation

TrendForce highlights the current landscape of the HBM market, which as of early 2024, is primarily focused on HBM3. NVIDIA's upcoming B100 or H200 models will incorporate advanced HBM3e, signaling the next step in memory technology. The challenge, however, is the supply bottleneck caused by both CoWoS packaging constraints and the inherently long production cycle of HBM—extending the timeline from wafer initiation to the final product beyond two quarters.

The current HBM3 supply for NVIDIA's H100 solution is primarily met by SK hynix, leading to a supply shortfall in meeting burgeoning AI market demands. Samsung's entry into NVIDIA's supply chain with its 1Znm HBM3 products in late 2023, though initially minor, signifies its breakthrough in this segment.

Next-Generation NVIDIA DGX Systems Could Launch Soon with Liquid Cooling

During the 2024 SIEPR Economic Summit, NVIDIA CEO Jensen Huang acknowledged that the company's next-generation DGX systems, designed for AI and high-performance computing workloads, will require liquid cooling due to their immense power consumption. Huang also hinted that these new systems are set to be released in the near future. The revelation comes as no surprise, given the increasing power of GPUs needed to satisfy AI and machine learning applications. As computational requirements continue to grow, so does the need for more powerful hardware. However, with great power comes great heat generation, necessitating advanced cooling solutions to maintain optimal performance and system stability. Liquid cooling has long been a staple in high-end computing systems, offering superior thermal management compared to traditional air cooling methods.

By implementing liquid cooling in the upcoming DGX systems, NVIDIA aims to push the boundaries of performance while ensuring the hardware remains reliable and efficient. Although Huang did not provide a specific release date for the new DGX systems, his statement suggests that they are on the horizon. Whether the next generation of DGX systems uses the current NVIDIA H200 or the upcoming Blackwell B100 GPU as its primary accelerator, the performance will undoubtedly be there. As the AI and high-performance computing landscape continues to evolve, NVIDIA's position continues to strengthen, and liquid-cooled systems will certainly play a crucial role in shaping the future of these industries.

Dell Exec Confirms NVIDIA "Blackwell" B100 Doesn't Need Liquid Cooling

NVIDIA's next-generation AI GPU, the B100 "Blackwell," is now in the hands of the company's biggest ecosystem partners and customers for evaluation, and one of them is Dell. Jeff Clarke, the OEM giant's chief operating officer, speaking to industry analysts in an investor teleconference, said that he is excited about the upcoming B100 and B200 chips from NVIDIA. The B100 is the codename for the AI GPU NVIDIA is designing for PCIe add-in cards and the SXM socket, meant for systems powered by x86 CPUs such as AMD EPYC or Intel Xeon Scalable. The B200 is its variant meant for machines powered by NVIDIA's in-house Arm-based processors, such as the successor to its Grace CPU and that CPU's combination with an AI GPU, as in Grace Hopper (GH200).

Perhaps the most interesting remark by Clarke about the B100 is that he doesn't think it needs liquid cooling, and can make do with high-airflow cooling like the H100. "We're excited about what happens at the B100 and the B200, and we think that's where there's actually another opportunity to distinguish engineering confidence. Our characterization in the thermal side, you really don't need to direct-liquid cooling to get to the energy density of 1000 W per GPU. That happens next year with the B200," he said. NVIDIA is planning a 2024 debut for "Blackwell" in the AI GPU space with the B100, with B200 slated for 2025, possibly alongside a new CPU.

NVIDIA Expects Upcoming Blackwell GPU Generation to be Capacity-Constrained

NVIDIA is anticipating supply issues for its upcoming Blackwell GPUs, which are expected to significantly improve artificial intelligence compute performance. "We expect our next-generation products to be supply constrained as demand far exceeds supply," said Colette Kress, NVIDIA's chief financial officer, during a recent earnings call. This prediction of scarcity comes just days after an analyst noted much shorter lead times for NVIDIA's current flagship Hopper-based H100 GPUs tailored to AI and high-performance computing. The eagerly anticipated Blackwell architecture and B100 GPUs built on it promise major leaps in capability—likely spurring NVIDIA's existing customers to place pre-orders already. With skyrocketing demand in the red-hot AI compute market, NVIDIA appears poised to capitalize on the insatiable appetite for ever-greater processing power.

However, the scarcity of NVIDIA's products may present an excellent opportunity for rivals like AMD and Intel. If either company can offer a product that beats NVIDIA's current H100 and comes with a suitable software stack, customers may be willing to jump to its offerings rather than wait many months through the anticipated lead times. Intel is preparing the next-generation Gaudi 3 and working on the Falcon Shores accelerator for AI and HPC. AMD is shipping its Instinct MI300 accelerator, a highly competitive product, while already working on the MI400 generation. It remains to be seen whether AI companies will begin adopting non-NVIDIA hardware or remain loyal customers and accept the longer lead times of the new Blackwell generation. However, capacity constraints should only be a problem at launch, with availability improving from quarter to quarter: as TSMC expands CoWoS packaging capacity and leading-edge wafer production, NVIDIA's wafer allocation will likely improve over time as the company moves its priority from the H100 to the B100.

Jensen Huang Heads to Taiwan, B100 "Blackwell" GPUs Reportedly in Focus

NVIDIA's intrepid CEO, Jensen Huang, has spent a fair chunk of January travelling around China—news outlets believe that Team Green's leader has conducted business meetings with very important clients in the region. Insiders proposed that his low-profile business trip included visits to NVIDIA operations in Shenzhen, Shanghai and Beijing. The latest updates allege that a stopover in Taiwan was also planned, following the conclusion of Mainland activities. Photos from an NVIDIA Chinese new year celebratory event have been spreading across the internet lately—many were surprised to see Huang appear on-stage in Shanghai and quickly dispense with his trademark black leather jacket. He swapped into a colorful "Year of the Wood Dragon" sleeveless shirt for a traditional dance routine.

It was not all fun and games during Huang's first trip to China in four years—inside sources have informed the Wall Street Journal about growing unrest within the nation's top-ranked cloud AI tech firms. Anonymous informants allege that leadership at Alibaba Group and Tencent are not happy with NVIDIA's selection of compromised enterprise GPUs—it is posited that NVIDIA's president has spent time convincing key clients not to adopt natively developed solutions (unaffected by US sanctions). The short hop over to Taiwan is reportedly not for R&R purposes—insiders had Huang visiting key supply partners TSMC and Wistron. Industry experts think that these meetings are linked to NVIDIA's upcoming "Blackwell" B100 AI GPU and "supercharged" H200 "Hopper" accelerator. It is too early for the rumor mill to start speculating about nerfed versions of NVIDIA's 2024 enterprise products reaching Chinese shores, but Jensen Huang is seemingly ready to hold diplomatic talks with all sides.

Manufacturers Anticipate Completion of NVIDIA's HBM3e Verification by 1Q24; HBM4 Expected to Launch in 2026

TrendForce's latest research into the HBM market indicates that NVIDIA plans to diversify its HBM suppliers for more robust and efficient supply chain management. Samsung's HBM3 (24 GB) is anticipated to complete verification with NVIDIA by December this year. As for HBM3e, Micron provided its 8-Hi (24 GB) samples to NVIDIA at the end of July, SK hynix in mid-August, and Samsung in early October.

Given the intricacy of the HBM verification process—estimated to take two quarters—TrendForce expects that some manufacturers might learn preliminary HBM3e results by the end of 2023. However, it's generally anticipated that major manufacturers will have definite results by 1Q24. Notably, the outcomes will influence NVIDIA's procurement decisions for 2024, as final evaluations are still underway.

BIREN BR100 Detailed: China's AI-HPC Processor Storms into the HPC GPU Big Leagues

If InnoSilicon's Fenghua gaming GPU hit the scene last November seemingly out of nowhere, then another Chinese GPU developer is making waves at Hot Chips 2022, this time in the enterprise space. The BR100 by BIREN is a large AI-HPC GPU-based processor that is China's answer to Hopper, Ponte Vecchio, and CDNA2, and is meant to ensure China's growth as an AI/HPC leader is unaffected in the event of a tech embargo, for whatever reason.

The BR100 is an MCM of two planar-silicon dies built on the 7 nm DUV node, with a striking combined transistor count of 77 billion and a 550 W (typical) TDP. The chip features 64 GB of on-package HBM2E memory. System bus interfaces include PCI-Express 5.0 x16 with CXL, and eight lanes of a proprietary interconnect called B-Link, which together total 2.3 TB/s of bandwidth. The processor supports nearly all popular compute formats except double-precision floating-point (FP64). Among the supported formats are single-precision FP32, TF32+, FP16, BF16, INT16, and INT8. BIREN claims up to 256 TFLOP/s FP32, up to 512 TFLOP/s TF32+, up to 1 PFLOP/s BF16, and 2,048 TOPS INT8. This would put it at 2.4 to 2.8 times faster than NVIDIA's "Ampere" A100.
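
Those claimed peak rates follow a clean doubling with each halving of operand precision, a common pattern in vector and matrix engines. A quick tabulation, using only the numbers from BIREN's claims above:

```python
# BIREN's claimed BR100 peak throughput, in TFLOP/s or TOPS (from the article).
br100_peak = {"FP32": 256, "TF32+": 512, "BF16": 1024, "INT8": 2048}

rates = list(br100_peak.items())
for (fmt_a, rate_a), (fmt_b, rate_b) in zip(rates, rates[1:]):
    # Each step down in precision doubles the claimed throughput.
    print(f"{fmt_b} vs {fmt_a}: {rate_b / rate_a:.0f}x")
```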