News Posts matching #Blackwell


NVIDIA RTX 50-series "Blackwell" to Debut 16-pin PCIe Gen 6 Power Connector Standard

NVIDIA is reportedly looking to change the power connector standard for the fourth successive time in a span of three years with its upcoming GeForce RTX 50-series "Blackwell" GPUs, Moore's Law is Dead reports. NVIDIA began its post 8-pin PCIe journey with the 12-pin Molex MicroFit connector for the GeForce RTX 3080 and RTX 3090 Founders Edition cards. The RTX 3090 Ti went on to standardize the 12VHPWR connector, which the company debuted across a wider section of its GeForce RTX 40-series "Ada" product stack (all SKUs with TGP over 200 W). In the face of rising complaints about the reliability of 12VHPWR, some partner RTX 40-series cards are beginning to implement the pin-compatible but sturdier 12V-2x6. The implementation of the 16-pin PCIe Gen 6 connector would be the fourth power connector change, if the rumors are true. A different source says that rival AMD has no plans to move away from the classic 8-pin PCIe power connectors.

Update 15:48 UTC: Our friends at Hardware Busters have reliable sources in the power supply industry with the same access to the PCIe CEM specification as NVIDIA, and they say that the story of NVIDIA adopting a new power connector with "Blackwell" is likely false. NVIDIA is expected to debut the new GPU series toward the end of 2024, and if a new power connector were in the offing, the power supply industry would have some clue by now. It doesn't. Read more about this in the Hardware Busters article in the source link below.

Update Feb 20th: An earlier version of this article incorrectly reported that the "16-pin connector" is fundamentally different from the current 12V-2x6, with 16 pins dedicated to power delivery. We have since been corrected by Moore's Law is Dead: it is in fact the same 12V-2x6 connector, but covered by an updated PCIe 6.0 CEM specification.
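For context, a back-of-the-envelope sketch of the arithmetic behind a 12V-2x6-class connector is below. The per-pin current rating, the 600 W limit, and the example TGP values are illustrative assumptions for this sketch, not figures pulled from the PCIe 6.0 CEM specification.

```python
# Rough sketch of what a 12V-2x6-style connector carries.
# Figures below are illustrative assumptions, not taken from the PCIe 6.0 CEM spec.

VOLTAGE_V = 12.0          # supply rail voltage
POWER_PAIRS = 6           # 12 current-carrying pins = six 12 V pins + six ground pins
CURRENT_PER_PIN_A = 9.5   # commonly cited per-pin terminal rating (assumption)
CONNECTOR_LIMIT_W = 600   # maximum power the connector advertises via its sideband pins

def electrical_capacity_w() -> float:
    """Raw electrical capacity of the power pins, before the spec cap."""
    return POWER_PAIRS * CURRENT_PER_PIN_A * VOLTAGE_V

def margin_at_tgp(tgp_w: float) -> float:
    """Watts of headroom left under the 600 W advertised limit at a given TGP."""
    return CONNECTOR_LIMIT_W - tgp_w

if __name__ == "__main__":
    print(f"Electrical capacity of the power pins: {electrical_capacity_w():.0f} W")  # ~684 W
    for tgp in (200, 320, 450):  # example TGPs for illustration only
        print(f"TGP {tgp} W -> {margin_at_tgp(tgp):.0f} W of margin under the cap")
```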

HBM Industry Revenue Could Double by 2025 - Growth Driven by Next-gen AI GPUs

Samsung, SK hynix, and Micron are considered the top manufacturers of High Bandwidth Memory (HBM)—the HBM3 and HBM3E standards are increasingly in demand, owing to the widespread deployment of GPUs and accelerators by generative AI companies. Taiwan's Commercial Times proposes that there is an ongoing shortage of HBM components—but this presents a growth opportunity for smaller manufacturers in the region. Naturally, the big-name producers are expected to dive in head first with the development of next-generation models. The aforementioned financial news article cites research conducted by the Gartner group—it predicts that the HBM market will hit an all-time high of $4.976 billion (USD) by 2025.

This estimate is more than double the projected revenue (just over $2 billion) generated by the HBM market in 2023—the explosive growth of generative AI applications has "boosted" demand for the most performant memory standards. The Commercial Times report states that SK hynix is the current HBM3E leader, with Micron and Samsung trailing behind—industry experts believe that the stragglers will need to "expand HBM production capacity" in order to stay competitive. SK hynix has partnered with NVIDIA—the GH200 Grace Hopper platform was unveiled last summer, outfitted with the South Korean firm's HBM3E parts. In a similar timeframe, Samsung was named as AMD's preferred supplier of HBM3 packages—as featured within the recently launched Instinct MI300X accelerator. NVIDIA's HBM3E deal with SK hynix is believed to extend to the internal makeup of Blackwell GB100 data-center GPUs. The HBM4 memory standard is expected to be the next major battleground for the industry's hardest hitters.
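As a quick sanity check on those figures, the sketch below works out the implied growth from the 2023 baseline. The article only describes 2023 revenue as "just over $2 billion," so the $2.0 billion baseline here is an assumption.

```python
# Quick sanity check on the Gartner figures quoted above.
# The 2023 baseline is only described as "just over $2 billion", so $2.0 billion is assumed here.

revenue_2023_busd = 2.0     # assumed baseline, billions of USD
revenue_2025_busd = 4.976   # Gartner projection cited by the Commercial Times

growth_factor = revenue_2025_busd / revenue_2023_busd
implied_cagr = growth_factor ** (1 / 2) - 1   # two years between the data points

print(f"Growth factor 2023 -> 2025: {growth_factor:.2f}x")   # ~2.5x
print(f"Implied annual growth rate: {implied_cagr:.0%}")      # ~58% per year
```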

NVIDIA GeForce RTX 50 Series "Blackwell" On Course for Q4-2024

NVIDIA's next-generation GeForce RTX 50-series "Blackwell" gaming GPUs are on course to debut toward the end of 2024, with a Moore's Law is Dead report pinning the launch to Q4-2024. This timeline is easy to predict, as every GeForce RTX generation tends to have two years of market presence, with the RTX 40-series "Ada" having debuted in Q4-2022 (October 2022), and the RTX 30-series "Ampere" in late-Q3 2020 (September 2020).

NVIDIA's roadmap for 2024 sees a Q1 debut of the RTX 40-series SUPER, with three high-end SKUs refreshing the upper half of the RTX 40-series. The MLID report goes on to speculate that the generational performance uplift of "Blackwell" over "Ada" will be smaller than that of "Ada" over "Ampere." With AI HPC GPUs outselling gaming GPUs by 5:1 in terms of revenue, and AMD rumored to be retreating from the enthusiast segment for its next-gen RDNA4, it is easy to see why.

NVIDIA Reportedly Selects TSMC 3 nm Process for Blackwell GB100 GPU

NVIDIA is reported to be on next year's 3 nm-class order book at TSMC, with the Blackwell GB100 data-center GPU marked down as an important production project. A DigiTimes insider piece proposes that Team Green has signed up for orders in the second half of 2024, giving TSMC some wiggle room to iron out alleged advanced packaging issues—but it is implied that Apple is already ahead in the queue. Inside sources have not spotted an Intel request for TSMC's advanced 3 nm process (still utilizing FinFET). Industry experts reckon that NVIDIA could be granted access to a customized node for its Blackwell product line, given its VIP status and special relationship with the leading Taiwanese foundry.

DigiTimes believes that the Blackwell GB100 (sporting a chiplet design) will be targeting a Q4 2024 launch window, therefore arriving before any competing next-gen technologies: "For NVIDIA, which monopolizes more than 80% of the AI GPU market, the next generation B100 will use TSMC's 3 nm...It will seize AI deployment business opportunities while the iron is hot and suppress AMD, Intel and other challengers." Team Red, MediaTek and Qualcomm could be next in the procession—it is claimed that unspecified next-gen EPYC server chips are due in 3 nm form.

NVIDIA Blackwell GB100 Die Could Use MCM Packaging

NVIDIA's upcoming Blackwell GPU architecture, expected to succeed the current Ada Lovelace architecture, is gearing up to make some significant changes. While we don't have any microarchitectural leaks, rumors are circulating that Blackwell will have different packaging and die structures. One of the most intriguing aspects of the upcoming Blackwell is the mention of a Multi-Chip Module (MCM) design for the GB100 data-center GPU. This advanced packaging approach allows different GPU components to exist on separate dies, providing NVIDIA with more flexibility in chip customization. This could mean that NVIDIA can more easily tailor its chips to meet the specific needs of various consumer and enterprise applications, potentially gaining a competitive edge against rivals like AMD.

While Blackwell's release is still a few years away, these early tidbits paint a picture of an architecture that isn't just an incremental improvement, but could represent a more significant shift in how NVIDIA designs its GPUs. NVIDIA's potential competitor is AMD's upcoming MI300 GPU, which utilizes chiplets in its design. Chiplets also ease integration: smaller dies yield better on the wafer, so it makes economic sense to split a large GPU into several smaller dies.
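To make the yield argument concrete, here is a minimal sketch using the simple Poisson defect model, yield = exp(-defect_density × die_area). The defect density and die areas below are hypothetical values chosen for illustration, not NVIDIA or TSMC figures.

```python
import math

# Minimal sketch of why smaller dies yield better, using the simple Poisson defect model:
#   yield = exp(-defect_density * die_area)
# Defect density and die areas are illustrative assumptions, not foundry or NVIDIA data.

DEFECT_DENSITY_PER_MM2 = 0.001   # assumed: 0.1 defects per square centimeter

def poisson_yield(die_area_mm2: float, d0: float = DEFECT_DENSITY_PER_MM2) -> float:
    """Fraction of dies expected to be defect-free."""
    return math.exp(-d0 * die_area_mm2)

monolithic_mm2 = 800                      # hypothetical large monolithic GPU die
chiplet_areas_mm2 = [400, 100, 100, 100]  # hypothetical split into compute + smaller dies

print(f"Monolithic {monolithic_mm2} mm^2 die yield: {poisson_yield(monolithic_mm2):.1%}")
for area in sorted(set(chiplet_areas_mm2)):
    print(f"Chiplet {area} mm^2 die yield: {poisson_yield(area):.1%}")
```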

AMD "Navi 4C" GPU Detailed: Shader Engines are their own Chiplets

"Navi 4C" is a future high-end GPU from AMD that will likely not see the light of day, as the company is pivoting away from the high-end GPU segment with its next RDNA4 generation. For AMD to continue investing in the development of this GPU, the gaming graphics card segment should have posted better sales, especially in the high-end, which it didn't. Moore's Law is Dead scored details of what could have been a fascinating technological endeavor for AMD, in building a highly disaggregated GPU.

AMD's current "Navi 31" GPU sees a disaggregation of the main logic components of the GPU that benefit from the latest 5 nm foundry node to be located in a central Graphics Compute Die; surrounded by up to six little chiplets built on the older 6 nm foundry node, which contain segments of the GPU's Infinity Cache memory, and its memory interface—hence the name memory cache die. With "Navi 4C," AMD had intended to further disaggregate the GPU, identifying even more components on the GCD that can be spun out into chiplets; as well as breaking up the shader engines themselves into smaller self-contained chiplets (smaller dies == greater yields and lower foundry costs).

NVIDIA Blackwell Graphics Architecture GPU Codenames Revealed, AD104 Has No Successor

The next-generation GeForce RTX 50-series graphics cards will be powered by the Blackwell graphics architecture, named after American mathematician David Blackwell. kopite7kimi, a reliable source for NVIDIA leaks, revealed what the lineup of GPUs behind the series could look like. It will reportedly be led by the GB202, followed by the GB203, then the GB205 and GB206, with the GB207 at the entry level. What's surprising here is the lack of a "GB204" succeeding the AD104, GA104, TU104, and a long line of successful performance-segment GPUs from NVIDIA.

The GeForce Blackwell ASIC series begins with "GB" (GeForce Blackwell) followed by a 200-series number. The last time NVIDIA used a 200-series ASIC number for GeForce GPUs was with "Maxwell," as those GPUs ended up being built on a more advanced node, and with a few more advanced features, than what the architecture was originally conceived for. For "Blackwell," the GB202 logically succeeds the AD102, GA102, TU102, and a long line of "big chips" that have powered the company's flagship client graphics cards. The GB203 succeeds the AD103 as a high SIMD-count GPU with a narrower memory bus than the GB202, powering the #2 and #3 SKUs in the series. Curiously, there is no "GB204."