News Posts matching #HBM4

NVIDIA's Next-Gen "Rubin" AI GPU Development 6 Months Ahead of Schedule: Report

The "Rubin" architecture succeeds NVIDIA's current "Blackwell," which powers the company's AI GPUs, as well as the upcoming GeForce RTX 50-series gaming GPUs. NVIDIA will likely not build gaming GPUs with "Rubin," just like it didn't with "Hopper," and for the most part, "Volta." NVIDIA's AI GPU product roadmap put out at SC'24 puts "Blackwell" firmly in charge of the company's AI GPU product stack throughout 2025, with "Rubin" only succeeding it in the following year, for a two-year run in the market, being capped off with a "Rubin Ultra" larger GPU slated for 2027. A new report by United Daily News (UDN), a Taiwan-based publication, says that the development of "Rubin" is running 6 months ahead of schedule.

Being 6 months ahead of schedule doesn't necessarily mean that the product will launch sooner. Rather, it gives NVIDIA headroom to have "Rubin" more thoroughly evaluated by the industry, to make last-minute changes to the product if needed, or even to pull the launch forward if it chooses. The first AI GPU powered by "Rubin" will feature 8-high HBM4 memory stacks. The company will also introduce the "Vera" CPU, the long-awaited successor to "Grace." It will also introduce the X1600 InfiniBand/Ethernet network processor. According to the SC'24 roadmap by NVIDIA, these three would've seen a 2026 launch. Then in 2027, the company would follow up with an even larger AI GPU based on the same "Rubin" architecture, codenamed "Rubin Ultra." This features 12-high HBM4 stacks. NVIDIA's current GB200 "Blackwell" is a tile-based GPU, with two dies that have full cache-coherence. "Rubin" is rumored to feature four tiles.

SK Hynix Shifts to 3nm Process for Its HBM4 Base Die in 2025

SK Hynix plans to produce its 6th generation high-bandwidth memory chips (HBM4) using TSMC's 3 nm process, a change from initial plans to use the 5 nm technology. The Korea Economic Daily reports that these chips will be delivered to NVIDIA in the second half of 2025. NVIDIA's GPU products are currently based on 4 nm HBM chips. The HBM4 prototype chip SK Hynix launched in March features vertical stacking on a 3 nm base die. Compared to a 5 nm base die, the new 3 nm-based HBM chip is expected to offer a 20-30% performance improvement. However, SK Hynix's general-purpose HBM4 and HBM4E chips will continue to use the 12 nm process in collaboration with TSMC.

While SK Hynix's fifth-generation HBM3E chips used its own base die technology, the company has chosen TSMC's 3 nm technology for HBM4. This decision is anticipated to significantly widen the performance gap with competitor Samsung Electronics, which plans to manufacture its HBM4 chips using the 4 nm process. SK hynix currently leads the global HBM market with almost 50% market share, with most of its HBM output delivered to NVIDIA.

NVIDIA CEO Jensen Huang Asks SK hynix to Speed Up HBM4 Delivery by Six Months

SK hynix announced the first 48 GB 16-high HBM3E in the industry at the SK AI Summit in Seoul today. During the event, news came out about newer plans to develop their next-gen memory tech. Reuters and ZDNet Korea reported that NVIDIA CEO Jensen Huang asked SK hynix to speed up their HBM4 delivery by six months. SK Group Chairman Chey Tae-won shared this info at the Summit. The company had earlier said they would give HBM4 chips to customers in the second half of 2025.

When ZDNet asked about the accelerated timeline, SK hynix President Kwak Noh-Jung gave a careful answer, saying "We will give it a try." A company spokesperson told Reuters that this new schedule would be quicker than first planned, but didn't share further details. In a video interview shown at the Summit, NVIDIA's Jensen Huang highlighted the strong partnership between the two companies. He said working with SK hynix has helped NVIDIA go beyond Moore's Law performance gains, and stressed that NVIDIA will keep needing SK hynix's HBM tech for future products. SK hynix plans to supply the latest 12-layer HBM3E to an undisclosed customer this year, and will start sampling 16-layer HBM3E early next year.

HBM5 20hi Stack to Adopt Hybrid Bonding Technology, Potentially Transforming Business Models

TrendForce reports that the focus on HBM products in the DRAM industry is increasingly turning attention toward advanced packaging technologies like hybrid bonding. Major HBM manufacturers are considering whether to adopt hybrid bonding for HBM4 16hi stack products but have confirmed plans to implement this technology in the HBM5 20hi stack generation.

Hybrid bonding offers several advantages when compared to the more widely used micro-bumping. Since it does not require bumps, it allows for more stacked layers and can accommodate thicker chips that help address warpage. Hybrid-bonded chips also benefit from faster data transmission and improved heat dissipation.

Rambus Announces Industry-First HBM4 Controller IP to Accelerate Next-Generation AI Workloads

Rambus Inc., a premier chip and silicon IP provider making data faster and safer, today announced the industry's first HBM4 Memory Controller IP, extending its market leadership in HBM IP with broad ecosystem support. This new solution supports the advanced feature set of HBM4 devices, and will enable designers to address the demanding memory bandwidth requirements of next-generation AI accelerators and graphics processing units (GPUs).

"With Large Language Models (LLMs) now exceeding a trillion parameters and continuing to grow, overcoming bottlenecks in memory bandwidth and capacity is mission-critical to meeting the real-time performance requirements of AI training and inference," said Neeraj Paliwal, SVP and general manager of Silicon IP, at Rambus. "As the leading silicon IP provider for AI 2.0, we are bringing the industry's first HBM4 Controller IP solution to the market to help our customers unlock breakthrough performance in their state-of-the-art processors and accelerators."

Micron Announces 12-high HBM3E Memory, Bringing 36 GB Capacity and 1.2 TB/s Bandwidth

As AI workloads continue to evolve and expand, memory bandwidth and capacity are increasingly critical for system performance. The latest GPUs in the industry need the highest performance high bandwidth memory (HBM), significant memory capacity, as well as improved power efficiency. Micron is at the forefront of memory innovation to meet these needs and is now shipping production-capable HBM3E 12-high to key industry partners for qualification across the AI ecosystem.

Micron's industry-leading HBM3E 12-high 36 GB delivers significantly lower power consumption than competitors' 8-high 24 GB offerings, despite packing 50% more DRAM capacity.
Micron HBM3E 12-high boasts an impressive 36 GB capacity, a 50% increase over current HBM3E 8-high offerings, allowing larger AI models like Llama 2 with 70 billion parameters to run on a single processor. This capacity increase enables faster time to insight by avoiding CPU offload and GPU-GPU communication delays. Micron HBM3E 12-high 36 GB offers more than 1.2 terabytes per second (TB/s) of memory bandwidth at a pin speed greater than 9.2 gigabits per second (Gb/s). These combined advantages give Micron HBM3E maximum throughput at the lowest power consumption, ensuring optimal outcomes for power-hungry data centers. Additionally, Micron HBM3E 12-high incorporates fully programmable MBIST that can run system-representative traffic at full spec speed, providing improved test coverage for expedited validation, faster time to market, and enhanced system reliability.
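The stated capacity and bandwidth figures can be sanity-checked with simple arithmetic. A minimal sketch, assuming 24 Gb DRAM dies and the JEDEC-standard 1024-bit per-stack interface (neither figure appears in the announcement itself):

```python
# Back-of-the-envelope check of the Micron HBM3E 12-high figures above.
# Assumed (not stated in the article): 24 Gb per DRAM die and the
# JEDEC-standard 1024-bit interface per HBM3E stack.

DIE_DENSITY_GBIT = 24    # assumed per-die density
STACK_HEIGHT = 12        # "12-high"
BUS_WIDTH_BITS = 1024    # standard HBM3E stack interface (assumption)
PIN_SPEED_GBPS = 9.2     # per-pin rate quoted in the article

capacity_gb = DIE_DENSITY_GBIT * STACK_HEIGHT / 8           # gigabits -> gigabytes
bandwidth_tbs = BUS_WIDTH_BITS * PIN_SPEED_GBPS / 8 / 1000  # Gb/s -> TB/s

print(f"Stack capacity:  {capacity_gb:.0f} GB")      # -> 36 GB
print(f"Stack bandwidth: {bandwidth_tbs:.2f} TB/s")  # -> ~1.18 TB/s; pin speeds above 9.2 Gb/s push this past 1.2 TB/s
```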

TSMC's Next-Gen AI Packaging: 12 HBM4 and A16 Chiplets by 2027

During the Semicon Taiwan 2024 summit, TSMC VP of Advanced Packaging Technology Jun He spoke about the importance of merging AI chip memory and logic chips using 3D IC technology. He predicted that by 2030 the worldwide semiconductor industry would hit the $1 trillion milestone, with HPC and AI accounting for 40 percent of the market. In 2027, TSMC will introduce 2.5D CoWoS technology that combines eight A16-process chiplets with 12 HBM4 stacks. AI processors that use this technology will not only be much cheaper to produce but will also give engineers greater flexibility, with the option to write new code for them instead of redesigning the silicon. Manufacturers are cutting the cost of SoC and HBM architectural conversion and mass production to nearly one-fourth of current levels.
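For scale, pairing 12 HBM4 stacks in one package multiplies the per-stack bandwidth TSMC quotes in its HBM4 base-die announcement further down this page; a rough sketch, treating 2 TB/s as a lower bound:

```python
# Aggregate memory bandwidth implied by 12 HBM4 stacks per package,
# using the >2 TB/s per-stack figure from TSMC's HBM4 base-die item below.
STACKS = 12
PER_STACK_TBS = 2.0  # lower bound

print(f"Aggregate package bandwidth: >{STACKS * PER_STACK_TBS:.0f} TB/s")  # -> >24 TB/s
```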

Nevertheless, scaling up 3D IC production capacity remains the main challenge, as chip size and manufacturing complexity are decisive factors. Larger chips with more chiplets deliver higher performance, but they also make the process more complicated and bring greater risks of misalignment, breakage, and extraction failure.

JEDEC Approaches Finalization of HBM4 Standard, Eyes Future Innovations

JEDEC Solid State Technology Association, the global leader in the development of standards for the microelectronics industry, today announced it is nearing completion of the next version of its highly anticipated High Bandwidth Memory (HBM) DRAM standard: HBM4. Designed as an evolutionary step beyond the currently published HBM3 standard, HBM4 aims to further enhance data processing rates while maintaining essential features such as higher bandwidth, lower power consumption, and increased capacity per die and/or stack. These advancements are vital for applications that require efficient handling of large datasets and complex calculations, including generative artificial intelligence (AI), high-performance computing, high-end graphics cards, and servers.

HBM4 is set to introduce a doubled channel count per stack compared to HBM3, with a larger physical footprint. To support device compatibility, the standard ensures that a single controller can work with both HBM3 and HBM4 if needed. Different configurations will require different interposers to accommodate the differing footprints. HBM4 will specify 24 Gb and 32 Gb layers, with options for 4-high, 8-high, 12-high, and 16-high TSV stacks. The committee has reached initial agreement on speed bins up to 6.4 Gbps, with discussion ongoing for higher frequencies.
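The layer densities and stack heights above pin down the possible per-stack capacities. A quick sketch, borrowing the 2048-bit HBM4 interface width reported in the TSMC and SK hynix items elsewhere on this page (it is not part of the JEDEC note itself):

```python
# Per-stack capacities implied by the draft HBM4 options above, plus the
# peak bandwidth at the initially agreed 6.4 Gbps speed bin. The 2048-bit
# interface width is taken from other reports on this page.

LAYER_DENSITIES_GBIT = (24, 32)
STACK_HEIGHTS = (4, 8, 12, 16)
BUS_WIDTH_BITS = 2048
PIN_SPEED_GBPS = 6.4

for density in LAYER_DENSITIES_GBIT:
    row = ", ".join(f"{h}-high: {density * h // 8} GB" for h in STACK_HEIGHTS)
    print(f"{density} Gb layers -> {row}")

print(f"Peak per-stack bandwidth: {BUS_WIDTH_BITS * PIN_SPEED_GBPS / 8 / 1000:.2f} TB/s")  # -> ~1.64 TB/s
```

The 16-high rows reproduce the 48 GB and 64 GB stack capacities TSMC cites further down the page.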

Details Revealed About SK Hynix HBM4E with Computing and Caching Features Integrated Directly

SK Hynix, the leader in HBM3E memory, has now shared more details about HBM4E. According to fresh reports from Wccftech and ET News, SK Hynix plans to build an HBM memory type that integrates multiple functions, such as computing, caching, and network memory, within the same package, setting it apart from competitors. The idea is still in the early stages, but SK Hynix has started gathering the design information it needs to support its goals. The reports say that SK Hynix wants to lay the groundwork for a versatile HBM with its upcoming HBM4 design. The company reportedly plans to include an on-board memory controller, which will enable new computing abilities in its 7th-generation HBM4E memory.

By using SK Hynix's method, everything will be unified as a single unit. This will not only make data transfer faster because there is less space between parts, but it will also make it more energy-efficient. Back in April, SK Hynix announced that it has been working with TSMC to produce the next generation of HBM and improve how logic chips and HBM work together through advanced packaging. In late May, SK Hynix disclosed yield details for HBM3E for the first time, reporting that it has cut the time needed for mass production of HBM3E chips by 50% while closing in on its target yield of 80%. The company plans to keep developing HBM4, which is expected to start mass production in 2026.

HBM3e Production Surge Expected to Make Up 35% of Advanced Process Wafer Input by End of 2024

TrendForce reports that the three largest DRAM suppliers are increasing wafer input for advanced processes. Following a rise in memory contract prices, companies have boosted their capital investments, with capacity expansion focusing on the second half of this year. It is expected that wafer input for 1-alpha nm and above processes will account for approximately 40% of total DRAM wafer input by the end of the year.

HBM production will be prioritized due to its profitability and increasing demand. However, limited yields of around 50-60% and a wafer area 60% larger than DRAM products mean a higher proportion of wafer input is required. Based on the TSV capacity of each company, HBM is expected to account for 35% of advanced process wafer input by the end of this year, with the remaining wafer capacity used for LPDDR5(X) and DDR5 products.
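A minimal sketch of why those numbers translate into outsized wafer consumption, assuming a ~90% yield baseline for standard DRAM purely for comparison (that baseline is not from the report):

```python
# Relative wafer input per good bit, given ~50-60% HBM yield and a die
# area ~60% larger than a standard DRAM product of equivalent density.

def wafer_input_per_good_bit(relative_area: float, yield_rate: float) -> float:
    return relative_area / yield_rate

baseline = wafer_input_per_good_bit(relative_area=1.0, yield_rate=0.90)  # assumed standard-DRAM baseline
hbm = wafer_input_per_good_bit(relative_area=1.6, yield_rate=0.55)       # midpoint of 50-60%

print(f"HBM needs ~{hbm / baseline:.1f}x the wafer input per good bit")  # -> ~2.6x
```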

TSMC Unveils Next-Generation HBM4 Base Dies, Built on 12 nm and 5 nm Nodes

During the European Technology Symposium 2024, TSMC announced its readiness to manufacture next-generation HBM4 base dies using both 12 nm and 5 nm nodes. This significant development is expected to substantially improve the performance, power consumption, and logic density of HBM4 memory, catering to the demands of high-performance computing (HPC) and artificial intelligence (AI) applications. The shift from a traditional 1024-bit interface to an ultra-wide 2048-bit interface is a key aspect of the new HBM4 standard. This change will enable the integration of more logic and higher performance while reducing power consumption. TSMC's N12FFC+ and N5 processes will be used to produce these base dies, with the N12FFC+ process offering a cost-effective solution for achieving HBM4 performance and the N5 process providing even more logic and lower power consumption at HBM4 speeds.

The company is collaborating with major HBM memory partners, including Micron, Samsung, and SK Hynix, to integrate advanced nodes for HBM4 full-stack integration. TSMC's base die, fabricated using the N12FFC+ process, will be used to install HBM4 memory stacks on a silicon interposer alongside system-on-chips (SoCs). This setup will enable the creation of 12-Hi (48 GB) and 16-Hi (64 GB) stacks with per-stack bandwidth exceeding 2 TB/s. TSMC's collaboration with EDA partners like Cadence, Synopsys, and Ansys ensures the integrity of HBM4 channel signals, thermal accuracy, and electromagnetic interference (EMI) in the new HBM4 base dies. TSMC is also optimizing CoWoS-L and CoWoS-R for HBM4 integration, and massive high-performance chips are already adopting this technology as it readies for volume manufacturing.
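The >2 TB/s per-stack figure implies a specific per-pin data rate on the 2048-bit interface; a quick solve:

```python
# Per-pin data rate needed to exceed 2 TB/s over a 2048-bit HBM4 interface.
BUS_WIDTH_BITS = 2048
TARGET_TBS = 2.0

pin_gbps = TARGET_TBS * 1000 * 8 / BUS_WIDTH_BITS  # TB/s -> Gb/s, spread across the bus
print(f"Required pin speed: {pin_gbps:.2f} Gb/s")  # -> ~7.81 Gb/s per pin
```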

SK hynix CEO Says HBM from 2025 Production Almost Sold Out

SK hynix held a press conference unveiling its vision and strategy for the AI era today at its headquarters in Icheon, Gyeonggi Province, to share the details of its investment plans for the M15X fab in Cheongju and the Yongin Semiconductor Cluster in Korea and the advanced packaging facilities in Indiana, U.S.

The event, hosted by Chief Executive Officer Kwak Noh-Jung three years before the May 2027 completion of the first fab in the Yongin Cluster, was attended by key executives including the Head of AI Infra Justin (Ju-Seon) Kim, Head of DRAM Development Kim Jonghwan, Head of the N-S Committee Ahn Hyun, Head of Manufacturing Technology Kim Yeongsik, Head of Package & Test Choi Woojin, Head of Corporate Strategy & Planning Ryu Byung Hoon, and the Chief Financial Officer Kim Woo Hyun.

SK hynix Collaborates with TSMC on HBM4 Chip Packaging

SK hynix Inc. announced today that it has recently signed a memorandum of understanding with TSMC to collaborate on producing next-generation HBM and enhancing logic-HBM integration through advanced packaging technology. Through this initiative, the company plans to proceed with development of HBM4, the sixth generation of the HBM family, slated for mass production from 2026.

SK hynix said the collaboration between the global leader in the AI memory space and TSMC, a top global logic foundry, will lead to more innovations in HBM technology. The collaboration is also expected to enable breakthroughs in memory performance through trilateral collaboration between product design, foundry, and memory provider. The two companies will first focus on improving the performance of the base die that is mounted at the very bottom of the HBM package. HBM is made by stacking a core DRAM die on top of a base die that features TSV technology, and vertically connecting a fixed number of layers in the DRAM stack to the core die with TSV into an HBM package. The base die located at the bottom is connected to the GPU, which controls the HBM.

JEDEC Agrees to Relax HBM4 Package Thickness

JEDEC is currently presiding over standards for 6th generation high bandwidth memory (AKA HBM4)—the 12 and 16-layer DRAM designs are expected to reach mass production status in 2026. According to a ZDNET South Korea report, involved manufacturers are deliberating over HBM4 package thicknesses—allegedly, decision makers have settled on 775 micrometers (μm). This is thicker than the previous generation's measurement of 720 micrometers (μm). Samsung Electronics, SK Hynix and Micron are exploring "hybrid bonding," a new packaging technology—where chips and wafers are bonded directly to each other. Hybrid bonding is expected to be quite expensive to implement, so memory makers are carefully considering whether HBM4 warrants its usage.

ZDNET believes that JEDEC's agreement—settling on 775 micrometers (μm) for 12-layer and 16-layer stacked HBM4—could have: "a significant impact on the future packaging investment trends of major memory manufacturers. These companies have been preparing a new packaging technology, hybrid bonding, keeping in mind the possibility that the package thickness of HBM4 will be limited to 720 micrometers. However, if the package thickness is adjusted to 775 micrometers, 16-layer DRAM stacking HBM4 can be sufficiently implemented using existing bonding technology." A revised schedule could delay the rollout of hybrid bonding—perhaps pushed back to coincide with a launch of seventh generation HBM. The report posits that Samsung Electronics, SK Hynix and Micron memory engineers are about to focus on the upgrading of existing bonding technologies.
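To see why the extra 55 μm matters for 16-high stacks, here is an illustrative thickness budget; the base-die-plus-mold overhead is a hypothetical value chosen only to show the arithmetic, not a figure from the report:

```python
# Per-layer thickness budget for a 16-high HBM4 stack under both package limits.
STACK_HEIGHT = 16
OVERHEAD_UM = 100  # hypothetical: base die plus top mold/lid allowance

for limit_um in (720, 775):
    per_layer = (limit_um - OVERHEAD_UM) / STACK_HEIGHT
    print(f"{limit_um} um package -> ~{per_layer:.1f} um per DRAM layer (die + bond line)")
# -> ~38.8 um vs ~42.2 um: the relaxed limit buys enough bond-line headroom
# for existing micro-bump bonding to reach 16 layers.
```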

TSMC & SK Hynix Reportedly Form Strategic AI Alliance, Jointly Developing HBM4

Last week SK Hynix revealed ambitious plans for its next wave of High Bandwidth Memory (HBM) products—their SEMICON Korea 2024 presentation included an announcement about cutting-edge HBM3E entering mass production within the first quarter of this year. True next-gen HBM development has already kicked off—TPU's previous report outlines an HBM4 sampling phase in 2025, followed by full production in 2026. South Korea's Pulse News believes that TSMC has been roped into a joint venture (with SK Hynix). An alleged "One Team" strategic alliance has been formed according to reports emerging from Asia—this joint effort could focus on the development of HBM4 solutions for AI fields.

Reports from last November pointed to a possible SK Hynix and NVIDIA HBM4 partnership, with TSMC involved as the designated fabricator. We are not sure if the emerging "One Team" progressive partnership will have any impact on previously agreed upon deals, but South Korean news outlets reckon that the TSMC + SK Hynix alliance will attempt to outdo Samsung's development of "new-generation AI semiconductor packaging." Team Green's upcoming roster of AI GPUs, the "Hopper" H200 and "Blackwell" B100, is linked to a massive pre-paid shipment of SK Hynix HBM3E parts. HBM4 products could be fitted on a second iteration of NVIDIA's Blackwell GPU, and the mysterious "Vera Rubin" family. Notorious silicon industry tipster kopite7kimi believes that "R100 and GR200" GPUs are next up in Team Green's AI-cruncher queue.

SK Hynix Targets HBM3E Launch This Year, HBM4 by 2026

SK Hynix has unveiled ambitious High Bandwidth Memory (HBM) roadmaps at SEMICON Korea 2024. Vice President Kim Chun-hwan announced plans to mass produce the cutting-edge HBM3E within the first half of 2024, touting 8-layer stack samples already supplied to clients. This iteration makes major strides towards fulfilling surging data bandwidth demands, offering 1.2 TB/s per stack and 7.2 TB/s in a 6-stack configuration. VP Kim Chun-hwan cites the rapid emergence of generative AI, forecast to grow at a 35% CAGR, as a key driver. He warns that "fierce survival competition" lies ahead across the semiconductor industry amidst rising customer expectations. With limits approaching on conventional process node shrinks, attention is shifting to next-generation memory architectures and materials to unleash performance.

SK Hynix has already initiated HBM4 development for sampling in 2025 and mass production the following year. According to Micron, HBM4 will leverage a wider 2048-bit interface compared to previous HBM generations to increase per-stack theoretical peak memory bandwidth to over 1.5 TB/s. To achieve these high bandwidths while maintaining reasonable power consumption, HBM4 is targeting a data transfer rate of around 6 GT/s. The wider interface and 6 GT/s speeds allow HBM4 to push bandwidth boundaries significantly compared to prior HBM versions, serving the needs of high-performance computing and AI workloads, while power efficiency is carefully balanced by avoiding impractically high transfer rates. Additionally, Samsung is aligned on a similar 2025/2026 timeline. Beyond pushing bandwidth boundaries, custom HBM solutions will become increasingly crucial. Samsung executive Jaejune Kim reveals that over half its HBM volume already comprises specialized products. Further tailoring HBM4 to individual client needs through logic integration presents an opportunity to cement leadership. As AI workloads evolve at breakneck speeds, memory innovation must keep pace. With HBM3E prepping for launch and HBM4 in the pipeline, SK Hynix and Samsung are gearing up for the challenges ahead.
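The per-stack numbers in these two items follow from interface width times pin rate. A short sketch; the 1024-bit HBM3E width is an assumption based on the JEDEC standard, not something stated in this article:

```python
def stack_bandwidth_tbs(bus_width_bits: int, pin_gbps: float) -> float:
    """Peak per-stack bandwidth: bus width x pin rate, bits -> bytes, Gb -> TB."""
    return bus_width_bits * pin_gbps / 8 / 1000

# HBM3E: 1.2 TB/s per stack implies ~9.4 Gb/s pins on a 1024-bit bus...
print(1.2 * 8000 / 1024)               # -> 9.375 Gb/s
print(6 * 1.2)                         # ...and six stacks give the quoted 7.2 TB/s total
# HBM4: the 2048-bit interface at ~6 GT/s
print(stack_bandwidth_tbs(2048, 6.0))  # -> 1.536 TB/s, i.e. "over 1.5 TB/s"
```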

SK hynix Reports Financial Results for 2023, 4Q23

SK hynix Inc. announced today that it recorded an operating profit of 346 billion won in the fourth quarter of last year amid a recovery of the memory chip market, marking the first quarter of profit following four straight quarters of losses. The company posted revenues of 11.31 trillion won, operating profit of 346 billion won (operating profit margin at 3%), and net loss of 1.38 trillion won (net profit margin at negative 12%) for the three months ended December 31, 2023. (Based on K-IFRS)

SK hynix said that the overall memory market conditions improved in the last quarter of 2023 with demand for AI server and mobile applications increasing and average selling price (ASP) rising. "We recorded the first quarterly profit in a year following efforts to focus on profitability," it said. The financial results of the last quarter helped narrow the operating loss for the entire year to 7.73 trillion won (operating profit margin at negative 24%) and net loss to 9.14 trillion won (with net profit margin at negative 28%). The revenues were 32.77 trillion won.
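The quoted margins follow directly from the won figures; a quick arithmetic check:

```python
# Margin arithmetic behind SK hynix's 4Q23 and FY2023 results (KRW).
rev_q, op_q, net_q = 11.31e12, 0.346e12, -1.38e12
rev_y, op_y, net_y = 32.77e12, -7.73e12, -9.14e12

print(f"4Q23 operating margin: {op_q / rev_q:+.0%}")   # -> ~ +3%
print(f"4Q23 net margin:       {net_q / rev_q:+.0%}")  # -> ~ -12%
print(f"FY23 operating margin: {op_y / rev_y:+.0%}")   # -> ~ -24%
print(f"FY23 net margin:       {net_y / rev_y:+.0%}")  # -> ~ -28%
```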

HBM Industry Revenue Could Double by 2025 - Growth Driven by Next-gen AI GPUs Cited

Samsung, SK hynix, and Micron are considered to be the top manufacturing sources of High Bandwidth Memory (HBM)—the HBM3 and HBM3E standards are becoming increasingly in demand, due to a widespread deployment of GPUs and accelerators by generative AI companies. Taiwan's Commercial Times reports that there is an ongoing shortage of HBM components—but this presents a growth opportunity for smaller manufacturers in the region. Naturally, the big-name producers are expected to dive in head first with the development of next generation models. The aforementioned financial news article cites research conducted by the Gartner group—they predict that the HBM market will hit an all-time high of $4.976 billion (USD) by 2025.

This estimate is almost double that of projected revenues (just over $2 billion) generated by the HBM market in 2023—the explosive growth of generative AI applications has "boosted" demand for the most performant memory standards. The Commercial Times report states that SK Hynix is the current HBM3E leader, with Micron and Samsung trailing behind—industry experts believe that stragglers will need to "expand HBM production capacity" in order to stay competitive. SK Hynix has shacked up with NVIDIA—the GH200 Grace Hopper platform was unveiled last summer; outfitted with the South Korean firm's HBM3e parts. In a similar timeframe, Samsung was named as AMD's preferred supplier of HBM3 packages—as featured within the recently launched Instinct MI300X accelerator. NVIDIA's HBM3E deal with SK Hynix is believed to extend to the internal makeup of Blackwell GB100 data-center GPUs. The HBM4 memory standard is expected to be the next major battleground for the industry's hardest hitters.
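Gartner's forecast implies a steep compound growth rate; a one-liner to make the doubling concrete:

```python
# Implied annual growth from ~$2.0B (2023) to $4.976B (2025).
cagr = (4.976 / 2.0) ** (1 / 2) - 1
print(f"Implied 2023->2025 CAGR: {cagr:.0%}")  # -> ~58% per year
```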

Manufacturers Anticipate Completion of NVIDIA's HBM3e Verification by 1Q24; HBM4 Expected to Launch in 2026

TrendForce's latest research into the HBM market indicates that NVIDIA plans to diversify its HBM suppliers for more robust and efficient supply chain management. Samsung's HBM3 (24 GB) is anticipated to complete verification with NVIDIA by December this year. The progress of HBM3e, as outlined in the timeline below, shows that Micron provided its 8hi (24 GB) samples to NVIDIA by the end of July, SK hynix in mid-August, and Samsung in early October.

Given the intricacy of the HBM verification process—estimated to take two quarters—TrendForce expects that some manufacturers might learn preliminary HBM3e results by the end of 2023. However, it's generally anticipated that major manufacturers will have definite results by 1Q24. Notably, the outcomes will influence NVIDIA's procurement decisions for 2024, as final evaluations are still underway.

Samsung Notes: HBM4 Memory is Coming in 2025 with New Assembly and Bonding Technology

According to an editorial blog post published on the Samsung blog by SangJoon Hwang, Executive Vice President and Head of the DRAM Product & Technology Team at Samsung Electronics, High-Bandwidth Memory 4 (HBM4) is coming in 2025. In the recent timeline of HBM development, HBM memory first appeared in 2015 with the AMD Radeon R9 Fury X. The second-generation HBM2 appeared with the NVIDIA Tesla P100 in 2016, and the third-generation HBM3 saw the light of day with the NVIDIA Hopper GH100 GPU in 2022. Currently, Samsung has developed 9.8 Gbps HBM3E memory, which will start sampling to customers soon.

However, Samsung is more ambitious with development timelines this time, and the company expects to announce HBM4 in 2025, possibly with commercial products in the same calendar year. Interestingly, the HBM4 memory will have some technology optimized for high thermal properties, such as non-conductive film (NCF) assembly and hybrid copper bonding (HCB). The NCF is a polymer layer that enhances the stability of micro bumps and TSVs in the chip, protecting the memory dies and their solder bumps from shock. Hybrid copper bonding is an advanced semiconductor packaging method that creates direct copper-to-copper connections between semiconductor components, enabling high-density, 3D-like packaging. It offers high I/O density, enhanced bandwidth, and improved power efficiency. It uses a copper layer as the conductor with an oxide insulator, instead of regular micro bumps, to achieve the connection density needed for HBM-like structures.