News Posts matching #TSMC

TSMC to Raise Wafer Prices by 10% in 2025, Customers Seemingly Agree

Taiwanese semiconductor giant TSMC is reportedly planning to increase its wafer prices by up to 10% in 2025, according to a Morgan Stanley note cited by investor Eric Jhonsa. The move comes as demand for cutting-edge processors in smartphones, PCs, AI accelerators, and HPC continues to surge. Industry insiders reveal that TSMC's state-of-the-art 4 nm and 5 nm nodes, used by AI and HPC customers such as AMD, NVIDIA, and Intel, could see up to 10% price hikes. This increase would push the cost of 4 nm-class wafers from $18,000 to approximately $20,000, representing a significant 25% rise since early 2021 for some clients and an 11% rise from the last price hike. Talks about price hikes with major smartphone manufacturers like Apple have proven challenging, but there are indications that modest price increases are being accepted across the industry. Morgan Stanley analysts project a 4% average selling price increase for 3 nm wafers in 2025, which are currently priced at $20,000 or more per wafer.
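As a quick sanity check on those figures, a 4 nm-class wafer going from $18,000 to $20,000 works out to roughly 11%, and the cited 25% rise since early 2021 implies a baseline of around $16,000 per wafer. The minimal Python sketch below runs the numbers; the 2021 baseline is an assumption back-calculated from the article's own percentages, not a reported price.

```python
# Sanity check of the wafer-price percentages cited above. The early-2021
# baseline is an assumption inferred from the "25% rise since early 2021"
# claim; the other two figures come straight from the report.
price_2021 = 16_000        # assumed early-2021 price per 4 nm/5 nm-class wafer (USD)
price_before_hike = 18_000
price_after_hike = 20_000

hike_vs_last = (price_after_hike - price_before_hike) / price_before_hike
rise_since_2021 = (price_after_hike - price_2021) / price_2021

print(f"Increase over the last price hike: {hike_vs_last:.1%}")    # ~11.1%
print(f"Increase since early 2021:         {rise_since_2021:.1%}")  # 25.0%
```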

Mature nodes like 16 nm are unlikely to see price increases due to sufficient capacity. However, TSMC is signaling potential shortages in leading-edge capacity to encourage customers to secure their allocations. Adding to the industry's challenges, advanced chip-on-wafer-on-substrate (CoWoS) packaging prices are expected to rise by 20% over the next two years, following previous increases in 2022 and 2023. TSMC aims to boost its gross margin to 53-54% by 2025, anticipating that customers will absorb these additional costs. The impact of these price hikes on end-user products remains uncertain. Competing foundries like Intel and Samsung may seize this opportunity to offer more competitive pricing, potentially prompting some chip designers to consider alternative manufacturing options. Additionally, TSMC's customers could reportedly be unable to secure their capacity allocation without "appreciating TSMC's value."

Intel Arc Xe2 "Battlemage" Discrete GPUs Made on TSMC 4 nm Process

Intel has reportedly chosen the TSMC 4 nm EUV foundry node for its next-generation Arc Xe2 discrete GPUs based on the "Battlemage" graphics architecture. This would mark a generational upgrade from the Arc "Alchemist" family, which Intel built on the TSMC 6 nm DUV process. The TSMC N4 node offers significant increases in transistor density, performance, and power efficiency over N6, allowing Intel to nearly double the Xe core count on its largest "Battlemage" variant. This, coupled with increased IPC, higher clock speeds, and other features, should make "Battlemage" competitive with today's AMD RDNA 3 and NVIDIA Ada gaming GPUs. Interestingly, TSMC N4 isn't the most advanced foundry node that Xe2 "Battlemage" is being built on: the iGPU powering Intel's Core Ultra 200V "Lunar Lake" processor is part of its compute tile, which Intel is building on the more advanced TSMC N3 (3 nm) node.

Demand from AMD and NVIDIA Drives FOPLP Development, Mass Production Expected in 2027-2028

In 2016, TSMC developed and named its InFO (Integrated Fan-Out) wafer-level packaging (FOWLP) technology, and applied it to the A10 processor used in the iPhone 7. TrendForce points out that since then, OSAT (outsourced semiconductor assembly and test) providers have been striving to develop FOWLP and fan-out panel-level packaging (FOPLP) technologies to offer more cost-effective packaging solutions.

Starting in the second quarter, chip companies like AMD have actively engaged with TSMC and OSAT providers to explore the use of FOPLP technology for chip packaging, helping to drive industry interest in FOPLP. TrendForce observes three main models for introducing FOPLP packaging technology: first, OSAT providers transitioning consumer IC packaging from traditional methods to FOPLP; second, foundries and OSAT providers moving the 2.5D packaging of AI GPUs from wafer level to panel level; and third, panel makers packaging consumer ICs.

Report: Only 10% of TSMC's Capacity will Come from Non-Taiwan Fabs

A recent report from Taiwan TV News has revealed that TSMC's overseas expansion plans will only contribute around 10% of the company's total silicon production capacity. TSMC's overseas expansion strategy has been a topic of significant interest in the tech industry as the company seeks to diversify its manufacturing capabilities beyond its home base in Taiwan. The company has announced plans to build new fabrication plants in the United States, Japan, and potentially other regions in an effort to mitigate supply chain risks and better serve its global customer base. However, according to the report, these overseas facilities will account for only around 10% of TSMC's overall production capacity.

The majority of the company's manufacturing will continue to be centered in Taiwan, where it maintains its most advanced and high-volume fabs. There are also significant challenges and investments required to establish new semiconductor manufacturing facilities overseas. Building a state-of-the-art fab can cost billions of dollars and take several years to complete, making it a complex and capital-intensive undertaking. Despite the relatively small contribution of its overseas facilities, TSMC's global expansion is still seen as a crucial step in diversifying its supply chain and mitigating geopolitical risks. The company's ability to maintain its technological leadership and meet the growing demand for advanced chips will be crucial in the years to come.

ByteDance and Broadcom to Collaborate on Advanced AI Chip

ByteDance, TikTok's parent company, is reportedly working with American chip designer Broadcom to develop a cutting-edge AI processor. This collaboration could secure a stable supply of high-performance chips for ByteDance, according to Reuters. Sources claim the joint project involves a 5 nm Application-Specific Integrated Circuit (ASIC), designed to comply with U.S. export regulations. TSMC is slated to manufacture the chip, though production is not expected to begin this year.

This partnership marks a significant development in U.S.-China tech relations, as no public announcements of such collaborations on advanced chips have been made since Washington implemented stricter export controls in 2022. For ByteDance, this move could reduce procurement costs and ensure a steady chip supply, crucial for powering its array of popular apps, including TikTok and the ChatGPT-like AI chatbot "Doubao." The company has already invested heavily in AI chips, reportedly spending $2 billion on NVIDIA processors in 2023.

AI Startup Etched Unveils Transformer ASIC Claiming 20x Speed-up Over NVIDIA H100

A new startup emerged out of stealth mode today to power the next generation of generative AI. Etched is a company that makes an application-specific integrated circuit (ASIC) to process "Transformers." The transformer is an architecture for designing deep learning models developed by Google, and is now the powerhouse behind models like OpenAI's GPT-4o in ChatGPT, Anthropic's Claude, Google's Gemini, and Meta's Llama family. Etched set out to create an ASIC that processes only transformer models, resulting in a chip called Sohu. The claim is that Sohu outperforms NVIDIA's latest and greatest by an entire order of magnitude: where a server with eight NVIDIA H100 GPUs pushes Llama-3 70B at 25,000 tokens per second, and the latest eight-GPU B200 "Blackwell" configuration pushes 43,000 tokens/s, an eight-chip Sohu server manages to output 500,000 tokens per second.

Why is this important? Not only does the ASIC outperform Hopper by 20x and Blackwell by more than 10x, but it also serves so many tokens per second that it enables an entirely new fleet of AI applications requiring real-time output. The Sohu architecture is so efficient that 90% of its FLOPS can be used, while traditional GPUs typically manage 30-40% FLOPS utilization. That gap translates into inefficiency and wasted power, which Etched hopes to solve by building an accelerator dedicated to running transformers (the "T" in GPT) at massive scale. Given that frontier model development costs more than one billion US dollars, and hardware costs are measured in tens of billions of US dollars, an accelerator dedicated to a specific application can help advance AI faster. AI researchers often say that "scale is all you need" (echoing the legendary "Attention Is All You Need" paper), and Etched wants to build on that.
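For reference, the headline multipliers follow directly from the quoted throughput figures; a minimal sketch of that arithmetic, using only the numbers stated above:

```python
# Speed-up factors implied by the quoted Llama-3 70B throughput figures
# (tokens per second for eight-chip server configurations).
throughput_tps = {
    "8x NVIDIA H100 (Hopper)":    25_000,
    "8x NVIDIA B200 (Blackwell)": 43_000,
    "8x Etched Sohu":            500_000,
}

sohu = throughput_tps["8x Etched Sohu"]
for system, tps in throughput_tps.items():
    print(f"{system}: {tps:>7,} tok/s -> Sohu advantage {sohu / tps:.1f}x")
# -> 20.0x over H100 and ~11.6x over B200, matching the claims above.
```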

TSMC Begins Experimenting with Rectangular Panel-Like Chip Packaging

TSMC is working on a new advanced chip packaging technology that uses rectangular panel-like substrates instead of the traditional circular wafers, according to a Nikkei report citing sources. This new approach would allow more chips to be placed on a single substrate. TSMC is reportedly experimenting with rectangular substrates measuring 515 mm by 510 mm, providing more than three times the usable area compared to current 12-inch wafers. Using a rectangular substrate can also reduce the number of incomplete dies that end up at the edges of today's circular wafers. While this may sound simple, it would require a radical change to the entire manufacturing process.
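The "more than three times the usable area" figure checks out from the reported dimensions; a rough sketch of the comparison follows (ignoring edge exclusion and dicing losses, which the report does not detail):

```python
import math

# Raw-area comparison between the rectangular panel TSMC is reportedly
# testing and a standard 300 mm (12-inch) circular wafer. Edge exclusion and
# dicing losses are ignored, so this is only an upper-bound comparison.
panel_area_mm2 = 515 * 510                     # 262,650 mm^2
wafer_area_mm2 = math.pi * (300 / 2) ** 2      # ~70,686 mm^2

print(f"Panel area: {panel_area_mm2:,} mm^2")
print(f"Wafer area: {wafer_area_mm2:,.0f} mm^2")
print(f"Ratio:      {panel_area_mm2 / wafer_area_mm2:.2f}x")   # ~3.7x
```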

While the research is still in its early stages and may take several years to reach mass production, it represents a major technological shift for TSMC. The company responded to Nikkei's inquiry by stating that it is closely monitoring advancements in advanced packaging technologies, including panel-level packaging. This development could potentially give TSMC an edge in meeting future chip demand; however, Intel and Samsung are also testing this approach.

TSMC Thinking About Raising Prices, NVIDIA's Jensen Fully Supports the Idea

NVIDIA CEO Jensen Huang said on June 5th that TSMC's stock price is too low, and that he agrees with new TSMC chairman C. C. Wei's view of TSMC's value. Jensen promised to support TSMC in charging more for its wafers and for its CoWoS advanced packaging. An article from TrendForce says that NVIDIA and TSMC will discuss chip prices for next year, which could help TSMC make more money. Jensen also said he's not too worried about geopolitical tensions because Taiwan has a strong supply chain; TSMC does more than just make chips, it also handles many supply chain issues.

Last year, many companies were waiting on TSMC's output, with ever-increasing demand and production issues causing delays. Even though things got a bit better this year, there's still not enough supply. TSMC says that even tripling its 3-nanometer output isn't enough, so it needs to expand further. NVIDIA's margins are very high, much higher than those of other companies like AMD and even TSMC. If TSMC raises prices for these advanced processes, it won't hurt NVIDIA's profits much, but it might lower profits for other companies like Apple, AMD, and Qualcomm. It will also have an impact on end-users.

TSMC Begins 3 nm Production for Intel's "Lunar Lake" and "Arrow Lake" Tiles

TSMC has commenced mass-production of chips for Intel on its 3 nm EUV FinFET foundry node, according to a report by Taiwan industry observer DigiTimes. Intel is using the TSMC 3 nm node for the compute tile of its upcoming Core Ultra 200V "Lunar Lake" processor. The company went into depth about "Lunar Lake" in its Computex 2024 presentation. While a disaggregated, chiplet-based processor like "Meteor Lake," the new "Lunar Lake" chip places the CPU cores, iGPU, NPU, and memory controllers on a single chiplet called the compute tile, built on the 3 nm node, while the SoC and I/O components are disaggregated onto the chip's only other chiplet, the SoC tile, which is built on the TSMC 6 nm node.

Intel hasn't gone into the nuts and bolts of "Arrow Lake," besides mentioning that the processor will feature the same "Lion Cove" P-cores and "Skymont" E-cores as "Lunar Lake," albeit arranged in a more familiar ring-bus configuration, where the E-core clusters share L3 cache with the P-cores (something that doesn't happen on "Lunar Lake"). "Arrow Lake" also features an iGPU based on the same Xe2 graphics architecture as "Lunar Lake," and will feature an NPU that meets Microsoft Copilot+ AI PC requirements. What remains a mystery about "Arrow Lake" is how Intel will go about organizing the various chiplets or tiles. Reports from February 2024 mentioned Intel tapping TSMC 3 nm for just the disaggregated graphics tile of "Arrow Lake," but we now know from "Lunar Lake" that Intel doesn't shy away from letting TSMC fabricate its CPU cores. The first notebooks powered by "Lunar Lake" are expected to hit shelves within Q3-2024, with "Arrow Lake" following in Q4.

ASML Unveils Plans for Next-Generation "Hyper-NA" Extreme Ultraviolet Lithography

ASML, the world's sole provider of extreme ultraviolet (EUV) lithography systems essential for manufacturing the most advanced chips, has revealed its roadmap for pushing semiconductor scaling even further. In a recent presentation, former ASML president Martin van den Brink announced the company's plans for a new "Hyper-NA" EUV technology that would succeed the High-NA EUV systems that are just beginning to be deployed. The Hyper-NA tools, still in early research stages, would increase the numerical aperture from High-NA's 0.55 to 0.75, enabling chips with transistor densities beyond the projected limits of High-NA in the early 2030s. This higher numerical aperture should reduce reliance on multi-patterning techniques that add complexity and cost.

Hyper-NA brings challenges of its own on the path to commercialization. Key obstacles include light polarization effects that degrade imaging contrast, requiring polarization filters that reduce light throughput. Resist materials may also need to become thinner to maintain resolution. While leading EUV chipmakers like TSMC can likely extend scaling for several more nodes using multi-patterning with existing 0.33 NA EUV tools, Intel has adopted 0.55 High-NA to avoid these complexities. But Hyper-NA will likely become essential across the industry later this decade as High-NA's physical limits are reached. Beyond Hyper-NA, few alternative patterning solutions exist besides expensive multi-beam electron lithography, which lacks the throughput of EUV photolithography. To continue classical scaling, the industry may eventually need to transition to new channel materials with electron mobility superior to silicon's, requiring novel deposition and etch capabilities.
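The resolution benefit of a larger numerical aperture follows from the Rayleigh criterion, CD = k1 * λ / NA. The sketch below illustrates the scaling across the three NA values mentioned above; the k1 value is an illustrative assumption rather than an ASML figure, so only the relative improvement is meaningful.

```python
# Single-exposure resolution scaling with numerical aperture, via the
# Rayleigh criterion CD = k1 * wavelength / NA. The k1 factor is an assumed,
# illustrative process factor; the absolute half-pitch numbers are indicative only.
wavelength_nm = 13.5   # EUV wavelength
k1 = 0.3               # assumed process factor

for label, na in [("0.33 NA EUV", 0.33), ("High-NA EUV (0.55)", 0.55), ("Hyper-NA EUV (0.75)", 0.75)]:
    cd = k1 * wavelength_nm / na
    print(f"{label}: ~{cd:.1f} nm half-pitch per exposure")
# Moving from 0.55 to 0.75 NA tightens the single-exposure limit by ~27%,
# which is what reduces the need for multi-patterning.
```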

Silicon Motion's SM2508 Set to Launch in Q4, Edging Out Phison as Top SSD Controller

Silicon Motion's SM2508 was first revealed in August last year at the Flash Memory Summit 2023, but after that things went pretty quiet. However, the company was demoing the SM2508 up and running at Computex this past week, and it's set to edge out Phison's E26 Max14um in the battle for the fastest NVMe SSD controller. We're not talking about any massive gains here, but the reference drive from Silicon Motion was shown running CrystalDiskMark 8.0.4 at the show, and in a rough comparison to a Phison E26 Max14um drive, the SM2508 beats Phison by about 800 MB/s in sequential read performance and 500 MB/s in sequential write performance.

This might not seem like a whole lot, but the SM2508 is built on TSMC's N6 node, which results in a 3.5 Watt peak power consumption, or 7 Watts for the entire SSD at load. A typical Phison E26 based SSD draws in excess of 11 Watts at full load, which is a big difference in a mobile device. This should also lead to lower thermals, and we should finally see PCIe 5.0 drives that don't need massive heatsinks or active cooling. In fact, 7 Watts is very similar to the power draw of Phison's E18-based PCIe 4.0 SSDs. Silicon Motion is still fine-tuning the firmware for the SM2508, so performance might yet improve to reach the promised 14 GB/s write performance. Currently the random performance is also looking a bit weak compared to Phison. According to Tom's Hardware, we should see the first drives with the Silicon Motion SM2508 appear in the market sometime in Q4 this year.

Taiwanese Chipmakers Expand Overseas to Capitalize on Geopolitical Shifts and De-Sinicization Benefits

On June 5th, Vanguard and NXP announced plans to jointly establish VisionPower Semiconductor Manufacturing Company (VSMC) in Singapore to build a 12-inch wafer plant. TrendForce posits that this move reflects the trend of global supply chains shifting "Out of China, Out of Taiwan" (OOC/OOT), with Taiwanese companies accelerating their overseas expansion to improve regional capacity flexibility and competitiveness.

TrendForce noted that the semiconductor supply chain has been diversifying over the past two years to mitigate geopolitical and pandemic-related risks, forming two major segments: China's domestic supply chain and a non-China supply chain. Recent US tariff increases have accelerated this shift, leading to increased orders from American customers.

AMD Wants to Tap Samsung Foundry for 3 nm GAAFET Process

According to a report by KED Global, Korean chipmaking giant Samsung is ramping up its efforts to compete with global giants like TSMC and Intel. The latest partnership on the horizon is AMD's collaboration with Samsung. AMD is planning to utilize Samsung's cutting-edge 3 nm technology for its future chips. More specifically, AMD wants to utilize Samsung's gate-all-around FETs (GAAFETs). During ITF World 2024, AMD CEO Lisa Su noted that the company intends to use 3 nm GAA transistors for its future products. The only company offering GAAFETs on a 3 nm process is Samsung. Hence, this report from KED gains more credibility.

While we don't have any official information, AMD's use of a second foundry as a manufacturing partner would be a first for the company in years. This strategic move signifies a shift towards dual-sourcing, aiming to diversify its supply chain and reduce dependency on a single manufacturer, previously TSMC. We still don't know which specific AMD products will use GAAFETs. AMD could use them for CPUs, GPUs, DPUs, FPGAs, and even data center accelerators like the Instinct MI series.

Blackwell Shipments Imminent, Total CoWoS Capacity Expected to Surge by Over 70% in 2025

TrendForce reports that NVIDIA's Hopper H100 began to see a reduction in shortages in 1Q24. The new H200 from the same platform is expected to gradually ramp in Q2, with the Blackwell platform entering the market in Q3 and expanding to data center customers in Q4. However, this year will still primarily focus on the Hopper platform, which includes the H100 and H200 product lines. The Blackwell platform—based on how far supply chain integration has progressed—is expected to start ramping up in Q4, accounting for less than 10% of the total high-end GPU market.

The die size of Blackwell platform chips like the B100 is twice that of the H100. As Blackwell becomes mainstream in 2025, the total capacity of TSMC's CoWoS is projected to grow by 150% in 2024 and by over 70% in 2025, with NVIDIA's demand occupying nearly half of this capacity. For HBM, the evolution of NVIDIA's GPU platforms sees the H100 primarily using 80 GB of HBM3, while the 2025 B200 will feature 288 GB of HBM3e, a 3- to 4-fold increase in capacity per chip. The expansion plans of the three major manufacturers indicate that HBM production volume will likely double by 2025.
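Taken together, those TrendForce figures compound as follows; a minimal sketch, assuming the CoWoS percentages are year-over-year growth rates measured against a 2023 baseline:

```python
# Back-of-the-envelope math for the TrendForce figures above.

# HBM capacity per GPU: H100 with 80 GB of HBM3 vs. B200 with 288 GB of HBM3e.
print(f"HBM per GPU: {288 / 80:.1f}x")   # 3.6x, i.e. the "3- to 4-fold" increase

# CoWoS capacity: +150% in 2024 and a further +70% in 2025, treated as
# year-over-year growth rates against a 2023 baseline (an assumption).
growth_vs_2023 = (1 + 1.50) * (1 + 0.70)
print(f"CoWoS capacity vs. 2023: {growth_vs_2023:.2f}x")   # 4.25x the 2023 baseline
```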

China Launches Massive $47.5 Billion "Big Fund" to Boost Domestic Chip Industry

Beijing has doubled down on its push for semiconductor self-sufficiency with the establishment of a new $47.5 billion investment fund to accelerate growth in the domestic chip sector. The fund, officially registered on May 24th under the name "China Integrated Circuit Industry Investment Fund Phase III", represents the largest of three state-backed vehicles aimed at cultivating China's semiconductor capabilities. The announcement comes as tensions over advanced chip technology continue to escalate between the U.S. and China. Over the past couple of years, Washington has steadily ratcheted up export controls on semiconductors to Beijing over national security concerns about potential military applications. These measures have lent new urgency to China's quest for self-sufficiency in chip design and manufacturing.

With a war chest of 344 billion yuan ($47.5 billion), the "Big Fund" dwarfs the combined capital of the first two semiconductor investment vehicles launched in 2014 and 2019. Officials have outlined a multipronged strategy targeting key bottlenecks, focusing on equipment for chip fabrication plants. The fund has bankrolled major projects such as flash memory maker Yangtze Memory Technologies and leading foundries like SMIC and Huahong. China's homegrown chip industry still needs to catch up to global leaders like Intel, Samsung, and TSMC. However, the immense scale of state-directed capital illustrates Beijing's unwavering commitment to developing a self-reliant supply chain for semiconductors—a technology viewed as indispensable for economic and military competitiveness. News of the "Big Fund" sent Chinese chip stocks surging over 3% on hopes of fresh financing tailwinds.

NVIDIA's Arm-based AI PC Processor Could Leverage Arm Cortex X5 CPU Cores and Blackwell Graphics

Last week, we got confirmation from the highest levels of Dell and NVIDIA that the latter is making a client PC processor for the Windows on Arm (WoA) AI PC ecosystem, which currently has only one player in it, Qualcomm. Michael Dell hinted that this NVIDIA AI PC processor would be ready in 2025. Since then, speculation has been rife about the various IP blocks NVIDIA could use in the development of this chip; the two key areas of debate have been the CPU cores and the process node.

Given that NVIDIA is gunning for a 2025 launch of its AI PC processor, the company could implement reference Arm IP CPU cores, such as the Arm Cortex X5 "Blackhawk," and not venture out toward developing its own CPU cores on the Arm machine architecture, unlike Apple. Depending on how the market receives its chips, NVIDIA could eventually develop its own cores. Next up, the company could use the most advanced 3 nm-class foundry node available in 2025 for its chip, such as TSMC N3P. Given that even Apple and Qualcomm will build their contemporary notebook chips on this node, it would be a logical choice for NVIDIA. Then there's the graphics and AI acceleration hardware.

Apple COO Meets with TSMC CEO to Reserve First Batch of 2 nm Allocation

Apple is locked in a fierce competition to stay ahead in the client AI applications race, and needs access to the latest foundry process at TSMC to build its future-generation SoCs on. The company's COO, Jeff Williams, reportedly paid a visit to TSMC CEO C.C. Wei to discuss Apple's allocation of the Taiwanese foundry's 2 nm-class silicon fabrication process for its next-generation M-series and A-series SoCs, powering its future generations of iPhones, iPads, and Macs. Taiwan-based industry observer Economic Daily, which broke this story, says that it isn't just an edge in performance and efficiency that Apple is after, but also leadership in generative AI and client AI applications. The company has reportedly invested over $100 billion in generative AI research and development over the past five years.

Apple's latest silicon, the M4 SoC, which debuted with the iPad Pro earlier this month, is built on TSMC's N3E (3 nm-class) node, and it's widely expected that the rest of the M4 line of SoCs for Macs, and the "A18," could be built on the same process, which would cover Apple for the rest of 2024, going into the first half of 2025. TSMC is expected to commence mass-production of chips on its 2 nm node in 2025, which is why Apple is in the TSMC boss's office to seek the first foundry allocation.

AMD to Present "Zen 5" Microarchitecture Deep-dive at Hot Chips 2024

AMD is slated to deliver a "Zen 5" microarchitecture deep-dive at the Hot Chips 2024 conference, on August 25. The company is widely expected to either unveil or announce its next-generation processors based on the architecture, in its 2024 Computex keynote on June 3, so it remains to be seen if the deep-dive follows a product launch, or predates it. Either way, Hot Chips talks tend to be significantly more detailed than the product launch pre-briefs that we get; and so we hope to learn a lot more about the architecture.

A lot rides on "Zen 5" continuing to deliver a double-digit percentage IPC increase over its predecessor, while also introducing new microarchitecture-level features and leveraging new foundry processes at TSMC, to deliver processors competitive with Intel's. Unlike Intel, which has implemented hybrid CPU cores across its product stack, AMD continues to make traditional multicore processors, and refuses to label even the chips that contain both regular and high-density versions of its "Zen 4" cores as "hybrid."

TSMC Unveils Next-Generation HBM4 Base Dies, Built on 12 nm and 5 nm Nodes

During the European Technology Symposium 2024, TSMC announced its readiness to manufacture next-generation HBM4 base dies using both 12 nm and 5 nm nodes. This significant development is expected to substantially improve the performance, power consumption, and logic density of HBM4 memory, catering to the demands of high-performance computing (HPC) and artificial intelligence (AI) applications. The shift from the traditional 1024-bit interface to an ultra-wide 2048-bit interface is a key aspect of the new HBM4 standard. This change will enable the integration of more logic and higher performance while reducing power consumption. TSMC's N12FFC+ and N5 processes will be used to produce these base dies, with the N12FFC+ process offering a cost-effective solution for achieving HBM4 performance and the N5 process providing even more logic and lower power consumption at HBM4 speeds.

The company is collaborating with major HBM memory partners, including Micron, Samsung, and SK Hynix, to integrate advanced nodes for HBM4 full-stack integration. TSMC's base die, fabricated using the N12FFC+ process, will be used to install HBM4 memory stacks on a silicon interposer alongside system-on-chips (SoCs). This setup will enable the creation of 12-Hi (48 GB) and 16-Hi (64 GB) stacks with per-stack bandwidth exceeding 2 TB/s. TSMC's collaboration with EDA partners like Cadence, Synopsys, and Ansys ensures the integrity of HBM4 channel signals, thermal accuracy, and electromagnetic interference (EMI) in the new HBM4 base dies. TSMC is also optimizing CoWoS-L and CoWoS-R for HBM4 integration, meaning that massive high-performance chips are already utilizing this technology and getting ready for volume manufacturing.
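The jump to a 2048-bit interface is what carries a single stack past 2 TB/s; a minimal sketch of the bandwidth and capacity math follows, with the per-pin data rate and DRAM die density as assumptions (HBM4 speeds and die capacities were not finalized at the time of the announcement).

```python
# HBM4 per-stack bandwidth and capacity estimates. The per-pin data rate and
# the 32 Gb (4 GB) die capacity are assumptions for illustration; only the
# 2048-bit width and the 12-Hi/16-Hi stack heights come from the article.
interface_width_bits = 2048
data_rate_gbps = 8.0      # assumed per-pin transfer rate

bandwidth_gbs = interface_width_bits * data_rate_gbps / 8   # bits -> bytes
print(f"Per-stack bandwidth: {bandwidth_gbs:.0f} GB/s (~{bandwidth_gbs / 1000:.1f} TB/s)")

die_capacity_gb = 4       # assumed 32 Gb DRAM die
for layers in (12, 16):
    print(f"{layers}-Hi stack: {layers * die_capacity_gb} GB")   # 48 GB and 64 GB
```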

NVIDIA "Blackwell" Successor Codenamed "Rubin," Coming in Late-2025

NVIDIA has barely started shipping its "Blackwell" line of AI GPUs, and its next-generation architecture is already on the horizon. Codenamed "Rubin," after astronomer Vera Rubin, the new architecture will power NVIDIA's future AI GPUs with generational jumps in performance, but more importantly, a design focus on lowering the power draw. This will become especially important as NVIDIA's current architectures already approach the kilowatt range and cannot scale boundlessly. TF International Securities analyst Ming-Chi Kuo says that NVIDIA's first AI GPU based on "Rubin," the R100 (not to be confused with an ATI GPU from many moons ago), is expected to enter mass-production in Q4-2025, which means it could be unveiled and demonstrated sooner than that, and select customers could get access to the silicon earlier for evaluation.

The R100, according to Ming-Chi Kuo, is expected to leverage TSMC's 3 nm EUV FinFET process, specifically the TSMC-N3 node. In comparison, the new "Blackwell" B100 uses the TSMC-N4P. This will be a chiplet GPU, using a 4x-reticle design compared to Blackwell's 3.3x-reticle design, and TSMC's CoWoS-L packaging, just like the B100. The silicon is expected to be among the first users of HBM4 stacked memory, featuring 8 stacks of an as-yet-unknown height. The Grace Rubin GR200 CPU+GPU combo could feature a refreshed "Grace" CPU built on the 3 nm node, likely an optical shrink meant to reduce power. A Q4-2025 mass-production target would mean that customers start receiving the chips by early 2026.

SK hynix Strengthens AI Memory Leadership & Partnership With Host at the TSMC 2024 Tech Symposium

SK hynix showcased its next-generation technologies and strengthened key partnerships at the TSMC 2024 Technology Symposium held in Santa Clara, California on April 24. At the event, the company displayed its industry-leading HBM AI memory solutions and highlighted its collaboration with TSMC involving the host's CoWoS advanced packaging technology.

TSMC, a global semiconductor foundry, invites its major partners to this annual conference in the first half of each year so they can share their new products and technologies. Attending the event under the slogan "Memory, the Power of AI," SK hynix received significant attention for presenting the industry's most powerful AI memory solution, HBM3E. The product has recently demonstrated industry-leading performance, achieving input/output (I/O) transfer speeds of up to 10 gigabits per second (Gbps) in an AI system during a performance validation evaluation.
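For context on what a 10 Gbps per-pin speed means for a single stack, the sketch below converts it into per-stack bandwidth, assuming the standard 1024-bit HBM interface width noted in the HBM4 item earlier in this roundup; the width is an assumption about the validated configuration, not a detail given in the announcement.

```python
# Per-stack bandwidth implied by a 10 Gbps per-pin HBM3E speed, assuming the
# standard 1024-bit HBM interface width (an assumption, not a detail from the
# announcement above).
interface_width_bits = 1024
pin_speed_gbps = 10.0

bandwidth_gbs = interface_width_bits * pin_speed_gbps / 8   # bits -> bytes
print(f"Per-stack bandwidth: {bandwidth_gbs:,.0f} GB/s")    # 1,280 GB/s
```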

TSMC Celebrates 30th North America Technology Symposium with Innovations Powering AI with Silicon Leadership

TSMC today unveiled its newest semiconductor process, advanced packaging, and 3D IC technologies for powering the next generation of AI innovations with silicon leadership at the Company's 2024 North America Technology Symposium. TSMC debuted the TSMC A16 technology, featuring leading nanosheet transistors with innovative backside power rail solution for production in 2026, bringing greatly improved logic density and performance. TSMC also introduced its System-on-Wafer (TSMC-SoW) technology, an innovative solution to bring revolutionary performance to the wafer level in addressing the future AI requirements for hyperscaler datacenters.

This year marks the 30th anniversary of TSMC's North America Technology Symposium, and more than 2,000 people attended the event, up from fewer than 100 attendees 30 years ago. The North America Technology Symposium in Santa Clara, California kicks off TSMC's Technology Symposiums around the world in the coming months. The symposium also features an "Innovation Zone," designed to highlight the technology achievements of TSMC's emerging start-up customers.

TSMC to Introduce Location Premium for Overseas Chip Production

During its Q1 earnings call, TSMC, one of the largest semiconductor manufacturers, unveiled a strategic move to charge a premium for chips manufactured at its newly established overseas fabrication plants. CEO C.C. Wei announced that the company will impose higher pricing for chips produced outside Taiwan to offset the higher operational costs associated with these international locations. This move aims to maintain TSMC's target gross margin of 53% amidst rising expenses such as inflation and elevated electricity costs. The decision comes as TSMC expands its global footprint with new facilities in the United States, Germany, and Japan (JASM) to meet the increasing demand for semiconductor chips worldwide. The company's new US-based Arizona facility, known as Fab 21, has faced delays due to equipment installation issues and labor negotiations.

Chips produced at this site, utilizing TSMC's advanced N5 and N4 nodes, could cost between 20% to 30% more than those manufactured in Taiwan. TSMC's strategy to manage the cost disparities across different geographic locations involves strategic pricing, securing government support, and leveraging its manufacturing technology leadership. This approach reflects the company's commitment to maintaining its competitive edge while navigating the complexities of global semiconductor manufacturing in today's fragmented market. Introducing a location premium is expected to impact American semiconductor designers, who may need to pass these costs on to specific market segments, particularly those with lower price sensitivity, such as government-related projects. Despite these challenges, TSMC's overseas expansion underscores its adaptive strategies in the face of global economic pressures and industry demands, ensuring its continued position as a leading player in the semiconductor industry.

SK hynix Collaborates with TSMC on HBM4 Chip Packaging

SK hynix Inc. announced today that it has recently signed a memorandum of understanding with TSMC for collaboration to produce next-generation HBM and enhance logic and HBM integration through advanced packaging technology. The company plans to proceed with the development of HBM4, or the sixth generation of the HBM family, slated to be mass-produced from 2026, through this initiative.

SK hynix said the collaboration between the global leader in the AI memory space and TSMC, a top global logic foundry, will lead to more innovations in HBM technology. The collaboration is also expected to enable breakthroughs in memory performance through trilateral collaboration between product design, foundry, and memory provider. The two companies will first focus on improving the performance of the base die that is mounted at the very bottom of the HBM package. HBM is made by stacking core DRAM dies on top of a base die that features TSV technology, and vertically connecting a fixed number of DRAM layers to the base die with TSVs to form an HBM package. The base die at the bottom is connected to the GPU, which controls the HBM.

Demand for NVIDIA's Blackwell Platform Expected to Boost TSMC's CoWoS Total Capacity by Over 150% in 2024

NVIDIA's next-gen Blackwell platform, which includes B-series GPUs and integrates NVIDIA's own Grace Arm CPU in models such as the GB200, represents a significant development. TrendForce points out that the GB200 and its predecessor, the GH200, both feature a combined CPU+GPU solution, primarily equipped with the NVIDIA Grace CPU and H200 GPU. However, the GH200 accounted for only approximately 5% of NVIDIA's high-end GPU shipments. The supply chain has high expectations for the GB200, with projections suggesting that its shipments could exceed a million units by 2025, potentially making up nearly 40 to 50% of NVIDIA's high-end GPU market.

Although NVIDIA plans to launch products such as the GB200 and B100 in the second half of this year, upstream wafer packaging will need to adopt more complex and high-precision CoWoS-L technology, making the validation and testing process time-consuming. Additionally, more time will be required to optimize the B-series for AI server systems in aspects such as network communication and cooling performance. It is anticipated that the GB200 and B100 products will not see significant production volumes until 4Q24 or 1Q25.