News Posts matching #2030


NVIDIA Shows Future AI Accelerator Design: Silicon Photonics and DRAM on Top of Compute

During the prestigious IEDM 2024 conference, NVIDIA presented its vision for future AI accelerator design, which it plans to pursue in coming accelerator generations. The limits of chip packaging and silicon innovation are already being stretched, and future AI accelerators may need additional vertical integration to deliver the required performance gains. The design proposed at IEDM 2024 puts silicon photonics (SiPh) at center stage: NVIDIA's architecture calls for 12 SiPh links for intra-chip and inter-chip communication, with three connections per GPU tile across four GPU tiles per tier. This marks a significant departure from traditional interconnects, which have long been limited by the physical properties of copper.

Perhaps the most striking aspect of NVIDIA's vision is the introduction of so-called "GPU tiers"—a novel approach that appears to stack GPU components vertically. This is complemented by an advanced 3D-stacked DRAM configuration featuring six memory units per tile, enabling fine-grained memory access and substantially improved bandwidth. The stacked DRAM would have a direct electrical connection to the GPU tiles, mimicking AMD's 3D V-Cache on a larger scale. However, the timeline for implementation reflects the significant technological hurdles that must be overcome. The scale-up of silicon photonics manufacturing presents a particular challenge, with NVIDIA needing the capacity to produce over one million SiPh connections monthly to make the design commercially viable. NVIDIA has invested in Lightmatter, which builds photonic packages for scaling compute, so some form of its technology could end up in future NVIDIA accelerators.

Germany Readies €2 Billion in New Semiconductor Subsidy Package

Germany is set to invest €2 billion in the semiconductor industry after recent setbacks, according to TrendForce, via Liberty Times citing Bloomberg. The German government's new funding responds to the chip sector's recent problems, including Intel's delay of its Magdeburg factory and global disruptions in the semiconductor supply chain. The investment will support 10 to 15 projects, spanning wafer production to chip assembly, to strengthen Germany's and Europe's microelectronics ecosystem. This is in line with the European Chips Act, which aims to raise the EU's share of global chip production capacity to 20% by 2030.

Intel's delayed €30 billion Magdeburg factory and other cancelled chip projects from Wolfspeed and ZF Friedrichshafen AG have created uncertainty in the German market. The Ministry of Economic Affairs is now calling for new funding applications, with up to €3 billion available. The timing of the semiconductor investment follows the global supply chain disruptions caused by the pandemic and the increasing geopolitical tensions between the US, China, and Taiwan. Germany is following a broader trend of governments investing in local semiconductor production to increase technological independence and economic resilience. The funding is subject to budget reallocation with the new government after the February 2025 elections. In the first round of subsidies under the European Chips Act, Germany allocated resources to two key initiatives: Intel's investment and a collaborative project between Infineon and TSMC in Dresden.

Japan Plans to Invest $65 Billion to Boost Its Chip Industry

Japan has proposed a plan worth $65 billion or more to strengthen the country's semiconductor and AI industries through grants and financial support by fiscal year 2030. The government plans to present the proposal at the next parliamentary session. The draft includes support for mass production of next-generation chips, focusing on AI chipmakers such as Rapidus; the government estimates an economic impact of about 160 trillion yen from the investment. Rapidus plans to start mass production of advanced chips in Hokkaido in 2027 and will work with IBM and the Belgian research organization Imec.

According to the report from Reuters, Prime Minister Shigeru Ishiba said the government would not issue deficit-financing bonds to fund the support plan, although specific financial details are not yet known. The new initiative builds on last year's 2 trillion yen investment in the chip industry, and it is part of a broader economic package. Expected to be approved by the Cabinet on November 22, the plan calls for combined public and private investment in the semiconductor industry of more than 50 trillion yen over the next decade.

Samsung Plans 400-Layer V-NAND for 2026 and DRAM Technology Advancements by 2027

Samsung is currently mass-producing its 9th-generation V-NAND flash memory chips with 286 layers, unveiled this April. According to the Korean Economic Daily, the company is targeting V-NAND memory chips with at least 400 stacked layers by 2026. In 2013, Samsung became the first company to introduce V-NAND chips with vertically stacked memory cells to maximize capacity. However, stacking beyond 300 layers has proven to be a real challenge, with the memory chips frequently getting damaged. To address this problem, Samsung is reportedly developing an improved 10th-generation V-NAND that will use Bonding Vertical (BV) NAND technology: the storage cells and peripheral circuits are manufactured on separate wafers before being bonded vertically. This is a major shift from the current Cell-on-Periphery (CoP) approach, which builds the periphery beneath the cell array on the same wafer. Samsung states that the new method will increase bit density per unit area by 1.6 times (a 60% gain), in turn enabling higher data speeds.

Samsung's roadmap is ambitious: the company plans to launch its 11th-generation NAND in 2027 with an estimated 50% improvement in I/O rates, followed by 1,000-layer NAND chips by 2030. Its competitor SK hynix is also working on 400-layer NAND, aiming to have the technology ready for mass production by the end of 2025, as we previously reported in August. Samsung, the current HBM market leader with a 36.9% market share, also has plans for its DRAM business: it intends to introduce its sixth-generation 10 nm-class DRAM (1c DRAM) in the first half of 2025, followed by the seventh-generation 1d DRAM (still in the 10 nm class) in 2026. By 2027 the company hopes to release its first sub-10 nm DRAM generation (0a DRAM), which will use a Vertical Channel Transistor (VCT) 3D structure similar to what NAND flash utilizes.

Japanese Scientists Develop Less Complex EUV Scanners, Significantly Cutting Costs of Chip Development

Japanese professor Tsumoru Shintake of the Okinawa Institute of Science and Technology (OIST) has unveiled a revolutionary extreme ultraviolet (EUV) lithography technology that promises to significantly push down semiconductor manufacturing costs. The new technology tackles two previously insurmountable issues in EUV lithography. First, it introduces a streamlined optical projection system using only two mirrors, a dramatic simplification from the conventional six or more. Second, it employs a novel "dual line field" method to efficiently direct EUV light onto the photomask without obstructing the optical path. Prof. Shintake's design offers substantial advantages over current EUV lithography machines. It can operate with smaller EUV light sources, consuming less than one-tenth of the power required by conventional systems. This reduction in energy consumption also reduces operating expenses (OpEx), which are usually high in semiconductor manufacturing facilities.

The simplified two-mirror design also promises improved stability and maintainability. While traditional EUV systems often require over 1 megawatt of power, the OIST design can achieve comparable results with just 100 kilowatts. Despite its simplicity, the system maintains high contrast and reduces mask 3D effects, which is crucial for attaining nanometer-scale precision in semiconductor production. OIST has filed a patent application for the technology, with plans for practical implementation through demonstration experiments. The global EUV lithography market is projected to grow from $8.9 billion in 2024 to $17.4 billion by 2030, by which point most leading-edge nodes are expected to use EUV scanners. For comparison, a single ASML EUV scanner can cost up to $380 million before operating expenses, which are themselves substantial because of the power consumed by the high-energy EUV light source. Conventional EUV scanners also lose about 40% of the light at each mirror, so only around 1% of the source light ultimately reaches the silicon wafer, all while the tool draws over one megawatt of power. With the proposed low-cost EUV system, more than 10% of the energy makes it to the wafer, the machine is expected to draw less than 100 kilowatts, and it would carry a price of under $100 million, roughly a third of ASML's flagship.
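As a rough illustration of why the mirror count matters so much, here is a minimal throughput sketch in Python. It assumes a uniform 40% loss per mirror reflection (the figure cited above) and ignores every other loss in the system (mask reflectivity, pellicle, resist absorption), so the numbers are purely illustrative rather than measured values.

```python
# Rough optical-throughput comparison: a conventional ~6-mirror EUV projection
# system vs. the proposed 2-mirror OIST design. Assumes a uniform 40% loss per
# mirror (the figure cited above) and ignores all other losses, so the results
# are illustrative only.

PER_MIRROR_TRANSMISSION = 0.60  # 40% of the light is lost at each mirror


def optics_throughput(mirror_count: int) -> float:
    """Fraction of the source light that survives the projection mirrors."""
    return PER_MIRROR_TRANSMISSION ** mirror_count


for mirrors in (6, 2):
    print(f"{mirrors} mirrors: ~{optics_throughput(mirrors):.1%} of the light survives")

# 6 mirrors: ~4.7% of the light survives
# 2 mirrors: ~36.0% of the light survives
```

The remaining losses (source collection, mask, resist) explain why the end-to-end figures are lower still: roughly 1% of the source light reaching the wafer for conventional scanners versus more than 10% for the proposed design.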

ASML Unveils Plans for Next-Generation "Hyper-NA" Extreme Ultraviolet Lithography

ASML, the world's sole provider of the extreme ultraviolet (EUV) lithography systems essential for manufacturing the most advanced chips, has revealed its roadmap for pushing semiconductor scaling even further. In a recent presentation, former ASML president Martin van den Brink announced the company's plans for a new "Hyper-NA" EUV technology that would succeed the High-NA EUV systems just now being deployed. The Hyper-NA tools, still in early research stages, would increase the numerical aperture from High-NA's 0.55 to 0.75, enabling chips with transistor densities beyond the projected limits of High-NA in the early 2030s. This higher numerical aperture should reduce reliance on the multi-patterning techniques that add complexity and cost.
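The resolution gain from raising the numerical aperture follows the standard Rayleigh criterion, CD = k1 x λ / NA. The short sketch below plugs in the 13.5 nm EUV wavelength and a representative k1 of 0.33 for the three NA generations mentioned above; the k1 value is our assumption for illustration, not an ASML-quoted figure.

```python
# Minimum printable half-pitch under the Rayleigh criterion: CD = k1 * wavelength / NA.
# The k1 factor of 0.33 is a representative assumption; real values depend on the
# resist, illumination scheme, and process, so treat the results as rough estimates.

EUV_WAVELENGTH_NM = 13.5
K1 = 0.33

for label, na in (("0.33 NA (standard EUV)", 0.33),
                  ("0.55 High-NA", 0.55),
                  ("0.75 Hyper-NA", 0.75)):
    half_pitch_nm = K1 * EUV_WAVELENGTH_NM / na
    print(f"{label}: ~{half_pitch_nm:.1f} nm half-pitch")

# 0.33 NA (standard EUV): ~13.5 nm half-pitch
# 0.55 High-NA: ~8.1 nm half-pitch
# 0.75 Hyper-NA: ~5.9 nm half-pitch
```

Under these assumptions, the jump from 0.55 to 0.75 buys roughly a further 27% reduction in minimum half-pitch, which is what pushes single-exposure patterning beyond High-NA's limits.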

Hyper-NA brings commercialization challenges of its own. Key obstacles include light polarization effects that degrade imaging contrast, requiring polarization filters that reduce light throughput, and resist layers that may need to become thinner to maintain resolution. While leading EUV chipmakers like TSMC can likely extend scaling for several more nodes using multi-patterning with existing 0.33 NA EUV tools, Intel has adopted 0.55 High-NA to avoid those complexities. Hyper-NA will likely become essential across the industry later this decade as High-NA's physical limits are reached. Beyond Hyper-NA, few alternative patterning options exist besides expensive multi-beam electron lithography, which lacks the throughput of EUV photolithography. To continue classical scaling, the industry may eventually need to transition to new channel materials with superior electron mobility compared to silicon, requiring novel deposition and etch capabilities.

Intel 14A Node Delivers 15% Improvement over 18A, 14A-E Adds Another 5%

Intel is revamping its foundry business and is set on becoming a strong contender to rivals such as TSMC and Samsung. Under Pat Gelsinger's leadership, Intel recently split its operations (organizationally, within the same company) into Intel Products and Intel Foundry. During the SPIE 2024 conference for optics and photonics, Anne Kelleher, Intel's senior vice president, revealed that the 14A (1.4 nm) process offers a 15% performance-per-watt improvement over the company's 18A (1.8 nm) process. Additionally, the enhanced 14A-E process, a modest refresh, boasts a further 5% performance boost over the regular 14A node. Intel's 14A process is set to be the first to utilize High-NA extreme ultraviolet (EUV) equipment, delivering a 20% increase in transistor logic density compared to the 18A node.
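Intel has not published a combined figure for 14A-E versus 18A, but if the 5% gain is measured against the base 14A node, as the wording suggests, a simple compounding estimate (our assumption, for illustration) looks like this:

```python
# Back-of-the-envelope compounding of Intel's quoted gains, assuming the 14A-E
# figure is relative to 14A rather than to 18A. Intel has not published a
# combined number, so this is illustrative only.

gain_14a_over_18a = 1.15    # 15% performance-per-watt uplift, 14A vs. 18A
gain_14ae_over_14a = 1.05   # further 5% uplift, 14A-E vs. 14A

combined = gain_14a_over_18a * gain_14ae_over_14a
print(f"Implied 14A-E vs. 18A uplift: ~{(combined - 1) * 100:.0f}%")
# Implied 14A-E vs. 18A uplift: ~21%
```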

The company's aggressive pursuit of next-generation processes poses a significant threat to Samsung Electronics, which currently holds the second position in the foundry market. As part of its IDM 2.0 strategy, Intel hopes to reclaim its position as a leading foundry player and surpass Samsung by 2030. The company's collaboration with American companies, such as Microsoft, further solidifies its ambitions. Intel has already secured a $15 billion chip production contract with Microsoft for its 1.8 nm 18A process. The semiconductor industry is closely monitoring Intel's progress, as the company's advancements in process technology could potentially reshape the competitive landscape. With Samsung planning to mass-produce 2 nm process products next year, the race for dominance in the foundry market is heating up.

ASML High-NA EUV Twinscan EXE Machines Cost $380 Million, 10-20 Units Already Booked

ASML has revealed that its cutting-edge High-NA extreme ultraviolet (EUV) chipmaking tools, called High-NA Twinscan EXE, will cost around $380 million each—over twice as much as its existing Low-NA EUV lithography systems, which cost about $183 million. The company has taken 10-20 initial orders from the likes of Intel and SK hynix and plans to manufacture 20 High-NA systems annually by 2028 to meet demand. High-NA EUV represents a major step forward, enabling an 8 nm imprint resolution compared to 13 nm with current Low-NA EUV tools. This allows chipmakers to produce transistors that are nearly 1.7 times smaller, translating to roughly a threefold increase in transistor density. Attaining this level of precision is critical for manufacturing sub-3 nm chips, an industry goal for 2025-2026, and it also eliminates the need for the complex double-patterning techniques currently required.
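The quoted density gain is essentially geometry: shrinking the printable feature size scales linear dimensions by the resolution ratio, and transistor density by roughly the square of that. A minimal sketch of the arithmetic, treating the resolution figures above as the only inputs (real designs will not scale this cleanly):

```python
# Relating the quoted resolution improvement to the claimed density gain.
# Density is assumed to scale with the square of the linear shrink factor,
# which is an idealization; real layouts scale less cleanly.

HIGH_NA_RESOLUTION_NM = 8.0

for low_na_nm in (13.0, 13.5):  # the article cites 13 nm; 13.5 nm is also commonly quoted
    shrink = low_na_nm / HIGH_NA_RESOLUTION_NM
    print(f"{low_na_nm} nm -> 8 nm: ~{shrink:.2f}x linear shrink, ~{shrink ** 2:.1f}x density")

# 13.0 nm -> 8 nm: ~1.62x linear shrink, ~2.6x density
# 13.5 nm -> 8 nm: ~1.69x linear shrink, ~2.8x density
```

The second case lands close to the "nearly 1.7 times smaller" and roughly threefold density figures quoted above.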

However, superior performance comes at a cost, literally and figuratively. The hefty $380 million price tag for each High-NA system introduces financial challenges for chipmakers. Additionally, the larger High-NA tools require completely reconfiguring chip fabrication facilities, and their halved imaging field necessitates rethinking chip designs. As a result, adoption timelines differ across companies: Intel intends to deploy High-NA EUV at its advanced 1.8 nm (18A) node, while TSMC is taking a more conservative approach and may not implement the technology until around 2030, as its nodes are already developing well and on schedule. Interestingly, installing ASML's 150,000-kilogram High-NA Twinscan EXE system required 250 crates, 250 engineers, and six months to complete. So, production is every bit as complex as the installation and operation of this delicate machinery.

TSMC Allegedly Not Rushing into Adoption of High-NA EUV Machinery

DigiTimes Asia has reached out to insiders at fabrication toolmakers in an effort to dig deeper into claims made by industry analysts at the start of 2024—both SemiAnalysis and China Renaissance have proposed that TSMC is unlikely to adopt High-NA EUV production techniques within a five-year period. The latest article explores a non-upgrade approach for the next couple of years: "TSMC has not placed orders for high-numerical aperture (High-NA) extreme ultraviolet (EUV) tools and is unlikely to use the technology in 2 nm and 1.4 nm (A14) process manufacturing." Intel Foundry Services (IFS) will be one of the first semiconductor manufacturers to bring ASML's latest and greatest machinery online, although no firm timeframes have been confirmed. Team Blue's Taiwanese rival (and occasional business partner) is seemingly happy with its existing infrastructure, but industry watchers suggest that cost considerations are the key factor behind TSMC's cautious planning for the next decade.

The DigiTimes insider sources believe that TSMC will not budge until at least 2029, possibly coinciding with a 1 nm-class production node—analysts at China Renaissance reckon that High-NA EUV machines could be delivered once facilities are readied for the process codenamed "A10." TSMC published a very ambitious "transistor count" product timeline in early January, with the first "1 nm" products supposedly targeted for a 2030 rollout, though this schedule could change due to unforeseen circumstances. Intel is expected to phase in its most advanced ASML gear once the 18A process matures—Tom's Hardware considers 2026-2027 a feasible timeframe.

AI Power Consumption Surge Strains US Electricity Grid, Coal-Powered Plants Make a Comeback

The artificial intelligence boom is driving a sharp rise in electricity use across the United States, catching utilities and regulators off guard. In northern Virginia's "data center alley," demand is so high that the local utility temporarily halted new data center connections in 2022. Nationwide, electricity consumption at data centers alone could triple by 2030, reaching 390 terawatt-hours (TWh). Add in new electric vehicle battery factories, chip plants, and other clean-tech manufacturing spurred by federal incentives, and demand over the next five years is forecast to rise at 1.5% annually, the fastest rate since the 1990s. Unable to keep pace, some utilities are scrambling to revise projections and reconsider earlier plans to close fossil fuel plants, even as the Biden administration pushes for more renewable energy. Some older coal-fired power plants will stay online until the grid adds more generation capacity. The result could be increased emissions in the near term and a risk of rolling blackouts if infrastructure keeps lagging behind demand.
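To put the tripling in perspective, here is a rough sketch of the implied growth rate. The ~130 TWh baseline (simply 390 TWh divided by three) and the 2023 start year are our assumptions for illustration; the article does not state them.

```python
# Implied average annual growth if US data-center electricity consumption
# triples to 390 TWh by 2030. The ~130 TWh baseline (390 / 3) and the 2023
# start year are assumptions for illustration, not figures from the report.

baseline_twh = 390 / 3      # ~130 TWh today (assumed)
target_twh = 390
years = 2030 - 2023         # 7-year horizon (assumed)

cagr = (target_twh / baseline_twh) ** (1 / years) - 1
print(f"Implied data-center growth: ~{cagr:.0%} per year")
# Implied data-center growth: ~17% per year
```

Set against the roughly 1.5% annual growth forecast for overall demand, this is why data centers dominate utilities' planning headaches.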

The situation is especially dire in Virginia, the world's largest data center hub. The state's largest utility, Dominion Energy, was forced to pause new data center connections for three months last year due to surging demand in Loudoun County. Though connections have resumed, Dominion expects load growth to almost double over the next 15 years. With data centers, EV factories, and other power-hungry tech continuing rapid expansion, experts warn the US national electricity grid is poorly equipped to handle the spike. Substantial investments in new transmission lines and generation are urgently needed to avoid businesses being turned away or blackouts in some regions. Though many tech companies aim to power operations with clean energy, factories are increasingly open to any available power source.

TSMC Plans to Put a Trillion Transistors on a Single Package by 2030

During the recent IEDM conference, TSMC previewed its process roadmap for delivering next-generation chip packages packing over one trillion transistors by 2030, aligning with similar long-term visions from Intel. Such enormous transistor counts will come through advanced 3D packaging of multiple chiplets. But TSMC also aims to push monolithic chip complexity higher, ultimately enabling 200-billion-transistor designs on a single die. This requires steady enhancement of TSMC's planned N2, N2P, N1.4, and N1 nodes, which are slated to arrive between now and the end of the decade. While multi-chiplet architectures are currently gaining favor, TSMC asserts that both packaging density and raw transistor density must scale in tandem. For perspective on the magnitude of TSMC's goals, consider NVIDIA's GH100 GPU: at 80 billion transistors, it is among today's largest chips, excluding wafer-scale designs from Cerebras.

Yet TSMC's roadmap calls for more than doubling that, first with monolithic designs exceeding 100 billion transistors, and eventually 200 billion. Of course, yields become more challenging as die sizes grow, which is where advanced packaging of smaller chiplets becomes crucial. Multi-chip modules like AMD's MI300X and Intel's Ponte Vecchio already integrate dozens of tiles, with Ponte Vecchio comprising 47 of them. TSMC envisions expanding this approach to chip packages housing more than a trillion transistors via its CoWoS, InFO, 3D stacking, and other packaging technologies. While the scaling cadence has recently slowed, TSMC remains confident it can achieve both the packaging and process breakthroughs needed to meet future density demands, and the foundry's continuous investment ensures progress in unlocking next-generation semiconductor capabilities. But physics ultimately dictates timelines, no matter how aggressive the roadmap.
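As a rough sense of scale, the sketch below shows how many large compute dies a one-trillion-transistor package implies at the monolithic complexities mentioned above. It is simple division that ignores I/O dies, memory stacks, and packaging overhead, so treat it as illustrative only.

```python
# How many large compute dies a one-trillion-transistor package implies at the
# monolithic die complexities mentioned above. Simple division that ignores
# I/O dies, stacked memory, and packaging overhead; illustrative only.

import math

PACKAGE_TARGET = 1_000_000_000_000  # one trillion transistors per package

for die_transistors in (80e9, 100e9, 200e9):  # GH100-class, then TSMC's monolithic targets
    dies_needed = math.ceil(PACKAGE_TARGET / die_transistors)
    print(f"{die_transistors / 1e9:.0f}B-transistor dies needed: {dies_needed}")

# 80B-transistor dies needed: 13
# 100B-transistor dies needed: 10
# 200B-transistor dies needed: 5
```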

Fujitsu Details Monaka: 150-core Armv9 CPU for AI and Data Center

Ever since creating the A64FX for the Fugaku supercomputer, Fujitsu has been plotting a next-generation CPU design for accelerating AI and general-purpose HPC workloads in the data center. Codenamed Monaka, the CPU targets TSMC's 2 nm semiconductor manufacturing node. Based on the Armv9-A ISA, it will feature up to 150 cores with Scalable Vector Extension 2 (SVE2), allowing it to process a wide variety of vector data sets in parallel. Using a 3D chiplet design, the 150 cores will be split across multiple dies and placed alongside SRAM and an I/O controller. The width of the SVE2 implementation has not been disclosed.

The CPU is designed to support DDR5 memory and PCIe 6.0 connectivity for attaching storage and other accelerators. To provide cache coherency with application-specific accelerators, CXL 3.0 is supported as well. Monaka is planned to arrive in Fujitsu's fiscal year 2027 (Japanese fiscal years run from April through March). The CPU will reportedly use air cooling, indicating the design aims for power efficiency. It is also important to note that Monaka is not the processor that will power the post-Fugaku supercomputer; that system will use a post-Monaka design, likely iterating on and refining Monaka's design principles ahead of the post-Fugaku machine's scheduled 2030 launch. Below are the slides from Fujitsu's presentation, in Japanese, which highlight the design goals of the CPU.

US Government Announces $42 Billion Fund for Universal Access to High-Speed Broadband

The US government yesterday revealed its $42.45 billion Broadband Equity, Access, and Deployment (BEAD) funding program, which aims to deliver reliable, affordable high-speed internet to everyone in the nation by 2030, across all fifty states and US territories. Evidently, parts of the country are lacking in online access infrastructure—the briefing room statement outlines some of these issues: "High-speed internet is no longer a luxury - it is necessary for Americans to do their jobs, to participate equally in school, access health care, and to stay connected with family and friends. Yet, more than 8.5 million households and small businesses are in areas where there is no high-speed internet infrastructure, and millions more struggle with limited or unreliable internet options."

The initiative is said to be "the largest internet funding announcement in history," with the White House readying allocations ranging from $27 million up to a maximum of $3.3 billion per state or territory, based on the required level of upgrades. Assistant Secretary of Commerce for Communications and Information Alan Davidson stated: "This is a watershed moment for millions of people across America who lack access to a high-speed Internet connection. Access to Internet service is necessary for work, education, healthcare, and more...States can now plan their Internet access grant programs with confidence and engage with communities to ensure this money is spent where it is most needed."