News Posts matching #N2

Fujitsu Previews Monaka: 144-Core Arm CPU Made with Chiplets

Fujitsu has previewed its next-generation Monaka processor, a 144-core powerhouse for data centers. Satoshi Matsuoka of the RIKEN Center for Computational Science showcased the mechanical sample on social media platform X. The Monaka processor is being developed in collaboration with Broadcom and employs an innovative 3.5D eXtreme Dimension System-in-Package architecture featuring four 36-core chiplets manufactured using TSMC's N2 process. These chiplets are stacked face-to-face with SRAM tiles through hybrid copper bonding, utilizing TSMC's N5 process for the cache layer. A distinguishing feature of the Monaka design is its approach to memory architecture. Rather than incorporating HBM, Fujitsu has opted for pure cache dies beneath the compute logic combined with DDR5 DRAM support, potentially leveraging advanced modules like MR-DIMM and MCR-DIMM.

The processor's I/O die supports cutting-edge interfaces, including DDR5 memory, PCIe 6.0, and CXL 3.0, for seamless integration with modern data center infrastructure. Security is handled by Armv9-A's Confidential Computing Architecture, which provides enhanced workload isolation. Fujitsu has set ambitious goals for the Monaka processor: the company aims to achieve twice the energy efficiency of current x86 processors by 2027 while remaining air-cooled. The processor targets both AI and HPC workloads through Arm SVE2 support, which enables vector lengths of up to 2,048 bits. Scheduled for release during Fujitsu's fiscal year 2027 (April 2026 to March 2027), the Monaka processor is shaping up as a competitor to AMD's EPYC and Intel's Xeon processors.
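A key property of SVE2 is that code is vector-length-agnostic: the same binary runs whether an implementation uses 128-bit or 2,048-bit vectors. The following minimal C sketch using the Arm C Language Extensions (ACLE) illustrates the idea; the function and the element-wise add are illustrative examples, not anything Fujitsu has published.

#include <arm_sve.h>
#include <stddef.h>
#include <stdint.h>

/* Vector-length-agnostic element-wise add using SVE/SVE2 intrinsics.
 * svcntw() reports how many 32-bit lanes the hardware vector holds, so the
 * same loop scales from 128-bit up to 2,048-bit vector implementations. */
void vla_add_f32(float *dst, const float *a, const float *b, size_t n)
{
    for (size_t i = 0; i < n; i += svcntw()) {
        /* Predicate keeps only the lanes that are still within bounds. */
        svbool_t pg = svwhilelt_b32((uint64_t)i, (uint64_t)n);
        svfloat32_t va = svld1_f32(pg, &a[i]);
        svfloat32_t vb = svld1_f32(pg, &b[i]);
        svst1_f32(pg, &dst[i], svadd_f32_x(pg, va, vb));
    }
}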

TSMC Boosts 2 nm Yields by 6%, Passing Savings to Customers

As the leading-edge semiconductor manufacturer, TSMC actively works on increasing the efficiency of its upcoming nodes, even once they are finalized and ready for high-volume manufacturing. According to a TSMC employee identified as Dr. Kim on X, recent test runs of the 2 nm N2 node show a 6% improvement in production yields compared to baseline expectations. This advancement could translate into substantial cost savings for the company's customers when mass production begins in late 2025. However, specific details about whether the gains were achieved on SRAM or logic test chips remain undisclosed. The timing is particularly noteworthy as TSMC prepares to launch its shuttle test wafer services for 2 nm technology in January. The N2 process represents a giant leap for TSMC, marking its first implementation of gate-all-around (GAA) nanosheet transistors and its first step away from the classical FinFET design.
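As a rough, purely illustrative calculation (assuming the 6% figure is a relative yield gain and that wafer price and die count per wafer stay constant), the cost per good die scales inversely with yield:

\[
\text{cost per good die} = \frac{\text{wafer cost}}{N_{\text{dies}} \cdot Y},
\qquad
\frac{C_{\text{new}}}{C_{\text{old}}} = \frac{Y}{1.06\,Y} \approx 0.943,
\]

so a 6% relative yield improvement would trim roughly 5-6% off the effective cost of each good die, which is the mechanism by which yield gains become customer savings.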

According to TSMC's projections, chips manufactured using the N2 process will consume 25-30% less power while maintaining the same transistor count and frequency as the N3E node. Additionally, the technology is expected to deliver 10-15% performance improvements and achieve a 15% increase in transistor density. A key innovation in the N2 process is the enhanced design of its GAA nanosheet transistors, which offer improved electrostatic control and reduced gate leakage compared to 3 nm FinFET transistors, since the gate surrounds the channel on all sides. This advancement enables smaller, high-density transistors to maintain reliable performance through better threshold voltage tuning. With approximately seven to eight months until full-scale volume production begins, the company has a substantial window to optimize the manufacturing process further and potentially achieve additional yield improvements, although such gains are less likely at this stage.

Synopsys and TSMC Pave the Path for Trillion-Transistor AI and Multi-Die Chip Design

Synopsys, Inc. today announced its continued, close collaboration with TSMC to deliver advanced EDA and IP solutions on TSMC's most advanced process and 3DFabric technologies to accelerate innovation for AI and multi-die designs. The relentless computational demands of AI applications require semiconductor technologies to keep pace. From an industry-leading AI-driven EDA suite powered by Synopsys.ai for enhanced productivity and silicon results, to complete solutions that facilitate the migration to 2.5D/3D multi-die architectures, Synopsys and TSMC have worked closely for decades to pave the path for the future of billion- to trillion-transistor AI chip designs.

"TSMC is excited to collaborate with Synopsys to develop pioneering EDA and IP solutions tailored for the rigorous compute demands of AI designs on TSMC advanced process and 3DFabric technologies," said Dan Kochpatcharin, head of the Ecosystem and Alliance Management Division at TSMC. "The results of our latest collaboration across Synopsys' AI-driven EDA suite and silicon-proven IP have helped our mutual customers significantly enhance their productivity and deliver remarkable performance, power, and area results for advanced AI chip designs.

TSMC Celebrates 30th North America Technology Symposium with Innovations Powering AI with Silicon Leadership

TSMC today unveiled its newest semiconductor process, advanced packaging, and 3D IC technologies for powering the next generation of AI innovations with silicon leadership at the Company's 2024 North America Technology Symposium. TSMC debuted the TSMC A16 technology, featuring leading nanosheet transistors with an innovative backside power rail solution for production in 2026, bringing greatly improved logic density and performance. TSMC also introduced its System-on-Wafer (TSMC-SoW) technology, an innovative solution to bring revolutionary performance to the wafer level in addressing future AI requirements for hyperscaler data centers.

This year marks the 30th anniversary of TSMC's North America Technology Symposium, and more than 2,000 people attended the event, up from fewer than 100 attendees 30 years ago. The North America Technology Symposium in Santa Clara, California kicks off TSMC's Technology Symposiums around the world in the coming months. The symposium also features an "Innovation Zone," designed to highlight the technology achievements of TSMC's emerging start-up customers.

Arm Launches Next-Generation Neoverse CSS V3 and N3 Designs for Cloud, HPC, and AI Acceleration

Last year, Arm introduced its Neoverse Compute Subsystem (CSS) for the N2 and V2 series of data center processors, providing a reference platform for the development of efficient Arm-based chips. Major cloud service providers like AWS with Graviton 4 and Trainium 2, Microsoft with Cobalt 100 and Maia 100, and even NVIDIA with its Grace CPU and BlueField DPUs are already utilizing custom Arm server CPU and accelerator designs based on the CSS foundation in their data centers. The CSS allows hyperscalers to optimize Arm processor designs specifically for their workloads, focusing on efficiency rather than outright performance. Today, Arm has unveiled the next-generation CSS N3 and V3 for even greater efficiency and AI inferencing capability. The N3 design provides up to 32 high-efficiency cores per die with improved branch prediction and larger caches, boosting AI performance by 196%, while the V3 design scales up to 64 cores and is 50% faster overall than previous generations.

Both the N3 and V3 leverage advanced features like DDR5, PCIe 5.0, CXL 3.0, and chiplet architecture, continuing Arm's push to make chiplets the standard for data center and cloud architectures. The chiplet approach enables customers to connect their own accelerators and other chiplets to the Arm cores via UCIe interfaces, reducing costs and time-to-market. Looking ahead, Arm has a clear roadmap for its Neoverse platform. The upcoming CSS V4 "Adonis" and N4 "Dionysus" designs will build on the improvements in the N3 and V3, advancing Arm's goal of greater efficiency and performance using optimized chiplet architectures. As more major data center operators introduce custom Arm-based designs, the Neoverse CSS aims to provide a flexible, efficient foundation to power the next generation of cloud computing.

MSI Unveils the Custom NVIDIA GeForce RTX 40 SUPER Series

As a leading brand in True Gaming hardware, MSI is proud to announce its new line of graphics cards powered by the NVIDIA GeForce RTX 4080 SUPER, RTX 4070 Ti SUPER, and RTX 4070 SUPER series GPUs. These graphics cards combine the latest in graphics technology, advanced cooling, and high-performance circuit board design to deliver the performance and features that enthusiast gamers and creators demand.

The new GeForce RTX SUPER GPUs are the ultimate way to experience AI on PCs. Specialized AI Tensor Cores deliver up to 836 AI TOPS, enabling transformative capabilities for AI in gaming, content creation, and everyday productivity. PC gamers demand the very best in visual quality, and AI-powered NVIDIA Deep Learning Super Sampling (DLSS) Super Resolution, Frame Generation, and Ray Reconstruction combine with ray tracing to render stunning worlds. With DLSS, seven out of eight pixels can be AI-generated, accelerating full ray tracing by up to 4x with better image quality.
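The seven-out-of-eight figure follows from simple arithmetic, assuming the configuration NVIDIA typically cites: Super Resolution in Performance mode renders one quarter of the output pixels, and Frame Generation synthesizes every other displayed frame:

\[
\underbrace{\tfrac{1}{4}}_{\text{Super Resolution}} \times \underbrace{\tfrac{1}{2}}_{\text{Frame Generation}} = \tfrac{1}{8},
\qquad
1 - \tfrac{1}{8} = \tfrac{7}{8}\ \text{of displayed pixels are AI-generated.}
\]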

TSMC 2 nm Node to Debut in 2025 with Apple SoCs for the iPhone 17 Pro

TSMC's 2 nm-class foundry node, dubbed N2, will enter mass production only in 2025, a report by the Financial Times says. The premier Taiwan-based foundry has reportedly been showcasing TSMC N2 to its biggest customer for advanced nodes, Apple. The node will likely power Apple's in-house silicon that drives the iPhone 17 Pro and Pro Max devices slated for 2025. This implies that the current 3 nm-class nodes from TSMC will continue to power Apple silicon into 2024 and its iPhone 16 Pro/Pro Max.

The current Apple A17 Pro and M3 chips powering the iPhone 15 Pro/Max and the H2-2023 Macs are based on TSMC's N3 node, with a 183 MTr/mm² transistor density. TSMC has four other 3 nm-class nodes, with the N3E node, which just entered mass production, offering a jump to 215.6 MTr/mm², and its 2024 successor, the N3P, pushing transistor density further up to 224 MTr/mm². TSMC's first 2 nm-class node, the N2, offers a jump to around 259 MTr/mm², which makes the N3P a nice halfway point for Apple between the N3 and N2 for its 2024 silicon.
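The halfway-point observation checks out with the densities quoted above:

\[
\frac{183 + 259}{2} = 221\ \text{MTr/mm}^2 \approx 224\ \text{MTr/mm}^2\ (\text{N3P}),
\]

so N3P lands almost exactly midway between N3 and N2 in logic transistor density.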

Microsoft Introduces 128-Core Arm CPU for Cloud and Custom AI Accelerator

During its Ignite conference, Microsoft introduced a duo of custom-designed silicon made to accelerate AI and excel in cloud workloads. The first of the two is Microsoft's Azure Cobalt 100 CPU, a 128-core design that features the 64-bit Armv9 instruction set, implemented in a cloud-native design that is set to become a part of Microsoft's offerings. While there aren't many details regarding the configuration, the company claims a performance target of up to 40% higher than the current generation of Arm servers running in the Azure cloud. The SoC uses Arm's Neoverse CSS platform customized for Microsoft, presumably with Arm Neoverse N2 cores.

The next and hottest topic in the server space is AI acceleration, which is needed for running today's large language models. Microsoft hosts OpenAI's ChatGPT, Microsoft's Copilot, and many other AI services. To help make them run as fast as possible, Microsoft's Project Athena now carries the name Maia 100 AI accelerator, which is manufactured on TSMC's 5 nm process. It features 105 billion transistors and supports various MX data formats, even those smaller than 8-bit, for maximum performance. It is currently being tested on GPT-3.5 Turbo, and we have yet to see performance figures and comparisons with competing hardware from NVIDIA, like the H100/H200, and AMD's MI300X. The Maia 100 has an aggregate bandwidth of 4.8 Terabits per accelerator and uses a custom Ethernet-based networking protocol for scaling. These chips are expected to appear in Microsoft data centers early next year, and we hope to get some performance numbers soon.
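For context on what sub-8-bit MX formats involve, below is a minimal C sketch of the block-scaling idea behind the OCP Microscaling (MX) family: a block of values shares one power-of-two scale while each element is stored in a very narrow integer. The 32-element block and 4-bit elements here are illustrative assumptions, not Maia 100's actual configuration.

#include <math.h>
#include <stdint.h>

/* Block-scaled ("microscaling") quantization sketch: one shared power-of-two
 * exponent per block, 4-bit signed elements, so storage averages well under
 * 8 bits per value while the shared scale preserves dynamic range. */
#define BLOCK 32

typedef struct {
    int8_t shared_exp;    /* shared power-of-two scale for the block */
    int8_t elem[BLOCK];   /* 4-bit values, kept one per byte for clarity */
} mx_block;

static void mx_quantize(const float *in, mx_block *out)
{
    float max_abs = 0.0f;
    for (int i = 0; i < BLOCK; i++)
        if (fabsf(in[i]) > max_abs) max_abs = fabsf(in[i]);

    /* Pick a power-of-two scale so the largest magnitude maps near the
     * int4 limit of 7. */
    int e = (max_abs > 0.0f) ? (int)ceilf(log2f(max_abs / 7.0f)) : 0;
    out->shared_exp = (int8_t)e;

    float scale = ldexpf(1.0f, e);
    for (int i = 0; i < BLOCK; i++) {
        int q = (int)lrintf(in[i] / scale);
        if (q >  7) q =  7;
        if (q < -8) q = -8;
        out->elem[i] = (int8_t)q;
    }
}

static float mx_dequantize(const mx_block *b, int i)
{
    return (float)b->elem[i] * ldexpf(1.0f, b->shared_exp);
}

The payoff is that narrow elements cut memory traffic and multiplier width, while the per-block scale keeps enough dynamic range for large-language-model inference.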

TSMC Certifies Ansys Multiphysics Solutions for TSMC's N2 Silicon Process

Ansys and TSMC continue their long-standing technology collaboration with the announcement that Ansys' power integrity software is certified for TSMC's N2 process technology. The TSMC N2 process, which adopts a nanosheet transistor structure, represents a major advancement in semiconductor technology with significant speed and power advantages for high-performance computing (HPC), mobile chips, and 3D-IC chiplets. Both Ansys RedHawk-SC and Ansys Totem are certified for power integrity signoff on N2, including the effects of self-heat on the long-term reliability of wires and transistors. This latest collaboration builds on the recent certification of the Ansys platform for TSMC's N4 and N3E FinFLEX processes.

"TSMC works closely with our Open Innovation Platform (OIP) ecosystem partners to help our mutual customers achieve the best design results with the full stack of design solutions on TSMC's most advanced N2 process," said Dan Kochpatcharin, head of the Design Infrastructure Management Division at TSMC. "Our latest collaboration with Ansys RedHawk-SC and Totem analysis tools allows our customers to benefit from the significant power and performance improvements of our N2 technology while ensuring predictively accurate power and thermal signoff for the long-term reliability of their designs."