News Posts matching #Silicon

NVIDIA Commercializes Silicon Photonics with InfiniBand and Ethernet Switches

NVIDIA has developed co-packaged optics (CPO) technology with TSMC for its upcoming Quantum-X InfiniBand and Spectrum-X Ethernet switches, integrating silicon photonics directly onto switch ASICs. The approach reduces power consumption by 3.5x and cuts signal loss from 22 dB to 4 dB compared to traditional pluggable optics, addressing critical power and connectivity limitations in large-scale GPU deployments, especially systems with 10,000+ GPUs. The architecture incorporates continuous-wave laser sources within the switch chassis, consuming 2 W per port compared to the 10 W required by conventional externally modulated lasers in pluggable modules. Combined with integrated optical engines that use 7 W versus 20 W for traditional digital signal processors, this reduces total optical interconnect power from approximately 72 MW to 21.6 MW in a 400,000-GPU data center scenario.
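The fleet-level figures can be reproduced from the per-port numbers. A minimal back-of-the-envelope sketch, assuming six optical ports per GPU (our inference to reconcile the totals, not a number stated in the article):

```python
# Reconstructing the quoted interconnect-power totals from per-port figures.
# Assumption (ours, not NVIDIA's): 6 optical ports per GPU, the value that
# reconciles the per-port numbers with the 72 MW / 21.6 MW fleet totals.
GPUS = 400_000
PORTS_PER_GPU = 6
ports = GPUS * PORTS_PER_GPU

pluggable_w = 10 + 20  # externally modulated laser + DSP, per port
cpo_w = 2 + 7          # integrated CW laser + optical engine, per port

pluggable_mw = ports * pluggable_w / 1e6  # total fleet power, megawatts
cpo_mw = ports * cpo_w / 1e6
per_port_saving = pluggable_w / cpo_w     # ~3.3x, close to the quoted 3.5x
```

Under these assumptions the totals come out to 72 MW and 21.6 MW, matching the article; the per-port ratio lands near 3.3x, in the same ballpark as the quoted 3.5x system-level figure.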

Specifications for the Quantum 3450-LD InfiniBand model include 144 ports running at 800 Gb/s, delivering 115 Tb/s of aggregate bandwidth using four Quantum-X CPO sockets in a liquid-cooled chassis. The Spectrum-X lineup features the SN6810 with 128 ports at 800 Gb/s (102.4 Tb/s) and the higher-density SN6800 providing 512 ports at 800 Gb/s for 409.6 Tb/s total throughput. The Quantum-X InfiniBand implementation uses a monolithic switch ASIC with six CPO modules supporting 36 ports at 800 Gb/s, while the Spectrum-X Ethernet design employs a multi-chip approach with a central packet processing engine surrounded by eight SerDes chiplets. Both architectures utilize 224 Gb/s signaling per lane with four lanes per port. NVIDIA's Quantum-X switches are scheduled for availability in H2 2025, with Spectrum-X models following in H2 2026.
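The aggregate-bandwidth figures follow directly from port count times per-port rate; a quick sketch (note that 144 x 800 Gb/s is 115.2 Tb/s, which the article rounds to 115):

```python
# Aggregate switch bandwidth is simply port count x per-port rate.
def aggregate_tbps(ports: int, gbps_per_port: int) -> float:
    return ports * gbps_per_port / 1000

quantum_3450 = aggregate_tbps(144, 800)  # 115.2 Tb/s (quoted as 115 Tb/s)
sn6810 = aggregate_tbps(128, 800)        # 102.4 Tb/s
sn6800 = aggregate_tbps(512, 800)        # 409.6 Tb/s

# Per-port rate from lane signaling: 4 lanes x 224 Gb/s raw signaling,
# which carries an 800 Gb/s port after encoding/FEC overhead.
raw_lane_gbps = 4 * 224                  # 896 Gb/s raw per port
```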

GUC Launches First 32 Gbps per Lane UCIe Silicon Using TSMC 3nm and CoWoS Technology

Global Unichip Corp. (GUC), the Advanced ASIC Leader, today announced the successful launch of the industry's first Universal Chiplet Interconnect Express (UCIe) PHY silicon, achieving a data rate of 32 Gbps per lane, the highest speed defined in the UCIe specification. The 32G UCIe IP, supporting UCIe 2.0, delivers an impressive bandwidth density of 10 Tbps per 1 mm of die edge (5 Tbps/mm full-duplex). This milestone was achieved using TSMC's advanced N3P process and CoWoS packaging technologies, targeting AI, high-performance computing (HPC), xPU, and networking applications.

In this test chip, several dies with North-South and East-West IP orientations are interconnected through a CoWoS interposer. Silicon measurements show robust 32 Gbps operation with wide horizontal and vertical eye openings. GUC is working aggressively on full-corner qualification, and the complete silicon report is expected to be available in the coming quarter.

Chinese Researchers Develop No-Silicon 2D GAAFET Transistor Technology

Scientists from Peking University have developed the world's first two-dimensional gate-all-around field-effect transistor (GAAFET), establishing a new performance benchmark in domestic semiconductor design. The design, documented in Nature, represents a shift in transistor architecture that could reshape the future of Chinese microelectronics. With reported figures of 40% higher performance and 10% improved efficiency compared to TSMC's 3 nm N3 node, it looks rather promising. The research team, headed by Professors Peng Hailin and Qiu Chenguang, engineered a "wafer-scale multi-layer-stacked single-crystalline 2D GAA configuration" that demonstrated superior performance metrics when benchmarked against current industry leaders. The innovation leverages bismuth oxyselenide (Bi₂O₂Se), a novel semiconductor material that maintains exceptional carrier mobility at sub-nanometer dimensions, a critical advantage as the industry pushes toward angstrom-era semiconductor nodes.

"Traditional silicon-based transistors face fundamental physical limitations at extreme scales," explained Professor Peng, who characterized the technology as "the fastest, most efficient transistor ever developed." The 2D GAAFET architecture circumvents the mobility degradation that plagues silicon in ultra-small geometries, allowing continued performance scaling beyond current nodes. The development comes amid China's intensified efforts to achieve semiconductor self-sufficiency, as trade restrictions have limited access to advanced lithography equipment and other critical manufacturing technologies. Even though China is developing domestic EUV technology, it is not yet battle-proven. Rather than competing directly with established fabrication processes, where China cannot compete in the near term, the Beijing team has pioneered an entirely different technological approach—what Professor Peng described as "changing lanes entirely" rather than seeking incremental improvements.

Marvell Demonstrates Industry's Leading 2nm Silicon for Accelerated Infrastructure

Marvell Technology, Inc., a leader in data infrastructure semiconductor solutions, has demonstrated its first 2 nm silicon IP for next-generation AI and cloud infrastructure. Produced on TSMC's 2 nm process, the working silicon is part of the Marvell platform for developing custom XPUs, switches and other technology to help cloud service providers elevate the performance, efficiency, and economic potential of their worldwide operations.

Given a projected 45% annual TAM growth, custom silicon is expected to account for approximately 25% of the market for accelerated compute by 2028.

GlobalFoundries and MIT Collaborate on Photonic AI Chips

GlobalFoundries (GF) and the Massachusetts Institute of Technology (MIT) today announced a new master research agreement to jointly pursue advancements and innovations for enhancing the performance and efficiency of critical semiconductor technologies. The collaboration will be led by MIT's Microsystems Technology Laboratories (MTL) and GF's research and development team, GF Labs.

With an initial research focus on AI and other applications, the first projects are expected to leverage GF's differentiated silicon photonics technology, which monolithically integrates RF SOI, CMOS and optical features on a single chip to realize power efficiencies for datacenters, and GF's 22FDX platform, which delivers ultra-low power consumption for intelligent devices at the edge.

Chinese Mature Nodes Undercut Western Silicon Pricing, to Capture up to 28% of the Market This Year

Chinese manufacturers have seized significant market share in legacy chip production, driving prices down and creating intense competitive pressure that Western competitors cannot match. The so-called "China shock" in the semiconductor sector is unfolding as mature-node production shifts East at an accelerating rate. Legacy process nodes, typically 16/20/22/24 nm and larger, form the backbone of consumer electronics and automotive applications while providing established manufacturers with stable revenue streams for R&D investment. However, this economic framework now faces structural disruption as Chinese fabs leverage domestic demand and government support to expand capacity. By Q4 2025, Chinese facilities will control 28% of global mature chip production, with projections indicating further expansion to 39% by 2027.

This rapid capacity growth is a direct result of Beijing's strategic pivot following US export controls on advanced semiconductor equipment, which redirected investment toward mature nodes where technological barriers remain lower. In parallel, isolated efforts by companies like SMIC aim to develop lithography solutions for cutting-edge 5 nm and 3 nm wafer production. For older nodes, the market impact appears most pronounced in specialized materials like silicon carbide (SiC). Industry-benchmark 6-inch SiC wafers from Wolfspeed were previously $1,500, compared to current $500 pricing from Guangzhou Summit Crystal Semiconductor, a 67% price compression that Western manufacturers cannot profitably match. Multiple semiconductor firms report significant financial strain from this pricing pressure. Wolfspeed has implemented 20% workforce reductions following a 96% market capitalization decline, while Onsemi recently announced 9% staff cuts. As Chinese expansion into the mature-node category continues, Western companies cannot keep up with the falling costs of what is becoming a commodity.
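The quoted 67% figure checks out against the two wafer prices; a trivial sketch:

```python
# Price-compression check for the 6-inch SiC wafer prices in the article.
wolfspeed_usd = 1_500   # prior Wolfspeed 6-inch SiC wafer price
summit_usd = 500        # Guangzhou Summit Crystal Semiconductor price
compression = 1 - summit_usd / wolfspeed_usd   # fraction of price removed
print(round(compression * 100))                # ~67(%)
```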

PsiQuantum Announces Omega, a Manufacturable Chipset for Photonic Quantum Computing

PsiQuantum today announces Omega, a quantum photonic chipset purpose-built for utility-scale quantum computing. Featured in a newly published paper in Nature, the chipset contains all the advanced components required to build million-qubit-scale quantum computers and deliver on the profoundly world-changing promise of this technology. Every photonic component is demonstrated with beyond-state-of-the-art performance. The paper shows high-fidelity qubit operations, and a simple, long-range chip-to-chip qubit interconnect - a key enabler to scale that has remained challenging for other technologies. The chips are made in a high-volume semiconductor fab, representing a new level of technical maturity and scale in a field that is often thought of as being confined to research labs. PsiQuantum will break ground this year on two datacenter-sized Quantum Compute Centers in Brisbane, Australia and Chicago, Illinois.

"For more than 25 years it has been my conviction that in order for us to realize a useful quantum computer in my lifetime, we must find a way to fully leverage the unmatched capabilities of the semiconductor industry. This paper vindicates that belief."—Prof. Jeremy O'Brien, PsiQuantum Co-founder & CEO.

Apple to Spend More Than $500 Billion in the U.S. Over the Next Four Years

Apple today announced its largest-ever spend commitment, with plans to spend and invest more than $500 billion in the U.S. over the next four years. This new pledge builds on Apple's long history of investing in American innovation and advanced high-skilled manufacturing, and will support a wide range of initiatives that focus on artificial intelligence, silicon engineering, and skills development for students and workers across the country.

"We are bullish on the future of American innovation, and we're proud to build on our long-standing U.S. investments with this $500 billion commitment to our country's future," said Tim Cook, Apple's CEO. "From doubling our Advanced Manufacturing Fund, to building advanced technology in Texas, we're thrilled to expand our support for American manufacturing. And we'll keep working with people and companies across this country to help write an extraordinary new chapter in the history of American innovation."

NVIDIA to Consume 77% of Silicon Wafers Dedicated to AI Accelerators in 2025

Investment bank Morgan Stanley has estimated that an astonishing 77% of all silicon wafers produced globally for AI accelerators will be consumed by none other than NVIDIA. Investment research by large banks like Morgan Stanley often draws on information from the semiconductor supply chain, which is constantly expanding to meet NVIDIA's demands. In 2024, NVIDIA is estimated to have captured roughly 51% of wafer consumption for AI accelerators, just over half of all demand. With NVIDIA's share projected to grow to 77%, this represents more than a 50% year-over-year increase, remarkable for a company of NVIDIA's size. NVIDIA is currently phasing out its H100 accelerators in favor of Blackwell B100/B200 and the upcoming B300 series of GPUs paired with Grace CPUs.
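The "more than a 50% year-over-year increase" claim refers to NVIDIA's share of wafer consumption, not absolute wafer volume; the arithmetic on the quoted shares bears it out:

```python
# Year-over-year growth of NVIDIA's share of AI-accelerator wafers,
# using the two share figures quoted in the article.
share_2024 = 0.51
share_2025 = 0.77
growth = (share_2025 - share_2024) / share_2024   # ~0.51, i.e. ~51% increase
```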

NVIDIA is accelerating its product deployment timeline and investing heavily in internal research and development. Morgan Stanley projects that NVIDIA will invest almost $16 billion in R&D, enough to sustain four to five years of development cycles running three design teams sequentially while still delivering new products on an 18-24 month cadence. This scale of development efficiency is unmatched in the industry. NVIDIA's Q4 earnings report arrives in exactly a week, on February 26, when CEO Jensen Huang is expected to share results and guidance for the coming months.

Huawei Delivers Record $118 Billion Revenue with 22% Yearly Growth Despite US Sanctions

Huawei Technologies reported a robust 22% year-over-year revenue increase for 2024, reaching 860 billion yuan ($118.27 billion), demonstrating remarkable resilience amid continued US-imposed trade restrictions. The Chinese tech giant's resurgence was primarily driven by its revitalized smartphone division, which captured 16% of China's domestic market share, overtaking Apple in regional sales. This achievement was notably accomplished with domestically produced chipsets, marking a significant milestone for the company. In collaboration with China's SMIC, Huawei delivers in-house silicon solutions that integrate with HarmonyOS for complete vertical integration. The company's strategic diversification into automotive technology has emerged as a crucial growth vector, with its smart car solutions unit delivering autonomous driving software and specialized chips to Chinese EV manufacturers.

In parallel, Huawei's Ascend 910B/C AI platform recently announced compatibility with DeepSeek's R1 large language model and availability on Chinese AI cloud providers like SiliconFlow. Through a strategic partnership with AI infrastructure startup SiliconFlow, Huawei is enhancing its Ascend cloud service capabilities, further strengthening its competitive position in the global AI hardware market despite ongoing international trade challenges. Even if the company cannot compete on raw performance with the latest solutions from NVIDIA and AMD, owing to its lack of access to the advanced manufacturing that AI accelerators require, it can compete on cost and deliver solutions with a far better price/performance ratio. Huawei's Ascend AI solutions deliver modest performance, but the pricing makes AI model inference very cheap, with API costs of around one yuan per million input tokens and four yuan per million output tokens on DeepSeek R1.

Osaka Scientists Unveil 'Living' Electrodes That Can Enhance Silicon Devices

Shrinking components was (and still is) the main way to boost the speed of all electronic devices; however, as devices get tinier, making them becomes trickier. A group of scientists from SANKEN (The Institute of Scientific and Industrial Research), at Osaka University has discovered another method to enhance performance: putting a special metal layer known as a metamaterial on top of a silicon base to make electrons move faster. This approach shows promise, but the tricky part is managing the metamaterial's structure so it can adapt to real-world needs.

To address this, the team looked into vanadium dioxide (VO₂). When heated, VO₂ changes from non-conductive to metallic, allowing it to carry electric charge like small adjustable electrodes. The researchers used this effect to create 'living' microelectrodes, which made silicon photodetectors better at spotting terahertz light. "We made a terahertz photodetector with VO₂ as a metamaterial. Using a precise method, we created a high-quality VO₂ layer on silicon. By controlling the temperature, we adjusted the size of the metallic regions—much larger than previously possible—which affected how the silicon detected terahertz light," says lead author Ai I. Osaka.

Apple's Upcoming M5 SoC Enters Mass Production

Apple's M4 SoC was released to overwhelmingly positive reviews, particularly regarding the commendable performance and efficiency benefits it brought to the table. The chip first appeared in the OLED iPad Pro lineup last May, arriving in the company's MacBook Pro lineup only much later, giving Intel's Lunar Lake and AMD's Strix Point a run for their money. Now, it appears that the company is cognizant of the heat brought by AMD's Strix Halo, and has already commenced mass production for the first SoC in the M5 family - the vanilla M5, according to Korean news outlet ET News.

Just like last time, the M5 SoC has been repeatedly rumored to first arrive in the next-generation iPad Pro, scheduled to enter production sometime in the second half of this year. The MacBook Pro will likely be next in line for the M5 treatment, followed by the rest of the lineup as per tradition. Interestingly, although Apple decided against using TSMC's 2 nm process for this year's chips, the higher-tier variants, including the M5 Pro and M5 Max, are expected to utilize TSMC's SoIC-mH technology, allowing for vertical stacking of chips that should ideally benefit thermals, and possibly even allow for better and larger GPUs thanks to the separation of the CPU and GPU portions. Consequently, yields should also improve, which would allow Apple to bring costs down.

Huawei Ascend 910B Accelerators Power Cloud Infrastructure for DeepSeek R1 Inference

When High-Flyer, the hedge fund behind DeepSeek, debuted its flagship model, DeepSeek R1, it sent shockwaves through the tech world. Few expected a Chinese AI company to produce a high-quality model that rivals the best from OpenAI and Anthropic. While there are rumors that DeepSeek has access to 50,000 NVIDIA "Hopper" GPUs, including H100, H800, and H20, it appears Huawei is ready to power Chinese AI infrastructure with its own accelerators. According to the South China Morning Post, Chinese cloud providers like SiliconFlow.cn are offering DeepSeek AI models for inference on Huawei Ascend 910B accelerators. At only one yuan per million input tokens and four yuan per million output tokens, this hosting model fundamentally undercuts competitors such as US-based cloud providers, which offer DeepSeek R1 for $7 per million tokens.
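To put the price gap in concrete terms, here is a rough cost comparison built from the article's figures; the exchange rate of about 7.25 CNY per USD is our assumption, not part of the article:

```python
# Rough serving-cost comparison for DeepSeek R1 from the article's figures.
# Assumption (ours): an exchange rate of ~7.25 CNY per USD.
CNY_PER_USD = 7.25

def ascend_cost_usd(input_mtok: float, output_mtok: float) -> float:
    """USD cost at 1 CNY per million input tokens, 4 CNY per million output."""
    return (input_mtok * 1 + output_mtok * 4) / CNY_PER_USD

def us_cost_usd(input_mtok: float, output_mtok: float) -> float:
    """USD cost at the quoted flat $7 per million tokens."""
    return (input_mtok + output_mtok) * 7

# Example workload: 10M input + 10M output tokens.
ascend = ascend_cost_usd(10, 10)   # roughly $7 total
us_hosted = us_cost_usd(10, 10)    # $140 total
```

Even with generous assumptions on the exchange rate, the Ascend-hosted pricing comes in at a small fraction of the US-hosted rate for the same workload.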

Not only is running on the Huawei Ascend 910B cheaper for cloud providers, but we also reported that it is cheaper for DeepSeek itself, which serves its chat app on the Huawei Ascend 910C. Using domestic accelerators lowers the total cost of ownership, with savings passed down to users. If Western clients prefer AI inference to be served by Western companies, they will have to pay a heftier price tag, often backed by the high prices of GPUs like NVIDIA H100, B100, and AMD Instinct MI300X.

Patriot Unveils the New PDP31 Portable PSSD

Patriot proudly announces the launch of the PDP31, a portable SSD (PSSD) designed specifically for the mobile lifestyle. Supporting the USB 3.2 Gen 2 standard, the PDP31 boasts a high-speed transfer rate of up to 10 Gbps, with exceptional sequential read and write speeds of up to 1000 MB/s, making it an ideal choice for tech enthusiasts and professionals alike.
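The 10 Gbps link rate and the 1000 MB/s sequential figure are consistent once encoding overhead is accounted for; a small sanity-check sketch:

```python
# Relating the 10 Gbps USB link rate to the ~1000 MB/s sequential figure.
line_gbps = 10
raw_mb_s = line_gbps * 1000 / 8        # 1250 MB/s before encoding overhead
encoded_mb_s = raw_mb_s * 128 / 132    # USB 3.2 Gen 2 uses 128b/132b encoding
# Protocol framing and flash-controller overhead bring practical sequential
# throughput down further, so ~1000 MB/s is a plausible real-world ceiling.
```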

To ensure ample storage for all your important files, the PDP31 is available in capacities ranging from 120 GB to 2 TB, making it perfect for storing everything from creative video projects to large work files. It also features a versatile USB Type-C interface for easy connection to desktops, MacBooks, Windows laptops, tablets, and smartphones.

GlobalFoundries Announces New York Advanced Packaging and Photonics Center

GlobalFoundries (Nasdaq: GFS) (GF) today announced plans to create a new center for advanced packaging and testing of U.S.-made essential chips within its New York manufacturing facility. Supported by investments from the State of New York and the U.S. Department of Commerce, the first-of-its-kind center aims to enable semiconductors to be securely manufactured, processed, packaged and tested entirely onshore in the United States to meet the growing demand for GF's silicon photonics and other essential chips needed for critical end markets including AI, automotive, aerospace and defense, and communications.

Growth in AI is driving the adoption of silicon photonics and 3D and heterogeneously integrated (HI) chips to meet power, bandwidth and density requirements in datacenters and edge devices. Silicon photonics chips are also positioned to address power and performance needs in automotive, communications, radar, and other critical infrastructure applications.

Apple Silicon Macs Gain x86 Emulation Capability, Run x86 Windows Apps on macOS

Parallels has announced the introduction of x86 emulation support in Parallels Desktop 20.2.0 for Apple Silicon Macs. This new feature enables users to run x86-based virtual machines on their M-series Mac computers, addressing a longstanding limitation since Apple's transition to its custom Arm-based processors. The early technology preview allows users to run Windows 10, Windows 11 (with some restrictions), Windows Server 2019/2022, and various Linux distributions through a proprietary emulation engine. This development particularly benefits developers and users who need to run 32-bit Windows applications or prefer x86-64 Linux virtual machines as an alternative to Apple Rosetta-based solutions.

However, Parallels is transparent about the current limitations of this preview release. Performance is notably slow, with Windows boot times ranging from 2 to 7 minutes, and overall system responsiveness remains low. The emulation only supports 64-bit operating systems, though it can run 32-bit applications. Additionally, USB device support is not available, and users must rely on Apple's hypervisor, as the Parallels hypervisor is not compatible. Despite these constraints, the release is a crucial step in bridging the compatibility gap for Apple Silicon Mac users, allowing legacy software to remain usable. To manage expectations, the option to start such virtual machines is hidden in the user interface, as the feature is still imperfect.

Ultra Accelerator Link Consortium (UALink) Welcomes Alibaba, Apple and Synopsys to Board of Directors

Ultra Accelerator Link Consortium (UALink) has announced the expansion of its Board of Directors with the election of Alibaba Cloud Computing Ltd., Apple Inc., and Synopsys Inc. The new Board members will leverage their industry knowledge to advance development and industry adoption of UALink - a high-speed, scale-up interconnect for next-generation AI cluster performance.

"Alibaba Cloud believes that driving AI computing accelerator scale-up interconnection technology by defining core needs and solutions from the perspective of cloud computing and applications has significant value in building the competitiveness of intelligent computing supernodes," said Qiang Liu, VP of Alibaba Cloud, GM of Alibaba Cloud Server Infrastructure. "The UALink consortium, as a leader in the interconnect field of AI accelerators, has brought together key members from the AI infrastructure industry to work together to define interconnect protocol which is natively designed for AI accelerators, driving innovation in AI infrastructure. This will strongly promote the innovation of AI infrastructure and improve the execution efficiency of AI workloads, contributing to the establishment of an open and innovative industry ecosystem."

Apple M4 MacBook Air Enters Production, M5 MacBook Pro on Track for 2025 Sans Redesign

The Apple M4 hardly needs any introduction - the latest desktop-class SoC from the Cupertino giant is remarkably fast, while being impressively efficient. Its recently unveiled Pro and Max variants are equally praiseworthy, although none of the 4th generation Apple Silicon goodness is available on the extremely popular MacBook Air as of right now. However, that is about to change soon according to a reliable recent report.

According to Bloomberg's Mark Gurman, the M4-powered MacBook Air has already entered production and is scheduled to see the light of day by the spring of next year, possibly even earlier. It is worth noting, however, that unlike the MacBook Pro, the MacBook Air does not feature active cooling, which limits its performance in demanding, sustained scenarios. Even then, the M4 is likely to be much snappier than its primary x86 rival, Intel's Lunar Lake, if the M4 iPad Pro's performance is anything to go by.

Alphawave Semi Scales UCIe to 64 Gbps for 3nm Die-to-Die Chiplet Connectivity

Alphawave Semi (LSE: AWE), a global leader in high-speed connectivity and compute silicon for the world's technology infrastructure, proudly introduces the industry's first 64 Gbps Universal Chiplet Interconnect Express (UCIe) Die-to-Die (D2D) IP subsystem, delivering unprecedented chiplet interconnect data rates and setting a new standard for ultra-high-performance D2D connectivity solutions. The third-generation 64 Gbps IP subsystem builds on the success of the Gen 2 36 Gbps subsystem and the silicon-proven Gen 1 24 Gbps subsystem, and is available on TSMC's 3 nm technology for both standard and advanced packaging. These silicon-proven successes and tapeout milestones pave the way for Alphawave Semi's Gen 3 UCIe IP subsystem offering.

Alphawave Semi is set to revolutionize connectivity with its Gen 3 64 Gbps UCIe IP, delivering a bandwidth density of over 20 Tbps/mm with ultra-low power and latency. The solution is highly configurable, supporting multiple protocols, including AXI-4, AXI-S, CXS, CHI, and CHI-C2C, to address the growing demands for high-performance connectivity across disaggregated systems in High-Performance Computing (HPC), Data Center, and Artificial Intelligence (AI) applications.

EU Approves €1.3B Italian Subsidy for Silicon Box Chiplet Plant

Silicon Box, a global leader in advanced semiconductor packaging and system integration, welcomes the European Commission's approval of approximately €1.3 billion for its new manufacturing facility in Italy. The project, representing a total investment of €3.2 billion, will create 1,600 high-skilled jobs and establish Europe's most advanced semiconductor packaging facilities.

The investment supports the EU's strategic goal of producing 20% of the world's semiconductors by 2030 and marks Silicon Box's first expansion beyond Singapore. With its proprietary large-format panel-level process lines, the factory can scale chip packaging 6 to 8 times beyond what traditional wafer-level packaging allows.

GlobalWafers Awarded $406M via U.S. CHIPS Act to Boost 300mm Wafer Supply

The U.S. Department of Commerce will award GlobalWafers America and MEMC, LLC, U.S. subsidiaries of Taiwan-based GlobalWafers Co., Ltd., up to $406 million in direct funding under the CHIPS Incentives Program's Funding Opportunity for Commercial Fabrication Facilities.

The award will support planned investments of $4 billion in advanced semiconductor wafer manufacturing facilities in Sherman, Texas and St. Peters, Missouri. The Department will disburse the funds based on GWA's and MEMC's completion of project milestones over a multi-year timeframe.

RPCS3 PlayStation 3 Emulator Gets Native arm64 Support on Linux, macOS, and Windows

The RPCS3 team has announced the successful implementation of arm64 architecture support for their PlayStation 3 emulator. This development enables the popular emulator to run on a broader range of devices, including Apple Silicon machines, Windows-on-Arm, and even some smaller Arm-based SBC systems like the Raspberry Pi 5. The journey to arm64 support began in late 2021, following the release of Apple's M1 processors, with initial efforts focused on Linux platforms. After overcoming numerous technical hurdles, the development team, led by core developer Nekotekina and graphics specialist kd-11, achieved a working implementation by mid-2024. One of the primary challenges involved adapting the emulator's just-in-time (JIT) compiler for arm64 systems.

The team developed a solution using LLVM's intermediate representation (IR) transformer, which allows the emulator to generate code once for x86-64 and then transform it for arm64 platforms. This approach eliminated the need to maintain separate codebases for different architectures. A particular technical challenge emerged from the difference in memory management between x86 and arm64 systems. While the PlayStation 3 and traditional x86 systems use 4 KB memory pages, modern arm64 platforms typically operate with 16 KB pages. Though this larger page size can improve memory performance in native applications, it presented unique challenges for emulating the PS3's graphics systems, particularly when handling smaller textures and buffers. While the emulator now runs on arm64 devices, performance varies significantly depending on the hardware. Simple applications and homebrew software show promising results, but more demanding commercial games may require substantial computational power beyond what current affordable Arm devices can provide.
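The page-size mismatch can be illustrated with a toy calculation; the helper below is hypothetical, not RPCS3 code, and simply shows why 4 KB-granular guest operations become coarser on a 16 KB-page host:

```python
# Why 16 KB host pages complicate a 4 KB-page guest: any mapping or
# protection change must cover whole host pages, so one small guest page
# drags its neighbours along. Hypothetical helper for illustration only.
HOST_PAGE = 16 * 1024   # typical page size on modern arm64 (e.g. Apple Silicon)

def host_pages_touched(guest_addr: int, size: int) -> int:
    """Number of 16 KB host pages spanned by a guest region."""
    first = guest_addr // HOST_PAGE
    last = (guest_addr + size - 1) // HOST_PAGE
    return last - first + 1

# A single 4 KB guest page still occupies (and protects) a full 16 KB
# host page, so faults on it also affect the three neighbouring 4 KB pages.
pages_for_one = host_pages_touched(0x1000, 4096)    # 1 host page
# Two guest pages straddling a host-page boundary touch two host pages.
pages_straddle = host_pages_touched(0x3000, 8192)   # 2 host pages
```

This is the core of the texture/buffer problem the paragraph describes: small PS3-side allocations cannot be tracked at their native 4 KB granularity, so the emulator must over-approximate at 16 KB boundaries.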

AMD Introduces Versal RF Series Adaptive SoCs With Integrated Direct RF-Sampling Converters

AMD today announced the expansion of the AMD Versal adaptive system-on-chip (SoC) portfolio with the introduction of the Versal RF Series that includes the industry's highest compute performance in a single-chip device with integrated direct radio frequency (RF)-sampling data converters.

Versal RF Series offers precise, wideband-spectrum observability and up to 80 TOPS of digital signal processing (DSP) performance in a size, weight, and power (SWaP)-optimized design, targeting RF systems and test equipment applications in the aerospace and defense (A&D) and test and measurement (T&M) markets, respectively.

Quobly Announces Key Milestone for Fault-tolerant Quantum Computing

Quobly, a leading French quantum computing startup, has reported that FD-SOI technology can serve as a scalable platform for commercial quantum computing, leveraging traditional semiconductor manufacturing fabs and CEA-Leti's R&D pilot line.

The semiconductor industry has played a pivotal role in enabling classical computers to scale at cost; it has the same transformative potential for quantum computers, making them commercially scalable and cost-competitive. Silicon spin qubits are excellent candidates for fault-tolerant, large-scale quantum computing, with gate times in the microsecond range, fidelities above 99% for one- and two-qubit gate operations, and exceptionally small unit cell sizes (on the order of 100 nm²).

NVIDIA Shows Future AI Accelerator Design: Silicon Photonics and DRAM on Top of Compute

During the prestigious IEDM 2024 conference, NVIDIA presented its vision for future AI accelerator design, which the company plans to pursue in upcoming accelerator iterations. The limits of chip packaging and silicon innovation are already being stretched, and future AI accelerators may need additional verticals to achieve the required performance improvements. The design proposed at IEDM 2024 puts silicon photonics (SiPh) at center stage. NVIDIA's architecture calls for 12 SiPh connections for intrachip and interchip links, with three connections per GPU tile across four GPU tiles per tier. This marks a significant departure from traditional interconnect technologies, which have been limited by the physical properties of copper.

Perhaps the most striking aspect of NVIDIA's vision is the introduction of so-called "GPU tiers"—a novel approach that appears to stack GPU components vertically. This is complemented by an advanced 3D stacked DRAM configuration featuring six memory units per tile, enabling fine-grained memory access and substantially improved bandwidth. This stacked DRAM would have a direct electrical connection to the GPU tiles, mimicking the AMD 3D V-Cache on a larger scale. However, the timeline for implementation reflects the significant technological hurdles that must be overcome. The scale-up of silicon photonics manufacturing presents a particular challenge, with NVIDIA requiring the capacity to produce over one million SiPh connections monthly to make the design commercially viable. NVIDIA has invested in Lightmatter, which builds photonic packages for scaling compute, so some form of its technology could end up in future NVIDIA accelerators.