News Posts matching #Silicon

Arteris Accelerates AI-Driven Silicon Innovation with Expanded Multi-Die Solution

In a market reshaped by the compute demands of AI, Arteris, Inc. (Nasdaq: AIP), a leading provider of system IP for accelerating semiconductor creation, today announced an expansion of its multi-die solution, delivering a foundational technology for rapid chiplet-based innovation. "In the chiplet era, the need for computational power increasingly exceeds what is available by traditional monolithic die designs," said K. Charles Janac, president and CEO of Arteris. "Arteris is leading the transition into the chiplet era with standards-based, automated and silicon-proven solutions that enable seamless integration across IP cores, chiplets, and SoCs."

Moore's Law, which predicts a doubling of transistor count on a chip every two years, is slowing down. As the semiconductor industry accelerates efforts to increase performance and efficiency, especially under AI workloads, architectural innovation through multi-die systems has become critical. Arteris' expanded multi-die solution addresses this shift with a suite of enhanced technologies purpose-built for scalability, faster time-to-silicon, high-performance computing, and automotive-grade mission-critical designs.

Synopsys Accelerates AI and Multi-Die Design Innovation on Advanced Samsung Foundry Processes

Synopsys, Inc. today announced its ongoing close collaboration with Samsung Foundry to power the next generation of designs for advanced edge AI, HPC, and AI applications. The collaboration is helping mutual customers achieve successful tape-outs of their complex designs with fast turnaround times, using Synopsys' 3DIC Compiler and Samsung's advanced packaging technologies. Mutual customers can improve power, performance, and area (PPA) with certified EDA flows for the SF2P process, and minimize IP integration risk with a high-quality portfolio of IP on Samsung's most advanced process technologies.

"The adoption of Edge AI applications is driving the need for advancements in semiconductor technologies to enable complex computational tasks, improve efficiency, and expand AI capabilities across various industries and applications," said John Koeter, senior vice president for the Synopsys IP Group. "Together with Samsung Foundry, we're enabling the most advanced AI processors across a broad spectrum of use cases from high-performance AI inference engines for data centers to ultra-efficient Edge AI devices like cameras and drones, all optimized for development on sub-2 nm Samsung Foundry process technologies."

Alphawave Semi Tapes Out New UCIe IP on TSMC 2nm Supporting 36G Die-to-Die Data Rates

Alphawave Semi, a global leader in high-speed connectivity and compute silicon for the world's technology infrastructure, announced the successful tape-out of one of the industry's first UCIe IP subsystems on TSMC's N2 process, supporting 36G die-to-die data rates. The solution is fully integrated with TSMC's Chip-on-Wafer-on-Substrate (CoWoS) advanced packaging technology, unlocking breakthrough bandwidth density and scalability for next-generation chiplet architectures.

This milestone builds on the recent release of the Alphawave Semi AI Platform, proving readiness to support the future of disaggregated SoCs and scale-up infrastructure for hyperscale AI and HPC workloads. With this tape-out, Alphawave Semi becomes one of the industry's first to enable UCIe connectivity on 2 nm nanosheet technology, marking a major step forward for the open chiplet ecosystem.

NVIDIA Plans 800 V Power Infrastructure to Drive 1 MW AI Racks

AI infrastructure buildout is pushing data center designs beyond the limits of conventional power delivery. Traditional in-rack 54 V DC distribution was designed for racks drawing tens of kilowatts and cannot scale to the megawatt requirements of next-generation AI facilities. At GTC and Computex 2025, NVIDIA introduced a comprehensive solution: an end-to-end 800-volt high-voltage DC (HVDC) infrastructure that will support 1-megawatt AI racks and beyond, with deployments planned to begin in 2027. Cooling and cabling already place immense strain on rack designs. NVIDIA's current GB200 and GB300 NVL72 systems can draw up to 132 kW per rack—significantly more than the 50 to 80 kW that most data halls were built to handle. If rack power rises to the 700 kW to 1 MW range under 54 V distribution, it would require roughly 64 U of chassis space, almost the entire rack, devoted solely to copper busbars, and about 200 kg of copper per rack. For a 1 GW installation, that adds up to nearly half a million metric tons of copper.

NVIDIA's 800 V HVDC architecture eliminates multiple AC-to-DC and DC-to-DC conversion stages by consolidating them into a single grid-edge rectifier. From a 13.8 kV AC feed, power is converted directly to 800 V DC and then routed through row-level busways to each rack. Compact DC-DC modules in the rack step down the voltage for the GPUs. Fewer power supply units mean fewer fans, lower heat output, and a simpler electrical footprint. Beyond scalability, 800 V HVDC offers up to 5 percent gains in end-to-end efficiency and a 45 percent reduction in copper usage, lowering both electricity costs and infrastructure buildout costs. To drive industry adoption, NVIDIA has partnered with leaders across the power ecosystem. Silicon and power-electronics specialists such as Infineon, MPS, Navitas, ROHM, STMicroelectronics, and Texas Instruments are contributing components. System integrators, including Delta, Flex Power, Lead Wealth, LiteOn, and Megmeet, are developing power shelves. Data-center infrastructure companies Eaton, Schneider Electric, and Vertiv are standardizing protective devices at every boundary from the power room to the rack. The image below compares the traditional rack system (top) with the newly proposed design (middle and bottom); thanks to HardwareLuxx, we can even see how it looks in reality.
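
The scaling argument reduces to Ohm's law: at constant power, current, and with it the required conductor cross-section, falls linearly with voltage. Here is a minimal sketch of that arithmetic for a 1 MW rack; the current-density limit is an assumed round number for illustration, not an NVIDIA design figure.

```python
# Why 800 V DC shrinks busbar copper: conductor cross-section is sized to a
# current-density limit, and current falls linearly with voltage at constant
# power. The density limit below is an assumed value, not an NVIDIA spec.

RACK_POWER_W = 1_000_000          # 1 MW rack, per the article
CURRENT_DENSITY_A_PER_MM2 = 2.0   # assumed conservative busbar design limit

for volts in (54, 800):
    amps = RACK_POWER_W / volts
    area_mm2 = amps / CURRENT_DENSITY_A_PER_MM2
    print(f"{volts:>3} V -> {amps:>8,.0f} A -> ~{area_mm2:>7,.0f} mm^2 copper cross-section")

# -> 54 V needs ~18,500 A; 800 V needs 1,250 A: roughly 15x less copper
#    cross-section in the distribution path for the same delivered power.
```

The quoted 45 percent end-to-end copper saving is smaller than this raw ratio, presumably because high-current runs remain downstream of the in-rack DC-DC conversion.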

Intel Arc Xe3 "Celestial" GPU Reaches Pre-Silicon Validation, Tapeout Next

In December, we reported that Intel's next‑generation Arc graphics cards, based on the Xe3 "Celestial" IP, are finished. Tom Petersen of Intel confirmed that the Xe3 IP is baked, meaning that basic media engines, Xe cores, XMX matrix engines, ray‑tracing engines, and other parts of the gaming GPU are already designed and most likely awaiting trial fabrication. Today, we learn that Intel has reached pre‑silicon validation, meaning that trial production is imminent. According to the X account @Haze2K1, which shared a snippet of Intel's milestones, a pre‑silicon hardware model of the Intel Arc Xe3 Celestial IP is being used to map out frequency and power usage in firmware. As a reminder, Intel's pre‑silicon validation platform enables OEM and IBV partners to boot and test new chip architectures months before any physical silicon is available, catching design issues much earlier in the development cycle.

Intel provides OEMs and IBVs access to a secure, cloud-based environment that faithfully emulates hardware-representative systems, allowing developers to validate firmware and software stacks from anywhere without the need for physical labs. Most likely, Intel is running massive emulations of the hardware on FPGAs, which stand in for the ASIC—an Arc Xe3 GPU in this case. The pre-silicon validation team is now optimizing the power-frequency curve and the voltages of the sleep, rest, and boost states, as well as their respective frequencies. With the Xe3 IP taking many forms, engineers are experimenting with every possible form factor, from mobile to discrete graphics. Additionally, data pathways depend on these frequency curves, which in turn rely on power states that allow voltage to swing up and down as the application requires. With this work complete, engineers are moving on to other areas of optimization, so that once the first silicon returns from the fab it can be fully tuned. We expect the first trial silicon soon, with volume production by the end of the year or in early 2026.
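
To make "mapping the power-frequency curve" concrete, the sketch below applies the generic CMOS dynamic-power relation P ≈ C·V²·f to a few hypothetical operating points resembling sleep, rest, and boost states. Every number here is an illustrative assumption; this is textbook DVFS arithmetic, not Intel's model or data.

```python
# Minimal sketch of a power-frequency curve of the kind a pre-silicon team
# maps out: dynamic CMOS power scales roughly as P = C * V^2 * f, and each
# frequency step requires a minimum stable voltage. All values hypothetical.

C_EFF_FARADS = 2e-8  # hypothetical effective switched capacitance (20 nF)

# (frequency in MHz, minimum stable voltage) for sleep/rest/boost-like states
OPERATING_POINTS = [(400, 0.65), (1200, 0.80), (2400, 0.95), (3000, 1.10)]

for freq_mhz, volts in OPERATING_POINTS:
    power_w = C_EFF_FARADS * volts**2 * freq_mhz * 1e6
    print(f"{freq_mhz:>5} MHz @ {volts:.2f} V -> ~{power_w:5.1f} W dynamic power")

# Because of the V^2 term, the top of the curve costs disproportionately more
# power per MHz -- which is why per-state voltage tuning matters so much.
```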

China's Semiconductor Equipment Market Share Rises as Taiwan, Korea and Japan Decline

The global semiconductor industry is experiencing notable shifts, largely influenced by the rapid expansion of the Mainland China market. From 2010 to 2024, China's share of global semiconductor equipment sales rose significantly, from just 6% in 2010 to 38% in 2024. Meanwhile, McKinsey reports that the market shares of Taiwan, Korea, and Japan are declining. Taiwan has started to build semiconductor fabs in the US and Europe, while Japan has seen few new fab projects apart from TSMC's upcoming Kumamoto plant. At the same time, the US, Europe, the Middle East, and Africa have kept their market shares steady.

Globalization helped the semiconductor industry grow from 2010 to 2019; during this period, Chinese semiconductor companies expanded, with local firms growing by about 21% each year. Growth slowed from 2019 to 2023 because of US sanctions on Huawei, which affected its chip division, HiSilicon. Even without HiSilicon, China's semiconductor industry still grew by 9-10% in that period. Experts think this growth will continue, a trend the current US tariffs are only accentuating. China's growing importance in industries like electric vehicles (EVs) and commercial drones is pushing its semiconductor ambitions even further. In 2023, China accounted for 60% of all new EV registrations worldwide. At the same time, political tensions have made China more eager to build a self-reliant domestic semiconductor ecosystem. China is testing a domestic extreme ultraviolet (EUV) lithography system at Huawei's Dongguan facility. The system uses laser-induced discharge plasma technology and is scheduled for trial production in Q3 2025, with mass manufacturing planned for 2026.

JEDEC and Industry Leaders Collaborate to Release JESD270-4 HBM4 Standard

JEDEC Solid State Technology Association, the global leader in the development of standards for the microelectronics industry, today announced the publication of its highly anticipated High Bandwidth Memory (HBM) DRAM standard: HBM4. Designed as an evolutionary step beyond the previous HBM3 standard, JESD270-4 HBM4 further raises data processing rates while delivering higher bandwidth, better power efficiency, and increased capacity per die and/or stack.

The advancements introduced by HBM4 are vital for applications that require efficient handling of large datasets and complex calculations, including generative artificial intelligence (AI), high-performance computing, high-end graphics cards, and servers. HBM4 introduces numerous improvements over the prior version of the standard.
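
As a sanity check on the headline numbers: the widely reported JESD270-4 figures, assumed here rather than quoted from the standard text, are a 2048-bit interface running at up to 8 Gb/s per pin. The short sketch below derives the resulting per-stack bandwidth.

```python
# Back-of-the-envelope check of HBM4's headline bandwidth, assuming the
# widely reported JESD270-4 figures: 2048-bit interface, 8 Gb/s per pin.

INTERFACE_BITS = 2048
PIN_RATE_GBPS = 8

total_gbps = INTERFACE_BITS * PIN_RATE_GBPS   # 16,384 Gb/s per stack
total_tbytes = total_gbps / 8 / 1000          # bits -> bytes, G -> T

print(f"Per-stack bandwidth: {total_gbps:,} Gb/s = {total_tbytes:.3f} TB/s")
# -> 2.048 TB/s per stack, vs HBM3's ~0.8 TB/s (1024 bits x 6.4 Gb/s)
```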

Cadence to Acquire Arm Artisan Foundation IP Business

Cadence today announced that it has entered into a definitive agreement with Arm to acquire Arm's Artisan foundation IP business, consisting of standard cell libraries, memory compilers, and general-purpose I/Os (GPIOs) optimized for advanced process nodes at the leading foundries. The transaction will augment Cadence's expanding design IP offerings, anchored by a leading portfolio of protocol and interface IP, memory interface IP, SerDes IP at the most advanced nodes, and embedded security IP from the pending Secure-IC acquisition.

By increasing its footprint in SoC designs, Cadence is reinforcing its commitment to continuously accelerate customers' time to market and to optimize their cost, power and performance on the world's leading foundry processes. Cadence will acquire the Arm Artisan foundation IP business through an asset purchase agreement with a concurrent technology license agreement, to be signed at closing and subject to any existing rights. As part of the transaction, Cadence will acquire a highly talented and experienced engineering team that is well respected in the industry and can help accelerate development of both related and new IP products.

Chinese SiCarrier Shows a Complete Silicon Manufacturing Flow: Deposition, Etching, Metrology, Inspection, and Electrical Testing

SiCarrier, a Huawei-backed Chinese semiconductor tool manufacturer, has launched a comprehensive suite of semiconductor manufacturing tools at this year's Semicon China. These tools are strategically essential to China's semiconductor self-sufficiency and a major step towards competitive nodes from the mainland. The new lineup spans multiple categories: optical inspection, deposition, etch, metrology, and electrical performance testing. Until now, Chinese chipmakers often depended on older-generation foreign equipment, but SiCarrier's new lineup promises domestic alternatives tailored to modern manufacturing processes. The tools address every stage of semiconductor fabrication, from inspecting microscopic defects to etching intricate circuits.

For quality control, SiCarrier's Color Mountain series functions like a high-powered microscope, using intense lighting and advanced imaging algorithms to examine both sides of silicon wafers for flaws as small as dust particles. Complementing this, the Sky Mountain series verifies the alignment of circuit layers, which must stack perfectly, using diffraction-based measurements (analyzing light patterns) and direct image comparisons. The New Mountain suite combines specialized tools to analyze materials at the atomic level. One standout is the atomic force microscope (AFM), which maps surface topography with a nanoscale "finger," while X-ray techniques (XPS, XRD, XRF) act like forensic tools, revealing chemical composition, crystal structure, and elemental makeup.

NVIDIA Commercializes Silicon Photonics with InfiniBand and Ethernet Switches

NVIDIA has developed co-packaged optics (CPO) technology with TSMC for its upcoming Quantum-X InfiniBand and Spectrum-X Ethernet switches, integrating silicon photonics directly onto switch ASICs. The engineering approach reduces power consumption by 3.5x and decreases signal loss from 22 dB to 4 dB compared to traditional pluggable optics, addressing critical power and connectivity limitations in large-scale GPU deployments, especially in 10,000+ GPU systems. The architecture incorporates continuous-wave laser sources within the switch chassis, consuming 2 W per port, compared to the 10 W required by conventional externally modulated lasers in pluggable modules. This configuration, combined with integrated optical engines that use 7 W versus 20 W for traditional digital signal processors, reduces total optical interconnect power from approximately 72 MW to 21.6 MW in a 400,000-GPU data center scenario.
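
The per-port numbers are consistent with the quoted totals. A minimal sketch of the arithmetic, noting that the port count is implied by the totals rather than stated in the article:

```python
# Reproducing the article's optics power arithmetic from its own figures.

PLUGGABLE_W = 10 + 20   # external modulated laser (10 W) + DSP (20 W) per port
CPO_W = 2 + 7           # CW laser (2 W) + integrated optical engine (7 W)

TRADITIONAL_TOTAL_MW = 72.0   # article figure for a 400,000-GPU data center
ports = TRADITIONAL_TOTAL_MW * 1e6 / PLUGGABLE_W   # implied optical port count

cpo_total_mw = ports * CPO_W / 1e6
print(f"Implied ports: {ports:,.0f} (~{ports / 400_000:.0f} per GPU)")
print(f"CPO total: {cpo_total_mw:.1f} MW vs {TRADITIONAL_TOTAL_MW:.0f} MW "
      f"({TRADITIONAL_TOTAL_MW / cpo_total_mw:.1f}x reduction)")
# -> 2.4 million ports (6 per GPU); 21.6 MW vs 72 MW, a ~3.3x reduction,
#    consistent with the quoted 3.5x power figure.
```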

Specifications for the Quantum 3450-LD InfiniBand model include 144 ports running at 800 Gb/s, delivering 115 Tb/s of aggregate bandwidth using four Quantum-X CPO sockets in a liquid-cooled chassis. The Spectrum-X lineup features the SN6810 with 128 ports at 800 Gb/s (102.4 Tb/s) and the higher-density SN6800 providing 512 ports at 800 Gb/s for 409.6 Tb/s total throughput. The Quantum-X InfiniBand implementation uses a monolithic switch ASIC with six CPO modules supporting 36 ports at 800 Gb/s, while the Spectrum-X Ethernet design employs a multi-chip approach with a central packet processing engine surrounded by eight SerDes chiplets. Both architectures utilize 224 Gb/s signaling per lane with four lanes per port. NVIDIA's Quantum-X switches are scheduled for availability in H2 2025, with Spectrum-X models following in H2 2026.

GUC Launches First 32 Gbps per Lane UCIe Silicon Using TSMC 3nm and CoWoS Technology

Global Unichip Corp. (GUC), the Advanced ASIC Leader, today announced the successful launch of the industry's first Universal Chiplet Interconnect Express (UCIe) PHY silicon achieving a data rate of 32 Gbps per lane, the highest speed defined in the UCIe specification. The 32G UCIe IP, supporting UCIe 2.0, delivers an impressive bandwidth density of 10 Tbps per 1 mm of die edge (5 Tbps/mm full-duplex). This milestone was achieved using TSMC's advanced N3P process and CoWoS packaging technologies, targeting AI, high-performance computing (HPC), xPU, and networking applications.

In this test chip, several dies with North-South and East-West IP orientations are interconnected through a CoWoS interposer. Silicon measurements show robust 32 Gbps operation with wide horizontal and vertical eye openings. GUC is working aggressively on full-corner qualification, and the complete silicon report is expected in the coming quarter.
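
As a rough plausibility check on that density figure, the sketch below derives how many transmit lanes per millimeter of die edge the claim implies; everything beyond the two quoted numbers is illustrative arithmetic.

```python
# What 10 Tbps/mm of die edge (5 Tbps/mm per direction) implies at 32 Gbps
# per lane. Only the two quoted figures are from the announcement.

LANE_RATE_GBPS = 32
TARGET_TBPS_PER_MM = 5   # per direction; the full-duplex figure is 10 Tbps/mm

lanes_per_mm = TARGET_TBPS_PER_MM * 1000 / LANE_RATE_GBPS
print(f"Required density: ~{lanes_per_mm:.0f} TX lanes per mm of die edge")
# -> ~156 lanes/mm in each direction, a density only reachable with the fine
#    bump pitches of advanced packaging, hence the CoWoS interposer.
```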

Chinese Researchers Develop No-Silicon 2D GAAFET Transistor Technology

Scientists from Peking University have developed the world's first two-dimensional gate-all-around field-effect transistor (GAAFET), establishing a new performance benchmark in domestic semiconductor design. The design, documented in Nature, represents a shift in transistor architecture that could reshape the future of Chinese microelectronics. The reported characteristics of 40% higher performance and 10% improved efficiency compared to TSMC's 3 nm N3 node look rather promising. The research team, headed by Professors Peng Hailin and Qiu Chenguang, engineered a "wafer-scale multi-layer-stacked single-crystalline 2D GAA configuration" that demonstrated superior performance metrics when benchmarked against current industry leaders. The innovation leverages bismuth oxyselenide (Bi₂O₂Se), a novel semiconductor material that maintains exceptional carrier mobility at sub-nanometer dimensions—a critical advantage as the industry pushes into angstrom-era semiconductor nodes.

"Traditional silicon-based transistors face fundamental physical limitations at extreme scales," explained Professor Peng, who characterized the technology as "the fastest, most efficient transistor ever developed." The 2D GAAFET architecture circumvents the mobility degradation that plagues silicon in ultra-small geometries, allowing for continued performance scaling beyond current nodes. The development comes during China's intensified efforts to achieve semiconductor self-sufficiency, as trade restrictions have limited access to advanced lithography equipment and other critical manufacturing technologies. Even with China developing domestic EUV technology, it is still not "battle" proven. Rather than competing directly with established fabrication processes, the Beijing team has pioneered an entirely different technological approach—what Professor Peng described as "changing lanes entirely" rather than seeking incremental improvements, where China can not compete in the near term.

Marvell Demonstrates Industry's Leading 2nm Silicon for Accelerated Infrastructure

Marvell Technology, Inc., a leader in data infrastructure semiconductor solutions, has demonstrated its first 2 nm silicon IP for next-generation AI and cloud infrastructure. Produced on TSMC's 2 nm process, the working silicon is part of the Marvell platform for developing custom XPUs, switches and other technology to help cloud service providers elevate the performance, efficiency, and economic potential of their worldwide operations.

Given a projected 45% annual TAM growth, custom silicon is expected to account for approximately 25% of the market for accelerated compute by 2028.

GlobalFoundries and MIT Collaborate on Photonic AI Chips

GlobalFoundries (GF) and the Massachusetts Institute of Technology (MIT) today announced a new master research agreement to jointly pursue advancements and innovations for enhancing the performance and efficiency of critical semiconductor technologies. The collaboration will be led by MIT's Microsystems Technology Laboratories (MTL) and GF's research and development team, GF Labs.

With an initial research focus on AI and other applications, the first projects are expected to leverage GF's differentiated silicon photonics technology, which monolithically integrates RF SOI, CMOS and optical features on a single chip to realize power efficiencies for datacenters, and GF's 22FDX platform, which delivers ultra-low power consumption for intelligent devices at the edge.

Chinese Mature Nodes Undercut Western Silicon Pricing, to Capture up to 28% of the Market This Year

Chinese manufacturers have seized significant market share in legacy chip production, driving prices down and creating intense competitive pressure that Western competitors cannot match. The so-called "China shock" in the semiconductor sector is unfolding as mature-node production shifts East at an accelerating rate. Legacy process nodes, typically 16/20/22/24 nm and larger, form the backbone of consumer electronics and automotive applications while providing established manufacturers with stable revenue streams for R&D investment. However, this economic framework now faces structural disruption as Chinese fabs leverage domestic demand and government support to expand capacity. By Q4 2025, Chinese facilities will control 28% of global mature chip production, with projections indicating further expansion to 39% by 2027.

This rapid capacity growth is a direct result of Beijing's strategic pivot following US export controls on advanced semiconductor equipment, which redirected investment toward mature nodes where technological barriers remain lower. In parallel, companies like SMIC, although isolated, are developing lithography solutions for cutting-edge 5 nm and 3 nm wafer production. For older nodes, the market impact appears most pronounced in specialized materials like silicon carbide (SiC). Industry-benchmark 6-inch SiC wafers from Wolfspeed were previously $1,500, compared to current $500 pricing from Guangzhou Summit Crystal Semiconductor—a 67% price compression that Western manufacturers cannot profitably match. Multiple semiconductor firms report significant financial strain from this pricing pressure. Wolfspeed has implemented 20% workforce reductions following a 96% market capitalization decline, while Onsemi recently announced 9% staff cuts. As Chinese capacity expands further into the mature-node category, Western companies cannot keep up with the lowered costs of what is becoming a commodity.

PsiQuantum Announces Omega, a Manufacturable Chipset for Photonic Quantum Computing

PsiQuantum today announces Omega, a quantum photonic chipset purpose-built for utility-scale quantum computing. Featured in a newly published paper in Nature, the chipset contains all the advanced components required to build million-qubit-scale quantum computers and deliver on the profoundly world-changing promise of this technology. Every photonic component is demonstrated with beyond-state-of-the-art performance. The paper shows high-fidelity qubit operations, and a simple, long-range chip-to-chip qubit interconnect - a key enabler to scale that has remained challenging for other technologies. The chips are made in a high-volume semiconductor fab, representing a new level of technical maturity and scale in a field that is often thought of as being confined to research labs. PsiQuantum will break ground this year on two datacenter-sized Quantum Compute Centers in Brisbane, Australia and Chicago, Illinois.

"For more than 25 years it has been my conviction that in order for us to realize a useful quantum computer in my lifetime, we must find a way to fully leverage the unmatched capabilities of the semiconductor industry. This paper vindicates that belief."—Prof. Jeremy O'Brien, PsiQuantum Co-founder & CEO.

Apple to Spend More Than $500 Billion in the U.S. Over the Next Four Years

Apple today announced its largest-ever spend commitment, with plans to spend and invest more than $500 billion in the U.S. over the next four years. This new pledge builds on Apple's long history of investing in American innovation and advanced high-skilled manufacturing, and will support a wide range of initiatives that focus on artificial intelligence, silicon engineering, and skills development for students and workers across the country.

"We are bullish on the future of American innovation, and we're proud to build on our long-standing U.S. investments with this $500 billion commitment to our country's future," said Tim Cook, Apple's CEO. "From doubling our Advanced Manufacturing Fund, to building advanced technology in Texas, we're thrilled to expand our support for American manufacturing. And we'll keep working with people and companies across this country to help write an extraordinary new chapter in the history of American innovation."

NVIDIA to Consume 77% of Silicon Wafers Dedicated to AI Accelerators in 2025

Investment bank Morgan Stanley has estimated that an astonishing 77% of all silicon wafers produced globally for AI accelerators will be consumed by none other than NVIDIA. Investment research by large banks like Morgan Stanley often incorporates information from the semiconductor supply chain, which is constantly expanding to meet NVIDIA's demands. Looking at wafer volume for AI accelerators, it is estimated that in 2024 NVIDIA captured nearly 51% of wafer consumption for its chips, more than half of all demand. Growing to 77% represents more than a 50% year-over-year increase in share, which is remarkable for a company of NVIDIA's size. Right now, NVIDIA is phasing out its H100 accelerators in favor of Blackwell B100/B200 and the upcoming 300 series of GPUs paired with Grace CPUs.

NVIDIA is accelerating its product deployment timeline and investing heavily in internal research and development. Morgan Stanley also projects that NVIDIA will invest almost $16 billion in its R&D budget, enough to sustain four to five years of development cycles, running three design teams sequentially while still delivering new products on an 18-24 month cadence. The scale of this efficiency and development rivals anyone else in the industry. NVIDIA's Q4 revenue report arrives in exactly a week, on February 26, so we will have to see what CEO Jensen Huang delivers, along with estimates for the coming months.

Huawei Delivers Record $118 Billion Revenue with 22% Yearly Growth Despite US Sanctions

Huawei Technologies reported a robust 22% year-over-year revenue increase for 2024, reaching 860 billion yuan ($118.27 billion), demonstrating remarkable resilience amid continued US-imposed trade restrictions. The Chinese tech giant's resurgence was primarily driven by its revitalized smartphone division, which captured 16% of China's domestic market share, overtaking Apple in regional sales. This achievement was notably accomplished by deploying domestically produced chipsets, marking a significant milestone for the company. In collaboration with China's SMIC, Huawei delivers in-house silicon that integrates with HarmonyOS for complete vertical integration. The company's strategic diversification into automotive technology has emerged as a crucial growth vector, with its smart-car solutions unit delivering autonomous driving software and specialized chips to Chinese EV manufacturers.

In parallel, Huawei's Ascend 910B/C AI platform recently announced compatibility with DeepSeek's R1 large language model and availability on Chinese AI cloud providers like SiliconFlow. Through a strategic partnership with AI infrastructure startup SiliconFlow, Huawei is enhancing its Ascend cloud service capabilities, further strengthening its competitive position in the global AI hardware market despite ongoing international trade challenges. Even if the company cannot compete on performance with the latest solutions from NVIDIA and AMD, owing to the lack of the advanced manufacturing required for AI accelerators, it can compete on cost and deliver solutions with a much more competitive price/performance ratio. Huawei's Ascend AI solutions deliver modest performance, but the pricing makes AI model inference very cheap, with API costs of around one yuan per million input tokens and four yuan per million output tokens on DeepSeek R1.

Osaka Scientists Unveil 'Living' Electrodes That Can Enhance Silicon Devices

Shrinking components was (and still is) the main way to boost the speed of electronic devices; however, as devices get tinier, making them becomes trickier. A group of scientists from SANKEN (The Institute of Scientific and Industrial Research) at Osaka University has discovered another method to enhance performance: placing a special metal layer known as a metamaterial on top of a silicon base to make electrons move faster. This approach shows promise, but the tricky part is managing the metamaterial's structure so it can adapt to real-world needs.

To address this, the team looked into vanadium dioxide (VO₂). When heated, VO₂ changes from non-conductive to metallic, allowing it to carry electric charge like small adjustable electrodes. The researchers used this effect to create 'living' microelectrodes, which made silicon photodetectors better at spotting terahertz light. "We made a terahertz photodetector with VO₂ as a metamaterial. Using a precise method, we created a high-quality VO₂ layer on silicon. By controlling the temperature, we adjusted the size of the metallic regions—much larger than previously possible—which affected how the silicon detected terahertz light," says lead author Ai I. Osaka.

Apple's Upcoming M5 SoC Enters Mass Production

Apple's M4 SoC was released to overwhelmingly positive reviews, particularly regarding the commendable performance and efficiency benefits it brought to the table. The chip first appeared in the OLED iPad Pro lineup last May, arriving in the company's MacBook Pro lineup only much later, giving Intel's Lunar Lake and AMD's Strix Point a run for their money. Now, it appears that the company is cognizant of the heat brought by AMD's Strix Halo, and has already commenced mass production for the first SoC in the M5 family - the vanilla M5, according to Korean news outlet ET News.

Just like last time, the M5 SoC has been repeatedly rumored to first arrive in the next-generation iPad Pro, scheduled to enter production sometime in the second half of this year. The MacBook Pro will likely be next in line for the M5 treatment, followed by the rest of the lineup, as per tradition. Interestingly, although Apple decided against using TSMC's 2 nm process for this year's chips, the higher-tier variants, including the M5 Pro and M5 Max, are expected to utilize TSMC's SoIC-mH technology, allowing for vertical stacking of chips that should ideally benefit thermals, and possibly even allow for better and larger GPUs thanks to the separation of the CPU and GPU portions. Consequently, yields will also improve, which will allow Apple to bring costs down.

Huawei Ascend 910B Accelerators Power Cloud Infrastructure for DeepSeek R1 Inference

When High-Flyer, the hedge fund behind DeepSeek, debuted its flagship model, DeepSeek R1, the tech world was shaken. No one expected Chinese AI companies could produce a high-quality AI model that rivals the best from OpenAI and Anthropic. While there are rumors that DeepSeek has access to 50,000 NVIDIA "Hopper" GPUs, including H100, H800, and H20, it appears Huawei is ready to power Chinese AI infrastructure with its own accelerators. According to the South China Morning Post, Chinese cloud providers like SiliconFlow.cn are offering DeepSeek AI models for inference on Huawei Ascend 910B accelerators. At a price of only one yuan per million input tokens and four yuan per million output tokens, this economic model of AI hosting fundamentally undercuts competitors such as US-based cloud providers that offer DeepSeek R1 for $7 per million tokens.
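
To put the two price schemes side by side, here is a rough comparison for a hypothetical workload of one million input and one million output tokens; the exchange rate and the assumption that the US flat rate applies to both directions are ours, not the article's.

```python
# Rough cost comparison of the hosting prices cited above.

CNY_PER_USD = 7.2        # assumed exchange rate, for illustration only
ASCEND_IN_CNY = 1.0      # yuan per million input tokens (article figure)
ASCEND_OUT_CNY = 4.0     # yuan per million output tokens (article figure)
US_FLAT_USD = 7.0        # USD per million tokens (article figure)

# Example workload: one million input tokens plus one million output tokens.
ascend_usd = (ASCEND_IN_CNY + ASCEND_OUT_CNY) / CNY_PER_USD
us_usd = US_FLAT_USD * 2  # assumes the flat rate applies to both directions

print(f"Ascend-hosted: ${ascend_usd:.2f} vs US-hosted: ${us_usd:.2f} "
      f"(~{us_usd / ascend_usd:.0f}x cheaper)")
# -> roughly $0.69 vs $14.00 for this mix, about 20x cheaper.
```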

Not only is running on the Huawei Ascend 910B cheaper for cloud providers, but we also reported that it is cheaper for DeepSeek itself, which serves its chat app on the Huawei Ascend 910C. Using domestic accelerators lowers the total cost of ownership, with savings passed down to users. If Western clients prefer AI inference to be served by Western companies, they will have to pay a heftier price tag, often backed by the high prices of GPUs like NVIDIA H100, B100, and AMD Instinct MI300X.

Patriot Unveils the New PDP31 Portable PSSD

Patriot proudly announces the launch of the PDP31, a portable SSD (PSSD) designed specifically for the mobile lifestyle. Supporting the USB 3.2 Gen 2 standard, the PDP31 boasts a high-speed transfer rate of up to 10 Gbps, with exceptional sequential read and write speeds of up to 1000 MB/s, making it the ideal choice for tech enthusiasts and professionals alike.

To ensure ample storage for all your important files, the PDP31 is available in capacities ranging from 120 GB to 2 TB, making it perfect for storing everything from creative video projects to large work files. It also features a versatile USB Type-C interface for easy connection to desktops, MacBooks, Windows laptops, tablets, and smartphones.

GlobalFoundries Announces New York Advanced Packaging and Photonics Center

GlobalFoundries (Nasdaq: GFS) (GF) today announced plans to create a new center for advanced packaging and testing of U.S.-made essential chips within its New York manufacturing facility. Supported by investments from the State of New York and the U.S. Department of Commerce, the first-of-its-kind center aims to enable semiconductors to be securely manufactured, processed, packaged and tested entirely onshore in the United States to meet the growing demand for GF's silicon photonics and other essential chips needed for critical end markets including AI, automotive, aerospace and defense, and communications.

Growth in AI is driving the adoption of silicon photonics and 3D and heterogeneously integrated (HI) chips to meet power, bandwidth and density requirements in datacenters and edge devices. Silicon photonics chips are also positioned to address power and performance needs in automotive, communications, radar, and other critical infrastructure applications.

Apple Silicon Macs Gain x86 Emulation Capability, Run x86 Windows Apps on macOS

Parallels has announced the introduction of x86 emulation support in Parallels Desktop 20.2.0 for Apple Silicon Macs. This new feature enables users to run x86-based virtual machines on their M-series Mac computers, addressing a longstanding limitation since Apple's transition to its custom Arm-based processors. The early technology preview allows users to run Windows 10, Windows 11 (with some restrictions), Windows Server 2019/2022, and various Linux distributions through a proprietary emulation engine. This development particularly benefits developers and users who need to run 32-bit Windows applications or prefer x86-64 Linux virtual machines as an alternative to Apple Rosetta-based solutions.

However, Parallels is transparent about the current limitations of this preview release. Performance is notably slow, with Windows boot times ranging from 2 to 7 minutes, and overall system responsiveness remains low. The emulation only supports 64-bit operating systems, though it can run 32-bit applications. Additionally, USB device support is not available, and users must rely on Apple's hypervisor, as the Parallels hypervisor isn't compatible. Despite these constraints, the release is a crucial step forward in bridging the compatibility gap for Apple Silicon Mac users, allowing legacy software to remain usable. To manage expectations, the option to start these virtual machines is hidden in the user interface, as the feature is still imperfect.