News Posts matching #Arm

European HPC Processor "Rhea1" Tapes Out, Launch Delayed to 2026

The European Processor Initiative (EPI) is nearing completion of its first goal. SiPearl, the leading developer behind the Rhea1 processor, has finally reached the tapeout stage after a string of delays, but the chip will not be ready for delivery until 2026 at the earliest. When the project launched in 2020, SiPearl planned to begin production in 2023; however, the 61 billion-transistor chip only entered tapeout this summer. The design, built on TSMC's N6 process, features 80 Arm Neoverse V1 cores alongside 64 GB of HBM2E memory and a DDR5 interface. While these specifications once looked cutting-edge, the industry has already moved on, and Rhea1's raw performance may seem dated by the time samples are available. SiPearl initially explored a RISC-V architecture back in 2019 but abandoned it after early feedback highlighted the instruction set's immaturity for exascale computing.

Development was further interrupted by shifting core-count debates, with teams alternately considering 72 cores, then 64, before finally settling on 80 cores by 2022. Those back-and-forth decisions, combined with evolving performance expectations, helped push the timeline back by years. Despite missing its original schedule, Rhea1 remains vital to European ambitions for high-performance computing sovereignty and serves as the intended CPU for the Jupiter supercomputer. Thanks to Jupiter's modular design, the system was not left idle; its GPU booster module, running NVIDIA Grace Hopper accelerators, is already operational and approximately 80 percent complete. With the CPU clusters slated for mid-2026 deployment, full system readiness is expected by the end of 2026. To support this effort, SiPearl recently secured €130 million in new financing from the French government, industry partners, and Taiwan's Cathay Venture. With Rhea1's tapeout complete, work on Rhea2 is already underway, and more news about it can be expected in a year or two.

Cadence Introduces Industry-First LPDDR6/5X 14.4 Gbps Memory IP to Power Next-Generation AI Infrastructure

Cadence today announced the tapeout of the industry's first LPDDR6/5X memory IP system solution optimized to operate at 14.4 Gbps, up to 50% faster than the previous generation of LPDDR DRAM. The new Cadence LPDDR6/5X memory IP system solution is a key enabler for scaling up AI infrastructure to accommodate the memory bandwidth and capacity demands of next-generation AI LLMs, agentic AI, and other compute-heavy workloads across various verticals. Multiple engagements are currently underway with leading AI, high-performance computing (HPC) and data center customers.
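
As a quick sanity check, the headline "up to 50% faster" figure lines up with LPDDR5X's 9.6 Gbps per-pin ceiling. A minimal sketch in Python (the 9.6 Gbps baseline is our assumption, not a figure from Cadence's announcement):

```python
# Back-of-envelope check of the generational uplift, assuming the
# previous-generation baseline is LPDDR5X's 9.6 Gbps per-pin peak.
lpddr5x_gbps = 9.6   # per-pin data rate, previous generation (assumed)
lpddr6_gbps = 14.4   # per-pin data rate of the new Cadence IP

uplift = lpddr6_gbps / lpddr5x_gbps - 1
print(f"Per-pin speedup: {uplift:.0%}")  # -> 50%
```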

The Cadence IP for the JEDEC LPDDR6/5X standard consists of an advanced PHY architecture and a high-performance controller designed to maximize power, performance and area (PPA) while supporting both LPDDR6 and LPDDR5X DRAM protocols for optimal flexibility. The solution supports native integration into traditional monolithic SoCs as well as multi-die system architectures by leveraging the Cadence chiplet framework, enabling heterogeneous chiplet integration. The chiplet framework, including the previous LPDDR generation, was successfully taped out in 2024.

Steam Deck & Nintendo Switch Dominate Among Gamers Who Use Handhelds

TechPowerUp conducted a community poll to find out how the market for handheld gaming devices is performing and where its users are heading. The poll of 22,649 PC gamers, asking a simple "Do you game on a handheld console?", paints a solid picture of the customer base a handheld maker can expect. The majority, 65.3% of polled gamers, chose "No," indicating that two-thirds of PC gamers stick to their main desktop or notebook PCs without an additional handheld. Among the 34.7% of respondents (7,852 votes) who game on the go, Valve's Steam Deck leads with 2,798 votes (35.6%), narrowly edging out Nintendo's Switch at 2,785 votes (35.5%).

ASUS's ROG Ally follows with 913 votes (11.6%), while "Other" devices, including Android emulators, retro-focused units like the Analogue Pocket, and various mini-PC handhelds, account for 810 votes (10.3%). Boutique Windows handhelds trail further behind, with the Lenovo Legion Go claiming 280 votes (3.6%) and the MSI Claw 266 votes (3.4%). Of this entire fleet, only the Nintendo Switch is a true console; the rest are miniature portable PCs whose functionality extends far beyond that of a console. Gamers are fond of that added functionality, which is why the Linux-based Steam Deck and the Windows handhelds from ASUS, MSI, Lenovo, and others are so popular.
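
For clarity, the quoted device shares are fractions of the 7,852 handheld users, not of the full sample; the short Python sketch below reproduces every percentage in the two paragraphs above:

```python
# Reproducing the quoted percentages; device shares are computed against
# the 7,852 handheld users, not the full 22,649-respondent sample.
total_votes = 22_649
votes = {"Steam Deck": 2_798, "Nintendo Switch": 2_785, "ROG Ally": 913,
         "Other": 810, "Legion Go": 280, "MSI Claw": 266}

handheld_votes = sum(votes.values())                          # 7,852
print(f"Handheld users: {handheld_votes / total_votes:.1%}")  # ~34.7%
for device, n in votes.items():
    print(f"{device}: {n / handheld_votes:.1%}")              # e.g. Steam Deck ~35.6%
```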

Intel's Server Share Slips to 67% as AMD and Arm Widen the Gap

In just a few years, AMD has gone from the underdog to Intel's most serious challenger in the server world. Thanks to its EPYC processors, AMD now captures about a third of every dollar spent on server CPUs, up from essentially zero in 2017. Over that same period, Intel's share has slipped from nearly 100% to roughly 63%, signaling a significant shift in what companies choose to power their data centers. The real inflection point came with AMD's Zen architecture: by mid-2020, EPYC had already claimed more than 10% of server-CPU revenues. Meanwhile, Intel's rollout of Sapphire Rapids Xeons encountered delays and manufacturing issues, leaving customers to look elsewhere. By late 2022, AMD was over the 20% mark, and Intel found itself under 75% for the first time in years.

Looking ahead, analysts at IDC and Mercury Research, with data compiled by Bank of America, expect AMD's slice of the revenue pie to grow to about 36% by 2025, while Intel drops to around 55%. Arm-based server chips are also starting to make real inroads, forecast to account for roughly 9% of CPU revenue next year as major cloud providers seek more energy- and cost-efficient options. By 2027, AMD could approach a 40% revenue share, Intel may fall below half the market, and Arm designs could capture 10-12%. Remember that these figures track revenue rather than unit sales: AMD's gains come primarily from high-end, high-core-count processors, whereas Intel still shifts plenty of lower-priced models. With AMD's high-core-count Genoa and Bergamo EPYCs doing the heavy lifting and Intel banking on its E-core Xeon 6 series to regain its footing, the fight for server-CPU supremacy is far from over. Still, Intel's once-unbeatable lead is clearly under threat.

Samsung Exynos 2500 Benchmarks Put New SoC Close to Qualcomm Competition but Still Slower

Samsung's Exynos 2500 SoC has appeared on Geekbench, this time giving us a clearer indication of what to expect from the upcoming chip that will power the next generation of Samsung flagship smartphones. Three runs have appeared on Geekbench in total, scoring between 2303 and 2356 points in the single-core Geekbench 6 benchmark and between 8062 and 8076 points in the multi-core benchmark. Meanwhile, the Qualcomm Snapdragon 8 Elite in the current-generation Samsung Galaxy S25 Ultra manages a single-core score of 2883 and a multi-core score of 9518 on the same Geekbench 6 benchmark. Samsung recently made the Exynos 2500 public, with the spec sheet revealing a Samsung Xclipse 950 GPU paired with 10 Arm Cortex CPU cores (1× Cortex-X5, 2× Cortex-A725 at 2.74 GHz, 5× Cortex-A725 at 2.36 GHz, and 2× Cortex-A520 at 1.8 GHz).
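
A quick comparison of the best leaked run against the Snapdragon 8 Elite's scores puts the gap in perspective:

```python
# Positioning the Exynos 2500 against the Snapdragon 8 Elite, using the
# best of the three leaked Geekbench 6 runs quoted above.
exynos_sc, exynos_mc = 2356, 8076
snapdragon_sc, snapdragon_mc = 2883, 9518

print(f"Single-core deficit: {1 - exynos_sc / snapdragon_sc:.1%}")  # ~18.3%
print(f"Multi-core deficit:  {1 - exynos_mc / snapdragon_mc:.1%}")  # ~15.2%
```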

The new SoC is reportedly the first chip to use Samsung's 3 nm GAA process, and leaks suggest that Samsung may be using the new SoC across its entire next-gen global smartphone line-up, starting with the launch of the Galaxy Z Flip 7. This would be a stark departure from previous releases, where the US versions of the Galaxy S line-up featured Qualcomm Snapdragon processors, with the international Galaxy S smartphones packing the in-house Exynos designs. In recent years, however, Samsung has pivoted to using Snapdragon SoCs across all regions.

ASUS Announces Key Milestone with Nebius and Showcases NVIDIA GB300 NVL72 System at GTC Paris 2025

ASUS today joined GTC Paris at VivaTech 2025 as a Gold Sponsor, highlighting its latest portfolio of AI infrastructure solutions and reinforcing its commitment to advancing the AI Factory vision with a full range of NVIDIA Blackwell Ultra solutions, delivering breakthrough performance from large-scale datacenter to personal desktop.

ASUS is also excited to announce a transformative milestone in its partnership with Nebius. Together, the two companies are enabling a new era of AI innovation built on NVIDIA's advanced platforms. Building on the success of the NVIDIA GB200 NVL72 platform deployment, ASUS and Nebius are now moving forward with strategic collaborations featuring the next-generation NVIDIA GB300 NVL72 platform. This ongoing initiative underscores ASUS's role as a key enabler in AI infrastructure, committed to delivering scalable, high-performance solutions that help enterprises accelerate AI adoption and innovation.

NVIDIA N1x is the Company's Arm Notebook Superchip

We've known since 2023 that NVIDIA is working on an Arm-based notebook SoC, and now we're seeing the first signs of the chip. A processor labelled "NVIDIA N1x" surfaced on the Geekbench 6.2.2 online database, where it scored 3096 points in the single-threaded benchmark and 18837 points in the multithreaded benchmark. The chip is shown powering an HP-branded prototype notebook, labelled "HP 8EA3," running Geekbench on Ubuntu 24.04.1 LTS. Geekbench identifies the processor as having 20 logical processors, pointing to a core count of 20, possibly arranged in a multi-tiered big.LITTLE configuration. The reported clock speed is 2.81 GHz. The company could implement reference Arm cores, such as Cortex-X925 P-cores and Cortex-A725 E-cores. The HP testbed used for the Geekbench run has a whopping 128 GB of RAM.
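
The two scores also hint at how those 20 cores scale; Geekbench multi-core results rarely scale linearly, and a roughly 6x ratio is plausible for a mixed big.LITTLE configuration on early silicon:

```python
# Multi-to-single-threaded scaling implied by the leaked N1x run.
single_score, multi_score, cores = 3096, 18837, 20
print(f"MT/ST ratio: {multi_score / single_score:.1f}x across {cores} cores")  # ~6.1x
```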

NVIDIA has been eyeing a specific slice of the PC pie that's addressed by Qualcomm with its Snapdragon Elite line of processors for Windows-on-Arm notebooks, complete with an NPU accelerating Microsoft Copilot+ on device. The N1x could also compete with Apple's M3 and M4 chips powering its iPad Pro and MacBook lines. For now, Microsoft has confined Arm-based Copilot+ to Snapdragon processors, but NVIDIA will probably work with Microsoft to open up the platform to its chips. NVIDIA has been an Arm SoC maker for decades; its first foray into client-segment Arm SoCs came under the Tegra brand, powering Android smartphones and tablets. The company has continued making Arm CPUs ever since, but for the enterprise segment (e.g., the Grace CPU).

ASUS Announces the New ROG Xbox Ally and ROG Xbox Ally X Gaming Handhelds

ASUS Republic of Gamers (ROG) is proud to announce an all-new series of Ally handhelds built from the ground up with improved ergonomics and a seamless player-first user experience. Developed in partnership with the incredible team at Xbox, the new ROG Xbox Ally and ROG Xbox Ally X offer best-in-class ergonomics and a full-screen Xbox experience that marries the best of Xbox and PC gaming in one cohesive package.

"We wanted to take our handheld to the next level, but we could not do it alone." said Shawn Yen, Head of the Consumer product team at ASUS. "This revolutionary partnership with Microsoft allowed us to forge a brand new device with ROG muscle and the soul of Xbox." The ROG Xbox Ally sports an AMD Ryzen Z2 A Processor with incredible power efficiency, while the ROG Xbox Ally X offers the new AMD Ryzen AI Z2 Extreme Processor for next-level gaming performance. Both launch holiday 2025 in select markets, with additional markets to follow.

EdgeCortix SAKURA-II Enables GenAI on Raspberry Pi 5 and Arm Systems

EdgeCortix Inc., a leading fabless semiconductor company specializing in energy-efficient Artificial Intelligence (AI) processing at the edge, today announced that its industry-leading SAKURA-II AI accelerator M.2 module is now available for Arm-based platforms, including the Raspberry Pi 5 and AETINA's Rockchip (RK3588) platform, delivering unprecedented performance and efficiency for edge AI computing applications.

This powerful integration marks a major leap in democratizing real-time Generative AI capabilities at the edge. Designed with a focus on low power consumption and high AI throughput, the EdgeCortix SAKURA-II M.2 module enables developers to run advanced deep learning models directly on compact, affordable platforms like the Raspberry Pi 5—without relying on cloud infrastructure.

Arm's Accuracy Super Resolution (ASR) Upscaler Lands in Fortnite

Delivering a good visual experience on mobile devices remains a significant engineering challenge: limited GPU power, stricter memory bandwidth, and tighter thermal constraints all threaten to undermine Fortnite's signature smooth frame rates and high-fidelity visuals. To combat these challenges, Epic Games is partnering with Arm to integrate Arm's Accuracy Super Resolution (ASR) upscaling technology into Fortnite Mobile, making it the first ASR-enhanced title. Rather than overhauling Fortnite's existing rendering pipeline, Epic is embedding ASR through a dedicated Unreal Engine 5 plug-in, which will be compatible with both Android and iOS devices. By leveraging a temporal upscaling approach rooted in AMD's FidelityFX Super Resolution 2 framework, ASR analyzes multiple frames to reconstruct a higher-quality image.
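
To illustrate the general idea behind FSR2-style temporal upscaling (a heavily simplified sketch, not Arm's actual implementation), the snippet below accumulates sub-pixel-jittered low-resolution frames into a persistent high-resolution history buffer:

```python
import numpy as np

def temporal_upscale(history, low_res, jitter, scale=2, alpha=0.1):
    """Splat each jittered low-res sample to its high-res position and
    blend it into the history buffer. Production upscalers such as FSR2
    add motion-vector reprojection, depth tests, and neighborhood
    clamping to reject stale history; those steps are omitted here."""
    out = history.copy()
    h, w = low_res.shape
    for y in range(h):
        for x in range(w):
            # The per-frame sub-pixel jitter makes successive frames land
            # on different high-res pixels, gradually filling in detail.
            hy = min(int((y + jitter[1]) * scale), out.shape[0] - 1)
            hx = min(int((x + jitter[0]) * scale), out.shape[1] - 1)
            out[hy, hx] = (1 - alpha) * out[hy, hx] + alpha * low_res[y, x]
    return out

# Feed a few frames with varying jitter; detail accumulates over time.
history = np.zeros((8, 8))
for jitter in [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]:
    history = temporal_upscale(history, np.random.rand(4, 4), jitter)
```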

Early demonstrations at GDC 2025 showed that devices using Arm's Immortalis-G720 GPU can achieve up to a 53% increase in frame rates while reducing power consumption by approximately 20%. Consequently, gamers can look forward to longer play sessions without worrying about overheating or excessive battery drain. For Fortnite players, ASR's integration translates into noticeably sharper textures in fast-paced encounters, crisper detail when surveying distant environments, and fewer visible artifacts overall. Importantly, these improvements are achieved without sacrificing artistic intent: Epic's artists and engineers retain full control over color accuracy and visual effects, even as the game renders at a lower internal resolution. Tests in collaboration with MediaTek further confirmed similar power savings on Dimensity 9300 chipsets, addressing one of the most pressing mobile concerns: battery life.
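
Taken together, those two figures imply a near-doubling of energy efficiency, assuming the 53% frame-rate gain and the 20% power cut were measured on the same workload (our assumption):

```python
# Implied performance-per-watt gain from the GDC 2025 figures.
fps_gain, power_cut = 0.53, 0.20
perf_per_watt = (1 + fps_gain) / (1 - power_cut)
print(f"Performance per watt: {perf_per_watt:.2f}x")  # ~1.91x
```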

NVIDIA's Arm-Based Gaming SoC to Debut in Alienware Laptops

NVIDIA plans to introduce its first Arm-based "N1/N1x" gaming SoC in Dell's Alienware laptops later this year or early 2026, according to Taiwanese reports. The SoC is being developed with MediaTek, combining Arm-derived CPU cores and NVIDIA's Blackwell GPU architecture. Early rumors suggest that NVIDIA's new SoC will operate within an 80 W to 120 W power range, positioning it among existing high-performance laptop chips. When Qualcomm entered the Arm-based laptop market with its Snapdragon X-series, it faced challenges because many titles required emulation through Microsoft's Prism framework, leading to compatibility issues and lower frame rates on Arm-based Windows devices. NVIDIA plans to work closely with Microsoft and game developers to ensure that Arm compatibility is present from day one, an effort every Arm SoC maker stands to benefit from.

Rumors of an Arm-centric NVIDIA chip first appeared in 2023, and recent leaks suggest an engineering prototype already exists. During an earnings presentation earlier this year, NVIDIA CEO Jensen Huang announced that the company plans to integrate Arm CPU blocks into AI-oriented hardware, specifically mentioning the Digits compute system. Dell's CEO, Michael Dell, also hinted at a future AI-capable PC collaboration with NVIDIA, fueling speculation that Alienware will be the first to use the new chip. Beyond gaming, the partnership with MediaTek could lead to broader Arm solutions for both desktops and mobile devices. MediaTek is reportedly working on its own Arm-based PC processors, and AMD is exploring Arm architectures for future Surface devices. NVIDIA's entry into this space could turn Dell's Alienware laptops into a practical testbed for high-performance Arm technology in a market long dominated by x86 silicon.

Xiaomi XRING O1 SoC Die Shot Analyzed by Chinese Tech YouTuber

Three weeks ago, Kurnal and Geekerwan dived deep into Nintendo's alleged Switch 2 chipset. The very brave Chinese leakers are notorious for acquiring pre-release and early silicon samples. Last week, their collective attention turned to a brand-new Xiaomi mobile chip: the XRING O1. After months of insider murmurs and official teasers, the smartphone giant recently unveiled its proprietary flagship SoC. According to industry moles, Xiaomi has invested considerable manpower in a dedicated chip-design entity—leadership likely wants to avoid a repeat of prior first-party silicon disappointments. Despite rumors of disappointing prototype performance figures, mid-May Geekbench results pointed to the emergent XRING O1 mobile chip being up there with Qualcomm's dominant Snapdragon 8 Elite platform. Die shot analysis has confirmed Xiaomi's selection of TSMC's 3 nm "N3E" node, also utilized by the latest Apple, Qualcomm, and MediaTek flagships. Overall die size is 114.48 mm² (10.8 × 10.6 mm), with 109.5 mm² of used area—comparable to Apple's A18 Pro SoC footprint, per Geekerwan's comparison.

Unlike rival flagships, the XRING O1 does not appear to sport an integrated 5G modem. Notebookcheck surmised: "it is rumored to use an external radio from MediaTek. It isn't located on the actual die itself, and likely a contributing factor to why its size is so small." Annotations indicate the presence of off-the-shelf/licensed Arm CPU cores (ten in total): two Cortex-X925 units, four Cortex-A725 units, two further Cortex-A725 units (likely clocked lower), and two Cortex-A520 units. Additionally, an Arm Immortalis-G925 MP16 iGPU was identified. A 6-core NPU—with 16 MB of cache—was highlighted, but it is not clear whether this is a proprietary effort or something bought in. Observers have noted the absence of SLC cache. GSMArena posited: "the Geekerwan team speculates that (Xiaomi's) omission of the SLC has hurt GPU efficiency—it's pretty fast, but it uses more power than the Dimensity GPU at peak performance. The more efficient CPU combined with the fact that the GPU rarely runs at full tilt makes for pretty good overall efficiency in real-life gaming tests." The XRING department's debut product is impressive, but industry watchers are looking forward to refined variants or full-fledged successors.

Xiaomi Envisions Proprietary Chipset Designs Being Deployed in non-Flagship Mobile Devices

Last Thursday, Xiaomi revealed its proprietary XRING O1 3 nm mobile chipset. After months of rumors, the Chinese firm's highly anticipated first-party chip design was introduced during its special "A New Beginning" event, held in Beijing. During this multipronged product launch celebration, company leadership disclosed the underpinnings of the firm's first-ever flagship processor. According to official descriptions, Xiaomi's pivotal XRING O1 SoC is built on "a cutting-edge second-gen 3 nm process with 19 billion transistors, features a 10-core CPU and 16-core Immortalis-G925 GPU, delivering flagship performance with industry-leading power efficiency. It also integrates Xiaomi's fourth-gen ISP and a 6-core NPU offering 44 TOPS for advanced AI processing." Days prior to the Beijing ceremonies, a joint statement issued by Qualcomm detailed an extended Snapdragon chipset supply agreement. The XRING O1 processor line will drive forthcoming Xiaomi 15S Pro smartphones and Pad 7 Ultra tablets, both reserved for initial "domestic market" launches. Qualcomm's current flagship offerings are technically superior to Xiaomi's fresh effort, but an ever-shifting political landscape could affect future shipments.

Lu Weibing—Xiaomi's group president and partner—has outlined a vision for XRING's eventual expansion beyond a flagship/high-end product tier. Last week's introduction firmly positioned the 3 nm part as a premium option that will power suitably expensive Android-based mobile devices. Lu acknowledged that his team has jumped into the deep end: "(for) this platform capability, it is most difficult to work on smartphone flagship SoC, it has high power consumption demand and its technology is extremely complicated. If you can, then you should have the ability to work on flagship smartphone SoC. (Once you) move to work on other chips, it won't be that difficult." Industry moles posit that Xiaomi's XRING division is already a formidable force in terms of staff headcount and experience. The department could be taking some inspiration from Apple, namely its custom C1 modem chip. The firm's president painted a picture of things to come: "so we want to focus on the flagship SoC, and then we want to make a capable modem for the future. We have to work on 4G and 5G parts—together with 3G—leading to a complete matrix. So that is what we need to do at this stage." Early leaks have indicated the existence of a binned version of the XRING O1 SoC, present within early Xiaomi Pad 7 Ultra tablet samples. In theory, these cut-down chips could be deployed in unannounced cheaper products.

Palit Showcases Pandora NXNano with 157 AI TOPS at Computex 2025

At Computex 2025, Palit showed off its latest Pandora NXNano mini PC, bringing the power of the NVIDIA Jetson Orin NX Super to a truly pocket-sized AI platform. At its heart sits an eight-core Arm Cortex-A78AE CPU paired with a 1,024-core Ampere-architecture GPU (32 Tensor Cores), delivering up to 157 TOPS of sparse AI throughput and 78 TOPS dense performance. Up to 16 GB of LPDDR5 memory (102.4 GB/s bandwidth) and a pre-installed 128 GB PCIe Gen 4 SSD ensure both data-intensive models and large datasets run smoothly. Connectivity is generous: dual 10/100/1000 Mbps Ethernet, two USB 3.2 Gen 2 Type-A ports, a USB 3.2 Gen 2 Type-C OTG port, plus USB 2.0 and HDMI 2.0 outputs. Four M.2 slots (for storage, Wi-Fi, 5G/LTE or video-capture cards), an 8-lane MIPI CSI-2 camera interface, and headers for I²C, SPI, UART, GPIO, and CAN Bus round out a very flexible I/O package.
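
Two of the quoted figures can be cross-checked. The sparse TOPS number is double the dense one because NVIDIA's 2:4 structured sparsity skips half the multiply-accumulates, and 102.4 GB/s matches the Jetson Orin NX's known 128-bit LPDDR5 interface at 6400 MT/s:

```python
# Sparse vs. dense AI throughput: 2:4 structured sparsity doubles peak TOPS.
sparse_tops, dense_tops = 157, 78
print(f"Sparse/dense ratio: {sparse_tops / dense_tops:.2f}x")  # ~2x

# Memory bandwidth: bus width (bytes) x transfer rate.
bus_bits, mtps = 128, 6400
print(f"Bandwidth: {bus_bits / 8 * mtps / 1000:.1f} GB/s")  # 102.4
```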

Housed in a sleek 145 × 123 × 66 mm chassis, the NXNano balances performance and thermals with "superior thermal design" that includes two 50 mm high-efficiency fans, ensuring sustained workloads remain cool even under heavy AI inference. Removable base and side panels allow easy customization, whether adding 3D-printed shells or extra modules. The rugged aluminium frame accommodates DC input from 12 to 36 V. At just 470 g, this DIY-friendly unit is ideal for edge deployments in retail, digital signage, robotics, and education, putting enterprise-grade AI in a truly compact form factor.

Xsight Labs Announced Availability of Its Arm-Based E1-SoC for Cloud and Edge AI Data Centers

Xsight Labs, a leading fabless semiconductor company providing end-to-end connectivity for next-generation hyperscale, edge and AI data center networks, today announced availability of its Arm-based E1-SoC for cloud and edge AI data centers. The E-Series is the only product of its kind to provide full control plane and data path programmability and is the industry's highest performance software-defined DPU (Data Processing Unit). Xsight Labs is taking orders now for its E1-SoC and the E1-Server, the first-to-market 800G DPU.

E1 is the first SoC in the E-Series, Xsight Labs' SDN (Software Defined Network) Infrastructure Processor product family of fully programmable network accelerators. Built on TSMC's advanced 5 nm process technology, the E1-SoC will begin shipping to customers and ecosystem partners.

Qualcomm Job Advert Alludes to Snapdragon-powered "Xbox Adjacent" Products

Late last week, tech news headlines were generated by a curious Qualcomm/NUVIA job advertisement. The presence of Xbox-related activities at the US firm's Redmond, Washington office has set off watchdog alarm bells; Microsoft's HQ is also located in this Seattle metropolitan area business hub. The job description outlined a sales director position and included interesting tidbits: "support sell-in activities for the next generation of Surface and Xbox products built on Snapdragon solutions" and "help define the next generation Surface and Xbox portfolios." Older leaks have suggested that Microsoft is weighing the ARM64 processor architecture for next-gen Xbox designs. Following widespread reportage, Qualcomm has edited out any mention of Xbox from the offending job ad. Given the latest evidence, fresh speculation has emerged from online media outlets. In theory, the company's hardware engineers could be formulating a next-gen Arm-based handheld—not directly related to "Project Kennan."

Jez Corden, executive editor of Windows Central, has dismissed many next-gen "handheld" or "home console" projections. Insiders believe that in-progress first-party development centers around AMD (x86) solutions. Similarly, Sony is reportedly collaborating with Team Red. The speculated PlayStation 6 (and a handheld offshoot) has been linked to Zen 6 and UDNA/RDNA 5 IPs. In response to initial claims, Corden reached out to shadowy industry figures. As disclosed in his opinion piece: "sources confirmed to me this morning that the next Xbox systems are not based on Qualcomm chips. There might be some third-party "Designed for Xbox" Arm-based offerings, like the Logitech G Cloud. But, the main plan from Microsoft, at least for now, is for the next-gen Xbox systems to have as much compatibility with your current library as possible. The overheads required to emulate games built for Microsoft's AMD-based systems are beyond what the Snapdragon line up is currently capable of." Today, Digital Foundry pointed out that Microsoft's "Xbox Play Anywhere" marketing campaign has created a looser categorization of related hardware, thus providing extra scope for adjacent and supplemental devices in the near future.

GIGAIPC Unveils Jetson Orin Series at Computex 2025

As COMPUTEX 2025, one of the most anticipated global tech events, prepares to open its doors in Taipei, AI applications are set to reach new heights. GIGAIPC, the industrial computing and edge AI subsidiary of GIGABYTE Technology, will unveil three innovative AI edge computing solutions at COMPUTEX 2025, showcasing its expertise in industrial-grade system design. Powered by NVIDIA Jetson Orin modules, the flagship QN-ORAX32-A1, QN-ORNX16GH-A1, and QN-ORNX16 deliver exceptional AI performance for smart manufacturing, intelligent surveillance, smart healthcare, smart retail, and AIoT applications, setting new benchmarks for edge computing.

The QN-ORAX32-A1, built on the NVIDIA Jetson AGX Orin 32 GB module, features an 8-core Arm v8.2 64-bit CPU and an NVIDIA Ampere GPU with 1792 CUDA cores and 56 Tensor cores, delivering up to 200 TOPS of AI performance—six times faster than its predecessor. This enhanced computing power makes it well-suited for high-load data processing and complex AI models. Beyond its computing performance, the QN-ORAX32-A1 is also built for real-world deployment. To ensure long-term durability in harsh industrial environments, the system incorporates a fanless thermal design and wide-range DC power input.
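
The "six times faster" claim checks out against the published 32 TOPS of the previous-generation Jetson AGX Xavier:

```python
# Generational AI-throughput uplift, Jetson AGX Orin 32 GB vs. AGX Xavier.
orin_tops, xavier_tops = 200, 32
print(f"Uplift: {orin_tops / xavier_tops:.2f}x")  # 6.25x
```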

Arm Introduces New Product Naming for PC, Infrastructure, Mobile, and More

Arm today announced a simpler, more intuitive naming scheme for its compute platforms to help developers and manufacturers better understand which solutions suit their needs. Under the new naming structure, infrastructure-grade server CPU products will be known as Arm Neoverse, the name previously reserved for Arm's core IP for server CPUs. The PC lineup will adopt the name Arm Niva, while Arm Lumex will denote the mobile platform for smartphones and tablets. Automotive applications, which require both safety certification and high compute capacity, will fall under Arm Zena. Finally, Arm Orbis will cover IoT and embedded devices, offering a tailored edge AI platform for everything from sensors to earbuds.

In addition to the market-specific names, Arm is overhauling its IP numbering system to align with generational releases. Future cores will carry labels such as Ultra, Premium, Pro, Nano, and Pico to indicate relative performance and power characteristics. Combining a clear platform identity with a descriptive performance tier, this two-tier approach should make it easier for partners to plan long-term roadmaps and pick the right building blocks for their designs. Arm's GPU technology will continue under the well-known Mali brand, but Mali will now be presented explicitly as a component within each platform rather than a separate product. By integrating Mali GPUs into Neoverse, Niva, Lumex, Zena, and Orbis, Arm aims to deliver fully validated subsystems instead of standalone IP pieces.
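
A hypothetical illustration of how the two tiers compose (Arm has not published exact combined SKU strings, so the joined names below are our assumption):

```python
# Platform names and performance tiers from Arm's announcement; the
# combined strings are illustrative, not confirmed Arm SKU names.
platforms = {"infrastructure": "Neoverse", "PC": "Niva", "mobile": "Lumex",
             "automotive": "Zena", "IoT/embedded": "Orbis"}
tiers = ["Ultra", "Premium", "Pro", "Nano", "Pico"]

for market, platform in platforms.items():
    print(f"{market}: Arm {platform} {tiers[0]} ... Arm {platform} {tiers[-1]}")
```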

Final Nintendo Switch 2 Specifications Surface: CPU, GPU, Memory, and System Reservation

With the launch scheduled for June 5, Nintendo has quietly confirmed the final technical details for its next-generation hybrid console, the Switch 2, clarifying the specifications of the "custom NVIDIA processor" at its core and specifying exactly how much horsepower developers can access. The Switch 2's SoC is officially labeled the NVIDIA T239, a custom iteration of the Ampere architecture rather than a repurposed Tegra. It contains eight Arm Cortex‑A78C cores running a 64‑bit ARMv8 instruction set, with cryptography extensions enabled and no support for 32‑bit code. Each core features 64 KB of L1 instruction cache and 64 KB of L1 data cache. Six cores are available for game development, while two are reserved for system tasks. Clock speeds reach 998 MHz in handheld mode and 1,101 MHz when docked, and the CPU can theoretically burst to 1,700 MHz for demanding operations or future updates.

Graphics are powered by a full Ampere‑based GPU with 1,536 CUDA cores. Clock speeds top out at 1,007 MHz in docked mode and 561 MHz in handheld mode, delivering approximately 3.07 TeraFLOPS when docked and 1.71 TeraFLOPS in portable use. As with the CPU, a portion of GPU resources is allocated to operating system functions, slightly reducing the amount available for applications. Memory capacity has increased from 4 GB of LPDDR4 in the original Switch to 12 GB of LPDDR5X in the new model, split across two 6 GB modules. Peak bandwidth measures 102 GB/s docked and 68 GB/s handheld. Of the total, 3 GB are reserved for system functions and 9 GB are dedicated to games and applications. Nintendo has also introduced a dedicated File Decompression Engine for LZ4‑compressed data, offloading asset unpacking from the CPU to improve loading times without overheating the chipset. The console ships with 256 GB of UFS storage, expandable via microSD Express up to 2 TB, and features a 7.9‑inch, 1080p LCD that supports HDR10 and up to 120 Hz variable refresh rate in handheld mode. Although HDMI VRR is not yet available, the internal display fully supports it.
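
The quoted TFLOPS figures follow from the standard formula of two FP32 operations (one fused multiply-add) per CUDA core per clock; the small discrepancy against the quoted 3.07/1.71 numbers likely comes from rounding of the clock speeds. The bandwidth figures likewise fit a 128-bit LPDDR5X interface (the bus width is our assumption):

```python
# FP32 throughput = CUDA cores x 2 ops (fused multiply-add) x clock.
cuda_cores = 1536
for mode, mhz in {"docked": 1007, "handheld": 561}.items():
    tflops = cuda_cores * 2 * mhz * 1e6 / 1e12
    print(f"{mode}: {tflops:.2f} TFLOPS")  # ~3.09 docked, ~1.72 handheld

# A 128-bit LPDDR5X bus at 6400 MT/s yields the quoted docked bandwidth.
print(f"{128 / 8 * 6400 / 1000:.1f} GB/s")  # 102.4
```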

NVIDIA's GB10 Arm Superchip Looks Promising in Leaked Benchmark Results

Recent benchmark leaks on Geekbench have revealed that NVIDIA's first Arm-based "superchip," the GB10 Grace Blackwell, is on the verge of its market launch, as reported by Notebookcheck. The processor is expected to be showcased at Computex 2025 later this month, where NVIDIA may also roll out the N1 and N1X alternatives tailored for desktop and laptop use (MediaTek confirmed in April that its CEO, Dr. Rick Tsai, will deliver a keynote at the Computex 2025 trade show). ASUS and Dell have already designed the GB10 into upcoming products, while NVIDIA uses it in its own Project DIGITS AI supercomputer, announced at CES 2025 with a price of around $2,999 and availability expected this month.

The benchmark listings show some inconsistencies, like identifying the chipset as Armv8 instead of Armv9. However, they show that the GB10's Cortex-X925 cores can reach speeds of up to 3.9 GHz. The performance results suggest that the GB10 can compete with high-end Arm and x86 processors in single-core metrics, although Apple's M4 Max still leads in this area. The GB10 marks NVIDIA's move into the workstation-grade Arm processor market and could shake up the established players in the high-performance computing field.

Ampere Quietly Introduces 192-Core Arm CPU with 12-Channel DDR5 Memory

On Tuesday, Ampere Computing expanded its AmpereOne lineup by introducing six new AmpereOne M processors with little fanfare and no official press release. The M-series chips employ a 7228-pin FCLGA socket and house between 96 and 192 single-threaded Armv8.6+ cores operating at up to 3.60 GHz. Each core includes 2 MB of L2 cache, while a shared 64 MB system cache feeds both compute units and memory controllers. Unlike its predecessors, the new family features a 12-channel DDR5-5600 memory subsystem that supports one ECC-protected DIMM per channel and up to 3 TB of RAM. This design aims to meet the growing demands of cloud and AI workloads that rely heavily on large in-memory processing. Power consumption ranges from 239 W in entry-level models up to 348 W in the flagship A192-32M, which delivers 192 cores at 3.2 GHz. All variants incorporate dynamic voltage and frequency scaling and adaptive voltage control to regulate power draw and maintain efficiency.
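
The memory subsystem's headline numbers fall straight out of the channel math (the 256 GB module size is implied by the 3 TB ceiling, not stated by Ampere):

```python
# Peak bandwidth: channels x 8 bytes per transfer x transfer rate.
channels, mtps = 12, 5600
print(f"Peak bandwidth: {channels * 8 * mtps / 1000:.1f} GB/s")  # 537.6

# Capacity: one DIMM per channel; 3 TB implies 256 GB modules.
print(f"Capacity: {channels * 256} GB")  # 3,072 GB = 3 TB
```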

On the I/O side, the processors provide 96 PCIe 5.0 lanes with flexible bifurcation options and offer 24 dedicated device controllers to connect accelerators, NVMe storage, and high-speed network adapters. While AMD's EPYC 9965 delivers similar core counts, simultaneous multithreading, and a mature x86-64 ecosystem, Ampere's focus is on memory capacity and bandwidth. By releasing the AmpereOne M series with minimal news coverage, Ampere appears to be laying the groundwork for its next-generation AmpereOne MX platform, which is expected to feature 256 cores, the same 12-channel DDR5 architecture, and a shift to TSMC's 3 nm process. According to Ampere, shipments of the M series began in the fourth quarter of 2024. SoftBank, which agreed to acquire Ampere Computing in March of this year for $6.5 billion in an all-cash transaction, wants a piece of enterprise AI deployments; with CSPs requiring more cores and more bandwidth, Ampere is on the right track.

NVIDIA & MediaTek Reportedly Readying "N1" Arm-based SoC for Introduction at Computex

Around late April, MediaTek confirmed that its CEO—Dr. Rick Tsai—will be delivering a big keynote speech—on May 20—at this month's Computex 2025 trade show. The company's preamble focuses on its "driving of AI innovation—from edge to cloud," but industry moles propose a surprise new product introduction during proceedings. MediaTek and NVIDIA have collaborated on a number of projects, the most visible being automotive solutions. Late last year, intriguing Arm-based rumors emerged online—with Team Green allegedly working on a first-time attempt at breaking into the high-end consumer CPU market segment, perhaps leveraging its "Blackwell" GPU architecture. MediaTek was reportedly placed in the equation thanks to expertise accumulated in devising modern Dimensity "big core" mobile processor designs. At the start of 2025, data miners presented evidence of Lenovo seeking new engineering talent; the job description mentioned a mysterious NVIDIA "N1x" SoC.

Further conjecture painted a fanciful picture of forthcoming "high-end N1x and mid-tier N1 (non-X)" models—with potential flagship devices launching later this year. According to ComputerBase.de, an unannounced "GB10" PC chip could be the result of NVIDIA and MediaTek's rumored "AI PC" joint venture. Yesterday's news article divulged: "currently (this) product (can be) found in NVIDIA DGX Spark (platforms), and similarly equipped partner solutions. The systems, available starting at $3000, are aimed at AI developers who can test LLMs locally before moving them to the data center. The chip combines a 'Blackwell' GPU with a 'Grace' Arm CPU (in order) to create an SoC with 128 GB LPDDR5X, and a 1 TB or 4 TB SSD. The 'GB10' offers a GPU with one petaflop of FP4 performance (with sparsity)." ComputerBase reckons that the integrated graphics solution makes use of familiar properties—namely "5th-generation Tensor Cores and 4th-generation RT Cores"—from GeForce RTX 50-series graphics cards. When discussing the design's "Grace CPU" setup, the publication's report outlined a total provision of: "20 Arm cores, including 10 Cortex-X925 and 10 Cortex-A725. The whole thing sits on a board measuring around 150 × 150 mm—for comparison: the classic NUC board format is 104 × 101 mm."

Raspberry Pi Lowers Prices for 4 GB and 8 GB Compute Module 4

At Raspberry Pi, our mission is to make computing accessible and affordable for everyone and for businesses at every scale, so today we're delighted to announce a reduction in the price of some of the most popular variants of Raspberry Pi Compute Module 4. From now, if you buy a standard operating temperature Compute Module 4 from a Raspberry Pi Approved Reseller, it will cost you $5 less for a 4 GB RAM variant, and $10 less for an 8 GB RAM variant.

Broader access to a proven platform
Raspberry Pi Compute Module 4 is the cornerstone of an astonishing variety of applications, from medical equipment to energy services infrastructure and from concrete monitoring to retro gaming. There is a vast number of embedded use cases that don't require the processing heft of our new(ish) Compute Module 5; by lowering the cost of the higher-memory-density variants of its predecessor, we aim to make these projects more cost-effective, and to unlock new ones that previously weren't viable. We hope the price drop will introduce new possibilities both for businesses and for enthusiasts, helping you bring into existence products and projects we'd never even imagined.

Marvell Announces Successful Interoperability of Structera CXL Portfolio with AMD EPYC CPU and 5th Gen Intel Xeon Scalable Platforms

Marvell Technology, Inc., a leader in data infrastructure semiconductor solutions, today announced the successful interoperability of the Marvell Structera portfolio of Compute Express Link (CXL) products with AMD EPYC CPUs and 5th Gen Intel Xeon Scalable platforms. This achievement underscores the commitment of Marvell to advancing an open and interoperable CXL ecosystem, addressing the growing demands for memory bandwidth and capacity in next-generation cloud data centers.

Marvell collaborated with AMD and Intel to extensively test Structera CXL products with AMD EPYC and 5th Gen Intel Xeon Scalable platforms across various configurations, workloads, and operating conditions. The results demonstrated seamless interoperability, delivering stability, scalability, and high-performance memory expansion that cloud data center providers need for mass deployment.

Cadence to Acquire Arm Artisan Foundation IP Business

Cadence today announced that it has entered into a definitive agreement with Arm to acquire Arm's Artisan foundation IP business, consisting of standard cell libraries, memory compilers, and general-purpose I/Os (GPIOs) optimized for advanced process nodes at the leading foundries. The transaction will augment Cadence's expanding design IP offerings, anchored by a leading portfolio of protocol and interface IP, memory interface IP, SerDes IP at the most advanced nodes, and embedded security IP from the pending Secure-IC acquisition.

By increasing its footprint in SoC designs, Cadence is reinforcing its commitment to continuously accelerate customers' time to market and to optimize their cost, power and performance on the world's leading foundry processes. Cadence will acquire the Arm Artisan foundation IP business through an asset purchase agreement with a concurrent technology license agreement, to be signed at closing and subject to any existing rights. As part of the transaction, Cadence will acquire a highly talented and experienced engineering team that is well respected in the industry and can help accelerate development of both related and new IP products.