News Posts matching #Arm


GEEKOM to Reveal High-performance Mini PCs at CES 2025

GEEKOM, a Taiwanese tech company famous for making high-quality mini PCs, is heading to CES for the second consecutive year in 2025 with an exciting lineup of new products. Known as the Green Mini PC Global Leader, GEEKOM always focuses on improving the quality and reliability of its products, and it spares no effort in cutting down carbon emissions and making the world a greener place.

Among the many mini PCs that GEEKOM plans to put on show at CES 2025, there are many industry firsts. The GEEKOM QS1, for instance, is the world's first mini PC powered by a Qualcomm chipset. The tiny computer sports an Arm-based Qualcomm Snapdragon X1E-80-100 processor with twelve 4.0 GHz Oryon CPU cores, a 3.8 TFLOPS Adreno X1-85 GPU and a 45 TOPS Hexagon NPU. It is smart and fast enough to breeze through all of your daily home and office computing chores, yet energy-efficient enough to significantly cut down your electric bill.

Qualcomm Argues Less Than 1% of Arm IP is Inside Nuvia Cores in Snapdragon X Chips

The Arm-Qualcomm legal dispute continues, and each new day brings fresh updates. Gerard Williams III, CEO and founder of Nuvia and one of the main brains behind Qualcomm's Oryon cores inside Snapdragon X processors, testified before the court that the chip design contains minimal Arm IP despite using the company's instruction set architecture. Williams estimated that "one percent or less" of the final design originated from Arm's IP. In other words, despite Qualcomm holding an Arm ISA license, very little Arm IP ends up in its SoCs; most of the Snapdragon X design was done in-house at Qualcomm and Nuvia. Williams, who co-founded Nuvia in 2019, explained that while their processors use Arm's Armv8 instruction set, the core design was largely developed from scratch. Nuvia initially secured two non-transferable licenses from Arm: a Technology License Agreement (TLA) and an Architecture License Agreement (ALA).

These agreements allowed the company to develop custom cores while implementing Arm's instruction set. The development team created their own proprietary microarchitecture, including custom data paths and cache systems, rather than using Arm's existing designs. The controversy erupted when Qualcomm acquired Nuvia and announced plans to use the cores in PC processors rather than the initially intended datacenter applications. Arm demanded a renegotiation of licensing terms following the acquisition, which Qualcomm refused, arguing that its existing ALA covered Nuvia's designs. The dispute escalated when Arm revoked Nuvia's licenses in 2022 and terminated Qualcomm's Architecture License Agreement this October. Arm is now seeking the destruction of all Nuvia designs developed before the merger, arguing that the licensing agreements couldn't be transferred through acquisition. Qualcomm is building its case on the argument that the TLA was not violated since the designs are mostly custom, so we will have to see how the ruling proceeds. Arm wants to "hurt" Qualcomm by revoking the ALA, and the case may well end in a settlement, given that Qualcomm is one of Arm's biggest customers.

NVIDIA Unveils New Jetson Orin Nano Super Developer Kit

NVIDIA is taking the wraps off a new compact generative AI supercomputer, offering increased performance at a lower price with a software upgrade. The new NVIDIA Jetson Orin Nano Super Developer Kit, which fits in the palm of a hand, gives everyone from commercial AI developers to hobbyists and students a boost in generative AI capabilities and performance. And the price is now $249, down from $499.

Available today, it delivers as much as a 1.7x leap in generative AI inference performance, a 70% increase in performance to 67 INT8 TOPS, and a 50% increase in memory bandwidth to 102 GB/s compared with its predecessor. Whether creating LLM chatbots based on retrieval-augmented generation, building a visual AI agent, or deploying AI-based robots, the Jetson Orin Nano Super is an ideal solution.
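As a rough sanity check on those uplift figures, the predecessor's specifications can be back-calculated from the stated percentage gains; the sketch below is our own arithmetic, using only the numbers quoted above.

```python
# Back-calculate the original Jetson Orin Nano figures from the quoted uplifts.
# Input values come straight from the announcement; the rounding is ours.

new_int8_tops = 67        # INT8 TOPS after the upgrade (a 70% increase)
new_bandwidth_gbs = 102   # memory bandwidth in GB/s (a 50% increase)

old_int8_tops = new_int8_tops / 1.70
old_bandwidth_gbs = new_bandwidth_gbs / 1.50

print(f"Implied previous INT8 compute:     ~{old_int8_tops:.0f} TOPS")       # roughly 40 TOPS
print(f"Implied previous memory bandwidth: ~{old_bandwidth_gbs:.0f} GB/s")   # roughly 68 GB/s
```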

Arm Denies Custom Chip Production Ambitions, Wants to Destroy Qualcomm's Nuvia IP

A high-stakes trial between technology giants Arm and Qualcomm has revealed deeper tensions in the semiconductor industry, as Arm seeks the destruction of chip designs from Qualcomm's $1.4 billion Nuvia acquisition. The case, being heard in Delaware federal court, centers on a licensing dispute that could impact the future of AI-powered Windows PCs. Arm CEO Rene Haas took the stand Monday, adding allegations that Qualcomm violated licensing agreements following its 2021 acquisition of chip startup Nuvia. The issue is whether Qualcomm should pay Nuvia's higher royalty rates for using Arm's intellectual property rather than its own lower rates. Internal documents revealed Nuvia's rates were "many multiples" higher than Qualcomm's, with the acquisition potentially reducing Arm's revenue by $50 million.

During cross-examination, Qualcomm's legal team challenged Arm's motives, suggesting the dispute is part of a broader strategy to confront a customer increasingly viewed as a competitor. When presented with documents outlining potential plans for Arm to design its own chips, Haas downplayed these ambitions, emphasizing that Arm has never entered chip manufacturing. Allegedly, Arm sent letters to Qualcomm's customers, including Samsung, warning about possible disruption should the Nuvia designs created before the 2021 acquisition have to be destroyed. Haas defended these communications, citing frequent inquiries from industry partners.

Intel and Qualcomm Clash Over Arm-based PC Return Rates, Qualcomm Notes It's "Within Industry Norm"

In an interesting exchange between Intel's interim co-CEO Michelle Johnston Holthaus and Qualcomm, the two companies have offered conflicting statements about the market performance of Arm-based PCs. The dispute centers on customer satisfaction and return rates for PCs powered by Qualcomm's Snapdragon X processors. During the Barclays 22nd Annual Global Technology Conference, Holthaus claimed that retailers are experiencing high return rates for Arm PCs, mainly citing software compatibility issues. According to her, customers are finding that typical applications don't work as expected on these devices. "I mean, if you look at the return rate for Arm PCs, you go talk to any retailer, their number one concern is, wow, I get a large percentage of these back. Because you go to set them up, and the things that we just expect don't work," said Holthaus.

"Our devices continue to have greater than 4+ stars across consumer reviews and our products have received numerous accolades across the industry including awards from Fast Company, TechRadar, and many consumer publications. Our device return rates are within industry norm," said Qualcomm representative for CRN. Qualcomm projects that up to 50% of laptops will transition to non-x86 platforms within five years, signaling their confidence in Arm-based solutions. While software compatibility remains a challenge for Arm PCs, with not all Windows applications fully supported, Qualcomm and Microsoft have implemented an emulation layer to address these limitations. Holthaus acknowledged that Apple's successful transition to Arm-based processors has helped pave the way for broader Arm adoption in the PC market. "Apple did a lot of that heavy lift for Arm to make that ubiquitous with their iOS and their whole walled garden stack. So I'm not going to say Arm will get more, I'm sure, than it gets today. But there are certainly, I think, some real barriers to getting there," noted Holthaus.

Advantech Unveils Hailo-8 Powered AI Acceleration Modules for High-Efficiency Vision AI Applications

Advantech, a leading provider of AIoT platforms and services, proudly unveils its latest AI acceleration modules: the EAI-1200 and EAI-3300, powered by Hailo-8 AI processors. These modules deliver AI performance of up to 52 TOPS while achieving more than 12 times the power efficiency of comparable AI modules and GPU cards. Designed in standard M.2 and PCIe form factors, the EAI-1200 and EAI-3300 can be seamlessly integrated with diverse x86 and Arm-based platforms, enabling quick upgrades of existing systems and boards to incorporate AI capabilities. With these AI acceleration modules, developers can run inference efficiently on the Hailo-8 NPU while handling application processing primarily on the CPU, optimizing resource allocation. The modules are paired with user-friendly software toolkits, including the Edge AI SDK for seamless integration with HailoRT, the Dataflow Compiler for converting existing models, and TAPPAS, which offers pre-trained application examples. These features accelerate the development of edge-based vision AI applications.

EAI-1200 M.2 AI Module: Accelerating Development for Vision AI Security
The EAI-1200 is an M.2 AI module powered by a single Hailo-8 VPU, delivering up to 26 TOPS of computing performance while consuming approximately 5 watts of power. An optional heatsink supports operation in temperatures ranging from -40 to 65°C, ensuring easy integration. This cost-effective module is especially designed to bundle with Advantech's systems and boards, such as the ARK-1221L, AIR-150, and AFE-R770, enhancing AI applications including baggage screening, workforce safety, and autonomous mobile robots (AMR).
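For a sense of scale, the quoted numbers work out to the rough efficiency figures below; this is our own arithmetic on the announced specs (the EAI-3300's power draw is not quoted, so only the M.2 module's TOPS-per-watt is computed).

```python
# Rough efficiency math for Advantech's modules, using only the quoted figures.
eai_1200_tops = 26     # M.2 module, single Hailo-8
eai_1200_watts = 5     # "approximately 5 watts"
eai_3300_tops = 52     # PCIe card, per the headline figure

print(f"EAI-1200: ~{eai_1200_tops / eai_1200_watts:.1f} TOPS per watt")          # ~5.2 TOPS/W
print(f"EAI-3300 offers {eai_3300_tops / eai_1200_tops:.0f}x the M.2 module's compute")
```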

HighPoint M.2/E1.S NVMe RAID AICs Deliver Unparalleled Speed and Scalability for GPU Server Applications

HighPoint Technologies, a leader in advanced PCIe Switch and RAID AIC, Adapter and Storage Enclosure solutions, has announced an extensive line of M.2 and E1.S RAID AICs engineered to accommodate high-performance GPU-centric workloads. Designed for enterprise- and datacenter-class computing environments, HighPoint NVMe RAID AICs deliver class-leading performance and unmatched scalability, enabling modern x86/AMD and Arm platforms to support 4 to 16 NVMe SSDs via a single PCIe Gen 4 or Gen 5 x16 slot. State-of-the-art PCIe Switching Architecture and flexible RAID technology enable administrators to custom-tailor M.2 and E1.S storage configurations for a broad range of data-intensive applications, and seamlessly scale or expand existing storage configurations to meet the needs of evolving workflows.

Unprecedented Storage Density
HighPoint NVMe AICs have set a new milestone for M.2 NVMe storage. HighPoint's revolutionary Dual-Width AIC architecture enables a single PCIe Gen 4 or Gen 5 x16 slot to directly host up to 16 M.2 NVMe SSDs, and 128 TB of storage capacity, at speeds up to 28 GB/s; a truly unprecedented advancement in compact, single-device storage expansion solutions. State-of-the-art PCIe switching technology and advanced cooling systems maximize transfer throughput and ensure M.2 configurations operate at peak efficiency by halting the performance-sapping threat of thermal throttling in its tracks.
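The headline figures are easy to put in perspective with a little arithmetic; the sketch below is our own numbers check, assuming 8 TB M.2 drives and the commonly cited ~31.5 GB/s usable ceiling of a PCIe Gen 4 x16 link.

```python
# Quick check on HighPoint's density and throughput claims (our arithmetic).
drives = 16
capacity_per_drive_tb = 128 / drives            # implies 8 TB per M.2 SSD
pcie_gen4_x16_gbs = 16 * 16 * (128 / 130) / 8   # 16 GT/s x 16 lanes, 128b/130b encoding -> ~31.5 GB/s

print(f"Implied capacity per SSD: {capacity_per_drive_tb:.0f} TB")
print(f"PCIe Gen 4 x16 ceiling:   ~{pcie_gen4_x16_gbs:.1f} GB/s per direction")
print(f"The claimed 28 GB/s uses ~{28 / pcie_gen4_x16_gbs:.0%} of that link")
```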

RPCS3 PlayStation 3 Emulator Gets Native arm64 Support on Linux, macOS, and Windows

The RPCS3 team has announced the successful implementation of arm64 architecture support for their PlayStation 3 emulator. This development enables the popular emulator to run on a broader range of devices, including Apple Silicon machines, Windows-on-Arm, and even some smaller Arm-based SBC systems like the Raspberry Pi 5. The journey to arm64 support began in late 2021, following the release of Apple's M1 processors, with initial efforts focused on Linux platforms. After overcoming numerous technical hurdles, the development team, led by core developer Nekotekina and graphics specialist kd-11, achieved a working implementation by mid-2024. One of the primary challenges involved adapting the emulator's just-in-time (JIT) compiler for arm64 systems.

The team developed a solution using LLVM's intermediate representation (IR) transformer, which allows the emulator to generate code once for x86-64 and then transform it for arm64 platforms. This approach eliminated the need to maintain separate codebases for different architectures. A particular technical challenge emerged from the difference in memory management between x86 and arm64 systems. While the PlayStation 3 and traditional x86 systems use 4 KB memory pages, modern arm64 platforms typically operate with 16 KB pages. Though this larger page size can improve memory performance in native applications, it presented unique challenges for emulating the PS3's graphics systems, particularly when handling smaller textures and buffers. While the emulator now runs on arm64 devices, performance varies significantly depending on the hardware. Simple applications and homebrew software show promising results, but more demanding commercial games may require substantial computational power beyond what current affordable Arm devices can provide.
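The page-size mismatch is easy to observe on real hardware; the snippet below is a minimal illustration (not RPCS3 code) of why 4 KB-granular guest allocations become coarser on a 16 KB-page arm64 host.

```python
import mmap

# Host page size: 4096 bytes on typical x86-64 systems, 16384 bytes on many
# arm64 platforms (Apple Silicon, some arm64 Linux kernels).
page_size = mmap.PAGESIZE

# The PS3 manages guest memory in 4 KB units; on the host, each unit must be
# backed by whole host pages, so small textures/buffers and per-page
# protection tricks get much coarser when the host uses 16 KB pages.
guest_alloc = 4 * 1024
host_backed = -(-guest_alloc // page_size) * page_size  # round up to host pages

print(f"Host page size: {page_size} bytes")
print(f"A 4 KB guest allocation occupies {host_backed} bytes of host memory")
```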

Fujitsu Previews Monaka: 144-Core Arm CPU Made with Chiplets

Fujitsu has previewed its next-generation Monaka processor, a 144-core powerhouse for the data center. Satoshi Matsuoka of the RIKEN Center for Computational Science showcased the mechanical sample on social media platform X. The Monaka processor is developed in collaboration with Broadcom and employs an innovative 3.5D eXtreme Dimension System-in-Package architecture featuring four 36-core chiplets manufactured using TSMC's N2 process. These chiplets are stacked face-to-face with SRAM tiles through hybrid copper bonding, utilizing TSMC's N5 process for the cache layer. A distinguishing feature of the Monaka design is its approach to memory architecture. Rather than incorporating HBM, Fujitsu has opted for pure cache dies below the compute logic in combination with DDR5 DRAM compatibility, potentially leveraging advanced modules like MR-DIMM and MCR-DIMM.

The processor's I/O die supports cutting-edge interfaces, including DDR5 memory, PCIe 6.0, and CXL 3.0, for seamless integration with modern data center infrastructure. Security is handled by the implementation of Armv9-A's Confidential Computing Architecture for enhanced workload isolation. Fujitsu has set ambitious goals for the Monaka processor. The company aims to achieve twice the energy efficiency of current x86 processors by 2027 while maintaining air cooling capabilities. The processor targets both AI and HPC workloads with Arm SVE2 support, which enables vector lengths of up to 2048 bits. Scheduled for release during Fujitsu's fiscal year 2027 (April 2026 to March 2027), the Monaka processor is shaping up as a competitor to AMD's EPYC and Intel's Xeon processors.
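Because SVE2 code is vector-length-agnostic, the 2048-bit upper bound mentioned above translates directly into wider per-instruction throughput; the small table below (our own arithmetic) shows how many lanes each vector operation covers at a few possible hardware widths.

```python
# Lane counts per vector operation at different SVE/SVE2 hardware vector
# lengths; the ISA allows implementations from 128 up to 2048 bits, and the
# same binary runs unchanged on any of them.
for vl_bits in (128, 256, 512, 2048):
    print(f"VL = {vl_bits:>4} bits: {vl_bits // 32:>2} x FP32 lanes, {vl_bits // 64:>2} x FP64 lanes")
```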

AMD Introduces Versal RF Series Adaptive SoCs With Integrated Direct RF-Sampling Converters

AMD today announced the expansion of the AMD Versal adaptive system-on-chip (SoC) portfolio with the introduction of the Versal RF Series that includes the industry's highest compute performance in a single-chip device with integrated direct radio frequency (RF)-sampling data converters.

Versal RF Series offers precise, wideband-spectrum observability and up to 80 TOPS of digital signal processing (DSP) performance in a size, weight, and power (SWaP)-optimized design, targeting RF systems and test equipment applications in the aerospace and defense (A&D) and test and measurement (T&M) markets, respectively.

The Raspberry Pi 500 and Raspberry Pi Monitor Go On Sale

Just in time for Christmas, we're delighted to announce the release of two hotly anticipated products that we think will look great under the tree. One of them might even fit in a stocking if you push hard enough. Introducing Raspberry Pi 500, available now at $90, and the Raspberry Pi Monitor, on sale at $100: together, they're your complete Raspberry Pi desktop setup.

Integral calculus
Our original mission at Raspberry Pi was to put affordable, programmable personal computers in the hands of young people all over the world. And while we've taken some detours along the way - becoming one of the world's largest manufacturers of industrial and embedded computers - this mission remains at the heart of almost everything we do. It drives us to make lower-cost products like the $15 Raspberry Pi Zero 2 W, and more powerful products, like our flagship Raspberry Pi 5 SBC. These products provide just the essential processing element of a computer, which can be combined with the family television, and second-hand peripherals, to build a complete and cost-effective system.

GEEKOM QS1 Pro Mini PC Specs Leak Reveals 12-core Snapdragon X Elite SoC, up to 64 GB of Memory

Just a few days ago, we reported on a leaked teaser for GEEKOM's upcoming QS1 Pro mini PC. The system is set to mark GEEKOM's foray into the world of Arm-based PCs, likely in a bid to take on Apple's Mac mini. However, if a recent leak is to be believed, the QS1 Pro may have a tough time pulling that off.

The leaked specifications, courtesy of a Spanish publication, reveal that the QS1 Pro will feature the Snapdragon X1E-80-100 SoC - the second-fastest member of the X Elite family, slotting in below the 84-100 SKU. The X1E-80-100 boasts 12 Oryon cores, along with a 3.8 TFLOPS Adreno GPU. Interestingly, the leaked specs claim GPU performance of up to 4.6 TFLOPS, which is either a typo or an indication that an X1E-84-100 variant will be available.

Broadcom Delivers Industry's First 3.5D F2F Technology for AI XPUs

Broadcom Inc. today announced the availability of its 3.5D eXtreme Dimension System in Package (XDSiP) platform technology, enabling consumer AI customers to develop next-generation custom accelerators (XPUs). The 3.5D XDSiP integrates more than 6000 mm² of silicon and up to 12 high bandwidth memory (HBM) stacks in one packaged device to enable high-efficiency, low-power computing for AI at scale. Broadcom has achieved a significant milestone by developing and launching the industry's first Face-to-Face (F2F) 3.5D XPU.

The immense computational power required for training generative AI models relies on massive clusters of 100,000 growing to 1 million XPUs. These XPUs demand increasingly sophisticated integration of compute, memory, and I/O capabilities to achieve the necessary performance while minimizing power consumption and cost. Traditional methods like Moore's Law and process scaling are struggling to keep up with these demands. Therefore, advanced system-in-package (SiP) integration is becoming crucial for next-generation XPUs. Over the past decade, 2.5D integration, which involves integrating multiple chiplets up to 2500 mm² of silicon and HBM modules up to 8 HBMs on an interposer, has proven valuable for XPU development. However, as new and increasingly complex LLMs are introduced, their training necessitates 3D silicon stacking for better size, power, and cost. Consequently, 3.5D integration, which combines 3D silicon stacking with 2.5D packaging, is poised to become the technology of choice for next-generation XPUs in the coming decade.
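Taken at face value, the quoted figures put the jump from 2.5D to 3.5D integration into perspective; the comparison below is a back-of-the-envelope calculation using only the numbers in this announcement.

```python
# Integration budget: 2.5D interposer designs vs. Broadcom's 3.5D XDSiP,
# using the figures quoted in the announcement.
silicon_25d_mm2, hbm_25d = 2500, 8
silicon_35d_mm2, hbm_35d = 6000, 12

print(f"Silicon area: {silicon_35d_mm2 / silicon_25d_mm2:.1f}x more ({silicon_25d_mm2} -> {silicon_35d_mm2} mm^2)")
print(f"HBM stacks:   {hbm_35d / hbm_25d:.1f}x more ({hbm_25d} -> {hbm_35d})")
```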

GEEKOM Teases World's First Snapdragon X Elite Desktop Mini PC

This was bound to happen sooner rather than later—desktop mini-PC designer GEEKOM, which specializes in mini-PCs powered by mobile processors, teased its first product powered by a Qualcomm Snapdragon X Elite processor. This marks one of the first consumer desktops running Windows 11 on Arm. The company hasn't put out specs for the desktop, but it should go up against the base model of the Apple Mac Mini M4 in use case—as a slick and efficient everyday desktop for Internet and office productivity. The GEEKOM desktop has a very Mac Mini-like product design. The front features a power button in the right place, next to a 4-pole headset jack, and a couple of type-A USB 3.x ports. The side appears to have a multi-format card reader. There are no pics of the rear I/O.

Raspberry Pi Compute Module 5 Officially Launches With Broadcom BCM2712 Quad-Core SoC

Today we're happy to announce the much-anticipated launch of Raspberry Pi Compute Module 5, the modular version of our flagship Raspberry Pi 5 single-board computer, priced from just $45.

An unexpected journey
We founded the Raspberry Pi Foundation back in 2008 with a mission to give today's young people access to the sort of approachable, programmable, affordable computing experience that I benefitted from back in the 1980s. The Raspberry Pi computer was, in our minds, a spiritual successor to the BBC Micro, itself the product of the BBC's Computer Literacy Project. But just as the initially education-focused BBC Micro quickly found a place in the wider commercial computing marketplace, so Raspberry Pi became a platform around which countless companies, from startups to multi-billion-dollar corporations, chose to innovate. Today, between seventy and eighty percent of Raspberry Pi units go into industrial and embedded applications.

TOP500: El Capitan Achieves Top Spot, Frontier and Aurora Follow Behind

The 64th edition of the TOP500 reveals that El Capitan has achieved the top spot and is officially the third system to reach exascale computing after Frontier and Aurora. Both systems have since moved down to No. 2 and No. 3 spots, respectively. Additionally, new systems have found their way onto the Top 10.

The new El Capitan system at the Lawrence Livermore National Laboratory in California, U.S.A., has debuted as the most powerful system on the list with an HPL score of 1.742 EFlop/s. It has 11,039,616 combined CPU and GPU cores and is based on AMD 4th generation EPYC processors with 24 cores at 1.8 GHz and AMD Instinct MI300A accelerators. El Capitan relies on a Cray Slingshot 11 network for data transfer and achieves an energy efficiency of 58.89 GigaFLOPS/watt. This power efficiency rating helped El Capitan achieve No. 18 on the GREEN500 list as well.
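The HPL score and efficiency figure together imply the machine's approximate power draw during the benchmark run; the short calculation below is our own, using only the published numbers.

```python
# Approximate HPL power draw implied by El Capitan's published figures.
hpl_eflops = 1.742                  # HPL score in EFlop/s
efficiency_gflops_per_watt = 58.89  # GREEN500 energy-efficiency figure

# EFlop/s -> GFlop/s (x1e9), divide by GFlop/s-per-watt, convert W -> MW.
power_mw = (hpl_eflops * 1e9) / efficiency_gflops_per_watt / 1e6
print(f"Implied power during the HPL run: ~{power_mw:.1f} MW")   # roughly 30 MW
```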

SC24: Supercomputer Fugaku Retains First Place Worldwide in HPCG and Graph500 Rankings

The supercomputer Fugaku, jointly developed by RIKEN and Fujitsu, has successfully retained the top spot for 10 consecutive terms in two major high-performance computer rankings, HPCG and Graph500 BFS (Breadth-First Search), and has also taken sixth place for the TOP500 and fourth place for the HPL-MxP rankings. The HPCG is a performance ranking for computing methods often used for real-world applications, and the Graph500 ranks systems based on graph analytic performance, an important element in data-intensive workloads. The results of the rankings were announced on November 19 at SC24, which is currently being held at Georgia World Congress Center in Atlanta, Georgia, USA.

The top ranking on Graph500 was won by a collaboration involving RIKEN, Institute of Science Tokyo, Fixstars Corporation, Nippon Telegraph and Telephone Corporation, and Fujitsu. It earned a score of 204.068 TeraTEPS with Fugaku's 152,064 nodes, an improvement of 38.038 TeraTEPS in performance from the previous measurement. This is the first time that a score of over 200 TeraTEPS has been recorded on the Graph500 benchmark.
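The quoted score, improvement, and node count let a couple of derived figures be read off directly; the arithmetic below is ours, based only on the numbers above.

```python
# Fugaku Graph500 BFS figures derived from the quoted numbers.
score_tteps = 204.068        # TeraTEPS (trillions of traversed edges per second)
improvement_tteps = 38.038
nodes = 152_064

previous_tteps = score_tteps - improvement_tteps
per_node_gteps = score_tteps * 1e12 / nodes / 1e9

print(f"Previous measurement: ~{previous_tteps:.3f} TeraTEPS")   # ~166.030
print(f"Per-node throughput:  ~{per_node_gteps:.2f} GigaTEPS")   # ~1.34
```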

Microsoft Releases Official ISO for Windows 11 on Arm

Microsoft's Windows-on-Arm (WoA) project has been going through an expansion phase, with the recent range of Snapdragon X SoCs powering many laptops. With a wave of WoA devices expected in 2025, Microsoft has prepared an official ISO image of the Windows 11 operating system, available for users to download on the official website. The download size is about 5 GB, and the image requires an Arm-based system to work, as expected. The need for a Windows 11 ISO image for WoA stems from the growing number of Arm-based desktop builds shipped to developers worldwide, such as the ODM workstations built around an Ampere Altra or Altra Max processor.

This is also good news for enthusiasts waiting for the NVIDIA-MediaTek collaboration to drop its first goodies next year, and we expect to see some interesting solutions arise. With Microsoft investing its developer resources into producing Windows 11 Arm builds, it signals that consumer interest in Arm-based devices is set to grow considerably.

ECS CubeSat On-Board Computer Ready for 2025 Space Mission

Elitegroup Computer Systems, with its long-standing expertise in computer motherboard design, has successfully developed the CubeSat On-Board Computer (OBC). This groundbreaking product will carry a payload and is set to launch aboard the Lilium3 CubeSat from National Cheng Kung University, expected to lift off in Q4 2025, initiating space experiments.

In parallel, ECS has developed the high-performance OBCC6M7R motherboard specifically designed for CubeSats, which will officially begin accepting orders for sale starting in November this year. The introduction of this product will accelerate ECS's commercialization of space industry technologies, injecting strong momentum into the company's future growth.

AMD and Fujitsu to Begin Strategic Partnership to Create Computing Platforms for AI and High-Performance Computing (HPC)

AMD and Fujitsu Limited today announced that they have signed a memorandum of understanding (MOU) to form a strategic partnership to create computing platforms for AI and high-performance computing (HPC). The partnership, encompassing aspects from technology development to commercialization, will seek to facilitate the creation of open-source and energy-efficient platforms composed of advanced processors with superior power performance and highly flexible AI/HPC software, and aims to accelerate open-source AI and/or HPC initiatives.

Due to the rapid spread of AI, including generative AI, cloud service providers and end-users are seeking optimized architectures at various price and power-per-performance configurations. From end to end, AMD supports an open ecosystem and strongly believes in giving customers choice. Fujitsu has worked to develop FUJITSU-MONAKA, a next-generation Arm-based processor that aims to achieve both high performance and low power consumption. With FUJITSU-MONAKA, together with AMD Instinct accelerators, customers have an additional choice for large-scale AI workload processing while attempting to reduce data center total cost of ownership.

New Arm CPUs from NVIDIA Coming in 2025

According to DigiTimes, NVIDIA is reportedly targeting the high-end segment for its first consumer CPU attempt. Slated to arrive in 2025, NVIDIA is partnering with MediaTek to break into the AI PC market, currently being popularized by Qualcomm, Intel, and AMD. With Microsoft and Qualcomm laying the foundation for Windows-on-Arm (WoA) development, NVIDIA plans to join and leverage its massive ecosystem of partners to design and deliver regular applications and games for its Arm-based processors. At the same time, NVIDIA is also scheduled to launch "Blackwell" GPUs for consumers, which could end up in these AI PCs with an Arm CPU at its core.

NVIDIA's partner, MediaTek, has recently launched a big-core SoC for mobile called Dimensity 9400. NVIDIA could use something like that as a base for its SoC and add its Blackwell IP to the mix. This would be similar to what Apple is doing with its Apple Silicon and the recent M4 Max chip, which is apparently the fastest CPU in single-threaded and multithreaded workloads, as per recent Geekbench results. NVIDIA already has a team of CPU designers that delivered its Grace CPU to enterprise/server customers. Built on off-the-shelf Arm Neoverse IP, Grace-based systems are being acquired by customers as fast as they can be produced. This puts a lot of hope into NVIDIA's upcoming AI PC, which could offer a selling point no other WoA device currently provides: a tried-and-tested gaming-grade GPU with AI accelerators.

Google's Upcoming Tensor G5 and G6 Specs Might Have Been Revealed Early

Details of what are claimed to be Google's upcoming Tensor G5 and G6 SoCs have popped up over on Notebookcheck.net, and the site claims to have found the specs on a public platform, without going into any further details. Those who were betting on the Tensor G5—codenamed Laguna—delivering vastly improved performance over the Tensor G4 are likely to be disappointed, at least on the CPU side of things. As previous rumours have suggested, the chip is expected to be manufactured by TSMC using its N3E process node, but the Tensor G5 will retain the single Arm Cortex-X4 core, although it will see a slight upgrade to five Cortex-A725 cores vs. the three Cortex-A720 cores of the Tensor G4. The G5 loses two Cortex-A520 cores in favour of the extra Cortex-A725 cores. The Cortex-X4 will also remain clocked at the same peak 3.1 GHz as that of the Tensor G4.

Interestingly, it looks like Google will drop the Arm Mali GPU in favour of an Imagination Technologies DXT GPU, although the specs listed by Notebookcheck don't match any of the configurations listed by Imagination Technologies. The G5 will continue to support 4x 16-bit LPDDR5 or LPDDR5X memory chips, but Google has added support for UFS 4.0 storage, the lack of which has been a point of complaint about the Tensor G4. Other new additions include support for 10 Gbps USB 3.2 Gen 2 and PCI Express 4.0. Some improvements to the camera logic have also been made, with support for up to 200 Megapixel sensors or 108 Megapixels with zero shutter lag, but whether Google will use such a camera is anyone's guess at this point.
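Tallying the rumoured changes against the Tensor G4's known one-X4, three-A720, four-A520 layout suggests the total core count stays at eight; the comparison below reflects our reading of the leak, not confirmed specifications.

```python
# Core-cluster comparison; Tensor G5 figures are rumoured, not confirmed.
tensor_g4 = {"Cortex-X4": 1, "Cortex-A720": 3, "Cortex-A520": 4}
tensor_g5 = {"Cortex-X4": 1, "Cortex-A725": 5, "Cortex-A520": 2}  # +2 mid cores, -2 little cores

for name, cfg in (("Tensor G4", tensor_g4), ("Tensor G5 (rumoured)", tensor_g5)):
    layout = " + ".join(f"{n}x {core}" for core, n in cfg.items())
    print(f"{name}: {sum(cfg.values())} cores ({layout})")
```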

Arm Plans to Cancel Qualcomm's License, Issues 60-Day Notice

According to Bloomberg, Arm Holdings plc, the company behind the Arm instruction set and Arm chip designs, has just issued its long-time partner Qualcomm a 60-day notice of license cancellation. The UK-based ISA provider has notified Qualcomm that it will cancel the Arm ISA architectural license agreement after the contract-mandated 60-day notice period. The issues between the two arose in 2022, just a year after Qualcomm acquired Nuvia and its IP. Arm filed a lawsuit claiming that "Qualcomm attempted to transfer Nuvia licenses without Arm's consent, which is a standard restriction under Arm's license agreements." To transfer Nuvia core licensing, Qualcomm would need to ask Arm first and create a new licensing deal.

The licensing clash comes just as Qualcomm is experiencing its biggest expansion. The new Snapdragon 8 Elite is being used in the mobile sector, the Snapdragon X Elite/Plus is being used in Copilot+ PCs, and the automotive sector is also getting the new Snapdragon Cockpit/Ride Elite chipsets. Most of that is centered around the Nuvia-derived Oryon core IP, a high-performance, low-power design. Arm's representatives declined to comment on this move for Bloomberg, while a Qualcomm spokesman noted that the British company was trying to "strong-arm a longtime partner."

Arm and Partners Develop AI CPU: Neoverse V3 CSS Made on 2 nm Samsung GAA FET

Yesterday, Arm announced significant progress in its Total Design initiative. The program, launched a year ago, aims to accelerate the development of custom silicon for data centers by fostering collaboration among industry partners. The ecosystem has now grown to include nearly 30 participating companies, with recent additions such as Alcor Micro, Egis, PUF Security, and SEMIFIVE. A notable development is a partnership between Arm, Samsung Foundry, ADTechnology, and Rebellions to create an AI CPU chiplet platform. This collaboration aims to deliver a solution for cloud, HPC, and AI/ML workloads, combining Rebellions' AI accelerator with ADTechnology's compute chiplet, implemented using Samsung Foundry's 2 nm Gate-All-Around (GAA) FET technology. The platform is expected to offer significant efficiency gains for generative AI workloads, with estimates suggesting a 2-3x improvement over the standard CPU design for LLMs like Llama3.1 with 405 billion parameters.

Arm's approach emphasizes the importance of CPU compute in supporting the complete AI stack, including data pre-processing, orchestration, and advanced techniques like Retrieval-augmented Generation (RAG). The company's Compute Subsystems (CSS) are designed to address these requirements, providing a foundation for partners to build diverse chiplet solutions. Several companies, including Alcor Micro and Alphawave, have already announced plans to develop CSS-powered chiplets for various AI and high-performance computing applications. The initiative also focuses on software readiness, ensuring that major frameworks and operating systems are compatible with Arm-based systems. Recent efforts include the introduction of Arm Kleidi technology, which optimizes CPU-based inference for open-source projects like PyTorch and Llama.cpp. Notably, Google claims that most AI inference workloads run on CPUs, so creating the most efficient and performant CPUs for AI makes a lot of sense.
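Kleidi's optimizations are designed to slot into the frameworks themselves, so from a developer's point of view CPU inference looks the same as on any other platform; the sketch below is a generic PyTorch CPU-inference snippet (not Kleidi-specific code) showing the kind of workload those kernels are meant to accelerate on Arm hardware.

```python
import torch

# A tiny model run entirely on the CPU. On Arm machines with a Kleidi-enabled
# PyTorch build, the optimized kernels are picked up transparently; nothing in
# the user-facing code changes.
model = torch.nn.Sequential(
    torch.nn.Linear(512, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 128),
).eval()

x = torch.randn(32, 512)        # a batch of 32 input vectors
with torch.inference_mode():    # inference only, no autograd bookkeeping
    y = model(x)

print(y.shape)                  # torch.Size([32, 128])
```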

What the Intel-AMD x86 Ecosystem Advisory Group is, and What it's Not

AVX-512 was proposed by Intel more than a decade ago—in 2013 to be precise. A decade later, the implementation of this instruction set on CPU cores remains wildly spotty—Intel implemented it first on an HPC accelerator, then its Xeon server processors, then its client processors, before realizing that hardware hadn't caught up with the technology to execute AVX-512 instructions in an energy-efficient manner, and deprecating it on the client. AMD implemented it just a couple of years ago in Zen 4, with a dual-pumped 256-bit FPU on 5 nm, before finally implementing a true 512-bit FPU on 4 nm. AVX-512 is a microcosm of what's wrong with the x86 ecosystem.
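That spotty install base is exactly why software has to probe for AVX-512 at runtime instead of assuming it is present; the snippet below is a minimal, Linux-only illustration that reads /proc/cpuinfo (production code would query CPUID through an intrinsic or a library).

```python
# Minimal runtime check for AVX-512 support on Linux (illustrative only).
def cpu_flags():
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
for feature in ("avx2", "avx512f", "avx512vl", "avx512bw"):
    print(f"{feature:>9}: {'yes' if feature in flags else 'no'}")
```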

There are only two x86 CPU core vendors: the IP owner Intel, and its only surviving licensee capable of contemporary CPU cores, AMD. Any new additions to the ISA introduced by either of the two have to go through the grind of their duopolistic competition before software vendors can assume that there's a uniform install base to implement something new. x86 is a net loser of this, and Arm is a net winner. Arm Holdings makes no hardware of its own; instead, it continuously develops the Arm machine architecture and a first-party set of reference-design CPU cores that any licensee can implement. Arm's great march began with tiny embedded devices, before its explosion into client computing with smartphone SoCs. There are now Arm-based server processors, and the architecture is making inroads into the last market that x86 holds sway over—the PC. Apple's M-series processors compete with all segments of PC processors—right from the 7 W class to the HEDT/workstation class. Qualcomm entered this space with its Snapdragon X Elite family, and now Dell believes NVIDIA will take a swing at client processors in 2025. Then there's RISC-V. Intel finally did something it should have done two decades ago—set up a multi-brand Ecosystem Advisory Group. Here's what it is, and more importantly, what it's not.