News Posts matching #Arm


Marvell Introduces Breakthrough Structera CXL Product Line to Address Server Memory Bandwidth and Capacity Challenges in Cloud Data Centers

Marvell Technology, Inc., a leader in data infrastructure semiconductor solutions, today launched the Marvell Structera product line of Compute Express Link (CXL) devices that enable cloud data center operators to overcome memory performance and scaling challenges in general-purpose servers.

To address memory-intensive applications, data center operators add extra servers to get higher memory bandwidth and higher memory capacity. The compute capabilities from the added processors are typically not utilized for these applications, making the servers inefficient from cost and power perspectives. The CXL industry standard addresses this challenge by enabling new architectures that can efficiently add memory to general-purpose servers.
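
To see why that matters in practice, the rough sketch below compares adding whole servers for extra memory against attaching CXL memory expanders. Every cost, capacity, and power figure is a made-up placeholder chosen only to illustrate the stranded-compute argument, not Marvell or industry data.

```python
import math

# Hypothetical illustration of "stranded compute": adding whole servers just to
# gain memory versus attaching CXL memory expanders to existing servers.
# All figures below are placeholders for illustration only.

def via_extra_servers(extra_mem_tb, mem_per_server_tb=1.0,
                      server_cost=15_000, server_watts=500):
    """Cost and power of adding full servers to reach the extra memory capacity."""
    n = math.ceil(extra_mem_tb / mem_per_server_tb)
    return n * server_cost, n * server_watts      # the added CPUs sit mostly idle

def via_cxl_expanders(extra_mem_tb, cost_per_tb=4_000, watts_per_tb=30):
    """Cost and power of attaching CXL memory expanders instead."""
    return extra_mem_tb * cost_per_tb, extra_mem_tb * watts_per_tb

extra_tb = 8
print("extra servers :", via_extra_servers(extra_tb))   # (cost in $, power in W)
print("CXL expanders :", via_cxl_expanders(extra_tb))
```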

Qualcomm Snapdragon X "Copilot+" AI PCs Only Accounted for 0.3% of PassMark Benchmark Runs

The much-anticipated revolution in AI-powered personal computing seems to be off to a slower start than expected. Qualcomm's Snapdragon X CPUs, touted as game-changers in the AI PC market, have struggled to gain significant traction since their launch. Recent data from PassMark, a popular benchmarking suite, reveals that Snapdragon X CPUs account for a mere 0.3% of submissions in the past 30 days. This stands in stark contrast to the 99.7% share held by traditional x86 processors from Intel and AMD, and it raises questions about the immediate future of Arm-based PCs. The underwhelming adoption comes despite bold predictions from industry leaders: Qualcomm CEO Cristiano Amon had projected that Arm-based CPUs could capture up to 50% of the Windows PC market by 2029, and Arm's CEO similarly anticipated a shift away from x86's long-standing dominance.

However, it turns out that these PCs are primarily bought for their battery life, not their AI capabilities. Of course, it's premature to declare Arm's Windows venture a failure. The AI PC market is still in its infancy, and upcoming mid-tier laptops featuring Snapdragon X Elite CPUs could boost adoption rates. It will also take considerable time before shipment volumes of these PCs approach the millions of units moved by x86 vendors. The true test will come with the launch of AMD's Ryzen AI 300 and Intel's Lunar Lake CPUs, which will provide a clearer picture of how Arm-based options compare in AI performance. As the AI PC landscape evolves, Qualcomm faces mounting pressure: NVIDIA's anticipated entry into the market and significant performance improvements in next-generation x86 processors from Intel and AMD pose a massive challenge. The coming months will be crucial in determining whether Snapdragon X CPUs can live up to their initial hype and carve out a significant place in the AI PC ecosystem.

Arm Unveils "Accuracy Super Resolution" Based on AMD FSR 2

In a community blog post, Arm has announced its new Accuracy Super Resolution (ASR) upscaling technology. This open-source solution aims to transform mobile gaming by offering best-in-class upscaling capabilities for smartphones and tablets. Arm ASR addresses a critical challenge in mobile gaming: delivering high-quality graphics while managing power consumption and heat generation. By rendering games at lower resolutions and then intelligently upscaling them, Arm ASR promises to significantly boost performance without sacrificing visual quality. The technology builds upon AMD's FidelityFX Super Resolution 2 (FSR 2) and adapts it specifically for mobile devices. Arm ASR utilizes temporal upscaling, which combines information from multiple frames to produce higher-quality images from lower-resolution inputs. Even though temporal upscaling is more complicated to implement than spatial frame-by-frame upscaling, it delivers better results and gives developers more freedom.
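
As a rough illustration of the temporal-accumulation idea that FSR 2-derived upscalers rely on, the NumPy sketch below blends an upsampled low-resolution frame with a motion-reprojected history buffer. It is a deliberately simplified model of the principle, not Arm's or AMD's shader code, and it omits jitter, depth tests, and disocclusion handling.

```python
import numpy as np

def temporal_upscale(low_res_frame, history, motion, blend=0.1):
    """Toy temporal accumulation: upsample the new low-resolution frame,
    reproject the previous output using per-pixel motion vectors, then
    exponentially blend the two. Real upscalers add far more safeguards."""
    H, W = history.shape[:2]
    # Nearest-neighbour upsample of the low-resolution frame to the output size
    ys = np.arange(H) * low_res_frame.shape[0] // H
    xs = np.arange(W) * low_res_frame.shape[1] // W
    upsampled = low_res_frame[np.ix_(ys, xs)]
    # Reproject history: fetch the pixel each output pixel came from last frame
    yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    prev_y = np.clip(yy - motion[..., 1].astype(int), 0, H - 1)
    prev_x = np.clip(xx - motion[..., 0].astype(int), 0, W - 1)
    reprojected = history[prev_y, prev_x]
    # Blend: mostly history (accumulated detail) plus a little of the new sample
    return (1.0 - blend) * reprojected + blend * upsampled

# Tiny demo with placeholder data: a 540p input accumulated into a 1080p output
history = np.zeros((1080, 1920, 3))
frame = np.random.rand(540, 960, 3)
motion = np.zeros((1080, 1920, 2))
out = temporal_upscale(frame, history, motion)
```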

This approach allows for more ambitious graphics while maintaining smooth gameplay. In benchmark tests using a complex scene, Arm demonstrated impressive results. Devices featuring the Arm Immortalis-G720 GPU showed substantial framerate improvements when using Arm ASR compared to native resolution rendering and Qualcomm's Game Super Resolution (GSR). Moreover, the technology helped maintain stable temperatures, preventing thermal throttling that can compromise user experience. Collaboration with MediaTek revealed significant power savings when using Arm ASR on a Dimensity 9300 handset. This translates to extended battery life for mobile gamers, addressing key concerns. Arm is releasing ASR under an MIT open-source license, encouraging widespread adoption and experimentation among developers. Below you can see the comparison of various upscalers.

Samsung Galaxy Z Fold6 and Z Flip6 Elevate Galaxy AI to New Heights

Samsung Electronics today announced its all-new Galaxy Z Fold6 and Galaxy Z Flip6, along with Galaxy Buds3 and Galaxy Buds3 Pro at Galaxy Unpacked in Paris.

Earlier this year, Samsung ushered in the era of mobile AI through the power of Galaxy AI. With the introduction of the new Galaxy Z series, Samsung is opening the next chapter of Galaxy AI by leveraging its most versatile and flexible form factor, perfectly designed to enable a range of unique mobile experiences. Whether using Galaxy Z Fold's large screen, Galaxy Z Flip's FlexWindow or making the most of the iconic FlexMode, Galaxy Z Fold6 and Flip6 will provide more opportunities to maximize AI capabilities. Built on the foundation of Samsung's history of form factor innovation, Galaxy AI uses a powerful, intelligent, and durable foldable experience to accelerate a new era of communication, productivity, and creativity.

Battery Life is Driving Sales of Qualcomm Snapdragon Copilot+ PCs, Not AI

The recent launch of Copilot+ PCs, a collaboration between Microsoft and Qualcomm, has taken an unexpected turn in the market. While these devices were promoted for their artificial intelligence capabilities, a Bloomberg report reveals that consumers are primarily drawn to them for their impressive battery life. The Snapdragon X-powered Copilot+ PCs have made a significant impact, securing 20% of global PC sales during their launch week. However, industry analyst Avi Greengart points out that the extended battery life, not the AI features, is driving these sales. Microsoft introduced three AI-powered features exclusive to these PCs: Cocreator, Windows Studio Effects, and Live Captions with Translation. Despite these innovations, many users find these features non-essential for daily use. The delay of the anticipated Recall feature due to privacy concerns has further dampened enthusiasm for the AI aspects of these devices.

The slow reception of on-device AI capabilities extends beyond consumer preferences to the software industry. Major companies like Adobe, Salesforce, and SentinelOne declined Microsoft's request to optimize their apps for the new hardware, citing resource constraints and the limited market share of AI-capable PCs. Gregor Steward, SentinelOne's VP for AI, suggests it could take years before AI PCs are widespread enough to justify app optimization. Analysts project that by 2028, only 40% of new computers will be AI-capable. Despite these challenges, Qualcomm remains optimistic about the future of AI PCs. While the concept may currently be more on the marketing side, the introduction of Arm-based Windows laptops offers a welcome alternative to the Intel-AMD duopoly. As the technology evolves and adoption increases, on-device AI features may become more prevalent and useful. The imminent arrival of AMD Ryzen AI 300 series and Intel Lunar Lake chips promises to expand the Copilot+ PC space further. For now, however, it appears that superior battery life remains the primary selling point for consumers.

Microsoft Pulls Windows 11 24H2 from Release Preview Channel, Build Riddled with Bugs

Microsoft has unexpectedly halted the rollout of the upcoming Windows 11 24H2 update to Windows Insiders on the Release Preview Channel. The pause was quietly announced through an update to the original release blog post, which had initially touted the preview's new features like Wi-Fi 7 support, Sudo for Windows, Rust in the Windows kernel, and various UI enhancements. Microsoft has not provided an official reason for hitting the brakes on the 24H2 preview rollout. Brandon LeBlanc, the Windows Insider Senior Program Manager, simply stated, "We are working to get it rolling out again shortly."

However, a glimpse at the Microsoft Feedback Hub reveals a multitude of issues reported by Insiders testing the 24H2 build. Complaints range from application freezes and performance degradation to VPN connectivity problems. Some users have even taken to social media to voice their frustrations, with one describing the Arm version as a "disastrous, worst 'release' preview I can remember." The Release Preview Channel is typically recommended for commercial users and those wanting to test upcoming Windows releases before general availability. Meanwhile, the Dev Channel caters to users who are comfortable with instability and rough edges. As The Register notes, the current situation echoes Microsoft's troubled rollout of the Windows 10 October 2018 Update, which contained a data deletion bug.

Netgear Introduces New Additions to Industry-leading WiFi 7 Lineup of Home Networking Products

NETGEAR, Inc., the leading provider of innovative and secure solutions for people to connect and manage their digital lives, today expanded its WiFi 7 mesh and standalone router lines with the new Orbi 770 Tri-band Mesh System and Nighthawk RS300 Router. NETGEAR's most affordable WiFi 7 products to date build on the company's promise to provide powerful WiFi performance and secure connectivity.

WiFi 7 Changes the Game
WiFi 7 unlocks 2.4 times faster speeds than WiFi 6, delivers low latency and better handles WiFi interference, letting families seamlessly enjoy next-gen 4K/8K streaming, video conferencing, gaming, and more. Since the launch of NETGEAR's first WiFi 7 offerings - Nighthawk RS700 and Orbi 970 - multi-gig internet speeds have become more affordable, work-from-home demands have remained steady, and more devices such as AR/VR headsets or AI-focused platforms like Copilot+ have been introduced that require extremely low latency and higher throughput.

Nightmare Fuel for Intel: Arm CEO Predicts Arm will Take Over 50% Windows PC Market-share by 2029

Arm CEO Rene Haas predicts that SoCs based on the Arm CPU architecture will beat x86 in the Windows PC space within the next five years (by 2029). Haas is bullish about the current crop of Arm SoCs striking the right balance of performance and power efficiency, along with just the right blend of on-chip acceleration for AI and graphics, to make serious gains in a market that has traditionally been dominated by the x86 machine architecture, with chips from just two manufacturers, Intel and AMD. Arm, on the other hand, has a vibrant ecosystem of SoC vendors. "Arm's market share in Windows - I think, truly, in the next five years, it could be better than 50%," Haas said in an interview with Reuters.

Currently, Microsoft has an exclusive deal with Qualcomm to power Windows-on-Arm (WoA) Copilot+ AI PCs. Qualcomm's chip lineup spans the Snapdragon X Elite and Snapdragon X Plus. This exclusivity, however, could change, with a recent interview of Michael Dell and Jensen Huang hinting at NVIDIA working on a chip for the AI PC market. The writing is on the wall for Intel and AMD: they need to compete with Arm on its terms and make leaner PC processors with the kind of performance-per-watt and chip costs that Arm SoCs offer to PC OEMs. Intel has taken a big step in this direction with its "Lunar Lake" processor; you can read all about the architecture here.

MediaTek Joins Arm Total Design to Shape the Future of AI Computing

MediaTek announced today at COMPUTEX 2024 that the company has joined Arm Total Design, a fast-growing ecosystem that aims to accelerate and simplify the development of products based on Arm Neoverse Compute Subsystems (CSS). Arm Neoverse CSS is designed to meet the performance and efficiency needs of AI applications in the data center, infrastructure systems, telecommunications, and beyond.

"Together with Arm, we're enabling our customers' designs to meet the most challenging workloads for AI applications, maximizing performance per watt," said Vince Hu, Corporate Vice President at MediaTek. "We will be working closely with Arm as we expand our footprint into data centers, utilizing our expertise in hybrid computing, AI, SerDes and chiplets, and advance packaging technologies to accelerate AI innovation from the edge to the cloud."

Arm Also Announces Three New GPUs for Consumer Devices

In addition to its two new CPU cores, Arm has announced three new GPU cores, namely the Immortalis-G925, Mali-G725 and Mali-G625. Starting from the top, the Immortalis-G925 is said to bring up to 37 percent better performance at 30 percent lower power usage compared to last year's Immortalis-G720 GPU core, whilst having two additional GPU cores in the test scenario, along with up to 36 percent improved inference in AI/ML workloads. Aimed squarely at gaming phones, it has also received a major ray tracing overhaul: Arm claims it can deliver up to 52 percent higher ray tracing performance by reducing accuracy in scenes with intricate objects, or 27 percent more performance with accuracy maintained.

The Immortalis-G925 supports 50 percent more shader cores, with configurations of up to 24 cores compared to 16 for the Immortalis-G720. The Mali-G725 will be available with between six and nine cores, whereas the Mali-G625 will sport between one and five cores. The Mali-G625 is intended for smartwatches and entry-level mobile devices where a more complex GPU might not be suitable due to power draw. The Mali-G725, on the other hand, targets upper mid-range devices, while the Immortalis-G925 is aimed at flagship devices and gaming phones, as mentioned above. In related news, Arm said it's working with Epic Games to get its Unreal Engine 5 desktop renderer up and running on Android, which could lead to more complex games on mobile devices.

Arm Announces the Cortex-X925 and Cortex-A725 Armv9 CPU Cores

Arm has announced a pair of new Armv9 CPU cores today, alongside a refresh of a third. The new additions are the Cortex-X925, a huge model-number jump from the previous Cortex-X4, and the Cortex-A725, which should be an upgraded Cortex-A720. Finally, the Cortex-A520 has been refreshed to bring a 15 percent power efficiency improvement as well as support for 3 nm production nodes. Arm claims that the Cortex-X925 delivers its highest generational performance improvement ever, with a single-core uplift of up to 36 percent and an AI performance improvement of up to 46 percent compared to the Cortex-X4. The Cortex-X925 will support up to 3 MB of private L2 cache and is tape-out ready for 3 nm production nodes.

The Cortex-A725 is said to offer a 35 percent performance efficiency improvement over the Cortex-A720, and it has been given performance boosts in both AI and gaming workloads. It's said to be up to 25 percent more power efficient than the Cortex-A720, and L3 cache traffic has been improved by up to 20 percent. Again, the Cortex-A725 is ready for production on a 3 nm node. Finally, Arm has also updated its DynamIQ Shared Unit to the DSU-120, where Arm has managed to reduce typical workload power by up to 50 percent and cache miss power by up to 60 percent. The DSU-120 scales up to 14 Arm cores, suggesting that we might get to see some interesting new SoC implementations from Arm's partners in the coming years, although Arm's reference platform is a 2-4-2 configuration of the new cores.

ZOTAC to Debut Limit-Pushing Handheld Gaming PC and Showcase AI-Centric Computing Solutions at Computex 2024

ZOTAC Technology, a global manufacturer focused on innovative and high-performance hardware solutions, will return to COMPUTEX 2024 to showcase its biggest push yet into brand-new product categories. At this year's exhibition, ZOTAC will unveil its first attempt at creating a unique Handheld Gaming PC with advanced controls and features, allowing gamers to enjoy their favorite games on the go like never before with maximum competitive advantage.

Also in ZOTAC's extensive lineup is a full-fledged selection of AI-focused computational hardware, including a new workstation-grade External GPU Box series for hassle-free GPU compute and AI acceleration, ZBOX mini PCs powered by Intel Core Ultra CPUs equipped with integrated neural processing units (NPU), as well as other enterprise-grade solutions, such as GPU Servers and Arm-based NVIDIA Jetson systems, offering users a broad selection of AI accelerators in applications big and small.

NVIDIA's Arm-based AI PC Processor Could Leverage Arm Cortex X5 CPU Cores and Blackwell Graphics

Last week, we got confirmation from the highest levels of Dell and NVIDIA that the latter is making a client PC processor for the Windows on Arm (WoA) AI PC ecosystem, which currently has only one player in it, Qualcomm. Michael Dell hinted that this NVIDIA AI PC processor would be ready in 2025. Since then, speculation has been rife about the various IP blocks NVIDIA could use in the development of this chip; the two key areas of debate have been the CPU cores and the process node.

Given that NVIDIA is gunning for a 2025 launch of its AI PC processor, the company could implement reference Arm IP CPU cores, such as the Arm Cortex X5 "Blackhawk," rather than venture out toward developing its own CPU cores on the Arm machine architecture, unlike Apple. Depending on how the market receives its chips, NVIDIA could eventually develop its own cores. Next up, the company could use the most advanced 3 nm-class foundry node available in 2025 for its chip, such as the TSMC N3P. Given that even Apple and Qualcomm will build their contemporary notebook chips on this node, it would be a logical choice for NVIDIA. Then there's graphics and AI acceleration hardware.

Qualcomm's Success with Windows AI PC Drawing NVIDIA Back to the Client SoC Business

NVIDIA is eyeing a comeback to the client processor business, reveals a Bloomberg interview with the CEOs of NVIDIA and Dell. For NVIDIA, all it takes is a simple driver update that exposes every GeForce GPU with tensor cores as an NPU to Windows 11, with translation layers to get popular client AI apps to work with TensorRT. But that would require you to have a discrete NVIDIA GPU. What about the vast market of Windows AI PCs powered by the likes of Qualcomm, Intel, and AMD, who each sell 15 W-class processors with integrated NPUs capable of 50 AI TOPS, which is all that Copilot+ needs? NVIDIA has held an Arm license for decades and makes Arm-based CPUs to this day with NVIDIA Grace; however, that is a large server processor meant for its AI GPU servers.
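
Neither NVIDIA nor Microsoft has detailed how such a translation layer would work, but as a loose illustration of the idea, a client AI app built on ONNX Runtime can already pick whichever accelerator is present at run time, falling back from a TensorRT-capable GeForce GPU to DirectML or the CPU. The model path below is a placeholder.

```python
import onnxruntime as ort

# Preference order: TensorRT on a GeForce GPU, then DirectML (commonly used to
# reach Windows GPUs/NPUs), then plain CPU. "model.onnx" is a placeholder path.
PREFERRED = ["TensorrtExecutionProvider", "DmlExecutionProvider", "CPUExecutionProvider"]

available = ort.get_available_providers()
providers = [p for p in PREFERRED if p in available] or ["CPUExecutionProvider"]

session = ort.InferenceSession("model.onnx", providers=providers)
print("Running on:", session.get_providers()[0])
```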

NVIDIA already made client processors under the Tegra brand targeting smartphones, a business it wound down last decade. It has since been making Drive PX processors for its automotive self-driving hardware division; and of course there's Grace. NVIDIA hinted that it might have a client CPU for the AI PC market in 2025. In the interview, Bloomberg asked NVIDIA CEO Jensen Huang a pointed question on whether NVIDIA has a place in the AI PC market. Dell CEO Michael Dell, who was also in the interview, interjected "come back next year," to which Jensen affirmed "exactly." Dell would be in a front-and-center position to know if NVIDIA is working on a new PC processor for launch in 2025, and Jensen's nod all but confirms it.

ASUS Leaks its own Snapdragon X Elite Notebook

Courtesy of ASUS Vietnam (via @rquandt on X/Twitter), we now have an idea of what ASUS' first Qualcomm Snapdragon X Elite notebook will look like, as well as its main specifications. It will share the Vivobook S 15 OLED branding with other notebooks from ASUS, although the leaked model carries the model number S5507QA-MA089WS. At its core is a Qualcomm Snapdragon X Elite X1E-78-100 SoC, which is the base model from Qualcomm. The SoC features 12 Oryon CPU cores, a peak multi-threaded clock speed of 3.4 GHz, 42 MB of cache, and a 75 TOPS AI engine. The SoC is also home to a Qualcomm Adreno GPU, but so far Qualcomm hasn't released any useful specs about the GPU in the Snapdragon X Elite series of chips.

ASUS has paired the SoC with 32 GB of LPDDR5X memory of an unknown clock speed, although Qualcomm officially supports speeds of up to 8,448 MT/s in a configuration unusual for PC users: eight channels, each 16 bits wide, for a bandwidth of up to 135 GB/s. For comparison, Intel's latest Core Ultra processors max out at LPDDR5X-7467 and up to 120 GB/s of memory bandwidth. Other features include a 1 TB PCIe 4.0 NVMe SSD, a glossy 15.6-inch, 2,880 x 1,620-resolution, 120 Hz OLED display with 600 nits peak brightness, and a 70 Wh battery. It's unclear what connectivity options will be on offer, but judging by the screenshot below, we can at least expect an HDMI output as well as a pair of USB Type-C ports, a microSD card slot and a headphone jack. As far as pricing goes, Roland Quandt is suggesting a €1,500 base price on X/Twitter, but we'll have to wait for the official launch to find out what these Arm-based laptops will retail for. ASUS Vietnam has already removed the page from its website.
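
The 135 GB/s figure quoted above follows directly from that memory configuration; a quick back-of-the-envelope check:

```python
# Peak LPDDR5X bandwidth = channels x channel width (bytes) x transfer rate
channels = 8
bus_width_bytes = 16 / 8            # each channel is 16 bits wide
transfers_per_sec = 8_448e6          # 8,448 MT/s

bandwidth_gb_s = channels * bus_width_bytes * transfers_per_sec / 1e9
print(f"{bandwidth_gb_s:.0f} GB/s")  # ~135 GB/s, matching Qualcomm's figure
```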

Ampere Scales AmpereOne Product Family to 256 Cores

Ampere Computing today released its annual update on upcoming products and milestones, highlighting the company's continued innovation and invention around sustainable, power efficient computing for the Cloud and AI. The company also announced that they are working with Qualcomm Technologies, Inc. to develop a joint solution for AI inferencing using Qualcomm Technologies' high-performance, low power Qualcomm Cloud AI 100 inference solutions and Ampere CPUs.

Semiconductor industry veteran and Ampere CEO Renee James said the increasing power requirements and energy challenge of AI is bringing Ampere's silicon design approach around performance and efficiency into focus more than ever. "We started down this path six years ago because it is clear it is the right path," James said. "Low power used to be synonymous with low performance. Ampere has proven that isn't true. We have pioneered the efficiency frontier of computing and delivered performance beyond legacy CPUs in an efficient computing envelope."

European Supercomputer Chip SiPearl Rhea Delayed, But Upgraded with More Cores

The rollout of SiPearl's much-anticipated Rhea processor for European supercomputers has been pushed back by a year to 2025, but the delay comes with a silver lining - a significant upgrade in core count and potential performance. Originally slated to arrive in 2024 with 72 cores, the homegrown high-performance chip will now pack 80 cores when it eventually launches. This decisive move by SiPearl and its partners is a strategic choice to ensure the utmost quality and capabilities for the flagship European processor. The additional 12 months will allow the engineering teams to further refine the chip's architecture, carry out extensive testing, and optimize software stacks to take full advantage of Rhea's computing power. Now called the Rhea1, the chip is a crucial component of the European Processor Initiative's mission to develop domestic high-performance computing technologies and reduce reliance on foreign processors. Supercomputer-scale simulations spanning climate science, drug discovery, energy research and more all require astonishing amounts of raw compute grunt.

By scaling up to 80 cores based on the latest Arm Neoverse V1, Rhea1 aims to go toe-to-toe with the world's most powerful processors optimized for supercomputing workloads. SiPearl plans to use TSMC's N6 manufacturing process. The CPU will have 256-bit DDR5 memory connections, 104 PCIe 5.0 lanes, and four stacks of HBM2E memory. The roadmap shift also provides more time for the expansive European supercomputing ecosystem to prepare robust software stacks tailored for the upgraded Rhea silicon. Ensuring a smooth deployment with existing models and enabling future breakthroughs are top priorities. While the delay is a setback for SiPearl's launch schedule, the substantial upgrade could pay significant dividends for Europe's ambitions to join the elite ranks of worldwide supercomputing power. All eyes will be on Rhea1's delivery in 2025, particularly from the European governments funding the project.
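
For a sense of scale, the sketch below estimates peak bandwidth for the interfaces listed above. The per-interface data rates are assumptions chosen only for illustration, not confirmed SiPearl specifications.

```python
# Rough peak-bandwidth estimate for the Rhea1 memory/IO configuration described
# above. The per-interface data rates are illustrative assumptions.

ddr5_bus_bits   = 256
ddr5_mt_s       = 5_600e6        # assumed DDR5 transfer rate
hbm2e_stacks    = 4
hbm2e_gb_s      = 460            # assumed per-stack HBM2E bandwidth
pcie5_lanes     = 104
pcie5_gb_s_lane = 3.94           # approx. usable PCIe 5.0 bandwidth per lane, per direction

ddr5_gb_s = ddr5_bus_bits / 8 * ddr5_mt_s / 1e9
print(f"DDR5 : {ddr5_gb_s:.0f} GB/s")                       # ~179 GB/s
print(f"HBM2E: {hbm2e_stacks * hbm2e_gb_s} GB/s")           # ~1,840 GB/s
print(f"PCIe : {pcie5_lanes * pcie5_gb_s_lane:.0f} GB/s")   # ~410 GB/s
```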

SpiNNcloud Systems Announces First Commercially Available Neuromorphic Supercomputer

Today, in advance of ISC High Performance 2024, SpiNNcloud Systems announced the commercial availability of its SpiNNaker2 platform, a supercomputer-level hybrid AI high-performance computer system based on principles of the human brain. Pioneered by Steve Furber, designer of the original ARM and SpiNNaker1 architectures, the SpiNNaker2 supercomputing platform uses a large number of low-power processors for efficiently computing AI and other workloads.

First-generation SpiNNaker1 architecture is currently used by dozens of research groups across 23 countries. Sandia National Laboratories, the Technical University of Munich and the University of Göttingen are among the first customers placing orders for SpiNNaker2, which was developed around commercialized IP invented in the Human Brain Project, a billion-euro research project funded by the European Union to design intelligent, efficient artificial systems.

Report: 3 Out of 4 Laptop PCs Sold in 2027 will be AI Laptop PCs

Personal computers (PCs) have been used as the major productivity device for several decades. But now we are entering a new era of PCs based on artificial intelligence (AI), thanks to the boom witnessed in generative AI (GenAI). We believe the inventory correction and demand weakness in the global PC market have already normalized, with the impacts from COVID-19 largely being factored in. All this has created a comparatively healthy backdrop for reshaping the PC industry. Counterpoint estimates that almost half a billion AI laptop PCs will be sold during the 2023-2027 period, with AI PCs reviving the replacement demand.

Counterpoint separates GenAI laptop PCs into three categories - AI basic laptop, AI-advanced laptop and AI-capable laptop - based on different levels of computational performance, corresponding use cases and the efficiency of computational performance. We believe AI basic laptops, which are already in the market, can perform basic AI tasks but cannot fully handle GenAI tasks. Starting this year, they will be supplanted by AI-advanced and AI-capable models with enough TOPS (tera operations per second), powered by an NPU (neural processing unit) or GPU (graphics processing unit), to perform advanced GenAI tasks well.

ASUS Announces Chromebook CZ Series Laptops

ASUS today announced the ASUS Chromebook CZ series of laptops tailored for K12 students, which offer a portable and durable design for enduring value and empower engaged learning - anywhere. Featuring a 180° lay-flat or 360°-flippable design and an 11.6-inch (CZ1104C/F) or 12.2-inch (CZ1204C/F) 16:10 WUXGA display or touchscreen with TÜV-certified eye-care technology, the series not only enhances the visual experience but also prioritizes eye protection. ASUS Chromebook CZ series laptops are built for easy IT maintenance, with a modular design that minimizes downtime. The series boasts an extended battery life for uninterrupted use throughout an entire day of classes, empowering students to extend their learning beyond traditional limits.

A reliable study companion
For K12 students, an everyday-use laptop should be invincible. With lively and active users, scratches and knocks are an almost-inevitable part of their daily routine, so the ASUS Chromebook CZ series features an all-round rubber bumper for extra peace of mind. The laptops also feature a rugged design that's tested to meet or exceed the MIL-STD-810H US military-grade standard, and use tough Corning Gorilla Glass to protect the screen from scratches. Additionally, the spill-resistant keyboard can cope with minor water spills without harm, so minor splashes on the desk or at the dinner table can be easily drained, cleaned, and dried. Finally, the special fingerprint-resistant finish keeps the laptop cleaner for longer.

US Weighs National Security Risks of China's RISC-V Chip Development Involvement

The US government is investigating the potential national security risks associated with China's involvement in the development of open-source RISC-V chip technology. According to a letter obtained by Reuters, the Department of Commerce has informed US lawmakers that it is actively reviewing the implications of China's work in this area. RISC-V, an open instruction set architecture (ISA) created in 2014 at the University of California, Berkeley, offers an alternative to proprietary and licensed ISAs like those developed by Arm. This open-source ISA can be utilized in a wide range of applications, from AI chips and general-purpose CPUs to high-performance computing applications. Major Chinese tech giants, including Alibaba and Huawei, have already embraced RISC-V, positioning it as a new battleground in the ongoing technological rivalry between the United States and China over cutting-edge semiconductor capabilities.

In November, a group of 18 US lawmakers from both chambers of Congress urged the Biden administration to outline its strategy for preventing China from gaining a dominant position in RISC-V technology, expressing concerns about the potential impact on US national and economic security. While acknowledging the need to address potential risks, the Commerce Department noted in its letter that it must proceed cautiously to avoid unintentionally harming American companies actively participating in international RISC-V development groups. Previous attempts to restrict the transfer of 5G technology to China have created obstacles for US firms involved in global standards bodies where China is also a participant, potentially jeopardizing American leadership in the field. As the review process continues, the Commerce Department faces the delicate task of balancing national security interests with the need to maintain the competitiveness of US companies in the rapidly evolving landscape of open-source chip technologies.

Google Launches Axion Arm-based CPU for Data Center and Cloud

Google has officially joined the club of custom Arm-based, in-house-developed CPUs. As of today, Google's in-house semiconductor development team has launched the "Axion" CPU based on Arm instruction set architecture. Using the Arm Neoverse V2 cores, Google claims that the Axion CPU outperforms general-purpose Arm chips by 30% and Intel's processors by a staggering 50% in terms of performance. This custom silicon will fuel various Google Cloud offerings, including Compute Engine, Kubernetes Engine, Dataproc, Dataflow, and Cloud Batch. The Axion CPU, designed from the ground up, will initially support Google's AI-driven services like YouTube ads and Google Earth Engine. According to Mark Lohmeyer, Google Cloud's VP and GM of compute and machine learning infrastructure, Axion will soon be available to cloud customers, enabling them to leverage its performance without overhauling their existing applications.

Google's foray into custom silicon aligns with the strategies of its cloud rivals, Microsoft and Amazon. Microsoft recently unveiled its own AI chip for training large language models and an Arm-based CPU called Cobalt 100 for cloud and AI workloads. Amazon, on the other hand, has been offering Arm-based servers through its custom Graviton CPUs for several years. While Google won't sell these chips directly to customers, it plans to make them available through its cloud services, enabling businesses to rent and leverage their capabilities. As Amin Vahdat, the executive overseeing Google's in-house chip operations, stated, "Becoming a great hardware company is very different from becoming a great cloud company or a great organizer of the world's information."

Intel Xeon Scalable Gets a Rebrand: Intel "Xeon 6" with Granite Rapids and Sierra Forest Start a New Naming Scheme

During the Vision 2024 event, Intel announced that its upcoming Xeon processors will be branded under the new "Xeon 6" moniker. This rebranding effort aims to simplify the company's product stack, replacing the previous "x Generation Xeon Scalable" naming and aligning with the recent changes made to its consumer CPU naming scheme. The highly anticipated Sierra Forest and Granite Rapids chips will be the first processors to bear the Xeon 6 branding, and they are set to launch in the coming months. Intel has confirmed that Sierra Forest, designed entirely with efficiency cores (E-cores), remains on track for release this quarter. Supermicro has already announced early availability and remote testing programs for these chips. Intel's Sierra Forest is set to deliver a substantial leap in performance. According to the company, it will offer a 2.4X improvement in performance per watt and a staggering 2.7X better performance per rack compared to the previous generation. This means that 72 Sierra Forest server racks will provide the same performance as 200 racks equipped with older second-gen Xeon CPUs, leading to significant power savings and a boost in overall efficiency for data centers upgrading their systems.

Intel has also teased an exciting feature in its forthcoming Granite Rapids processors: support for the MXFP4 data format. This new precision format, backed by the Open Compute Project (OCP) and major industry players like NVIDIA, AMD, and Arm, promises significant performance gains; it could reduce next-token latency by up to 6.5X compared to fourth-gen Xeons using FP16. Additionally, Intel stated that Granite Rapids will be capable of running 70-billion-parameter Llama-2 models, a capability that could open up new possibilities in data processing. Intel claims that 70-billion-parameter 4-bit models run entirely on Xeon in just 86 milliseconds. While Sierra Forest is slated for this quarter, Intel has not provided a specific launch timeline for Granite Rapids, stating only that it will arrive "soon after" its E-core counterpart. The Xeon 6 branding aims to simplify the product stack and clarify performance tiers for customers as the company gears up for these major releases.
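
MXFP4, mentioned above, is an OCP microscaling format in which a block of values shares a single power-of-two scale and each element is stored as a 4-bit E2M1 float. The sketch below is a simplified, assumed illustration of that scheme, not Intel's or OCP's reference implementation.

```python
import numpy as np

# Positive magnitudes representable by an FP4 E2M1 element
FP4_E2M1 = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_block_mxfp4(block):
    """Quantize one block of values to an MXFP4-style representation:
    a single shared power-of-two scale plus 4-bit (E2M1) elements."""
    block = np.asarray(block, dtype=np.float64)
    max_abs = float(np.max(np.abs(block)))
    if max_abs == 0.0:
        return block.copy()
    # Shared scale: align the block's largest exponent with E2M1's largest (2, since max = 6.0)
    scale = 2.0 ** (np.floor(np.log2(max_abs)) - 2)
    scaled = np.clip(block / scale, -6.0, 6.0)
    # Round each element to the nearest representable E2M1 magnitude, keeping the sign
    idx = np.abs(np.abs(scaled)[:, None] - FP4_E2M1[None, :]).argmin(axis=1)
    return np.sign(scaled) * FP4_E2M1[idx] * scale

weights = np.random.randn(32)          # one 32-element block, as in the MX formats
approx = quantize_block_mxfp4(weights)
print("max abs error:", np.max(np.abs(weights - approx)))
```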

AMD Extends Leadership Adaptive SoC Portfolio with New Versal Series Gen 2 Devices Delivering End-to-End Acceleration for AI-Driven Embedded Systems

AMD today announced the expansion of the AMD Versal adaptive system on chip (SoC) portfolio with the new Versal AI Edge Series Gen 2 and Versal Prime Series Gen 2 adaptive SoCs, which bring preprocessing, AI inference, and postprocessing together in a single device for end-to-end acceleration of AI-driven embedded systems.

These initial devices in the Versal Series Gen 2 portfolio build on the first generation with powerful new AI Engines expected to deliver up to 3x higher TOPS-per-watt than first-generation Versal AI Edge Series devices, while new high-performance integrated Arm CPUs are expected to offer up to 10x more scalar compute than first-gen Versal AI Edge and Prime series devices.

AIO Workstation Combines 128-Core Arm Processor and Four NVIDIA GPUs Totaling 28,416 CUDA Cores

All-in-one computers are often traditionally seen as lower-powered alternatives to traditional desktop workstations. However, a new offering from Alafia AI, a startup focused on medical imaging appliances, aims to shatter that perception. The company's upcoming Alafia Aivas SuperWorkstation packs serious hardware muscle, demonstrating that all-in-one systems can match the performance of their more modular counterparts. At the heart of the Aivas SuperWorkstation lies a 128-core Ampere Altra processor, running at 3.0 GHz clock speed. This CPU is complemented by not one but three NVIDIA L4 GPUs for compute, and a single NVIDIA RTX 4000 Ada GPU for video output, delivering a combined 28,416 CUDA cores for accelerated parallel computing tasks. The system doesn't skimp on other components, either. It features a 4K touch display with up to 360 nits of brightness, an extensive 2 TB of DDR4 RAM, and storage options up to an 8 TB solid-state drive. This combination of cutting-edge CPU, GPU, memory, and storage is squarely aimed at the demands of medical imaging and AI development workloads.
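
The headline figure is simply the sum of the four cards' CUDA cores; taking NVIDIA's published per-card counts, the arithmetic checks out:

```python
# CUDA cores per card (published figures for the NVIDIA L4 and RTX 4000 Ada)
l4_cores, rtx4000_ada_cores = 7_424, 6_144

total = 3 * l4_cores + 1 * rtx4000_ada_cores   # three L4s plus one RTX 4000 Ada
print(f"{total:,}")                            # 28,416, matching the headline figure
```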

The all-in-one form factor packs this hardware into a sleek, purposefully designed clinical research appliance. While initially targeting software developers, Alafia AI hopes that institutions that optimize their applications for the Arm architecture can eventually deploy the Aivas SuperWorkstation for production medical imaging workloads. The company is aiming for application integration in Q3 2024 and full ecosystem device integration by Q4 2024. With this powerful new offering, Alafia AI is challenging long-held assumptions about the performance limitations of all-in-one systems. The Aivas SuperWorkstation demonstrates that the right hardware choices can transform these compact form factors into true powerhouse workstations. With the combined output of three NVIDIA L4 compute GPUs alongside the RTX 4000 Ada graphics card, this AIO is more powerful than some high-end desktop workstations.