News Posts matching #acceleration

NVIDIA App Update Adds Project G-Assist, DLSS Super Resolution Custom Scaling & New Control Panel Features

NVIDIA app is the essential companion for users with NVIDIA GPUs in their PCs and laptops. Whether you're a gaming enthusiast or a content creator, NVIDIA app simplifies the process of keeping your PC updated with the latest GeForce Game Ready and NVIDIA Studio Drivers, and enables quick discovery and installation of NVIDIA applications like GeForce NOW and NVIDIA Broadcast. In a new NVIDIA app update that's available now, we've expanded the functionality of our DLSS overrides, enabling you to fine-tune image quality or boost performance for DLSS Super Resolution.
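To illustrate what a custom scaling factor means in practice, here is a minimal sketch of how a DLSS Super Resolution scale maps to the internal render resolution before upscaling. The preset percentages are the commonly cited quality-mode ratios, and the exact steps the NVIDIA app exposes are an assumption here, not official values.

```python
# Sketch: how a DLSS Super Resolution scaling factor maps to an
# internal render resolution. Preset ratios are approximate,
# commonly cited values, not NVIDIA's official documentation.

def render_resolution(output_w: int, output_h: int, scale: float) -> tuple[int, int]:
    """Return the internal resolution DLSS would render before upscaling."""
    return round(output_w * scale), round(output_h * scale)

# Familiar quality-mode ratios for reference (approximate).
presets = {"Quality": 0.67, "Balanced": 0.58, "Performance": 0.50}

for name, scale in presets.items():
    w, h = render_resolution(3840, 2160, scale)
    print(f"{name:>12}: renders {w}x{h}, outputs 3840x2160")

# A custom override sits anywhere on this dial, trading image
# quality (higher scale) against frame rate (lower scale).
print("  Custom 75%:", render_resolution(3840, 2160, 0.75))
```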

Additionally, we've brought Display Scaling and Display Color settings over from the NVIDIA Control Panel, modernizing and improving them, and taking another step towards unifying all NVIDIA GPU features in one responsive application. And via the Discover section, you can now download Project G-Assist, an experimental AI assistant that runs locally on GeForce RTX AI desktop PCs, helping users control a broad range of PC settings, from optimizing game and system settings, charting frame rates and other key performance statistics, to controlling select peripheral settings such as lighting - all via basic voice or text commands.

Google Teams up with MediaTek for Next-Generation TPU v7 Design

According to Reuters, citing The Information, Google will collaborate with MediaTek to develop its seventh-generation Tensor Processing Unit (TPU), which is also known as TPU v7. Google maintains its existing partnership with Broadcom despite the new MediaTek collaboration. The AI accelerator is scheduled for production in 2026, and TSMC is handling manufacturing duties. Google will lead the core architecture design while MediaTek manages I/O and peripheral components, as Economic Daily News reports. This differs from Google's ongoing relationship with Broadcom, which co-develops core TPU architecture. The MediaTek partnership reportedly stems from the company's strong TSMC relationship and lower costs compared to Broadcom.

There is also a possibility that MediaTek could design inference-focused TPU v7 chips while Broadcom focuses on training architecture. Nonetheless, TPU development is a massive undertaking; Google deploys so many chips that it could, hypothetically, engage a third design partner. The TPU program continues Google's vertical integration strategy for AI infrastructure. Google reduces dependency on NVIDIA hardware by designing proprietary AI chips for internal R&D and cloud operations. At the same time, competitors like OpenAI, Anthropic, and Meta rely heavily on NVIDIA's processors for AI training and inference. At Google's scale, serving billions of queries a day, designing custom chips makes sense both financially and technologically. As Google develops its own specific workloads, translating them into hardware acceleration is the game Google has been playing for years now.

Equal1 Launches Bell-1: The First Quantum System Purpose-Built for the HPC Era

Equal1 today unveils Bell-1, the first quantum system purpose-built for the HPC era. Unlike first-generation quantum computers that demand dedicated rooms, infrastructure, and complex cooling systems, Bell-1 is designed for direct deployment in HPC-class environments. As a rack-mountable quantum node, it integrates directly alongside classical compute—as compact as a GPU server, yet exponentially more powerful for the world's hardest problems. Bell-1 is engineered to eliminate the traditional barriers of cost, infrastructure, and complexity, setting a new benchmark for scalable quantum computing integration.

Bell-1 rewrites the rule book. While today's quantum computers demand specialized infrastructure, Bell-1 is a silicon-powered quantum computer that integrates seamlessly into existing HPC environments. Simply rack it, plug it in, and unlock quantum capabilities wherever your classical computers already operate. No new cooling systems. No extraordinary power demands. Just quantum computing that works in the real world, as easy to deploy as a high-end GPU server. It plugs into a standard power socket, operates at just 1600 W, and delivers on-demand quantum computing for computationally intensive workloads.

AMD to Discuss Advancing of AI "From the Enterprise to the Edge" at MWC 2025

GSMA MWC Barcelona runs from March 3 to 6, 2025, at the Fira Barcelona Gran Via in Barcelona, Spain. AMD is proud to participate in forward-thinking discussions and demos around AI, edge and cloud computing, the long-term revolutionary potential of moonshot technologies like quantum processing, and more. Check out the AMD hospitality suite in Hall 2 (Stand 2M61) and explore our demos and system design wins. Attendees are welcome to stop by informally or schedule a time slot with us.

As modern networks evolve, high-performance computing, energy efficiency, and AI acceleration are becoming just as critical as connectivity itself. AMD is at the forefront of this transformation, delivering solutions that power next-generation cloud, AI, and networking infrastructure. Our demos this year showcase AMD EPYC, AMD Instinct, and AMD Ryzen AI processors, as well as AMD Versal adaptive SoC and Zynq UltraScale+ RFSoC devices.

Baya Systems and Semidynamics Collaborate to Accelerate RISC-V System-on-Chip Development

Baya Systems, a leader in system IP technology that empowers the acceleration of intelligent compute, and Semidynamics, a provider of fully customizable high-bandwidth and high-performance RISC-V processor IP, today announced a collaboration to boost innovation in development of hyper-efficient, next-generation platforms for artificial intelligence (AI), machine learning (ML) and high-performance computing (HPC) applications.

The collaboration integrates Semidynamics' family of 64-bit RISC-V processor IP cores, known for their exceptional memory bandwidth and configurability, with Baya Systems' innovative WeaveIP Network on Chip (NoC) system IP. WeaveIP is engineered for ultra-efficient, high-bandwidth, and low-latency data transport, crucial for the demands of modern workloads. Complementing this is Baya Systems' software-driven WeaverPro platform, which enables rapid system-level optimization, ensuring that key performance indicators (KPIs) are met based on real-world workloads while providing unparalleled design flexibility for future advancements.

Imagination's New DXTP GPU for Mobile and Laptop: 20% More Power Efficient

Today Imagination Technologies announces its latest GPU IP, Imagination DXTP, which sets a new standard for the efficient acceleration of graphics and compute workloads on smartphones and other power-constrained devices. Thanks to an array of micro-architectural improvements, DXTP delivers up to 20% improved power efficiency (FPS/W) on popular graphics workloads when compared to its DXT equivalent.

"The global smartphone market is experiencing a resurgence, propelled by cutting-edge AI features such as personal agents and enhanced photography," says Peter Richardson, Partner & VP at Counterpoint Research. "However, the success of this AI-driven revolution hinges on maintaining the high standards users expect: smooth interfaces, sleek designs, and all-day battery life. As the market matures, consumers are gravitating towards premium devices that seamlessly integrate these advanced AI capabilities without compromising on essential smartphone qualities."

Lenovo Delivers Unmatched Flexibility, Performance and Design with New ThinkSystem V4 Servers Powered by Intel Xeon 6 Processors

Today, Lenovo announced three new infrastructure solutions, powered by Intel Xeon 6 processors, designed to modernize and elevate data centers of any size to AI-enabled powerhouses. The solutions include next generation Lenovo ThinkSystem V4 servers that deliver breakthrough performance and exceptional versatility to handle any workload while enabling powerful AI capabilities in compact, high-density designs. Whether deploying at the edge, co-locating or leveraging a hybrid cloud, Lenovo is delivering the right mix of solutions that seamlessly unlock intelligence and bring AI wherever it is needed.

The new Lenovo ThinkSystem servers are purpose-built to run the widest range of workloads, including the most compute-intensive - from algorithmic trading to web serving, astrophysics to email, and CRM to CAE. Organizations can streamline management and boost productivity with the new systems, achieving up to 6.1x higher compute performance than previous-generation CPUs with Intel Xeon 6 P-core processors, and up to 2x the memory bandwidth when using new MRDIMM technology, to scale and accelerate AI everywhere.

MITAC Computing Announces Intel Xeon 6 CPU-powered Next-gen AI & HPC Server Series

MiTAC Computing Technology Corporation, a leading server platform design manufacturer and a subsidiary of MiTAC Holdings Corporation, today announced the launch of its latest server systems and motherboards powered by the latest Intel Xeon 6 with P-core processors. These industry-leading processors are designed for compute-intensive workloads, providing up to twice the performance for the widest range of workloads, including AI and HPC.

Driving Innovation in AI and High-Performance Computing
"For over a decade, MiTAC Computing has collaborated with Intel to push the boundaries of server technology, delivering cutting-edge solutions optimized for AI and high-performance computing (HPC)," said Rick Hwang, President of MiTAC Computing Technology Corporation. "With the integration of the latest Intel Xeon 6 P-core processors our servers now unlock groundbreaking AI acceleration, boost computational efficiency, and scale cloud operations to new heights. These innovations provide our customers with a competitive edge, empowering them to tackle demanding workloads with superior empower our customers with a competitive edge through superior performance and an optimized total cost of ownership."

NVIDIA Recommends GeForce RTX 5070 Ti GPU to AI Content Creators

The NVIDIA GeForce RTX 5070 Ti graphics cards—built on the NVIDIA Blackwell architecture—are out now, ready to power generative AI content creation and accelerate creative performance. GeForce RTX 5070 Ti GPUs feature fifth-generation Tensor Cores with support for FP4, doubling performance and reducing VRAM requirements to run generative AI models.
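The VRAM claim follows from simple sizing arithmetic: a model's weight footprint scales with bits per parameter, so moving from FP8 to FP4 halves it. A rough sketch, with illustrative model sizes that are assumptions rather than figures from the announcement:

```python
# Rough sizing sketch for why FP4 halves VRAM needs relative to FP8:
# weight footprint = parameter count * bits per weight. The 7B/12B
# model sizes are illustrative assumptions, not NVIDIA's examples.

def weights_gb(params_billion: float, bits: int) -> float:
    """Approximate weight storage in GB (ignores activations and KV cache)."""
    return params_billion * 1e9 * bits / 8 / 1e9

for params in (7, 12):
    fp8 = weights_gb(params, 8)
    fp4 = weights_gb(params, 4)
    print(f"{params}B params: FP8 {fp8:.1f} GB -> FP4 {fp4:.1f} GB")
```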

In addition, the GPU comes equipped with two ninth-generation encoders and a sixth-generation decoder that add support for the 4:2:2 pro-grade color format and increase encoding quality for HEVC and AV1. This combo accelerates video editing workflows, reducing export times by 8x compared with single-encoder GPUs without 4:2:2 support, like the GeForce RTX 3090. The GeForce RTX 5070 Ti GPU also includes 16 GB of fast GDDR7 memory and 896 GB/sec of total memory bandwidth—a 78% increase over the GeForce RTX 4070 Ti GPU.
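The bandwidth figures are easy to sanity-check from the published specs. The sketch below assumes the RTX 5070 Ti's 256-bit bus with 28 Gbps GDDR7 and the RTX 4070 Ti's 192-bit bus with 21 Gbps GDDR6X, values taken from public spec sheets rather than this article:

```python
# Sanity check of the quoted memory bandwidth numbers. Bus widths
# and per-pin data rates are assumptions from public spec sheets.

def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    """Total bandwidth in GB/s = bus width (bits) * per-pin rate / 8."""
    return bus_bits * gbps_per_pin / 8

rtx_5070_ti = bandwidth_gbs(256, 28.0)  # GDDR7
rtx_4070_ti = bandwidth_gbs(192, 21.0)  # GDDR6X

print(f"RTX 5070 Ti: {rtx_5070_ti:.0f} GB/s")          # 896 GB/s
print(f"RTX 4070 Ti: {rtx_4070_ti:.0f} GB/s")          # 504 GB/s
print(f"Uplift: {rtx_5070_ti / rtx_4070_ti - 1:.0%}")  # ~78%
```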

Xbox Introduces Muse: a Generative AI Model for Gameplay

In nearly every corner of our lives, the buzz about AI is impossible to ignore. It's destined to revolutionize how we work, learn, and play. For those of us immersed in the world of gaming—whether as players or creators—the question isn't just how AI will change the game, but how it will ignite new possibilities.

At Xbox, we're all about using AI to make things better (and more fun!) for players and game creators. We want to bring more games to more people around the world and always stay true to the creative vision and artistry of game developers. We believe generative AI can boost this creativity and open up new possibilities. We're excited to announce a generative AI breakthrough, published today in the journal Nature and announced by Microsoft Research, that demonstrates this potential—including the opportunity to make older games accessible to future generations of players across new devices and in new ways.

SEGA Unveils Sonic Racing: CrossWorlds

New worlds await in the upcoming Sonic the Hedgehog racing game—Sonic Racing: CrossWorlds—introducing a unique gameplay mechanic transporting the iconic characters from the Sonic and Sega universes into new dimensions. The newest entry to the Sonic Racing series is a Sonic game, driving game, and action game all-in-one that will offer an exciting racing experience across several modes, worlds, and more! Sonic Racing: CrossWorlds is coming to PC and current generation consoles soon. Sega is sharing details on what players can expect to see in the game.

Travel Rings
Sonic Racing: CrossWorlds introduces Travel Rings, a new gameplay mechanic making each race feel surprising and fresh. Travel Rings bring dramatic changes to races, transporting players from one world to a completely new location. The lead racer gets to choose the CrossWorld that all players will be transported to on the second lap, allowing the environments to come alive with various track changes. Each CrossWorld offers a theme park-like experience with surprises around every turn, including large monsters, engaging obstacles, and tracks filled with beautiful scenery.

Moore Threads Teases Excellent Performance of DeepSeek-R1 Model on MTT GPUs

Moore Threads, a Chinese manufacturer of proprietary GPU designs, is (reportedly) the latest company to jump onto the DeepSeek-R1 bandwagon. Since late January, NVIDIA, Microsoft and AMD have swooped in with their own interpretations/deployments. By global standards, Moore Threads GPUs trail behind Western-developed offerings—early 2024 evaluations showed the firm's MTT S80 dedicated desktop graphics card struggling against an AMD integrated solution: the Radeon 760M. The recent emergence of DeepSeek's open source models has signalled a shift away from reliance on extremely powerful and expensive AI-crunching hardware (often accessed via the cloud)—widespread excitement has been generated by DeepSeek solutions being relatively frugal in terms of processing requirements. Tom's Hardware has observed cases of open source AI models running (locally) on: "inexpensive hardware, like the Raspberry Pi."

According to recent Chinese press coverage, Moore Threads has announced a successful deployment of DeepSeek's R1-Distill-Qwen-7B distilled model on the aforementioned MTT S80 GPU. The company also revealed that it had taken similar steps with its MTT S4000 datacenter-oriented graphics hardware. On the subject of adaptation, a Moore Threads spokesperson stated: "based on the Ollama open source framework, Moore Threads completed the deployment of the DeepSeek-R1-Distill-Qwen-7B distillation model and demonstrated excellent performance in a variety of Chinese tasks, verifying the versatility and CUDA compatibility of Moore Threads' self-developed full-featured GPU." Exact performance figures, benchmark results and technical details were not disclosed to the Chinese public, so Moore Threads appears to be teasing the prowess of its MTT GPU designs. ITHome reported that: "users can also perform inference deployment of the DeepSeek-R1 distillation model based on MTT S80 and MTT S4000. Some users have previously completed the practice manually on MTT S80." Moore Threads believes that its: "self-developed high-performance inference engine, combined with software and hardware co-optimization technology, significantly improves the model's computing efficiency and resource utilization through customized operator acceleration and memory management. This engine not only supports the efficient operation of the DeepSeek distillation model, but also provides technical support for the deployment of more large-scale models in the future."
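Moore Threads cites the Ollama open source framework for its deployment. For readers on commodity hardware, running the same distilled model through the standard Ollama tooling looks roughly like the sketch below. The "deepseek-r1:7b" tag is the public Ollama library name for the Qwen-7B distill; Moore Threads' GPU-specific backend is not publicly documented and is not shown, so this is a generic sketch, not their pipeline.

```python
# Minimal sketch of querying a DeepSeek-R1 distilled model via the
# Ollama Python client (pip install ollama). Assumes a local Ollama
# server and that the model was fetched beforehand with:
#   ollama pull deepseek-r1:7b
# Whatever compute backend the local Ollama build supports is used;
# Moore Threads' MTT-specific build is not publicly available.
import ollama

response = ollama.chat(
    model="deepseek-r1:7b",
    messages=[{"role": "user", "content": "Explain GPU memory coalescing briefly."}],
)
print(response["message"]["content"])
```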

Advantech Enhances AI and Hybrid Computing With Intel Core Ultra Processors (Series 2) S-Series

Advantech, a leader in embedded IoT computing solutions, is excited to introduce its AI-integrated desktop platform series: the Mini-ITX AIMB-2710 and Micro-ATX AIMB-589. These platforms are powered by Intel Core Ultra Processors (Series 2) S-Series, featuring the first desktop processor with an integrated NPU, delivering up to 36 TOPS for superior AI acceleration. Designed for data visualization and image analysis, both models offer PCIe Gen 5 support for high-performance GPU cards and feature 6400 MHz DDR5 memory, USB4 Type-C, multiple USB 3.2 Gen 2 and 2.5GbE LAN ports for fast and accurate data transmission in real time.

The AIMB-2710 and AIMB-589 excel in high-speed computing and AI-driven performance, making them ideal for applications such as medical imaging, automated optical inspection, and semiconductor testing. Backed by comprehensive software support, these platforms are engineered for the next wave of AI innovation.

Intel Rumored to Launch Arc Battlemage GPU With 24GB Memory in 2025

Intel could be working on a new Arc graphics card, according to Quantum Bits as quoted by VideoCardz. It's based on the Battlemage architecture and has 24 GB of memory, twice as much as current models. This new card seems to be oriented more towards professionals than gamers. Intel's Battlemage lineup currently has the Arc B580 model with 12 GB GDDR6 memory and a 192-bit bus. There's also the upcoming B570 with 10 GB and a 160-bit bus. The new 24 GB model will use the same BMG-G21 GPU as the B580, while the increased-VRAM version could use higher-capacity memory modules or a dual-sided module setup. No further technical details are available at this moment.

Intel looks to be aiming this 24 GB version at professional tasks such as artificial intelligence jobs like Large Language Models (LLMs) and generative AI. The card would be useful in data centers, edge computing, schools, and research, and this makes sense for Intel as it doesn't have a high-memory GPU for professional productivity markets yet. The company wants to launch this Arc Battlemage with bigger memory in 2025; we guess it might be announced in late spring or ahead of next year's Computex if there's no rush. In the meantime, Intel will keep making its current gaming cards too, as the latest Arc series was very well received, a big win for Intel after all its struggles. This rumor hints that Intel is expanding its GPU plans rather than letting them fade away, a gray scenario that loomed before the launch of Battlemage. Now it seems the company wants to compete in the professional and AI acceleration markets as well.
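The two routes to 24 GB mentioned above follow directly from how GDDR6 attaches to the GPU: each 32-bit channel normally carries one 2 GB (16 Gb) module, and capacity doubles either with denser modules or with two modules per channel in a dual-sided (clamshell) layout. A back-of-the-envelope sketch, where the 4 GB module option is hypothetical:

```python
# Back-of-the-envelope VRAM math for a 192-bit GDDR6 card.
# Assumes standard 16 Gb (2 GB) GDDR6 modules on 32-bit channels;
# the 4 GB "higher-capacity module" case is hypothetical.

BUS_BITS = 192
CHANNEL_BITS = 32
channels = BUS_BITS // CHANNEL_BITS           # 6 channels

module_gb = 2                                  # 16 Gb GDDR6 module
single_sided = channels * module_gb            # 6 * 2 GB = 12 GB (Arc B580)
clamshell = channels * 2 * module_gb           # two modules per channel = 24 GB
dense_modules = channels * 4                   # hypothetical 4 GB modules = 24 GB

print(f"{channels} channels on a {BUS_BITS}-bit bus")
print(f"Single-sided, 2 GB modules: {single_sided} GB")
print(f"Clamshell (dual-sided):     {clamshell} GB")
print(f"4 GB modules, single-sided: {dense_modules} GB")
```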

Axelera AI Partners with Arduino for Edge AI Solutions

Axelera AI - a leading edge-inference company - and Arduino, the global leader in open-source hardware and software, today announced a strategic partnership to make high-performance AI at the edge more accessible than ever, building advanced technology solutions based on inference and an open ecosystem. This furthers Axelera AI's strategy to democratize artificial intelligence everywhere.

The collaboration will combine the strengths of Axelera AI's Metis AI Platform with the powerful SOMs from the Arduino Pro range to provide customers with easy-to-use hardware and software to innovate around AI. Users will enjoy the freedom to dictate their own AI journey, thanks to tools that provide unique digital in-memory computing and RISC-V controlled dataflow technology, delivering high performance and usability at a fraction of the cost and power of other solutions available today.

NVIDIA Blackwell RTX and AI Features Leaked by Inno3D

NVIDIA's RTX 5000 series GPU hardware has been leaked repeatedly in the weeks and months leading up to CES 2025, with previous leaks tipping significant updates for the RTX 5070 Ti in the VRAM department. Now, Inno3D is apparently hinting that the RTX 5000 series will also introduce updated machine learning and AI tools to NVIDIA's GPU line-up. An official CES 2025 teaser published by Inno3D, titled "Inno3D At CES 2025, See You In Las Vegas!" makes mention of potential updates to NVIDIA's AI acceleration suite for both gaming and productivity.

The Inno3D teaser specifically points out "Advanced DLSS Technology," "Enhanced Ray Tracing" with new RT cores, "better integration of AI in gaming and content creation," "AI-Enhanced Power Efficiency," AI-powered upscaling tech for content creators, and optimizations for generative AI tasks. All of this sounds like it builds off of previous NVIDIA technology, like RTX Video Super Resolution, although the mention of content creation suggests that it will be more capable than previous efforts, which were seemingly mostly consumer-focussed. Of course, improved RT cores in the new RTX 5000 GPUs are also expected, although it will seemingly be the first time NVIDIA uses AI to enhance power draw, suggesting that the CES announcement will come with new features for the NVIDIA App. The real standout features, though, are "Neural Rendering" and "Advanced DLSS," both of which are new nomenclature. Of course, Advanced DLSS may simply be Inno3D marketing copy, but Neural Rendering suggests that NVIDIA will "Revolutionize how graphics are processed and displayed," which is about as vague as one could be.

Akeana Exits Stealth Mode with Comprehensive RISC-V Processor Portfolio

Akeana, the company committed to driving dramatic change in semiconductor IP innovation and performance, has announced its official company launch approximately three years after its foundation, having raised over $100 million in capital, with support from A-list investors including Kleiner Perkins, Mayfield, and Fidelity. Today's launch marks the formal availability of the company's expansive line of IP solutions that are uniquely customizable for any workload or application.

Formed by the same team that designed Marvell's ThunderX2 server chips, Akeana offers a variety of IP solutions, including microcontrollers, Android clusters, AI vector cores and subsystems, and compute clusters for networking and data centers. Akeana moves the industry beyond the status quo of legacy vendors and architectures, like Arm, with equitable licensing options and processors that fill and exceed current performance gaps.

Ampere Announces 512-Core AmpereOne Aurora CPU for AI Computing

Ampere has announced a significant update to its product roadmap, highlighting the upcoming 512-core AmpereOne Aurora processor. This new chip is specifically designed to address the growing demands of cloud-native AI computing.

The newly announced 512-core AmpereOne Aurora processor integrates AI acceleration and on-chip High Bandwidth Memory (HBM), promising three times the performance per rack compared to current AmpereOne processors. Aurora is designed to handle both AI training and inference workloads, indicating Ampere's commitment to becoming a major player in the AI computing space.

MaxLinear to Showcase Panther III at Future of Memory and Storage 2024 Trade Show

MaxLinear, Inc., a leading provider of data storage acceleration solutions for enterprise and data center applications, today announced it will demonstrate the advanced compression, encryption, and security performance of its storage acceleration solution, Panther III, at the Future of Memory and Storage (FMS) 2024 trade show from August 6-8, 2024. The demos will show that Panther III can achieve up to 40 times more throughput, up to 190 times better latency, and up to 1000 times less CPU utilization than a software-only solution, leading to significant cost savings in terms of flash drives and needed CPU cores.

MaxLinear's Panther III creates a bold new product category for maximizing the performance of data storage systems - a comprehensive, all-in-one "storage accelerator." Unlike standalone encryption and/or compression solutions, MaxLinear's Panther III consolidates a comprehensive suite of storage acceleration functions, including compression, deduplication, encryption, data protection, and real-time validation, in a single hardware-based solution. Panther III is engineered to offload and expedite specific data processing tasks, thus providing a significant performance boost, storage cost savings, and energy savings compared to traditional software-only, FPGA, and other competitive solutions.

Razer Enhances Mice with Mouse Rotation and Dynamic Sensitivity

At Razer, we're continually pushing the boundaries of what's possible with gaming technology, and our latest software updates for Razer mice are set to redefine precision and adaptability for gamers everywhere. The new Mouse Rotation and Dynamic Sensitivity features are now available on Razer's latest esports mice - the Razer Viper V3 Pro, and Razer DeathAdder V3 HyperSpeed.

Mouse Rotation: Aligning Movement with Natural Motion
Mouse Rotation customizes the output angle from your mouse sensor to perfectly match your unique setup and grip style. This is especially beneficial for gamers who have a naturally angled swipe or an unconventional setup. By adjusting the angle in Synapse, users ensure that a left-to-right swipe on their desk corresponds directly to a horizontal movement in-game, enhancing both comfort and control.
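Conceptually, this is a fixed 2D rotation applied to each raw (dx, dy) report before it reaches the game. A minimal sketch of the idea follows; it is illustrative only, not Razer's actual Synapse implementation:

```python
# Sketch of the idea behind Mouse Rotation: rotate each raw sensor
# delta by a fixed angle so an angled physical swipe maps to a
# straight on-screen movement. Illustrative, not Razer's code.
import math

def rotate_delta(dx: float, dy: float, angle_deg: float) -> tuple[float, float]:
    """Apply a standard 2D rotation matrix to one mouse delta."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    return dx * cos_a - dy * sin_a, dx * sin_a + dy * cos_a

# A swipe that drifts downward by about 5 degrees becomes nearly
# purely horizontal once corrected by a +5 degree rotation.
dx, dy = rotate_delta(100.0, -8.7, 5.0)   # raw swipe with downward drift
print(f"corrected: dx={dx:.1f}, dy={dy:.2f}")   # dy ends up near zero
```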

ASUS Updates Zenbook and ProArt Laptop Series with AMD Ryzen AI 9 and Snapdragon X Elite Processors

At Computex 2024, ASUS unveiled major updates to its popular laptop lineups, designed for the "Copilot+" era of AI computing. The first is the Zenbook S16, a premium 16-inch laptop series powered by AMD's latest Ryzen AI 9 HX 370 processors with dedicated AI acceleration. Remarkably, ASUS has managed to pack this high-performance silicon into an ultra-portable 1.1 cm thin chassis weighing just 1.5 kg. The Zenbook S16 integrates AMD's new NPU capable of 50 TOPS of AI compute for accelerating AI/ML workloads. The centerpiece is the laptop's stunning 16-inch 3K OLED display made with ASUS Lumina technology. It offers 100% DCI-P3 color gamut coverage, a blazing-fast 120 Hz refresh rate with 0.2 ms response time, and up to 600 nits brightness. ASUS paired this premium visual experience with a six-speaker audio system for an immersive multimedia experience.

SpiNNcloud Systems Announces First Commercially Available Neuromorphic Supercomputer

Today, in advance of ISC High Performance 2024, SpiNNcloud Systems announced the commercial availability of its SpiNNaker2 platform, a supercomputer-level hybrid AI high-performance computer system based on principles of the human brain. Pioneered by Steve Furber, designer of the original ARM and SpiNNaker1 architectures, the SpiNNaker2 supercomputing platform uses a large number of low-power processors for efficiently computing AI and other workloads.

First-generation SpiNNaker1 architecture is currently used in dozens of research groups across 23 countries worldwide. Sandia National Laboratories, the Technical University of Munich and the University of Göttingen are among the first customers placing orders for SpiNNaker2, which was developed around commercialized IP invented in the Human Brain Project, a billion-euro research project funded by the European Union to design intelligent, efficient artificial systems.

Extropic Intends to Accelerate AI through Thermodynamic Computing

Extropic, a pioneer in physics-based computing, this week emerged from stealth mode and announced the release of its Litepaper, which outlines the company's revolutionary approach to AI acceleration through thermodynamic computing. Founded in 2022 by Guillaume Verdon, Extropic has been developing novel chips and algorithms that leverage the natural properties of out-of-equilibrium thermodynamic systems to perform probabilistic computations for generative AI applications in a highly efficient manner. The Litepaper delves into Extropic's groundbreaking computational paradigm, which aims to address the limitations of current digital hardware in handling the complex probability distributions required for generative AI.

Today's algorithms spend around 25% of their time moving numbers around in memory, limiting the speedup achievable by accelerating specific operations. In contrast, Extropic's chips natively accelerate a broad class of probabilistic algorithms by running them physically as a rapid and energy-efficient, physics-based process in their entirety, unlocking a new regime of AI acceleration well beyond what was previously thought achievable. In coming out of stealth, the company has announced the fabrication of a superconducting prototype processor and developments surrounding room-temperature semiconductor-based devices for the broader market, with the goal of revolutionizing the field of AI acceleration and enabling new possibilities in generative AI.
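That 25% figure is a classic Amdahl's-law bound: if a quarter of the runtime is data movement an accelerator cannot touch, overall speedup saturates at 4x no matter how fast the remaining work becomes. A quick illustration of the ceiling Extropic says it sidesteps:

```python
# Amdahl's-law illustration of the ceiling described above: with 25%
# of runtime spent on memory movement that acceleration cannot touch,
# overall speedup approaches but never exceeds 1 / 0.25 = 4x.

def overall_speedup(serial_fraction: float, accel: float) -> float:
    """Amdahl's law: speedup when only (1 - serial_fraction) is accelerated."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / accel)

for accel in (2, 10, 100, 1_000_000):
    print(f"{accel:>9}x compute speedup -> "
          f"{overall_speedup(0.25, accel):.2f}x overall")
```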

Mesa CPU-based Vulkan Driver Gets Ray Tracing Support - Quake II Performance Hits 1 FPS

Konstantin Seurer, a Mesa developer, has spent the past couple of months working on CPU-based Vulkan ray tracing—naturally, some folks will express scepticism about this project's practicality. Seurer has already set expectations with a brief message: "don't ask about performance." His GitLab merge request page attracted Michael Larabel's attention—the Phoronix founder and principal author was suitably impressed with Seurer's coding wizardry. He "managed to implement support for VK_KHR_acceleration_structure, VK_KHR_deferred_host_operations, and VK_KHR_ray_query for Lavapipe. This Lavapipe Vulkan ray tracing support is based in part on porting code from the emulated ray tracing worked on for RADV with older Radeon GPUs." A lone screenshot provided evidence of Quake II running at 1 FPS with Vulkan ray tracing enabled—this "atrocious" performance was achieved thanks to a Mesa Lavapipe driver "implementing the Vulkan API for CPU-based execution."

VideoCardz has highlighted an older example of CPU-based rendering techniques: "this is not the first time we heard about ray tracing on the CPU in Quake. In 2008, Intel demonstrated Enemy Territory: Quake Wars running at 720p resolution at 14 to 29 FPS on 16 core and 20-35 FPS at 24 core CPUs (quad-socket). The basic implementation of ray tracing in 2008 is not comparable to complex ray tracing techniques designed for GPUs, thus the performance on modern system is actually much lower. Beyond that, that game was specifically designed for the Intel architecture and used a specific API to achieve that. Sadly, the original ET demo is no longer available, it would be interesting to see how it performs today." CPU-based Vulkan ray tracing is expected to hit public distribution channels with the rollout of Mesa 24.1. Several members of the Phoronix community reckon that modern AMD Threadripper PRO processors have the potential to post double-digit in-game frame rates.

Qualcomm AI Hub Introduced at MWC 2024

Qualcomm Technologies, Inc. unveiled its latest advancements in artificial intelligence (AI) at Mobile World Congress (MWC) Barcelona. From the new Qualcomm AI Hub, to cutting-edge research breakthroughs and a display of commercial AI-enabled devices, Qualcomm Technologies is empowering developers and revolutionizing user experiences across a wide range of devices powered by Snapdragon and Qualcomm platforms.

"With Snapdragon 8 Gen 3 for smartphones and Snapdragon X Elite for PCs, we sparked commercialization of on-device AI at scale. Now with the Qualcomm AI Hub, we will empower developers to fully harness the potential of these cutting-edge technologies and create captivating AI-enabled apps," said Durga Malladi, senior vice president and general manager, technology planning and edge solutions, Qualcomm Technologies, Inc. "The Qualcomm AI Hub provides developers with a comprehensive AI model library to quickly and easily integrate pre-optimized AI models into their applications, leading to faster, more reliable and private user experiences."