News Posts matching #Benchmark


UL Adds New DirectStorage Test to 3DMark

Today we're excited to launch the 3DMark DirectStorage feature test. This feature test is a free update for the 3DMark Storage Benchmark DLC. The 3DMark DirectStorage feature test helps gamers understand the potential performance benefits that Microsoft's DirectStorage technology could have for their PC's gaming performance.

DirectStorage is a Microsoft technology for Windows PCs with PCIe SSDs that reduces the overhead when loading game data. DirectStorage can be used to reduce game loading times when paired with other technologies such as GDeflate, where the GPU can be used to decompress certain game assets instead of the CPU. On systems running Windows 11, DirectStorage can bring further benefits with BypassIO, lowering a game's CPU overhead by reducing the CPU workload when transferring data.
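For readers curious what DirectStorage looks like from a game developer's perspective, below is a rough C++ sketch of a single GDeflate-compressed read request, modeled on Microsoft's public DirectStorage samples. The function name and parameters are illustrative assumptions; real code would add error handling, fence synchronization, and proper D3D12 resource management, and the struct fields should be verified against the DirectStorage SDK headers.

```cpp
// Rough sketch of a DirectStorage read request with GDeflate decompression,
// based on Microsoft's public DirectStorage samples. Treat as illustrative only.
#include <dstorage.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdint>

using Microsoft::WRL::ComPtr;

void EnqueueCompressedAssetLoad(ID3D12Device* device,
                                ID3D12Resource* destBuffer,
                                const wchar_t* path,
                                uint32_t compressedSize,
                                uint32_t uncompressedSize)
{
    ComPtr<IDStorageFactory> factory;
    DStorageGetFactory(IID_PPV_ARGS(&factory));

    ComPtr<IDStorageFile> file;
    factory->OpenFile(path, IID_PPV_ARGS(&file));

    // Queue that reads from files straight toward GPU-visible memory.
    DSTORAGE_QUEUE_DESC queueDesc{};
    queueDesc.SourceType = DSTORAGE_REQUEST_SOURCE_FILE;
    queueDesc.Capacity   = DSTORAGE_MAX_QUEUE_CAPACITY;
    queueDesc.Priority   = DSTORAGE_PRIORITY_NORMAL;
    queueDesc.Device     = device;

    ComPtr<IDStorageQueue> queue;
    factory->CreateQueue(&queueDesc, IID_PPV_ARGS(&queue));

    // One request: read the compressed blob and let the runtime (or the GPU)
    // decompress it with GDeflate on the way into the destination buffer.
    DSTORAGE_REQUEST request{};
    request.Options.SourceType          = DSTORAGE_REQUEST_SOURCE_FILE;
    request.Options.DestinationType     = DSTORAGE_REQUEST_DESTINATION_BUFFER;
    request.Options.CompressionFormat   = DSTORAGE_COMPRESSION_FORMAT_GDEFLATE;
    request.Source.File.Source          = file.Get();
    request.Source.File.Offset          = 0;
    request.Source.File.Size            = compressedSize;
    request.Destination.Buffer.Resource = destBuffer;
    request.Destination.Buffer.Offset   = 0;
    request.Destination.Buffer.Size     = uncompressedSize;
    request.UncompressedSize            = uncompressedSize;

    queue->EnqueueRequest(&request);
    queue->Submit();  // a real app would signal a fence here and wait on it
}
```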

SPEC Delivers Major SPECworkstation 4.0 Benchmark Update, Adds AI/ML Workloads

The Standard Performance Evaluation Corporation (SPEC), the trusted global leader in computing benchmarks, today announced the availability of the SPECworkstation 4.0 benchmark, a major update to SPEC's comprehensive tool designed to measure all key aspects of workstation performance. This significant upgrade from version 3.1 incorporates cutting-edge features to keep pace with the latest workstation hardware and the evolving demands of professional applications, including the increasing reliance on data analytics, AI and machine learning (ML).

The new SPECworkstation 4.0 benchmark provides a robust, real-world measure of CPU, graphics, accelerator, and disk performance, ensuring professionals have the data they need to make informed decisions about their hardware investments. The benchmark caters to the diverse needs of engineers, scientists, and developers who rely on workstation hardware for daily tasks. It includes real-world applications like Blender, Handbrake, LLVM and more, providing a comprehensive performance measure across seven different industry verticals, each focusing on specific use cases and subsystems critical to workstation users. The SPECworkstation 4.0 benchmark also marks a significant milestone for measuring workstation AI performance, providing an unbiased, real-world, application-driven tool for gauging how workstations handle AI/ML workloads.

Apple M4 Max CPU Faster Than Intel and AMD in 1T/nT Benchmarks

Early benchmark results have revealed Apple's newest M4 Max processor as a serious competitor to Arm-based CPUs from Qualcomm and even the best of x86 from Intel and AMD. Recent Geekbench 6 tests conducted on the latest 16-inch MacBook Pro showcase considerable improvements over both its predecessor and rival chips from major competitors. The M4 Max achieved an impressive single-core score of 4,060 points and a multicore score of 26,675 points, marking significant advancements in processing capability. These results represent approximately 30% and 27% improvements in single-core and multicore performance, respectively, compared to the previous M3 Max. The multicore result is also well ahead of Qualcomm's Snapdragon X Elite, which tops out at twelve cores per SoC. When measured against x86 competitors, the M4 Max also demonstrates substantial advantages.

The chip outperforms Intel's Core Ultra 9 285K by 19% in single-core and 16% in multicore tests, and surpasses AMD's Ryzen 9 9950X by 18% in single-core and 25% in multicore performance. Notably, these achievements come with significantly lower power consumption than traditional x86 processors. The flagship system-on-chip features a 16-core CPU configuration, combining twelve performance and four efficiency cores. Additionally, it integrates 40 GPU cores and supports up to 128 GB of unified memory, shared between CPU and GPU operations. The new MacBook Pro line also introduces Thunderbolt 5 compatibility, enabling data transfer speeds of up to 120 Gb/s. While the M4 Max presents an impressive response to the current market, we have yet to see its capabilities in real-world benchmarks, as synthetic runs tell only part of the performance story. We need to see productivity, content creation, and even gaming benchmarks before crowning it the king of performance. Below is a table comparing Geekbench v6 scores, courtesy of Tom's Hardware, and a Snapdragon X Elite (X1E-00-1DE) run in top configuration.

Intel Core Ultra 9 285K Tops PassMark Single-Thread Benchmark

According to the latest PassMark benchmarks, the Intel Core Ultra 9 285K is the highest-performing CPU in single-threaded workloads. The benchmark king title comes as PassMark's official account on X shared single-threaded performance numbers, with the upcoming Arrow Lake-S flagship SKU, the Intel Core Ultra 9 285K, scoring 5,268 points in single-core results. This is fantastic news for gamers, as games depend heavily on single-core performance. The CPU, with 8 P-cores and 16 E-cores, boasts 5.7 GHz P-core and 4.6 GHz E-core boost frequencies. The single-core tests put the new SKU at an 11% lead over the previous-generation Intel Core i9-14900K processor.

However, the multithreaded picture is less flattering. The PassMark multithreaded results put the Intel Core Ultra 9 285K at 46,872 points, about 22% slower than the last-generation top SKU. While this may be a disappointment for some, it is partially expected, given that Arrow Lake drops Hyper-Threading from Intel's CPU families. From now on, every core handles a single thread, with each CPU combining P-cores and E-cores tuned for performance or efficiency depending on the use case. It is also possible that the CPU used in PassMark's testing was an engineering sample, so definitive performance comparisons will have to wait for the official launch.

Zhaoxin's KX-7000 8-Core Processor Tested in Detail, Bested by 7 Year Old Core i3

PC Watch recently got hands-on with Shanghai Zhaoxin's latest desktop processor for some in-depth testing and published a less-than-optimistic review comparing it to both the previous-generation KX-U6780A and Intel's equally clocked budget quad-core offering from 2017, the 3.6 GHz Core i3-8100. Though Zhaoxin's latest could muscle its way through some multithreaded tests such as Cinebench R23 thanks to having twice the core count, its single-core performance proved to be nearly half that of the i3 in everything from synthetic tests to gaming.

PC Watch tested with the Dragon Quest X benchmark, a DirectX 9.0c title, to spotlight single-core gaming performance in older games, and with Final Fantasy XIV running the latest Golden Legacy benchmark, released back in April of this year, to represent more modern multithreaded gaming. With AMD's RX 6400 handling graphics at 1080p, the KX-7000/8 scored around 60% of the i3-8100 in Dragon Quest X, and in Final Fantasy XIV it scored 90% of the i3. The Final Fantasy XIV result was considered "somewhat comfortable" for gameplay but still less than optimal. As a comparison point for a modern budget gaming PC, the Ryzen 5 5600G was also included in testing, and in Final Fantasy XIV it was 30% ahead of the KX-7000/8. PC Watch also attempted to put the integrated ZX-C1190 to work in games but found that, despite supporting modern APIs and features, its performance was no match for the competition.
KX-7000 CPU-Z - Credit: PC Watch

AMD Ryzen AI Max 390 "Strix Halo" Surfaces in Geekbench AI Benchmark

In case you missed it, AMD's new madcap enthusiast silicon engineering effort, the "Strix Halo," is real, and comes with the Ryzen AI Max 300 series branding. These are chiplet-based mobile processors with one or two "Zen 5" CCDs—same ones found in "Granite Ridge" desktop processors—paired with a large SoC die that has an oversized iGPU. This arrangement lets AMD give the processor up to 16 full-sized "Zen 5" CPU cores, an iGPU with as many as 40 RDNA 3.5 compute units (2,560 stream processors), and a 256-bit LPDDR5/x memory interface for UMA.

"Strix Halo" is designed for ultraportable gaming notebooks or mobile workstations where low PCB footprint is of the essence, and discrete GPU is not an option. For enthusiast gaming notebooks with discrete GPUs, AMD is designing the "Fire Range" processor, which is essentially a mobile BGA version of "Granite Ridge," and a successor to the Ryzen 7045 series "Dragon Range." The Ryzen AI Max series has three models based on CPU and iGPU CU counts—the Ryzen AI Max 395+ (16-core/32-thread with 40 CU), the Ryzen AI Max 390 (12-core/24-thread with 40 CU), and the Ryzen AI Max 385 (8-core/16-thread, 32 CU). An alleged Ryzen AI Max 390 engineering sample surfaced on the Geekbench AI benchmark online database.

Geekbench AI Hits 1.0 Release: CPUs, GPUs, and NPUs Finally Get AI Benchmarking Solution

Primate Labs, the developer behind the popular Geekbench benchmarking suite, has launched Geekbench AI—a comprehensive benchmark tool designed to measure the artificial intelligence capabilities of various devices. Geekbench AI, previously known as Geekbench ML during its preview phase, has now reached version 1.0. The benchmark is available on multiple operating systems, including Windows, Linux, macOS, Android, and iOS, making it accessible to many users and developers. One of Geekbench AI's key features is its multifaceted approach to scoring. The benchmark utilizes three distinct precision levels: single-precision, half-precision, and quantized data. This evaluation aims to provide a more accurate representation of AI performance across different hardware designs.

In addition to speed, Geekbench AI places a strong emphasis on accuracy. The benchmark assesses how closely each test's output matches the expected results, offering insights into the trade-offs between performance and precision. The release of Geekbench AI 1.0 brings support for new frameworks, including OpenVINO, ONNX, and Qualcomm QNN, expanding its compatibility across various platforms. Primate Labs has also implemented measures to ensure fair comparisons, such as enforcing minimum runtime durations for each workload. The company noted that Samsung and NVIDIA are already utilizing the software to measure their chip performance in-house, showing that adoption is already strong. While the benchmark provides valuable insights, real-world AI applications are still limited, and reliance on a few benchmarks may paint a partial picture. Nevertheless, Geekbench AI represents a significant step forward in standardizing AI performance measurement, potentially influencing future consumer choices in the AI-driven tech market. Results from the benchmark runs can be seen here.
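To make the "quantized" precision level more concrete, here is a toy C++ illustration of symmetric int8 quantization and the round-trip error it introduces. This is not Geekbench AI's actual methodology, and the weight values are made up purely for demonstration; it only shows why a quantized workload can run faster while scoring lower on accuracy.

```cpp
// Toy illustration of int8 quantization error (not Geekbench AI's methodology):
// weights stored as int8 lose precision relative to the original float32 values.
#include <cstdint>
#include <cstdio>
#include <cmath>
#include <algorithm>
#include <vector>

int main()
{
    std::vector<float> weights = {0.731f, -0.214f, 0.057f, -0.993f, 0.402f};

    // Symmetric linear quantization: map [-max|w|, +max|w|] onto the int8 range.
    float maxAbs = 0.0f;
    for (float w : weights) maxAbs = std::max(maxAbs, std::fabs(w));
    const float scale = maxAbs / 127.0f;

    float worstError = 0.0f;
    for (float w : weights) {
        int8_t q   = static_cast<int8_t>(std::lround(w / scale)); // quantize
        float back = q * scale;                                   // dequantize
        worstError = std::max(worstError, std::fabs(w - back));
        std::printf("%+.3f -> %4d -> %+.3f\n", w, q, back);
    }
    std::printf("worst-case round-trip error: %.5f\n", worstError);
    return 0;
}
```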

"Black Myth: Wukong" Game Gets Benchmarking Tool Companion Designed to Evaluate PC Performance

Game Science, the developer behind the highly anticipated action RPG "Black Myth: Wukong," has released a free benchmark tool on Steam for its upcoming game. This standalone application, separate from the main game, allows PC users to evaluate their hardware performance and system compatibility in preparation for the game's launch. The "Black Myth: Wukong Benchmark Tool" offers a unique glimpse into the game's visuals by rendering a real-time in-game sequence. While not playable, it provides valuable insights into how well a user's system will handle the game's demanding graphics and performance requirements. One of the tool's standout features is its customization options. Users can tweak various graphics settings to preview the game's visuals and performance under different configurations. This flexibility allows gamers to find the optimal balance between visual fidelity and smooth gameplay for their specific hardware setup.

However, Game Science has cautioned that due to the complexity and variability of gaming scenarios, the benchmark results may not fully represent the final gaming experience. This caveat underscores the tool's role as a guide rather than a definitive measure of performance. The benchmark tool's system requirements offer a clear picture of the hardware needed to run "Black Myth: Wukong." At a minimum, users will need a Windows 10 system with an Intel Core i5-8400 or AMD Ryzen 5 1600 processor, 16 GB of RAM, and either an NVIDIA GeForce GTX 1060 6 GB or AMD Radeon RX 580 8 GB graphics card. For an optimal experience, the recommended specifications include an Intel Core i7-9700 or AMD Ryzen 5 5500 processor and an NVIDIA GeForce RTX 2060, AMD Radeon RX 5700 XT, or Intel Arc A750 graphics card. Interestingly, the benchmark tool supports DLSS, FSR, and XeSS technologies, indicating that the final game will likely include these performance-enhancing features. The developers also strongly recommend using an SSD for storage.

FinalWire Releases AIDA64 v7.35 with New CheckMate 64-bit Benchmark

FinalWire Ltd. today announced the immediate availability of AIDA64 Extreme 7.35 software, a streamlined diagnostic and benchmarking tool for home users; the immediate availability of AIDA64 Engineer 7.35 software, a professional diagnostic and benchmarking solution for corporate IT technicians and engineers; the immediate availability of AIDA64 Business 7.35 software, an essential network management solution for small and medium scale enterprises; and the immediate availability of AIDA64 Network Audit 7.35 software, a dedicated network audit toolset to collect and manage corporate network inventories. The new AIDA64 update introduces a new 64-bit CheckMate benchmark, AVX-512 accelerated benchmarks for AMD Ryzen AI APU, and supports the latest graphics and GPGPU computing technologies by AMD, Intel and NVIDIA.

DOWNLOAD: FinalWire AIDA64 v7.35 Extreme

Qualcomm Snapdragon X "Copilot+" AI PCs Only Accounted for 0.3% of PassMark Benchmark Runs

The much-anticipated revolution in AI-powered personal computing seems to be off to a slower start than expected. Qualcomm's Snapdragon X CPUs, touted as game-changers in the AI PC market, have struggled to gain significant traction since their launch. Recent data from PassMark, a popular benchmarking suite, reveals that Snapdragon X CPUs account for a mere 0.3% of submissions in the past 30 days. This is a stark contrast to the 99.7% share held by traditional x86 processors from Intel and AMD, and it raises questions about the immediate future of Arm-based PCs. The underwhelming adoption comes despite bold predictions from industry leaders. Qualcomm CEO Cristiano Amon had projected that Arm-based CPUs could capture up to 50% of the Windows PC market by 2029. Similarly, Arm's CEO anticipated a shift away from x86's long-standing dominance.

However, it turns out that these PCs are primarily bought for their battery life, not their AI capabilities. Of course, it's premature to declare Arm's Windows venture a failure. The AI PC market is still in its infancy, and upcoming mid-tier laptops featuring Snapdragon X Elite CPUs could boost adoption rates. It will also take time before shipment volumes of these PCs approach the millions of units moved by x86 makers. The true test will come with the launch of AMD's Ryzen AI 300 and Intel's Lunar Lake CPUs, providing a clearer picture of how Arm-based options compare in AI performance. As the AI PC landscape evolves, Qualcomm faces mounting pressure. NVIDIA's anticipated entry into the market and significant performance improvements in next-generation x86 processors from Intel and AMD pose a massive challenge. The coming months will be crucial in determining whether Snapdragon X CPUs can live up to their initial hype and carve out a significant place in the AI PC ecosystem.

Ryzen AI 300 Series: New AMD APUs Appear in CrossMark Benchmark Database

Pre-launch leaks of AMD's upcoming Ryzen AI 300 APUs continue, the latest coming from the BAPCo CrossMark benchmark database. Two models have been spotted: the officially announced Ryzen AI 9 HX 370 and the recently leaked Ryzen AI 7 PRO 360. The Ryzen AI 9 HX 370, part of the "Strix Point" family, boasts 12 cores and 24 threads. Its hybrid architecture combines four Zen 5 cores with eight Zen 5C cores. The chip reaches boost clocks up to 5.1 GHz, features 36 MB of cache (24 MB L3 + 12 MB L2), and includes a Radeon 890M iGPU with 16 compute units (1,024 cores). The Ryzen AI 7 PRO 360, previously leaked as a 12-core part, has now been confirmed with 8 cores and 16 threads. It utilizes a 3+5 configuration of Zen 5 and Zen 5C cores, respectively. The APU includes 8 MB each of L2 and L3 cache, with a base clock of 2.0 GHz. Its integrated Radeon 870M GPU is expected to feature the RDNA 3.5 architecture with fewer cores than its higher-end counterparts, possibly 8 compute units.

According to the leaked benchmarks, the Ryzen AI 9 HX 370 was tested in an HP laptop, while the Ryzen AI 7 PRO 360 appeared in a Lenovo model equipped with LPDDR5-7500 memory. Initial scores appear unremarkable compared to top Intel Core Ultra 9 185H and AMD Ryzen 7040 APUs; however, the tested APUs may be early samples, and their performance could differ from final retail versions. Furthermore, while the TDP range is known to be between 15 W and 54 W, the specific power configurations used in these benchmarks remain unclear. The first Ryzen AI 300 laptops are slated for release on July 28th, with Ryzen AI 300 PRO models expected in October.

Basemark Releases Breaking Limit Cross-Platform Ray Tracing Benchmark

Basemark announced today the release of a groundbreaking cross-platform ray tracing benchmark, GPUScore: Breaking Limit. This new benchmark is designed to evaluate the performance of the full range of ray tracing capable devices, including smartphones, tablets, laptops and high-end desktops with discrete GPUs. With support for multiple operating systems and graphics APIs, Breaking Limit provides a comprehensive performance evaluation across various platforms and devices.

As ray tracing technology becomes increasingly prevalent in consumer electronics, from high-end desktops to portable devices like laptops and smartphones, there is a critical need for a benchmark that can accurately assess and compare performance across different devices and platforms. Breaking Limit addresses this gap, providing valuable insights into how various devices handle hardware-accelerated graphics rendering. The benchmark is an essential tool for developers, manufacturers, and consumers to measure and compare the performance of real-time ray tracing rendering across different hardware and software environments reliably.

Intel Releases Arc GPU Graphics Drivers 101.5592 WHQL

Intel today released the latest version of the Arc GPU Graphics drivers. Version 101.5592 WHQL is a minor update over the 101.5590 WHQL drivers that the company released on June 15. It corrects a bug that caused the PugetBench Extended Preset benchmark to fail to complete on Arc A-series discrete GPUs in certain Adobe Premiere Pro processing tests. The company also took the opportunity to identify a handful of additional issues with certain Vulkan API games, such as No Man's Sky, Enshrouded, and Doom Eternal, as well as a bug that causes Topaz Video AI to throw errors when exporting videos after using some models for video enhancement.

DOWNLOAD: Intel Arc GPU Graphics Drivers 101.5592 WHQL

Quantinuum Launches Industry-First, Trapped-Ion 56-Qubit Quantum Computer, Breaking Key Benchmark Record

Quantinuum, the world's largest integrated quantum computing company, today unveiled the industry's first quantum computer with 56 trapped-ion qubits. H2-1 has further enhanced its market-leading fidelity and is now impossible for a classical computer to fully simulate.

A joint team from Quantinuum and JPMorgan Chase ran a Random Circuit Sampling (RCS) algorithm, achieving a remarkable 100x improvement over prior industry results from Google in 2019 and setting a new world record for the cross entropy benchmark. H2-1's combination of scale and hardware fidelity makes it difficult for today's most powerful supercomputers and other quantum computing architectures to match this result.

Sabrent's New Apex X16 Rocket 5 Destroyer in Testing, Benchmark Numbers Included

You may have heard about Apex's storage solution called the X21; we wrote about it a while ago. If not, let us remind you that it is a huge single expansion card that can hold up to 21 M.2 solid-state drives (SSDs), which is incredible. Apex has also made an X16 version of this card, which is currently in the late testing stage. The back side of the Apex X16 has 8 M.2 slots, giving it a total of 16 M.2 slots. Although these two cards are impressive with their unmatched speed and capacity, Apex is not stopping there.

Today, we are showing some early pictures and benchmark numbers of the Sabrent Apex X16 Rocket 5 Destroyer. As you can see, this new 5th generation (Gen 5) card from Apex holds 16 of Sabrent's Rocket 5 4 TB SSDs. This gives it a maximum capacity of 64 TB using the fastest Gen 5 SSDs on the market. The expansion card uses a single PCIe Gen 5 x16 slot, which leaves plenty of space for other cards or even another X16 card. Multiple X16 cards can be combined, or it can be paired with other Apex series cards like the Sabrent Apex X21 Destroyer, for a total of 168 TB of incredible 4th generation (Gen 4) speeds. The Sabrent Apex X16 Rocket 5 AIC can reach speeds of 56 GiB/s sequential reads and 54 GiB/s sequential writes, along with up to 20 million 4K random read and 19 million 4K random write IOPS. The Sabrent Apex X16 Rocket 5 will be available to order soon.

Chaos Releases V-Ray 6.1 Benchmark

Chaos releases V-Ray 6 Benchmark, updating the free standalone application to help users quickly evaluate V-Ray rendering speeds and compare the capabilities of leading CPUs and GPUs. V-Ray 6 Benchmark adds new looping capabilities and GPU mode comparison, giving users even more control over their findings.

Since launching in 2017, V-Ray Benchmark has become a standard for new hardware testing, helping countless users and reviewers assess the rendering performance of laptops, workstations, graphics cards and more. By benchmarking against one of the most popular renderers in the world, professionals can quickly gauge how hardware from NVIDIA, AMD, Intel and Apple handles common 3D assets like cities, characters and hard-surface models.

The free V-Ray 6 Benchmark app is available now.

Apple M4 Chip Benchmarked: 22% Faster Single-Core and 25% Faster Multi-Core Performance

Yesterday, Apple launched its next-generation M4 chip based on its custom Apple Silicon design. The processor is a fourth-generation design that brings AI capabilities and improved CPU performance. First debuting in an iPad Pro, the chip has been benchmarked in Geekbench v6, and the results look very promising. The latest M4 chip managed to score 3,767 points in single-core tests and 14,677 points in multi-core tests. Compared to the M3 chip, which scores 3,087 points in single-core and 11,702 in multi-core tests, the M4 chip is about 22% faster in single-core and 25% faster in multi-core synthetic benchmarks.
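Those percentages follow directly from the quoted scores; as a quick worked check using only the numbers above:

```cpp
// Sanity check of the uplift percentages quoted above, using only the
// Geekbench 6 scores mentioned in the text.
#include <cstdio>

int main()
{
    const double m4_single = 3767.0, m3_single = 3087.0;
    const double m4_multi  = 14677.0, m3_multi = 11702.0;

    // Relative uplift = (new - old) / old
    std::printf("single-core uplift: %.1f%%\n", (m4_single - m3_single) / m3_single * 100.0); // ~22.0%
    std::printf("multi-core uplift:  %.1f%%\n", (m4_multi  - m3_multi)  / m3_multi  * 100.0); // ~25.4%
    return 0;
}
```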

Of course, these results are not real-world use cases, but they give us a hint of what the Apple Silicon design team has been working on. For real-world results, we have to wait a little longer to see reviews and results from devices such as MacBook Pro and MacBook Air, which should have better cooling and possibly better clocks for the chip.

UL Announces the Procyon AI Image Generation Benchmark Based on Stable Diffusion

We're excited to announce we're expanding our AI Inference benchmark offerings with the UL Procyon AI Image Generation Benchmark, coming Monday, 25th March. AI has the potential to be one of the most significant new technologies hitting the mainstream this decade, and many industry leaders are competing to deliver the best AI Inference performance through their hardware. Last year, we launched the first of our Procyon AI Inference Benchmarks for Windows, which measured AI Inference performance with a workload using Computer Vision.

The upcoming UL Procyon AI Image Generation Benchmark provides a consistent, accurate and understandable workload for measuring the AI performance of high-end hardware, built with input from members of the industry to ensure fair and comparable results across all supported hardware.

Zhaoxin KX-7000 8-Core CPU Gets Geekbenched

Zhaoxin finally released its oft-delayed KX-7000 CPU series last December—the Chinese manufacturer claimed that its latest "Century Avenue Core" uArch consumer/desktop-oriented range was designed to "deliver double the performance of previous generations." Freshly discovered Geekbench 6.2.2 results indicate that Zhaoxin has succeeded on that front—Wccftech has pored over these figures, generated by an "entry-level Zhaoxin KX-7000 CPU which has 8 cores, 8 threads, 4 MB of L2, and 32 MB of L3 cache. This chip was running at a base clock of 3.0 GHz and a boost clock of 3.3 GHz, which is below its standard 3.6 GHz boost profile."

The new candidate was compared to Zhaoxin's previous-gen KX-U6780A and KX-6000G models. Intel's Core i3-10100F processor was thrown in as a familiar Western point of reference. The KX-7000 scored "823 points in single-core and 3,813 points in multi-core tests. For comparison, Intel's Comet Lake CPU with 4 cores and 8 threads plus a boost of up to 4.3 GHz offers a much higher score. It's around 75% faster in single-core and 17% faster in multi-core tests within the same benchmark." The higher clock speeds, doubled core counts, and higher TDPs do deliver "twice the performance" when compared to direct forebears—mission accomplished there. It is equally clear, however, that Zhaoxin's latest CPU architecture cannot keep up with a generations-old Team Blue design. By contrast, Loongson's 3A6000 processor is a very promising prospect—reports suggest that this chip is somewhat comparable to mainstream AMD Zen 4 and Intel Raptor Lake products.

Avatar: Frontiers of Pandora's Latest Patch Claims to Fix FSR 3 Artefacts & FPS Tracking

A new patch for Avatar: Frontiers of Pandora deployed on March 1, bringing more than 150 fixes and adjustments to the game. Title Update 3 includes technical, UI, balancing, main quest, and side quest improvements, plus additional bug fixes. To provide players with an improved experience, the development team, led by Massive Entertainment, listened to feedback from the community while working on Title Update 3.

An additional patch, Title Update 3.1, was deployed on March 7, adding additional fixes to the game. Check out the full list of improvements included in Title Update 3 & 3.1 here, and read on for the most notable improvements now available in Avatar: Frontiers of Pandora.

Update Mar 14th: TPU has received alerts regarding player feedback that Massive Entertainment's "Title Update 3" has reportedly broken the game's implementation of FSR 3 in Avatar: Frontiers of Pandora. We will keep an eye on official Ubisoft channels—so far they have not addressed these FSR-related problems.

AMD Ryzen 7 8840U "Hawk Point" APU Exceeds Expectations in 10 W TDP Gaming Test

AMD Ryzen 8040 "Hawk Point" mobile processors continue to roll out in all sorts of review sample guises—mostly within laptop/notebook and handheld gaming PC segments. An example of the latter would be GPD's Hawk Point-refreshed Win Max 2 model—Cary Golomb, a tech reviewer and self-described evangelist of "PC Gaming Handhelds Since 2016," has acquired this device for benchmark comparison purposes. A Ryzen 7 8840U-powered GPD Win Max 2 model was pitched against similar devices that house older Team Red APU technologies. Golomb's collection included Valve's Steam Deck LCD model and three "Phoenix" Ryzen 7840U-based GPD models. He did not have any top-of-the-line ASUS or Lenovo handhelds within reach, but the onboard Ryzen Z1 Extreme APU is a close relative of the 7840U.

Golomb's social media post included a screenshot of a Batman: Arkham Knight "average frames per second" comparison chart—all devices were running on a low 10 W TDP setting. The overall verdict favors AMD's new Hawk Point part: "Steam Deck low TDP performance finally dethroned...GPD continues to make the best AMD devices. 8840U shouldn't be better, but everywhere I'm testing, it is consistently better across every TDP. TSP measuring similar." Hawk Point appears to be a slight upgrade over Phoenix—most of the generational improvements reside within a more capable XDNA NPU, so it is interesting to see that the 8840U outperforms its predecessor. They both sport AMD's Radeon 780M integrated graphics solution (RDNA 3), while the standard/first iteration Steam Deck makes do with an RDNA 2-era "Van Gogh" iGPU. Golomb found that the "three other GPD 7840U devices behaved somewhat consistently."

MSI Claw Review Units Observed Trailing Behind ROG Ally in Benchmarks

Chinese review outlets have received MSI Claw sample units—the "Please, Xiao Fengfeng" Bilibili video channel has produced several comparison pieces detailing how the plucky Intel Meteor Lake-powered handheld stands up against its closest rival, the ASUS ROG Ally. The latter utilizes an AMD Ryzen Z1 APU—in Extreme or Standard forms—and many news outlets have pointed out that the Z1 Extreme processor is a slightly reworked Ryzen 7 7840U "Phoenix" processor. Intel and its handheld hardware partners have not dressed up Meteor Lake chips with alternative gaming monikers—simply put, the MSI Claw arrives with Core Ultra 7-155H or Ultra 5-135H processors onboard. The two rival systems both run on Windows 11, and also share the same screen size, resolution, display technology (IPS) and 16 GB LPDDR5-6400 memory configuration. The almost eight-month-old ASUS handheld seems to outperform its near-launch competition.

Xiao Fengfeng's review (Ultra 7-155H versus Z1 Extreme) focuses on different power levels and how they affect handheld performance—the Claw and Ally have user selectable TDP modes. A VideoCardz analysis piece lays out key divergences: "Both companies offer easy TDP profile switches, allowing users to adjust performance based on the game's requirements or available battery life. The Claw's larger battery could theoretically offer more gaming time or higher TDP with the same battery life. The system can work at 40 W TDP level (but in reality it's between 35 and 40 watts)...In the Shadow of the Tomb Raider test, the Claw doesn't seem to outperform the ROG Ally. According to a Bilibili creator's test, the system falls short at four different power levels: 15 W, 20 W, 25 W, and max TDP (40 W for Claw and 30 W for Ally)."

AMD Develops ROCm-based Solution to Run Unmodified NVIDIA's CUDA Binaries on AMD Graphics

AMD has quietly funded an effort over the past two years to enable binary compatibility for NVIDIA CUDA applications on their ROCm stack. This allows CUDA software to run on AMD Radeon GPUs without adapting the source code. The project responsible is ZLUDA, which was initially developed to provide CUDA support on Intel graphics. The developer behind ZLUDA, Andrzej Janik, was contracted by AMD in 2022 to adapt his project for use on Radeon GPUs with HIP/ROCm. He spent two years bringing functional CUDA support to AMD's platform, allowing many real-world CUDA workloads to run without modification. AMD decided not to productize this effort for unknown reasons but did open-source it once funding ended per their agreement. Over at Phoronix, the ZLUDA implementation was put through a wide variety of benchmarks.

Benchmarks found that proprietary CUDA renderers and software worked on Radeon GPUs out-of-the-box with the drop-in ZLUDA library replacements. CUDA-optimized Blender 4.0 rendering now runs faster on AMD Radeon GPUs than the native ROCm/HIP port, reducing render times by around 10-20%, depending on the scene. The implementation is surprisingly robust, considering it was a single-developer project. However, there are some limitations—OptiX and PTX assembly code are not yet fully supported. Overall, though, testing showed very promising results. Over the generic OpenCL runtimes in Geekbench, CUDA-optimized binaries produce up to 75% better results. With the ZLUDA libraries handling API translation, unmodified CUDA binaries can now run directly on top of ROCm and Radeon GPUs. Strangely, the ZLUDA port targets AMD ROCm 5.7, not the newest 6.x versions. Only time will tell if AMD continues investing in this approach to simplify porting of CUDA software. However, the open-sourced project now enables anyone to contribute and help improve compatibility. For a complete review, check out Phoronix's tests.
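To illustrate what "unmodified CUDA binaries" means in practice, here is a minimal CUDA driver-API program in plain C++. According to the article, a binary like this, compiled and linked against the normal CUDA libraries, could run on a Radeon GPU once ZLUDA's drop-in replacement library is loaded in place of NVIDIA's; the exact loading mechanism is documented in the ZLUDA project and is assumed here rather than shown.

```cpp
// Minimal CUDA driver-API program (plain C++, linked against libcuda/nvcuda).
// Under ZLUDA's drop-in library, the same unmodified binary is translated to
// ROCm/HIP and reports the Radeon GPU instead of an NVIDIA one.
#include <cuda.h>
#include <cstdio>

int main()
{
    if (cuInit(0) != CUDA_SUCCESS) {
        std::printf("no CUDA-capable driver found\n");
        return 1;
    }

    CUdevice dev;
    cuDeviceGet(&dev, 0);

    char name[256];
    cuDeviceGetName(name, sizeof(name), dev);

    size_t totalMem = 0;
    cuDeviceTotalMem(&totalMem, dev);

    // With ZLUDA loaded, this prints the Radeon GPU exposed through ROCm/HIP.
    std::printf("device 0: %s, %zu MiB\n", name, totalMem >> 20);
    return 0;
}
```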

AMD Ryzen 9 7940HX APU Benchmarked in ASUS Tianxuan 5 Pro Laptop

ASUS China has distributed Tianxuan 5 Pro laptop review samples to media outlets in the region—a video evaluation was uploaded to Bilibili yesterday, as discovered and shared by 9550pro. The reviewer, "Wheat Milk Mitsu," put his sampled laptop's AMD Ryzen 9 7940HX processor through the proverbial wringer—with benchmarking exercises conducted in Cinebench R23, PCMark 10, Counter Strike 2, Cyberpunk 2077, Metro Exodus and more. The Ryzen 9 7940HX "Dragon Range" APU was last spotted in the specification sheets for ASUS TUF Gaming A16 (2024) laptop models—the mobile processor is essentially an underclocked offshoot of Team Red's Ryzen 9 7945HX. AMD's Ryzen 8040 "Hawk Point" series has received most of the attention in Western markets—we only see occasional coverage of older Zen 4 "Dragon Range" parts.

AMD's slightly weaker Ryzen 9 7940HX processor is no slouch when compared to its higher-clocked sibling, despite a lower base clock (2.4 GHz) and Turbo (5.2 GHz)—the Tianxuan (China's equivalent to TUF Gaming) branded laptop was outfitted with a GeForce RTX 4070 mobile GPU and 16 GB of DDR5-5600 RAM. Synthetic benchmark results in Cinebench R23 indicate a marginal 3.7% difference, and multi-core figures show an even smaller difference of 1%. The two Dragon Range APUs exhibited largely the same performance in gaming scenarios, although the 7945HX pulls ahead in Counter-Strike 2 frame rate stakes—328 vs. 265 at 1440p, and 378 vs. 308 at 1080p. AMD's convoluted naming schemes make it difficult to keep track of its many mobile offerings—a 7840HX SKU could join the Dragon Range family in Q1 2024. A few Western media outlets believe that a smattering of these parts are destined for global markets, but Team Red's Marketing HQ has not bothered to announce them in any official capacity. Strange times.

UL Solutions Previews Upcoming 3DMark Steel Nomad Benchmark

Thank you to the 3DMark community - the gamers, overclockers, hardware reviewers, tech-heads and those in the industry using our benchmarks, who have joined us in discovering what the cutting edge of PC hardware can do over this last quarter of a century. Looking back, it's amazing how far graphics have come, and we're very excited to see what the next 25 years bring.

After looking back, it's time to share a sneak peek of what's coming next. Here are some preview screenshots for 3DMark Steel Nomad, our successor to 3DMark Time Spy. It's been more than seven years since we launched Time Spy, and after more than 42 million submitted results, we think it's time for a new heavy non-ray tracing benchmark. Steel Nomad will be our most demanding non-ray tracing benchmark and will not only support Windows using DirectX 12, but also macOS and iOS using Metal, Android using Vulkan, and Linux using Vulkan for Enterprise and reviewers. To celebrate 3DMark's 25th year, the scene will feature some callbacks to many of our previous benchmarks. We hope you have fun finding them all!