News Posts matching #Benchmark


Apple's A18 4-core iGPU Benched Against Older A16 Bionic, 3DMark Results Reveal 10% Performance Deficit

Apple's new budget-friendly iPhone 16e model was introduced earlier this month; potential buyers were eyeing a device (starting at $599) that houses a selectively "binned" A18 mobile chipset. The more expensive iPhone 16 and iPhone 16 Plus models were launched last September with A18 chips on board, featuring six CPU cores and five GPU cores. Apple's brand-new 16e smartphone seems to utilize an A18 sub-variant—tech boffins have highlighted this package's reduced GPU core count of four. The so-called "binned A18" reportedly posted inferior performance figures—15% slower—when lined up against its standard 5-core sibling in Geekbench 6 Metal tests. The iPhone 16e was released at retail today (February 28), with review embargoes lifted earlier in the week.

A popular portable tech YouTuber—Dave2D (aka Dave Lee)—decided to pit his iPhone 16e sample unit against older technology contained within the iPhone 15 (2023). The binned A18's 4-core iGPU competed with the A16 Bionic's 5-core integrated graphics solution in a 3DMark Wild Life Extreme Unlimited head-to-head. Respective tallies—of 2882 and 3170 points—were recorded for posterity's sake. The more mature chipset (from 2022) managed to surpass its younger sibling by ~10%, according to the scores presented on Dave2D's comparison chart. The video reviewer reckoned that the iPhone 16e's SoC offers "killer performance," despite reservations expressed about the device not offering great value for money. Other outlets have questioned the prowess of Apple's latest step-down model. Referencing current-gen 3DMark benchmark results, Wccftech observed: "for those wanting to know the difference between the binned A18 and non-binned variant; the SoC with a 5-core GPU running in the iPhone 16 finishes the benchmark run with an impressive 4007 points, making it a massive 28.04 percent variation between the two (pieces of) silicon. It is an eye-opener to witness such a mammoth performance drop, which also explains why Apple resorted to chip-binning on the iPhone 16e as it would help bring the price down substantially."
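For readers checking the math, note that the two percentages above use different baselines. A minimal Python sketch using only the scores cited in this post:

```python
# Arithmetic check using only the 3DMark scores quoted above.
binned_a18 = 2882   # iPhone 16e, A18 with 4-core GPU
a16_bionic = 3170   # iPhone 15, A16 Bionic with 5-core GPU
full_a18 = 4007     # iPhone 16, A18 with 5-core GPU

# Dave2D's ~10% figure: how far the A16 Bionic pulls ahead of the binned A18.
lead = (a16_bionic - binned_a18) / binned_a18 * 100
print(f"A16 Bionic leads the binned A18 by {lead:.1f}%")    # ~10%

# Wccftech's ~28% figure: the binned A18's deficit against the full 5-core A18.
deficit = (full_a18 - binned_a18) / full_a18 * 100
print(f"Binned A18 trails the full A18 by {deficit:.1f}%")  # ~28%, in line with the quoted 28.04%
```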

AMD Ryzen 9 9950X3D Leaked 3DMark & Cinebench Results Indicate 9950X-esque Performance

The AMD Ryzen 9 9950X3D processor will head to retail next month—a March 12 launch day is rumored—but a handful of folks seem to have early samples in their possession. Reviewers and online influencers have been tasked with evaluating pre-launch silicon, albeit under strict conditions; i.e. no leaking. Inevitably, NDA-shredding material has seeped out—yesterday, we reported on an alleged sample's ASUS Silicon Prediction rating. Following that, a Bulgarian system integrator/hardware retailer decided to upload Cinebench R23 and 3DMark Time Spy results to Facebook. Evidence of this latest leak was scrubbed at the source, but VideoCardz preserved crucial details.

The publication noticed distinguishable QR and serial codes in PCbuild.bg's social media post, meaning tracing activities could sniff out points of origin. As expected, the leaked benchmark data points were compared to Ryzen 9 9950X and 7950X3D scores. The Ryzen 9 9950X3D sample recorded a score of 17,324 points in 3DMark Time Spy, as well as 2279 points (single-core) and 42,423 points (multi-core) in Cinebench R23. Notebookcheck observed that the pre-launch candidate came: "out ahead of the Ryzen 9 7950X3D in both counts, even if the gaming win is less than significant. Comparing the images of the benchmark results to our in-house testing and benchmark database shows the 9950X3D beating the 7950X3D by nearly 17% in Cinebench multicore." When compared to its non-3D V-Cache equivalent, the Ryzen 9 9950X3D holds a slight performance advantage. A blurry shot of PCbuild.bg's HWiNFO session shows the leaked processor's core clock speeds going up to 5.7 GHz (turbo) on a single CCD (non-X3D). The X3D-equipped portion seems capable of going up to 5.54 GHz.

Dune: Awakening Release Date and Price Revealed, Character Creation Now Live!

Today, Funcom finally lifted the veil on Dune: Awakening's release date. The open-world multiplayer survival game set on Arrakis will come to Steam on May 20! Players can begin their journey today by diving into the brand-new Character Creation & Benchmark Mode, available now through Steam. Created characters can then be imported into Dune: Awakening at launch.

Inspired by Frank Herbert's legendary sci-fi novel and Legendary Entertainment's award-winning films, Dune: Awakening is crafted by Funcom's veteran developers to deliver an experience that resonates with Dune enthusiasts and survival game fans alike. Get ready to step into the biggest Dune game ever made with today's trailer.

AMD & Nexa AI Reveal NexaQuant's Improvement of DeepSeek R1 Distill 4-bit Capabilities

Nexa AI today announced NexaQuants of two DeepSeek R1 Distills: the DeepSeek R1 Distill Qwen 1.5B and the DeepSeek R1 Distill Llama 8B. Popular quantization methods like the llama.cpp-based Q4_K_M allow large language models to significantly reduce their memory footprint, typically trading away only a small amount of perplexity for dense models. However, even low perplexity loss can result in a reasoning capability hit for (dense or MoE) models that use Chain of Thought traces. Nexa AI has stated that NexaQuants are able to recover this reasoning capability loss (compared to the full 16-bit precision) while keeping the 4-bit quantization and retaining its performance advantage. Benchmarks provided by Nexa AI can be seen below.

We can see that the Q4_K_M quantized DeepSeek R1 distills score slightly lower (except for the AIME24 bench on the Llama 3 8B distill, which scores significantly lower) in LLM benchmarks like GPQA and AIME24 compared to their full 16-bit counterparts. Moving to a Q6 or Q8 quantization would be one way to fix this problem, but it would result in the model becoming slightly slower to run and requiring more memory. Nexa AI has stated that NexaQuants use a proprietary quantization method to recover the loss while keeping the quantization at 4 bits. This means users can theoretically get the best of both worlds: accuracy and speed.
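For readers unfamiliar with why 4-bit formats cost accuracy at all, the sketch below shows a generic blockwise 4-bit quantizer loosely in the spirit of llama.cpp's Q4 formats. It is purely illustrative: the block size, symmetric rounding, and per-block scaling are assumptions for demonstration, and this is not NexaQuant's proprietary method.

```python
# Illustrative only: generic symmetric 4-bit block quantization.
# NOT NexaQuant's proprietary method.
import numpy as np

def quantize_q4(weights: np.ndarray, block_size: int = 32):
    """Quantize a 1-D float array to signed 4-bit integers, one scale per block."""
    blocks = weights.reshape(-1, block_size)
    scales = np.abs(blocks).max(axis=1, keepdims=True) / 7.0  # map block max to +/-7
    scales[scales == 0] = 1.0                                  # avoid divide-by-zero
    q = np.clip(np.round(blocks / scales), -8, 7).astype(np.int8)
    return q, scales

def dequantize_q4(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    return (q * scales).reshape(-1)

rng = np.random.default_rng(0)
w = rng.normal(size=4096).astype(np.float32)   # stand-in for 16-bit weights
q, s = quantize_q4(w)
w_hat = dequantize_q4(q, s)

# The per-weight rounding error below is the noise a 4-bit format introduces;
# recovery methods aim to compensate for its effect on downstream reasoning.
print("mean absolute rounding error:", float(np.abs(w - w_hat).mean()))
```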

NVIDIA GeForce RTX 5070 Ti Allegedly Scores 16.6% Improvement Over RTX 4070 Ti SUPER in Synthetic Benchmarks

Thanks to some early 3DMark benchmarks obtained by VideoCardz, we have an interesting picture of the performance gains NVIDIA's upcoming GeForce RTX 5070 Ti GPU offers over its predecessor. Testing conducted with AMD's Ryzen 7 9800X3D processor and 48 GB of DDR5-6000 memory has provided the first glimpse into the card's capabilities. The new GPU demonstrates a 16.6% performance improvement over its predecessor, the RTX 4070 Ti SUPER. However, benchmark data shows it falling short of the more expensive RTX 5080 by 13.2%, raising questions about the price-to-performance ratio given the $250 price difference between the two cards. Priced at $749 MSRP, the RTX 5070 Ti could be even pricier in retail channels at launch, especially with limited availability. The card's positioning becomes particularly interesting compared to the RTX 5080's $999 price point, which commands a 33% premium for its additional performance capabilities.

As a reminder, the RTX 5070 Ti boasts 8,960 CUDA cores, 280 texture units, 70 RT cores for ray tracing, and 280 tensor cores for AI computations, all supported by 16 GB of GDDR7 memory running at 28 Gbps effective speed across a 256-bit bus interface, resulting in 896 GB/s of bandwidth. We have to wait for proper reviews for the final performance verdict, as synthetic benchmarks tell only part of the story. Modern gaming demands consideration of advanced features such as ray tracing and upscaling technologies, which can significantly impact real-world performance. The true test will come from comprehensive gaming benchmarks across a variety of titles and scenarios. The gaming community won't have to wait long for detailed analysis, as official reviews are reportedly set to be released in just a few days. Additional evaluations of non-MSRP versions should follow on February 20, the card's launch date.
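The 896 GB/s figure follows directly from the quoted memory specification, assuming the usual effective-data-rate times bus-width calculation:

```python
# Memory-bandwidth check from the specs quoted above.
effective_rate_gbps = 28      # GDDR7 effective data rate, Gbps per pin
bus_width_bits = 256          # RTX 5070 Ti memory interface
bandwidth_gb_s = effective_rate_gbps * bus_width_bits / 8   # bits -> bytes
print(f"{bandwidth_gb_s:.0f} GB/s")   # 896 GB/s
```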

NVIDIA GeForce RTX 5070 Ti Edges Out RTX 4080 in OpenCL Benchmark

A recently surfaced Geekbench OpenCL listing has revealed the performance improvements that the GeForce RTX 5070 Ti is likely to bring to the table, and the numbers sure look promising - that is, coming off the disappointment of the GeForce RTX 5080, which manages roughly 260,000 points in the benchmark, portraying a paltry 8% improvement over its predecessor. The GeForce RTX 5070 Ti, however, managed an impressive 248,000 points, putting it a substantial 20% ahead of the GeForce RTX 4070 Ti. Hilariously enough, the RTX 5080 is merely 4% ahead, making the situation even worse for the somewhat contentious GPU. NVIDIA has claimed similar performance improvements in its marketing material, which does seem quite plausible.

Of course, an OpenCL benchmark is hardly representative of real-world gaming performance. That being said, there is no denying that raw benchmarks will certainly help buyers temper expectations and make decisions. Previous leaks and speculation have hinted at a roughly 10% improvement over its predecessor in raster performance and up to 15% improvements in ray tracing performance, although the OpenCL listing does indicate the RTX 5070 Ti might be capable of a larger generational jump, neck-and-neck with NVIDIA's claims. For those in need of a refresher, the RTX 5070 Ti boasts 8,960 CUDA cores paired with 16 GB of GDDR7 memory on a 256-bit bus. Like its siblings, the RTX 5070 Ti is also rumored to face "extremely limited" supply at launch. With its official launch less than a week away, we won't have much waiting to do to find out for ourselves.

NVIDIA RTX 5080 Laptop Defeats Predecessor By 19% in Time Spy Benchmark

The NVIDIA RTX 50-series witnessed quite a contentious launch, to say the least. Hindered by abysmal availability, controversial generational improvements, and wacky marketing tactics by Team Green, it would be safe to say a lot of passionate gamers were left utterly disappointed. That said, while the desktop cards have been the talk of the town as of late, the RTX 50 Laptop counterparts are yet to make headlines. Occasional leaks do appear on the interwebs, the latest of which seems to indicate the 3DMark Time Spy performance of the RTX 5080 Laptop GPU. And the results are - well, debatable.

We do know that the RTX 5080 Laptop GPU will feature 7,680 CUDA cores, a shockingly modest increase over its predecessor. Considering that we did not get a node shrink this time around, the architectural improvements appear to be rather minimal, going by the tests conducted so far. Of course, the biggest boost in performance will likely be afforded by GDDR7 memory utilizing a 256-bit bus, compared to its predecessor's GDDR6 memory on a 192-bit bus. In 3DMark's Time Spy DX12 test, which is somewhat of an outdated benchmark, the RTX 5080 Laptop managed around 21,900 points. The RTX 4080 Laptop, on average, rakes in around 18,200 points, putting the RTX 5080 Laptop ahead by almost 19%. The RTX 4090 Laptop is also left behind, by around 5%.

Capcom Releases Monster Hunter Wilds PC Performance Benchmark Tool

Hey hunters, how's it going? February is here, which means we are officially in the launch month of Monster Hunter Wilds! On February 28, your journey into the Forbidden Lands begins. Now, to help ensure you have a satisfying, fun experience come launch, we're pleased to share that the Monster Hunter Wilds Benchmark we'd previously mentioned we were looking into is real, it's ready, and it's live right now for you to try!

With the Monster Hunter Wilds Benchmark, we want to help our PC players feel more confident about how their PC will run Monster Hunter Wilds. In the next section, we're going to explain what the Monster Hunter Wilds Benchmark is, how it works, as well as some important information and differences you'll see between this and the Open Beta Test 1 and 2 experiences, so please take a moment to check it out.

UL Solutions Adds Support for DLSS 4 and DLSS Multi Frame Generation to the 3DMark NVIDIA DLSS Feature Test

We're excited to announce that in today's update to 3DMark, we're adding support for DLSS 4 and DLSS Multi Frame Generation to the NVIDIA DLSS feature test. The NVIDIA DLSS feature test and this update were developed in partnership with NVIDIA. The 3DMark NVIDIA DLSS feature test lets you compare performance and image quality with and without DLSS processing enabled. If you have a new GeForce RTX 50 Series GPU, you'll also be able to compare performance with and without the full capabilities of DLSS 4.

You can choose to run the NVIDIA DLSS feature test using DLSS 4, DLSS 3 or DLSS 2. DLSS 4 includes the new DLSS Multi Frame Generation feature, and you can choose between several image quality modes—Quality, Balanced, Performance, Ultra Performance and DLAA. These modes are designed for different resolutions, from Full HD up to 8K. DLSS Multi Frame Generation uses AI to boost frame rates with up to three additional frames generated per traditionally rendered frame. In the 3DMark NVIDIA DLSS feature test, you are able to choose between 2x, 3x and 4x Frame Generation settings if you have an NVIDIA GeForce RTX 50 series GPU.
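As a rough mental model (an assumption for illustration, ignoring the generation overhead real hardware incurs), the 2x/3x/4x settings determine how many frames reach the display per traditionally rendered frame:

```python
# Simplified view of Multi Frame Generation: one rendered frame plus up to
# three AI-generated frames, so displayed FPS scales with the chosen factor.
def displayed_fps(rendered_fps: float, mfg_factor: int) -> float:
    assert mfg_factor in (2, 3, 4), "matches the 2x/3x/4x settings in the test"
    return rendered_fps * mfg_factor

for factor in (2, 3, 4):
    print(f"{factor}x: 60 rendered FPS -> {displayed_fps(60, factor):.0f} displayed FPS")
```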

Ubisoft Unveils Assassin's Creed Shadows Recommended PC Specs

Hi everyone, Assassin's Creed Shadows is launching March 20, inviting you to experience the intertwined stories of Naoe, an adept shinobi Assassin, and Yasuke, a powerful African samurai. Today, you can pre-order the game on console and PC, and read up on Shadows' upcoming expansion, Claws of Awaji, which brings 10 hours of additional content free with your pre-order.

For those of you playing on PC, we've got all of Assassin's Creed Shadows' recommended PC specs listed in this article. Assassin's Creed Shadows will support ray-traced global illumination and reflections, and will feature an in-game benchmark tool for performance analysis, ultra-wide resolutions, an uncapped framerate, and more. Check out the full specs chart below.

AMD Radeon RX 9070 XT Benchmarked in 3DMark Time Spy Extreme and Speed Way

Although it has only been a few days since the RDNA 4-based GPUs from Team Red hit the scene, it appears that we have already been granted a first look at the 3DMark performance of the highest-end Radeon RX 9070 XT GPU, and to be perfectly honest, the scores seemingly live up to our expectations - although with disappointing ray tracing performance. Unsurprisingly, the thread has been erased over at Chiphell, but folks managed to take screenshots in the nick of time.

The specifics reveal that the Radeon RX 9070 XT will arrive with a massive TBP in the range of 330 watts, as revealed by a FurMark snap, which is substantially higher than previously estimated numbers. With 16 GB of GDDR6 memory, along with base and boost clocks of 2520 and 3060 MHz, the Radeon RX 9070 XT managed to rake in an impressive 14,591 points in Time Spy Extreme, and around 6,345 points in Speed Way. Needless to say, the drivers are likely far from mature, so it is not outlandish to expect a few more points to get squeezed out of the RDNA 4 GPU.

NVIDIA GeForce RTX 5080 Laptop GPU Challenges RTX 4090 Laptop in Leaked Benchmark

Once every two years or so, technology enthusiasts like ourselves have our sights pinned on what the GPU giants have in store for us. That moment is here, with both NVIDIA and AMD unveiling their Blackwell and RDNA 4 products respectively. NVIDIA has also announced its laptop offerings, with the RTX 5080 Laptop attempting to rule the mainstream high-performance segment. Now, barely a day or two after launch, we already have a rough idea of how mobile Blackwell is going to perform.

The leaked Geekbench OpenCL results, which come courtesy of an Alienware Area-51 laptop, reveal how well the RTX 5080 Laptop GPU performs in a 175-watt configuration. According to the numbers, the RTX 5080 Laptop managed to barely exceed the 190,000-point barrier, putting it miles ahead of its predecessor, which managed around 160,000. Interestingly, as the headline notes, the RTX 4090 Laptop, which scores around 180,000 points on average, was also left behind, although systems with beefier cooling setups can post higher numbers.

AMD Ryzen AI 7 350 Benchmark Tips Cut-Back Radeon 860M GPU

AMD's upcoming Ryzen AI "Kraken Point" APUs appear to be affordable chips for next-generation thin-and-light laptops and potentially even some gaming handhelds. Murmurings of these new APUs have been going around for quite some time, but a PassMark benchmark was just posted, giving us a pretty comprehensive look at the hardware configuration of the upcoming Ryzen AI 7 350. While the CPU configuration in the PassMark result confirms the 4+4 configuration we reported on previously, it seems as though the iGPU portion of the new Ryzen AI 7 is getting something of a downgrade compared to previous generations.

While all previous mobile Ryzen 7 and Ryzen 9 APUs have featured Radeon -80M or -90M series iGPUs, the Ryzen AI 7 350 steps down to the AMD Radeon 860M. Although not much is known about the new iGPU, it uses the same nomenclature as the Radeon iGPUs found in previous Ryzen 5 APUs, suggesting it is the less performant of the new 800-series iGPUs. This would be the first time, at least since the introduction of the Ryzen branding, that a Ryzen 7 CPU uses a cut-down iGPU. This, along with the 4+4 (Zen 5 and Zen 5c) heterogeneous architecture, suggests that this Ryzen 7 APU will prioritize battery life and thermal performance, likely in response to Qualcomm's recent offerings. Comparing the 760M to the single 860M benchmark on PassMark reveals similar performance, with the 860M actually falling 9.1% behind the average 760M result. Take this with a grain of salt, though, since there is only one benchmark result on PassMark for the 860M.

UL Adds New DirectStorage Test to 3DMark

Today we're excited to launch the 3DMark DirectStorage feature test. This feature test is a free update for the 3DMark Storage Benchmark DLC. The 3DMark DirectStorage feature test helps gamers understand the potential performance benefits that Microsoft's DirectStorage technology could have for their PC's gaming performance.

DirectStorage is a Microsoft technology for Windows PCs with PCIe SSDs that reduces the overhead when loading game data. DirectStorage can be used to reduce game loading times when paired with other technologies such as GDeflate, where the GPU can be used to decompress certain game assets instead of the CPU. On systems running Windows 11, DirectStorage can bring further benefits with BypassIO, lowering a game's CPU overhead by reducing the CPU workload when transferring data.

SPEC Delivers Major SPECworkstation 4.0 Benchmark Update, Adds AI/ML Workloads

The Standard Performance Evaluation Corporation (SPEC), the trusted global leader in computing benchmarks, today announced the availability of the SPECworkstation 4.0 benchmark, a major update to SPEC's comprehensive tool designed to measure all key aspects of workstation performance. This significant upgrade from version 3.1 incorporates cutting-edge features to keep pace with the latest workstation hardware and the evolving demands of professional applications, including the increasing reliance on data analytics, AI and machine learning (ML).

The new SPECworkstation 4.0 benchmark provides a robust, real-world measure of CPU, graphics, accelerator, and disk performance, ensuring professionals have the data they need to make informed decisions about their hardware investments. The benchmark caters to the diverse needs of engineers, scientists, and developers who rely on workstation hardware for daily tasks. It includes real-world applications like Blender, Handbrake, LLVM and more, providing a comprehensive performance measure across seven different industry verticals, each focusing on specific use cases and subsystems critical to workstation users. The SPECworkstation 4.0 benchmark marks a significant milestone for measuring workstation AI performance, providing an unbiased, real-world, application-driven tool for measuring how workstations handle AI/ML workloads.

Apple M4 Max CPU Faster Than Intel and AMD in 1T/nT Benchmarks

Early benchmark results have revealed Apple's newest M4 Max processor as a serious competitor to Arm-based CPUs from Qualcomm and even the best x86 chips from Intel and AMD. Recent Geekbench 6 tests conducted on the latest 16-inch MacBook Pro showcase considerable improvements over both its predecessor and rival chips from major competitors. The M4 Max achieved an impressive single-core score of 4,060 points and a multicore score of 26,675 points, marking significant advancements in processing capability. These results represent approximately 30% and 27% improvements in single-core and multicore performance, respectively, compared to the previous M3 Max. This is also much higher than something like the Snapdragon X Elite, which tops out at twelve cores per SoC. When measured against x86 competitors, the M4 Max also demonstrates substantial advantages.

The chip outperforms Intel's Core Ultra 9 285K by 19% in single-core and 16% in multicore tests, and surpasses AMD's Ryzen 9 9950X by 18% in single-core and 25% in multicore performance. Notably, these achievements come with significantly lower power consumption than traditional x86 processors. The flagship system-on-chip features a sophisticated 16-core CPU configuration, combining twelve performance and four efficiency cores. Additionally, it integrates 40 GPU cores and supports up to 128 GB of unified memory, shared between CPU and GPU operations. The new MacBook Pro line also introduces Thunderbolt 5 compatibility, enabling data transfer speeds up to 120 Gb/s. While the M4 Max presents an impressive response to the current market, we have yet to see its capabilities in real-world benchmarks, as synthetic runs of this kind tell only part of the performance story. We need to see productivity, content creation, and even gaming benchmarks to fully crown it the king of performance. Below is a table comparing Geekbench v6 scores, courtesy of Tom's Hardware, and a random Snapdragon X Elite (X1E-00-1DE) run in top configuration.

Intel Core Ultra 9 285K Tops PassMark Single-Thread Benchmark

According to the latest PassMark benchmarks, the Intel Core Ultra 9 285K is the highest-performing single-thread CPU. The benchmark king title comes as PassMark's official account on X shared single-threaded performance numbers, with the upcoming Arrow Lake-S flagship SKU, the Intel Core Ultra 9 285K, scoring 5,268 points in single-core results. This is fantastic news for gamers, as games mostly care about single-core performance. This CPU, with 8 P-cores and 16 E-cores, boasts 5.7 GHz P-core boost and 4.6 GHz E-core boost frequencies. The single-core tests give the new SKU an 11% lead over the previous-generation Intel Core i9-14900K processor.

However, the multithreaded results are less impressive. PassMark's multithreaded run puts the Intel Core Ultra 9 285K at 46,872 points, which is about 22% slower than the last-generation top SKU. While this may be a disappointment for some, it is partially expected, given that Arrow Lake drops Hyper-Threading from Intel's CPU designs. From now on, every CPU will be a combination of P-cores and E-cores, tuned for efficiency or performance depending on the use case. It is also possible that the CPU used in PassMark's testing was an engineering sample, so until the official launch we have no concrete information about its definitive performance.

Zhaoxin's KX-7000 8-Core Processor Tested in Detail, Bested by 7-Year-Old Core i3

PC Watch recently got hands-on with Shanghai Zhaoxin's latest desktop processor for some in-depth testing and published a less-than-optimistic review comparing it to both the previous-generation KX-U6780A and Intel's equally clocked budget quad-core offering from 2017, the 3.6 GHz Core i3-8100. Though Zhaoxin's latest could muscle its way through some multithreaded tests such as Cinebench R23 due to having twice the core count, the single-core performance proved to be nearly half that of the i3 in everything from synthetic tests to gaming.

PC Watch tested with the Dragon Quest X Benchmark, a DX9.0c title, to put the spotlight on single-core gaming performance in older games, as well as with Final Fantasy XIV running the latest Golden Legacy benchmark, released back in April of this year, to show off more modern multithreaded gaming. With AMD's RX 6400 handling graphics at 1080p, the KX-7000/8 scored around 60% of the i3-8100 in Dragon Quest X, and in Final Fantasy XIV it scored 90% of the i3. The result in Final Fantasy XIV was considered "somewhat comfortable" for gameplay but still less than optimal. As a comparison point for a modern budget gaming PC option, the Ryzen 5 5600G was also included in testing, where in Final Fantasy XIV it was 30% ahead of the KX-7000/8. PC Watch attempted to put the integrated ZX-C1190 to work in games but found that despite supporting modern APIs and features, the performance was no match for the competition.
KX-7000 CPU-Z - Credit: PC Watch

AMD Ryzen AI Max 390 "Strix Halo" Surfaces in Geekbench AI Benchmark

In case you missed it, AMD's new madcap enthusiast silicon engineering effort, the "Strix Halo," is real, and comes with the Ryzen AI Max 300 series branding. These are chiplet-based mobile processors with one or two "Zen 5" CCDs—same ones found in "Granite Ridge" desktop processors—paired with a large SoC die that has an oversized iGPU. This arrangement lets AMD give the processor up to 16 full-sized "Zen 5" CPU cores, and an iGPU with as many as 40 RDNA 3.5 compute units (2,560 stream processors), and a 256-bit LPDDR5/x memory interface for UMA.

"Strix Halo" is designed for ultraportable gaming notebooks or mobile workstations where low PCB footprint is of the essence, and discrete GPU is not an option. For enthusiast gaming notebooks with discrete GPUs, AMD is designing the "Fire Range" processor, which is essentially a mobile BGA version of "Granite Ridge," and a successor to the Ryzen 7045 series "Dragon Range." The Ryzen AI Max series has three models based on CPU and iGPU CU counts—the Ryzen AI Max 395+ (16-core/32-thread with 40 CU), the Ryzen AI Max 390 (12-core/24-thread with 40 CU), and the Ryzen AI Max 385 (8-core/16-thread, 32 CU). An alleged Ryzen AI Max 390 engineering sample surfaced on the Geekbench AI benchmark online database.

Geekbench AI Hits 1.0 Release: CPUs, GPUs, and NPUs Finally Get AI Benchmarking Solution

Primate Labs, the developer behind the popular Geekbench benchmarking suite, has launched Geekbench AI—a comprehensive benchmark tool designed to measure the artificial intelligence capabilities of various devices. Geekbench AI, previously known as Geekbench ML during its preview phase, has now reached version 1.0. The benchmark is available on multiple operating systems, including Windows, Linux, macOS, Android, and iOS, making it accessible to many users and developers. One of Geekbench AI's key features is its multifaceted approach to scoring. The benchmark utilizes three distinct precision levels: single-precision, half-precision, and quantized data. This evaluation aims to provide a more accurate representation of AI performance across different hardware designs.

In addition to speed, Geekbench AI places a strong emphasis on accuracy. The benchmark assesses how closely each test's output matches the expected results, offering insights into the trade-offs between performance and precision. The release of Geekbench AI 1.0 brings support for new frameworks, including OpenVINO, ONNX, and Qualcomm QNN, expanding its compatibility across various platforms. Primate Labs has also implemented measures to ensure fair comparisons, such as enforcing minimum runtime durations for each workload. The company noted that Samsung and NVIDIA are already utilizing the software to measure their chip performance in-house, showing that adoption is already strong. While the benchmark provides valuable insights, real-world AI applications are still limited, and reliance on a few benchmarks may paint a partial picture. Nevertheless, Geekbench AI represents a significant step forward in standardizing AI performance measurement, potentially influencing future consumer choices in the AI-driven tech market. Results from the benchmark runs can be seen here.
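To illustrate what those three precision levels mean in practice, here is a small generic sketch (not Geekbench AI's actual workloads) that runs one matrix multiplication at single precision, half precision, and with 8-bit quantized inputs, then reports each result's drift from the FP32 reference; this is the same performance-versus-accuracy trade-off the benchmark is designed to expose:

```python
# Generic illustration of precision levels, not Geekbench AI's internal code.
import numpy as np

rng = np.random.default_rng(42)
a = rng.normal(size=(256, 256)).astype(np.float32)
b = rng.normal(size=(256, 256)).astype(np.float32)

ref = a @ b                                                                # single precision (FP32)
half = (a.astype(np.float16) @ b.astype(np.float16)).astype(np.float32)   # half precision

def quantize_int8(x: np.ndarray):
    scale = float(np.abs(x).max()) / 127.0
    return np.round(x / scale).astype(np.int8), scale

qa, sa = quantize_int8(a)
qb, sb = quantize_int8(b)
quant = (qa.astype(np.int32) @ qb.astype(np.int32)).astype(np.float32) * sa * sb  # quantized

for name, out in (("half precision", half), ("int8 quantized", quant)):
    rel_err = np.abs(out - ref).mean() / np.abs(ref).mean()
    print(f"{name:>15}: mean relative error vs FP32 = {rel_err:.4f}")
```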

"Black Myth: Wukong" Game Gets Benchmarking Tool Companion Designed to Evaluate PC Performance

Game Science, the developer behind the highly anticipated action RPG "Black Myth: Wukong," has released a free benchmark tool on Steam for its upcoming game. This standalone application, separate from the main game, allows PC users to evaluate their hardware performance and system compatibility in preparation for the game's launch. The "Black Myth: Wukong Benchmark Tool" offers a unique glimpse into the game's visuals by rendering a real-time in-game sequence. While not playable, it provides valuable insights into how well a user's system will handle the game's demanding graphics and performance requirements. One of the tool's standout features is its customization options. Users can tweak various graphics settings to preview the game's visuals and performance under different configurations. This flexibility allows gamers to find the optimal balance between visual fidelity and smooth gameplay for their specific hardware setup.

However, Game Science has cautioned that due to the complexity and variability of gaming scenarios, the benchmark results may not fully represent the final gaming experience. This caveat positions the tool as a guide rather than a definitive measure of performance. The benchmark tool's system requirements offer a clear picture of the hardware needed to run "Black Myth: Wukong." At a minimum, users will need a Windows 10 system with an Intel Core i5-8400 or AMD Ryzen 5 1600 processor, 16 GB of RAM, and either an NVIDIA GeForce GTX 1060 6 GB or AMD Radeon RX 580 8 GB graphics card. For an optimal experience, the recommended specifications include an Intel Core i7-9700 or AMD Ryzen 5 5500 processor and an NVIDIA GeForce RTX 2060, AMD Radeon RX 5700 XT, or Intel Arc A750 graphics card. Interestingly, the benchmark tool supports DLSS, FSR, and XeSS technologies, indicating that the final game will likely include these performance-enhancing features. The developers also strongly recommend using an SSD for storage.

FinalWire Releases AIDA64 v7.35 with New CheckMate 64-bit Benchmark

FinalWire Ltd. today announced the immediate availability of AIDA64 Extreme 7.35 software, a streamlined diagnostic and benchmarking tool for home users; the immediate availability of AIDA64 Engineer 7.35 software, a professional diagnostic and benchmarking solution for corporate IT technicians and engineers; the immediate availability of AIDA64 Business 7.35 software, an essential network management solution for small and medium scale enterprises; and the immediate availability of AIDA64 Network Audit 7.35 software, a dedicated network audit toolset to collect and manage corporate network inventories. The new AIDA64 update introduces a new 64-bit CheckMate benchmark, AVX-512 accelerated benchmarks for AMD Ryzen AI APU, and supports the latest graphics and GPGPU computing technologies by AMD, Intel and NVIDIA.

DOWNLOAD: FinalWire AIDA64 v7.35 Extreme

Qualcomm Snapdragon X "Copilot+" AI PCs Only Accounted for 0.3% of PassMark Benchmark Runs

The much-anticipated revolution in AI-powered personal computing seems to be off to a slower start than expected. Qualcomm's Snapdragon X CPUs, touted as game-changers in the AI PC market, have struggled to gain significant traction since their launch. Recent data from PassMark, a popular benchmarking software, reveals that Snapdragon X CPUs account for a mere 0.3% of submissions in the past 30 days. This stands in massive contrast to the 99.7% share held by traditional x86 processors from Intel and AMD, and raises questions about the immediate future of Arm-based PCs. The underwhelming adoption comes despite bold predictions from industry leaders. Qualcomm CEO Cristiano Amon had projected that Arm-based CPUs could capture up to 50% of the Windows PC market by 2029. Similarly, Arm's CEO anticipated a shift away from x86's long-standing dominance.

However, it turns out that these PCs are primarily bought for their battery life, not their AI capabilities. Of course, it's premature to declare Arm's Windows venture a failure. The AI PC market is still in its infancy, and upcoming mid-tier laptops featuring Snapdragon X Elite CPUs could boost adoption rates. A lot of time still needs to pass before the volume of these PCs approaches the millions of units shipped by x86 makers. The true test will come with the launch of AMD's Ryzen AI 300 and Intel's Lunar Lake CPUs, providing a clearer picture of how Arm-based options compare in AI performance. As the AI PC landscape evolves, Qualcomm faces mounting pressure. NVIDIA's anticipated entry into the market and significant performance improvements in next-generation x86 processors from Intel and AMD pose a massive challenge. The coming months will be crucial in determining whether Snapdragon X CPUs can live up to their initial hype and carve out a significant place in the AI PC ecosystem.

Ryzen AI 300 Series: New AMD APUs Appear in CrossMark Benchmark Database

Pre-launch leaks of AMD's upcoming Ryzen AI 300 APUs continue, the latest coming from the BAPCo CrossMark benchmark database. Two models have been spotted: the officially announced Ryzen AI 9 HX 370 and the recently leaked Ryzen AI 7 PRO 360. The Ryzen AI 9 HX 370, part of the "Strix Point" family, boasts 12 cores and 24 threads. Its hybrid architecture combines four Zen 5 cores with eight Zen 5c cores. The chip reaches boost clocks up to 5.1 GHz, features 36 MB of cache (24 MB L3 + 12 MB L2), and includes a Radeon 890M iGPU with 16 compute units (1,024 cores). The Ryzen AI 7 PRO 360, previously leaked as a 12-core part, has now been confirmed with 8 cores and 16 threads. It utilizes a 3+5 configuration of Zen 5 and Zen 5c cores, respectively. The APU includes 8 MB each of L2 and L3 cache, with a base clock of 2.0 GHz. Its integrated Radeon 870M GPU is expected to feature the RDNA 3.5 architecture with fewer cores than its higher-end counterparts, possibly 8 compute units.

According to the leaked benchmarks, the Ryzen AI 9 HX 370 was tested in an HP laptop, while the Ryzen AI 7 PRO 360 appeared in a Lenovo model equipped with LPDDR5-7500 memory. Initial scores appear unremarkable compared to top Intel Core Ultra 9 185H and AMD Ryzen 7040 APUs; however, the tested APUs may be early samples, and their performance could differ from final retail versions. Furthermore, while the TDP range is known to be between 15 W and 54 W, the specific power configurations used in these benchmarks remain unclear. The first Ryzen AI 300 laptops are slated for release on July 28th, with Ryzen AI 300 PRO models expected in October.

Basemark Releases Breaking Limit Cross-Platform Ray Tracing Benchmark

Basemark announced today the release of a groundbreaking cross-platform ray tracing benchmark, GPUScore: Breaking Limit. This new benchmark is designed to evaluate the performance of the full range of ray tracing capable devices, including smartphones, tablets, laptops and high-end desktops with discrete GPUs. With support for multiple operating systems and graphics APIs, Breaking Limit provides a comprehensive performance evaluation across various platforms and devices.

As ray tracing technology becomes increasingly prevalent in consumer electronics, from high-end desktops to portable devices like laptops and smartphones, there is a critical need for a benchmark that can accurately assess and compare performance across different devices and platforms. Breaking Limit addresses this gap, providing valuable insights into how various devices handle hardware-accelerated graphics rendering. The benchmark is an essential tool for developers, manufacturers, and consumers to measure and compare the performance of real-time ray tracing rendering across different hardware and software environments reliably.