News Posts matching #Benchmark

UL Solutions Adds Support for DLSS 4 and DLSS Multi Frame Generation to the 3DMark NVIDIA DLSS Feature Test

We're excited to announce that in today's update to 3DMark, we're adding support for DLSS 4 and DLSS Multi Frame Generation to the NVIDIA DLSS feature test. The NVIDIA DLSS feature test and this update were developed in partnership with NVIDIA. The 3DMark NVIDIA DLSS feature test lets you compare the performance and image quality gains brought by enabling DLSS processing. If you have a new GeForce RTX 50 Series GPU, you'll also be able to compare performance with and without the full capabilities of DLSS 4.

You can choose to run the NVIDIA DLSS feature test using DLSS 4, DLSS 3 or DLSS 2. DLSS 4 includes the new DLSS Multi Frame Generation feature, and you can choose between several image quality modes—Quality, Balanced, Performance, Ultra Performance and DLAA. These modes are designed for different resolutions, from Full HD up to 8K. DLSS Multi Frame Generation uses AI to boost frame rates with up to three additional frames generated per traditionally rendered frame. In the 3DMark NVIDIA DLSS feature test, you are able to choose between 2x, 3x and 4x Frame Generation settings if you have an NVIDIA GeForce RTX 50 series GPU.
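
To put those multipliers in perspective, here is a minimal back-of-the-envelope sketch of what each Frame Generation setting means for output frame rate. It assumes generated frames add zero overhead and uses a hypothetical 60 FPS base frame rate; real-world gains are lower, and this is purely illustrative rather than part of the 3DMark test itself.

```cpp
// Idealized estimate of DLSS Frame Generation output frame rates. Assumes
// generated frames are free (real hardware has overhead) and uses a
// hypothetical 60 FPS traditionally rendered base frame rate.
#include <cstdio>

int main() {
    const double renderedFps = 60.0;   // hypothetical base frame rate
    const int factors[] = {2, 3, 4};   // the 2x, 3x and 4x settings

    for (int f : factors) {
        // A factor of N means each rendered frame is followed by N-1 AI-generated frames.
        const double outputFps = renderedFps * f;
        std::printf("%dx: %d generated frame(s) per rendered frame, ~%.0f FPS output (ideal)\n",
                    f, f - 1, outputFps);
    }
    return 0;
}
```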

Ubisoft Unveils Assassin's Creed Shadows Recommended PC Specs

Hi everyone, Assassin's Creed Shadows is launching March 20, inviting you to experience the intertwined stories of Naoe, an adept shinobi Assassin, and Yasuke, a powerful African samurai. Today, you can pre-order the game on console and PC, and read up on Shadows' upcoming expansion, Claws of Awaji, which brings 10 hours of additional content free with your pre-order.

For those of you playing on PC, we've got all of Assassin's Creed Shadows' recommended PC specs listed in this article. Assassin's Creed Shadows will support ray-traced global illumination and reflections, and will feature an in-game benchmark tool for performance analysis, ultra-wide resolutions, an uncapped framerate, and more. Check out the full specs chart below.

AMD Radeon RX 9070 XT Benchmarked in 3DMark Time Spy Extreme and Speed Way

Although it has only been a few days since the RDNA 4-based GPUs from Team Red hit the scene, it appears that we have already been granted a first look at the 3DMark performance of the highest-end Radeon RX 9070 XT GPU, and to be perfectly honest, the scores seemingly live up to our expectations - albeit with disappointing ray tracing performance. Unsurprisingly, the thread has since been erased over at Chiphell, but folks managed to take screenshots in the nick of time.

The specifics reveal that the Radeon RX 9070 XT will arrive with a hefty TBP of around 330 watts, as revealed by a FurMark screenshot, which is substantially higher than previous estimates. With 16 GB of GDDR6 memory, along with base and boost clocks of 2520 and 3060 MHz, the Radeon RX 9070 XT managed to rake in an impressive 14,591 points in Time Spy Extreme, and around 6,345 points in Speed Way. Needless to say, the drivers are likely far from mature, so it is not outlandish to expect a few more points to get squeezed out of the RDNA 4 GPU.

NVIDIA GeForce RTX 5080 Laptop GPU Challenges RTX 4090 Laptop in Leaked Benchmark

Once every two years or so, technology enthusiasts like ourselves have our sights pinned on what the GPU giants have in store for us. That moment is here, with both NVIDIA and AMD unveiling their Blackwell and RDNA 4 products respectively. NVIDIA has also announced its laptop offerings, with the RTX 5080 Laptop attempting to rule the mainstream high-performance segment. Now, barely a day or two after launch, we already have a rough idea of how mobile Blackwell is going to perform.

The leaked Geekbench OpenCL results, which come courtesy of an Alienware Area-51 laptop, reveal how well the RTX 5080 Laptop GPU performs in a 175-watt configuration. According to the numbers, the RTX 5080 Laptop managed to just exceed the 190,000-point barrier, putting it well ahead of its predecessor, which managed around 160,000. Interestingly, as the headline notes, the RTX 4090 Laptop, which scores around 180,000 points on average, was also left behind, although systems with beefier cooling setups can post higher numbers.

AMD Ryzen AI 7 350 Benchmark Tips Cut-Back Radeon 860M GPU

AMD's upcoming Ryzen AI Kraken Point APUs appear to be affordable APUs for next-generation thin-and-light laptops and potentially even some gaming handhelds. Murmurings of these new APUs have been going around for quite some time, but a PassMark benchmark was just posted, giving us a pretty comprehensive look at the hardware configuration for the upcoming Ryzen AI 7 350. While the CPU configuration in the PassMark result confirms the 4+4 configuration we reported on previously, it seems as though the iGPU portion of the new Ryzen AI 7 is getting something of a downgrade compared to previous generations.

While all previous mobile Ryzen 7 and Ryzen 9 APUs have featured Radeon -80M or -90M series iGPUs, the Ryzen AI 7 350 steps down to the AMD Radeon 860M. Although not much is known about the new iGPU, it uses the same nomenclature as the Radeon iGPUs found in previous Ryzen 5 APUs, suggesting it is the less performant of the new 800 series iGPUs. This would be the first time, at least since the introduction of the Ryzen branding, that a Ryzen 7 CPU will use a cut-down iGPU. This, along with the 4+4 (Zen 5 and Zen 5c) heterogeneous architecture, suggests that this Ryzen 7 APU will prioritize battery life and thermal performance, likely in response to Qualcomm's recent offerings. Comparing the single 860M result on PassMark to the 760M's average reveals similar performance, with the 860M trailing the 760M by about 9.1%. Take this with a grain of salt, though, since there is only one benchmark result on PassMark for the 860M.

UL Adds New DirectStorage Test to 3DMark

Today we're excited to launch the 3DMark DirectStorage feature test. This feature test is a free update for the 3DMark Storage Benchmark DLC. The 3DMark DirectStorage feature test helps gamers understand the potential performance benefits that Microsoft's DirectStorage technology could have for their PC's gaming performance.

DirectStorage is a Microsoft technology for Windows PCs with PCIe SSDs that reduces the overhead when loading game data. DirectStorage can be used to reduce game loading times when paired with other technologies such as GDeflate, where the GPU can be used to decompress certain game assets instead of the CPU. On systems running Windows 11, DirectStorage can bring further benefits with BypassIO, lowering a game's CPU overhead by reducing the CPU workload when transferring data.
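
For readers curious what this looks like from the application side, below is a heavily simplified C++ sketch of the flow described above: a single GDeflate-compressed read that the GPU decompresses straight into a D3D12 buffer. It assumes an already-initialized D3D12 device, destination buffer and fence, and the file name and sizes are hypothetical placeholders; it illustrates the general DirectStorage API flow, not code from the 3DMark feature test.

```cpp
// Minimal sketch of a DirectStorage read with GPU GDeflate decompression.
// Assumes the Windows/DirectStorage SDKs (dstorage.h, dstorage.lib, d3d12.lib)
// and that the caller already created a D3D12 device, a destination buffer
// resource and a fence. File name, offset and sizes are hypothetical.
#include <dstorage.h>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void LoadAssetWithDirectStorage(ID3D12Device* device,
                                ID3D12Resource* destBuffer,
                                ID3D12Fence* fence,
                                UINT32 compressedSize,
                                UINT32 uncompressedSize)
{
    ComPtr<IDStorageFactory> factory;
    DStorageGetFactory(IID_PPV_ARGS(&factory));

    ComPtr<IDStorageFile> file;
    factory->OpenFile(L"assets.bin", IID_PPV_ARGS(&file)); // hypothetical asset archive

    // A queue that reads from files and targets GPU-visible destinations.
    DSTORAGE_QUEUE_DESC queueDesc{};
    queueDesc.Capacity   = DSTORAGE_MAX_QUEUE_CAPACITY;
    queueDesc.Priority   = DSTORAGE_PRIORITY_NORMAL;
    queueDesc.SourceType = DSTORAGE_REQUEST_SOURCE_FILE;
    queueDesc.Device     = device;

    ComPtr<IDStorageQueue> queue;
    factory->CreateQueue(&queueDesc, IID_PPV_ARGS(&queue));

    // One request: GDeflate-compressed bytes on disk, decompressed by the GPU
    // on the way into a D3D12 buffer, so the CPU never touches the payload.
    DSTORAGE_REQUEST request{};
    request.Options.SourceType          = DSTORAGE_REQUEST_SOURCE_FILE;
    request.Options.DestinationType     = DSTORAGE_REQUEST_DESTINATION_BUFFER;
    request.Options.CompressionFormat   = DSTORAGE_COMPRESSION_FORMAT_GDEFLATE;
    request.Source.File.Source          = file.Get();
    request.Source.File.Offset          = 0;
    request.Source.File.Size            = compressedSize;
    request.Destination.Buffer.Resource = destBuffer;
    request.Destination.Buffer.Offset   = 0;
    request.Destination.Buffer.Size     = uncompressedSize;
    request.UncompressedSize            = uncompressedSize;

    queue->EnqueueRequest(&request);
    queue->EnqueueSignal(fence, 1); // fence reaches 1 once the batch has landed
    queue->Submit();
}
```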

SPEC Delivers Major SPECworkstation 4.0 Benchmark Update, Adds AI/ML Workloads

The Standard Performance Evaluation Corporation (SPEC), the trusted global leader in computing benchmarks, today announced the availability of the SPECworkstation 4.0 benchmark, a major update to SPEC's comprehensive tool designed to measure all key aspects of workstation performance. This significant upgrade from version 3.1 incorporates cutting-edge features to keep pace with the latest workstation hardware and the evolving demands of professional applications, including the increasing reliance on data analytics, AI and machine learning (ML).

The new SPECworkstation 4.0 benchmark provides a robust, real-world measure of CPU, graphics, accelerator, and disk performance, ensuring professionals have the data they need to make informed decisions about their hardware investments. The benchmark caters to the diverse needs of engineers, scientists, and developers who rely on workstation hardware for daily tasks. It includes real-world applications like Blender, Handbrake, LLVM and more, providing a comprehensive performance measure across seven different industry verticals, each focusing on specific use cases and subsystems critical to workstation users. SPECworkstation 4.0 benchmark marks a significant milestone for measuring workstation AI performance, providing an unbiased, real-world, application-driven tool for measuring how workstations handle AI/ML workloads.

Apple M4 Max CPU Faster Than Intel and AMD in 1T/nT Benchmarks

Early benchmark results have revealed Apple's newest M4 Max processor as a serious competitor to Arm-based CPUs from Qualcomm and even the best of x86 from Intel and AMD. Recent Geekbench 6 tests conducted on the latest 16-inch MacBook Pro showcase considerable improvements over both its predecessor and rival chips from major competitors. The M4 Max achieved an impressive single-core score of 4,060 points and a multicore score of 26,675 points, marking significant advancements in processing capability. These results represent approximately 30% and 27% improvements in single-core and multicore performance, respectively, compared to the previous M3 Max. They are also much higher than the scores of something like the Snapdragon X Elite, which tops out at twelve cores per SoC. When measured against x86 competitors, the M4 Max also demonstrates substantial advantages.

The chip outperforms Intel's Core Ultra 9 285K by 19% in single-core and 16% in multicore tests, and surpasses AMD's Ryzen 9 9950X by 18% in single-core and 25% in multicore performance. Notably, these achievements come with significantly lower power consumption than traditional x86 processors. The flagship system-on-chip features a sophisticated 16-core CPU configuration, combining twelve performance and four efficiency cores. Additionally, it integrates 40 GPU cores and supports up to 128 GB of unified memory, shared between CPU and GPU operations. The new MacBook Pro line also introduces Thunderbolt 5 compatibility, enabling data transfer speeds up to 120 Gb/s. While the M4 Max presents an impressive response to the current market, we have yet to see its capabilities in real-world benchmarks, as synthetic runs like these are only part of the performance story. We need to see productivity, content creation, and even gaming benchmarks before crowning it the king of performance. Below is a table comparing Geekbench v6 scores, courtesy of Tom's Hardware, alongside a Snapdragon X Elite (X1E-00-1DE) run in its top configuration.

Intel Core Ultra 9 285K Tops PassMark Single-Thread Benchmark

According to the latest PassMark benchmarks, the Intel Core Ultra 9 285K is the highest-performing single-thread CPU. The benchmark king title comes as PassMark's official account on X shared single-threaded performance numbers, with the upcoming Arrow Lake-S flagship SKU, the Intel Core Ultra 9 285K, scoring 5,268 points in single-core results. This is fantastic news for gamers, as games mostly care about single-core performance. This CPU, with 8 P-cores and 16 E-cores, boasts 5.7 GHz P-core and 4.6 GHz E-core boost frequencies. The single-core tests put the new SKU at an 11% lead over the previous-generation Intel Core i9-14900K processor.

However, the multithreaded results are less impressive. The PassMark multithreaded run puts the Intel Core Ultra 9 285K at 46,872 points, which is about 22% slower than the last-generation top SKU. While this may be a disappointment for some, it is partially expected, given that Arrow Lake drops Hyper-Threading from Intel's CPU families. From now on, every CPU will be a combination of single-threaded P-cores and E-cores, tuned for performance or efficiency depending on the use case. It is also possible that the CPU used in PassMark's testing was an engineering sample, so until the official launch we have no concrete information about its definitive performance.

Zhaoxin's KX-7000 8-Core Processor Tested in Detail, Bested by 7-Year-Old Core i3

PC Watch recently got hands-on with Shanghai Zhaoxin's latest desktop processor for some in-depth testing and published a less-than-optimistic review comparing it to both the previous-generation KX-U6780A and Intel's equally clocked budget quad-core offering from 2017, the 3.6 GHz Core i3-8100. Though Zhaoxin's latest could muscle its way through some multithreaded tests such as Cinebench R23 thanks to having twice the core count, its single-core performance proved to be nearly half that of the i3 in everything from synthetic tests to gaming.

PC Watch tested with the Dragon Quest X Benchmark, a DirectX 9.0c title, to spotlight single-core gaming performance in older games, as well as with Final Fantasy XIV running the latest Golden Legacy benchmark, released back in April of this year, to show off more modern multithreaded gaming. With AMD's RX 6400 handling graphics at 1080p, the KX-7000/8 scored around 60% of the i3-8100 in Dragon Quest X, and in Final Fantasy XIV it scored 90% of the i3. The result in Final Fantasy XIV was considered "somewhat comfortable" for gameplay but still less than optimal. As a comparison point for a modern budget gaming PC, the Ryzen 5 5600G was also included in testing; in Final Fantasy XIV it was 30% ahead of the KX-7000/8. PC Watch attempted to put the integrated ZX-C1190 to work in games but found that, despite supporting modern APIs and features, the performance was no match for the competition.
KX-7000 CPU-Z - Credit: PC Watch

AMD Ryzen AI Max 390 "Strix Halo" Surfaces in Geekbench AI Benchmark

In case you missed it, AMD's new madcap enthusiast silicon engineering effort, the "Strix Halo," is real, and comes with the Ryzen AI Max 300 series branding. These are chiplet-based mobile processors with one or two "Zen 5" CCDs—same ones found in "Granite Ridge" desktop processors—paired with a large SoC die that has an oversized iGPU. This arrangement lets AMD give the processor up to 16 full-sized "Zen 5" CPU cores, and an iGPU with as many as 40 RDNA 3.5 compute units (2,560 stream processors), and a 256-bit LPDDR5/x memory interface for UMA.

"Strix Halo" is designed for ultraportable gaming notebooks or mobile workstations where low PCB footprint is of the essence, and discrete GPU is not an option. For enthusiast gaming notebooks with discrete GPUs, AMD is designing the "Fire Range" processor, which is essentially a mobile BGA version of "Granite Ridge," and a successor to the Ryzen 7045 series "Dragon Range." The Ryzen AI Max series has three models based on CPU and iGPU CU counts—the Ryzen AI Max 395+ (16-core/32-thread with 40 CU), the Ryzen AI Max 390 (12-core/24-thread with 40 CU), and the Ryzen AI Max 385 (8-core/16-thread, 32 CU). An alleged Ryzen AI Max 390 engineering sample surfaced on the Geekbench AI benchmark online database.

Geekbench AI Hits 1.0 Release: CPUs, GPUs, and NPUs Finally Get AI Benchmarking Solution

Primate Labs, the developer behind the popular Geekbench benchmarking suite, has launched Geekbench AI—a comprehensive benchmark tool designed to measure the artificial intelligence capabilities of various devices. Geekbench AI, previously known as Geekbench ML during its preview phase, has now reached version 1.0. The benchmark is available on multiple operating systems, including Windows, Linux, macOS, Android, and iOS, making it accessible to many users and developers. One of Geekbench AI's key features is its multifaceted approach to scoring. The benchmark utilizes three distinct precision levels: single-precision, half-precision, and quantized data. This evaluation aims to provide a more accurate representation of AI performance across different hardware designs.

In addition to speed, Geekbench AI places a strong emphasis on accuracy. The benchmark assesses how closely each test's output matches the expected results, offering insights into the trade-offs between performance and precision. The release of Geekbench AI 1.0 brings support for new frameworks, including OpenVINO, ONNX, and Qualcomm QNN, expanding its compatibility across various platforms. Primate Labs has also implemented measures to ensure fair comparisons, such as enforcing minimum runtime durations for each workload. The company noted that Samsung and NVIDIA are already utilizing the software to measure their chip performance in-house, showing that adoption is already strong. While the benchmark provides valuable insights, real-world AI applications are still limited, and reliance on a few benchmarks may paint a partial picture. Nevertheless, Geekbench AI represents a significant step forward in standardizing AI performance measurement, potentially influencing future consumer choices in the AI-driven tech market. Results from the benchmark runs can be seen here.
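
To illustrate why those accuracy scores matter, the sketch below quantizes a handful of float32 values to int8 and measures the reconstruction error. This is only a toy example of the precision/accuracy trade-off that quantized workloads face; it is not how Geekbench AI computes its scores, and the values are made up.

```cpp
// Toy illustration of the precision/accuracy trade-off behind quantized AI
// workloads: symmetric int8 quantization of float32 values, followed by a
// reconstruction-error measurement. Not Geekbench AI's scoring method.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    // Hypothetical float32 weights.
    std::vector<float> weights = {0.12f, -0.98f, 0.47f, -0.03f, 0.85f, -0.61f, 0.29f, -0.74f};

    // Symmetric quantization: map [-max_abs, +max_abs] onto [-127, 127].
    float maxAbs = 0.0f;
    for (float w : weights) maxAbs = std::max(maxAbs, std::fabs(w));
    const float scale = maxAbs / 127.0f;

    double sumSqErr = 0.0;
    for (float w : weights) {
        auto q = static_cast<int8_t>(std::lround(w / scale)); // quantize
        float restored = q * scale;                           // dequantize
        sumSqErr += (w - restored) * (w - restored);
    }
    double rmse = std::sqrt(sumSqErr / weights.size());
    std::printf("int8 quantization RMSE vs. float32: %.6f (scale %.6f)\n", rmse, scale);
    return 0;
}
```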

"Black Myth: Wukong" Game Gets Benchmarking Tool Companion Designed to Evaluate PC Performance

Game Science, the developer behind the highly anticipated action RPG "Black Myth: Wukong," has released a free benchmark tool on Steam for its upcoming game. This standalone application, separate from the main game, allows PC users to evaluate their hardware performance and system compatibility in preparation for the game's launch. The "Black Myth: Wukong Benchmark Tool" offers a unique glimpse into the game's visuals by rendering a real-time in-game sequence. While not playable, it provides valuable insights into how well a user's system will handle the game's demanding graphics and performance requirements. One of the tool's standout features is its customization options. Users can tweak various graphics settings to preview the game's visuals and performance under different configurations. This flexibility allows gamers to find the optimal balance between visual fidelity and smooth gameplay for their specific hardware setup.

However, Game Science has cautioned that due to the complexity and variability of gaming scenarios, the benchmark results may not fully represent the final gaming experience. This caveat shows the tool's role as a guide rather than a definitive measure of performance. The benchmark tool's system requirements offer a clear picture of the hardware needed to run "Black Myth: Wukong." At a minimum, users will need a Windows 10 system with an Intel Core i5-8400 or AMD Ryzen 5 1600 processor, 16 GB of RAM, and either an NVIDIA GeForce GTX 1060 6 GB or AMD Radeon RX 580 8 GB graphics card. For an optimal experience, the recommended specifications include an Intel Core i7-9700 or AMD Ryzen 5 5500 processor and an NVIDIA GeForce RTX 2060, AMD Radeon RX 5700 XT, or Intel Arc A750 graphics card. Interestingly, the benchmark tool supports DLSS, FSR, and XeSS technologies, indicating that the final game will likely include these performance-enhancing features. The developers also strongly recommend using an SSD for storage.

FinalWire Releases AIDA64 v7.35 with New CheckMate 64-bit Benchmark

FinalWire Ltd. today announced the immediate availability of AIDA64 Extreme 7.35 software, a streamlined diagnostic and benchmarking tool for home users; the immediate availability of AIDA64 Engineer 7.35 software, a professional diagnostic and benchmarking solution for corporate IT technicians and engineers; the immediate availability of AIDA64 Business 7.35 software, an essential network management solution for small and medium scale enterprises; and the immediate availability of AIDA64 Network Audit 7.35 software, a dedicated network audit toolset to collect and manage corporate network inventories. The new AIDA64 update introduces a new 64-bit CheckMate benchmark and AVX-512 accelerated benchmarks for AMD Ryzen AI APUs, and adds support for the latest graphics and GPGPU computing technologies from AMD, Intel and NVIDIA.

DOWNLOAD: FinalWire AIDA64 v7.35 Extreme

Qualcomm Snapdragon X "Copilot+" AI PCs Only Accounted for 0.3% of PassMark Benchmark Runs

The much-anticipated revolution in AI-powered personal computing seems to be off to a slower start than expected. Qualcomm's Snapdragon X CPUs, touted as game-changers in the AI PC market, have struggled to gain significant traction since their launch. Recent data from PassMark, the popular benchmarking software, reveals that Snapdragon X CPUs account for a mere 0.3% of submissions in the past 30 days. This stands in stark contrast to the 99.7% share held by traditional x86 processors from Intel and AMD, and raises questions about the immediate future of ARM-based PCs. The underwhelming adoption comes despite bold predictions from industry leaders. Qualcomm CEO Cristiano Amon had projected that ARM-based CPUs could capture up to 50% of the Windows PC market by 2029. Similarly, ARM's CEO anticipated a shift away from x86's long-standing dominance.

However, it turns out that these PCs are primarily bought for their battery life, not their AI capabilities. Of course, it's premature to declare Arm's Windows venture a failure. The AI PC market is still in its infancy, and upcoming mid-tier laptops featuring Snapdragon X Elite CPUs could boost adoption rates. It will also take considerable time before shipment volumes of these PCs approach the millions of units moved by x86 makers. The true test will come with the launch of AMD's Ryzen AI 300 and Intel's Lunar Lake CPUs, providing a clearer picture of how ARM-based options compare in AI performance. As the AI PC landscape evolves, Qualcomm faces mounting pressure. NVIDIA's anticipated entry into the market and significant performance improvements in next-generation x86 processors from Intel and AMD pose a massive challenge. The coming months will be crucial in determining whether Snapdragon X CPUs can live up to their initial hype and carve out a significant place in the AI PC ecosystem.

Ryzen AI 300 Series: New AMD APUs Appear in CrossMark Benchmark Database

Pre-launch leaks of AMD's upcoming Ryzen AI 300 APUs continue, the latest coming from the BAPCo CrossMark benchmark database. Two models have been spotted: the officially announced Ryzen AI 9 HX 370 and the recently leaked Ryzen AI 7 PRO 360. The Ryzen AI 9 HX 370, part of the "Strix Point" family, boasts 12 cores and 24 threads. Its hybrid architecture combines four Zen 5 cores with eight Zen 5c cores. The chip reaches boost clocks up to 5.1 GHz, features 36 MB of cache (24 MB L3 + 12 MB L2), and includes a Radeon 890M iGPU with 16 compute units (1,024 cores). The Ryzen AI 7 PRO 360, previously leaked as a 12-core part, has now been confirmed with 8 cores and 16 threads. It utilizes a 3+5 configuration of Zen 5 and Zen 5c cores, respectively. The APU includes 8 MB each of L2 and L3 cache, with a base clock of 2.0 GHz. Its integrated Radeon 870M GPU is expected to feature the RDNA 3.5 architecture with fewer cores than its higher-end counterparts, possibly 8 compute units.

According to the leaked benchmarks, the Ryzen AI 9 HX 370 was tested in an HP laptop, while the Ryzen AI 7 PRO 360 appeared in a Lenovo model equipped with LPDDR5-7500 memory. Initial scores appear unremarkable compared to top Intel Core Ultra 9 185H and AMD Ryzen 7040 APUs; however, the tested APUs may be early samples, and their performance could differ from final retail versions. Furthermore, while the TDP range is known to be between 15 W and 54 W, the specific power configurations used in these benchmarks remain unclear. The first Ryzen AI 300 laptops are slated for release on July 28th, with Ryzen AI 300 PRO models expected in October.

Basemark Releases Breaking Limit Cross-Platform Ray Tracing Benchmark

Basemark announced today the release of a groundbreaking cross-platform ray tracing benchmark, GPUScore: Breaking Limit. This new benchmark is designed to evaluate the performance of the full range of ray tracing capable devices, including smartphones, tablets, laptops and high-end desktops with discrete GPUs. With support for multiple operating systems and graphics APIs, Breaking Limit provides a comprehensive performance evaluation across various platforms and devices.

As ray tracing technology becomes increasingly prevalent in consumer electronics, from high-end desktops to portable devices like laptops and smartphones, there is a critical need for a benchmark that can accurately assess and compare performance across different devices and platforms. Breaking Limit addresses this gap, providing valuable insights into how various devices handle hardware-accelerated graphics rendering. The benchmark is an essential tool for developers, manufacturers, and consumers to measure and compare the performance of real-time ray tracing rendering across different hardware and software environments reliably.

Intel Releases Arc GPU Graphics Drivers 101.5592 WHQL

Intel today released the latest version of the Arc GPU Graphics drivers. Version 101.5592 WHQL is a minor update over the 101.5590 WHQL drivers that the company released on June 15. It corrects a bug that caused the PugetBench Extended Preset benchmark to fail to complete on Arc A-series discrete GPUs in certain Adobe Premiere Pro processing tests. The company also took the opportunity to identify a handful more issues with certain Vulkan API games, such as No Man's Sky, Enshrouded, and Doom Eternal. A bug that causes Topaz Video AI to throw up errors when exporting videos after using some models for video enhancements has also been identified.

DOWNLOAD: Intel Arc GPU Graphics Drivers 101.5592 WHQL

Quantinuum Launches Industry-First, Trapped-Ion 56-Qubit Quantum Computer, Breaking Key Benchmark Record

Quantinuum, the world's largest integrated quantum computing company, today unveiled the industry's first quantum computer with 56 trapped-ion qubits. H2-1 has further enhanced its market-leading fidelity and is now impossible for a classical computer to fully simulate.

A joint team from Quantinuum and JPMorgan Chase ran a Random Circuit Sampling (RCS) algorithm, achieving a remarkable 100x improvement over prior industry results from Google in 2019 and setting a new world record for the cross entropy benchmark. H2-1's combination of scale and hardware fidelity makes it difficult for today's most powerful supercomputers and other quantum computing architectures to match this result.
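
For context, random circuit sampling runs are typically scored with the linear cross-entropy benchmarking (XEB) fidelity; the standard definition from the literature (not a formula disclosed specifically for this run) is

F_{\mathrm{XEB}} = 2^{n}\,\langle P(x_i)\rangle_{i} - 1

where n is the number of qubits and P(x_i) is the ideal probability of the measured bitstring x_i; a fidelity near 1 means the device samples close to the ideal distribution, while 0 corresponds to uncorrelated noise.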

Sabrent's New Apex X16 Rocket 5 Destroyer in Testing, Benchmark Numbers Included

You may have heard about Apex's new storage solution called the X21; we wrote about it a while ago. If not, let us remind you that it is a huge single expansion card that can hold up to 21 M.2 solid-state drives (SSDs), which is incredible. Apex has also made an X16 version of this card, which is currently in the late testing stage. The back side of the Apex X16 has 8 M.2 slots, giving it a total of 16 M.2 slots. Although these two cards are impressive with their unmatched speed and capacity, Apex is not stopping there.

Today, we are showing some early pictures and benchmark numbers of the Sabrent Apex X16 Rocket 5 Destroyer. As you can see, this new 5th generation (Gen 5) card from Apex holds 16 of Sabrent's Rocket 5 4 TB SSDs. This gives it a maximum capacity of 64 TB using the fastest Gen 5 SSDs on the market. The expansion card uses a single PCIe Gen 5 x16 slot, which leaves plenty of space for other cards or even another X16 card. Multiple X16 cards can be combined, or it can be paired with other Apex series cards like the Sabrent Apex X21 Destroyer, for a total of 168 TB at incredible 4th generation (Gen 4) speeds. The Sabrent Apex X16 Rocket 5 AIC can reach amazing speeds of 56 GiB/s sequential reads and 54 GiB/s sequential writes, and for 4K random I/O it can reach 20 million read IOPS and 19 million write IOPS. The Sabrent Apex X16 Rocket 5 will be available to order soon.

Chaos Releases V-Ray 6.1 Benchmark

Chaos releases V-Ray 6 Benchmark, updating the free standalone application to help users quickly evaluate V-Ray rendering speeds and compare the capabilities of leading CPUs and GPUs. V-Ray 6 Benchmark adds new looping capabilities and GPU mode comparison, giving users even more control over their findings.

Since launching in 2017, V-Ray Benchmark has become a standard for new hardware testing, helping countless users and reviewers assess the rendering performance of laptops, workstations, graphics cards and more. By benchmarking against one of the most popular renderers in the world, professionals can quickly gauge how hardware from NVIDIA, AMD, Intel and Apple handles common 3D assets like cities, characters and hard-surface models.

The free V-Ray 6 Benchmark app is available now.

Apple M4 Chip Benchmarked: 22% Faster Single-Core and 25% Faster Multi-Core Performance

Yesterday, Apple launched its next-generation M4 chip based on its custom Apple Silicon design. The processor is a fourth-generation design that brings AI capabilities and improved CPU performance. First debuting in an iPad Pro, the CPU has been benchmarked in Geekbench v6, and the results seem very promising. The latest M4 chip managed to score 3,767 points in single-core tests and 14,677 points in multi-core tests. Compared to the M3 chip, which scores 3,087 points in single-core and 11,702 in multi-core tests, the M4 chip is about 22% faster in single-core and 25% faster in multi-core synthetic benchmarks.
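
The quoted uplifts follow directly from those scores; as a quick sanity check, the sketch below recomputes the percentages from the Geekbench numbers given above.

```cpp
// Recomputes the M4-vs-M3 uplifts from the Geekbench 6 scores quoted above
// (M4: 3,767 single / 14,677 multi; M3: 3,087 single / 11,702 multi).
#include <cstdio>

int main() {
    const double m4Single = 3767.0, m3Single = 3087.0;
    const double m4Multi  = 14677.0, m3Multi  = 11702.0;

    const double singleGain = (m4Single / m3Single - 1.0) * 100.0;
    const double multiGain  = (m4Multi  / m3Multi  - 1.0) * 100.0;

    std::printf("Single-core uplift: %.1f%%\n", singleGain); // ~22%
    std::printf("Multi-core uplift:  %.1f%%\n", multiGain);  // ~25%
    return 0;
}
```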

Of course, these results are not real-world use cases, but they give us a hint of what the Apple Silicon design team has been working on. For real-world results, we have to wait a little longer to see reviews and results from devices such as MacBook Pro and MacBook Air, which should have better cooling and possibly better clocks for the chip.

UL Announces the Procyon AI Image Generation Benchmark Based on Stable Diffusion

We're excited to announce we're expanding our AI Inference benchmark offerings with the UL Procyon AI Image Generation Benchmark, coming Monday, 25th March. AI has the potential to be one of the most significant new technologies hitting the mainstream this decade, and many industry leaders are competing to deliver the best AI Inference performance through their hardware. Last year, we launched the first of our Procyon AI Inference Benchmarks for Windows, which measured AI Inference performance with a workload using Computer Vision.

The upcoming UL Procyon AI Image Generation Benchmark provides a consistent, accurate and understandable workload for measuring the AI performance of high-end hardware, built with input from members of the industry to ensure fair and comparable results across all supported hardware.

Zhaoxin KX-7000 8-Core CPU Gets Geekbenched

Zhaoxin finally released its oft-delayed KX-7000 CPU series last December—the Chinese manufacturer claimed that its latest "Century Avenue Core" uArch consumer/desktop-oriented range was designed to "deliver double the performance of previous generations." Freshly discovered Geekbench 6.2.2 results indicate that Zhaoxin has succeeded on that front—Wccftech has pored over these figures, generated by an: "entry-level Zhaoxin KX-7000 CPU which has 8 cores, 8 threads, 4 MB of L2, and 32 MB of L3 cache. This chip was running at a base clock of 3.0 GHz and a boost clock of 3.3 GHz which is below its standard 3.6 GHz boost profile."

The new candidate was compared to Zhaoxin's previous-gen KX-U6780A and KX-6000G models. Intel's Core i3-10100F processor was thrown in as a familiar Western point of reference. The KX-7000 scored: "823 points in single-core, and 3813 points in multi-core tests. For comparison, Intel's Comet Lake CPU with 4 cores and 8 threads plus a boost of up to 4.3 GHz offers a much higher score. It's around 75% faster in single and 17% faster in multi-core tests within the same benchmark." The higher clock speeds, doubled core counts and TDPs do deliver "twice the performance" when compared to direct forebears—mission accomplished there. It is clear that Zhaoxin's latest CPU architecture cannot keep up with a generations-old Team Blue design. Loongson's 3A6000 processor is a very promising prospect—reports suggest that this chip is somewhat comparable to mainstream AMD Zen 4 and Intel Raptor Lake products.

Avatar: Frontiers of Pandora's Latest Patch Claims Fixing of FSR 3 Artefacts & FPS Tracking

A new patch for Avatar: Frontiers of Pandora deployed on March 1, bringing more than 150 fixes and adjustments to the game. Title Update 3 includes technical, UI, balancing, main quest, and side quest improvements, plus additional bug fixes. To provide players with an improved experience, the development team, led by Massive Entertainment, listened to feedback from the community while working on Title Update 3.

An additional patch, Title Update 3.1, was deployed on March 7, adding additional fixes to the game. Check out the full list of improvements included in Title Update 3 & 3.1 here, and read on for the most notable improvements now available in Avatar: Frontiers of Pandora.

Update Mar 14th: TPU has received alerts regarding player feedback that Massive Entertainment's "Title Update 3" has reportedly broken the game's implementation of FSR 3 in Avatar: Frontiers of Pandora. We will keep an eye on official Ubisoft channels—so far they have not addressed these FSR-related problems.