News Posts matching #Benchmark


AMD Instinct GPUs are Ready to Take on Today's Most Demanding AI Models

Customers evaluating AI infrastructure today rely on a combination of industry-standard benchmarks and real-world model performance metrics—such as those from Llama 3.1 405B, DeepSeek-R1, and other leading open-source models—to guide their GPU purchase decisions. At AMD, we believe that delivering value across both dimensions is essential to driving broader AI adoption and real-world deployment at scale. That's why we take a holistic approach—optimizing performance for rigorous industry benchmarks like MLPerf while also enabling Day 0 support and rapid tuning for the models most widely used in production by our customers.

This strategy helps ensure AMD Instinct GPUs deliver not only strong, standardized performance, but also high-throughput, scalable AI inferencing across the latest generative and language models used by customers. We will explore how AMD's continued investment in benchmarking, open model enablement, software and ecosystem tools helps unlock greater value for customers—from MLPerf Inference 5.0 results to Llama 3.1 405B and DeepSeek-R1 performance, ROCm software advances, and beyond.

NVIDIA GeForce RTX 5080 Mobile GPU Benched, Approximately 10% Slower Than RTX 5090 Mobile

NVIDIA and its laptop manufacturing partners managed to squeeze out higher-end models at the start of the week (March 31), qualifying just in time for a Q1 2025 launch. As predicted by PC gaming hardware watchdogs, conditions on day one—for the general public—were far from perfect. Media and influencer outlets received pre-launch evaluation units, but Monday's embargo lift did not open the floodgates to a massive number of published/uploaded reviews. Independent benchmarking of Team Green's flagship—the GeForce RTX 5090 Mobile—produced somewhat underwhelming results. To summarize, several outlets—including Notebookcheck—observed NVIDIA's topmost laptop-oriented GPU trailing far behind its desktop equivalent in lab tests. Notebookcheck commented on these findings: "laptop gamers will want to keep their expectations in check as the mobile GeForce RTX 5090 can be 50 percent slower than the desktop counterpart as shown by our benchmarks. The enormous gap between the mobile RTX 5090 and desktop RTX 5090 and the somewhat disappointing leap over the outgoing mobile RTX 4080 can be mostly attributed to TGP."

The German online publication was more impressed with NVIDIA's sub-flagship model. Two Ryzen 9 9955HX-powered Schenker XMG Neo 16 test units—sporting almost identical specifications—were pitted against each other, and a resultant mini-review of benchmarked figures was made available earlier today. Notebookcheck's Allen Ngo provided some context: "3DMark benchmarks...show that the (Schenker Neo's) GeForce RTX 5080 Mobile unit is roughly 10 to 15 percent slower than its pricier sibling. This deficit translates fairly well when running actual games like Baldur's Gate 3, Final Fantasy XV, Alan Wake 2, or Assassin's Creed Shadows. As usual, the deficit is widest when running at 4K resolutions on demanding games and smallest when running at lower resolutions where graphics become less GPU bound. A notable observation is that the performance gap between the mobile RTX 5080 and mobile RTX 5090 would remain the same, whether or not DLSS is enabled. When running Assassin's Creed Shadows with DLSS on, for example, the mobile RTX 5090 would maintain its 15 percent lead over the mobile RTX 5080. The relatively small performance drop between the two enthusiast GPUs means it may be worth configuring laptops with the RTX 5080 instead of the RTX 5090 to save on hundreds of dollars or for better performance-per-dollar." As demonstrated by Bestware.com's system configurator, the XMG NEO 16 (A25) SKU with a GeForce RTX 5090 Mobile GPU demands a €855 (~$928 USD) upcharge over an RTX 5080-based build.

AMD Ryzen 5 9600 Nearly Matches 9600X in Early Benchmarks

The AMD Ryzen 5 9600 launched recently as a slightly more affordable variant of the popular Ryzen 5 9600X. Despite launching over a month ago, the 9600 still appears rather difficult to track down in retail stores. However, a recent PassMark benchmark has provided some insight into the performance of the non-X variant of AMD's six-core Zen 5 budget CPU. Unsurprisingly, the Ryzen 5 9600X and the Ryzen 5 9600 are neck-and-neck, with the 9600X scraping past its non-X counterpart by a mere 2.2% in the CPU benchmark.

According to the PassMark result, the Ryzen 5 9600 scored 29,369 points, compared to the Ryzen 5 9600X's 30,016, while single-core scores were 4581 for the 9600X and 4433 points for the 9600, representing a 3.2% disparity between the two CPUs. The result is not surprising, since the only real difference between the 9600 and the 9600X is 200 MHz of boost clock. All other specifications, including TDP, core count, cache amount, and base clock speed, are identical. Both CPUs are also unlocked for overclocking, and both feature AMD Precision Boost 2. While the Ryzen 5 9600 isn't widely available just yet, it will seemingly be a good option for those who want to stretch their budget to the absolute maximum, since recent reports indicate that it will be around $20 cheaper than the Ryzen 5 9600X, coming in at around the $250-260 mark.
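
As a quick sanity check, both quoted gaps fall straight out of the raw PassMark scores; the short Python sketch below reproduces them, expressing each gap as a share of the faster chip's score (the inputs are simply the values cited above, used for illustration).

```python
# Reproduce the percentage gaps quoted above from the raw PassMark scores.
# Inputs are the scores cited in this post, used purely for illustration.
scores = {
    "Ryzen 5 9600X": {"multi": 30016, "single": 4581},
    "Ryzen 5 9600":  {"multi": 29369, "single": 4433},
}

def gap_percent(faster: float, slower: float) -> float:
    """Gap between two scores, expressed as a share of the faster score."""
    return (faster - slower) / faster * 100

for metric in ("multi", "single"):
    faster = scores["Ryzen 5 9600X"][metric]
    slower = scores["Ryzen 5 9600"][metric]
    print(f"{metric}-thread gap: {gap_percent(faster, slower):.1f}%")
# Output: multi-thread gap: 2.2%, single-thread gap: 3.2%
```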

GALAX RTX 5090D HOF XOC LE Card Overclocked to 3.27 GHz, Record Breaking Prototype Enabled w/ Second 12V-2×6 Connector

As reported last month, GALAX had distributed prototypes of its upcoming flagship "Hall of Fame" (HOF) card—based on NVIDIA's Chinese market-exclusive GeForce RTX 5090D GPU—to prominent figures within the PC hardware overclocking community. Earlier examples sported single 12V-2×6 power connectors, although GALAX's exposed white PCB design showed extra space for an additional unit. Evaluators conducted experiments involving liquid nitrogen-based cooling methods. The most vocal of online critics questioned the overclocking capability of initial GeForce RTX 5090D HOF samples, due to limitations presented by a lone avenue of power delivery. A definitive answer has arrived in the form of the manufacturer's elite team-devised GeForce RTX 5090D HOF Extreme Overclock (XOC) Lab Limited Edition candidate, a newer variant that makes use of dual 12V-2×6 power connectors. Several overclocking experts have entered a GALAX-hosted competition—Micka:)Shu, a Chinese participant, posted photos of their test rig setup (see below).

Micka's early access sample managed to achieve the top GPU placement on UL Benchmarks' 3DMark Speed Way Hall of Fame, with a final score of 17,169 points. A screenshotted GPU-Z session shows the card's core frequency reaching 3277 MHz. Around late January, ASUS China's general manager (Tony Yu) documented his overclocking of a ROG Astral RTX 5090 D GAMING OC specimen up to 3.4 GHz under liquid nitrogen-cooled conditions. GALAX has similarly outfitted its flagship model with selectively binned components and an "over-engineered" design. The company's "bog-standard" HOF model is no slouch, despite the limitation imposed by a single power connector. The GALAX OC Facebook account sent out some appreciation to another noted competitor (and collaborator): "thanks to Overclocked Gaming Systems—OGS Rauf for help with the overclock of GeForce RTX 5090D HOF, and all of (our) GALAX products." The OGS member set world records with said "normal" HOF card—achieving scores of 59,072 points in 3DMark's Fire Strike Extreme test, and 25,040 points in Unigine Superposition (8K-optimized).

AMD Ryzen 9 9950X3D Leaked PassMark Score Shows 14% Single Thread Improvement Over Predecessor

Last Friday, AMD confirmed finalized price points for its upcoming Ryzen 9 9950X3D ($699) and 9900X3D ($599) gaming processors—both launching on March 12. Media outlets are very likely finalizing their evaluations of review silicon; official embargoes are due to lift tomorrow (March 11). By Team Red decree, the drip feed of pre-launch information was restricted to teasers, a loose March launch window, and an unveiling of basic specifications (at CES 2025). A trickle of mid-January to early March leaks has painted an incomplete picture of performance expectations for the 3D V-Cache-equipped 16 and 12-core parts. A fresh NDA-busting disclosure has arrived online, courtesy of an alleged Ryzen 9 9950X3D sample's set of benchmark scores.

A pre-release candidate posted single and multi-thread ratings of 4739 and 69,701 (respectively) upon completion of PassMark tests. Based on this information, a comparison chart was assembled—pitting the Ryzen 9 9950X3D against its direct predecessor (7950X3D), a Zen 5 relative (9950X), and competition from Intel (Core Ultra 9 285K). AMD's brand-new 16-core flagship managed to outpace the previous-gen Ryzen 9 7950X3D by ~14% in the single-thread stakes, and by roughly 11% in multithreaded scenarios. Test system build details and settings were not mentioned with this leak—we expect to absorb a more complete picture tomorrow, upon publication of widespread reviews. The sampled Ryzen 9 9950X3D CPU surpassed its 9950X sibling by ~5% in the multi-thread result, while both processors are just about equal in terms of single-core performance. The Intel Core Ultra 9 285K posted the highest single-core result within the comparison—5078 points—exceeding the 9950X3D's tally by about 7%. The AMD flagship pulls ahead by ~3% in terms of recorded multi-thread performance. Keep an eye on TechPowerUp's review section, where W1zzard will be delivering his verdict(s) imminently.

Apple M3 Ultra SoC: Disappointing CPU Benchmark Result Surfaces

Just recently, Apple somewhat stunned the industry with the introduction of its refreshed Mac Studio with the M4 Max and M3 Ultra SoCs. For whatever reason, the Cupertino giant decided to spec its most expensive Mac desktop with an Ultra SoC that is based on an older generation, M3, instead of the newer M4 family. However, the M3 Max, which the M3 Ultra is based on, was no slouch, indicating that the M3 Ultra would likely boast impressive performance. That said, if a collection of recent benchmark runs is anything to go by, it appears that the M3 Ultra is a tad too closely matched with the M4 Max in CPU performance, which makes the $2,000 premium between the two SoCs rather difficult to digest. Needless to say, a single benchmark is hardly representative of real-world performance, so take this information with a grain of salt.

According to the recently spotted Geekbench result, the M3 Ultra managed a single-core score of 3,221, which is roughly 18% slower than the M4 Max. In multicore performance, one might expect the 32-core M3 Ultra to sweep the floor with the 16-core M4 Max, but that is not quite the case. With a score of 27,749, the M3 Ultra leads the M4 Max by a mere 8%. Of course, these are early runs, and future scores will likely be higher. However, it is clear as day that the M3 Ultra and the M4 Max, at least in terms of CPU performance, will be close together in multithreaded workloads, with the M4 Max continuing to be substantially faster than the far more expensive M3 Ultra in single-threaded performance. It does appear that the primary selling point of the M3 Ultra-equipped Mac Studio will be the massive 80-core GPU and up to 512 GB of unified memory shared by the CPU and the GPU, which should come in handy for running massive LLMs locally and other niche workloads.

Apple's A18 4-core iGPU Benched Against Older A16 Bionic, 3DMark Results Reveal 10% Performance Deficit

Apple's new budget-friendly iPhone 16e model was introduced earlier this month; potential buyers were eyeing a device (starting at $599) that houses a selectively "binned" A18 mobile chipset. The more expensive iPhone 16 and iPhone 16 Plus models were launched last September with A18 chips on board, featuring six CPU cores and five GPU cores. Apple's brand-new 16e smartphone seems to utilize an A18 sub-variant—tech boffins have highlighted this package's reduced GPU core count of four. The so-called "binned A18" reportedly posted inferior performance figures—15% slower—when lined up against its standard 5-core sibling (in Geekbench 6 Metal tests). The iPhone 16e was released at retail today (February 28), with review embargoes lifted earlier in the week.

A popular portable tech YouTuber—Dave2D (aka Dave Lee)—decided to pit his iPhone 16e sample unit against older technology contained within the iPhone 15 (2023). The binned A18's 4-core iGPU competed with the A16 Bionic's 5-core integrated graphics solution in a 3DMark Wild Life Extreme Unlimited head-to-head. Respective tallies—of 2882 and 3170 points—were recorded for posterity's sake. The more mature chipset (from 2022) managed to surpass its younger sibling by ~10%, according to the scores presented on Dave2D's comparison chart. The video reviewer reckoned that the iPhone 16e's SoC offers "killer performance," despite reservations expressed about the device not offering great value for money. Other outlets have questioned the prowess of Apple's latest step-down model. Referencing current-gen 3DMark benchmark results, Wccftech observed: "for those wanting to know the difference between the binned A18 and non-binned variant; the SoC with a 5-core GPU running in the iPhone 16 finishes the benchmark run with an impressive 4007 points, making it a massive 28.04 percent variation between the two (pieces of) silicon. It is an eye-opener to witness such a mammoth performance drop, which also explains why Apple resorted to chip-binning on the iPhone 16e as it would help bring the price down substantially."

AMD Ryzen 9 9950X3D Leaked 3DMark & Cinebench Results Indicate 9950X-esque Performance

The AMD Ryzen 9 9950X3D processor will head to retail next month—a March 12 launch day is rumored—but a handful of folks seem to have early samples in their possession. Reviewers and online influencers have been tasked with evaluating pre-launch silicon, albeit under strict conditions; i.e. no leaking. Inevitably, NDA-shredding material has seeped out—yesterday, we reported on an alleged sample's ASUS Silicon Prediction rating. Following that, a Bulgarian system integrator/hardware retailer decided to upload Cinebench R23 and 3DMark Time Spy results to Facebook. Evidence of this latest leak was scrubbed at the source, but VideoCardz preserved crucial details.

The publication noticed distinguishable QR and serial codes in PCbuild.bg's social media post, so tracing activities could sniff out the point of origin. As expected, the leaked benchmark data points were compared to Ryzen 9 9950X and 7950X3D scores. The Ryzen 9 9950X3D sample recorded a score of 17,324 points in 3DMark Time Spy, as well as 2279 points (single-core) and 42,423 points (multi-core) in Cinebench R23. Notebookcheck observed that the pre-launch candidate came "out ahead of the Ryzen 9 7950X3D in both counts, even if the gaming win is less than significant. Comparing the images of the benchmark results to our in-house testing and benchmark database shows the 9950X3D beating the 7950X3D by nearly 17% in Cinebench multicore." When compared to its non-3D V-Cache equivalent, the Ryzen 9 9950X3D holds a slight performance advantage. A blurry shot of PCbuild.bg's HWiNFO session shows the leaked processor's core clock speeds going up to 5.7 GHz (turbo) on the non-X3D CCD, while the X3D-equipped portion seems capable of going up to 5.54 GHz.

Dune: Awakening Release Date and Price Revealed, Character Creation Now Live!

Today, Funcom finally lifted the veil on Dune: Awakening's release date. The open world, multiplayer survival game set on Arrakis will come to Steam on May 20! Players can begin their journey today by diving into the brand-new Character Creation & Benchmark Mode, available now through Steam. Created characters can then be imported into Dune: Awakening at launch.

Inspired by Frank Herbert's legendary sci-fi novel and Legendary Entertainment's award-winning films, Dune: Awakening is crafted by Funcom's veteran developers to deliver an experience that resonates with both Dune enthusiasts and survival game fans alike. Get ready to step into the biggest Dune game ever made with today's trailer.

AMD & Nexa AI Reveal NexaQuant's Improvement of DeepSeek R1 Distill 4-bit Capabilities

Nexa AI today announced NexaQuants of two DeepSeek R1 Distills: the DeepSeek R1 Distill Qwen 1.5B and DeepSeek R1 Distill Llama 8B. Popular quantization methods like the llama.cpp-based Q4_K_M allow large language models to significantly reduce their memory footprint, typically incurring only a low perplexity loss for dense models as a tradeoff. However, even a low perplexity loss can result in a reasoning capability hit for (dense or MoE) models that use Chain of Thought traces. Nexa AI has stated that NexaQuants are able to recover this reasoning capability loss (compared to the full 16-bit precision) while keeping the 4-bit quantization and retaining its performance advantage. Benchmarks provided by Nexa AI can be seen below.

We can see that the Q4_K_M quantized DeepSeek R1 distills score slightly lower (except for the AIME24 bench on the Llama 3 8B distill, which scores significantly lower) in LLM benchmarks like GPQA and AIME24 compared to their full 16-bit counterparts. Moving to a Q6 or Q8 quantization would be one way to fix this problem, but it would result in the model becoming slightly slower to run and requiring more memory. Nexa AI has stated that NexaQuants use a proprietary quantization method to recover the loss while keeping the quantization at 4 bits. This means users can theoretically get the best of both worlds: accuracy and speed.
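
For context on what running one of these 4-bit quantized distills locally looks like, here is a minimal sketch using the llama-cpp-python bindings, which load GGUF files produced by llama.cpp-style quantization. The model file name is a placeholder for illustration, not an official Nexa AI artifact.

```python
# Minimal sketch: running a 4-bit (Q4_K_M-style) GGUF build of a DeepSeek R1
# distill locally via llama-cpp-python (pip install llama-cpp-python).
# The model path below is a placeholder, not an official Nexa AI file name.
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-R1-Distill-Llama-8B-Q4_K_M.gguf",  # hypothetical local file
    n_ctx=4096,        # context window; reasoning traces can run long
    n_gpu_layers=-1,   # offload all layers to the GPU if VRAM allows
)

# Chain-of-Thought style prompt; 4-bit quants typically trade a little accuracy
# here, which is the gap NexaQuant claims to recover.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Solve step by step: 17 * 23 = ?"}],
    max_tokens=512,
    temperature=0.6,
)
print(out["choices"][0]["message"]["content"])
```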

NVIDIA GeForce RTX 5070 Ti Allegedly Scores 16.6% Improvement Over RTX 4070 Ti SUPER in Synthetic Benchmarks

Early 3DMark benchmarks obtained by VideoCardz paint an interesting picture of the performance gains NVIDIA's upcoming GeForce RTX 5070 Ti GPU offers over its predecessor. Testing conducted with AMD's Ryzen 7 9800X3D processor and 48 GB of DDR5-6000 memory has provided the first glimpse into the card's capabilities. The new GPU demonstrates a 16.6% performance improvement over its predecessor, the RTX 4070 Ti SUPER. However, benchmark data shows it falling short of the more expensive RTX 5080 by 13.2%, raising questions about the price-to-performance ratio given the $250 price difference between the two cards. Priced at $749 MSRP, the RTX 5070 Ti could be even pricier in retail channels at launch, especially with limited availability. The card's positioning becomes particularly interesting compared to the RTX 5080's $999 price point, which commands a 33% premium for its additional performance capabilities.

As a reminder, the RTX 5070 Ti boasts 8,960 CUDA cores, 280 texture units, 70 RT cores for ray tracing, and 280 tensor cores for AI computations, all supported by 16 GB of GDDR7 memory running at 28 Gbps effective speed across a 256-bit bus interface, resulting in 896 GB/s of bandwidth. We have to wait for proper reviews for the final performance conclusion, as synthetic benchmarks tell only part of the story. Modern gaming demands consideration of advanced features such as ray tracing and upscaling technologies, which can significantly impact real-world performance. The true test will come from comprehensive gaming benchmarks across a variety of scenarios. The gaming community won't have to wait long for detailed analysis, as official reviews are reportedly set to be released in just a few days. Additional evaluations of non-MSRP versions should follow on February 20, the card's launch date.
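
The 896 GB/s figure follows directly from the quoted memory speed and bus width; a quick back-of-the-envelope check in Python:

```python
# Back-of-the-envelope check of the RTX 5070 Ti memory bandwidth quoted above.
data_rate_gbps = 28      # GDDR7 effective speed per pin, in Gbit/s
bus_width_bits = 256     # memory interface width

bandwidth_gb_s = data_rate_gbps * bus_width_bits / 8  # bits -> bytes
print(f"{bandwidth_gb_s:.0f} GB/s")  # 896 GB/s
```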

NVIDIA GeForce RTX 5070 Ti Edges Out RTX 4080 in OpenCL Benchmark

A recently surfaced Geekbench OpenCL listing has revealed the performance improvements that the GeForce RTX 5070 Ti is likely to bring to the table, and the numbers sure look promising - that is, coming off the disappointment of the GeForce RTX 5080, which manages roughly 260,000 points in the benchmark, representing a paltry 8% improvement over its predecessor. The GeForce RTX 5070 Ti, however, managed an impressive 248,000 points, putting it a substantial 20% ahead of the GeForce RTX 4070 Ti. Hilariously enough, the RTX 5080 is merely 4% ahead of it, making the situation even worse for the somewhat contentious GPU. NVIDIA has claimed similar performance improvements in its marketing material, which does seem quite plausible.

Of course, an OpenCL benchmark is hardly representative of real-world gaming performance. That being said, there is no denying that raw benchmarks will certainly help buyers temper expectations and make decisions. Previous leaks and speculation have hinted at a roughly 10% improvement over its predecessor in raster performance and up to 15% improvements in ray tracing performance, although the OpenCL listing does indicate the RTX 5070 Ti might be capable of a larger generational jump, in line with NVIDIA's claims. For those in need of a refresher, the RTX 5070 Ti boasts 8960 CUDA cores paired with 16 GB of GDDR7 memory on a 256-bit bus. Like its siblings, the RTX 5070 Ti is also rumored to face "extremely limited" supply at launch. With its official launch less than a week away, we won't have much waiting to do to find out for ourselves.

NVIDIA RTX 5080 Laptop Defeats Predecessor By 19% in Time Spy Benchmark

The NVIDIA RTX 50-series witnessed quite a contentious launch, to say the least. Hindered by abysmal availability, controversial generational improvements, and wacky marketing tactics by Team Green, it would be safe to say a lot of passionate gamers were left utterly disappointed. That said, while the desktop cards have been the talk of the town as of late, the RTX 50 Laptop counterparts are yet to make headlines. Occasional leaks do appear on the interwebs, the latest of which seems to indicate the 3DMark Time Spy performance of the RTX 5080 Laptop GPU. And the results are - well, debatable.

We do know that the RTX 5080 Laptop GPU will feature 7680 CUDA cores, a shockingly modest increase over its predecessor. Considering that we did not get a node shrink this time around, the architectural improvements appear to be rather minimal, going by the tests conducted so far. Of course, the biggest boost in performance will likely be afforded by GDDR7 memory utilizing a 256-bit bus, compared to its predecessor's GDDR6 memory on a 192-bit bus. In 3DMark's Time Spy DX12 test, which is somewhat of an outdated benchmark, the RTX 5080 Laptop managed roughly 21,900 points. The RTX 4080 Laptop, on average, rakes in around 18,200 points, putting the RTX 5080 Laptop ahead by almost 19%. The RTX 4090 Laptop is also left behind, by around 5%.

Capcom Releases Monster Hunter Wilds PC Performance Benchmark Tool

Hey hunters, how's it going? February is here, which means we are officially in the launch month of Monster Hunter Wilds! On February 28, your journey into the Forbidden Lands begins. Now, to help ensure you have a satisfying, fun experience come launch, we're pleased to share that the Monster Hunter Wilds Benchmark we'd previously mentioned we were looking into is real, it's ready, and it's live right now for you to try!

With the Monster Hunter Wilds Benchmark, we want to help our PC players feel more confident about how their PC will run Monster Hunter Wilds. In the next section, we're going to explain what the Monster Hunter Wilds Benchmark is, how it works, as well as some important information and differences you'll see between this and the Open Beta Test 1 and 2 experiences, so please take a moment to check it out.

UL Solutions Adds Support for DLSS 4 and DLSS Multi Frame Generation to the 3DMark NVIDIA DLSS Feature Test

We're excited to announce that in today's update to 3DMark, we're adding support for DLSS 4 and DLSS Multi Frame Generation to the NVIDIA DLSS feature test. The NVIDIA DLSS feature test and this update were developed in partnership with NVIDIA. The 3DMark NVIDIA DLSS feature test lets you compare the performance and image quality brought by enabling DLSS processing. If you have a new GeForce RTX 50 Series GPU, you'll also be able to compare performance with and without the full capabilities of DLSS 4.

You can choose to run the NVIDIA DLSS feature test using DLSS 4, DLSS 3 or DLSS 2. DLSS 4 includes the new DLSS Multi Frame Generation feature, and you can choose between several image quality modes—Quality, Balanced, Performance, Ultra Performance and DLAA. These modes are designed for different resolutions, from Full HD up to 8K. DLSS Multi Frame Generation uses AI to boost frame rates with up to three additional frames generated per traditionally rendered frame. In the 3DMark NVIDIA DLSS feature test, you are able to choose between 2x, 3x and 4x Frame Generation settings if you have an NVIDIA GeForce RTX 50 series GPU.
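
As a rough illustration of what the 2x, 3x, and 4x settings mean for the frame rates the feature test reports, the sketch below estimates the displayed frame rate from a traditionally rendered baseline, assuming one, two, or three AI-generated frames per rendered frame and ignoring generation overhead; real results will vary.

```python
# Rough illustration of DLSS Multi Frame Generation arithmetic: the displayed
# frame rate scales with the number of frames shown per traditionally rendered
# frame. This ignores generation overhead and latency, so treat it as an
# upper-bound estimate, not a prediction of 3DMark results.
def displayed_fps(rendered_fps: float, mode: str) -> float:
    frames_per_rendered = {"2x": 2, "3x": 3, "4x": 4}[mode]  # 1-3 extra AI frames
    return rendered_fps * frames_per_rendered

baseline = 60.0  # hypothetical traditionally rendered frame rate
for mode in ("2x", "3x", "4x"):
    print(f"{mode} Frame Generation: ~{displayed_fps(baseline, mode):.0f} FPS")
```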

Ubisoft Unveils Assassin's Creed Shadows Recommended PC Specs

Hi everyone, Assassin's Creed Shadows is launching March 20, inviting you to experience the intertwined stories of Naoe, an adept shinobi Assassin, and Yasuke, a powerful African samurai. Today, you can pre-order the game on console and PC, and read up on Shadows' upcoming expansion, Claws of Awaji, which brings 10 hours of additional content free with your pre-order.

For those of you playing on PC, we've got all of Assassin's Creed Shadows' recommended PC specs listed in this article. Assassin's Creed Shadows will support raytraced global illumination and reflections, and will feature an in-game benchmark tool for performance analysis, ultra-wide resolutions, an uncapped framerate, and more. Check out the full specs chart below.

AMD Radeon RX 9070 XT Benchmarked in 3D Mark Time Spy Extreme and Speed Way

Although it has only been a few days since the RDNA 4-based GPUs from Team Red hit the scene, it appears that we have already been granted a first look at the 3DMark performance of the highest-end Radeon RX 9070 XT GPU, and to be perfectly honest, the scores seemingly live up to our expectations - although with disappointing ray tracing performance. Unsurprisingly, the thread has been erased over at Chiphell, but folks have managed to take screenshots in the nick of time.

The specifics reveal that the Radeon RX 9070 XT will arrive with a massive TBP in the range of 330 watts, as revealed by a FurMark snap, which is substantially higher than previous estimates. With 16 GB of GDDR6 memory, along with base and boost clocks of 2520 and 3060 MHz, the Radeon RX 9070 XT managed to rake in an impressive 14,591 points in Time Spy Extreme, and around 6,345 points in Speed Way. Needless to say, the drivers are likely far from mature, so it is not outlandish to expect a few more points to get squeezed out of the RDNA 4 GPU.

NVIDIA GeForce RTX 5080 Laptop GPU Challenges RTX 4090 Laptop in Leaked Benchmark

Once every two years or so, technology enthusiasts like ourselves have our sights pinned on what the GPU giants have in store for us. That moment is here, with both NVIDIA and AMD unveiling their Blackwell and RDNA 4 products respectively. NVIDIA has also announced its laptop offerings, with the RTX 5080 Laptop attempting to rule the mainstream high-performance segment. Now, barely a day or two after launch, we already have a rough idea of how mobile Blackwell is going to perform.

The leaked Geekbench OpenCL result, which comes courtesy of an Alienware Area-51 laptop, reveals how well the RTX 5080 Laptop GPU performs in a 175-watt configuration. According to the numbers, the RTX 5080 Laptop managed to barely exceed the 190,000-point barrier, putting it miles ahead of its predecessor, which managed around 160,000. Interestingly, as the headline notes, the RTX 4090 Laptop, which scores around 180,000 points on average, was also left behind, although systems with beefier cooling setups can post higher numbers.

AMD Ryzen AI 7 350 Benchmark Tips Cut-Back Radeon 860M GPU

AMD's upcoming Ryzen AI "Kraken Point" APUs appear to be affordable options for next-generation thin-and-light laptops and potentially even some gaming handhelds. Murmurings of these new APUs have been going around for quite some time, but a PassMark benchmark was just posted, giving us a pretty comprehensive look at the hardware configuration of the upcoming Ryzen AI 7 350. While the CPU configuration in the PassMark result confirms the 4+4 configuration we reported on previously, it seems as though the iGPU portion of the new Ryzen AI 7 is getting something of a downgrade compared to previous generations.

While all previous mobile Ryzen 7 and Ryzen 9 APUs have featured Radeon -80M or -90M series iGPUs, the Ryzen AI 7 350 steps down to the AMD Radeon 860M. Although not much is known about the new iGPU, it uses the same nomenclature as the Radeon iGPUs found in previous Ryzen 5 APUs, suggesting it is the less performant of the new 800-series iGPUs. This would be the first time, at least since the introduction of the Ryzen branding, that a Ryzen 7 CPU uses a cut-down iGPU. This, along with the 4+4 (Zen 5 and Zen 5c) heterogeneous architecture, suggests that this Ryzen 7 APU will prioritize battery life and thermal performance, likely in response to Qualcomm's recent offerings. Comparing the 760M to the single 860M benchmark on PassMark reveals similar performance, with the 860M actually falling behind the average 760M result by 9.1%. Take this with a grain of salt, though, since there is only one benchmark result on PassMark for the 860M.

UL Adds New DirectStorage Test to 3DMark

Today we're excited to launch the 3DMark DirectStorage feature test. This feature test is a free update for the 3DMark Storage Benchmark DLC. The 3DMark DirectStorage feature test helps gamers understand the potential performance benefits that Microsoft's DirectStorage technology could have for their PC's gaming performance.

DirectStorage is a Microsoft technology for Windows PCs with PCIe SSDs that reduces the overhead when loading game data. DirectStorage can be used to reduce game loading times when paired with other technologies such as GDeflate, where the GPU can be used to decompress certain game assets instead of the CPU. On systems running Windows 11, DirectStorage can bring further benefits with BypassIO, lowering a game's CPU overhead by reducing the CPU workload when transferring data.

SPEC Delivers Major SPECworkstation 4.0 Benchmark Update, Adds AI/ML Workloads

The Standard Performance Evaluation Corporation (SPEC), the trusted global leader in computing benchmarks, today announced the availability of the SPECworkstation 4.0 benchmark, a major update to SPEC's comprehensive tool designed to measure all key aspects of workstation performance. This significant upgrade from version 3.1 incorporates cutting-edge features to keep pace with the latest workstation hardware and the evolving demands of professional applications, including the increasing reliance on data analytics, AI and machine learning (ML).

The new SPECworkstation 4.0 benchmark provides a robust, real-world measure of CPU, graphics, accelerator, and disk performance, ensuring professionals have the data they need to make informed decisions about their hardware investments. The benchmark caters to the diverse needs of engineers, scientists, and developers who rely on workstation hardware for daily tasks. It includes real-world applications like Blender, Handbrake, LLVM and more, providing a comprehensive performance measure across seven different industry verticals, each focusing on specific use cases and subsystems critical to workstation users. The SPECworkstation 4.0 benchmark marks a significant milestone for measuring workstation AI performance, providing an unbiased, real-world, application-driven tool for measuring how workstations handle AI/ML workloads.

Apple M4 Max CPU Faster Than Intel and AMD in 1T/nT Benchmarks

Early benchmark results have revealed Apple's newest M4 Max processor as a serious competitor to Arm-based CPUs from Qualcomm and even the best of x86 from Intel and AMD. Recent Geekbench 6 tests conducted on the latest 16-inch MacBook Pro showcase considerable improvements over both its predecessor and rival chips from major competitors. The M4 Max achieved an impressive single-core score of 4,060 points and a multicore score of 26,675 points, marking significant advancements in processing capability. These results represent approximately 30% and 27% improvements in single-core and multicore performance, respectively, compared to the previous M3 Max. This is also much higher than something like Snapdragon X Elite, which tops out at twelve cores per SoC. When measured against x86 competitors, the M4 Max also demonstrates substantial advantages.

The chip outperforms Intel's Core Ultra 9 285K by 19% in single-core and 16% in multicore tests, surpassing AMD's Ryzen 9 9950X by 18% in single-core and 25% in multicore performance. Notably, these achievements come with significantly lower power consumption than traditional x86 processors. The flagship system-on-chip features a sophisticated 16-core CPU configuration, combining twelve performance and four efficiency cores. Additionally, it integrates 40 GPU cores and supports up to 128 GB of unified memory, shared between CPU and GPU operations. The new MacBook Pro line also introduces Thunderbolt 5 compatibility, enabling data transfer speeds up to 120 Gb/s. While the M4 Max presents an impressive response to the current market, we have yet to see its capabilities in real-world benchmarks, as these types of synthetic runs are only a part of the performance story that Apple has prepared. We need to see productivity, content creation, and even gaming benchmarks to fully crown it the king of performance. Below is a table comparing Geekbench v6 scores, courtesy of Tom's Hardware, and a random Snapdragon X Elite (X1E-00-1DE) run in top configuration.

Intel Core Ultra 9 285K Tops PassMark Single-Thread Benchmark

According to the latest PassMark benchmarks, the Intel Core Ultra 9 285K is the highest-performing single-thread CPU. The benchmark king title comes as PassMark's official account on X shared single-threaded performance numbers, with the upcoming Arrow Lake-S flagship SKU, the Intel Core Ultra 9 285K, scoring 5,268 points in single-core results. This is fantastic news for gamers, as games mostly care about single-core performance. The CPU, with 8 P-cores and 16 E-cores, boasts 5.7 GHz P-core boost and 4.6 GHz E-core boost frequencies. The single-core tests put the new SKU at an 11% lead over the previous-generation Intel Core i9-14900K processor.

However, the multithreaded results tell a different story. The PassMark multithreaded result puts the Intel Core Ultra 9 285K at 46,872 points, which is about 22% slower than the last-generation top SKU. While this may be a disappointment for some, it is partially expected, given that Arrow Lake drops Hyper-Threading from Intel's CPU designs. From now on, every CPU will be a combination of P-cores and E-cores, tuned for efficiency or performance depending on the use case. It is also possible that the CPU used in PassMark's testing was an engineering sample, so until the official launch, we have no concrete information for a definitive performance comparison.

Zhaoxin's KX-7000 8-Core Processor Tested in Detail, Bested by 7 Year Old Core i3

PC Watch recently got hands-on with Shanghai Zhaoxin's latest desktop processor for some in-depth testing and published a less-than-optimistic review comparing it to both the previous-generation KX-U6780A and Intel's equally clocked budget quad-core offering from 2017, the 3.6 GHz Core i3-8100. Though Zhaoxin's latest could muscle its way through some multithreaded tests such as Cinebench R23 due to having twice the core count, single-core performance proved to be nearly half that of the i3 in everything from synthetic tests to gaming.

PC Watch tested with the Dragon Quest X Benchmark, a DX9.0c title, to put the spotlight on single-core gaming performance in older games, as well as with Final Fantasy XIV running the latest Golden Legacy benchmark, released back in April of this year, to show off more modern multithreaded gaming. With AMD's RX 6400 handling graphics at 1080p, the KX-7000/8 scored around 60% of the i3-8100 in Dragon Quest X, and in Final Fantasy XIV it scored 90% of the i3. The result in Final Fantasy XIV was considered "somewhat comfortable" for gameplay, but still less than optimal. As a comparison point for a modern budget gaming PC option, the Ryzen 5 5600G was also included in testing, where in Final Fantasy XIV it was 30% ahead of the KX-7000/8. PC Watch attempted to put the integrated ZX-C1190 to work in games but found that despite supporting modern APIs and features, the performance was no match for the competition.
KX-7000 CPU-Z - Credit: PC Watch

AMD Ryzen AI Max 390 "Strix Halo" Surfaces in Geekbench AI Benchmark

In case you missed it, AMD's new madcap enthusiast silicon engineering effort, the "Strix Halo," is real, and comes with the Ryzen AI Max 300 series branding. These are chiplet-based mobile processors with one or two "Zen 5" CCDs—same ones found in "Granite Ridge" desktop processors—paired with a large SoC die that has an oversized iGPU. This arrangement lets AMD give the processor up to 16 full-sized "Zen 5" CPU cores, an iGPU with as many as 40 RDNA 3.5 compute units (2,560 stream processors), and a 256-bit LPDDR5/x memory interface for UMA.

"Strix Halo" is designed for ultraportable gaming notebooks or mobile workstations where low PCB footprint is of the essence, and discrete GPU is not an option. For enthusiast gaming notebooks with discrete GPUs, AMD is designing the "Fire Range" processor, which is essentially a mobile BGA version of "Granite Ridge," and a successor to the Ryzen 7045 series "Dragon Range." The Ryzen AI Max series has three models based on CPU and iGPU CU counts—the Ryzen AI Max 395+ (16-core/32-thread with 40 CU), the Ryzen AI Max 390 (12-core/24-thread with 40 CU), and the Ryzen AI Max 385 (8-core/16-thread, 32 CU). An alleged Ryzen AI Max 390 engineering sample surfaced on the Geekbench AI benchmark online database.