News Posts matching #Benchmark

Samsung Exynos 2500 Benchmarks Put New SoC Close to Qualcomm Competition but Still Slower

Samsung's Exynos 2500 SoC has appeared on Geekbench, this time giving us a clearer indication of what to expect from the upcoming chip that will power the next generation of Samsung flagship smartphones. Three runs have surfaced on Geekbench in total, posting between 2303 and 2356 points in the single-core Geekbench 6 test and between 8062 and 8076 points in the multi-core test. Meanwhile, the Qualcomm Snapdragon 8 Elite in the current-generation Samsung Galaxy S25 Ultra manages a single-core score of 2883 and a multi-core score of 9518 in the same Geekbench 6 benchmark. Samsung recently made the Exynos 2500 official, with the spec sheet revealing a Samsung Xclipse 950 GPU paired with 10 Arm Cortex CPU cores (1× Cortex-X5, 2× Cortex-A725 at 2.74 GHz, 5× Cortex-A725 at 2.36 GHz, and 2× Cortex-A520 at 1.8 GHz).
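
For readers who want to sanity-check the gap, here is a minimal sketch of the percentage-deficit arithmetic, using the midpoints of the leaked score ranges purely for illustration (the midpoint choice is an assumption, not something stated in the listings):

```python
# Rough comparison of the leaked Exynos 2500 Geekbench 6 runs against the
# Snapdragon 8 Elite figures quoted above. Midpoints of the leaked ranges
# are used only for illustration.
exynos_single = (2303 + 2356) / 2
exynos_multi = (8062 + 8076) / 2
snapdragon_single = 2883
snapdragon_multi = 9518

def deficit(score: float, reference: float) -> float:
    """How far `score` trails `reference`, as a percentage of the reference."""
    return (reference - score) / reference * 100

print(f"Single-core deficit: {deficit(exynos_single, snapdragon_single):.1f}%")  # ~19%
print(f"Multi-core deficit:  {deficit(exynos_multi, snapdragon_multi):.1f}%")    # ~15%
```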

The new SoC is reportedly the first chip to use Samsung's 3 nm GAA process, and leaks suggest that Samsung may be using the new SoC across its entire next-gen global smartphone line-up, starting with the launch of the Galaxy Z Flip 7. This would be a stark departure from previous releases, where the US versions of the Galaxy S line-up featured Qualcomm Snapdragon processors, with the international Galaxy S smartphones packing the in-house Exynos designs. In recent years, however, Samsung has pivoted to using Snapdragon SoCs across all regions.

Unreal Engine 5.6 Delivers Up to 35% Performance Improvement Over v5.4

Thanks to a new comparison video from the YouTube channel MxBenchmarkPC, the Paris Tech Demo by Scans Factory is put through its paces on an RTX 5080, running side by side in Unreal Engine 5.6 and version 5.4 with hardware Lumen enabled. That way, we get to see what Epic Games has done with the hardware optimization in the latest release. In GPU‑limited scenarios, the upgrade is immediately clear, with frame rates jumping by as much as 25% thanks to better utilization of graphics resources, even if that means the card draws a bit more power to deliver the boost. When the CPU becomes the bottleneck, Unreal Engine 5.6 really pulls ahead, smoothing out frame-time spikes and delivering up to 35% higher throughput compared to the older build. Beyond the raw numbers, the new version also refines Lumen's visuals. Lighting feels more accurate, and reflections appear crisper while maintaining the same level of shadow and ambient occlusion detail that developers expect.

Unreal Engine 5.6 was officially launched earlier this month, just after Epic Games wrapped its Unreal Fest keynote, where it teased many of these improvements. Hardware-accelerated ray tracing enhancements now shift more of the Lumen global illumination workload onto modern GPUs, and a Fast Geometry Streaming plugin makes loading vast, static worlds feel seamless and stutter-free. Animators will appreciate the revamped motion trails interface, which speeds up keyframe adjustments, and new device profiles automatically tune settings to hit target frame rates on consoles and high-end PCs. To showcase what's possible, Epic teamed up with CD Projekt Red for a The Witcher IV tech demo that runs at a steady 60 FPS with ray tracing fully enabled on the current-gen PlayStation 5 console. If you're curious to dive in, you can download the Unreal Engine 5.6 Paris - Fontaine Saint-Michel tech demo today and explore it for yourself on your PC.

AMD Radeon RX 9060 XT 16 GB Shows Up In Early Time Spy Benchmark With Mixed Conclusions

AMD's upcoming Radeon RX 9060 XT has shown up in the news a number of times in the run-up to its expected retail launch, from ASRock's announcement to a recent Geekbench leak that put the RDNA 4 GPU ahead of the RX 7600 XT by a fair margin. Now, however, we have a gaming benchmark from 3DMark Time Spy showing the RX 9060 XT nearly matching the RX 7700 XT, and those results could still improve as drivers mature and become more stable. The benchmark results are courtesy of u/uesato_hinata, who got their hands on an XFX Swift AMD Radeon RX 9060 XT 16 GB and posted their results on r/AMD on Reddit.

There are a few caveats to these performance figures, though, since the redditor who shared the results was using beta drivers and a moderate GPU overclock and undervolt—cited as "+200mhz clock offset -40mv undervolt +10% power limit, I can get 3.46Ghz at 199 W". With those tweaks applied, the RX 9060 XT puts up a respectable 14,210 points in 3DMark Time Spy; for comparison, the average RX 7700 XT scores 15,452 points in the same benchmark. It should also be noted that the gaming PC used for this RX 9060 XT benchmark was powered by an aging AMD Ryzen 5 5600 paired with mismatched DDR4-2133 RAM, meaning there is likely at least some performance left on the table, even though GPU utilization remains consistently high in the 3DMark monitoring chart, suggesting little bottlenecking was limiting the result. The redditor went on to benchmark the GPU in Black Myth: Wukong, where it managed a 64 FPS average at stock clocks at 1080p with most settings set to high. Applying the overclock only nudged the average up to 65 FPS, but it increased the minimum FPS from 17 to 23. These numbers also won't be representative of the performance of all RX 9060 XT GPUs, since AMD is launching both 8 GB and 16 GB versions of the RX 9060 XT with different GPU clock speeds for the two memory variants.
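
Taken at face value, the quoted scores put the tweaked card roughly 8% behind an average RX 7700 XT. A quick sketch of that arithmetic, using only the numbers cited above:

```python
# Relative standing of the overclocked RX 9060 XT against the 3DMark average
# for the RX 7700 XT, plus the Black Myth: Wukong uplift from the same post.
rx9060xt_oc = 14_210      # overclocked/undervolted Time Spy result
rx7700xt_avg = 15_452     # average RX 7700 XT Time Spy score

gap_pct = (rx7700xt_avg - rx9060xt_oc) / rx7700xt_avg * 100
print(f"RX 9060 XT (tweaked) trails the average RX 7700 XT by {gap_pct:.1f}%")

# Wukong frame rates: the overclock barely moves the average but lifts minimums.
stock_avg, oc_avg = 64, 65
stock_min, oc_min = 17, 23
print(f"Average FPS gain: {(oc_avg - stock_avg) / stock_avg * 100:.1f}%")   # ~1.6%
print(f"Minimum FPS gain: {(oc_min - stock_min) / stock_min * 100:.1f}%")   # ~35%
```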

NVIDIA's Unreleased TITAN ADA: 18,432 Cores and 48 GB GDDR6X in Early Tests

Overclocking expert Roman "Der8auer" Hartung has managed to get his hands on a prototype of the NVIDIA TITAN ADA, a flagship design that never saw the light of day. Despite its absence from store shelves, the TITAN ADA prototype reveals a blend of workstation-level ambition and cutting-edge gaming power. Enthusiasts have long whispered about a true TITAN successor in the "Ada Lovelace" lineup, and now we finally have concrete evidence of what that card could have delivered. Inside this beast sits a fully enabled AD102 GPU, boasting all 144 streaming multiprocessors and delivering 18,432 CUDA cores, about 12 percent more than the retail RTX 4090. NVIDIA equipped the card with a staggering 48 GB of GDDR6X memory on a 384-bit bus to match this raw compute potential, twice the capacity of any other consumer Ada GPU. The prototype uses a large quad-slot, flow-through cooler in the classic beige TITAN color to keep everything cool.

Unlike conventional layouts, the TITAN ADA's PCB is split into three sections: a rotated main board flanked by a separate daughterboard for the PCIe interface. This unusual arrangement precedes the vertical board designs that would later appear on Founders Edition cards in the RTX 50 Series. Dual 16-pin power connectors suggest NVIDIA originally aimed for a total board power well above 600 watts, yet Der8auer's tests topped out near 450 watts, likely due to the prototype's early vBIOS and older driver builds. Benchmark results confirm the card's full AD102 configuration. In synthetic graphics tests, it ran roughly 15 percent ahead of the RTX 4090, and in real-world gaming scenarios, popular titles like Cyberpunk 2077 and Remnant 2 saw performance uplifts ranging from 10 to 22 percent. These figures place the TITAN ADA comfortably between the 4090 and the new RTX 5090 in both speed and efficiency.

NVIDIA's GB10 Arm Superchip Looks Promising in Leaked Benchmark Results

Recent benchmark leaks from Geekbench have revealed that NVIDIA's first Arm-based "superchip," the GB10 Grace Blackwell, is on the verge of its market launch, as reported by Notebookcheck. The processor is expected to be showcased at Computex 2025 later this month, where NVIDIA may also roll out the N1 and N1X alternatives tailored for desktop and laptop use (MediaTek confirmed in April that its CEO, Dr. Rick Tsai, would deliver a keynote speech at the Computex 2025 trade show). ASUS and Dell have already put the GB10 in their upcoming products, while NVIDIA has also used it in its Project DIGITS AI supercomputer. The company announced that machine at CES 2025, saying it would cost around $2,999 and be ready to buy this month.

The benchmark listings show some inconsistencies, such as identifying the chipset as Armv8 instead of Armv9. However, they point out that the GB10's Cortex-X925 cores can reach speeds of up to 3.9 GHz. The performance results show that the GB10 can compete with high-end Arm and x86 processors in single-core metrics, although Apple's M4 Max processor still leads in this area. The GB10 marks NVIDIA's move into the workstation-grade Arm processor market and could shake up the established players in the high-performance computing field.

AMD Radeon RX 9070 GRE Gets Reviewed - Gaming Perf. Comparable to RX 7900 GRE

AMD and a select group of its board partners are set to launch Radeon RX 9070 GRE 12 GB graphics card models tomorrow, starting as exclusives for China's PC gaming hardware market. Just ahead of retail availability, local media outlets have published reviews—mostly covering brand-new ASUS, Sapphire, and XFX products. The RDNA 4 generation's first "Great Radeon Edition" (GRE) is positioned as a slightly cheaper alternative to Team Red's Radeon RX 9070 (non-XT) 16 GB model: 4199 RMB versus 4499 RMB (respectively, including VAT). In general, Chinese evaluators seem to express lukewarm opinions about the Radeon RX 9070 GRE's value-to-performance ratio. After all, this is a cut-down design—a "reduced" Navi 48 chip makes do with 3072 Stream Processors. The card's 12 GB of GDDR6 VRAM is paired with a 192-bit memory interface.

Carbon Based Technology's video review presented benchmark results that placed AMD's new contender on par with a previous-gen card: the Radeon RX 7900 GRE 16 GB. Considering that this RDNA 3 era Golden Rabbit Edition (GRE) model launched globally with an MSRP of $549, its Navi 48 XL GPU-based descendant's ~$580 (USD) guide price appears to be mildly nonsensical. GamerSky pitched their ASUS ATS RX 9070 GRE MEGALODON OC sample against mid-range and lower-tier current-gen NVIDIA gaming products: "through testing, we can find that at 4K resolution, the GeForce RTX 5070 12 GB performs the best, 5% higher than the ASUS RX 9070 GRE Megalodon. As the resolution decreases, its lead also decreases, and at 2K resolution it is only 2% higher. At 1080p resolution, the difference is only 1%. At the same time, compared with RTX 5060 Ti 16G, ASUS RX 9070 GRE Megalodon has a greater advantage. The performance of its competitor's RTX 5060 Ti 16G is only 77% of that of RX 9070 GRE at 4K and 2K resolutions. At 1080p, its performance increased slightly to 79%." AMD and the involved AIBs could be testing the waters with an initial Chinese market exclusive release, but Western news outlets reckon that a more aggressive pricing strategy is needed for a (potential) proper global rollout of Radeon RX 9070 GRE cards.

NVIDIA RTX PRO 6000 "Blackwell" Underperforms with Pre‑Release Drivers

Today, we are looking at the latest benchmark results for NVIDIA's upcoming RTX PRO 6000 "Blackwell" workstation-class GPU. Based on the new GB202 GPU, this professional visualization card features an impressive 24,064 CUDA cores distributed across 188 streaming multiprocessors, with boost clocks of up to 2,617 MHz. It also introduces 96 GB of GDDR7 memory with full error-correcting code, a capacity made possible by dual-sided 3 GB modules. In Geekbench 6.4.0 OpenCL trials, the RTX PRO 6000 Blackwell registered a total score of 368,219. That result trails the gaming-oriented GeForce RTX 5090, which posted 376,858 points despite having fewer cores (21,760 vs. 24,064) and a lower peak clock (2,410 MHz vs. 2,617 MHz).

A breakdown of subtests reveals that the workstation card falls behind in background blur (263.9 versus 310.7 images per second) and face detection (196.7 versus 241.5 images per second), yet it leads modestly in horizon detection and Gaussian blur. These mixed outcomes are attributed to pre-release drivers, a temporary cap on visible memory (currently limited to 23.8 GB), and power-limit settings. With release drivers, OpenCL workloads should be better able to exploit the card's extra cores and higher peak clock. One significant distinction within the RTX PRO 6000 family concerns power consumption. The Max-Q Workstation Edition is engineered for a 300 W thermal design point, making it suitable for compact chassis and environments where quiet operation is essential. It retains all 24,064 cores and the full 96 GB of memory, but clocks and voltages are adjusted to fit the 300 W budget. By contrast, the standard Workstation and Server models allow a thermal budget of up to 600 W, enabling higher sustained frequencies and heavier compute workloads in full-size desktop towers and rack-mounted systems.
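
One way to see why drivers are the prime suspect is to normalize the quoted OpenCL totals by core count and boost clock; on paper, the workstation card should come out ahead. A crude back-of-the-envelope sketch (the points-per-core-GHz figure is just an illustrative proxy, not a real metric):

```python
# Normalize the Geekbench OpenCL totals by CUDA core count and boost clock.
# If the silicon were fully utilized, the RTX PRO 6000 should not trail.
cards = {
    "RTX PRO 6000 Blackwell": {"score": 368_219, "cores": 24_064, "boost_mhz": 2617},
    "GeForce RTX 5090":       {"score": 376_858, "cores": 21_760, "boost_mhz": 2410},
}

for name, c in cards.items():
    # Points per (core x GHz): a crude throughput-efficiency proxy.
    proxy = c["score"] / (c["cores"] * c["boost_mhz"] / 1000)
    print(f"{name}: {proxy:.2f} points per core-GHz")

# The workstation card lands well below the RTX 5090 on this proxy, which is
# consistent with the article's explanation: pre-release drivers, the visible
# memory cap, and power-limit settings rather than a hardware shortfall.
```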

NVIDIA GeForce RTX 5060 Ti 8 GB Variant Benched by Chinese Reviewer, Lags Behind 16 GB Sibling in DLSS 4 Test Scenario

NVIDIA's GeForce RTX 5060 Ti 8 GB graphics card received little fanfare when review embargoes lifted mid-way through the working week. Reportedly on official instruction, board partners sent only 16 GB samples to evaluators. Multiple Western outlets are currently attempting to source GeForce RTX 5060 Ti 8 GB cards—on their own dime—including TechPowerUp. In the conclusion of his review of PALIT's GeForce RTX 5060 Ti Infinity 3 16 GB model, W1zzard commented on this situation: "personally, I'm very interested in my results for the RTX 5060 Ti 8 GB, which I'm trying to buy now." The ever-reliable harukaze5719 has already stumbled upon one such review. Yesterday, Carbon-based Technology Research Institute (CBTRI) uploaded their findings onto the Chinese bilibili video platform.

Two ASUS options were compared to each other: an 8 GB Hatsune Miku Special Edition card, and a better-known product, the PRIME RTX 5060 Ti 16 GB. In most situations the two variants perform similarly, but a clear difference emerged when CBTRI's lab test moved into a DLSS 4 with Multi-Frame Generation (MFG) phase. Both harukaze5719 and Tom's Hardware noted a significant gulf—the latter's report observed: "in Cyberpunk 2077, for example, the RTX 5060 Ti 8 GB inexplicably performed worse than the RTX 4060 Ti 8 GB at native 1440p resolution. While enabling MFG helped improve performance, pushing it to 4x delivered underwhelming results, with the 16 GB version providing 22% higher performance than the 8 GB card." Rumors have swirled about the late arrival of GeForce RTX 5060 Ti 8 GB cards at retail, potentially a week after the launch of their 16 GB siblings. As evidenced by these early results, potential buyers should consider paying a little extra ($50) for the larger pool of VRAM. Team Green's introductory material outlined starting price tags of $429 (16 GB) and $379 (8 GB).

AMD Instinct GPUs are Ready to Take on Today's Most Demanding AI Models

Customers evaluating AI infrastructure today rely on a combination of industry-standard benchmarks and real-world model performance metrics—such as those from Llama 3.1 405B, DeepSeek-R1, and other leading open-source models—to guide their GPU purchase decisions. At AMD, we believe that delivering value across both dimensions is essential to driving broader AI adoption and real-world deployment at scale. That's why we take a holistic approach—optimizing performance for rigorous industry benchmarks like MLPerf while also enabling Day 0 support and rapid tuning for the models most widely used in production by our customers.

This strategy helps ensure AMD Instinct GPUs deliver not only strong, standardized performance, but also high-throughput, scalable AI inferencing across the latest generative and language models used by customers. We will explore how AMD's continued investment in benchmarking, open model enablement, software and ecosystem tools helps unlock greater value for customers—from MLPerf Inference 5.0 results to Llama 3.1 405B and DeepSeek-R1 performance, ROCm software advances, and beyond.

NVIDIA GeForce RTX 5080 Mobile GPU Benched, Approximately 10% Slower Than RTX 5090 Mobile

NVIDIA and its laptop manufacturing partners managed to squeeze out higher-end models at the start of the week (March 31), qualifying just in time as a Q1 2025 launch. As predicted by PC gaming hardware watchdogs, day-one conditions for the general public were far from perfect. Media and influencer outlets received pre-launch evaluation units, but Monday's embargo lift did not open the floodgates to a massive number of published reviews. Independent benchmarking of Team Green's flagship—the GeForce RTX 5090 Mobile—produced somewhat underwhelming results. To summarize, several outlets—including Notebookcheck—observed NVIDIA's topmost laptop-oriented GPU trailing far behind its desktop equivalent in lab tests. Notebookcheck commented on these findings: "laptop gamers will want to keep their expectations in check as the mobile GeForce RTX 5090 can be 50 percent slower than the desktop counterpart as shown by our benchmarks. The enormous gap between the mobile RTX 5090 and desktop RTX 5090 and the somewhat disappointing leap over the outgoing mobile RTX 4080 can be mostly attributed to TGP."

The German online publication was more impressed with NVIDIA's sub-flagship model. Two Ryzen 9 9955HX-powered Schenker XMG Neo 16 test units with almost identical specifications were pitted against each other, and a mini-review of the benchmarked figures was made available earlier today. Notebookcheck's Allen Ngo provided some context: "3DMark benchmarks...show that the (Schenker Neo's) GeForce RTX 5080 Mobile unit is roughly 10 to 15 percent slower than its pricier sibling. This deficit translates fairly well when running actual games like Baldur's Gate 3, Final Fantasy XV, Alan Wake 2, or Assassin's Creed Shadows. As usual, the deficit is widest when running at 4K resolutions on demanding games and smallest when running at lower resolutions where graphics become less GPU bound. A notable observation is that the performance gap between the mobile RTX 5080 and mobile RTX 5090 would remain the same, whether or not DLSS is enabled. When running Assassin's Creed Shadows with DLSS on, for example, the mobile RTX 5090 would maintain its 15 percent lead over the mobile RTX 5080. The relatively small performance drop between the two enthusiast GPUs means it may be worth configuring laptops with the RTX 5080 instead of the RTX 5090 to save on hundreds of dollars or for better performance-per-dollar." As demonstrated by Bestware.com's system configurator, the XMG NEO 16 (A25) SKU with a GeForce RTX 5090 Mobile GPU demands a €855 (~$928 USD) upcharge over an RTX 5080-based build.

AMD Ryzen 5 9600 Nearly Matches 9600X in Early Benchmarks

The AMD Ryzen 5 9600 launched recently as a slightly more affordable variant of the popular Ryzen 5 9600X. Despite launching over a month ago, the 9600 still appears rather difficult to track down in retail stores. However, a recent PassMark benchmark has provided some insight into the performance of the non-X variant of AMD's six-core Zen 5 budget CPU. Unsurprisingly, the Ryzen 5 9600X and the Ryzen 5 9600 are neck-and-neck, with the 9600X scraping past its non-X counterpart by a mere 2.2% in the overall CPU Mark score.

According to the PassMark result, the Ryzen 5 9600 scored 29,369 points, compared to the Ryzen 5 9600X's 30,016, while single-core scores were 4581 for the 9600X and 4433 for the 9600, a disparity of just over 3% between the two CPUs. The result is not surprising, since the only real difference between the 9600 and the 9600X is a 200 MHz lower boost clock. All other specifications, including TDP, core count, cache capacity, and base clock speed, are identical. Both CPUs are unlocked for overclocking, and both feature AMD Precision Boost 2. While the Ryzen 5 9600 remains hard to find at retail, it looks set to be a good option for those who want to stretch their budget as far as possible, since recent reports indicate that it will be around $20 cheaper than the Ryzen 5 9600X, coming in at around the $250-260 mark.
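
The quoted PassMark figures work out as follows; a trivial sketch, nothing assumed beyond the scores above:

```python
# Percentage leads of the Ryzen 5 9600X over the Ryzen 5 9600 in PassMark.
scores = {
    "multi-core":  {"9600X": 30_016, "9600": 29_369},
    "single-core": {"9600X": 4_581,  "9600": 4_433},
}

for test, s in scores.items():
    lead = (s["9600X"] - s["9600"]) / s["9600"] * 100
    print(f"{test}: 9600X leads by {lead:.1f}%")
# Roughly 2.2% multi-core and a little over 3% single-core, which lines up
# with the 200 MHz boost-clock difference being the only real separator.
```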

GALAX RTX 5090D HOF XOC LE Card Overclocked to 3.27 GHz, Record Breaking Prototype Enabled w/ Second 12V-2×6 Connector

As reported last month, GALAX had distributed prototypes of its upcoming flagship "Hall of Fame" (HOF) card—based on NVIDIA's Chinese market exclusive GeForce RTX 5090D GPU—to prominent figures within the PC hardware overclocking community. Earlier examples sported single 12V-2×6 power connectors, although GALAX's exposed white PCB design showed extra space for an additional unit. Evaluators conducted experiments involving liquid nitrogen-based cooling methods. The most vocal online critics questioned the overclocking capability of initial GeForce RTX 5090D HOF samples, due to the limitations presented by a lone avenue of power delivery. A definitive answer has arrived in the form of the manufacturer's elite team-devised GeForce RTX 5090D HOF Extreme Overclock (XOC) Lab Limited Edition candidate, a newer variant that makes use of dual 12V-2×6 power connectors. Several overclocking experts have entered a GALAX-hosted competition—Micka:)Shu, a Chinese participant, posted photos of their test rig setup (see below).

Micka's early access sample managed to achieve top GPU placement on UL Benchmarks' 3DMark Speed Way Hall of Fame, with a final score of 17,169 points. A screenshotted GPU-Z session shows the card's core frequency reaching 3277 MHz. Around late January, ASUS China's general manager, Tony Yu, documented his overclocking of a ROG Astral RTX 5090 D GAMING OC specimen up to 3.4 GHz under liquid nitrogen cooled conditions. GALAX has similarly outfitted its flagship model with selectively binned components and an "over-engineered" design. The company's "bog-standard" HOF model is no slouch either, despite the limitation imposed by a single power connector. The GALAX OC Facebook account sent out some appreciation to another noted competitor (and collaborator): "thanks to Overclocked Gaming Systems—OGS Rauf for help with the overclock of GeForce RTX 5090D HOF, and all of (our) GALAX products." The OGS member set world records with said "normal" HOF card—achieving scores of 59,072 points in 3DMark's Fire Strike Extreme test and 25,040 points in Unigine Superposition (8K Optimized).

AMD Ryzen 9 9950X3D Leaked PassMark Score Shows 14% Single Thread Improvement Over Predecessor

Last Friday, AMD confirmed finalized price points for its upcoming Ryzen 9 9950X3D ($699) and 9900X3D ($599) gaming processors—both launching on March 12. Media outlets are very likely finalizing their evaluations of review silicon; official embargoes are due to lift tomorrow (March 11). By Team Red decree, the pre-launch drip feed of information was restricted to teasers, a loose March launch window, and an unveiling of basic specifications (at CES 2025). A trickle of mid-January to early March leaks has painted an incomplete picture of performance expectations for the 3D V-Cache-equipped 16 and 12-core parts. A fresh NDA-busting disclosure has arrived online, courtesy of an alleged Ryzen 9 9950X3D sample's set of benchmark scores.

The pre-release candidate posted single and multi-thread ratings of 4739 and 69,701 (respectively) upon completion of PassMark tests. Based on this information, a comparison chart was assembled, pitting the Ryzen 9 9950X3D against its direct predecessor (7950X3D), a Zen 5 relative (9950X), and competition from Intel (Core Ultra 9 285K). AMD's brand-new 16-core flagship managed to outpace the previous-gen Ryzen 9 7950X3D by ~14% in the single-thread stakes, and by roughly 11% in multi-threaded scenarios. Test system build details and settings were not mentioned with this leak—we expect a more complete picture tomorrow, upon publication of widespread reviews. The sampled Ryzen 9 9950X3D surpassed its 9950X sibling by ~5% in the multi-thread result, while the two processors are just about equal in single-core performance. The Intel Core Ultra 9 285K posted the highest single-core result in the comparison—5078 points—exceeding the 9950X3D's tally by about 7%, although the AMD chip pulls ahead by ~3% in recorded multi-thread performance. Keep an eye on TechPowerUp's review section, where W1zzard will be delivering his verdict(s) imminently.

Apple M3 Ultra SoC: Disappointing CPU Benchmark Result Surfaces

Just recently, Apple somewhat stunned the industry with the introduction of its refreshed Mac Studio with the M4 Max and M3 Ultra SoCs. For whatever reason, the Cupertino giant decided to spec its most expensive Mac desktop with an Ultra SoC based on the older M3 generation instead of the newer M4 family. That said, the M3 Max, which the M3 Ultra is based on, was no slouch, suggesting that the M3 Ultra should still boast impressive performance. However, if a collection of recent benchmark runs is anything to go by, it appears that the M3 Ultra is a tad too closely matched with the M4 Max in CPU performance, which makes the $2000 premium between the two SoCs rather difficult to digest. Needless to say, a single benchmark is hardly representative of real-world performance, so take this information with a grain of salt.

According to the recently spotted Geekbench result, the M3 Ultra managed a single-core score of 3,221, which is roughly 18% slower than the M4 Max. In multi-core performance, one might expect the 32-core M3 Ultra to wipe the floor with the 16-core M4 Max, but that is not quite the case. With a score of 27,749, the M3 Ultra leads the M4 Max by a mere 8%. Of course, these are early runs, and future scores may well be higher. Even so, it is clear that the M3 Ultra and the M4 Max will be close in multi-threaded CPU performance, with the M4 Max continuing to be substantially faster than the far more expensive M3 Ultra in single-threaded performance. It does appear that the primary selling point of the M3 Ultra-equipped Mac Studio will be the massive 80-core GPU and up to 512 GB of unified memory shared by the CPU and the GPU, which should come in handy for running massive LLMs locally and other niche workloads.
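
The weak multi-core scaling is easier to see when the scores are normalized per CPU core. A rough sketch using the cited Geekbench figures, with the M4 Max multi-core score derived from the ~8% lead mentioned above (an inferred value, not a measured one):

```python
# Per-core multi-core throughput as a crude scaling proxy.
m3_ultra_multi = 27_749
m4_max_multi = m3_ultra_multi / 1.08   # inferred from the ~8% lead cited above

chips = {
    "M3 Ultra (32 CPU cores)": (m3_ultra_multi, 32),
    "M4 Max (16 CPU cores)":   (m4_max_multi, 16),
}

for name, (score, cores) in chips.items():
    print(f"{name}: {score / cores:.0f} points per core")
# Doubling the core count buys only ~8% more aggregate throughput in this run,
# so each M3 Ultra core contributes far less than each M4 Max core does.
```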

Apple's A18 4-core iGPU Benched Against Older A16 Bionic, 3DMark Results Reveal 10% Performance Deficit

Apple's new budget-friendly iPhone 16e was introduced earlier this month, giving potential buyers a device (starting at $599) that houses a selectively "binned" A18 mobile chipset. The more expensive iPhone 16 and iPhone 16 Plus models were launched last September with A18 chips on board, featuring six CPU cores and five GPU cores. Apple's brand-new 16e seems to utilize an A18 sub-variant—tech boffins have highlighted this package's reduced GPU core count of four. The so-called "binned" A18 reportedly posted inferior performance figures—15% slower—when lined up against its standard 5-core sibling in Geekbench 6 Metal tests. The iPhone 16e was released at retail today (February 28), with review embargoes lifted earlier in the week.

A popular portable tech YouTuber—Dave2D (aka Dave Lee)—decided to pit his iPhone 16e sample unit against older technology contained within the iPhone 15 (2023). The binned A18's 4-core iGPU competed with the A16 Bionic's 5-core integrated graphics solution in a 3DMark Wild Life Extreme Unlimited head-to-head, with respective tallies of 2882 and 3170 points recorded for posterity's sake. The more mature chipset (from 2022) managed to surpass its younger sibling by ~10%, according to the scores presented on Dave2D's comparison chart. The video reviewer reckoned that the iPhone 16e's SoC offers "killer performance," despite reservations expressed about the device not offering great value for money. Other outlets have questioned the prowess of Apple's latest step-down model. Referencing current-gen 3DMark benchmark results, Wccftech observed: "for those wanting to know the difference between the binned A18 and non-binned variant; the SoC with a 5-core GPU running in the iPhone 16 finishes the benchmark run with an impressive 4007 points, making it a massive 28.04 percent variation between the two (pieces of) silicon. It is an eye-opener to witness such a mammoth performance drop, which also explains why Apple resorted to chip-binning on the iPhone 16e as it would help bring the price down substantially."
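
Dividing the quoted 3DMark Wild Life Extreme Unlimited scores by GPU core count makes the picture clearer: per core, the binned A18 is still well ahead of the A16 Bionic; it is the missing fifth core that costs it the overall win. A small sketch using only the figures cited above:

```python
# Per-GPU-core 3DMark Wild Life Extreme Unlimited scores from the cited runs.
chips = {
    "A18 (binned, 4-core GPU)": (2882, 4),
    "A16 Bionic (5-core GPU)":  (3170, 5),
    "A18 (full, 5-core GPU)":   (4007, 5),
}

for name, (score, gpu_cores) in chips.items():
    print(f"{name}: {score / gpu_cores:.0f} points per GPU core")
# ~721 points per core for the binned A18 versus ~634 for the A16, yet the
# older chip's extra core still hands it a ~10% lead in the overall score.
```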

AMD Ryzen 9 9950X3D Leaked 3DMark & Cinebench Results Indicate 9950X-esque Performance

The AMD Ryzen 9 9950X3D processor will head to retail next month—a March 12 launch day is rumored—but a handful of folks seem to have early samples in their possession. Reviewers and online influencers have been tasked with evaluating pre-launch silicon, albeit under strict conditions; i.e. no leaking. Inevitably, NDA-shredding material has seeped out—yesterday, we reported on an alleged sample's ASUS Silicon Prediction rating. Following that, a Bulgarian system integrator/hardware retailer decided to upload Cinebench R23 and 3DMark Time Spy results to Facebook. Evidence of this latest leak was scrubbed at the source, but VideoCardz preserved the crucial details.

The publication noticed distinguishable QR and serial codes in PCbuild.bg's social media post, making it possible to trace the leak's point of origin. As expected, the leaked benchmark data points were compared to Ryzen 9 9950X and 7950X3D scores. The Ryzen 9 9950X3D sample recorded a score of 17,324 points in 3DMark Time Spy, as well as 2279 points (single-core) and 42,423 points (multi-core) in Cinebench R23. Notebookcheck observed that the pre-launch candidate came "out ahead of the Ryzen 9 7950X3D in both counts, even if the gaming win is less than significant. Comparing the images of the benchmark results to our in-house testing and benchmark database shows the 9950X3D beating the 7950X3D by nearly 17% in Cinebench multicore." When compared to its non-3D V-Cache equivalent, the Ryzen 9 9950X3D holds a slight performance advantage. A blurry shot of PCbuild.bg's HWiNFO session shows the leaked processor's core clocks reaching 5.7 GHz (turbo) on the non-X3D CCD, while the CCD equipped with 3D V-Cache appears to top out at 5.54 GHz.

Dune: Awakening Release Date and Price Revealed, Character Creation Now Live!

Today, Funcom finally lifted the veil on Dune: Awakening's release date. The open world, multiplayer survival game set on Arrakis will come to Steam on May 20! Players can begin their journey today by diving into the brand-new Character Creation & Benchmark Mode, available now through Steam. Created characters can then be imported into Dune: Awakening at launch.

Inspired by Frank Herbert's legendary sci-fi novel and Legendary Entertainment's award-winning films, Dune: Awakening is crafted by Funcom's veteran developers to deliver an experience that resonates with both Dune enthusiasts and survival game fans alike. Get ready to step into the biggest Dune game ever made with today's trailer.

AMD & Nexa AI Reveal NexaQuant's Improvement of DeepSeek R1 Distill 4-bit Capabilities

Nexa AI today announced NexaQuants of two DeepSeek R1 distills: DeepSeek R1 Distill Qwen 1.5B and DeepSeek R1 Distill Llama 8B. Popular quantization methods like the llama.cpp-based Q4_K_M allow large language models to significantly reduce their memory footprint, typically trading only a small perplexity loss for dense models. However, even a low perplexity loss can translate into a reasoning capability hit for (dense or MoE) models that rely on Chain of Thought traces. Nexa AI states that NexaQuants are able to recover this reasoning capability loss (compared to full 16-bit precision) while keeping the 4-bit quantization and retaining its performance advantage. Benchmarks provided by Nexa AI can be seen below.

We can see that the Q4_K_M quantized DeepSeek R1 distills score slightly lower (except for the AIME24 benchmark on the Llama 8B distill, which scores significantly lower) in LLM benchmarks like GPQA and AIME24 compared to their full 16-bit counterparts. Moving to a Q6 or Q8 quantization would be one way to fix this problem, but it would result in the model becoming slightly slower to run and requiring more memory. Nexa AI states that NexaQuants use a proprietary quantization method to recover the loss while keeping the quantization at 4 bits. This means users can theoretically get the best of both worlds: accuracy and speed.
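
To put the footprint trade-off in concrete terms, here is a rough estimate of weight storage for an 8B-parameter model at different precisions. The bits-per-weight values are approximations for llama.cpp-style formats (Q4_K_M averages close to 4.8 bpw because some tensors stay at higher precision); NexaQuant's exact layout is proprietary and not described in the announcement:

```python
# Approximate weight-storage footprint of an 8B-parameter model at several
# precisions. Bits-per-weight values are approximations, not exact figures.
PARAMS = 8e9

def weights_gib(bits_per_weight: float) -> float:
    """Storage for the weights alone, in GiB (ignores KV cache and activations)."""
    return PARAMS * bits_per_weight / 8 / 2**30

for label, bpw in (("FP16", 16.0), ("Q8_0 (~8.5 bpw)", 8.5), ("Q4_K_M (~4.8 bpw)", 4.8)):
    print(f"{label:>18}: ~{weights_gib(bpw):.1f} GiB")
# Going from 16-bit to ~4.8 bpw is roughly a 3.3x reduction in weight storage,
# which is the advantage a 4-bit NexaQuant aims to keep while recovering the
# reasoning accuracy that standard 4-bit schemes give up.
```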

NVIDIA GeForce RTX 5070 Ti Allegedly Scores 16.6% Improvement Over RTX 4070 Ti SUPER in Synthetic Benchmarks

Early 3DMark benchmarks obtained by VideoCardz paint an interesting picture of the performance gains NVIDIA's upcoming GeForce RTX 5070 Ti offers over its predecessor. Testing conducted with AMD's Ryzen 7 9800X3D processor and 48 GB of DDR5-6000 memory has provided the first glimpse into the card's capabilities. The new GPU demonstrates a 16.6% performance improvement over its predecessor, the RTX 4070 Ti SUPER. However, the benchmark data also shows it falling short of the more expensive RTX 5080 by 13.2%, raising questions about the price-to-performance ratio given the $250 price difference between the two cards. Priced at a $749 MSRP, the RTX 5070 Ti could be even pricier in retail channels at launch, especially with limited availability. The card's positioning becomes particularly interesting compared to the RTX 5080's $999 price point, which commands a 33% premium for its additional performance.

As a reminder, the RTX 5070 Ti boasts 8,960 CUDA cores, 280 texture units, 70 RT cores for ray tracing, and 280 tensor cores for AI computations, all supported by 16 GB of GDDR7 memory running at 28 Gbps effective speed across a 256-bit bus, resulting in 896 GB/s of bandwidth. We will have to wait for proper reviews for a final performance verdict, as synthetic benchmarks tell only part of the story. Modern gaming demands consideration of advanced features such as ray tracing and upscaling technologies, which can significantly impact real-world performance. The true test will come from comprehensive gaming benchmarks across a variety of titles and scenarios. The gaming community won't have to wait long for detailed analysis, as official reviews are reportedly due in just a few days. Additional evaluations of non-MSRP versions should follow on February 20, the card's launch date.
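
Using only the relative result and MSRPs quoted above, the price-to-performance picture sketches out as follows (synthetic scores only, so treat it as illustrative):

```python
# Perf-per-dollar at MSRP, derived from the cited 13.2% synthetic deficit.
cards = {
    "RTX 5070 Ti": {"relative_perf": 1.0,               "msrp": 749},
    "RTX 5080":    {"relative_perf": 1.0 / (1 - 0.132), "msrp": 999},
}

for name, c in cards.items():
    ppd = c["relative_perf"] / c["msrp"] * 1000
    print(f"{name}: {ppd:.2f} relative performance per $1,000 (MSRP)")
# The RTX 5080 offers ~15% more synthetic performance for ~33% more money,
# so at MSRP the RTX 5070 Ti holds the better performance-per-dollar ratio.
```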

NVIDIA GeForce RTX 5070 Ti Edges Out RTX 4080 in OpenCL Benchmark

A recently surfaced Geekbench OpenCL listing has revealed the performance improvements that the GeForce RTX 5070 Ti is likely to bring to the table, and the numbers sure look promising, at least coming off the disappointment of the GeForce RTX 5080, which manages roughly 260,000 points in the benchmark, representing a paltry 8% improvement over its predecessor. The GeForce RTX 5070 Ti, however, managed an impressive 248,000 points, putting it a substantial 20% ahead of the GeForce RTX 4070 Ti. Hilariously enough, the RTX 5080 is merely 4% ahead of it, making the situation even worse for the somewhat contentious GPU. NVIDIA has claimed similar performance improvements in its marketing material, which now seems quite plausible.

Of course, an OpenCL benchmark is hardly representative of real-world gaming performance. That being said, there is no denying that raw benchmarks help buyers temper expectations and make decisions. Previous leaks and speculation have hinted at a roughly 10% improvement over its predecessor in raster performance and up to 15% improvements in ray tracing performance, although the OpenCL listing does indicate the RTX 5070 Ti might be capable of a larger generational jump, in line with NVIDIA's claims. For those in need of a refresher, the RTX 5070 Ti boasts 8960 CUDA cores paired with 16 GB of GDDR7 memory on a 256-bit bus. Like its siblings, the RTX 5070 Ti is also rumored to face "extremely limited" supply at launch. With its official launch less than a week away, we won't have much waiting to do to find out for ourselves.

NVIDIA RTX 5080 Laptop Defeats Predecessor By 19% in Time Spy Benchmark

The NVIDIA RTX 50-series witnessed quite a contentious launch, to say the least. Hindered by abysmal availability, controversial generational improvements, and wacky marketing tactics by Team Green, it would be safe to say a lot of passionate gamers were left utterly disappointed. That said, while the desktop cards have been the talk of the town as of late, the RTX 50 Laptop counterparts have yet to make headlines. Occasional leaks do appear on the interwebs, the latest of which seems to indicate the 3DMark Time Spy performance of the RTX 5080 Laptop GPU. And the results are, well, debatable.

We do know that the RTX 5080 Laptop GPU will feature 7680 CUDA cores, a shockingly modest increase over its predecessor. Considering that we did not get a node shrink this time around, the architectural improvements appear to be rather minimal, going by the tests conducted so far. Of course, the biggest boost in performance will likely be afforded by GDDR7 memory on a 256-bit bus, compared to its predecessor's GDDR6 memory on a 192-bit bus. In 3DMark's Time Spy DX12 test, which is somewhat of an outdated benchmark, the RTX 5080 Laptop managed roughly 21,900 points. The RTX 4080 Laptop, on average, rakes in around 18,200 points, putting the RTX 5080 Laptop ahead by almost 19%. The RTX 4090 Laptop is also left behind, by around 5%.

Capcom Releases Monster Hunter Wilds PC Performance Benchmark Tool

Hey hunters, how's it going? February is here, which means we are officially in the launch month of Monster Hunter Wilds! On February 28, your journey into the Forbidden Lands begins. Now, to help ensure you have a satisfying, fun experience come launch, we're pleased to share that the Monster Hunter Wilds Benchmark we'd previously mentioned we were looking into is real, it's ready, and it's live right now for you to try!

With the Monster Hunter Wilds Benchmark, we want to help our PC players feel more confident about how their PC will run Monster Hunter Wilds. In the next section, we're going to explain what the Monster Hunter Wilds Benchmark is, how it works, as well as some important information and differences you'll see between this and the Open Beta Test 1 and 2 experiences, so please take a moment to check it out.

UL Solutions Adds Support for DLSS 4 and DLSS Multi Frame Generation to the 3DMark NVIDIA DLSS Feature Test

We're excited to announce that in today's update to 3DMark, we're adding support for DLSS 4 and DLSS Multi Frame Generation to the NVIDIA DLSS feature test. The NVIDIA DLSS feature test and this update were developed in partnership with NVIDIA. The 3DMark NVIDIA DLSS feature test lets you compare the performance and image quality gains brought by enabling DLSS processing. If you have a new GeForce RTX 50 Series GPU, you'll also be able to compare performance with and without the full capabilities of DLSS 4.

You can choose to run the NVIDIA DLSS feature test using DLSS 4, DLSS 3 or DLSS 2. DLSS 4 includes the new DLSS Multi Frame Generation feature, and you can choose between several image quality modes—Quality, Balanced, Performance, Ultra Performance and DLAA. These modes are designed for different resolutions, from Full HD up to 8K. DLSS Multi Frame Generation uses AI to boost frame rates with up to three additional frames generated per traditionally rendered frame. In the 3DMark NVIDIA DLSS feature test, you are able to choose between 2x, 3x and 4x Frame Generation settings if you have an NVIDIA GeForce RTX 50 series GPU.
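
As a quick illustration of what the 2x, 3x, and 4x settings mean for the frame counter, here is an idealized sketch that assumes frame generation scales perfectly; real results are lower because generating frames has its own cost and the base render rate usually drops once it is enabled:

```python
# Idealized upper bound on displayed frame rate with DLSS Multi Frame Generation.
base_fps = 60  # hypothetical traditionally rendered frame rate

for mode, generated_per_rendered in (("2x", 1), ("3x", 2), ("4x", 3)):
    # Each rendered frame is accompanied by N AI-generated frames.
    displayed = base_fps * (1 + generated_per_rendered)
    print(f"MFG {mode}: up to {displayed} FPS displayed from {base_fps} rendered FPS")
```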

Ubisoft Unveils Assassin's Creed Shadows Recommended PC Specs

Hi everyone, Assassin's Creed Shadows is launching March 20, inviting you to experience the intertwined stories of Naoe, an adept shinobi Assassin, and Yasuke, a powerful African samurai. Today, you can pre-order the game on console and PC, and read up on Shadows' upcoming expansion, Claws of Awaji, which brings 10 hours of additional content free with your pre-order.

For those of you playing on PC, we've got all of Assassin's Creed Shadows' recommended PC specs listed in this article. Assassin's Creed Shadows will support ray-traced global illumination and reflections, and will feature an in-game benchmark tool for performance analysis, ultra-wide resolutions, an uncapped framerate, and more. Check out the full specs chart below.

AMD Radeon RX 9070 XT Benchmarked in 3DMark Time Spy Extreme and Speed Way

Although it has only been a few days since the RDNA 4-based GPUs from Team Red hit the scene, it appears that we have already been granted a first look at the 3DMark performance of the highest-end Radeon RX 9070 XT GPU, and to be perfectly honest, the scores seemingly live up to our expectations, although the ray tracing performance is disappointing. Unsurprisingly, the thread has since been erased over at Chiphell, but folks managed to take screenshots in the nick of time.

The specifics indicate that the Radeon RX 9070 XT will arrive with a massive TBP in the range of 330 watts, as revealed by a FurMark snap, which is substantially higher than previously estimated figures. With 16 GB of GDDR6 memory, along with base and boost clocks of 2520 and 3060 MHz, the Radeon RX 9070 XT managed to rake in an impressive 14,591 points in Time Spy Extreme, and around 6,345 points in Speed Way. Needless to say, the drivers are likely far from mature, so it is not outlandish to expect a few more points to be squeezed out of the RDNA 4 GPU.