News Posts matching #Benchmark


UL Announces the Procyon AI Image Generation Benchmark Based on Stable Diffusion

We're excited to announce we're expanding our AI Inference benchmark offerings with the UL Procyon AI Image Generation Benchmark, coming Monday, 25th March. AI has the potential to be one of the most significant new technologies hitting the mainstream this decade, and many industry leaders are competing to deliver the best AI Inference performance through their hardware. Last year, we launched the first of our Procyon AI Inference Benchmarks for Windows, which measured AI Inference performance with a workload using Computer Vision.

The upcoming UL Procyon AI Image Generation Benchmark provides a consistent, accurate and understandable workload for measuring the AI performance of high-end hardware, built with input from members of the industry to ensure fair and comparable results across all supported hardware.

Zhaoxin KX-7000 8-Core CPU Gets Geekbenched

Zhaoxin finally released its oft-delayed KX-7000 CPU series last December—the Chinese manufacturer claimed that its latest "Century Avenue Core" uArch consumer/desktop-oriented range was designed to "deliver double the performance of previous generations." Freshly discovered Geekbench 6.2.2 results indicate that Zhaoxin has succeeded on that front—Wccftech has pored over these figures, generated by an: "entry-level Zhaoxin KX-7000 CPU which has 8 cores, 8 threads, 4 MB of L2, and 32 MB of L3 cache. This chip was running at a base clock of 3.0 GHz and a boost clock of 3.3 GHz which is below its standard 3.6 GHz boost profile."

The new candidate was compared to Zhaoxin's previous-gen KX-U6780A and KX-6000G models, with Intel's Core i3-10100F processor thrown in as a familiar Western point of reference. The KX-7000 scored: "823 points in single-core, and 3813 points in multi-core tests. For comparisons, the Intel's Comet Lake CPU with 4 cores and 8 threads plus a boost of up to 4.3 GHz offers a much higher score. It's around 75% faster in single and 17% faster in multi-core tests within the same benchmark." The higher clock speeds, doubled core counts and TDPs do deliver "twice the performance" when compared to direct forebears—mission accomplished there. It is equally clear, however, that Zhaoxin's latest CPU architecture cannot keep up with a generations-old Team Blue design. By contrast, Loongson's 3A6000 processor is a more promising prospect—reports suggest that this chip is somewhat comparable to mainstream AMD Zen 4 and Intel Raptor Lake products.
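
As a quick sanity check (a minimal sketch using only the figures quoted above), the stated percentage gaps imply Geekbench 6 scores for the Core i3-10100F of roughly 1,440 single-core and 4,460 multi-core points, which is in line with typical listings for that chip:

```python
# Back-of-envelope check of the quoted Geekbench 6 gaps.
kx7000_single, kx7000_multi = 823, 3813

# "around 75% faster in single and 17% faster in multi-core"
implied_i3_single = kx7000_single * 1.75
implied_i3_multi = kx7000_multi * 1.17

print(f"Implied Core i3-10100F: ~{implied_i3_single:.0f} single-core, "
      f"~{implied_i3_multi:.0f} multi-core")
# -> ~1440 single-core, ~4461 multi-core
```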

Avatar: Frontiers of Pandora's Latest Patch Claims to Fix FSR 3 Artefacts & FPS Tracking

A new patch for Avatar: Frontiers of Pandora deployed on March 1, bringing more than 150 fixes and adjustments to the game. Title Update 3 includes technical, UI, balancing, main quest, and side quest improvements, plus additional bug fixes. To provide players with an improved experience, the development team, led by Massive Entertainment, listened to feedback from the community while working on Title Update 3.

An additional patch, Title Update 3.1, was deployed on March 7, adding additional fixes to the game. Check out the full list of improvements included in Title Update 3 & 3.1 here, and read on for the most notable improvements now available in Avatar: Frontiers of Pandora.

Update Mar 14th: TPU has received alerts regarding player feedback: Massive Entertainment's "Title Update 3" has reportedly broken the game's implementation of FSR 3 in Avatar: Frontiers of Pandora. We will keep an eye on official Ubisoft channels—so far they have not addressed these FSR-related problems.

AMD Ryzen 7 8840U "Hawk Point" APU Exceeds Expectations in 10 W TDP Gaming Test

AMD Ryzen 8040 "Hawk Point" mobile processors continue to roll out in all sorts of review sample guises—mostly within the laptop/notebook and handheld gaming PC segments. An example of the latter would be GPD's Hawk Point-refreshed Win Max 2 model—Cary Golomb, a tech reviewer and self-described evangelist of "PC Gaming Handhelds Since 2016," has acquired this device for benchmark comparison purposes. The Ryzen 7 8840U-powered GPD Win Max 2 was pitched against similar devices that house older Team Red APU technologies. Golomb's collection included Valve's Steam Deck LCD model and three "Phoenix" Ryzen 7 7840U-based GPD models. He did not have any top-of-the-line ASUS or Lenovo handhelds within reach, but the Ryzen Z1 Extreme APU found onboard those devices is a close relative of the 7840U.

Golomb's social media post included a screenshot of a Batman: Arkham Knight "average frames per second" comparison chart—all devices were running at a low 10 W TDP setting. The overall verdict favors AMD's new Hawk Point part: "Steam Deck low TDP performance finally dethroned...GPD continues to make the best AMD devices. 8840U shouldn't be better, but everywhere I'm testing, it is consistently better across every TDP. TSP measuring similar." Hawk Point appears to be only a slight upgrade over Phoenix—most of the generational improvements reside within a more capable XDNA NPU, so it is interesting to see the 8840U outperform its predecessor. Both sport AMD's Radeon 780M integrated graphics solution (RDNA 3), while the standard/first iteration Steam Deck makes do with an RDNA 2-era "Van Gogh" iGPU. Golomb found that the "three other GPD 7840U devices behaved somewhat consistently."

MSI Claw Review Units Observed Trailing Behind ROG Ally in Benchmarks

Chinese review outlets have received MSI Claw sample units—the "Please, Xiao Fengfeng" Bilibili video channel has produced several comparison pieces detailing how the plucky Intel Meteor Lake-powered handheld stands up against its closest rival, the ASUS ROG Ally. The latter utilizes an AMD Ryzen Z1 APU—in Extreme or Standard form—and many news outlets have pointed out that the Z1 Extreme processor is a slightly reworked Ryzen 7 7840U "Phoenix" processor. Intel and its handheld hardware partners have not dressed up Meteor Lake chips with alternative gaming monikers—simply put, the MSI Claw arrives with a Core Ultra 7 155H or Core Ultra 5 135H processor onboard. The two rival systems both run Windows 11, and also share the same screen size, resolution, display technology (IPS) and 16 GB LPDDR5-6400 memory configuration. The almost eight-month-old ASUS handheld seems to outperform its near-launch competition.

Xiao Fengfeng's review (Core Ultra 7 155H versus Z1 Extreme) focuses on different power levels and how they affect handheld performance—the Claw and Ally both have user-selectable TDP modes. A VideoCardz analysis piece lays out key divergences: "Both companies offer easy TDP profile switches, allowing users to adjust performance based on the game's requirements or available battery life. The Claw's larger battery could theoretically offer more gaming time or higher TDP with the same battery life. The system can work at 40 W TDP level (but in reality it's between 35 and 40 watts)...In the Shadow of the Tomb Raider test, the Claw doesn't seem to outperform the ROG Ally. According to a Bilibili creator's test, the system falls short at four different power levels: 15 W, 20 W, 25 W, and max TDP (40 W for Claw and 30 W for Ally)."

AMD Develops ROCm-based Solution to Run Unmodified NVIDIA CUDA Binaries on AMD Graphics

AMD has quietly funded an effort over the past two years to enable binary compatibility for NVIDIA CUDA applications on its ROCm stack, allowing CUDA software to run on AMD Radeon GPUs without any changes to the source code. The project responsible is ZLUDA, which was initially developed to provide CUDA support on Intel graphics. The developer behind ZLUDA, Andrzej Janik, was contracted by AMD in 2022 to adapt his project for use on Radeon GPUs with HIP/ROCm. He spent two years bringing functional CUDA support to AMD's platform, allowing many real-world CUDA workloads to run without modification. AMD decided not to productize this effort for unknown reasons, but did open-source it once funding ended, per their agreement. Phoronix has since tested AMD's ZLUDA implementation across a wide variety of benchmarks.

Benchmarks found that proprietary CUDA renderers and software worked on Radeon GPUs out of the box with the drop-in ZLUDA library replacements. CUDA-optimized Blender 4.0 rendering now runs faster on AMD Radeon GPUs than the native ROCm/HIP port, reducing render times by around 10-20% depending on the scene. The implementation is surprisingly robust, considering it was a single-developer project. However, there are some limitations—OptiX and PTX assembly code are not yet fully supported. Overall, though, testing showed very promising results. In Geekbench, CUDA-optimized binaries produce up to 75% better results than the generic OpenCL runtimes. With the ZLUDA libraries handling API translation, unmodified CUDA binaries can now run directly on top of ROCm and Radeon GPUs. Strangely, the ZLUDA port targets AMD ROCm 5.7, not the newest 6.x versions. Only time will tell if AMD continues investing in this approach to simplify porting of CUDA software. However, the open-sourced project now enables anyone to contribute and help improve compatibility. For a complete review, check out Phoronix's tests.
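
To illustrate what drop-in API translation means in practice, here is a minimal sketch (not from the ZLUDA project itself) that probes the CUDA driver API from Python via ctypes. On a Linux system where ZLUDA's replacement libcuda is placed first on the library search path, these same unmodified calls would be serviced by ROCm on a Radeon GPU rather than by NVIDIA's driver:

```python
# Minimal CUDA driver API probe. With ZLUDA's drop-in libcuda on the
# library search path, the identical calls are translated to ROCm/HIP.
import ctypes

cuda = ctypes.CDLL("libcuda.so")  # or ZLUDA's replacement library

assert cuda.cuInit(0) == 0  # 0 == CUDA_SUCCESS

count = ctypes.c_int()
assert cuda.cuDeviceGetCount(ctypes.byref(count)) == 0

device = ctypes.c_int()
assert cuda.cuDeviceGet(ctypes.byref(device), 0) == 0

name = ctypes.create_string_buffer(256)
assert cuda.cuDeviceGetName(name, len(name), device) == 0
print(f"{count.value} CUDA device(s); device 0: {name.value.decode()}")
```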

AMD Ryzen 9 7940HX APU Benchmarked in ASUS Tianxuan 5 Pro Laptop

ASUS China has distributed Tianxuan 5 Pro laptop review samples to media outlets in the region—a video evaluation was uploaded to Bilibili yesterday, as discovered and shared by 9550pro. The reviewer, "Wheat Milk Mitsu," put his sampled laptop's AMD Ryzen 9 7940HX processor through the proverbial wringer—with benchmarking exercises conducted in Cinebench R23, PCMark 10, Counter-Strike 2, Cyberpunk 2077, Metro Exodus and more. The Ryzen 9 7940HX "Dragon Range" APU was last spotted in the specification sheets for ASUS TUF Gaming A16 (2024) laptop models—the mobile processor is essentially an underclocked offshoot of Team Red's Ryzen 9 7945HX. AMD's Ryzen 8040 "Hawk Point" series has received most of the attention in Western markets—we only see occasional coverage of older Zen 4 "Dragon Range" parts.

AMD's slightly weaker Ryzen 9 7940HX processor is no slouch when compared to its higher-clocked sibling, despite a lower base clock (2.4 GHz) and boost (5.2 GHz)—the Tianxuan (China's equivalent of TUF Gaming) branded laptop was outfitted with a GeForce RTX 4070 mobile GPU and 16 GB of DDR5-5600 RAM. Cinebench R23 results indicate a marginal 3.7% single-core deficit, and the multi-core figures show an even smaller difference of 1%. The two Dragon Range APUs exhibited largely the same performance in gaming scenarios, although the 7945HX pulls ahead in the Counter-Strike 2 frame rate stakes—328 vs. 265 FPS at 1440p, and 378 vs. 308 FPS at 1080p. AMD's convoluted naming schemes make it difficult to keep track of its many mobile offerings—a 7840HX SKU could join the Dragon Range family in Q1 2024. A few Western media outlets believe that a smattering of these parts are destined for global markets, but Team Red's marketing HQ has not bothered to announce them in any official capacity. Strange times.

UL Solutions Previews Upcoming 3DMark Steel Nomad Benchmark

Thank you to the 3DMark community - the gamers, overclockers, hardware reviewers, tech-heads and those in the industry using our benchmarks, who have joined us in discovering what the cutting edge of PC hardware can do over this last quarter of a century. Looking back, it's amazing how far graphics have come, and we're very excited to see what the next 25 years bring.

After looking back, it's time to share a sneak peek of what's coming next. Here are some preview screenshots for 3DMark Steel Nomad, our successor to 3DMark Time Spy. It's been more than seven years since we launched Time Spy, and after more than 42 million submitted results, we think it's time for a new heavy non-ray tracing benchmark. Steel Nomad will be our most demanding non-ray tracing benchmark and will not only support Windows using DirectX 12, but also macOS and iOS using Metal, Android using Vulkan, and Linux using Vulkan for Enterprise and reviewers. To celebrate 3DMark's 25th year, the scene will feature some callbacks to many of our previous benchmarks. We hope you have fun finding them all!

Maxon Introduces Cinebench 2024

Maxon, developer of professional software solutions for editors, filmmakers, motion designers, visual effects artists and creators of all types, is thrilled to announce the highly anticipated release of Cinebench 2024. The latest iteration of the industry-standard benchmarking software, a cornerstone of computer performance evaluation for two decades, embraces cutting-edge technology to provide artists, designers, and creators with a more accurate and relevant representation of their hardware's capabilities.

Redshift Rendering Engine Integration
Cinebench 2024 ushers in a new era by embracing the power of Redshift, Cinema 4D's default rendering engine. Unlike its predecessors, which utilized Cinema 4D's standard renderer, Cinebench 2024 utilizes the same render algorithms across both CPU and GPU implementations. This leap to the Redshift engine ensures that performance testing aligns seamlessly with the demands of modern creative workflows, delivering accurate and consistent results.

UL Solutions Launches 3DMark Solar Bay, New Cross-Platform Ray Tracing Benchmark

We're excited to announce the launch of 3DMark Solar Bay, a new cross-platform benchmark for testing ray traced graphics performance on Windows PCs and high-end Android devices. This benchmark measures games-related graphics performance by rendering a demanding, ray-traced scene in real time. Solar Bay is available now for Android on the Google Play Store and for Windows on Steam, Epic Games or directly from UL Solutions.

Compare ray tracing performance across platforms
Ray tracing is the showcase technology for Solar Bay, simulating real-time reflections. Compared to traditional rasterization, ray-traced scenes produce far more realistic lighting. While dedicated desktop and laptop graphics processing units (GPUs) have supported ray tracing for several years, it's only recently that integrated GPUs and Android devices have been capable of running real-time ray-traced games at frame rates acceptable to gamers.
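
For a sense of the per-ray arithmetic such a benchmark exercises millions of times per frame, the core of a mirror reflection is the classic formula r = d - 2(d·n)n. A plain-Python sketch (illustrative only, not UL's implementation) is shown below:

```python
# Reflect an incoming ray direction d about a unit surface normal n:
# r = d - 2(d.n)n — the fundamental step behind ray-traced reflections.
def reflect(d, n):
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

# A ray heading down-and-right hits a floor whose normal points straight up:
print(reflect((0.707, -0.707, 0.0), (0.0, 1.0, 0.0)))  # -> (0.707, 0.707, 0.0)
```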

AMD's Radeon RX 7900 GRE Gets Benchmarked

AMD's China-exclusive Radeon RX 7900 GRE has been put through its paces by Expreview, and in short, the US$740-equivalent card should not carry the 7900-series moniker. In most of the tests, the card performs like a Radeon RX 6950 XT or worse, and it is even beaten by the Radeon RX 6800 XT in 3DMark Fire Strike, if only by the tiniest amount. Expreview has done a fairly limited comparison, mainly pitching the Radeon RX 7900 GRE against the Radeon RX 7900 XT and NVIDIA's GeForce RTX 4070. It loses by a mile to AMD's higher-end GPU, which was by no means unexpected, as this is a lower-tier product.

However, when it comes to the GeForce RTX 4070, AMD struggles to keep up at 1080p, where NVIDIA takes home the win in games like The Last of Us Part 1 and Diablo 4. In games like F1 22 and Assassin's Creed Valhalla, AMD is only ahead by a mere percentage point or less. Once ray tracing is enabled, AMD only wins in F1 22—again by less than one percent—and in Far Cry 6, where it is almost three percent faster. Moving up in resolution, the Radeon RX 7900 GRE ends up being a clear winner, most likely thanks in part to its 16 GB of VRAM, and at 1440p the GeForce RTX 4070 also falls behind in most of the ray-traced game tests, if only just in most of them. At 4K the NVIDIA card can no longer keep up, but the Radeon RX 7900 GRE isn't really a 4K champion either, dropping under 60 FPS in more resource-heavy games like Cyberpunk 2077 and The Last of Us Part 1. Considering the GeForce RTX 4070 Ti only costs around US$50 more, it seems like it would be the better choice, despite having less VRAM. AMD appears to have pulled an NVIDIA with this card, which, at least performance-wise, seems to belong in the Radeon RX 7800 segment. The benchmark figures also suggest that the actual Radeon RX 7800 cards won't be worth the wait, unless AMD prices them very competitively.

Update 11:45 UTC: [Editor's note: The official MSRP from AMD appears to be US$649 for this card, which is more reasonable, but the performance still places it in a category lower than the model name suggests.]

AMD's Ryzen 5 7500F Gets Benchmarked, Available Globally

AMD's recently added Ryzen 5 7500F for the AM5 socket was initially said to only be available in the PRC, but according to AMD, it will be available globally. That said, AMD apparently only seeded review units to select Asian media, among them Quasar Zone in Korea, who put the six-core, 12-thread CPU through its paces. Overall performance is very close to the Ryzen 5 7600, which isn't really all that strange, considering the two only differ by 100 MHz in both base and boost clocks. In most of the benchmarks, the Ryzen 5 7500F is around two to three percent slower than the Ryzen 5 7600.

When compared to the slightly pricier Intel Core i5-13400, AMD falls behind in multithreaded apps, but comes out on top in most of the games tested, with the usual odd exception, as would be expected. On average, the Ryzen 5 7500F is some 13 percent faster in the game benchmarks at 1080p, although this is using an NVIDIA GeForce RTX 4090 graphics card. It even beats the overall much faster Intel Core i5-13500 in gaming by around nine percent on average. However, the Ryzen 5 7500F system loses out to the two Intel systems when it comes to power efficiency, drawing around 20 watts more on average when gaming. At US$179.99 it seems like AMD finally has a budget-friendly CPU for the AM5 platform, if you're willing to forgo the integrated GPU. It's unknown at this point in time when the CPU will be available outside of Asia.

Leaked AMD Radeon RX 7700 & RX 7800 GPU Benchmarks Emerge

A set of intriguing 3DMark Time Spy benchmark results has been released by hardware leaker All_The_Watts!!—these are alleged to have been produced by prototype Radeon RX 7700 and Radeon RX 7800 graphics cards (rumored to be based on variants of the Navi 32 GPU). The current RDNA 3 lineup of mainstream GPUs is severely lacking in middle-ground representation, but Team Red is reported to be working on a number of models to fill the gap. We expect more leaks to emerge as we get closer to a rumored product reveal scheduled for late August (to coincide with Gamescom).

The recently released 3DMark Time Spy scores reveal that the alleged Radeon RX 7700 candidate scored 15,465 points, while the RX 7800 achieved 18,197 points—both running on an unspecified test system. The results (charted by Tom's Hardware) are not going to generate a lot of excitement at this stage when compared to predecessors and some of the competition—evaluation samples are not really expected to be optimized to a great degree. We hope to see finalized products, with decent drivers, putting in a good appearance and performing better later this year.

Denuvo Setting Up Benchmarking System, Attempting to Disprove Performance Shortfalls

Irdeto is the current owner of Denuvo Software Solutions—the Austrian development team behind the infamous anti-tamper technology and digital rights management (DRM) system. According to Ars Technica, neither of these organizations has made great efforts (in the past) to engage in discussion about the controversial anti-piracy and anti-cheat suites, but Steeve Huin, Irdeto's Chief Operating Officer of Video Games, agreed to grant the publication an exclusive interview. The article is titled "Denuvo wants to convince you its DRM isn't evil," which sums up a lot of the public perception regarding Denuvo technologies—the company has received plenty of flak for high CPU usage and for causing excessive activity within storage components. Some users propose that the latter scenario has resulted in shorter lifespans for their solid-state drives. Ars Technica has a long history of Denuvo-related coverage, so a company representative was sent in for some damage control.

Off the bat, Huin acknowledges that he and his colleagues are aware of Denuvo's reputation: "In the pirating/cracking community, we're seen as evil because we're helping DRM exist and we're ensuring people make money out of games." He considers the technology to be a positive force: "Anti-piracy technologies is to the benefit of the game publishers, [but also] is of benefit to the players in that it protects the [publisher's] investment and it means the publishers can then invest in the next game...But people typically don't think enough of that...Whether people want to believe it or not, we are all gamers, we love gaming, we love being part of it. We develop technologies with the intent to make the industry better and stronger."

AMD Ryzen 5 7500F CPU Gets Benchmarked

The Puget Systems benchmark database outed AMD's Ryzen 5 7500F 6-core/12-thread processor last week—industry experts proposed that it was the first example of a Ryzen 7000 SKU with a disabled iGPU. A South Korean retailer indicated unit pricing of around $170-180, with a possible local launch date on July 7. It seems that retail units have not hit the market (at the time of writing), but Geekbench 6.1 results have since appeared online. According to an entry on the Geekbench database—that was spotted by Olrak29 earlier today—the Ryzen 5 7500F has a base clock of 3.7 GHz. It can boost up to 5.0 GHz on a single core, while all cores can reach a maximum of 4.8 GHz. The listing confirms that this new SKU sits firmly in the AMD "Raphael" CPU family.

The processor was tested on a system running Microsoft Windows 11—partial specifications of the evaluation build include an ASUS TUF Gaming A620M-PLUS WIFI motherboard and 32 GB of DDR5-6000 RAM. The tested Ryzen 5 7500F achieved scores of 2782 points (single-core) and 13323 points (multi-threaded), which places it slightly ahead of the Ryzen 5 7600X in multi-thread performance. It trails slightly behind in its single-core result, but these figures are impressive considering that the Ryzen 5 7500F will likely be offered at a more budget-friendly price than its closest iGPU-enabled siblings.

NVIDIA H100 GPUs Set Standard for Generative AI in Debut MLPerf Benchmark

In a new industry-standard benchmark, a cluster of 3,584 H100 GPUs at cloud service provider CoreWeave trained a massive GPT-3-based model in just 11 minutes. Leading users and industry-standard benchmarks agree: NVIDIA H100 Tensor Core GPUs deliver the best AI performance, especially on the large language models (LLMs) powering generative AI.

H100 GPUs set new records on all eight tests in the latest MLPerf training benchmarks released today, excelling on a new MLPerf test for generative AI. That excellence is delivered both per-accelerator and at-scale in massive servers. For example, on a commercially available cluster of 3,584 H100 GPUs co-developed by startup Inflection AI and operated by CoreWeave, a cloud service provider specializing in GPU-accelerated workloads, the system completed the massive GPT-3-based training benchmark in less than eleven minutes.

MLCommons Shares Intel Habana Gaudi2 and 4th Gen Intel Xeon Scalable AI Benchmark Results

Today, MLCommons published results of its industry AI performance benchmark, MLPerf Training 3.0, in which both the Habana Gaudi2 deep learning accelerator and the 4th Gen Intel Xeon Scalable processor delivered impressive training results.

"The latest MLPerf results published by MLCommons validates the TCO value Intel Xeon processors and Intel Gaudi deep learning accelerators provide to customers in the area of AI. Xeon's built-in accelerators make it an ideal solution to run volume AI workloads on general-purpose processors, while Gaudi delivers competitive performance for large language models and generative AI. Intel's scalable systems with optimized, easy-to-program open software lowers the barrier for customers and partners to deploy a broad array of AI-based solutions in the data center from the cloud to the intelligent edge." - Sandra Rivera, Intel executive vice president and general manager of the Data Center and AI Group

Geekbench Leak Suggests NVIDIA GeForce RTX 4060 Nearly 20% Faster than RTX 3060

NVIDIA is launching its lower-end GeForce RTX 4060 graphics card series next week, but has kept schtum about the smaller Ada Lovelace AD107 GPU's performance level. This more budget-friendly offering (MSRP $299) is rumored to have 3,072 CUDA cores, 24 RT cores, 96 Tensor cores, 96 TMUs, and 32 ROPs. It will likely sport 8 GB of GDDR6 memory across a 128-bit wide memory bus. Benchleaks has discovered the first set of test results via a database leak, and posted these details on social media earlier today. Two Geekbench 6 runs were conducted on a test system comprising an Intel Core i5-13600K CPU, an ASUS Z790 ROG APEX motherboard, DDR5-6000 memory and the aforementioned GeForce card.

The GPU Compute test utilizing the Vulkan API resulted in a score of 99419, and another using OpenCL achieved 105630. We are looking at a single sample here, so expect variations when other units get tested in Geekbench prior to the June 29 launch. The RTX 4060 is about 12% faster (in Vulkan) than its direct predecessor, the RTX 3060. The gap widens in OpenCL, where it offers an almost 20% jump over the older card. The RTX 3060 Ti remains around 3-5% faster than the RTX 4060. We hope to see actual in-game benchmarking carried out soon.

NVIDIA H100 Hopper GPU Tested for Gaming, Slower Than Integrated GPU

NVIDIA's H100 Hopper GPU is a device designed for pure AI and other compute workloads, with the least amount of consideration for gaming workloads that involve graphics processing. However, it is still interesting to see how this 30,000 USD GPU fares in comparison to gaming GPUs, and whether it is even possible to run games on it. It turns out that it is technically feasible but doesn't make much sense, as the Chinese YouTube channel Geekerwan notes. Based on the GH100 GPU SKU with 14,592 CUDA cores, the H100 PCIe version tested here can achieve 204.9 TeraFLOPS at FP16, 51.22 TeraFLOPS at FP32, and 25.61 TeraFLOPS at FP64, with its natural strength lying in accelerating AI workloads.
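
The FP32 and FP64 figures line up with simple theoretical-throughput arithmetic, assuming the H100 PCIe's published boost clock of roughly 1.755 GHz (two FP32 FLOPs per core per clock via fused multiply-add, with FP64 at half rate on GH100); the quoted FP16 figure scales up further from there. A quick check:

```python
# Back-of-envelope throughput check (assumes ~1.755 GHz boost clock).
cuda_cores = 14_592
boost_clock_hz = 1.755e9

fp32_tflops = cuda_cores * 2 * boost_clock_hz / 1e12  # FMA = 2 FLOPs/clock
fp64_tflops = fp32_tflops / 2                         # FP64 at half rate

print(f"FP32: {fp32_tflops:.2f} TFLOPS, FP64: {fp64_tflops:.2f} TFLOPS")
# -> FP32: 51.22 TFLOPS, FP64: 25.61 TFLOPS, matching the quoted figures.
```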

However, how does it fare in gaming benchmarks? Not very well, as the testing shows. It scored 2681 points in 3DMark Time Spy, which is lower than AMD's integrated Radeon 680M, which managed to score 2710 points. Interestingly, the GH100 has only 24 ROPs (render output units), while the gaming-oriented GA102 (NVIDIA's highest-end Ampere gaming GPU) has 112 ROPs. This alone provides a clear picture as to why the H100 GPU is used for compute only. Since it doesn't have any display outputs, the system needed another regular GPU to provide the picture output, while the computation happened on the H100.

Primate Labs Rolls Out Geekbench 6.1

Primate Labs has released Geekbench 6.1, the newest update to its cross-platform CPU and GPU benchmark that measures your system's performance. The latest version brings new features and improvements, including an upgrade to Clang 16, an increased workload gap that should minimize thermal throttling on some devices, support for SVE and AVX-512 FP16 instructions, and support for fixed-point math. The update also improves multi-core performance.

These changes result in Geekbench 6.1 single-core scores that are up to 5 percent higher, and multi-core scores up to 10 percent higher, compared to Geekbench 6.0. Due to these differences, Primate Labs recommends that users do not compare scores between Geekbench 6.0 and Geekbench 6.1. Geekbench 6.1 is also a recommended update, according to Primate Labs.

Capcom Releases Street Fighter 6 PC Benchmark Tool

Capcom has kindly provided a new benchmarking tool for folks who are wondering whether their souped-up PC gaming rigs can run the upcoming seventh (not sixth, despite the title) main entry in the Street Fighter series with aplomb - the testing suite can be downloaded from here. The development team's introductory message states: "The Street Fighter 6 Benchmark Tool monitors gameplay on the Street Fighter 6 demo and calculates a score that indicates your PC's performance. The results of the benchmarking will be shown as follows, with a score of 91 or above demonstrating that your PC can play the game with ease."

The explanation continues: "If your PC does not meet the system requirements needed to run this benchmarking software, it may not launch properly. If that happens, please reconfirm that you satisfy the criteria listed on this page under System Requirements." Street Fighter 6 is arriving this Friday (June 2), so Capcom's benchmarking tool only gives a little bit of advance notice - an unfortunate few who "cannot operate the game" (with a 0-30 score) will need to make the necessary PC upgrades in time for launch day action. Or they could simply buy the bare minimum point of entry on console: a PlayStation 4 Slim, or the cheapest current-generation system - the Xbox Series S.

Intel Core Ultra 7 1003H CPU Benchmark Results Appear Online

A hardware tipster - Benchleaks - has highlighted an interesting new entry for an Intel Meteor Lake Client Platform CPU on the PugetBench site - it seems that early platform validation results have been uploaded by mistake (prior to embargo restrictions). The MTL-P CPU in question appears to be a laptop/mobile variant given its "H" designation. We are also looking at another example of Team Blue's new SKU naming system with this Core Ultra 7 1003H processor - the company has confirmed that revised branding will debut as part of the Meteor Lake family.

The previously leaked Core Ultra 5 1003H model also sported the "Ultra" tag, so it is possible that only high-end examples have been outed online over the past month. The Puget Systems Lightroom Classic benchmark results produced by the Core Ultra 7 1003H CPU were not exactly next level - it scored only 534.5 points overall - which could indicate that a prototype unit was benched. An older Core i7-8665U laptop processor only lagged behind by 32.5 points. The test platform was fitted with 16 GB (2 x 8 GB) of DDR5-5600 memory, and ran in a Windows 11 Enterprise (22621) OS environment. Intel's latest marketing spiel is bigging up the potential of Meteor Lake's AI acceleration capabilities, via the built-in neural VPU.

ASUS ROG Ally's Ryzen Z1 Extreme Custom APU Verified by Benchmark Info

An intriguing entry has appeared on the Geekbench Browser site - the information was uploaded with a timestamp from this morning (11:07 am on April 20, to be specific), pointing to a mobile ASUS device that was tested in Geekbench 5. The archived info dump reveals that the subject of the benchmark was the ASUS ROG Ally handheld gaming console, which has received a lot of attention in recent weeks - it is being touted as a very serious alternative to Valve's Steam Deck, a handheld gaming PC that is quite popular with enthusiasts. The ROG Ally will need to offer a potent hardware package if it is to compete directly with the Steam Deck, and the latest information confirms that this new contender is very promising in that department. Geekbench 5 awarded an impressive OpenCL score of 35498 to the RC71L variant of the ROG Ally; an RC71X-assigned model is known to exist, but details of its exact nature have not been revealed. This particular ROG Ally unit was running Windows 11 Home (64-bit) under the operating system's performance power plan.

The new entry on Geekbench Browser shows that the Ally is packing an AMD Ryzen Z1 Extreme APU, which appears to be a customized version of the Ryzen 7 7840U mobile processor - previous rumors have indicated that the latter would be in the driving seat. Both "Phoenix" range SoCs share the same basic 8-core/16-thread makeup, but the Z1 Extreme is capable of boosting up to 5.062 GHz from a base frequency of 3.30 GHz. AMD's Radeon 780M iGPU (RDNA 3) is expected to handle the Ally's graphical side, but the benchmark info dump only provides scant details about the GPU (codenamed "gfx1103") - most notably the presence of six compute units, an 800 MHz max frequency, and access to 8.20 GB of video memory. Number-crunching boffins have calculated that the Ally could field 768 FP32 cores, courtesy of the dual-issue SIMD design inherent to RDNA 3.
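
The arithmetic behind that estimate is straightforward if the reported six compute units are taken at face value (Geekbench's unit reporting for RDNA 3 parts is ambiguous, so treat this as the article's assumption rather than a confirmed specification):

```python
# The boffins' estimate: 6 reported compute units x 64 shaders each,
# doubled by RDNA 3's dual-issue FP32 SIMDs.
reported_units = 6
shaders_per_unit = 64
dual_issue_factor = 2

print(reported_units * shaders_per_unit * dual_issue_factor)  # -> 768
```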

3DMark Gets AMD FidelityFX Super Resolution 2 (FSR 2) Feature Test

UL Benchmarks today released an update to 3DMark that adds a Feature Test for AMD FidelityFX Super Resolution 2 (FSR 2), the company's popular upscaling-based performance enhancement. This was long overdue, as 3DMark has had a Feature Test for DLSS for years now; and as of October 2022, it even got one for Intel XeSS. The new FSR 2 Feature Test uses a scene from the Speed Way DirectX 12 Ultimate benchmark, where it compares the fine details of a vehicle and a technic droid between native resolution with TAA and FSR 2, and highlights the performance uplift. To use the feature test, you'll need any GPU that supports DirectX 12 and FSR 2 (that covers AMD, NVIDIA, and Intel Arc). Owners of 3DMark who purchased it before October 12, 2022 will need to purchase the Speed Way upgrade to unlock the AMD FSR 2 feature test.

OpenAI Unveils GPT-4, Claims to Outperform Humans in Certain Academic Benchmarks

We've created GPT-4, the latest milestone in OpenAI's effort in scaling up deep learning. GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks. For example, it passes a simulated bar exam with a score around the top 10% of test takers; in contrast, GPT-3.5's score was around the bottom 10%. We've spent 6 months iteratively aligning GPT-4 using lessons from our adversarial testing program as well as ChatGPT, resulting in our best-ever results (though far from perfect) on factuality, steerability, and refusing to go outside of guardrails.

Over the past two years, we rebuilt our entire deep learning stack and, together with Azure, co-designed a supercomputer from the ground up for our workload. A year ago, we trained GPT-3.5 as a first "test run" of the system. We found and fixed some bugs and improved our theoretical foundations. As a result, our GPT-4 training run was (for us at least!) unprecedentedly stable, becoming our first large model whose training performance we were able to accurately predict ahead of time. As we continue to focus on reliable scaling, we aim to hone our methodology to help us predict and prepare for future capabilities increasingly far in advance—something we view as critical for safety.