News Posts matching #Benchmark


3DMark Gets AMD FidelityFX Super Resolution 2 (FSR 2) Feature Test

UL Benchmarks today released an update to 3DMark that adds a Feature Test for AMD FidelityFX Super Resolution 2 (FSR 2), AMD's popular upscaling-based performance enhancement. This was long overdue, as 3DMark has had a Feature Test for DLSS for years now; and as of October 2022, it even got one for Intel XeSS. The new FSR 2 Feature Test uses a scene from the Speed Way DirectX 12 Ultimate benchmark, where it compares fine details of a vehicle and a technic droid between native resolution with TAA and FSR 2, and highlights the performance uplift. To use the feature test, you'll need any GPU that supports DirectX 12 and FSR 2 (that covers AMD, NVIDIA, and Intel Arc). Owners of 3DMark who purchased it before October 12, 2022, will need to buy the Speed Way upgrade to unlock the AMD FSR 2 feature test.

OpenAI Unveils GPT-4, Claims to Outperform Humans in Certain Academic Benchmarks

We've created GPT-4, the latest milestone in OpenAI's effort in scaling up deep learning. GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks. For example, it passes a simulated bar exam with a score around the top 10% of test takers; in contrast, GPT-3.5's score was around the bottom 10%. We've spent 6 months iteratively aligning GPT-4 using lessons from our adversarial testing program as well as ChatGPT, resulting in our best-ever results (though far from perfect) on factuality, steerability, and refusing to go outside of guardrails.

Over the past two years, we rebuilt our entire deep learning stack and, together with Azure, co-designed a supercomputer from the ground up for our workload. A year ago, we trained GPT-3.5 as a first "test run" of the system. We found and fixed some bugs and improved our theoretical foundations. As a result, our GPT-4 training run was (for us at least!) unprecedentedly stable, becoming our first large model whose training performance we were able to accurately predict ahead of time. As we continue to focus on reliable scaling, we aim to hone our methodology to help us predict and prepare for future capabilities increasingly far in advance—something we view as critical for safety.
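For context on how a model like this is consumed in practice, here is a minimal, text-only sketch of a GPT-4 request using the legacy (pre-1.0) openai Python package; the prompt, API key handling, and parameter choices are illustrative assumptions, not part of the announcement, and image inputs were not generally available through the API at launch.

```python
# Minimal sketch of a text-only GPT-4 request using the legacy (pre-1.0)
# openai Python package; the prompt and settings are illustrative assumptions.
import openai

openai.api_key = "sk-..."  # placeholder; read from an environment variable in practice

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain what a simulated bar exam measures in one sentence."},
    ],
    temperature=0.2,
)

print(response["choices"][0]["message"]["content"])
```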

AMD Ryzen 9 7945HX Beats Core i9-13950HX In Gaming Performance, Dragon Range Equipped Laptops Available Now

AMD has announced the immediate availability of its Ryzen 7045HX-series (Dragon Range) processors for high-performance laptops. In a YouTube video released on March 10, AMD's Jason Banta announced the availability of the world's most powerful mobile processor, the Ryzen 9 7945HX, and listed the OEM partners that have integrated the 7945HX into flagship laptop models. He also declared the range-topping CPU a competition beater: gaming benchmark tests have demonstrated that the Ryzen 9 7945HX beats Intel's Raptor Lake Core i9-13950HX by an average margin of 10%.

Intel Xeon W-3400/2400 "Sapphire Rapids" Processors Run First Benchmarks

Thanks to Puget Systems, we have a preview of Intel's latest Xeon W-3400 and Xeon W-2400 workstation processors based on Sapphire Rapids core technology. Delivering up to 56 cores and 112 threads, these CPUs can be paired with up to eight terabytes of eight-channel DDR5-4800 memory. For expansion, they offer up to 112 PCIe 5.0 lanes and come with TDPs of up to 350 W; some models are unlocked for overclocking. This interesting HEDT family for workstation use comes at a premium, with an MSRP of $5,889 for the top-end SKU, and motherboard prices are also on the pricey side. However, none of this should come as a surprise given the performance professionals expect from these chips. Puget Systems has published test results that include: Photoshop, After Effects, Premiere Pro, DaVinci Resolve, Unreal Engine, Cinebench R23.2, Blender, and V-Ray. Note that Puget Systems said: "While this post has been an interesting preview of the new Xeon processors, there is still a TON of testing we want to do. The optimizations Intel is working on is of course at the top, but there are several other topics we are highly interested in." So we expect better numbers in the future.
Below, you can see the comparison with AMD's competing Threadripper Pro HEDT SKUs, along with power usage using different Windows OS power profiles:

Intel Publishes Sorting Library Powered by AVX-512, Offers 10-17x Speed Up

Intel has recently updated its open-source C++ header library for high-performance SIMD-based sorting to support the AVX-512 SIMD instruction set. Extending the existing AVX2 support, the sorting functions now use 512-bit extensions to offer greater performance. According to Phoronix, NumPy, the Python library for numerical computing that underpins a lot of software, has updated its codebase to use the AVX-512-accelerated sorting functions, yielding a substantial performance uplift. The library uses AVX-512 to vectorize the quicksort for 16-bit and 64-bit data types using the extended instruction set. Benchmarked on an Intel Tiger Lake system, NumPy sorting saw a 10-17x increase in performance.

Intel engineer Raghuveer Devulapalli authored the NumPy change, which was merged into the NumPy codebase on Wednesday. Regarding individual data types, the new implementation speeds up 16-bit integer sorting by 17x and 32-bit data type sorting by 12-13x, while 64-bit float sorting of random arrays sees a 10x speed-up. Built on the x86-simd-sort code, this speed-up shows the power of AVX-512 and its ability to enhance the performance of various libraries. We hope to see more implementations of AVX-512, as AMD has joined the party by placing AVX-512 processing elements on Zen 4.
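To give a sense of where the new code path lands, here is a minimal sketch of the kind of NumPy sorts the AVX-512 quicksort accelerates; the array sizes are arbitrary, and any speed-up assumes a NumPy build that includes the x86-simd-sort code running on an AVX-512-capable CPU.

```python
# Minimal sketch: sorting the fixed-width data types covered by the AVX-512
# quicksort path. Array sizes are arbitrary; actual speed-ups require a NumPy
# build with the x86-simd-sort code and an AVX-512-capable CPU.
import numpy as np

rng = np.random.default_rng(0)

a16 = rng.integers(-2**15, 2**15, size=1_000_000, dtype=np.int16)
a32 = rng.integers(-2**31, 2**31, size=1_000_000, dtype=np.int32)
a64 = rng.random(1_000_000)  # float64 values in [0, 1)

# np.sort defaults to quicksort (introsort); on supported builds these calls
# dispatch to the vectorized AVX-512 implementation for these element types.
sorted16 = np.sort(a16, kind="quicksort")
sorted32 = np.sort(a32, kind="quicksort")
sorted64 = np.sort(a64, kind="quicksort")
```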

Alleged NVIDIA AD106 GPU Tested in 3DMark and AIDA64

Benchmarks and specifications of an alleged NVIDIA AD106 GPU have turned up on Chiphell, although the original poster has since removed all the details. Thanks to @harukaze5719 on Twitter, who posted the details, we still get an insight into what we might be able to expect from NVIDIA's upcoming mid-range cards. All of these details should be taken with a grain of salt, as the original source isn't exactly what we'd call trustworthy. Based on the data in the TPU GPU database, the GPU in question should be the GeForce RTX 4070 Mobile with much higher clock speeds, or an equivalent desktop part that offers more CUDA cores than the RTX 4060 Ti. Whatever the specific AD106 GPU is, it's being compared to the GeForce RTX 2080 Super and the RTX 3070 Ti.

The GPU was tested in AIDA64 and 3DMark, and it beats the RTX 2080 Super in all of the tests while drawing some 55 W less power. Some of the wins are within the margin of testing error, for example the memory performance in AIDA64. However, we're looking at a GPU with only half the memory bus width here, as the AD106 GPU has a 128-bit memory bus compared to 256-bit on the RTX 2080 Super. Its memory clocks are much higher, but overall memory bandwidth is still nearly 36 percent higher on the RTX 2080 Super. Even so, the AD106 GPU manages to beat the RTX 2080 Super in all of the AIDA64 memory benchmarks.

BAPCo Releases SYSmark 30, the Latest Generation of the Premier PC Performance Metric Featuring New Applications and Scenarios

BAPCo, a non-profit consortium of leading PC hardware manufacturers, released SYSmark 30, the latest generation of the premier PC benchmark that measures and compares system performance using real-world applications and workloads.

The Office Application scenario features updated workloads for popular office suite-style applications. The General Productivity scenario features tasks like web browsing, file compression, and application installation. The new Photo Editing scenarios measure the responsiveness of creative photo management and manipulation usage models. The Advanced Content Creation scenario heavily uses photo and video editing applications, including multitasking.

First Alleged AMD Radeon RX 7900-series Benchmarks Leaked

With only a couple of days to go until the AMD RX 7900-series reviews go live, some alleged benchmarks of both the RX 7900 XTX and RX 7900 XT have leaked on Twitter. The two cards are being compared to an NVIDIA RTX 4080 in no fewer than seven different game titles, all running at 4K resolution. The games are God of War, Cyberpunk 2077, Assassin's Creed Valhalla, Watch Dogs: Legion, Red Dead Redemption 2, Doom Eternal, and Horizon Zero Dawn. The cards were tested on a system with a Core i9-12900K CPU paired with 32 GB of RAM of an unknown type.

It's too early to draw any real conclusions from this test, but in general the RX 7900 XTX comes out on top, ahead of the RTX 4080, so no surprises there. The RX 7900 XT is either tied with the RTX 4080 or a fair bit slower, the exception being Red Dead Redemption 2, where the RTX 4080 is the slowest card, although it also appears to have some issues, since its one-percent lows are hitting 2 FPS. Soon the reviews will be out and everything will become clearer, but if these benchmarks are anything to go by, it appears that AMD's RX 7900 XTX will give NVIDIA's RTX 4080 a run for its money.

Update Dec 11th: The original tweet has been removed, for unknown reasons. It could be because the numbers were fake, or because they were in breach of AMD's NDA.

AMD 4th Generation EPYC "Genoa" Processors Benchmarked

Yesterday, AMD announced the latest addition to its data center family of processors, EPYC Genoa. Named the 4th generation EPYC processors, they feature a Zen 4 design and bring additional I/O connectivity like PCIe 5.0, DDR5, and CXL support. To disrupt cloud, enterprise, and HPC offerings, AMD is manufacturing SKUs with up to 96 cores and 192 threads, an increase from the previous generation's 64C/128T designs. Today, we are learning more about the performance and power characteristics of the 4th generation AMD EPYC Genoa 9654, 9554, and 9374F SKUs from third-party sources rather than the official AMD presentation. Tom's Hardware published a heap of benchmarks covering rendering, compilation, encoding, parallel computing, molecular dynamics, and much more.

The comparison tests include the AMD EPYC Milan 7763 and 75F3 and the Intel Xeon Platinum 8380, Intel's current top-end offering until Sapphire Rapids arrives. Comparing 3rd-gen 64C/128T EPYC SKUs with 4th-gen 64C/128T EPYC SKUs, the new generation brings about a 30% increase in compression and parallel-compute benchmark performance. When scaling to the 96C/192T SKU, the gap widens, and AMD has a clear performance leader in the server marketplace. For more details about the benchmark results, go here to explore. As far as comparisons with Intel's offerings go, AMD leads the pack thanks to a more performant single- and multi-threaded design. Of course, beating Sapphire Rapids to market is a significant win for team red, though we are still waiting to see how the 4th generation Xeon stacks up against Genoa.

Intel Delivers Leading AI Performance Results on MLPerf v2.1 Industry Benchmark for DL Training

Today, MLCommons published results of its industry AI performance benchmark in which both the 4th Generation Intel Xeon Scalable processor (code-named Sapphire Rapids) and Habana Gaudi 2 dedicated deep learning accelerator logged impressive training results.


"I'm proud of our team's continued progress since we last submitted leadership results on MLPerf in June. Intel's 4th gen Xeon Scalable processor and Gaudi 2 AI accelerator support a wide array of AI functions and deliver leadership performance for customers who require deep learning training and large-scale workloads." Sandra Rivera, Intel executive vice president and general manager of the Datacenter and AI Group

NVIDIA RTX 4080 20-30% Slower than RTX 4090, Still Smokes the RTX 3090 Ti: Leaked Benchmarks

Benchmarks of NVIDIA's upcoming GeForce RTX 4080 (formerly known as the RTX 4080 16 GB) are already out, as the leaky taps in the Asian tech forumscape know no bounds. Someone on the ChipHell forums with access to an RTX 4080 sample and drivers put it through a battery of synthetic and gaming tests. The $1,200 MSRP graphics card was tested in 3DMark Time Spy, Port Royal, and games that include Forza Horizon 5, Call of Duty: Modern Warfare II, Cyberpunk 2077, Borderlands 3, and Shadow of the Tomb Raider.

The big picture: the RTX 4080 is found to land halfway between the RTX 3090 Ti and the RTX 4090. At stock settings in 3DMark Time Spy Extreme (4K), it delivers 71% of the performance of an RTX 4090, whereas the RTX 3090 Ti manages 55%. With its power-limit slider maxed out, the RTX 4080 inches two percentage points closer to the RTX 4090 (73%), and a bit of manual OC adds another four percentage points. Things change slightly with 3DMark Port Royal, where the RTX 4080 delivers 69% of the RTX 4090's performance in a test where the RTX 3090 Ti does 58%.
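The percentages above are simply each card's score divided by the RTX 4090's score in the same test; a quick sketch of that arithmetic is below, with placeholder scores chosen to reproduce the quoted ratios, since the leak gives only the relative figures.

```python
# Sketch of how the relative-performance percentages are derived: each card's
# 3DMark score divided by the RTX 4090's score in the same test.
# The scores below are placeholders chosen to match the quoted ratios.
scores_time_spy_extreme = {
    "RTX 4090": 100.0,
    "RTX 4080 (stock)": 71.0,
    "RTX 3090 Ti": 55.0,
}

baseline = scores_time_spy_extreme["RTX 4090"]
for card, score in scores_time_spy_extreme.items():
    print(f"{card}: {score / baseline:.0%} of RTX 4090")
```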

Basemark Debuts a Unique Benchmark for Comparisons Between Android, iOS, Linux, MacOS and Windows Devices

Basemark today launched GPUScore Sacred Path. It is the world's only cross-platform GPU benchmark that includes the latest GPU technologies like Variable Rate Shading (VRS). Sacred Path supports all the relevant device categories, ranging from premium mobile phones to high-end gaming PCs and discrete graphics cards, with full support for the major operating systems: Android, iOS, Linux, macOS, and Windows.

This benchmark is of great importance for application vendors, device manufacturers, GPU vendors, and IT media. Game developers need a thorough understanding of performance across the device range to optimize the use of the same assets on as many devices as possible. GPU vendors and device manufacturers can compare their products with competing products, which allows them to target new product ranges correctly. In addition, Sacred Path is a true asset for media reviewing any GPU-equipped device.

3DMark Speed Way DirectX 12 Ultimate Benchmark is Launching on October 12

3DMark Speed Way is a new GPU benchmark that showcases the graphics technology that will power the next generation of gaming experiences. We're excited to announce that Speed Way, sponsored by Lenovo Legion, is releasing on October 12. Our team has been working hard to get Speed Way ready for you to use for benchmarking, stress testing, and comparing the new PC hardware coming this fall.

From October 12 onward, Speed Way will be included in the price when you buy 3DMark from Steam or our own online store. Since we released Time Spy in 2016, 3DMark users have enjoyed many free updates, including Time Spy Extreme, the 3DMark CPU Profile, 3DMark Wild Life, and multiple tests demonstrating new DirectX features. With the addition of Speed Way, the price of 3DMark on Steam and 3DMark Advanced Edition will go up from $29.99 to $34.99.

UL Launches New 3DMark Feature Test for Intel XeSS

We're excited to release a new 3DMark feature test for Intel's new XeSS AI-enhanced upscaling technology. This new feature test is available in 3DMark Advanced and Professional Editions. 3DMark feature tests are special tests designed to highlight specific techniques, functions, or capabilities. The Intel XeSS feature test shows you how XeSS affects performance.

The 3DMark Intel XeSS frame inspector tool helps you compare image quality with an interactive side-by-side comparison of XeSS and native-resolution rendering. Check out the images below to see an example comparison of native resolution rendering and XeSS in the new 3DMark feature test.

Intel Outs First Xeon Scalable "Sapphire Rapids" Benchmarks, On-package Accelerators Help Catch Up with AMD EPYC

On the second day of its Innovation event, Intel turned attention to its next-generation Xeon Scalable "Sapphire Rapids" server processors and demonstrated their on-package accelerators. These are fixed-function hardware components that accelerate specific kinds of popular server workloads (i.e. run them faster than a CPU core can). With these, Intel hopes to close the CPU core-count gap it has with AMD EPYC, with the upcoming "Zen 4" EPYC chips expected to launch with up to 96 cores per socket in the conventional variant, and up to 128 cores per socket in the cloud-optimized variant.

Intel's on-package accelerators include AMX (advanced matrix extensions), which accelerate recommendation-engines, natural language processing (NLP), image-recognition, etc; DLB (dynamic load-balancing), which accelerates security-gateway and load-balancing; DSA (data-streaming accelerator), which speeds up the network stack, guest OS, and migration; IAA (in-memory analysis accelerator), which speeds up big-data (Apache Hadoop), IMDB, and warehousing applications; a feature-rich implementation of the AVX-512 instruction-set for a plethora of content-creation and scientific applications; and lastly, the QAT (QuickAssist Technology), with speed-ups for data compression, OpenSSL, nginx, IPsec, etc. Unlike "Ice Lake-SP," QAT is now implemented on the processor package instead of the PCH.
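A quick way to see which of the instruction-set features mentioned above (AMX, AVX-512) a given Linux host exposes is to inspect the CPU flags; the sketch below reads /proc/cpuinfo and checks a few representative flag names. The exact flag strings vary by kernel version, so treat the list as an assumption, and note that device-style accelerators such as DSA, DLB, IAA, and QAT are enumerated as PCIe devices rather than CPU flags.

```python
# Sketch: check /proc/cpuinfo on Linux for a few ISA-related CPU flags.
# Flag names vary by kernel version; the list below is illustrative only.
FLAGS_OF_INTEREST = ["avx512f", "avx512_vnni", "amx_tile", "amx_int8", "amx_bf16"]

with open("/proc/cpuinfo") as f:
    cpuinfo = f.read()

flags = set()
for line in cpuinfo.splitlines():
    if line.startswith("flags"):
        flags.update(line.split(":", 1)[1].split())
        break

for flag in FLAGS_OF_INTEREST:
    print(f"{flag}: {'present' if flag in flags else 'absent'}")
```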

Basemark Debuts World's First Mobile Device Benchmark with Variable Rate Shading

Basemark launched today its second GPUScore graphics benchmark, called The Expedition. The Expedition targets high-end smartphones and other mobile devices running on Android or iOS. It utilizes the latest mobile GPU technologies, like Variable Rate Shading on supporting devices. As for graphics APIs, The Expedition supports Vulkan and Metal. The Expedition uses state-of-the-art rendering algorithms, similar to ones seen in the latest mobile games. Every run of GPUScore: The Expedition runs exactly the same content regardless of hardware and operating system. This combination makes the test results truly comparable with high accuracy and reliability.

The difference in graphics performance between desktops and mobile devices is getting narrower, as consumers want smartphones and other mobile devices with superior graphics performance. Consequently, graphics processors used in handheld devices are rapidly evolving. This raises the importance of new graphics performance benchmarks that test the latest devices correctly. Relevant measurements give the consumers an accurate understanding of the graphics performance, which is a major selling point.

AMD Zen 4 EPYC CPU Benchmarked Showing a 17% Single Thread Performance Increase from Zen 3

The next-generation flagship AMD EPYC "Genoa" CPU has recently appeared on Geekbench 5 in a dual-socket configuration for a total of 192 cores and 384 threads. The processors were installed in an unknown Suma 65GA24 motherboard, running at 3.51 GHz and paired with 768 GB of DDR5 memory. This setup achieved a single-core score of 1460 and a multi-core result of 96535, which places the processor approximately 17% ahead of an equivalently clocked EPYC 7763 setup (128 cores across two sockets) in single-threaded performance. The Geekbench listing also includes an OPN code of 100-000000997-01, which most likely corresponds to the flagship AMD EPYC 9664 with a max TDP of 400 W, according to existing leaks.

Intel Arc A580 Hits AotS Benchmark Database, Roughly Matches RTX 3050

Intel Arc A580 is an upcoming entry-mainstream desktop graphics card based on the Xe-HPG "Alchemist" graphics architecture, positioned between the A380 and the A750. Based on the larger 6 nm DG2-512 silicon rather than the smaller die powering the A380, the A580 is endowed with 16 Xe Cores, or double the SIMD muscle of the A380, for 2,048 unified shaders. The card enjoys 8 GB of GDDR6 memory across a 128-bit bus, which at a 16 Gbps data rate produces 256 GB/s of bandwidth.
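The quoted bandwidth follows directly from the bus width and data rate: bandwidth in GB/s equals the bus width in bits divided by eight, multiplied by the per-pin data rate in Gbps. A small sketch of that arithmetic, using the A580 figures above, is shown below.

```python
# Sketch of the memory-bandwidth arithmetic quoted above:
# bandwidth (GB/s) = bus width (bits) / 8 * data rate (Gbps per pin).
def memory_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(memory_bandwidth_gb_s(128, 16))  # Arc A580: 256.0 GB/s
```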

A leaked Ashes of the Singularity benchmark database entry reveals that the A580 scores roughly 95 FPS at 1080p on average, with 110 FPS in the normal batch, around 102 FPS in the medium batch, and around 78 FPS in the heavy batch. The benchmark used the Vulkan API and an unknown 16-thread Intel processor with 32 GB of memory. These scores put the A580 roughly on par with the GeForce RTX 3050 "Ampere" in this test, which would make it a reasonable solution for playing popular online games at 1080p with medium-high settings, or AAA games at medium settings.

Intel 13th Gen "Raptor Lake" ES CPU Gets Benchmarked

Just hours ago, a CPU-Z screenshot of an Intel Raptor Lake ES CPU appeared, and the same CPU now appears to have been put through a full battery of benchmark tests, courtesy of Expreview. This upcoming 13th gen Core CPU from Intel is limited to a maximum clock speed of 3.8 GHz and was therefore tested against a Core i9-12900K clocked at the same speed, for a fair comparison. Both CPUs were used with an unknown Z690 motherboard, 32 GB of DDR5-5200 memory with unknown timings, and a GeForce RTX 3090 Founders Edition graphics card. According to Expreview, the 13th gen CPU is on average around 20 percent faster than the 12th gen CPU, although the extra eight E-cores might have something to do with that in certain benchmarks.

In SiSoft Sandra 2021, the ES sample is as much as 51.5 percent faster in the double-precision floating-point test, which is the extreme outlier, but it's ahead by around 15-25 percent in most of the other tests. In several other tests it leads by anything from less than three percent to as much as 25 percent, with the more multi-threaded benchmarks seeing the largest gains, as expected. However, in some of the single-threaded tests, such as POV-Ray and Cinebench, Alder Lake edges out Raptor Lake by 10 percent or more. Most of the game tests also favour Intel's 12th gen over the 13th gen ES sample, although it's possible that the limited clock speeds are holding back the Raptor Lake CPU; the two are either neck and neck, or Alder Lake is ahead by anything from a couple of percent to almost nine percent. Keep in mind that it's still early days, and everything from UEFI support to drivers will improve before Raptor Lake launches later this year. The limited clock speed is also likely to play a significant role in the final performance, but this does at least provide a first taste of what's to come. Head over to Expreview for their full set of benchmarks.

Apple M2 CPU & GPU Benchmarks Surface on Geekbench

The recently announced Apple M2 processor, which is set to feature in the new MacBook Air and 13-inch MacBook Pro models, has been benchmarked. The processor appeared in numerous Geekbench 5 CPU and GPU tests, where the chip scored a maximum single-core result of 1919 points and 8928 points in multi-core, representing 11% and 18% CPU performance improvements respectively over the M1. The chip also brings significant GPU performance increases, achieving a Geekbench Metal score of 30627 points, a roughly 42% increase over the M1, partially thanks to a larger 10-core GPU compared to the 8-core GPU on the M1. These initial numbers largely align with Apple's claims of an 18% CPU and 35% GPU improvement over the original M1.
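The quoted uplifts follow from dividing the M2 scores by the corresponding M1 scores; the sketch below works that out, with the M1 baselines being approximate figures assumed for illustration rather than values from this report.

```python
# Sketch: deriving the M2-vs-M1 uplift percentages from Geekbench 5 scores.
# The M1 baseline scores are approximate values assumed for illustration.
m2 = {"single-core": 1919, "multi-core": 8928, "metal": 30627}
m1 = {"single-core": 1730, "multi-core": 7570, "metal": 21600}  # assumed baselines

for test in m2:
    uplift = (m2[test] - m1[test]) / m1[test]
    print(f"{test}: {uplift:.0%} improvement over M1")
```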

First Intel Arc A730M Powered Laptop Goes on Sale, in China

The first benchmark result of an Intel Arc A730M laptop has made an appearance online, and the mysterious laptop used to run 3DMark turned out to be from a Chinese company called Machenike. The laptop itself appears to go under the name Dawn16 Discovery Edition and features a 16-inch display with a native resolution of 2560 x 1600 and a 165 Hz refresh rate. CPU-wise, Machenike went with a Core i7-12700H, a 6+8 core CPU with 20 threads whose performance cores top out at 4.7 GHz. The CPU has been paired with 16 GB of DDR5-4800 memory, and the system also has a PCIe 4.0 NVMe SSD of some kind, with a max read speed of 3500 MB/s, which isn't particularly impressive. Other features include Thunderbolt 4 support, WiFi 6E and Bluetooth 5.2, as well as an 80 Wh battery pack.

However, none of the above is particularly unique, and what matters here is of course the Intel Arc A730M GPU. It has been paired with 12 GB of GDDR6 memory on a 192-bit interface, running at 14 Gbps according to the specs, for a memory bandwidth of 336 GB/s. The company also provided a couple of performance metrics, with a 3DMark Time Spy figure of 10002 points and a 3DMark Fire Strike figure of 23090 points. The Time Spy score is a few points lower than the numbers posted earlier, but helps verify the earlier test result. Other interesting nuggets of information include support for 8K60 12-bit HDR video decoding for AV1, HEVC, AVC, and VP9, as well as 8K 10-bit HDR encoding for said formats. A figure for the Puget Systems benchmark in what appears to be Photoshop is also provided, where the system scores 1188 points. The laptop is up for what appears to be pre-order, with a price tag of 7,499 RMB, or about US$1,130.

AMD's Integrated GPU in Ryzen 7000 Gets Tested in Linux

It appears that one of AMD's partners has a Ryzen 7000 CPU or APU with integrated graphics up and running in Linux, based on details leaked when the partner tested the chip with the Phoronix Test Suite and submitted the results to the OpenBenchmarking database. The numbers are by no means impressive, suggesting that this engineering sample isn't running at its proper clock speeds. For example, it only scores 63.1 FPS in Enemy Territory: Quake Wars, where a Ryzen 9 6900HX manages 182.1 FPS, with both GPUs allocated 512 MB of system memory as the minimum graphics memory allocation.

The integrated GPU goes under the model name GFX1036, with older integrated RDNA2 GPUs from AMD having been part of the GFX103x series. It's reported to have a clock speed of 2000/1000 MHz, although it's presumably running at the lower of the two clock speeds, if not even slower, as it's only about a third as fast as the GPU in the Ryzen 9 6900HX, or slower. That said, the GPU in the Ryzen 7000 series is, as far as anyone's aware, not really intended for gaming: it's a very stripped-down GPU meant mainly for desktop and media use, so it's possible that it'll never catch up with the current crop of integrated GPUs from AMD. We'll hopefully find out more in less than two weeks' time, when AMD holds its keynote at Computex.

GPU Hardware Encoders Benchmarked on AMD RDNA2 and NVIDIA Turing Architectures

Encoding video is one of the significant tasks that modern hardware performs, and today we have some data showing how good the GPU hardware encoders from AMD and NVIDIA are. Thanks to tech media outlet Chips and Cheese, we have information about AMD's Video Core Next (VCN) encoder found in RDNA2 GPUs and NVIDIA's NVENC (short for NVIDIA Encoder). The site benchmarked AMD's Radeon RX 6900 XT and NVIDIA's GeForce RTX 2060. The AMD card features VCN 3.0, AMD's latest encoder, while the NVIDIA Turing card features the 6th generation NVENC design; a 7th generation of NVENC exists, but these are the cards the reviewer had on hand.

Encode quality was measured using Netflix's Video Multimethod Assessment Fusion (VMAF) metric. In addition to hardware encoding, the site also tested software encoding with libx264, a software library used for encoding video streams into the H.264/MPEG-4 AVC compression format; the libx264 encoding ran on an AMD Ryzen 9 3950X. Benchmark runs included streaming, recording, and transcoding in Overwatch and Elder Scrolls Online.
Below, you can find benchmarks of streaming, recording, transcoding, and transcoding speed.
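For reference, VMAF scores of this kind are typically computed by comparing the encoded output against the source clip. A minimal sketch using FFmpeg's libvmaf filter, invoked from Python, might look like the following; the file names are placeholders, this is not the reviewer's exact methodology, and it assumes an FFmpeg build compiled with libvmaf support.

```python
# Sketch: computing a VMAF score for an encode by comparing it against the
# source clip with FFmpeg's libvmaf filter. File names are placeholders and
# an FFmpeg build with libvmaf enabled is assumed.
import subprocess

cmd = [
    "ffmpeg",
    "-i", "encoded.mp4",    # distorted (encoded) clip
    "-i", "reference.mp4",  # original reference clip
    "-lavfi", "libvmaf",
    "-f", "null", "-",
]

# The VMAF result is printed to stderr by default.
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stderr)
```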

Basemark Launches World's First Cross-Platform Raytracing Benchmark - GPUScore Relic of Life

Basemark today launched GPUScore, an all-new GPU (graphics processing unit) performance benchmarking suite for a wide device range, from smartphones to high-end gaming PCs. GPUScore supports all modern graphics APIs, such as Vulkan, Metal, and DirectX, and operating systems such as Windows, Linux, macOS, Android, and iOS.

GPUScore will consist of three different testing suites. Today the first of these, named Relic of Life, was launched and is available immediately. Basemark will introduce the two other GPUScore testing suites in the coming months. Relic of Life is ideal for benchmarking the discrete graphics cards found in high-end gaming PCs. It requires hardware-accelerated ray tracing, supports Vulkan and DirectX, and is available for both Windows and Linux, making GPUScore: Relic of Life an ideal benchmark for comparing Vulkan and DirectX accelerated ray tracing performance.

Samsung RDNA2-based Exynos 2200 GPU Performance Significantly Worse than Snapdragon 8 Gen1, Both Power Galaxy S22 Ultra

The Exynos 2200 SoC powering the Samsung Galaxy S22 Ultra in some regions, such as the EU, posts some less-than-stellar graphics performance numbers for all the hype around its AMD-sourced RDNA2 graphics solution, according to an investigative report by Erdi Özüağ, aka "FX57." Samsung brands this RDNA2-based GPU as the Samsung Xclipse 920. Özüağ's testing found that the Exynos 2200 is considerably slower than the Qualcomm Snapdragon 8 Gen 1 powering the S22 Ultra in certain other regions, including the US and India; he has access to both variants of the S22 Ultra.

In the UL Benchmarks 3DMark Wild Life test, the Exynos 2200 posted a score of 6684 points, compared to 9548 points for the Snapdragon 8 Gen 1 (a difference of roughly 42 percent). What's even more interesting is that the Exynos 2200 is barely 7 percent faster than the previous-gen Exynos 2100 (Arm Mali GPU) powering the S21 Ultra, which scored 6256 points. The story repeats in the GFXBench "Manhattan" off-screen render benchmark, where the Snapdragon 8 Gen 1 is 30 percent faster than the Exynos 2200, which performs on par with the Exynos 2100. Find a plethora of other results in the complete review comparing the two flavors of the S22 Ultra.