News Posts matching #Benchmark

3DMark Speed Way DirectX 12 Ultimate Benchmark is Launching on October 12

3DMark Speed Way is a new GPU benchmark that showcases the graphics technology that will power the next generation of gaming experiences. We're excited to announce that Speed Way, sponsored by Lenovo Legion, is releasing on October 12. Our team has been working hard to get Speed Way ready for you to use for benchmarking, stress testing, and comparing the new PC hardware coming this fall.

From October 12 onward, Speed Way will be included when you buy 3DMark from Steam or from our own online store. Since we released Time Spy in 2016, 3DMark users have enjoyed many free updates, including Time Spy Extreme, the 3DMark CPU Profile, 3DMark Wild Life, and multiple tests demonstrating new DirectX features. With the addition of Speed Way, the price of 3DMark on Steam and of 3DMark Advanced Edition will go up from $29.99 to $34.99.

UL Launches New 3DMark Feature Test for Intel XeSS

We're excited to release a new 3DMark feature test for Intel's new XeSS AI-enhanced upscaling technology. This new feature test is available in 3DMark Advanced and Professional Editions. 3DMark feature tests are special tests designed to highlight specific techniques, functions, or capabilities. The Intel XeSS feature test shows you how XeSS affects performance.

The 3DMark Intel XeSS frame inspector tool helps you compare image quality with an interactive side-by-side comparison of XeSS and native-resolution rendering. Check out the images below to see an example comparison of native resolution rendering and XeSS in the new 3DMark feature test.

Intel Outs First Xeon Scalable "Sapphire Rapids" Benchmarks, On-package Accelerators Help Catch Up with AMD EPYC

On the second day of its Innovation event, Intel turned its attention to its next-generation Xeon Scalable "Sapphire Rapids" server processors and demonstrated their on-package accelerators. These are fixed-function hardware components that accelerate specific kinds of popular server workloads (i.e., run them faster than a CPU core can). With these, Intel hopes to close the CPU core-count gap it has with AMD EPYC, as the upcoming "Zen 4" EPYC chips are expected to launch with up to 96 cores per socket in the conventional variant, and up to 128 cores per socket in the cloud-optimized variant.

Intel's on-package accelerators include AMX (Advanced Matrix Extensions), which accelerates recommendation engines, natural language processing (NLP), image recognition, and the like; DLB (Dynamic Load Balancing), which accelerates security gateways and load balancing; DSA (Data Streaming Accelerator), which speeds up the network stack, guest OS, and migration; IAA (In-Memory Analytics Accelerator), which speeds up big-data (Apache Hadoop), IMDB, and warehousing applications; a feature-rich implementation of the AVX-512 instruction set for a plethora of content-creation and scientific applications; and lastly QAT (QuickAssist Technology), with speed-ups for data compression, OpenSSL, nginx, IPsec, and more. Unlike on "Ice Lake-SP," QAT is now implemented on the processor package instead of the PCH.
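To put the compression point in concrete terms, here is a minimal Python sketch, unrelated to QAT itself, that times a plain software zlib pass on the CPU; this is the class of workload QAT is designed to offload. The payload and iteration count are arbitrary illustrative choices.

```python
import time
import zlib

# Software-only compression baseline: the class of workload Intel's QAT
# accelerator offloads from the CPU cores.
payload = bytes(range(256)) * 4096  # ~1 MB of mildly compressible data

start = time.perf_counter()
for _ in range(100):
    compressed = zlib.compress(payload, 6)  # level 6: zlib's common default
elapsed = time.perf_counter() - start

mb = len(payload) * 100 / 1e6
print(f"zlib level 6: {mb / elapsed:.0f} MB/s, "
      f"ratio {len(payload) / len(compressed):.2f}x")
```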

Basemark Debuts World's First Mobile Device Benchmark with Variable Rate Shading

Basemark today launched its second GPUScore graphics benchmark, called The Expedition. The Expedition targets high-end smartphones and other mobile devices running Android or iOS. It utilizes the latest mobile GPU technologies, like Variable Rate Shading on supporting devices. As for graphics APIs, The Expedition supports Vulkan and Metal. The Expedition uses state-of-the-art rendering algorithms, similar to those seen in the latest mobile games, and every run of GPUScore: The Expedition renders exactly the same content regardless of hardware and operating system. This combination makes the test results truly comparable, with high accuracy and reliability.

The difference in graphics performance between desktops and mobile devices is getting narrower, as consumers want smartphones and other mobile devices with superior graphics performance. Consequently, graphics processors used in handheld devices are rapidly evolving. This raises the importance of new graphics benchmarks that test the latest devices correctly: relevant measurements give consumers an accurate understanding of graphics performance, which is a major selling point.

AMD Zen 4 EPYC CPU Benchmarked, Showing a 17% Single-Thread Performance Increase over Zen 3

The next-generation flagship AMD Genoa EPYC CPU has recently appeared on Geekbench 5 in a dual-socket configuration for a total of 192 cores and 384 threads. The processors, running at 3.51 GHz, were installed in an unknown Suma 65GA24 motherboard and paired with 768 GB of DDR5 memory. This setup achieved a single-core score of 1460 and a multi-core result of 96535, placing the processor approximately 17% ahead of an equivalently clocked, 128-core dual EPYC 7763 setup in single-threaded performance. The Geekbench listing also includes an OPN code of 100-000000997-01, which most likely corresponds to the flagship AMD EPYC 9664 with a maximum TDP of 400 W, according to existing leaks.

Intel Arc A580 Hits AotS Benchmark Database, Roughly Matches RTX 3050

Intel Arc A580 is an upcoming entry-mainstream desktop graphics card based on the Xe-HPG "Alchemist" graphics architecture, positioned between the A380 and A750. Based on the 6 nm DG2-512 silicon, a larger chip than the one powering the A380, the A580 is endowed with 16 Xe Cores, or double the SIMD muscle of the A380, for 2,048 unified shaders. The card enjoys 8 GB of GDDR6 memory across a 128-bit bus, which at a 16 Gbps data rate produces 256 GB/s of bandwidth.
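For reference, that bandwidth figure follows directly from the bus width and data rate:

\[ 128\ \text{bits} \times 16\ \text{Gbps} \div 8\ \text{bits/byte} = 256\ \text{GB/s} \]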

A leaked Ashes of the Singularity benchmark database entry reveals that the A580 scores roughly 95 FPS on average at 1080p, with around 110 FPS in the normal batch, 102 FPS in the medium batch, and 78 FPS in the heavy batch. The benchmark used the Vulkan API, and an unknown 16-thread Intel processor with 32 GB of memory. These scores put the A580 roughly on par with the GeForce RTX 3050 "Ampere" in this test, which would make it a reasonable solution for playing popular online games at 1080p with medium-high settings, or AAA games at medium settings.

Intel's 13th Gen Raptor Lake ES CPU Gets Benchmarked

Just hours ago, a CPU-Z screenshot of an Intel Raptor Lake ES CPU appeared, and the same CPU now seems to have been put through a full battery of benchmark tests, courtesy of Expreview. This upcoming 13th gen Core CPU from Intel is limited to a maximum clock speed of 3.8 GHz and, as such, was tested against a Core i9-12900K clocked at the same speed for a fair comparison. Both CPUs were used with an unknown Z690 motherboard, 32 GB of DDR5-5200 memory with unknown timings, and a GeForce RTX 3090 Founders Edition graphics card. According to Expreview, the 13th gen CPU is on average around 20 percent faster than the 12th gen CPU, although the extra eight E-cores might have something to do with that in certain benchmarks.

In SiSoft Sandra 2021, the ES sample is as much as 51.5 percent faster in the double-precision floating-point test, an extreme outlier, but it's ahead by around 15-25 percent in most of the other tests. Elsewhere, it leads by anything from less than three percent to as much as 25 percent, with the more multi-threaded benchmarks seeing the largest gains, as expected. However, in some of the single-threaded tests, such as POV-Ray and Cinebench, Alder Lake edges out Raptor Lake by 10 percent or more. Most of the game tests also favour Intel's 12th gen over the 13th gen ES sample, although it's possible that the limited clock speeds are holding back the Raptor Lake CPU; the two are either neck and neck, or Alder Lake is ahead by anything from a couple of percent to almost nine percent. Keep in mind that it's still early days, and everything from UEFI support to drivers will be improved before Raptor Lake launches later this year. The limited clock speed is also likely to play a significant role in final performance, but this does at least provide a first taste of what's to come. Head over to Expreview for the full set of benchmarks.

Apple M2 CPU & GPU Benchmarks Surface on Geekbench

The recently announced Apple M2 processor, which is set to feature in the new MacBook Air and 13-inch MacBook Pro models, has been benchmarked. The processor appeared in numerous Geekbench 5 CPU and GPU tests, where the chip scored a maximum single-core result of 1919 points and 8928 points in multi-core, representing 11% and 18% CPU performance improvements, respectively, over the M1. The chip also brings significant GPU performance increases, achieving a Geekbench Metal score of 30627 points, a ~42% increase from the M1, partially due to a larger 10-core GPU compared to the 8-core GPU on the M1. These initial numbers largely align with Apple's claims of an 18% CPU and 35% GPU improvement over the original M1.

First Intel Arc A730M-Powered Laptop Goes on Sale in China

The first benchmark result of an Intel Arc A730M laptop made an appearance online, and the mysterious laptop used to run 3DMark turned out to be from a Chinese company called Machenike. The laptop itself appears to go under the name of Dawn16 Discovery Edition and features a 16-inch display with a native resolution of 2560 x 1600 and a 165 Hz refresh rate. CPU-wise, Machenike went with a Core i7-12700H, a 6+8 core CPU with 20 threads, whose performance cores top out at 4.7 GHz. The CPU has been paired with 16 GB of 4800 MHz DDR5 memory, and the system also has a PCIe 4.0 NVMe SSD of some kind with a max read speed of 3500 MB/s, which isn't particularly impressive. Other features include Thunderbolt 4 support, WiFi 6E and Bluetooth 5.2, as well as an 80 Wh battery pack.

However, none of the above is particularly unique, and what matters here is of course the Intel Arc A730M GPU. It has been paired with 12 GB of GDDR6 memory on a 192-bit interface, at 14 Gbps according to the specs, for a stated memory bandwidth of 336 GB/s. The company also provided a couple of performance metrics: a 3DMark Time Spy figure of 10002 points and a 3DMark Fire Strike figure of 23090 points. The Time Spy score is a few points slower than the numbers posted earlier, but helps verify the earlier test result. Other interesting nuggets of information include support for 8K60 12-bit HDR video decoding for AV1, HEVC, AVC and VP9, as well as 8K 10-bit HDR encoding for said formats. A figure for the Puget benchmark in what appears to be Photoshop is also provided, where it scores 1188 points. The laptop is up for what appears to be pre-order, with a price tag of 7,499 RMB, or about US$1,130.

AMD's Integrated GPU in Ryzen 7000 Gets Tested in Linux

It appears that one of AMD's partners has a Ryzen 7000 CPU or APU with integrated graphics up and running in Linux. The details leaked courtesy of the partner testing the chip with the Phoronix Test Suite and submitting the results to the OpenBenchmarking database. The numbers are by no means impressive, suggesting that this engineering sample isn't running at its proper clock speeds. For example, it only scores 63.1 FPS in Enemy Territory: Quake Wars, where a Ryzen 9 6900HX manages 182.1 FPS, with both GPUs allocated 512 MB of system memory as the minimum graphics memory allocation.

The integrated GPU goes under the model name GFX1036, with older integrated RDNA2 GPUs from AMD having been part of the GFX103x series. It's reported to have clock speeds of 2000/1000 MHz, although it's presumably running at the lower of the two, if not slower still, as it's only about a third of the speed of the GPU in the Ryzen 9 6900HX, or less. That said, the GPU in the Ryzen 7000 series is, as far as anyone's aware, not really intended for gaming; it's a heavily stripped-down part meant mainly for desktop and media use, so it's possible it'll never catch up with the current crop of integrated GPUs from AMD. We'll hopefully find out more in less than two weeks' time, when AMD holds its keynote at Computex.

GPU Hardware Encoders Benchmarked on AMD RDNA2 and NVIDIA Turing Architectures

Encoding video is one of the more significant tasks modern hardware performs, and today we have some data showing how good the GPU hardware encoders from AMD and NVIDIA are. Thanks to tech media outlet Chips and Cheese, we have information about AMD's Video Core Next (VCN) encoder found in RDNA2 GPUs and NVIDIA's NVENC (short for NVIDIA Encoder). The site benchmarked AMD's Radeon RX 6900 XT and NVIDIA's GeForce RTX 2060. The AMD card features VCN 3.0, while the NVIDIA Turing card features a 6th-generation NVENC design. AMD is thus represented by its latest encoder, whereas a newer 7th generation of NVENC exists; Chips and Cheese tested these two cards because they were the ones the reviewer had on hand.

Encode quality was scored with Netflix's Video Multimethod Assessment Fusion (VMAF) metric. In addition to the hardware encoders, the site also tested software encoding with libx264, a software library for encoding video streams into the H.264/MPEG-4 AVC compression format, running on an AMD Ryzen 9 3950X. Benchmark runs included streaming, recording, and transcoding in Overwatch and Elder Scrolls Online.
Below, you can find benchmarks of streaming, recording, transcoding, and transcoding speed.
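For readers who want to attempt a similar quality comparison at home, the sketch below shows one common way of scoring an encode against its source with FFmpeg's libvmaf filter, driven from Python. The file names are placeholders, an FFmpeg build with libvmaf enabled is assumed, and this is not the harness Chips and Cheese used.

```python
import subprocess

# Placeholder file names: the encoded (distorted) clip and its pristine source.
DISTORTED = "encoded.mp4"
REFERENCE = "reference.mp4"

def vmaf_score(distorted: str, reference: str) -> str:
    """Score an encode against its reference with FFmpeg's libvmaf filter.

    Requires an FFmpeg build with libvmaf enabled; the filter takes the
    distorted clip as the first input and the reference as the second.
    """
    result = subprocess.run(
        ["ffmpeg", "-i", distorted, "-i", reference,
         "-lavfi", "libvmaf", "-f", "null", "-"],
        capture_output=True, text=True, check=True,
    )
    # FFmpeg prints the pooled result to stderr as a "VMAF score: ..." line.
    for line in result.stderr.splitlines():
        if "VMAF score" in line:
            return line.strip()
    return "VMAF score not found in FFmpeg output"

print(vmaf_score(DISTORTED, REFERENCE))
```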

Basemark Launches World's First Cross-Platform Raytracing Benchmark - GPUScore Relic of Life

Basemark today launched GPUScore, an all-new GPU (graphics processing unit) performance benchmarking suite for a wide range of devices, from smartphones to high-end gaming PCs. GPUScore supports all modern graphics APIs, such as Vulkan, Metal and DirectX, and operating systems such as Windows, Linux, macOS, Android and iOS.

GPUScore will consist of three different testing suites. The first of these, Relic of Life, launched today and is available immediately; Basemark will introduce the two other GPUScore testing suites during the following months. Relic of Life is ideal for benchmarking the discrete graphics cards in high-end gaming PCs: it requires hardware-accelerated ray tracing, supports Vulkan and DirectX, and is available for both Windows and Linux, making it well suited to comparing accelerated ray tracing performance between the two APIs.

Samsung RDNA2-based Exynos 2200 GPU Performance Significantly Worse than Snapdragon 8 Gen1, Both Power Galaxy S22 Ultra

The Exynos 2200 SoC powering the Samsung Galaxy S22 Ultra in some regions, such as the EU, posts some less-than-stellar graphics performance numbers for all the hype around its AMD-sourced RDNA2 graphics solution, according to an investigative report by Erdi Özüağ, aka "FX57." Samsung brands this RDNA2-based GPU as the Samsung Xclipse 920. Özüağ's testing found that the Exynos 2200 is considerably slower than the Qualcomm Snapdragon 8 Gen 1 powering the S22 Ultra in certain other regions, including the US and India; he has access to both versions of the S22 Ultra.

In the UL Benchmarks 3DMark Wild Life test, the Exynos 2200 posted a score of 6684 points, compared to 9548 points for the Snapdragon 8 Gen 1 (a difference of 42 percent). What's even more interesting is that the Exynos 2200 is barely 7 percent faster than the previous-gen Exynos 2100 (Arm Mali GPU) powering the S21 Ultra, which scored 6256 points. The story repeats with the GFXBench "Manhattan" off-screen render benchmark: here, the Snapdragon 8 Gen 1 is 30 percent faster than the Exynos 2200, which performs on par with the Exynos 2100. Find a plethora of other results in the complete review comparing the two flavors of the S22 Ultra.
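Both quoted gaps check out against the raw Wild Life scores:

\[ \frac{9548 - 6684}{6684} \approx 42.8\%, \qquad \frac{6684 - 6256}{6256} \approx 6.8\% \]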

Intel Sapphire Rapids Xeon with DDR5 Memory Spotted in AIDA64 and Cinebench R15

Intel's next-generation Xeon processors, code-named Sapphire Rapids, are on track to hit the market this year. These new processors are supposed to bring a wide array of new and improved features, and a chance for Intel to show off its 10 nm SuperFin manufacturing process in the server market. Thanks to Twitter user YuuKi_AnS, we have some of the first tests run in the AIDA64 and Cinebench R15 benchmark suites. Yuuki managed to get hold of a DDR5-enabled Sapphire Rapids Xeon with 48 cores and 96 threads, a base frequency of 2.3 GHz, and boost speeds of 3.3 GHz. The processor tested was an engineering sample with a Q-SPEC designation of "QYFQ," made for Intel Socket E (LGA-4677), and locked at a 270 W TDP.

Below, you can see the performance results of this processor in the AIDA64 cache and memory benchmark and the Cinebench R15 test. A comparison against AMD's Milan-X and Intel's Xeon Platinum 8380 is included, so the numbers give a better sense of what to expect from the final product.

Intel Core i5-12490F Beats Core i5-12400F By 15% in Early Performance Benchmarks

A few days ago, we reported on a strange Intel Core i5-12490F processor that appeared in the Chinese marketplace. The processor uses the C0 silicon that Intel sits on a pile of, repurposing it to make these odd chips for Asian markets. As we found out, this is a heavily cut-down C0 configuration, with only six high-performance P-cores present. Compared to the regular Core i5-12400F, it has a bigger L3 cache, arriving at 20 MB, and slightly higher clock speeds, with the base at 3.0 GHz and a boost frequency that ramps up to 4.6 GHz. For reference, the regular Core i5-12400F has 18 MB of L3 cache, a base frequency of 2.5 GHz, and a boost speed of 4.4 GHz.

Thanks to early benchmarks, we have performance numbers for two cases in which Intel's Core i5-12490F is compared to its siblings. According to the Geekbench data, the first case shows that the strange processor's higher clock speeds, coupled with the bigger L3 cache, do help: single-threaded performance grows by 10% over the Core i5-12400F, while the multi-threaded results show an even more considerable 15% improvement. The second comparison shows smaller margins against the Core i5-12500, where the Core i5-12490F only leads by 2.5% and 5% in single-threaded and multi-threaded workloads, respectively. We will have to wait for more benchmarks to see where this design stands in the Alder Lake family, and just how big an improvement comes from the higher frequencies and bigger L3 cache.

TOP500 Update Shows No Exascale Yet, Japanese Fugaku Supercomputer Still at the Top

The 58th edition of the TOP500 saw little change in the Top 10. The Microsoft Azure system called Voyager-EUS2 was the only machine to shake up the top spots, claiming No. 10. Based on 48-core AMD EPYC processors running at 2.45 GHz, working together with NVIDIA A100 GPUs with 80 GB of memory, Voyager-EUS2 also utilizes Mellanox HDR InfiniBand for data transfer.

While there were no other changes to the positions of the systems in the Top 10, Perlmutter at NERSC improved its performance to 70.9 Pflop/s. Housed at the Lawrence Berkeley National Laboratory, Perlmutter's increased performance couldn't move it from its previously held No. 5 spot.

UL Announces 3DMark SSD Storage Benchmark

For more than 20 years, 3DMark has been gamers' first choice for benchmarking the latest graphics cards and processors. Today, we're taking 'The Gamer's Benchmark' into a new area with the 3DMark Storage Benchmark, a dedicated component test for measuring the gaming performance of SSDs, hybrid drives, and other storage devices.

With fast modern SSD storage, loading times are shorter, levels restart faster, and there are fewer interruptions to your gameplay. PC gamers can now choose from a wide range of high-performance storage options from the fastest PCI Express 4.0 and NVMe devices down to cheaper SATA SSDs and high-capacity hybrid drives.
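3DMark's storage test replays traces of real gaming activity, which a simple script cannot replicate. As a toy illustration of the basic idea of timing a drive under a fixed workload, though, here is a minimal Python sketch that measures sequential read throughput; the file name is a placeholder, and the OS page cache will inflate repeat runs.

```python
import os
import time

TEST_FILE = "testfile.bin"  # placeholder: any large file on the drive under test
CHUNK = 4 * 1024 * 1024     # read in 4 MiB chunks

size = os.path.getsize(TEST_FILE)
start = time.perf_counter()
with open(TEST_FILE, "rb", buffering=0) as f:
    while f.read(CHUNK):    # sequential reads until EOF
        pass
elapsed = time.perf_counter() - start

# Note: repeat runs may be served from the OS page cache and look
# unrealistically fast compared to the drive itself.
print(f"{size / 1e6:.0f} MB in {elapsed:.2f} s -> {size / 1e6 / elapsed:.0f} MB/s")
```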

BAPCo Adds Android Support to CrossMark

BAPCo, a non-profit consortium of leading PC hardware manufacturers, today added Android OS support to CrossMark. Released in March, CrossMark has rapidly established itself as a cross-architecture performance benchmark that simplifies the measurement of system performance and responsiveness using common and relevant workloads on Windows, iOS, macOS and Android. CrossMark is based on widely used open-source applications and assesses system performance in the areas of Productivity, Creativity and Responsiveness.

Adding Android support brings CrossMark's ability to accurately and objectively measure both system performance and responsiveness (traditionally a time-consuming, difficult process) to billions of new devices. CrossMark installs natively on a system and is up and running in a matter of minutes, with measurement runs taking as little as five minutes to complete. At the end of a run, CrossMark reports an overall system performance and responsiveness score, as well as several key sub-scores indicative of typical day-to-day system performance. With over 2,200 submitted and published results in BAPCo's online database, users can compare their scores to a wide range of systems across multiple architectures.

Another Day, Another Intel Core i9-12900K Benchmark Leak

Remember that Core i9-12900K CPU-Z leak from last week? It had the multi-threaded score blurred out, and now we know why. A new CPU-Z screenshot has shown up on Twitter, and although the single-threaded score still beats the AMD Ryzen 9 5950X baseline single-core score by a comfortable margin, the chip falls behind when switching to the multi-threaded score.

It shouldn't really come as a surprise that eight big and eight small CPU cores don't beat AMD's 16 big cores, but this was apparently expected by some. That's not to say Intel doesn't get close, as you can see, but it's also worth keeping in mind that Intel runs 24 threads versus AMD's 32. The Core i9-12900K is said to be running at stock clocks, but no other information was provided. Once again, take this for what it is while we wait for the actual launch date and proper benchmarks.

Intel Core i9-12900K Beats AMD Ryzen 9 5950X in Leaked Geekbench Score

We recently saw the Intel Core i7-12700 appear on Geekbench 5, where it traded blows with the AMD Ryzen 7 5800X; now the flagship Core i9-12900K has also made an appearance. The benchmarked Core i9-12900K features a hybrid design with 8 high-performance cores, 8 high-efficiency cores, and 24 threads, running at a base clock of 3.2 GHz. The test was performed on a Windows 11 Pro machine, allowing full use of Intel's Thread Director technology, paired with 32 GB of DDR5 memory. The processor achieved single-core scores of 1834/1893 across the two tests, the highest single-core results in the official Geekbench charts, coming in 12% faster than the Ryzen 9 5950X. The processor also achieved impressive multi-core scores of 17299/17370, placing it 3% above the Ryzen 9 5950X and 57% above the previous-generation flagship, the 8-core Core i9-11900K. These leaked benchmarks highlight the impressive potential of Intel's upcoming 12th Generation Core series, which is expected to launch in November.

3DMark Updated with New CPU Benchmarks for Gamers and Overclockers

UL Benchmarks is expanding 3DMark today by adding a set of dedicated CPU benchmarks. The 3DMark CPU Profile introduces a new approach to CPU benchmarking that shows how CPU performance scales with the number of cores and threads used. The new CPU Profile benchmark tests are available now in 3DMark Advanced Edition and 3DMark Professional Edition.

Instead of producing a single number, the 3DMark CPU Profile shows how CPU performance scales and changes with the number of cores and threads used. The CPU Profile has six tests, each of which uses a different number of threads. The benchmark starts by using all available threads, then repeats using 16 threads, 8 threads, 4 threads, 2 threads, and ends with a single-threaded test. These six tests help you benchmark and compare CPU performance for a range of threading levels. They also provide a better way to compare different CPU models, by looking at results from the thread levels they have in common.
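The thread-sweep idea is easy to reproduce in miniature. The sketch below, which shares nothing with 3DMark's actual workload, times an arbitrary CPU-bound task at 1, 2, 4, 8, and 16 worker processes to produce the same kind of scaling curve.

```python
import time
from multiprocessing import Pool

def work(_):
    # Arbitrary CPU-bound task standing in for a real benchmark workload.
    return sum(i * i for i in range(2_000_000))

def run(workers: int, jobs: int = 32) -> float:
    """Time a fixed batch of jobs spread across a given number of workers."""
    start = time.perf_counter()
    with Pool(processes=workers) as pool:
        pool.map(work, range(jobs))
    return time.perf_counter() - start

if __name__ == "__main__":
    baseline = None
    for n in (1, 2, 4, 8, 16):
        t = run(n)
        baseline = baseline or t  # single-worker time is the reference
        print(f"{n:>2} workers: {t:6.2f} s (scaling {baseline / t:4.2f}x)")
```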

NVIDIA RTX 3080 Ti Tested in Ashes of the Singularity

The upcoming NVIDIA RTX 3080 Ti has already been tested in the old, tried and true Ashes of the Singularity benchmark - the first game to ever support DirectX 12. Considering the proximity of this Ashes of the Singularity bench run to the projected announcement date for the RTX 3080 Ti itself, on June 4, this may well be a reviewer's sample being tested.

DDR5-6400 RAM Benchmarked on Intel Alder Lake Platform, Shows Major Improvement Over DDR4

As the industry prepares for a shift to the new DDR standard, many companies are already manufacturing DDR5 memory modules. One of them is Shenzhen Longsys Electronics Co. Ltd., a Chinese manufacturer of memory chips, which has today demonstrated the potential of DDR5 technology. Starting this year, client platforms are expected to make the transition to the new standard, with data center/server platforms following. Using Intel's yet-unreleased Alder Lake-S client platform, Longsys has been able to test its DDR5 DIMMs running at an impressive 6400 MHz, and the company got some very interesting results.

Longsys demoed a DDR5 module with 32 GB capacity, a CAS latency (CL) of 40, an operating voltage of 1.1 V, and a clock speed of 6400 MHz. Impressive as that module is, it is not the peak of DDR5: according to the JEDEC specification, DDR5 will come with speeds of up to 8400 MHz and capacities of up to 128 GB per DIMM. Longsys ran some benchmarks in AIDA64 and Ludashi using an 8-core Alder Lake CPU, then compared the results with DDR4-3200 CL22 memory, which Longsys also manufactures. And the results? In the AIDA64 tests, the new DDR5 module is anywhere from 12-36% faster, with the only regression seen in latency, which roughly doubles. In the synthetic Ludashi Master Lu benchmark, the new DDR5 was spotted running 112% faster. Of course, these benchmarks, which you can check out here, are provided by the manufacturer, so you must take them with a grain of salt.
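For context, the theoretical peak bandwidth of a single 64-bit DIMM doubles between the two data rates:

\[ \text{DDR4-3200: } 3200\ \text{MT/s} \times 8\ \text{B} = 25.6\ \text{GB/s}, \qquad \text{DDR5-6400: } 6400\ \text{MT/s} \times 8\ \text{B} = 51.2\ \text{GB/s} \]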

Blizzard Benchmarks NVIDIA's Reflex Technology in Overwatch

Blizzard, the popular game developer, has today implemented NVIDIA's latest latency-reduction technology in its first-person shooter, Overwatch. Called NVIDIA Reflex, the technology aims to reduce system latency by combining NVIDIA GPUs with G-SYNC monitors and specially certified peripherals, all of which can be found on the company website. NVIDIA Reflex dynamically reduces system latency by combining GPU and game optimizations, which game developers implement, leaving the gamer with a much more responsive system that can edge out a competitive advantage. Today, we get to see just how much the new technology helps, in the latest Overwatch update that brings NVIDIA Reflex with it.

Blizzard has tested three NVIDIA GPUs: the GeForce RTX 3080, RTX 2060 SUPER, and GTX 1660 SUPER. The three GPUs cover different market segments, so they give a good indication of what you can expect from your own system. Starting with the GeForce GTX 1660 SUPER, system latency, measured in milliseconds, was cut by over 50%. The mid-range RTX 2060 SUPER experienced a similar gain, while the RTX 3080 saw the smallest improvement; however, it did achieve the lowest latency of all the GPUs tested. You can check out the results for yourself below.

Intel Rocket Lake Early Gaming Benchmarks Show Incremental Improvements

We have recently received some early gaming benchmarks for the upcoming Intel Core i7-11700K, after German retailer Mindfactory released the chip early. The creator of CapFrameX has managed to get their hands on one of these processors and has put it to the test against the Intel Core i9-10900K in several gaming benchmarks. Intel has promised double-digit IPC improvements with the new Rocket Lake generation of processors; however, if the results from this latest benchmark are representative of the wider picture, those improvements might be a bit more modest than Intel claims.

The processors were paired with an RTX 3090 and 32 GB of 3200 MHz memory, as this is the new maximum stock speed supported, versus 2933 MHz on the Core i9-10900K. The two processors were put to the test in Crysis Remastered, Cyberpunk 2077, and Star Wars Jedi: Fallen Order, with the i7-11700K coming out ahead in all three by roughly 2-9%. These tests are unverified and might not be fully representative of performance, but they give us a good indication of what Intel has to offer with these new 11th generation chips.