News Posts matching #3DMark


NVIDIA GeForce RTX 2050 & MX550 Laptop Graphics Cards Benchmarked

The recently announced NVIDIA GeForce RTX 2050, MX570, and MX550 laptop graphics cards have been benchmarked in 3DMark Time Spy. The RTX 2050 and MX570 both feature the Ampere GA107 GPU with 2048 CUDA cores, paired with 4 GB and 2 GB of 64-bit GDDR6 memory respectively. The MX550 uses the TU117 Turing GPU with 1024 CUDA cores running at 1320 MHz, paired with 2 GB of 64-bit GDDR6 12 Gbps memory. The RTX 2050 and MX570 performed similarly in the benchmark, each achieving a graphics score of 3369, while the MX550 scored 2510 points. These new laptop graphics cards will officially launch in spring 2022.

EVGA RTX 3090 Kingpin & Intel Core i9-12900K Set New 3DMark Port Royal Record

The 3DMark Port Royal single-card benchmark has a new record of 20,014 points, set by South Korean overclocker biso biso for Team EVGA. The overclocker used an Intel Core i9-12900K running at 5.4 GHz on an EVGA Z690 Dark Kingpin motherboard, paired with an EVGA GeForce RTX 3090 Kingpin overclocked to 2,895 MHz. The system also featured 16 GB of SK Hynix DDR5 memory running at 6000 MHz, liquid nitrogen cooling, and Thermal Grizzly Kryonaut Extreme thermal paste on both the CPU and GPU. This new record beats the previous record of 19,600, also set by biso biso, which featured the Core i9-10900K and RTX 3090 Kingpin.
Armed with the latest hardware, including an EVGA Z690 DARK K|NGP|N motherboard and an EVGA GeForce RTX 3090 K|NGP|N running at a blistering 2,895 MHz GPU clock, extreme overclocker "biso biso" set a new single-GPU standard for 3DMark Port Royal with a score of 20,014! This marks the first ever 3DMark Port Royal (single card) score over 20,000 and is a testament to the capabilities of the highest performing EVGA products.

3DMark Receives DirectX 12 Ultimate Sampler Feedback Feature Test

DirectX 12 Ultimate is the next generation for gaming graphics. It adds powerful new capabilities to DirectX 12, including DirectX Raytracing Tier 1.1, Mesh Shaders, Sampler Feedback, and Variable Rate Shading (VRS). These features help game developers improve visual quality, boost frame rates, reduce loading times, and create vast, detailed worlds.

3DMark already has dedicated tests for DirectX Raytracing, Mesh Shaders, and Variable Rate Shading. Today, we're adding a Sampler Feedback feature test, making 3DMark the first publicly available application to include all four major DirectX 12 Ultimate features. Experience DirectX 12 Ultimate today, only with 3DMark! Measure performance and compare image quality on your PC with our DirectX 12 Ultimate feature tests. Each test also has an interactive mode that lets you change settings and visualizations in real time.

Samsung Exynos SoC with RDNA2 Graphics Scores Highest Mobile Graphics Score

We recently reported that Samsung would be announcing its next-generation flagship Exynos processor with AMD RDNA2 graphics next month. The RDNA2 GPU was expected to be ~30% faster than the Mali-G78 GPU present in the Galaxy S21 Ultra; however, according to a new 3DMark Wild Life benchmark, the new processor scores 56% higher. This result would give the upcoming Exynos processor the fastest graphics available in any Android phone, even matching or beating the Apple A14 Bionic found in the iPhone 12. This early benchmark paints a very positive picture for the upcoming processor, but we still don't know how the score will hold up under sustained load, or whether this performance will be replicated in the final product.

3DMark Updated with New CPU Benchmarks for Gamers and Overclockers

UL Benchmarks is expanding 3DMark today by adding a set of dedicated CPU benchmarks. The 3DMark CPU Profile introduces a new approach to CPU benchmarking that shows how CPU performance scales with the number of cores and threads used. The new CPU Profile benchmark tests are available now in 3DMark Advanced Edition and 3DMark Professional Edition.

The 3DMark CPU Profile introduces a new approach to CPU benchmarking. Instead of producing a single number, the 3DMark CPU Profile shows how CPU performance scales and changes with the number of cores and threads used. The CPU Profile has six tests, each of which uses a different number of threads. The benchmark starts by using all available threads. It then repeats using 16 threads, 8 threads, 4 threads, 2 threads, and ends with a single-threaded test. These six tests help you benchmark and compare CPU performance for a range of threading levels. They also provide a better way to compare different CPU models by looking at the results from thread levels they have in common.
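To visualize what such thread-level scaling looks like in practice, here is a minimal Python sketch that times one fixed amount of work at the same thread levels the benchmark uses. This is not UL's workload; the compute loop is a hypothetical stand-in used only to show the pattern.

```python
# Toy illustration of the CPU Profile's thread-level idea: time one fixed
# workload at max workers, then 16, 8, 4, 2, and 1.
# This is NOT UL's benchmark workload; it only demonstrates the pattern.
import os
from concurrent.futures import ProcessPoolExecutor
from time import perf_counter

def chunk(n):
    # Simple integer busy-work standing in for a real benchmark kernel.
    return sum(i * i for i in range(n))

TOTAL = 16_000_000  # fixed total work, split evenly across workers

if __name__ == "__main__":
    levels = [os.cpu_count() or 1, 16, 8, 4, 2, 1]
    for workers in levels:
        start = perf_counter()
        with ProcessPoolExecutor(max_workers=workers) as pool:
            list(pool.map(chunk, [TOTAL // workers] * workers))
        print(f"{workers:>2} workers: {perf_counter() - start:.2f} s")
```

Higher worker counts should finish the same total work faster, which is exactly the scaling behavior the CPU Profile is designed to expose.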

Intel Core i9-11900KB Beast Canyon NUC 11 Extreme Benchmarked

We now have preliminary 3DMark benchmarks for the unreleased NUC 11 Extreme "Beast Canyon". We have already seen various leaks of the upcoming device detailing the available configurations and options. The NUC 11 Extreme will be offered with a choice of four processors, the highest being the upcoming Intel Core i9-11900KB, which is the star of today's benchmarks. The i9-11900KB-equipped NUC was paired with a desktop RTX 3060 and put through its paces in the Fire Strike and Time Spy 3DMark benchmarks. The i9-11900KB is a specialty processor designed for use in NUC products with the new add-in card form factor and is largely comparable to the desktop i9-11900.

The Intel Core i9-11900KB is an 8-core/16-thread mobile processor with a base clock of 3.3 GHz, a boost clock of 4.9 GHz, and a Thermal Velocity Boost of 5.3 GHz. The chip features 24 MB of L3 cache and comes with a configurable TDP of 55 W/65 W. The base clock is higher than the 2.5 GHz of the desktop i9-11900, while the boost clock is 300 MHz lower and the Thermal Velocity Boost is 100 MHz higher. The Intel Core i9-11900KB averages 93-101% of the performance of the i9-11900, which is to be expected considering the clock speeds.

UL Benchmarks Announces 3DMark Wild Life Extreme Cross-platform Benchmark

Last year UL Benchmarks released 3DMark Wild Life, a cross-platform benchmark for Android, iOS and Windows. Today we are releasing 3DMark Wild Life Extreme, a more demanding cross-platform benchmark for comparing the graphics performance of mobile computing devices such as Windows notebooks, Always Connected PCs powered by Windows 10 on Arm, Apple Mac computers powered by the M1 chip, and the next generation of smartphones and tablets.

3DMark Wild Life Extreme is a new cross-platform benchmark for Apple, Android and Windows devices. Run Wild Life Extreme to test and compare the graphics performance of the latest Windows notebooks, Always Connected PCs powered by Windows 10 on Arm, Apple Mac computers powered by the M1 chip, and the next generation of smartphones and tablets.

DOWNLOAD: 3DMark for Windows v2.18.7181

Intel Xe HPG Graphics Card Could Compete with Big Navi & Ampere

Intel started shipping its first Xe DG1 graphics card to OEMs earlier this year, featuring 4 GB of LPDDR4X and 80 execution units. While these initial cards aren't high-performance parts and only compete with entry-level products like the NVIDIA MX350, they demonstrated Intel's ability to produce a discrete GPU. We have recently received some more potential performance information about Intel's next card, the DG2, from chief GPU architect Raja Koduri. Koduri posted an image of himself with the team at Intel's Folsom lab working on the Intel Iris Pro 5200 iGPU back in 2012, and noted that now, 9 years later, their new GPU is over 20 times faster. The Intel Iris Pro 5200 scores 1015 on Videocard Benchmarks and ~1,400 points in 3DMark Fire Strike; if we scale those scores by 20x, we get values of ~20,300 in Videocard Benchmarks and ~28,000 points in Fire Strike. While these values don't extrapolate perfectly, they provide a good indication of potential performance, placing the GPU in the same realm as the NVIDIA RTX 3070 and AMD RX 6800.
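The extrapolation above is plain multiplication; this small sketch simply reproduces it from the quoted Iris Pro 5200 baselines, with the 20x multiplier taken from Koduri's remark rather than from any measurement.

```python
# Back-of-the-envelope extrapolation of the "over 20 times faster" claim.
# Baselines are the Iris Pro 5200 figures quoted above; 20x is Koduri's
# remark, not a measured DG2 result.
baselines = {
    "Videocard Benchmarks": 1015,
    "3DMark Fire Strike (approx.)": 1400,
}
MULTIPLIER = 20

for test, score in baselines.items():
    print(f"{test}: {score} x {MULTIPLIER} = {score * MULTIPLIER:,}")
# Videocard Benchmarks: 1015 x 20 = 20,300
# 3DMark Fire Strike (approx.): 1400 x 20 = 28,000
```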

NVIDIA GeForce RTX 3070 Memory-Modded with 16GB

PC enthusiast and overclocker VIK-on pulled off a daring memory chip mod on his Palit GeForce RTX 3070 GamingPro OC graphics card, swapping its 8 GB of 14 Gbps GDDR6 memory for 16 GB using eight replacement 16 Gbit chips. The modded card recognizes the 16 GB of memory, utilizes it like other 16 GB graphics cards (such as the Radeon RX 6800), and is fairly stable in benchmarks and stress tests, although it wasn't initially: it threw up some black screens. VIK-on later discovered that locking the clock speeds using EVGA Precision-X stabilizes the card, so it performs as expected.

The mod involves physically replacing the card's stock 8 Gbit memory chips with 16 Gbit ones, and shorting certain straps on the PCB that let it recognize the new memory chip brand and density. After the mod, the GeForce driver and GPU-Z read 16 GB of video memory, and the card is able to handle stress tests such as FurMark. The card initially underperformed in 3DMark, putting out a Time Spy score of just 8356 points; but following the clock-speed lock fix, it scores around 13,000 points. The video presentation can be watched from the source link below. Kudos to VIK-on!

UL Benchmarks Announces DirectX 12 3DMark Mesh Shader Test

DirectX 12 Ultimate adds powerful new features and capabilities to DirectX 12 including DirectX Raytracing Tier 1.1, Mesh Shaders, Sampler Feedback, and Variable Rate Shading (VRS). After DirectX 12 Ultimate was announced, we started adding new tests to 3DMark to show how games can benefit from these new features. Our latest addition is the 3DMark Mesh Shader feature test, a new test that shows how game developers can boost frame rates by using mesh shaders in the graphics pipeline.

Kosin Demonstrates RTX 3090 Running Via Ryzen Laptop's M.2 Slot

Kosin, a Chinese subsidiary of Lenovo, has recently published a video showing how it modded a Ryzen notebook to run an RTX 3090 from the NVMe M.2 slot. Kosin used its Xiaoxin Air 14 laptop with a Ryzen 5 4600U processor for the demonstration. The system's internal M.2 NVMe SSD was removed and an M.2-to-PCIe expansion cable was attached, allowing them to connect the RTX 3090. Finally, the laptop housing was modified to allow the PCIe cable to exit the chassis, and a desktop power supply was attached to the RTX 3090 for power.

The system booted and correctly detected and utilized the attached RTX 3090. It performed admirably, scoring 14,008 points in 3DMark Time Spy; for comparison, the RTX 3090 paired with a desktop Ryzen 5 3600 scores 15,552 points, and when paired with a Ryzen 7 5800X it scores 17,935 points. While this is an extreme example, pairing an RTX 3090 with a mid-range mobile processor, it goes to show how much performance is achievable over the NVMe M.2 connector. The x4 PCIe 3.0 link of the laptop's M.2 slot can handle a maximum of 4 GB/s, while the x16 PCIe 3.0 slot on previous-generation processors offered 16 GB/s, and the new x16 PCIe 4.0 connector doubles that, providing 32 GB/s of available bandwidth.
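As a rough check on those bandwidth figures, the standard per-lane arithmetic (8 GT/s per lane for PCIe 3.0, 16 GT/s for PCIe 4.0, 128b/130b encoding) lands close to the rounded 4/16/32 GB/s numbers quoted above.

```python
# Approximate one-directional PCIe bandwidth from transfer rate and lane count.
def pcie_bandwidth_gb_s(transfer_rate_gts, lanes, encoding=128 / 130):
    # GT/s per lane -> GB/s per lane (8 bits per byte), times lane count.
    return transfer_rate_gts * encoding / 8 * lanes

print(f"PCIe 3.0 x4 : {pcie_bandwidth_gb_s(8, 4):.1f} GB/s")    # ~3.9 GB/s
print(f"PCIe 3.0 x16: {pcie_bandwidth_gb_s(8, 16):.1f} GB/s")   # ~15.8 GB/s
print(f"PCIe 4.0 x16: {pcie_bandwidth_gb_s(16, 16):.1f} GB/s")  # ~31.5 GB/s
```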

NVIDIA RTX 2070 Modded to Support 16GB Memory

PC enthusiast VIK-on pulled off a sophisticated memory mod for the GeForce RTX 2070, doubling its memory to 16 GB. In a detailed video presentation (linked below), VIK-on demonstrated how he carefully removed the 8 Gbit Micron-made GDDR6 memory chips of his card and replaced them with 16 Gbit Samsung-made chips he bought off AliExpress for $200. Memory replacement mods are extremely difficult to pull off, as you first de-solder the existing chips using a hot air gun while keeping the contacts on the PCB intact (ensuring no pins short), and then solder the replacement BGA memory chips in place.

In addition, a set of "jumpers" on the PCB needs to be modified to make the card recognize the Samsung memory. The resulting card booted to desktop successfully, with GPU-Z reading its full 16 GB memory size. The card successfully made it through 3DMark Time Spy, albeit with 30% lower performance than a normal RTX 2070 (6176 points vs. ~9107 points). The card would also crash in FurMark. Still, it's mighty impressive that the "TU106" recognizes 16 GB of addressable memory (which means all its memory channels are intact) without the need for any BIOS mods, which would be impossible to pull off anyway.
Watch the VIK-on video presentation here.

NVIDIA GeForce RTX 3060 Ti Fire Strike and Time Spy Scores Surface

3DMark scores of the upcoming NVIDIA GeForce RTX 3060 Ti were leaked to the web by VideoCardz. The RTX 3060 Ti was put through standard 3DMark Fire Strike and Time Spy benchmark runs. In the DirectX 11-based Fire Strike benchmark, the card allegedly scores 30706 points, with 146.05 FPS in GT1 and 122 FPS in GT2. With the newer DirectX 12-based Time Spy test, it allegedly scores 12175 points, with 80.82 FPS in GT1, and 68.71 FPS in GT2. There are no system specs on display, but the scores put the RTX 3060 Ti slightly ahead of the previous-generation high-end GeForce RTX 2080 Super.

The GeForce RTX 3060 Ti, bound for a December 2 launch, is an upcoming performance-segment graphics card based on the "Ampere" architecture, and is carved out of the same 8 nm "GA104" silicon as the RTX 3070. It reportedly packs 4,864 "Ampere" CUDA cores, 38 second-gen RT cores, 152 third-gen Tensor cores, and the same memory configuration as the RTX 3070—8 GB of 14 Gbps GDDR6 across a 256-bit wide bus. NVIDIA is targeting a "<$399" price-point, making the card at least 43% cheaper than the RTX 2080 Super.
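As a quick sanity check on the "at least 43% cheaper" claim, the arithmetic below assumes the RTX 2080 Super's $699 launch MSRP, which is not stated in the post above.

```python
# Price gap between the rumored RTX 3060 Ti price cap and the RTX 2080 Super.
# The $699 figure is the 2080 Super's launch MSRP, assumed here for reference.
rtx_3060_ti_max = 399
rtx_2080_super = 699

discount = (rtx_2080_super - rtx_3060_ti_max) / rtx_2080_super
print(f"{discount:.0%} cheaper at $399, more if priced below")  # 43% cheaper
```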

UL Benchmarks Updates 3DMark with Estimated Game Performance Data

UL Benchmarks late Wednesday released a major update to 3DMark. The new version 2.16.7094 adds a redesigned Results tab and introduces the Estimated Game Performance feature, which lets you extrapolate your 3DMark score to real-world game performance. The feature debuts with game performance estimates for five popular titles: "Apex Legends," "Battlefield V," "Fortnite," "Grand Theft Auto V," and "Red Dead Redemption 2." After a benchmark run, simply select one of these games from a drop-down menu, and a resolution+preset combo from another, and 3DMark will give you an estimated frame-rate number, taking into account your overall 3DMark score and the performance of sub-systems (graphics, physics, etc.). In this article, UL Benchmarks explains the correlation between a 3DMark score and the corresponding estimated frame rates for "Fortnite."

DOWNLOAD: 3DMark 2.16.7094

Radeon RX 6800 XT Overclocked to 2.80 GHz on LN2, Crushes 3DMark Fire Strike Record

One of the secret sauces of AMD's new "Big Navi" Radeon RX 6800 series GPUs is the ability for the GPU to sustain high engine clock speeds, with the company leveraging boost frequencies above 2.00 GHz in certain scenarios. It was only a matter of time before professional overclockers got their hands on an RX 6800 XT, and paired it with an LN2 evaporator. TecLab_Takukou is the new king of the 3DMark Fire Strike leaderboard thanks to their skills in running an RX 6800 XT at insane 2.80 GHz engine clocks. Takukou overclocked the RX 6800 XT to 2.80 GHz core, and 2150 MHz (17.2 Gbps) memory. This particular configuration yielded a Fire Strike score of 48890 points.

In a separate feat, with the RX 6800 XT running at 2.75 GHz, Takukou also chased down the HWBot 3DMark Fire Strike leaderboard, scoring 49456 points. For both the 2.80 GHz and 2.75 GHz feats, the rest of the system included an LN2-cooled AMD Ryzen 9 5950X processor running at 5.60 GHz all-core, 32 GB of DDR4-3800 memory, and an MSI MEG X570 GODLIKE motherboard. From the looks of it, Takukou is using a reference-design RX 6800 XT board, so we can only imagine what could be accomplished with custom-design RX 6800 XT boards such as the Sapphire NITRO+, PowerColor Red Devil, or ASUS ROG Strix O16G. Takukou leads the 3DMark Fire Strike leaderboard, followed by a close second, also RX 6800 XT-powered: Lucky_n00b (47932 points, 2.65 GHz engine clock on air). Safedisc is third (47725 points, RTX 3090 @ 2.38 GHz).

Overclocked AMD Radeon RX 6800 XT Sets World Record on Air

AMD's Radeon RX 6800 XT graphics card debuted yesterday, and in the overall test results we saw that it runs just a few percent behind NVIDIA's direct competitor, the GeForce RTX 3080. However, when it comes to overclocking and world records, the card has just set one. Popular extreme overclocker Alva Jonathan, aka "LUCKY_NOOB", has managed to overclock the Radeon RX 6800 XT and set a new world record with the card. Paired with an LN2-cooled Ryzen 9 5950X clocked at 5.4 GHz, the graphics card itself was cooled by... an air cooler? Indeed it was. Lucky managed to clock the RX 6800 XT at 2650 MHz using the reference air cooler. With that system, he scored 47932 points in 3DMark Fire Strike.

The overclocker modified 3DMark's tessellation settings, presumably to give the Radeon card more performance, so the score isn't valid on the official 3DMark charts. What makes it a world record is the fact that HWBOT does accept such submissions, ranking it higher than the NVIDIA GeForce RTX 3090 that scored 47725 points. Despite the modifications, it is impressive to see AMD's card rank that high, and as more overclockers get their hands on these cards, the question is whether we will see the 3 GHz barrier broken by a Radeon card.

Intel "Tiger Lake" Gen12 Xe iGPU Compared with AMD "Renoir" Vega 8 in 3DMark "Night Raid"

Last week, reports of Intel's Gen12 Xe integrated graphics solution catching up with the AMD Radeon Vega 8 iGPU found in its latest Ryzen 4000U processors in higher-tier 3DMark tests sparked quite some intrigue. AMD's higher CPU core count bailed the processor out in overall 3DMark 11 scores. Thanks to Thai PC enthusiast TUM_APISAK, we now have a face-off between the Core i7-1165G7 "Tiger Lake-U" processor (15 W), the AMD Ryzen 7 4800U (15 W), and the mainstream-segment Ryzen 7 4800HS (35 W) in 3DMark "Night Raid."

The "Night Raid" test is designed to evaluate iGPU performance, and takes advantage of DirectX 12. The Core i7-1165G7 falls behind both the Ryzen 7 4800U and the 4800HS in CPU score, owing to its lower CPU core count, despite higher IPC. The i7-1165G7 is a 4-core/8-thread chip featuring "Willow Cove" CPU cores, facing off against 8-core/16-thread "Zen 2" CPU setups on the two Ryzens. Things get interesting with graphics tests, where the Radeon Vega 8 solution aboard the 4800U scores 64.63 FPS in GT1, and 89.41 FPS in GT2; compared to just 27.79 FPS in GT1 and 32.05 FPS in GT2, by the Gen12 Xe iGPU in the i7-1165G7.

Intel 8-core/16-thread "Rocket Lake-S" Processor Engineering Sample 3DMarked

The "Rocket Lake-S" microarchitecture by Intel sees the company back-port its next-generation "Willow Cove" CPU core to the existing 14 nm++ silicon fabrication process in the form of an 8-core die with a Gen12 Xe iGPU. An engineering sample of one such processor made it to the Futuremark database. Clocked at 3.20 GHz with 4.30 GHz boost frequency, the "Rocket Lake-S" ES was put through 3DMark "Fire Strike" and "Time Spy," with its iGPU in play, instead of a discrete graphics card.

In "Fire Strike," the "Rocket Lake-S" ES scores 18898 points in the physics test, 1895 points in the graphics tests, and an overall score of 1746 points. With "Time Spy," the overall score is 605, with a CPU score of 4963 points, and graphics score of 524. The 11th generation Core "Rocket Lake-S" processor is expected to be compatible with existing Intel 400-series chipset motherboards, and feature a PCI-Express gen 4.0 root complex. Several 400-series chipset motherboards have PCIe gen 4.0 preparation for exactly this. The increased IPC from the "Willow Cove" cores is expected to make the 8-core "Rocket Lake-S" a powerful option for gaming and productivity tasks that don't scale across too many cores.

Intel "Elkhart Lake" Processor Put Through 3DMark

One of the first performance benchmarks of Intel's upcoming low-power processor codenamed "Elkhart Lake" surfaced on the Futuremark database, courtesy of TUM_APISAK. The chip scores 571 points, with a graphics score of 590 and a physics score of 3801. The graphics score of the Gen11-based iGPU is behind the Intel UHD 630 Gen 9.5 iGPU found in heavier desktop processors since "Kaby Lake," but we suspect it's being dragged down by the CPU (3801 points in physics vs. roughly 17000 points for a 6-core "Coffee Lake" processor). The chip goes on to score 170 points in Time Spy, with a graphics score of 148 and a physics score of 1131. Perhaps Cloud Gate would've been a more apt test.

The "Elkhart Lake" silicon is built on Intel's 10 nm silicon fabrication process, and will power the next generation of Pentium Silver and Celeron processors. The chip features up to 4 CPU cores based on the "Tremont" low-power architecture, and an iGPU based on the newer Gen11 architecture. It features a single-channel memory controller that supports DDR4 and LPDDR4/x memory types. The chip in these 3DMark tests is a 4-core variant, likely a Pentium Silver engineering sample, with its CPU clocked at 1.90 GHz, and is paired with LPDDR4x memory. The chip comes in 5 W, 9 W, and 12 W TDP variants.

AMD Ryzen 9 3900XT and Ryzen 7 3800XT Benchmarks Surface

AMD's 3rd generation Ryzen "Matisse Refresh" processors surfaced on the Futuremark online database, as dug up by TUM_APISAK, where someone with access to them allegedly posted some performance numbers. Interestingly, the clock speeds as read by the Futuremark SystemInfo module appear very different from what was previously reported. The 3800XT is shown with a 3.80 GHz nominal clock, boosting up to 4.70 GHz, while the 3900XT has a 3.90 GHz nominal clock, boosting up to the same 4.70 GHz as the 3800XT. APISAK reports that the 3800XT scores 25135 points in the Fire Strike physics test.

A WCCFTech report presents screenshots of Cinebench R20 single-thread scores for the 3900XT, where it is shown beating the i9-10900K. The 3800XT is within striking distance of the i9-10900K in this test, and beats the i7-10700KF. This single-threaded performance figure suggests that AMD's design focus with "Matisse Refresh" has been to shore up single-threaded and less-parallelized application performance, in other words, gaming performance.

Core i3-10100 vs. Ryzen 3 3100 Featherweight 3DMark Showdown Surfaces

AMD's timely announcement of the Ryzen 3 "Matisse" processor series could stir things up in the entry-level segment, as Intel has kitted out its 10th generation Core i3 processors as 4-core/8-thread parts. Last week, a head-to-head Cinebench comparison between the i3-10300 and the 3300X ensued, and today we have a 3DMark Fire Strike and Time Spy comparison between their smaller siblings, the i3-10100 and the 3100, courtesy of Thai PC enthusiast TUM_APISAK. The two were benchmarked on Time Spy and Fire Strike on otherwise constant hardware: an RTX 2060 graphics card, 16 GB of memory, and a 1 TB Samsung 970 EVO SSD.

With Fire Strike, the 3100-powered machine leads in the overall 3DMark score (by 0.31%) and in the CPU-dependent Physics score (by 13.7%). The i3-10100 is ahead by 1.4% in the Graphics score, thanks to a 1.6% lead in graphics test 1 and a 1.4% lead in graphics test 2. Over to the more advanced Time Spy test, which uses the DirectX 12 API that better leverages multi-core CPUs, the Ryzen 3 3100 posts a 0.63% higher overall score and a 1.5% higher CPU score, while the i3-10100-powered machine posts a graphics score that is less than 1% higher. These numbers suggest that the i3-10100 and the 3100 are within striking distance of each other and that either is a good pick for gamers, until you look at pricing. Intel's official pricing for the i3-10100 is $122 (per chip in 1,000-unit trays), whereas AMD lists the SEP of the Ryzen 3 3100 at $99, making the Intel chip at least 22% pricier. That gives AMD a vast price-performance advantage that's hard to ignore, more so when you take into account value additions such as an unlocked multiplier and PCIe gen 4.0.
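The pricing gap is easy to verify from the quoted tray/SEP prices; a minimal check:

```python
# Relative price premium of the i3-10100 over the Ryzen 3 3100,
# using the tray/SEP prices quoted above (USD).
i3_10100 = 122
ryzen_3_3100 = 99

premium = i3_10100 / ryzen_3_3100 - 1
print(f"Intel premium: {premium:.1%}")  # ~23.2%, i.e. at least 22% pricier
```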

Intel Gen12 Xe iGPU Could Match AMD's Vega-based iGPUs

Intel's first integrated graphics solution based on its ambitious new Xe graphics architecture could match AMD's "Vega" architecture based iGPU solutions, such as the one found in its latest Ryzen 4000 series "Renoir" processors, according to leaked 3DMark Fire Strike numbers put out by @_rogame. Benchmark results of a prototype laptop based on Intel's "Tiger Lake-U" processor surfaced on the 3DMark database. This processor embeds Intel's Gen12 Xe iGPU solution, which is purported to offer significant performance gains over current Gen11 and Gen9.5 based iGPUs.

The prototype 2-core/4-thread "Tiger Lake-U" processor with Gen12 graphics yields a 3DMark Fire Strike score of 2,196 points, with a graphics score of 2,467 and a physics score of 6,488. These scores are comparable to 8 CU Radeon Vega iGPU solutions. "Renoir" tops out at 8 CUs, but shores up performance to 11 CU "Picasso" levels by other means: tapping into the 7 nm process to increase engine clocks, improving the boosting algorithm, and modernizing the display and multimedia engines. Beyond that, AMD's iGPU is largely based on the same 3-year-old "Vega" architecture. Intel Gen12 Xe makes its debut with the "Tiger Lake" microarchitecture slated for 2021.

UL Benchmarks Outs 3DMark Feature Test for Variable-Rate Shading Tier-2

UL Benchmarks today announced an update to 3DMark, expanding the Variable-Rate Shading (VRS) feature test with support for VRS Tier 2. A component of DirectX 12, VRS Tier 1 is supported by the NVIDIA "Turing" and Intel Gen11 graphics architectures (Ice Lake's iGPU). VRS Tier 2 is currently supported only by NVIDIA "Turing" GPUs. Tier 2 enables finer-grained control, such as lowering the shading rate for areas of the scene with low contrast to their surroundings (think areas under shadow), yielding performance gains. The 3DMark VRS test runs in two passes: pass 1 runs with VRS off to provide a point of reference, and pass 2 runs with VRS on to measure the performance gained. The 3DMark update with the VRS Tier 2 test applies to the Advanced and Professional editions.

DOWNLOAD: 3DMark v2.11.6846

AMD Radeon RX 5500 (OEM) Tested, Almost As Fast as RX 580

German publication Heise.de got its hands on a Radeon RX 5500 (OEM) graphics card and put it through its test bench. The numbers show exactly what caused NVIDIA to refresh its entry level with the GeForce GTX 1650 Super and the GTX 1660 Super. In Heise's testing, the RX 5500 was found to match the previous-generation RX 580 and NVIDIA's current-gen GTX 1660 (non-Super). Compared to a factory-overclocked RX 580 NITRO+ and GTX 1660 OC, the RX 5500 yielded similar 3DMark Fire Strike performance, with 12,111 points against 12,744 points for the RX 580 NITRO+ and 12,525 points for the GTX 1660 OC.

The card was put through two other game tests at 1080p, "Shadow of the Tomb Raider" and "Far Cry 5." In SoTR, the RX 5500 put out 59 fps, slightly behind the 65 fps of the RX 580 NITRO+ and the 69 fps of the GTX 1660 OC. In "Far Cry 5," it scored 72 fps, which again is within reach of the 75 fps of the RX 580 NITRO+ and the 85 fps of the GTX 1660 OC. It's important to once again note that the RX 580 and GTX 1660 in this comparison are factory-overclocked cards, while the RX 5500 is ticking at stock speeds. Heise also did some power testing and found the RX 5500 to have a lower idle power draw than the GTX 1660 OC, at 7 W compared to 10 W for the NVIDIA card and 12 W for the RX 580 NITRO+. Gaming power draw is also similar to the GTX 1660, with the RX 5500 pulling 133 W compared to 128 W for the GTX 1660. This short test shows that the RX 5500 is in the same league as the RX 580 and GTX 1660, and explains why NVIDIA had to make its recent product-stack changes.

Intel Core i9-10980XE "Cascade Lake-X" Benchmarked

One of the first reviews of Intel's new flagship HEDT processor, the Core i9-10980XE, just hit the web. Lab501.ro got their hands on a freshly minted i9-10980XE and put it through their test bench. Based on the "Cascade Lake-X" silicon, the i9-10980XE offers almost identical IPC to "Skylake-X," but succeeds the older generation with the AI-accelerating DL Boost instruction set, an improved multi-core boosting algorithm, higher clock speeds, and, most importantly, a doubling in price-performance achieved by cutting the price-per-core metric roughly in half across the board.

Armed with 18 cores, the i9-10980XE is ahead of the 12-core Ryzen 9 3900X in rendering and simulation tests, although not by much for a chip that has 50% more cores. This is probably attributable to the competing AMD chip being able to sustain higher all-core boost clock speeds. In tests that not only scale with cores but are also hungry for memory bandwidth, such as 7-Zip and media workloads, Intel extends its lead thanks to its quad-channel memory interface, which is able to feed its cores with data faster.