News Posts matching #3DMark


Intel Gen12 Xe iGPU Could Match AMD's Vega-based iGPUs

Intel's first integrated graphics solution based on its ambitious new Xe graphics architecture could match AMD's "Vega"-based iGPU solutions, such as the one found in the latest Ryzen 4000 series "Renoir" APUs, according to leaked 3DMark FireStrike numbers put out by @_rogame. Benchmark results of a prototype laptop based on Intel's "Tiger Lake-U" processor surfaced on the 3DMark database. This processor embeds Intel's Gen12 Xe iGPU, which is purported to offer significant performance gains over current Gen11- and Gen9.5-based iGPUs.

The prototype 2-core/4-thread "Tiger Lake-U" processor with Gen12 graphics yields a 3DMark FireStrike score of 2,196 points, with a graphics score of 2,467 and a physics score of 6,488. These scores are comparable to 8 CU Radeon Vega iGPU solutions. "Renoir" tops out at 8 CUs, but shores up performance to 11 CU "Picasso" levels by other means: tapping into the 7 nm process to increase engine clocks, improving the boosting algorithm, and modernizing the display and multimedia engines. Beyond that, AMD's iGPU is largely based on the same three-year-old "Vega" architecture. Intel's Gen12 Xe makes its debut with the "Tiger Lake" microarchitecture slated for 2021.

UL Benchmarks Outs 3DMark Feature Test for Variable-Rate Shading Tier-2

UL Benchmarks today announced an update to 3DMark that expands the Variable-Rate Shading (VRS) feature test with support for VRS Tier 2. A component of DirectX 12, VRS Tier 1 is supported by NVIDIA "Turing" and Intel Gen11 graphics architectures (Ice Lake's iGPU). VRS Tier 2 is currently supported only by NVIDIA "Turing" GPUs. Tier 2 enables finer-grained optimizations, such as lowering the shading rate for areas of the scene with low contrast against their surroundings (think areas under shadow), yielding performance gains. The 3DMark VRS test runs in two passes: pass 1 runs with VRS off to provide a point of reference, and pass 2 runs with VRS on, to measure the performance gained. The update with the VRS Tier 2 test applies to the Advanced and Professional editions.
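For context, here is a minimal sketch of how an engine can query the supported VRS tier and apply coarse shading through D3D12. This is illustrative only (not 3DMark's actual code); the device, command list, and shading-rate image are assumed to come from the caller's setup:

```cpp
#include <d3d12.h>

// Sketch: enable VRS according to the tier the GPU reports.
// 'device', 'cmdList', and 'rateImage' are assumed pre-initialized.
void ApplyVariableRateShading(ID3D12Device* device,
                              ID3D12GraphicsCommandList5* cmdList,
                              ID3D12Resource* rateImage)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6,
                                &opts, sizeof(opts));

    if (opts.VariableShadingRateTier >= D3D12_VARIABLE_SHADING_RATE_TIER_1)
    {
        // Tier 1: one rate per draw; 2x2 runs one pixel-shader
        // invocation per 2x2 pixel block, roughly quartering shading work.
        cmdList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, nullptr);
    }
    if (opts.VariableShadingRateTier >= D3D12_VARIABLE_SHADING_RATE_TIER_2)
    {
        // Tier 2: a screen-space image selects a rate per tile, so only
        // low-contrast regions (e.g. shadows) drop to coarser shading.
        cmdList->RSSetShadingRateImage(rateImage);
    }
}
```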

DOWNLOAD: 3DMark v2.11.6846

AMD Radeon RX 5500 (OEM) Tested, Almost As Fast as RX 580

German publication Heise.de got its hands on a Radeon RX 5500 (OEM) graphics card and put it through its test bench. The numbers show exactly what caused NVIDIA to refresh its entry level with the GeForce GTX 1650 Super and GTX 1660 Super. In Heise's testing, the RX 5500 was found matching the previous-generation RX 580 and NVIDIA's current-gen GTX 1660 (non-Super). Compared to a factory-overclocked RX 580 NITRO+ and GTX 1660 OC, the RX 5500 yielded similar 3DMark FireStrike performance, scoring 12,111 points against 12,744 points for the RX 580 NITRO+ and 12,525 points for the GTX 1660 OC.

The card was put through two other game tests at 1080p: "Shadow of the Tomb Raider" and "Far Cry 5." In SoTR, the RX 5500 put out 59 fps, slightly behind the 65 fps of the RX 580 NITRO+ and 69 fps of the GTX 1660 OC. In "Far Cry 5," it scored 72 fps, again within reach of the 75 fps of the RX 580 NITRO+ and 85 fps of the GTX 1660 OC. It's important to note once again that the RX 580 and GTX 1660 in this comparison are factory-overclocked cards, while the RX 5500 is ticking at stock speeds. Heise also did some power testing, and found the RX 5500 to have a lower idle power draw than the GTX 1660 OC, at 7 W compared to 10 W for the NVIDIA card and 12 W for the RX 580 NITRO+. Gaming power draw is also similar to the GTX 1660, with the RX 5500 pulling 133 W compared to 128 W for the GTX 1660. This short test shows that the RX 5500 is in the same league as the RX 580 and GTX 1660, and explains why NVIDIA made its recent product-stack changes.

Intel Core i9-10980XE "Cascade Lake-X" Benchmarked

One of the first reviews of Intel's new flagship HEDT processor, the Core i9-10980XE, just hit the web. Lab501.ro got its hands on a freshly minted i9-10980XE and put it through its test bench. Based on the "Cascade Lake-X" silicon, the i9-10980XE offers almost identical IPC to "Skylake-X," but succeeds the older generation with the AI-accelerating DLBoost instruction set, an improved multi-core boosting algorithm, higher clock speeds, and most importantly, a doubling in price-performance achieved by roughly halving the price-per-core across the board.

Armed with 18 cores, the i9-10980XE is ahead of the 12-core Ryzen 9 3900X in rendering and simulation tests, although not by much (for a chip that has 50% more cores). This is probably attributable to the competing AMD chip being able to sustain higher all-core boost clock speeds. In tests that not only scale with cores but are also hungry for memory bandwidth, such as 7-Zip and media encoding, Intel extends its lead thanks to its quad-channel memory interface, which is able to feed its cores with datasets faster.

AMD Ryzen 9 3950X Beats Intel Core i9-10980XE by 24% in 3DMark Physics

AMD's upcoming Ryzen 9 3950X socket AM4 processor beats Intel's flagship 18-core processor, the Core i9-10980XE, by a staggering 24 percent in 3DMark Physics, according to a PC Perspective report citing TUM_APISAK. The 3950X is a 16-core/32-thread processor that's drop-in compatible with any motherboard that can run the Ryzen 9 3900X. The i9-10980XE is an 18-core/36-thread HEDT chip that enjoys double the memory bus width of the AMD chip, and is based on Intel's "Cascade Lake-X" silicon. The AMD processor doesn't hold a tangible clock-speed advantage on paper: the 3950X has a maximum boost frequency of 4.70 GHz, while the i9-10980XE isn't far behind at 4.60 GHz, but things differ with all-core boost.

When paired with 16 GB of dual-channel DDR4-3200 memory, the Ryzen 9 3950X-powered machine scores 32,082 points in the CPU-intensive physics test of 3DMark. In comparison, the i9-10980XE, paired with 32 GB of quad-channel DDR4-2667 memory, scores just 25,838 points, as mentioned by PC Perspective. The graphics card is irrelevant to this test. It's pertinent to note that the 3DMark physics test scales across practically any number of CPU cores/threads, and the AMD processor could be benefiting from a higher all-core boost frequency than the Intel chip. Although AMD doesn't mention a number in its specifications, the 3950X is expected to have an all-core boost frequency north of 4.00 GHz, as its 12-core sibling, the 3900X, already offers 4.20 GHz all-core. In contrast, the i9-10980XE has an all-core boost frequency of 3.80 GHz. This difference in boost frequency apparently even negates the additional 2 cores and 4 threads that the Intel chip enjoys, in what is yet another example of AMD having caught up with Intel in the IPC game.
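The arithmetic behind the headline figure, plus a back-of-the-envelope core-GHz comparison (assuming the 3950X sustains around 4.2 GHz all-core, like its 12-core sibling), suggests raw clocks × cores is nearly a wash between the two chips, leaving per-clock throughput to explain most of the gap:

\[
\frac{32{,}082}{25{,}838} \approx 1.24 \quad (+24\%)
\]
\[
16 \times 4.2\,\mathrm{GHz} \approx 67\ \text{core-GHz} \quad \text{vs.} \quad 18 \times 3.8\,\mathrm{GHz} \approx 68\ \text{core-GHz}
\]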

Intel Iris Plus Graphics G7 iGPU Beats AMD RX Vega 10: Benchmarks

Intel is taking big strides forward with its Gen11 integrated graphics architecture. Its performance-configured variant, the Intel Iris Plus Graphics G7, featured in the Core i7-1065G7 "Ice Lake" processor, is found to beat the AMD Radeon RX Vega 10 iGPU found in the Ryzen 7 2700U ("Raven Ridge") processor by as much as 16 percent in 3DMark 11, and a staggering 23 percent in 3DMark FireStrike 1080p. Notebook Check put the two iGPUs through these and a few game tests to derive an initial verdict that Intel's iGPU has caught up with AMD's RX Vega 10. AMD has since updated its iGPU incrementally with the "Picasso" silicon, providing it with higher clock speeds and updated display and multimedia engines.

The machines tested here are the Lenovo Ideapad S540-14API for the AMD chip, and the Lenovo Yoga C940-14IIL with the i7-1065G7. The Iris Plus G7 packs 64 Gen11 execution units, while the Radeon RX Vega 10 has 640 stream processors based on the "Vega" architecture. Moving over to gaming performance, the Intel iGPU is 2 percent faster than the RX Vega 10 in Bioshock Infinite at 1080p, 12 percent slower in Dota 2 Reborn at 1080p, and 8 percent faster in X-Plane 11.11.
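On paper the two are closer than the raw unit counts suggest: each Gen11 EU carries eight FP32 ALU lanes, so 64 EUs amount to 512 lanes against Vega 10's 640 stream processors. A rough peak-compute estimate, assuming the rated maximum clocks of about 1.1 GHz for the G7 and 1.3 GHz for the Vega 10, and 2 FLOPS per lane per clock for the fused multiply-add:

\[
\text{Iris Plus G7: } 512 \times 2 \times 1.1\,\mathrm{GHz} \approx 1.13\ \mathrm{TFLOPS}
\]
\[
\text{RX Vega 10: } 640 \times 2 \times 1.3\,\mathrm{GHz} \approx 1.66\ \mathrm{TFLOPS}
\]

Sustained clocks under a laptop power budget fall well short of these maximums, which is why the benchmark results can defy the raw numbers.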

Leaked 3DMark Time Spy Result shows Radeon RX 5700 XT matching GeForce RTX 2070

Reviewers should have received their Radeon "Navi" review samples by now, so it's only natural that the number of leaks is increasing. WCCFTech has spotted one such leak in the 3DMark Time Spy database. The card, labeled simply "Generic VGA," achieved a final score of 8,575 points, with a GPU score of 8,719 and a CPU score of 7,843, which is almost identical to WCCFTech's own comparison benchmarks for the GeForce RTX 2070 Founders Edition (8,901). The Vega 64 scored 7,427, which leads WCCFTech to believe this must be the Radeon RX 5700 XT. The result has since been removed from the 3DMark database, which further suggests it's for an unreleased product.

UL Releases PCI Express Feature Test For 3DMark Ahead of PCIe 4.0 Hardware

With PCI-Express 4.0 graphics cards and motherboards soon to arrive, UL has released its PCI Express feature test for 3DMark. This latest addition is designed to verify the bandwidth available to the GPU over a computer's PCI Express interface. To accomplish this, the test makes bandwidth the limiting factor for performance by uploading a large amount of vertex and texture data to the GPU for each frame; the goal is to transfer enough data over the PCIe 4.0 interface to thoroughly saturate it. Once the test is complete, the result is the average bandwidth achieved during the test.
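For reference, the theoretical one-directional bandwidth the test can approach is easy to compute from the per-lane transfer rate and the 128b/130b encoding that both gen 3 and gen 4 use. A quick sketch (our own helper for illustration, not UL's code):

```cpp
#include <cstdio>

// Theoretical one-directional PCIe bandwidth in GB/s.
// gtPerLane: transfer rate per lane in GT/s; gen 3 and gen 4 both use
// 128b/130b encoding, so 128/130 of the raw bits carry payload.
double BandwidthGBs(double gtPerLane, int lanes) {
    const double encoding = 128.0 / 130.0;
    return gtPerLane * lanes * encoding / 8.0;  // bits -> bytes
}

int main() {
    std::printf("PCIe 3.0 x16: %.1f GB/s\n", BandwidthGBs(8.0, 16));   // ~15.8
    std::printf("PCIe 4.0 x16: %.1f GB/s\n", BandwidthGBs(16.0, 16));  // ~31.5
    return 0;
}
```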

UL Announces New 3DMark Benchmarks for Testing PCIe Performance Across Generations

UL Benchmarks has announced that it will be introducing a new, comprehensive 3DMark test that aims to measure PCIe bandwidth across generations. Citing the introduction of PCIe 4.0 to the masses - soon available in the consumer market via AMD's Ryzen 3000 series release - UL wants users to be able to see what difference the added bandwidth makes in allowing for more complex games and scenarios that aren't data-constrained by PCIe 3.0.

The 3DMark PCIe Performance Test will be made available this summer, free for 3DMark Advanced Edition users and for 3DMark Professional Edition customers with a valid annual license.

G.SKILL DDR4 Memory Achieves DDR4-5886 and 23 Overclocking Records

G.SKILL International Enterprise Co., Ltd., the world's leading manufacturer of extreme performance memory and gaming peripherals, is excited to announce that 23 overclocking records in various benchmark categories were broken during the Computex 2019 time frame, including the world record for the fastest memory frequency, all using G.SKILL DDR4 memory kits built with high performance Samsung 8Gb components, the latest Intel processors, and high performance motherboards.

This week at the G.SKILL Computex booth, a new world record for fastest memory frequency was set by Toppc, a renowned professional extreme overclocker, reaching an incredible DDR4-5886 using Trident Z Royal memory on an MSI MPG Z390I GAMING EDGE AC motherboard and an Intel Core i9-9900K processor. At the end of Computex 2019, the top two results for fastest memory frequency were set by team MSI using an identical hardware setup.

AMD Announces 3rd Generation Ryzen Desktop Processors

AMD CEO Dr. Lisa Su, at her 2019 Computex keynote address, announced the 3rd generation Ryzen desktop processor family, which leverages the company's Zen 2 microarchitecture and is built on TSMC's 7 nm silicon fabrication process. Designed for the AM4 CPU socket, with backwards compatibility for older AMD 300-series and 400-series chipset motherboards, these processors are multi-chip modules of up to two 8-core "Zen 2" CPU chiplets and a 14 nm I/O controller die that packs the dual-channel DDR4 memory controller and PCI-Express gen 4.0 root complex, along with some SoC connectivity. AMD claims an IPC increase of 15 percent over Zen 1, and higher clock speeds leveraging 7 nm, which add up to significantly higher performance over the current generation. AMD also bolstered the core's FPU (floating-point unit) and doubled the cache sizes.

AMD unveiled three high-end SKUs for now: the $329 Ryzen 7 3700X, the $399 Ryzen 7 3800X, and the $499 Ryzen 9 3900X. The 3700X and 3800X are 8-core/16-thread parts with a single CPU chiplet. The 3700X is clocked at 3.60 GHz with a 4.40 GHz maximum boost frequency and just 65 W TDP, and is claimed to beat Intel's Core i7-9700K at both gaming and productivity. The 3800X tops that with 3.90 GHz nominal, 4.50 GHz boost, and 105 W TDP, and is claimed to beat the Core i9-9900K at gaming and productivity. AMD went a step further and launched the new Ryzen 9 brand with the 3900X, a 12-core/24-thread processor clocked at 3.80 GHz with 4.60 GHz boost, 70 MB of total cache, and 105 W TDP, with performance that not only beats the i9-9900K, but also the i9-9920X 12-core/24-thread HEDT processor, despite two fewer memory channels. AMD focused on gaming performance with Zen 2, with a wider FPU, improved branch prediction, and several micro-architectural improvements contributing to per-core performance that's higher than Intel's. The processors go on sale on July 7, 2019.

NVIDIA Extends DirectX Raytracing (DXR) Support to Many GeForce GTX GPUs

NVIDIA today announced that it is extending DXR (DirectX Raytracing) support to several GeForce GTX graphics models beyond its GeForce RTX series. These include the GTX 1660 Ti, GTX 1660, GTX 1080 Ti, GTX 1080, GTX 1070 Ti, GTX 1070, and GTX 1060 6 GB. The GTX 1060 3 GB and lower "Pascal" models don't support DXR, nor do older generations of NVIDIA GPUs. NVIDIA has implemented real-time raytracing on GPUs without specialized components such as RT cores or tensor cores by running the raytracing workload on shaders, in this case, CUDA cores. DXR support will be added through a new GeForce graphics driver later today.

The GPU's CUDA cores now have to calculate BVH traversal, intersection, reflection, and refraction. The GTX 16-series chips have an edge over "Pascal" despite lacking RT cores, as "Turing" CUDA cores support concurrent INT and FP execution, allowing more work to be done per clock. NVIDIA, in a detailed presentation, listed out the kinds of real-time ray-tracing effects enabled by the DXR API, namely reflections, shadows, advanced reflections and shadows, ambient occlusion, global illumination (unbaked), and combinations of these. The company put out detailed performance numbers for a selection of GTX 10-series and GTX 16-series GPUs, and compared them to RTX 20-series SKUs that have specialized hardware for DXR.
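From an application's point of view, nothing changes: a game or benchmark simply asks D3D12 whether raytracing is exposed, and with the new driver the listed GTX cards report Tier 1.0 just as RTX cards do. A minimal sketch of that capability check (illustrative, assuming an already-initialized device):

```cpp
#include <d3d12.h>

// Returns true if the device exposes DXR (raytracing tier >= 1.0).
// Whether it runs on RT cores (Turing RTX) or on plain CUDA cores /
// shaders (GTX 10/16 series) is transparent to the application.
bool SupportsDXR(ID3D12Device5* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &opts, sizeof(opts))))
        return false;
    return opts.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```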
Update: Article updated with additional test data from NVIDIA.

G.SKILL Announces OC World Cup 2019 Competition

G.SKILL International Enterprise Co., Ltd., the world's leading manufacturer of extreme performance memory and gaming peripherals, is excited to announce the 6th Annual OC World Cup 2019. The online qualifier stage will be held from March 13, 2019 until April 16, 2019 on hwbot.org. The top 9 winners of the online qualifier will qualify to join the live competition at the G.SKILL booth during Computex 2019 week, from May 29th to 31st, and compete for a share of the $25,000 USD cash prize pool.

With the participation of top overclockers from around the world and carefully designed rules, the G.SKILL OC World Cup is considered one of the most challenging overclocking competitions by professional overclockers. It consists of three rounds: Online Qualifier, Live Qualifier, and Grand Final. The top 9 winners of the Online Qualifier will receive eligibility to enter the Live Qualifier stage during Computex 2019 and demonstrate their finest LN2 extreme overclocking skills at the G.SKILL booth.

3DMark Adds NVIDIA DLSS Feature Performance Test to Port Royal

Did you see the NVIDIA keynote presentation at CES this year? For us, one of the highlights was the DLSS demo based on our 3DMark Port Royal ray tracing benchmark. Today, we're thrilled to announce that we've added this exciting new graphics technology to 3DMark in the form of a new NVIDIA DLSS feature test. This new test is available now in 3DMark Advanced and Professional Editions.

3DMark feature tests are specialized tests for specific technologies. The NVIDIA DLSS feature test helps you compare performance and image quality with and without DLSS processing. The test is based on the 3DMark Port Royal ray tracing benchmark. Like many games, Port Royal uses Temporal Anti-Aliasing. TAA is a popular, state-of-the-art technique, but it can result in blurring and the loss of fine detail. DLSS (Deep Learning Super Sampling) is an NVIDIA RTX technology that uses deep learning and AI to improve game performance while maintaining visual quality.

NVIDIA Releases GeForce 418.81 WHQL Software

NVIDIA today released its GeForce 418.81 WHQL software. The drivers add support for mobile versions of GeForce RTX 20-series GPUs. The desktop version adds optimization for the 3DMark Port Royal benchmark, in addition to its DLSS (deep learning super sampling) AA setting. The drivers add or improve NVIDIA SLI support for "Anthem," "Assetto Corsa Competizione," "Battlefleet Gothic: Armada 2," "Life is Strange 2," "NBA 2K19," and "Space Hulk: Tactics." CUDA version 10 is included with these drivers.

Among the issues fixed are HDR not being enabled by default in GameStream when an HDR display is connected to the client PC, 3D performance and frame-rate overlays accidentally appearing on the Twitter UWP app, and random flickering in games with G-SYNC enabled. Also fixed is a strange issue in which, when a G-SYNC display (one with NVIDIA G-SYNC hardware) is hotplugged while a G-SYNC Compatible (read: VESA Adaptive-Sync) display is connected, the right half of the G-SYNC display goes blank. Grab the drivers from the link below.
DOWNLOAD: NVIDIA GeForce 418.81 WHQL

UL Corporation Announces 3DMark Port Royal Raytracing Suite is Now Available - Benchmark Mode On!

Perhaps gliding through the tech-infused CES week, UL Corporation has just announced that the much-expected Port Royal, the world's first dedicated real-time ray tracing benchmark for gamers, is now available. Port Royal uses DirectX Raytracing to enhance reflections, shadows, and other effects that are difficult to achieve with traditional rendering techniques. It enables performance benchmarking for cutthroat competition throughout the internet (and our own TPU forums, of course), but is also an example of what to expect from ray tracing in upcoming games: ray-tracing effects running in real time at reasonable frame rates at 2560 × 1440 resolution.

3DFX's Canceled Rampage GPU Pictured, Put Through Its Paces in 3DMark 2001

3DFX is a well-known name for most in our community, I'd wager (I don't have the data to back me up on that, but bear with me). The company is one of the highest-profile defunct companies that vied for a place in the desktop, high-performance GPU space back in the day, bringing its guns to bear on NVIDIA and then ATI. The Rampage was the last GPU ever developed by the company, and looked to compete with NVIDIA's GeForce3. It never saw the light of day, though, with the company shutting its doors before development could reach market release.

DSOGaming has some images of some of the Rampage GPUs that survived 3DFX's closure, though, and the graphics card is shown running Max Payne, Unreal Tournament, and 3DMark 2001. For those of you who ever had a 3DFX graphics card, these should take you right down memory lane. Enjoy.

3DMark Port Royal Ray-tracing Benchmark Release Date and Pricing Revealed

UL Benchmarks released more information on the pricing and availability of the upcoming addition to its 3DMark benchmark suite, named "Port Royal." The company revealed that the benchmark will officially launch on January 8, 2019. The Port Royal upgrade will cost existing 3DMark paid (Advanced and Professional) users USD $2.99. 3DMark Advanced purchased from January 8th onward at $29.99 will include Port Royal. 3DMark Port Royal is an extreme-segment 3D graphics benchmark leveraging DirectX 12 and DirectX Raytracing (DXR). UL Benchmarks stated that Port Royal was developed with input from industry giants including NVIDIA, AMD, Intel, and Microsoft.

UL Benchmarks Unveils 3DMark "Port Royal" Ray-tracing Benchmark

Port Royal is the name of the latest component of UL Benchmarks' 3DMark. Designed to take advantage of the DirectX Raytracing (DXR) API, this benchmark features an extreme poly-count test scene with real-time ray-traced elements. Screengrabs of the benchmark depict spacecraft entering and leaving mirrored spheres suspended within a planet's atmosphere, which appear to be docks. It's also a shout-out to a number of space sims such as "Star Citizen," which could up their production values in the future by introducing ray tracing. The benchmark will debut at the GALAX GOC Grand Final on December 8, where the first public run will be powered by a GALAX GeForce RTX 2080 Ti HOF graphics card. It will go on sale in January 2019.

Latest 3DMark Update adds Night Raid DX12 Benchmark for Integrated Graphics

With update 2.6.6174, released today, 3DMark now includes a new benchmark dubbed Night Raid. This latest addition to the popular 3DMark suite offers DX12 performance testing for laptops, tablets, and other devices with integrated graphics. It also offers full support for ARM-based processors in the latest always-connected PCs running Microsoft's Windows 10 on ARM. Users running the free 3DMark Basic Edition will have access to this latest addition upon installing the update.

The Night Raid benchmark continues the trend of offering two graphics tests and a CPU test. While not as visually stunning as previous entries, this is to be expected considering it is targeted at integrated graphics processors and entry-level systems. Even so, it makes use of numerous graphical features: graphics test 1 includes dynamic reflections, ambient occlusion, and deferred rendering, while graphics test 2 features tessellation, complex particle systems, and depth-of-field effects with forward rendering. Finally, the CPU test measures performance through a combination of physics simulation, occlusion culling, and procedural generation.

UL Benchmarks Kicks Huawei Devices from its Database over Cheating

UL Benchmarks de-listed several popular Huawei devices from its database over proof of cheating in its benchmarks. Over the past month, it was found that several of Huawei's devices, such as the P20 Pro, Nova 3, and Honor Play, overclocked their SoCs while ignoring all power and thermal limits to achieve high benchmark scores, whenever they detected that a popular benchmark such as 3DMark was being run. To bust this, UL Benchmarks tested the three devices with "cloaked" benchmarks, or "private benchmarks" as it calls them. These apps are identical in almost every way to 3DMark, but lack the identification or branding that lets Huawei devices know when to overclock themselves to cheat the test.

The results were startling. When a device has no way of telling that 3DMark is being run, it chugs along at its "normal" speed, with scores 35% to 36% lower. The rules that device manufacturers agree to when advertising UL's 3DMark scores explicitly state that a device must not detect the benchmark app and optimize its hardware on the fly to ace the test. Huawei responded to UL by stating that it will unlock a new "performance mode" for users that lets them elevate their SoCs to the same high clocks in any application.
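Schematically, the prohibited behavior boils down to keying clock policy off the foreground app's identity rather than its workload. A hypothetical sketch of that kind of logic (purely illustrative, not Huawei's actual firmware; the package name is just an example):

```cpp
#include <string>
#include <unordered_set>

// Hypothetical benchmark-detection logic of the kind UL's rules forbid:
// clock/thermal policy must depend on load, not on *which* app is running.
bool ShouldLiftPowerLimits(const std::string& foregroundPackage) {
    static const std::unordered_set<std::string> knownBenchmarks = {
        "com.futuremark.dmandroid.application",  // 3DMark (example name)
    };
    // A "private" build of the same benchmark ships under a different
    // package name, so this lookup fails, the SoC stays at normal clocks,
    // and the 35-36% score gap UL measured becomes visible.
    return knownBenchmarks.count(foregroundPackage) != 0;
}
```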

Intel Core i9-9900K 3DMark Numbers Emerge: Beats Ryzen 7 2700X

Some of the first benchmark numbers of Intel's upcoming 8-core/16-thread socket LGA1151 processor, the Core i9-9900K, have surfaced, courtesy of Thai professional overclocker TUM_APISAK. A 3DMark database submission sees the processor score 10,719 points in the CPU test, with an overall score of 9,862 points, when paired with a GeForce GTX 1080 Ti graphics card. According to WCCFTech, the CPU score is about 2,500 points higher than the 6-core/12-thread Core i7-8700K, and about 1,500 points higher than the 8-core/16-thread AMD Ryzen 7 2700X. The tested processor features 8 cores, 16 threads, a nominal clock of 3.10 GHz, and a boost frequency of 5.00 GHz, as measured by 3DMark's internal SysInfo module. Intel is expected to launch the Core i9-9900K on August 1, 2018.

Save 75% on 3DMark, PCMark 10, and VRMark on Steam This Week

It's a big day for us. Today, we are looking back on 20 years as Futuremark and looking forward to our first day as UL, a name we now share with our parent company. To mark the beginning of this new chapter in our story, I'm happy to announce that we are running a week-long sale on Steam. From now until next Monday, you can save 75% on our 3DMark, VRMark, and PCMark 10 benchmarks. Or save even more when you buy them together in our benchmark bundle.

Futuremark Sets a Date for UL Rebranding

Futuremark set a date for its re-branding to align with its parent company UL. April 23, 2018 is when Futuremark products and services will be sold under the new branding scheme. Futuremark became part of UL in 2014. UL is an independent, global company with more than 10,000 professionals in 40 countries. UL offers a wide range of testing, inspection, auditing, and certification services and solutions to help customers, purchasers and policymakers manage risk and complexity in modern markets. A set of FAQs associated with the change are answered below.

AMD's RX 560X Leaked in 3DMark - RX 500X Series Just a New OEM-Exclusive Rebadge

News of an upcoming AMD RX 500X series spread like wildfire yesterday, as users and publications alike tried to quench a thirst for more GPU solutions from the red camp. However, if recently leaked numbers and information from 3DMark prove correct, it seems the RX 500X series will likely bring some disappointment to those who were hoping for a refined, Vega-imbued redesign of AMD's Polaris, or simply a push for better clocks and power/performance ratios on a new manufacturing process.