News Posts matching #3DMark


GIGABYTE AORUS RTX 5080 MASTER's Optional Fourth Fan Lowers Temps by 2 °C

The new ASUS ROG Astral GeForce RTX 5090 and RTX 5080 designs have attracted a lot of attention due to an unusual cooling configuration that includes a backplate-mounted fourth fan. At CES 2025, MSI teased onlookers with a placard adorned with a GeForce RTX 5090 32G "Lightning" Special Edition model—featuring a "FiveFrozr" cooling solution: a traditional triple-fan setup sits in the expected shroud location, while two additional units are integrated into the card's backplate. According to recent reports and early reviews, GIGABYTE has deployed a somewhat related system, albeit as an entirely optional add-on. The Taiwanese manufacturer sent its AORUS RTX 5090 and 5080 MASTER models to market last week. These premium card designs feature the company's new "Screen Cooling Plus" system; CES press material claims that the "extra air-boosting fan" grants more airflow.

GIGABYTE's fourth fan has flown under the radar, but major hardware news outlets have just picked up on initial impressions. Singapore's HardwareZone appreciated the inclusion of an optional extra—with their AORUS RTX 5080 MASTER sample—but criticized GIGABYTE's slightly undercooked implementation. Their reviewer did not evaluate whether the modular part actually reduced temperatures—instead, they opined: "to further improve cooling, the card also comes bundled with a separate 120 mm RGB fan that you can place on the back of the card to pull air out—a design reminiscent of the ROG Astral RTX 5080's built-in cooling solution. It's a practical touch but not an elegant one, as it means having to deal with additional cables to tidy up since—oddly enough—the card itself does not come with a power connector for the extra fan." GLITCHED.online, a South African tech site, took GIGABYTE's AORUS RTX 5080 MASTER ICE card for a test drive—they found that the extra bit of cooling potential made a difference, but it was "almost unnoticeable." We hope that GIGABYTE will send review samples to TPU's W1zzard in the near future. Will the fourth fan make any difference on the AORUS RTX 5090 MASTER model?

UL Solutions Adds Support for DLSS 4 and DLSS Multi Frame Generation to the 3DMark NVIDIA DLSS Feature Test

We're excited to announce that in today's update to 3DMark, we're adding support for DLSS 4 and DLSS Multi Frame Generation to the NVIDIA DLSS feature test. The NVIDIA DLSS feature test and this update were developed in partnership with NVIDIA. The 3DMark NVIDIA DLSS feature test lets you compare the performance and image quality changes brought by enabling DLSS processing. If you have a new GeForce RTX 50 Series GPU, you'll also be able to compare performance with and without the full capabilities of DLSS 4.

You can choose to run the NVIDIA DLSS feature test using DLSS 4, DLSS 3 or DLSS 2. DLSS 4 includes the new DLSS Multi Frame Generation feature, and you can choose between several image quality modes—Quality, Balanced, Performance, Ultra Performance and DLAA. These modes are designed for different resolutions, from Full HD up to 8K. DLSS Multi Frame Generation uses AI to boost frame rates, generating up to three additional frames per traditionally rendered frame. In the 3DMark NVIDIA DLSS feature test, you can choose between the 2x, 3x and 4x Frame Generation settings if you have an NVIDIA GeForce RTX 50 series GPU.
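As a rough illustration of the frame-multiplication arithmetic (an upper bound only—this is not a model of the actual DLSS pipeline, and real-world generation overhead lowers the rendered rate):

```python
def mfg_output_fps(rendered_fps: float, factor: int) -> float:
    """Upper-bound output frame rate with Multi Frame Generation.

    factor = 2, 3, or 4 corresponds to the 2x/3x/4x settings: each
    traditionally rendered frame is followed by (factor - 1) AI-generated
    frames, so the displayed frame rate scales by the factor (ignoring
    the cost of generating the extra frames).
    """
    if factor not in (2, 3, 4):
        raise ValueError("3DMark exposes 2x, 3x and 4x settings")
    return rendered_fps * factor

print(mfg_output_fps(60.0, 4))  # → 240.0
```

In practice the measured uplift in the feature test will land below this ceiling, which is exactly the gap the benchmark is designed to quantify.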

NVIDIA GeForce RTX 5090 3DMark Performance Reveals Impressive Improvements

The RTX 50-series gaming GPUs have the gaming community divided. While some appreciate the DLSS 4 and MFG technologies driving impressive FPS improvements through AI wizardry, others are disappointed by the seemingly modest gains in raw performance. For instance, when DLSS and MFG are taken out of the equation, the RTX 5090, RTX 5080, and RTX 5070 are around 33%, 15%, and 20% faster than their respective predecessors in gaming performance. That said, VideoCardz has tapped its sources and revealed 3DMark scores for the RTX 5090, and the results certainly appear to exceed expectations.

In the non-ray traced Steel Nomad test at 4K, the RTX 5090 managed to score around 14,133 points, putting it roughly 53% ahead of its predecessor. In the Port Royal test, which does utilize ray tracing, the RTX 5090 raked in 36,667 points—a 40% improvement over the RTX 4090. The results are much the same in the older Time Spy and Fire Strike tests, indicating roughly a 31% and 38% jump in performance respectively. Moreover, according to the benchmarks, the RTX 5090 appears to be roughly twice as powerful as the RTX 4080 Super. Of course, synthetic benchmarks do not entirely dictate gaming performance, and VideoCardz clearly mentions that gaming performance (without MFG) will see a substantially more modest improvement. There is no denying that Blackwell's vastly superior memory bandwidth helps a lot in the synthetic tests, with the 33% extra shaders doing the rest of the work.
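For readers who want to sanity-check the quoted uplifts, a quick back-of-envelope script; note the RTX 4090 baselines below are derived from the article's percentages, not independently sourced scores:

```python
def pct_uplift(new: float, old: float) -> float:
    """Percentage improvement of `new` over `old`."""
    return (new / old - 1) * 100

# Leaked RTX 5090 scores quoted above.
steel_nomad_5090 = 14133
port_royal_5090 = 36667

# Baselines implied by the quoted uplifts (illustrative, not measured):
rtx4090_steel_nomad = steel_nomad_5090 / 1.53  # "53% ahead" → ~9,237
rtx4090_port_royal = port_royal_5090 / 1.40    # "40% improvement" → ~26,191

print(round(pct_uplift(steel_nomad_5090, rtx4090_steel_nomad)))  # → 53
print(round(rtx4090_port_royal))  # → 26191
```

The same helper applies to the Time Spy and Fire Strike deltas, given any pair of scores.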

AMD Radeon RX 9070 XT Alleged Benchmark Leaks, Underwhelming Performance

Recent benchmark leaks suggest that AMD's upcoming Radeon RX 9070 XT graphics card may not deliver the groundbreaking performance enthusiasts initially hoped for. According to leaked 3DMark Time Spy results shared by hardware leaker @All_The_Watts, the RDNA 4-based GPU achieved a graphics score of 22,894 points. The benchmark results indicate that the RX 9070 XT performs only marginally better than AMD's current RX 7900 GRE, showing a mere 2% improvement. It falls significantly behind the RX 7900 XT, which maintains almost a 17% performance advantage over the new card. These findings contradict earlier speculation that suggested the RX 9070 XT would compete directly with NVIDIA's RTX 4080.
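Working backwards from the quoted deltas gives the implied comparison scores; a small sketch where the RX 7900 GRE and RX 7900 XT figures are derived from the percentages above, not from separately published results:

```python
def pct_diff(a: float, b: float) -> float:
    """Percentage by which score `a` exceeds score `b`."""
    return (a / b - 1) * 100

rx9070xt = 22894  # leaked Time Spy graphics score

# Baselines implied by the article's deltas (illustrative only):
rx7900gre = rx9070xt / 1.02  # 2% behind the 9070 XT → ~22,445
rx7900xt = rx9070xt * 1.17   # ~17% ahead of the 9070 XT → ~26,786

print(round(pct_diff(rx9070xt, rx7900gre), 1))  # → 2.0
print(round(rx7900xt))  # → 26786
```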

However, synthetic benchmarks tell only part of the story. The GPU's real-world gaming performance remains to be seen, and rumors indicate that the RX 9070 XT may offer significantly improved ray tracing capabilities compared to its RX 7000 series predecessors. This could be crucial for market competitiveness, particularly given the strong ray tracing performance of NVIDIA's RTX 40 and upcoming RTX 50 series cards. The success of the RX 9070 XT depends on how well it can differentiate itself through features like ray tracing while maintaining an attractive price-to-performance ratio in an increasingly competitive GPU market. These scores are unlikely to be the final word in the RDNA 4 story; we must wait and see what AMD delivers at CES, and third-party reviews and benchmarks will render the final verdict once RDNA 4 reaches the market.

UL Adds New DirectStorage Test to 3DMark

Today we're excited to launch the 3DMark DirectStorage feature test. This feature test is a free update for the 3DMark Storage Benchmark DLC. The 3DMark DirectStorage feature test helps gamers understand the potential performance benefits that Microsoft's DirectStorage technology could have for their PC's gaming performance.

DirectStorage is a Microsoft technology for Windows PCs with PCIe SSDs that reduces the overhead when loading game data. DirectStorage can be used to reduce game loading times when paired with other technologies such as GDeflate, where the GPU can be used to decompress certain game assets instead of the CPU. On systems running Windows 11, DirectStorage can bring further benefits with BypassIO, lowering a game's CPU overhead by reducing the CPU workload when transferring data.

GDDR6 GeForce RTX 4070 Tested, Loses 0-1% Performance Against RTX 4070 with GDDR6X

NVIDIA quietly released a variant of the GeForce RTX 4070 featuring slower 20 Gbps GDDR6 memory, replacing the 21 Gbps GDDR6X that the original RTX 4070 ships with. Wccftech obtained a GALAX-branded RTX 4070 GDDR6 and put it through benchmarks focused on comparing it to a regular RTX 4070. Memory type and speed are the only spec changes; the core configuration and GPU clock speeds are untouched. Wccftech's testing shows that the RTX 4070 GDDR6 measures within 0-1% of the RTX 4070 (GDDR6X) at 1080p and 1440p resolutions, while the difference between the two is about 2% at 4K Ultra HD.
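The small performance gap tracks the small bandwidth gap. A quick calculation, assuming the RTX 4070's standard 192-bit memory bus on both variants:

```python
def bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate times bus width in bytes."""
    return data_rate_gbps * bus_width_bits / 8

gddr6x = bandwidth_gbs(21, 192)  # original RTX 4070 (GDDR6X)
gddr6 = bandwidth_gbs(20, 192)   # new GDDR6 variant

print(gddr6x, gddr6)  # → 504.0 480.0
print(round((1 - gddr6 / gddr6x) * 100, 1))  # bandwidth deficit → 4.8
```

A ~4.8% bandwidth cut translating into a 0-2% frame-rate loss suggests the RTX 4070 is rarely bandwidth-bound below 4K.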

Wccftech's test-bed is comprehensive, with 27 game tests, each across 3 resolutions, plus 7 synthetic tests. The synthetic tests are mainly part of the 3DMark suite, including Speed Way, Fire Strike, Time Spy, Port Royal, and their presets. Here, the RTX 4070 GDDR6 is nearly identical in performance, with a 0-0.2% delta against the RTX 4070 GDDR6X. In the game tests, performance varies by resolution: at 1080p the delta is 0-1%, with the only noteworthy outliers being "Metro Exodus" (extreme preset), where the RTX 4070 GDDR6 loses 4.2%, and "Alan Wake 2," where it loses 2.3%.

Intel Ships 0x129 Microcode Update for 13th and 14th Generation Processors with Stability Issues

Intel has officially started shipping the "0x129" microcode update for its 13th and 14th generation "Raptor Lake" and "Raptor Lake Refresh" processors. This critical update is currently being pushed to all OEM/ODM partners to address the stability issues that Intel's processors have been facing. According to Intel, this microcode update fixes "incorrect voltage requests to the processor that are causing elevated operating voltage." Intel's analysis shows that the root cause of the stability problems is excessive voltage during operation of the processor. These voltage excursions cause degradation that raises the minimum voltage required for stable operation. Intel calls this minimum "Vmin"—it is a theoretical construct rather than a single measured voltage, much like the minimum speed an airplane needs to stay airborne. The latest 0x129 microcode patch limits the processor's voltage requests to no higher than 1.55 V, which should prevent further degradation. Overclocking is still supported; enthusiasts will have to disable the eTVB setting in their BIOS to push the processor beyond the 1.55 V threshold. The company's internal testing shows that the new default settings with limited voltages have minimal performance impact within standard run-to-run variation, with only a single game (Hitman 3: Dartmoor) showing degradation. For a full statement from Intel, see the quote below.

Colorful Presents iGame Lab Project: Highest-Performance GeForce RTX 4090 GPUs Limited to 300 Pieces, OC'd to 3.8 GHz

At Computex 2024, Colorful launched an ultra-exclusive new graphics card—the iGame Lab 4090. This limited edition GPU is squarely targeted at hardcore overclockers and performance enthusiasts willing to pay top dollar for the absolute best. With only 300 units produced globally, the iGame Lab 4090 represents the pinnacle of Colorful's engineering efforts. Each chip was hand-selected from thousands after rigorous binning to ensure premium silicon capable of extreme overclocks. The card's striking aesthetics feature a clean white shroud with silver accent armor. Beyond the intricate design, the real draw is performance. The iGame Lab 4090 has already shattered records, with professional overclocker CENs pushing it past 3.8 GHz under 3D load, setting a new 3DMark Time Spy Extreme world record of 24,103 points. Out of the box, the card features a base clock of 2235 MHz, a boost clock of 2520 MHz, and a turbo mode of 2625 MHz, all in a 3-slot design.

AMD Ryzen 7 8700G Loves Memory Overclocking, which Vastly Favors its iGPU Performance

Entry level discrete GPUs are in trouble, as the first reviews of the AMD Ryzen 7 8700G desktop APU show that its iGPU is capable of beating the discrete GeForce GTX 1650, which means it should also beat the Radeon RX 6500 XT that offers comparable performance. Based on the 4 nm "Hawk Point" monolithic silicon, the 8700G packs the powerful Radeon 780M iGPU based on the latest RDNA3 graphics architecture, with as many as 12 compute units, worth 768 stream processors, 48 TMUs, and an impressive 32 ROPs; and full support for the DirectX 12 Ultimate API requirements, including ray tracing. A review by a Chinese tech publication on BiliBili showed that it's possible for an overclocked 8700G to beat a discrete GTX 1650 in 3DMark TimeSpy.

It's important to note here that both the iGPU engine clock and the APU's memory frequency were increased. The reviewer set the iGPU engine clock to 3400 MHz, up from its 2900 MHz reference speed. It turns out that, much like its predecessor, the 5700G "Cezanne," the new 8700G "Hawk Point" features a more capable memory controller than its chiplet-based counterpart (in this case the Ryzen 7000 "Raphael"). The reviewer succeeded in a DDR5-8400 memory overclock. The combination of the two resulted in a 17% increase in the Time Spy score over stock speeds, which is how the chip manages to beat the discrete GTX 1650 (comparable performance to the RX 6500 XT at 1080p).
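As a back-of-envelope check on why memory overclocking matters so much for an iGPU that shares system RAM, peak dual-channel DDR5 bandwidth scales linearly with the transfer rate. The DDR5-5200 baseline below is the 8700G's official memory spec; the figures are illustrative:

```python
def ddr5_bandwidth_gbs(mt_s: int, channels: int = 2) -> float:
    """Peak DDR5 bandwidth in GB/s: transfers/s × 8 bytes per 64-bit channel."""
    return mt_s * 8 * channels / 1000

print(ddr5_bandwidth_gbs(5200))  # stock DDR5-5200 → 83.2
print(ddr5_bandwidth_gbs(8400))  # reviewer's overclock → 134.4
print(round(3400 / 2900 * 100 - 100, 1))  # iGPU clock uplift in % → 17.2
```

With roughly 60% more memory bandwidth feeding a 17% higher-clocked iGPU, the reported 17% Time Spy gain suggests the engine clock, kept fed by the faster memory, is what sets the ceiling.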

UL Solutions Previews Upcoming 3DMark Steel Nomad Benchmark

Thank you to the 3DMark community - the gamers, overclockers, hardware reviewers, tech-heads and those in the industry using our benchmarks, who have joined us in discovering what the cutting edge of PC hardware can do over this last quarter of a century. Looking back, it's amazing how far graphics have come, and we're very excited to see what the next 25 years bring.

After looking back, it's time to share a sneak peek of what's coming next. Here are some preview screenshots for 3DMark Steel Nomad, our successor to 3DMark Time Spy. It's been more than seven years since we launched Time Spy, and after more than 42 million submitted results, we think it's time for a new heavy non-ray tracing benchmark. Steel Nomad will be our most demanding non-ray tracing benchmark and will not only support Windows using DirectX 12, but also macOS and iOS using Metal, Android using Vulkan, and Linux using Vulkan for Enterprise and reviewers. To celebrate 3DMark's 25th year, the scene will feature some callbacks to many of our previous benchmarks. We hope you have fun finding them all!

UL Solutions Launches 3DMark Solar Bay, New Cross-Platform Ray Tracing Benchmark

We're excited to announce the launch of 3DMark Solar Bay, a new cross-platform benchmark for testing ray traced graphics performance on Windows PCs and high-end Android devices. This benchmark measures games-related graphics performance by rendering a demanding, ray-traced scene in real time. Solar Bay is available now for Android on the Google Play Store and for Windows on Steam, Epic Games or directly from UL Solutions.

Compare ray tracing performance across platforms
Ray tracing is the showcase technology for Solar Bay, simulating real-time reflections. Compared to traditional rasterization, ray-traced scenes produce far more realistic lighting. While dedicated desktop and laptop graphics processing units (GPUs) have supported ray tracing for several years, it's only recently that integrated GPUs and Android devices have been capable of running real-time ray-traced games at frame rates acceptable to gamers.

Curious MSI GeForce RTX 3080 Ti 20 GB Card pops up on FB Marketplace

An unusual MSI RTX 3080 Ti SUPRIM X graphics card is up for sale, second hand, on Facebook Marketplace—the Sydney, Australia-based seller is advertising this component as a truly custom model with a non-standard allocation of VRAM: "Yes this is 20 GB not 12 GB." The used item is said to be in "good condition" with its product description elaborating on a bit of history: "There are some scuff marks from the previous owner, but the card works fine. It is an extremely rare collector's item, due to NVIDIA cancelling these variants a month before release. This is not an engineering sample card—this was a finished OEM product that got cancelled, unfortunately." The seller is seeking AU$1,100 (~$740 USD), after a reduction from the original asking price of AU$1,300 (~$870 USD).

MSI and Gigabyte were reportedly on the verge of launching GeForce RTX 3080 Ti 20 GB variants two years ago, but NVIDIA had a change of heart (probably due to concerns about costs and production volumes) and decided to stick with a public release of the standard 12 GB GPU. Affected AIBs chose not to destroy their stock of 20 GB cards—these were instead sold to crypto miners and shady retailers. Wccftech points out that mining-oriented units have identifying marks on their I/O ports.

Leaked AMD Radeon RX 7700 & RX 7800 GPU Benchmarks Emerge

A set of intriguing 3DMark Time Spy benchmark results have been released by hardware leaker All_The_Watts!!—these are alleged to have been produced by prototype Radeon RX 7700 and Radeon RX 7800 graphics cards (rumored to be based on variants of the Navi 32 GPU). The current RDNA 3 lineup of mainstream GPUs is severely lacking in middle ground representation, but Team Red is reported to be working on a number of models to fill in the gap. We expect a number of leaks to emerge as we get closer to a rumored product reveal scheduled for late August (to coincide with Gamescom).

The recently released 3DMark Time Spy scores reveal that the alleged Radeon RX 7700 candidate scored 15,465 points, while the RX 7800 achieved 18,197 points—both running on an unspecified test system. The results (refer to the Tom's Hardware-produced chart placed below) are not going to generate a lot of excitement at this stage when compared to predecessors and some of the competition—evaluation samples are not really expected to be optimized to a great degree. We hope to see finalized products with decent drivers putting in a good appearance and performing better later on this year.

AMD Radeon RX 7600 Slides Down to $249

The AMD Radeon RX 7600 mainstream graphics card slides a little closer to its ideal price, with an online retailer price-cut sending it down to $249, about $20 less than its MSRP of $269. The cheapest RX 7600 graphics card in the market right now is the MSI RX 7600 MECH 2X Classic, going for $249 on Amazon; followed by the XFX RX 7600 SWFT 210 at $258, and the ASRock RX 7600 Challenger at $259.99.

The sliding prices of the RX 7600 should improve its prospects against the upcoming NVIDIA GeForce RTX 4060, which leaked 3DMark benchmarks show to be around 17% faster than the previous-generation RTX 3060 (12 GB) and 30% faster than its 8 GB variant. Our real-world testing puts the RX 7600 about 15% faster than the RTX 3060 (12 GB) at 1080p, which means there could be an interesting square-off between the RTX 4060 and RX 7600. NVIDIA has announced $299 as the baseline price for the RTX 4060, which should put pressure on AMD partners to trim prices of the RX 7600 to below the $250-mark.
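Mixing the leaked 3DMark figure for the RTX 4060 with TPU's real-world RX 7600 number is an apples-to-oranges comparison, but as a rough performance-per-dollar sketch (both cards expressed relative to the RTX 3060 12 GB, prices as quoted above):

```python
def perf_per_dollar(perf_vs_3060: float, price: float) -> float:
    """Relative performance (RTX 3060 12 GB = 1.0) divided by street price."""
    return perf_vs_3060 / price

rx7600 = perf_per_dollar(1.15, 249)   # ~15% faster than the 3060, per TPU testing
rtx4060 = perf_per_dollar(1.17, 299)  # ~17% faster, per the leaked 3DMark numbers

print(round(rx7600 / rtx4060, 2))  # RX 7600's value advantage → 1.18
```

On these rough numbers the RX 7600 holds about an 18% value edge at $249, which is why NVIDIA's $299 baseline puts pressure on AMD partners to dip below $250.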

3DMark Now Available on Epic Games Store

We're excited to announce that 3DMark is now also available for purchase in the Epic Games Store from today, June 20, 2023. 3DMark is a computer benchmarking tool for gamers, overclockers and system builders who want to get more out of their hardware. With its wide range of benchmarks, tests and features, 3DMark has everything you need to test the performance of your gaming PC.

3DMark purchased through the Epic Games Store includes all current 3DMark GPU and CPU benchmarks released since the application's launch over a decade ago. Our latest GPU benchmark, Speed Way, tests ray-traced gaming performance using the latest DirectX 12 Ultimate API for Windows 10 and Windows 11. 3DMark offers more than just benchmarking tools. Test your system stability with stress tests, explore how new engine technologies affect visuals and performance with interactive mode, or compete for top PC performance with your friends and the 3DMark community as you chase a spot in the 3DMark Hall of Fame.

NVIDIA H100 Hopper GPU Tested for Gaming, Slower Than Integrated GPU

NVIDIA's H100 Hopper GPU is a device designed for pure AI and other compute workloads, with the least amount of consideration for gaming workloads that involve graphics processing. However, it is still interesting to see how this 30,000 USD GPU fares in comparison to other gaming GPUs, and whether it is even possible to run games on it. It turns out that it is technically feasible but makes little sense, as the Chinese YouTube channel Geekerwan notes. Based on the GH100 silicon with 14,592 CUDA cores, the H100 PCIe version tested here can achieve 204.9 TeraFLOPS at FP16, 51.22 TeraFLOPS at FP32, and 25.61 TeraFLOPS at FP64, with its natural strength lying in accelerating AI workloads.

However, how does it fare in gaming benchmarks? Not very well, as the testing shows. It scored 2,681 points in 3DMark Time Spy, which is lower than AMD's integrated Radeon 680M, which managed 2,710 points. Interestingly, the GH100 has only 24 ROPs (render output units), while the gaming-oriented GA102 (NVIDIA's highest-end gaming GPU silicon of the Ampere generation) has 112 ROPs. That shortage of raster hardware goes a long way toward explaining why the H100 GPU is used for compute only. Since it doesn't have any display outputs, the system needed another regular GPU to provide the picture, while the computation happened on the H100 GPU.

3DMark Gets AMD FidelityFX Super Resolution 2 (FSR 2) Feature Test

UL Benchmarks today released an update to 3DMark that adds a Feature Test for AMD FidelityFX Super Resolution 2 (FSR 2), the company's popular upscaling-based performance enhancement. This was long overdue, as 3DMark has had a Feature Test for DLSS for years now; and as of October 2022, it even got one for Intel XeSS. The new FSR 2 Feature Test uses a scene from the Speed Way DirectX 12 Ultimate benchmark, where it compares fine details of a vehicle and a technic droid between native resolution with TAA and FSR 2, and highlights the performance uplift. To use the feature test, you'll need a GPU that supports DirectX 12 and FSR 2 (that covers AMD, NVIDIA, and Intel Arc). Owners who purchased 3DMark before October 12, 2022 will need to buy the Speed Way upgrade to unlock the AMD FSR 2 feature test.

Intel Xeon W9-3495X Can Pull up to 1,900 Watts in Extreme OC Scenarios

Intel's latest Xeon processors based on the Sapphire Rapids microarchitecture have arrived in the hands of overclockers. Last week, we reported that the Intel Xeon W9-3495X is officially a world record holder for achieving the best scores in Cinebench R23 and R20, Y-Cruncher, the 3DMark CPU test, and Geekbench 3. However, today we have another extreme overclocking attempt at the world record, with a few more details about power consumption and what the new SKU is capable of. Elmor, an overclocker working with ASUS, tried to break the world record and pushed the Intel Xeon W9-3495X to 5.5 GHz on all 56 cores. What is more impressive is the power that the processor can consume.

With a system powered by two Super Flower Leadex 1,600 Watt power supply units, the CPU consumed almost 1,900 Watts from the wall. To cool this heat output, liquid nitrogen was used, keeping the CPU at a chilly negative 95 degrees Celsius. The motherboard of choice for this attempt was the ASUS Pro WS W790E-SAGE SE, paired with eight G.SKILL Zeta R5 DDR5 R-DIMM modules. The results were impressive, as the CPU achieved 132,220 points in Cinebench R23. However, last week's world record remains intact, as Elmor's result is slightly behind the 132,484-point score set then. Check the video below for more info.

3% of AMD Radeon Users May Experience Unusually Low 3DMark TimeSpy Performance, Driver Fix Underway

About 3% of AMD Radeon graphics card users may experience lower than usual 3DMark TimeSpy performance, says UL Benchmarks, developer of the 3DMark graphics benchmark suite. The issue came to light when a Google developer noticed that his RX 7900 XTX exhibited lower than expected performance in TimeSpy, and took it up with UL. While the 3DMark developer hasn't been able to reproduce the issue on its end, it mentions that AMD is aware of it, has had more luck reproducing it, and is working on a driver-level fix. For now, UL offers no solution other than rolling back to older driver versions and testing again.

Intel Xeon W9-3495X Sets World Record, Dethrones AMD Threadripper

When Intel announced its 4th generation Xeon-W processors, the company noted that the clock multiplier was left unlocked, available for overclockers to try and push these chips even harder. It was only a matter of time before we saw the top-end Xeon-W SKU take a shot at the Cinebench R23 world record. The Intel Xeon W9-3495X is officially the world record holder with a score of 132,484 points in Cinebench R23. The overclocker OGS from Greece managed to push all 56 cores and 112 threads of the CPU to a 5.4 GHz clock frequency using a liquid nitrogen (LN2) cooling setup. Using an ASUS Pro WS W790E-SAGE SE motherboard and a G.SKILL Zeta R5 RAM kit, the OC record was set on March 8th.

The previous record holder was AMD's Threadripper Pro 5995WX with 64 cores and 128 threads clocked at 5.4 GHz. Not only did the Xeon W9-3495X set the Cinebench R23 record, but the SKU also set new records in Cinebench R20, Y-Cruncher, the 3DMark CPU test, and Geekbench 3.

Alleged NVIDIA AD106 GPU Tested in 3DMark and AIDA64

Benchmarks and specifications of an alleged NVIDIA AD106 GPU have turned up on Chiphell, although the original poster has since removed all the details. Thanks to @harukaze5719 on Twitter, who reposted the details, we still get an insight into what we might be able to expect from NVIDIA's upcoming mid-range cards. All these details should be taken with a grain of salt, as the original source isn't exactly what we'd call trustworthy. Based on the data in the TPU GPU database, the GPU in question should be the GeForce RTX 4070 Mobile with much higher clock speeds, or an equivalent desktop part that offers more CUDA cores than the RTX 4060 Ti. Whatever the specific AD106 GPU is, it's being compared to the GeForce RTX 2080 Super and the RTX 3070 Ti.

The GPU was tested in AIDA64 and 3DMark, and it beats the RTX 2080 Super in all of the tests while drawing some 55 W less power. Some of the wins are within the margin of testing error, for example the memory performance in AIDA64. That result is noteworthy because the AD106 GPU has only a 128-bit memory bus, compared to 256-bit on the RTX 2080 Super; even with the AD106's much higher memory clocks, the RTX 2080 Super retains nearly 36 percent more memory bandwidth. Yet the AD106 GPU manages to beat the RTX 2080 Super in all of AIDA64's memory benchmarks.

NVIDIA RTX 4080 20-30% Slower than RTX 4090, Still Smokes the RTX 3090 Ti: Leaked Benchmarks

Benchmarks of NVIDIA's upcoming GeForce RTX 4080 (formerly known as the RTX 4080 16 GB) are already out, as the leaky taps of the Asian tech forumscape know no bounds. Someone on the ChipHell forums with access to an RTX 4080 sample and drivers put it through a battery of synthetic and gaming tests. The $1,200 MSRP graphics card was tested in 3DMark Time Spy, Port Royal, and games that include Forza Horizon 5, Call of Duty: Modern Warfare II, Cyberpunk 2077, Borderlands 3, and Shadow of the Tomb Raider.

The big picture: the RTX 4080 lands halfway between the RTX 3090 Ti and the RTX 4090. At stock settings in 3DMark Time Spy Extreme (4K), it delivers 71% of the RTX 4090's performance, whereas the RTX 3090 Ti manages 55%. With its power limit slider maxed out, the RTX 4080 inches 2 percentage points closer to the RTX 4090 (73%), and a bit of manual OC adds another 4 percentage points. Things change slightly in 3DMark Port Royal, where the RTX 4080 delivers 69% of the RTX 4090's performance, versus 58% for the RTX 3090 Ti.
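Because both cards are expressed as a percentage of the same RTX 4090 baseline, the RTX 4080's lead over the RTX 3090 Ti falls out of a simple ratio:

```python
def rel_uplift(a_pct_of_ref: float, b_pct_of_ref: float) -> float:
    """How much faster A is than B, given both as a % of the same reference card."""
    return (a_pct_of_ref / b_pct_of_ref - 1) * 100

# RTX 4080 vs RTX 3090 Ti, both normalized to the RTX 4090 per the leak:
print(round(rel_uplift(71, 55), 1))  # Time Spy Extreme → 29.1
print(round(rel_uplift(69, 58), 1))  # Port Royal → 19.0
```

So the leaked numbers put the RTX 4080 roughly 19-29% ahead of the RTX 3090 Ti depending on the test, despite sitting well behind the RTX 4090.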

UL Benchmarks Launches 3DMark Speed Way DirectX 12 Ultimate Benchmark

UL Solutions is excited to announce that our new DirectX 12 Ultimate Benchmark 3DMark Speed Way is now available to download and buy on Steam and on the UL Solutions website. 3DMark Speed Way is sponsored by Lenovo Legion. Developed with input from AMD, Intel, NVIDIA, and other leading technology companies, Speed Way is an ideal benchmark for comparing the DirectX 12 Ultimate performance of the latest graphics cards.

DirectX 12 Ultimate is the next-generation application programming interface (API) for gaming graphics. It adds powerful new capabilities to DirectX 12, helping game developers improve visual quality, boost frame rates, reduce loading times and create vast, detailed worlds. 3DMark Speed Way's engine demonstrates what the latest DirectX API brings to ray-traced gaming, using DirectX Raytracing tier 1.1 for real-time global illumination and real-time ray-traced reflections, coupled with new performance optimizations like mesh shaders.

3DMark Speed Way DirectX 12 Ultimate Benchmark is Launching on October 12

3DMark Speed Way is a new GPU benchmark that showcases the graphics technology that will power the next generation of gaming experiences. We're excited to announce that Speed Way, sponsored by Lenovo Legion, is releasing on October 12. Our team has been working hard to get Speed Way ready for you to use for benchmarking, stress testing, and comparing the new PC hardware coming this fall.

From October 12 onward, Speed Way will be included in the price when you buy 3DMark from Steam or our own online store. Since we released Time Spy in 2016, 3DMark users have enjoyed many free updates, including Time Spy Extreme, the 3DMark CPU Profile, 3DMark Wild Life, and multiple tests demonstrating new DirectX features. With the addition of Speed Way, the price of 3DMark on Steam and 3DMark Advanced Edition will go up from $29.99 to $34.99.

UL Launches New 3DMark Feature Test for Intel XeSS

We're excited to release a new 3DMark feature test for Intel's new XeSS AI-enhanced upscaling technology. This new feature test is available in 3DMark Advanced and Professional Editions. 3DMark feature tests are special tests designed to highlight specific techniques, functions, or capabilities. The Intel XeSS feature test shows you how XeSS affects performance.

The 3DMark Intel XeSS frame inspector tool helps you compare image quality with an interactive side-by-side comparison of XeSS and native-resolution rendering. Check out the images below to see an example comparison of native resolution rendering and XeSS in the new 3DMark feature test.