
NVIDIA DLSS Test in Final Fantasy XV


Introduction



One of the most interesting features of NVIDIA's new Turing GeForce RTX cards is Deep Learning Super-Sampling (DLSS).

DLSS leverages AI's strength at analyzing images and finding optimized solutions in order to replace conventional anti-aliasing techniques. Post-processing AA methods have already reduced the performance hit tremendously compared to, say, good old MSAA; the cost of modern FXAA (Fast Approximate Anti-Aliasing) or TAA (Temporal Anti-Aliasing), for instance, is almost negligible. However, these methods are not without their problems. TAA in particular is prone to rendering errors and image blurring due to the way it works: it reprojects data from previous frames using motion vectors and blends it with the current frame, which can cause temporal image artifacts and reduced detail.

DLSS is, unlike any other anti-aliasing technique, an image upscaling algorithm built around a Deep Neural Network (DNN). It uses NVIDIA's Tensor Cores to determine the best upscale result on a per-frame basis: the game renders the image at a lower resolution, and the network then infers the correct edges and smoothing for each pixel. There is more magic here, though: not all of it happens locally on your computer.
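
To make the flow concrete, here is a minimal Python sketch of the render-then-upscale idea. All function names and the internal resolution are hypothetical stand-ins for illustration, not NVIDIA's actual implementation:

```python
# Conceptual sketch of the DLSS inference flow (illustrative, not NVIDIA's API).
import numpy as np

RENDER_RES = (1440, 2560)   # assumed internal render resolution (rows, cols)
TARGET_RES = (2160, 3840)   # native 4K output

def render_scene(resolution):
    """Stand-in for the game's rasterizer: returns an RGB frame."""
    return np.random.rand(*resolution, 3).astype(np.float32)

def dlss_upscale(frame, target):
    """Placeholder for the trained network running on the Tensor cores;
    a naive nearest-neighbor resize stands in for the learned inference."""
    rows = np.arange(target[0]) * frame.shape[0] // target[0]
    cols = np.arange(target[1]) * frame.shape[1] // target[1]
    return frame[rows][:, cols]

low_res_frame = render_scene(RENDER_RES)              # cheaper to render
output_frame = dlss_upscale(low_res_frame, TARGET_RES)
print(output_frame.shape)                             # (2160, 3840, 3)
```

The interesting part is what replaces the naive resize above: a trained network that reconstructs edges and detail rather than simply stretching pixels.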

DLSS is possible only after NVIDIA has generated and sampled what it calls "ground truth" images: the highest-quality reference frames achievable, rendered at a 64x supersampling rate. The neural network is then trained on thousands of these pre-rendered images for each game, applying AI techniques for image analysis and picture-quality optimization. After a game with DLSS support (and NVIDIA NGX integration) has been tested and retested by NVIDIA, a DLSS model is compiled. This model is created through a continuous backpropagation process, which is essentially trial and error that measures how close the generated images come to the ground truth. The finished model (weighing in at mere megabytes) is then transferred to the user's computer via GeForce Experience integration and executed by the local Tensor cores for the respective game. At that point, the network only performs inference: it takes the locally rendered lower-resolution frame and brings it as close to the ground-truth image as possible, without any of the expensive supersampled rendering ever happening on the user's machine.
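
As a rough illustration of the training idea, the toy loop below uses backpropagation-style gradient descent to pull a tiny model's output toward "ground truth" targets. The data and model here are invented for the example and have nothing to do with NVIDIA's real network or dataset:

```python
# Toy sketch: iteratively adjust weights so predictions approach ground truth.
import numpy as np

rng = np.random.default_rng(0)

# Pretend dataset: pairs of (low-resolution input samples, ground-truth pixel value).
inputs = rng.random((1000, 4))                      # 4 low-res samples per pixel
targets = inputs @ np.array([0.3, 0.2, 0.3, 0.2])   # "ground truth" combination

weights = rng.random(4)                             # model to be trained

for step in range(2000):
    predictions = inputs @ weights
    error = predictions - targets                   # distance from ground truth
    gradient = 2 * inputs.T @ error / len(inputs)
    weights -= 0.1 * gradient                       # gradient-descent update

print(np.round(weights, 3))                         # approaches [0.3, 0.2, 0.3, 0.2]
```

The key point carried over to the real thing: the expensive part (64x supersampled ground truth and the training loop) happens once on NVIDIA's side; the shipped model only has to be evaluated on the user's GPU.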



In Final Fantasy XV, DLSS is only available at 4K resolution. When enabled, the game internally renders at a lower resolution, which increases performance. The DLSS engine then grabs the rendered frame and runs it through the NGX deep-learning pipeline on the Tensor cores, where it gets upscaled and the machine-learning model "fills in" the additional information for each pixel, which also provides the anti-aliasing effect.
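
A quick back-of-the-envelope calculation shows where the speedup comes from. Assuming, purely for the sake of the example, an internal render resolution of 2560x1440:

```python
# Pixel-count comparison between native 4K and an assumed internal resolution.
native_4k = 3840 * 2160   # 8,294,400 pixels shaded per frame
internal = 2560 * 1440    # 3,686,400 pixels shaded per frame (assumed value)
print(f"Pixels shaded relative to native 4K: {internal / native_4k:.0%}")  # ~44%
```

The shading workload per frame drops to roughly 44% of native 4K in that scenario, and the Tensor-core upscale pass costs far less than the shading that was saved.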

Yesterday, Square Enix released a patch for Final Fantasy XV that finally enables DLSS functionality in the full game. DLSS was previously available in the standalone benchmark, but the benchmark's fixed camera path made it easy to optimize for: since the player has no control over movement, every possible frame can be tested and analyzed in advance.

Comparison Images

The following comparisons show Final Fantasy XV at its highest detail settings, but with motion blur off and post-processing set to low. We used NVIDIA's 417.35 WHQL drivers.




Conclusion

The verdict on DLSS can only be "it depends". First of all, it comes with a significant performance improvement over plain 4K. Our RTX 2080 Ti ran at 50 FPS at native resolution, which was a bit too low for comfort. With DLSS enabled, the game runs at a nice and steady 67 FPS (a 34% performance increase). That's a huge difference that alone can be the deciding factor for many.
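
For reference, the frame-time and percentage math from the measured numbers:

```python
# Worked numbers from our measurements on the RTX 2080 Ti.
fps_native = 50   # 4K + TAA
fps_dlss = 67     # 4K + DLSS
print(f"Frame time: {1000 / fps_native:.1f} ms -> {1000 / fps_dlss:.1f} ms")  # 20.0 -> 14.9
print(f"Performance gain: {fps_dlss / fps_native - 1:.0%}")                   # 34%
```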

When looking at image quality, we can see that DLSS is a little bit softer in high-contrast areas (look at the flowers in the top-left balcony). On the other hand, it does visibly reduce aliasing artifacts from straight lines (look at the roof above the flowers). It handles that case much better than the other anti-aliasing methods.

Nearby textures lack a little bit of detail (look at the bricks on the stairs right in front of the player) because the game internally no longer renders at 4K, but at a lower resolution; the differences are small, though.

A major improvement DLSS has over temporal anti-aliasing (TAA) is that it does not rely on motion vectors, which are a source of artifacts with TAA. For example, when you turn the camera and something moves in the opposite direction, you'll see small rendering errors around objects. The reason is that TAA tries to track the motion between frames in order to reuse data from the previous one, which can fail when parts of the scene move in different directions, something that's common in gaming.
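
The simplified sketch below shows the general TAA idea of reprojecting the previous frame with motion vectors and blending it in. It is a bare-bones approximation for illustration only, not any engine's actual resolve pass:

```python
# Why TAA can smear: a wrong motion vector blends in the wrong history pixel.
import numpy as np

def taa_resolve(current, history, motion_vectors, blend=0.9):
    """Blend the current frame with the motion-reprojected previous frame.
    current, history: (H, W) luminance; motion_vectors: (H, W, 2) pixel offsets."""
    h, w = current.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Reproject: look up where each pixel was in the previous frame.
    prev_y = np.clip(ys - motion_vectors[..., 1].astype(int), 0, h - 1)
    prev_x = np.clip(xs - motion_vectors[..., 0].astype(int), 0, w - 1)
    reprojected = history[prev_y, prev_x]
    # Heavy history weight smooths edges, but blurs/ghosts when vectors are wrong.
    return blend * reprojected + (1 - blend) * current

current = np.random.rand(4, 4)
history = np.random.rand(4, 4)
motion = np.zeros((4, 4, 2))       # a static scene: all motion vectors are zero
print(taa_resolve(current, history, motion))
```

DLSS sidesteps this class of errors in Final Fantasy XV because its reconstruction does not depend on these per-pixel motion estimates.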

Compared to upscaled 1440p gaming on a 4K monitor, everything looks better. HUD text in particular is much crisper because it is rendered on top of the final DLSS image at the native 4K resolution (look at the player health bars in the bottom right). Upscaled 1440p does run faster than 4K + DLSS, though, by another 33%.
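
The compositing order that keeps the HUD sharp could be sketched like this, with illustrative stub functions rather than the game's actual code:

```python
# Conceptual frame flow: 3D scene at low res -> DLSS upscale -> HUD at native 4K.
import numpy as np

def render_scene(res):
    return np.zeros((*res, 3))            # low-res 3D pass (stub)

def dlss_upscale(img, res):
    return np.zeros((*res, 3))            # learned upscale to native 4K (stub)

def draw_hud(img):
    img[-100:, -400:] = 1.0               # HUD drawn directly at output resolution
    return img

def present_frame():
    scene = render_scene((1440, 2560))             # internal render resolution
    upscaled = dlss_upscale(scene, (2160, 3840))   # DLSS output at 4K
    return draw_hud(upscaled)                      # UI composited after upscaling

print(present_frame().shape)                       # (2160, 3840, 3)
```

Because the HUD is drawn only after the upscale step, its text and icons never pass through the reconstruction and stay pixel-perfect at 4K.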

Overall, I have to say that I'm pleased with how DLSS looks; the biggest caveat is game support, which is extremely lacking at this time. NVIDIA has confirmed several times that many game studios are working on DLSS support, so it looks like it'll just be a matter of time until it sees wider adoption. Personally, I see DLSS as an additional dial for balancing FPS against image quality, especially on weaker hardware. You can now increase model and texture settings that would previously have driven the framerate below playable levels, and DLSS will in turn bring it back up.