
Death Stranding Director's Cut: XeSS vs. DLSS vs. FSR 2.0 Comparison Review


Introduction

Following our previous testing of Intel's Xe Super Sampling (XeSS) in Shadow of the Tomb Raider and Marvel's Spider-Man Remastered, Death Stranding Director's Cut is the next AAA game to receive official XeSS support through a game update. The same update also added official support for AMD's FidelityFX Super Resolution 2.0 (FSR 2.0). XeSS, DLSS and FSR 2.0 all work on the same principle: the game renders everything except the HUD and post-processing effects at a lower resolution than the display output, and sophisticated algorithms then upscale the result so that it looks as if it had been rendered at native resolution. Running this game at maximum graphics settings and reasonable framerates at native resolution requires quite a powerful GPU, which is why upscaling solutions are so important. Depending on the game, however, there are subtle differences between the implementations of Intel's Xe Super Sampling (XeSS), NVIDIA's Deep Learning Super Sampling (DLSS) and AMD's FidelityFX Super Resolution 2.0 (FSR 2.0), so we are keen to take a closer look at them in this title.
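To illustrate that principle, the short sketch below computes approximate internal render resolutions for common quality modes. The per-axis scale factors used here (roughly 1.5x for Quality, 1.7x for Balanced, 2.0x for Performance) are typical values and only an assumption for illustration; the exact ratios vary slightly between XeSS, DLSS and FSR 2.0 and between their mode names.

```python
# Hedged sketch: approximate internal render resolutions for common
# upscaler quality modes. The per-axis scale factors are typical values
# (Quality ~1.5x, Balanced ~1.7x, Performance ~2.0x) and are assumptions
# for illustration; exact ratios differ between XeSS, DLSS and FSR 2.0.

QUALITY_MODES = {
    "Quality": 1.5,
    "Balanced": 1.7,
    "Performance": 2.0,
}

OUTPUT_RESOLUTIONS = {
    "4K": (3840, 2160),
    "1440p": (2560, 1440),
    "1080p": (1920, 1080),
}

for res_name, (out_w, out_h) in OUTPUT_RESOLUTIONS.items():
    for mode, factor in QUALITY_MODES.items():
        render_w = round(out_w / factor)
        render_h = round(out_h / factor)
        print(f"{res_name} {mode}: renders at ~{render_w}x{render_h}, "
              f"upscaled to {out_w}x{out_h}")
```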



Below, you will find comparison screenshots at 4K, 1440p, 1080p, and in different DLSS, FSR 2.0 and XeSS quality modes. For those who want to see how XeSS, DLSS and FSR 2.0 perform in motion, watch our side-by-side comparison video. The video can help uncover issues like shimmering or temporal instability, which are not visible in the screenshots.

All tests were conducted on a GeForce RTX 3060 GPU at the Very High graphics preset; motion blur and depth of field were disabled for clearer image comparisons. DLSS was manually updated to version 2.4.12 by swapping the DLL file.
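For reference, swapping the DLSS DLL simply means replacing nvngx_dlss.dll in the game's install folder with the newer version. The sketch below illustrates the idea; the folder paths are hypothetical examples and will differ on your system.

```python
# Hedged sketch of a DLSS DLL swap. The paths below are hypothetical
# examples; adjust them to your own game install and download locations.
import shutil
from pathlib import Path

game_dir = Path(r"C:\Games\Death Stranding Directors Cut")            # hypothetical install path
new_dll = Path(r"C:\Downloads\nvngx_dlss_2.4.12\nvngx_dlss.dll")       # hypothetical source path

target = game_dir / "nvngx_dlss.dll"
shutil.copy2(target, game_dir / "nvngx_dlss_original.bak")  # back up the shipped DLL
shutil.copy2(new_dll, target)                               # drop in the newer version
print(f"Replaced {target}")
```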

Screenshots




Side-by-Side Comparison Video


Conclusion

In Death Stranding Director's Cut, the in-game TAA solution as well as the DLSS and FSR 2.0 implementations apply a sharpening filter in the render path, and the game exposes separate sliders to tweak the sharpening strength. Unfortunately, the XeSS implementation neither applies a sharpening filter in its render path nor offers a sharpening slider. To keep our testing fair, we disabled sharpening for all available upscaling and anti-aliasing solutions.

With FSR 2.0, DLSS or XeSS active, image quality is noticeably improved at all resolutions compared to native TAA. FSR 2.0, DLSS and XeSS handle small, thin objects in the distance, such as wires, much better, and they improve the quality of steel objects by eliminating shimmering at lower resolutions. Overall, every available upscaling solution delivers better image quality than the built-in anti-aliasing. In the FSR 2.0, DLSS and XeSS images, most geometry edges are smoothed well, whereas the native TAA image has a more pixelated look. The most interesting difference between FSR 2.0, DLSS and XeSS is overall texture detail: FSR 2.0 delivers significantly better texture detail than DLSS, XeSS and native TAA across all resolutions, and that is not due to FSR 2.0's sharpening filter, as we disabled sharpening in our testing.

XeSS ships with three upscaling kernels optimized for different architectures. The first is the kernel used on Intel Arc GPUs with XMX engines; this is the most advanced path, which not only performs better in terms of FPS but also offers the best upscaling quality, and Intel calls it the "Advanced XeSS upscaling model." Intel also provides a kernel optimized for Intel integrated graphics, as well as a compatibility kernel used on all other architectures that support Shader Model 6.4, e.g. all recent AMD and NVIDIA cards. These two use the "Standard XeSS upscaling model," which is somewhat simpler and delivers lower performance and quality than what you get on Arc GPUs (this is the model running on our RTX 3060). If DP4a instructions are not available, as on the Radeon RX 5700 XT, slower INT24 instructions are used instead, as sketched below.
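To make those code paths easier to follow, here is a minimal sketch of how a game might pick the XeSS kernel at startup. The capability flags and the function name are hypothetical placeholders for illustration, not Intel's actual XeSS SDK API; only the selection order follows the behavior described above.

```python
# Hedged sketch of the XeSS kernel selection described above.
# The capability flags and function name are hypothetical placeholders,
# not the actual XeSS SDK API.

def select_xess_kernel(gpu):
    """Return the XeSS kernel/model a given GPU would run."""
    if gpu.get("has_xmx_engines"):
        # Intel Arc GPUs: XMX-accelerated "Advanced XeSS upscaling model"
        return "advanced (XMX)"
    if gpu.get("is_intel_igpu"):
        # Kernel optimized for Intel integrated graphics
        return "standard (Intel iGPU)"
    if gpu.get("shader_model_6_4") and gpu.get("supports_dp4a"):
        # Compatibility path for recent AMD/NVIDIA GPUs, e.g. our RTX 3060
        return "standard (DP4a)"
    if gpu.get("shader_model_6_4"):
        # No DP4a, e.g. Radeon RX 5700 XT: slower INT24 fallback
        return "standard (INT24 fallback)"
    return "unsupported"

print(select_xess_kernel({"shader_model_6_4": True, "supports_dp4a": True}))
# -> standard (DP4a)
```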

Speaking of XeSS, its render quality in terms of overall image detail is comparable to what DLSS and FSR 2.0 can output, but there are differences in temporal stability. One of the most noticeable image-quality differences between XeSS, DLSS and FSR 2.0 is ghosting: XeSS shows noticeable ghosting and black trails on the flying chiral crystal particles and flying cryptobiotes, similar to what DLSS 2.1 exhibited in the past. On the DLSS side, this issue was fixed with updates to the DLSS render pipeline, and no doubt it can be fixed in XeSS too. It is also important to note that we are testing XeSS on an RTX GPU using the "standard" kernel rather than the Intel Arc kernel, which uses the XMX engines and the advanced XeSS upscaling model, and this may affect our image quality results.

Interestingly, XeSS shows some major differences in performance gains compared to DLSS and FSR 2.0, which have delivered equal or very similar gains in most games. Since we are testing XeSS on an RTX 3060, which lacks the XMX hardware designed to accelerate XeSS workloads on Intel's Arc GPUs, the performance gains are lower than what we would expect on Arc GPUs, so keep that in mind. That said, the difference in performance uplift between XeSS and DLSS or FSR 2.0 is about 13% at 4K in Quality mode, in favor of DLSS and FSR 2.0. Still, compared to native 4K resolution, XeSS manages to deliver up to 25% more performance while using the DP4a instruction set that is compatible with all GPU architectures, which is still quite a decent performance uplift.
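For readers who want to reproduce the percentage math, the short sketch below shows how such uplift figures are derived from frame rates. The FPS values are purely hypothetical placeholders chosen to illustrate a 25% uplift, not our measured results.

```python
# Hedged sketch of how performance-uplift percentages are computed.
# The FPS values are hypothetical placeholders, not measured data.

def uplift_pct(fps_new, fps_base):
    """Percentage gain of fps_new over fps_base."""
    return (fps_new / fps_base - 1.0) * 100.0

fps_native_4k = 40.0      # hypothetical native 4K result
fps_xess_quality = 50.0   # hypothetical XeSS 4K Quality result

print(f"XeSS Quality vs. native 4K: {uplift_pct(fps_xess_quality, fps_native_4k):.0f}% faster")
# -> XeSS Quality vs. native 4K: 25% faster
```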