
AMD FSR 2.0 Quality & Performance Review - The DLSS Killer


Introduction


AMD FidelityFX Super Resolution 2.0 (FSR 2.0) becomes available tomorrow, May 12, with the latest version of "Deathloop." Announced earlier this spring, FSR 2.0 is a major update to AMD's performance-enhancement technology rivaling NVIDIA DLSS, which lets you improve framerates at minimal cost to image quality. Both FSR and DLSS work on the same principle: the game renders everything except the HUD and post-FX at a lower resolution than the display, and a sophisticated algorithm upscales that image so the output looks as if it were rendered at native resolution. AMD and NVIDIA take different paths to achieving this goal.



In this review, we're comparing the image quality of FSR 2.0 at its various performance levels against FSR 1.0 and NVIDIA DLSS / DLAA.

FSR 1.0 Recap

Let's start with a quick history lesson. AMD's first version of FidelityFX Super Resolution, aka "FSR 1.0" (our review), launched roughly one year ago. It takes into account only the currently displayed image and upscales it using the Lanczos algorithm. Even though it's a relatively simple approach, the results were astonishing, especially considering the limited complexity of the algorithm. Working with only a single frame means FSR 1.0 had limited information to work with, which was NVIDIA's strongest argument for DLSS. DLSS, on the other hand, takes a sequence of images, i.e., it has knowledge of the recent history, which gives it additional information it fuses into the single output frame, making it more than an upscaler. The drawback of such temporal algorithms is that the scene in a video game is not static but changes all the time, which can often lead to ghosting artifacts, but more on that later.
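
To give an idea of what that means in practice, here is a minimal C++ sketch of a Lanczos-2 kernel, the textbook form of what such a spatial upscaler samples neighboring pixels with; FSR 1.0's actual shader uses an optimized approximation rather than this direct formulation.

#include <cmath>

// Lanczos-2 reconstruction kernel: a windowed sinc function that weights
// nearby source pixels when computing an upscaled output pixel. This is
// the textbook form; FSR 1.0 uses an optimized approximation of it.
static float Lanczos2(float x)
{
    const float a = 2.0f;                    // kernel radius ("lobes")
    if (x == 0.0f) return 1.0f;
    if (std::fabs(x) >= a) return 0.0f;
    float pix = 3.14159265f * x;
    return (std::sin(pix) / pix) * (std::sin(pix / a) / (pix / a));
}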

FSR 2.0

In order to generate a better output image, FSR needs more data, which is why AMD created a temporal upscaling algorithm that is conceptually roughly similar to NVIDIA's DLSS 2.0. Much like FSR 1.0, FSR 2.0 is a shader-based technology: it relies on programmable shaders and is hence hardware-agnostic, designed to work on all hardware, even graphics cards from NVIDIA and Intel. DLSS, on the other hand, is an NVIDIA-exclusive technology that uses the AI-math-optimized Tensor Cores available in newer generations of NVIDIA hardware.

Under the hood, FSR 2.0 still uses the Lanczos algorithm to upscale the low-resolution image to the final render size. Instead of taking just the current frame as input, data from several frames is combined into a single buffer, which is then upscaled using Lanczos. The frames the game developers feed to FSR have to be slightly jittered, which means the camera moves by a tiny, sub-pixel-sized amount each frame to pick up additional information. FSR then assigns an "importance" score to each sample that takes into account not only how old that information is, but also its distance to the target pixel.
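
To illustrate the jittering, here's a small, self-contained C++ sketch that generates sub-pixel jitter offsets from a Halton(2,3) low-discrepancy sequence, the kind of sequence commonly used for temporal upscalers; the function names are ours, not AMD's.

#include <cstdio>

// Radical inverse in a given base: the building block of the Halton sequence.
static float RadicalInverse(int index, int base)
{
    float result = 0.0f;
    float fraction = 1.0f / base;
    while (index > 0)
    {
        result += (index % base) * fraction;
        index /= base;
        fraction /= base;
    }
    return result;
}

int main()
{
    // Print eight sub-pixel jitter offsets in the range [-0.5, +0.5] pixels.
    // Each frame, the projection is shifted by this tiny amount so that
    // consecutive frames sample slightly different positions within each pixel.
    for (int frame = 1; frame <= 8; ++frame)
    {
        float jitterX = RadicalInverse(frame, 2) - 0.5f;
        float jitterY = RadicalInverse(frame, 3) - 0.5f;
        std::printf("frame %d: jitter = (%+.3f, %+.3f)\n", frame, jitterX, jitterY);
    }
    return 0;
}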


FSR 2.0 sits at the same location in the game's rendering pipeline as FSR 1.0. It takes in the 3D scene ("Color Buffer") rendered by the game at a lower resolution, just like FSR 1.0. What's new is that game developers now feed motion vectors and depth information into the algorithm, too, just like in DLSS 2.0.
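
Conceptually, the integration boils down to handing the upscaler a handful of per-frame inputs. The structure below is a purely illustrative, hypothetical summary of those inputs; the actual FSR 2.0 interface published on GPUOpen uses different names and more fields.

#include <cstdint>

// Hypothetical illustration of what a game passes to a temporal upscaler
// each frame; the real FSR 2.0 API on GPUOpen differs in naming and detail.
struct TemporalUpscalerInputs
{
    const void* colorBuffer;      // low-resolution scene color, before HUD/post-FX
    const void* depthBuffer;      // per-pixel distance to the nearest surface
    const void* motionVectors;    // per-pixel screen-space motion since last frame
    void*       outputBuffer;     // upscaled result at display resolution
    uint32_t    renderWidth;      // resolution the scene was actually rendered at
    uint32_t    renderHeight;
    uint32_t    displayWidth;     // resolution of the final, upscaled image
    uint32_t    displayHeight;
    float       jitterX;          // this frame's sub-pixel camera jitter
    float       jitterY;
    float       frameTimeDelta;   // time since the previous frame, for history weighting
    bool        resetHistory;     // discard accumulated history, e.g. after a camera cut
};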

Motion vectors express how the scene has changed from one frame to the next. For example, when the player pans the camera in a first-person game, the whole "screen" moves in a specific direction; that movement is encoded in the motion vectors. How things like enemies and animated objects in the scene have moved is stored as well. Basically, it's a per-pixel map that records where each pixel moved from and to between two consecutive frames.
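
A minimal sketch of the idea, using our own tiny vector type rather than any engine's math library:

struct Vec2 { float x, y; };

// Given the screen-space position of the same surface point in the previous
// and the current frame (both in pixels), the motion vector simply stores
// how far that point moved on screen between the two frames.
static Vec2 MotionVector(Vec2 currentPixelPos, Vec2 previousPixelPos)
{
    return { previousPixelPos.x - currentPixelPos.x,
             previousPixelPos.y - currentPixelPos.y };
}

// The upscaler uses it the other way around: starting from a pixel in the
// current frame, it adds the motion vector to find where that surface was
// in the previous frame's history ("reprojection").
static Vec2 ReprojectToHistory(Vec2 currentPixelPos, Vec2 motion)
{
    return { currentPixelPos.x + motion.x, currentPixelPos.y + motion.y };
}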

The depth buffer expresses the distance from the camera for every single pixel on screen. The image on your monitor is just a lot of colored pixels; your brain then figures out what objects you're supposed to see and how far away they are. To make it easy for FSR to calculate distances, the game generates a depth buffer that contains a number for each pixel representing not its color, but the distance to the underlying visible object. FSR 2.0 uses this information to eliminate ghosting artifacts.

Ghosting Artifacts


To address the problem of ghosting, AMD uses the depth buffer information to calculate a list of pixels that have moved in such a way that they reveal a different object that's further away. Imagine a red car driving through a winter scene. As the car moves across the screen, some red pixels will become white, revealing the snow the car moved away from. This is called "disocclusion"; "uncover" or "reveal" might be more approachable words for what's happening. If temporal information from previous frames is used naively, several previous positions of the car will become part of the current image: you'd see a red streak trailing the car. These are the ghosting artifacts everybody is afraid of. For the pixels that are revealed, no previous history exists, so AMD discards most of the history for them. Why "most" of the history and not all of it? AMD claims this makes the disocclusion smoother, even though it creates some very minor, practically unnoticeable ghosting. Some additional blurring around such pixels quickly brings in fresh information from the surrounding area.
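
Here's a rough sketch of that logic for a single pixel; the thresholds and names are invented for illustration, and AMD's real heuristic is considerably more involved.

#include <algorithm>
#include <cmath>

// Decide how much of the accumulated history to keep for one pixel.
// currentDepth comes from this frame's depth buffer; reprojectedDepth is the
// depth the history sample had where the motion vectors say this surface came
// from. A large mismatch means the pixel was just disoccluded (a farther
// object became visible), so the old history no longer describes it.
// All thresholds here are illustrative only.
static float HistoryWeight(float currentDepth, float reprojectedDepth)
{
    float relativeError = std::fabs(currentDepth - reprojectedDepth) /
                          std::max(currentDepth, 1e-6f);

    if (relativeError < 0.01f)
        return 1.0f;   // depths agree: the history is still valid, keep it all
    if (relativeError > 0.10f)
        return 0.1f;   // clear disocclusion: drop almost all history, keeping a
                       // sliver (as the article describes) to avoid a hard edge

    // In between, fade the history out smoothly.
    return 1.0f - (relativeError - 0.01f) / (0.10f - 0.01f) * 0.9f;
}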

Thin Features


Objects that appear very narrow on the screen, like wires or geometry that's viewed at a steep angle, are another challenge for upscaling algorithms. Because temporal algorithms grab information at slightly different points in space each frame ("jittering"), such objects result in unstable, flickering pixels: the sampled information "jumps" between two objects with completely different colors.

To fix that, AMD detects such thin "pixel ridges" and locks them, so they become more pronounced and appear stable. Of course, these locks have to be released when the scene moves, and also when the disocclusion algorithm detects that something else has become visible at that location.
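
A toy illustration of the locking idea, again with invented names and thresholds rather than AMD's actual implementation:

#include <algorithm>

// Toy illustration: if a pixel is much brighter than its horizontal neighbors,
// treat it as a thin feature (e.g. a distant wire catching light) and "lock"
// it, i.e. weight the accumulated history more heavily so the feature stays
// stable instead of flickering as the jitter pattern moves.
struct PixelLock
{
    bool  locked;
    float historyBias;   // extra weight given to accumulated history
};

static PixelLock DetectThinFeature(float luma, float lumaLeft, float lumaRight)
{
    float neighborMax = std::max(lumaLeft, lumaRight);
    bool  thin = luma > 2.0f * neighborMax + 0.05f;   // thresholds are made up
    return { thin, thin ? 0.9f : 0.0f };
}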

Dynamic Resolution Scaling


AMD has focused a lot of attention on FSR 2.0's support for Dynamic Resolution Scaling. With DRS, input images at varying resolutions can be fed into the algorithm, and the output can still be constructed without throwing away previous information that was recorded at a different resolution. This enables real-time resolution adjustments. For example, when an explosion occurs, the framerate will drop for a moment while things blow up, which can often be perceived as stuttering. With dynamic resolution, these frames can be rendered at lower resolution, which results in a smoother experience thanks to higher FPS, and when things have cooled off, the game can automagically increase the rendering resolution again. Another application would be to render at lower resolution when the player moves, so things look smooth, and dial up the resolution when the player is standing still and has time to take a closer look at the image.

To solve this challenge, the FSR 2.0 algorithm stores the information it reuses in subsequent frames at native resolution, which stays constant even while the render resolution changes.
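
One way to picture this is a history buffer allocated once at the display resolution and addressed through normalized coordinates, so each variable-resolution input frame maps into the same fixed-size storage. A simplified sketch, with names of our own:

#include <algorithm>
#include <cstdint>
#include <vector>

// The accumulation/history buffer lives at the fixed display resolution.
// Input frames can arrive at any render resolution; they are addressed
// through normalized [0,1] coordinates, so history accumulated at one
// render resolution remains usable when the next frame uses another.
struct HistoryBuffer
{
    uint32_t width, height;        // display resolution, never changes
    std::vector<float> pixels;     // one value per display pixel (simplified)

    HistoryBuffer(uint32_t w, uint32_t h) : width(w), height(h), pixels(w * h, 0.0f) {}

    // Map a normalized coordinate to a display pixel, regardless of the
    // resolution the current frame happened to be rendered at.
    float& At(float u, float v)
    {
        uint32_t x = std::min(width  - 1, static_cast<uint32_t>(u * width));
        uint32_t y = std::min(height - 1, static_cast<uint32_t>(v * height));
        return pixels[y * width + x];
    }
};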

Sharpening



Just like FSR 1.0, FSR 2.0 offers an additional sharpening pass. Unlike FSR 1.0, the sharpening can be enabled, disabled, and its strength adjusted independently of the actual FSR mode. The algorithm is the same Robust Contrast Adaptive Sharpening (RCAS) method as for FSR 1.0.
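
As a rough idea of what contrast-adaptive sharpening does, here is a simplified, single-pixel illustration; this is not AMD's actual RCAS code, just the general principle.

#include <algorithm>

// Simplified contrast-adaptive sharpening for a single pixel, in the spirit
// of (but not identical to) RCAS: sharpen by pushing the pixel away from the
// neighborhood average, and scale that push down where local contrast is
// already high, so edges don't over-shoot and produce halos.
static float SharpenPixel(float center, float up, float down, float left, float right,
                          float strength /* 0 = off, 1 = maximum */)
{
    float neighborAvg   = (up + down + left + right) * 0.25f;
    float localContrast = std::max({up, down, left, right, center}) -
                          std::min({up, down, left, right, center});

    // Less sharpening where contrast is already strong.
    float adaptive = strength * (1.0f - std::min(1.0f, localContrast));

    float sharpened = center + (center - neighborAvg) * adaptive;
    return std::clamp(sharpened, 0.0f, 1.0f);
}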

Modes


Much like FSR 1.0, there are several "modes" you can select from in the settings of a supported game, which alter quality levels by adjusting the resolution at which the game is actually rendered. Each step trades quality for performance. In Deathloop, the modes available are "Quality," "Balanced," and "Performance." "Ultra Quality" from FSR 1.0 has been removed because it was just "Quality" with "Sharpening," which can now be adjusted separately.

If you're unsure what mode to pick, you can use the "Adaptive Resolution" option, which is the Dynamic Resolution Scaling we just talked about. Here, the game engine dynamically adjusts render resolution based on the complexity of the 3D scene to favor a framerate target. You may also limit adaptive resolution mode to not drop below 50%, 75% or 85% of the screen resolution, so your image quality doesn't suffer too much.
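
A toy controller for this kind of adaptive resolution could look like the following; the step sizes are made up, and Deathloop's actual implementation is of course not public.

#include <algorithm>

// Toy dynamic-resolution controller: nudge the render scale toward whatever
// keeps the frame time at the target, but never below the user-selected
// minimum (e.g. 0.50, 0.75 or 0.85 of the display resolution per axis).
static float UpdateRenderScale(float currentScale, float frameTimeMs,
                               float targetFrameTimeMs, float minScale)
{
    // If the frame took too long, render the next one smaller; if there is
    // headroom, grow back toward native resolution.
    float adjustment = (frameTimeMs > targetFrameTimeMs) ? -0.02f : 0.01f;
    return std::clamp(currentScale + adjustment, minScale, 1.0f);
}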


Here's a list of the various modes and their scaling factors: "Quality" renders at 67% of the display resolution per axis (a 1.5x upscale), "Balanced" at 59% (1.7x), and "Performance" at 50% (2.0x). The "Ultra Performance" mode (33%, 3.0x) isn't available in Deathloop (at least not at this time).
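
For reference, this is the arithmetic behind those factors; the short program below computes the resulting render resolutions for a 4K display, assuming the per-axis scale factors listed above.

#include <cstdio>

int main()
{
    // Per-axis scale factors of the FSR 2.0 modes; the render resolution is
    // the display resolution divided by this factor on each axis.
    struct Mode { const char* name; float factor; };
    const Mode modes[] = {
        { "Quality",           1.5f },
        { "Balanced",          1.7f },
        { "Performance",       2.0f },
        { "Ultra Performance", 3.0f },
    };

    const int displayWidth = 3840, displayHeight = 2160;   // 4K example

    for (const Mode& m : modes)
    {
        int renderWidth  = static_cast<int>(displayWidth  / m.factor + 0.5f);
        int renderHeight = static_cast<int>(displayHeight / m.factor + 0.5f);
        std::printf("%-17s %.1fx -> %d x %d\n", m.name, m.factor, renderWidth, renderHeight);
    }
    return 0;
}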

AI not Required

Much like FSR 1.0, the new FSR 2.0 does not leverage AI machine-learning, so it doesn't need any AI-accelerating machinery on the GPU, unlike DLSS, which [at least officially] needs GPUs with Tensor cores (GeForce RTX-only). This means FSR 2.0 will work on NVIDIA GeForce GPUs, and what's more, AMD is even making it open-source on GPU Open, so any game developer or student can have at it under an MIT license, which allows modifications to the algorithm. AMD's benevolence is calculated as it wants to do to DLSS what FreeSync did to G-SYNC (outsold G-SYNC due to royalty-free nature and simplicity of design). Just to clarify, in this case, DLSS is royalty-free, just like FSR, but it's not open source and only NVIDIA is able to make changes and knows how the algorithms work.

There are still some minimum recommendations, so don't pull out your GeForce Fermi just yet. For 4K (i.e., upscaling to 4K), you'll need at least a Radeon RX 5700 or GeForce RTX 2070; for 1440p, at least an RX 5600 or GTX 1080 is recommended; and for 1080p, it's recommended that you have at least an RX 590 or GTX 1070. The technology itself supports graphics cards all the way back to the RX 500 series "Polaris."