
NVIDIA Turing GeForce RTX Technology & Architecture


Deep Learning Anti-Aliasing (DLSS)

One of the most interesting features of Turing (blame NVIDIA and their own slides for that choice of adjective) has to be Deep Learning Super-Sampling (DLSS). The reason is that some of NVIDIA's loftier performance claims hinge on the use of this anti-aliasing algorithm, which has a healthy dose of AI built into it, drawing on NVIDIA's expertise in that particular area of computing.

DLSS essentially applies AI's proficiency at analyzing images and finding optimized solutions to replace conventional forms of AA. Post-processing AA techniques have already reduced the performance hit tremendously compared to, say, good old MSAA; the impact of modern FXAA (Fast Approximate Anti-Aliasing) or TAA (Temporal Anti-Aliasing), for instance, is remarkably low. However, these methods are not without their problems: TAA in particular is prone to rendering errors and image blurring due to the way it works (essentially, it uses motion vectors to combine two temporally distinct frames, as sketched below, which can result in temporal artifacts and reduced detail).
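As a rough illustration of why TAA can blur, here is a minimal sketch (in Python/NumPy, with hypothetical frame and motion-vector arrays, not any shipping implementation) of the history-blending step at the heart of temporal AA: the current frame is mixed with a reprojected copy of the previous output, so any reprojection error or sub-pixel mismatch bleeds detail from one frame into the next.

```python
import numpy as np

def taa_resolve(current, history, motion, alpha=0.1):
    """Blend the current frame with the reprojected previous output.

    current : (H, W, 3) float array, this frame's raw (jittered) render
    history : (H, W, 3) float array, the previously resolved frame
    motion  : (H, W, 2) float array, per-pixel motion vectors in pixels
    alpha   : weight given to the new frame; small values smooth more
              but also blur more when the reprojection is imperfect
    """
    h, w, _ = current.shape
    ys, xs = np.mgrid[0:h, 0:w]

    # Reproject: look up where each pixel was in the previous frame.
    prev_x = np.clip((xs - motion[..., 0]).round().astype(int), 0, w - 1)
    prev_y = np.clip((ys - motion[..., 1]).round().astype(int), 0, h - 1)
    reprojected = history[prev_y, prev_x]

    # Exponential blend of two temporally distinct frames; this is where
    # ghosting and blur creep in whenever the reprojection is wrong.
    return alpha * current + (1.0 - alpha) * reprojected
```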

DLSS is, essentially, an image upscaling algorithm built on a Deep Neural Network (DNN) approach; it uses NVIDIA's Tensor Cores to determine the best upscaling result on a per-frame basis, rendering the image at a lower resolution and then inferring the correct edges and smoothing for each pixel. There is a fair amount of magic here, though, and not all of it happens locally on your computer.
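Conceptually, the local part of DLSS boils down to "render low, upscale smart." The sketch below is not NVIDIA's network, just a hypothetical PyTorch stand-in (the TinyUpscaler name and layer sizes are invented) showing the shape of the operation: a frame rendered at a lower resolution is pushed through a small pretrained model that infers the full-resolution pixels.

```python
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    """Hypothetical stand-in for a DLSS-style super-resolution network:
    a few convolutions followed by a 2x pixel-shuffle upsample."""

    def __init__(self, channels=32, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
        )
        # Rearranges channels into a higher-resolution image.
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, low_res_frame):
        return self.shuffle(self.body(low_res_frame))

# Per-frame use: render at half resolution, then infer the full-resolution frame.
model = TinyUpscaler().eval()              # in practice the weights arrive pretrained
low_res = torch.rand(1, 3, 540, 960)       # e.g. a 960x540 render targeting 1080p
with torch.no_grad():
    high_res = model(low_res)              # -> (1, 3, 1080, 1920)
```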

DLSS only works after NVIDIA has generated and sampled what it calls a "ground truth" image: the highest-quality rendition of a frame you can imagine, rendered at a 64x supersampling rate. The neural network then works on thousands of these pre-rendered images for each game, applying AI techniques for image analysis and picture-quality optimization. After a game with DLSS support (and NVIDIA NGX integration) has been tested and retested by NVIDIA, a DLSS model is compiled. This model is created via a continuous back-propagation process, which is essentially trial and error over how close the generated images come to the ground truth. The trained model is then transferred to the user's computer (weighing in at mere megabytes) and executed by the local Tensor Cores in the respective game (another step toward deeper GeForce Experience integration). In effect, the network has been trained to perform the steps required to bring the locally generated image as close to the ground-truth image as possible, without that ground truth ever having to be rendered on the user's machine. As it stands, NVIDIA says DLSS is much better than TAA at preserving image quality, heavily reducing the blurriness (caused by combining two temporally distinct frames) and other temporal artifacts.
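The offline training loop NVIDIA describes can be pictured roughly as follows: the network's output is compared against the 64x supersampled "ground truth" frame, and back-propagation nudges the weights until the two are as close as possible. The sketch below reuses the hypothetical TinyUpscaler from the previous example and invented dummy tensors purely for illustration; it is not the actual NGX pipeline.

```python
import torch
import torch.nn.functional as F

# TinyUpscaler is the hypothetical network defined in the previous sketch.
model = TinyUpscaler()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(low_res_batch, ground_truth_batch):
    """One back-propagation step toward the 64x supersampled reference."""
    predicted = model(low_res_batch)
    loss = F.l1_loss(predicted, ground_truth_batch)  # how far from ground truth?
    optimizer.zero_grad()
    loss.backward()        # trial and error, formalized as gradient descent
    optimizer.step()
    return loss.item()

# Example with dummy data; in reality this runs on thousands of frames per game
# on NVIDIA's side, and only the trained weights (a few MB) ship to users.
low = torch.rand(4, 3, 270, 480)
ref = torch.rand(4, 3, 540, 960)   # stand-in for the ground-truth frames
print(training_step(low, ref))
```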

NVIDIA also speaks of a DLSS 2X setting, which renders the image at the intended target resolution and then processes it through the AI-based neural network to reach quality levels approaching those of native 64x supersampled rendering.
