The fundamental difference is that where rasterization operates on each triangle in the scene, raytracing operates on each pixel on screen. The catch with raytracing is that a single ray per pixel produces significant aliasing (blocky edges). Unlike in rasterization, where various "hacky" anti-aliasing techniques are used, the solution in raytracing is also the physically more correct one: simply trace multiple rays per pixel, all of which contribute to the final output value, so edges are smoothed. Now, imagine full HD resolution of 1920x1080 at 60 FPS with just 16 rays per pixel, and you are already tracing roughly 2 billion rays per second. Multiply that by the number of triangles in the scene, because every ray has to be tested against every triangle for a hit. At a million triangles, you suddenly face two quadrillion triangle tests each second, which is not happening, not on any hardware.
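The arithmetic above is a straightforward back-of-the-envelope calculation; a quick sketch makes the explosion obvious (the million-triangle scene is an assumed, illustrative figure):

```python
# Back-of-the-envelope math for naive raytracing (no acceleration structure).
# All numbers are illustrative, matching the example in the text.
width, height = 1920, 1080       # full HD resolution
fps = 60                         # target frame rate
rays_per_pixel = 16              # multiple rays per pixel for anti-aliasing
triangles = 1_000_000            # assumed scene complexity

rays_per_second = width * height * fps * rays_per_pixel
# Brute force: every ray must be tested against every triangle.
tests_per_second = rays_per_second * triangles

print(f"{rays_per_second:.2e} rays/s")    # ~2e9  (2 billion)
print(f"{tests_per_second:.2e} tests/s")  # ~2e15 (2 quadrillion)
```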
To reduce this huge number of triangle hit tests, the Bounding Volume Hierarchy (BVH) algorithm was invented. It essentially wraps objects and their triangles in incrementally larger boxes, which are stored in a tree-like structure that tracks all bounding boxes for the whole scene and allows quickly moving from a large box to the next smaller one it contains. These boxes are processed simultaneously, level by level, which takes advantage of the parallel processing capabilities of today's graphics cards. Only boxes a ray actually hits keep being processed, while all others are immediately discarded from the workload. The algorithm thus divides and discards its way down through ever-smaller boxes, shrinking the problem until only the few triangles a ray can plausibly hit remain to be tested.
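The divide-and-discard idea can be sketched in a few lines; this is a simplified, hypothetical illustration (real BVHs track closest-hit distances and use carefully tuned box tests), but the core logic is the same: test the ray against a node's box, and only descend into its children on a hit.

```python
# Minimal sketch of BVH traversal as described above. Names and structure
# are illustrative, not any particular engine's implementation.

class AABB:
    """Axis-aligned bounding box given by its min/max corners (3-tuples)."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def hit(self, origin, direction):
        """Slab test: does the ray origin + t*direction (t >= 0) cross this box?"""
        tmin, tmax = 0.0, float("inf")
        for a in range(3):
            if direction[a] == 0.0:
                # Ray parallel to this slab: miss unless origin lies inside it.
                if not (self.lo[a] <= origin[a] <= self.hi[a]):
                    return False
                continue
            t1 = (self.lo[a] - origin[a]) / direction[a]
            t2 = (self.hi[a] - origin[a]) / direction[a]
            tmin = max(tmin, min(t1, t2))
            tmax = min(tmax, max(t1, t2))
        return tmin <= tmax

class Node:
    def __init__(self, box, children=(), triangles=()):
        self.box, self.children, self.triangles = box, children, triangles

def traverse(node, origin, direction, hits):
    if not node.box.hit(origin, direction):
        return  # ray misses this box: the entire subtree is discarded
    if node.triangles:
        hits.extend(node.triangles)  # leaf: only now do triangle tests run
    for child in node.children:
        traverse(child, origin, direction, hits)

# Tiny two-leaf scene: a ray fired along +z into the left half of the scene.
left = Node(AABB((0, 0, 0), (1, 1, 1)), triangles=["left_tri"])
right = Node(AABB((1, 0, 0), (2, 1, 1)), triangles=["right_tri"])
root = Node(AABB((0, 0, 0), (2, 1, 1)), children=(left, right))

hits = []
traverse(root, origin=(0.5, 0.5, -1.0), direction=(0.0, 0.0, 1.0), hits=hits)
# Only the left leaf's triangles survive; the right subtree was never opened.
```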
The beauty of Turing relative to NVIDIA's previous microarchitectures is that while, say, Pascal would need thousands of operations on its CUDA cores to emulate a BVH approach, Turing can run multiple ray tracing calculations in parallel on dedicated hardware, concurrently with shading operations on your old, trusty shader engines. Oh, and did we mention it's done in real time? This is without doubt the single biggest marketing feature of the Turing microarchitecture, and it remains to be seen how well it works out in practice. If nothing else, this heralds one of the biggest leaps in in-game illumination and should set a solid base for future generations of hardware and software alike.
AI Denoising
As mentioned on the previous page, raytracing requires at least one ray to be cast per pixel on screen to provide a color for that pixel; otherwise, the image will be noisy, i.e., have black pixels scattered all around the final image. If you also want anti-aliasing, you have to cast multiple rays per pixel.
Denoising is one of the main ways to get around this limitation, providing a significant visual improvement over the otherwise fuzzy, noisy images that result from sparse data and/or artifacts. NVIDIA claims their denoising filtering is particularly effective at lowering the time it takes to render high-quality ray-traced images that appear noiseless, in real time. This is achieved through a mix of AI-based and non-AI-based algorithms, chosen per specific scene requirements, with the expectation that all denoising filtering will eventually be AI-based and keep improving over time.
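To make the concept concrete, here is the simplest possible denoiser, a 3x3 box filter. This is emphatically not NVIDIA's filter (their real-time filters, AI-based or not, are far more sophisticated); it only illustrates the basic idea of borrowing information from neighboring pixels to smooth out sparse, noisy samples.

```python
# Toy denoiser: replace each pixel with the average of its 3x3 neighborhood.
# Purely illustrative; real denoisers preserve edges and use temporal data.

def box_denoise(img):
    """img: 2D list of floats; returns the per-pixel neighborhood average."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx]
                        n += 1
            out[y][x] = acc / n
    return out

# A "noisy" image: one bright sample in a dark field gets spread out.
noisy = [[0.0, 0.0, 0.0],
         [0.0, 9.0, 0.0],
         [0.0, 0.0, 0.0]]
smooth = box_denoise(noisy)
```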
Shadows
Shadows come into the picture naturally once ray tracing is being discussed, and accurate shadows are among the hardest things to render in real time even before ray tracing is involved. Contact-hardening shadows, in particular, play a major role in imparting a more realistic render, and many game engines have attempted implementations with varying degrees of success (Deus Ex: Mankind Divided, for instance, took a big performance hit with this turned on without an appreciable quality increase). Contact hardening is currently approximated via techniques such as Percentage-Closer Soft Shadows (PCSS) and Distance Field Shadows. The former is computationally taxing and still does not generate accurate shadows from arbitrary area lights, while the latter is limited to static geometry when generating a shadow map.
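The core of the PCSS approximation is worth seeing in isolation: the penumbra grows with the distance between the shadow receiver and its blocker, which is exactly what contact hardening describes. A minimal sketch, with illustrative variable names and the blocker-search and filtering steps of the full technique omitted:

```python
# Sketch of the penumbra estimate at the heart of PCSS: shadows are sharp
# near the contact point and soften as receiver and blocker separate.
# This is only the width estimate; real PCSS then filters the shadow map
# with a kernel of this size. Values are arbitrary, for illustration.

def penumbra_width(receiver_depth, avg_blocker_depth, light_size):
    """Estimated soft-shadow filter radius for one shaded point."""
    return (receiver_depth - avg_blocker_depth) * light_size / avg_blocker_depth

# Near the contact point, the shadow stays nearly hard...
near = penumbra_width(receiver_depth=1.05, avg_blocker_depth=1.0, light_size=2.0)
# ...and softens considerably as the receiver moves away from the blocker.
far = penumbra_width(receiver_depth=3.0, avg_blocker_depth=1.0, light_size=2.0)
```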
NVIDIA Turing RTX-based ray tracing and denoising filtering enable the replacement of shadow maps with ray-traced shadows, a more practical approach to simulating contact-hardening shadows from all types of light sources. The example images above show shadow map-based rendering, which applies a uniform blur along the shadow edges but does not account well for contact hardening. In contrast, ray-traced shadows can have completely hard edges or accurate contact hardening, regardless of the angle at which light arrives from the source.
Ambient Occlusion
Another aspect of what ray tracing improves comes in the form of ambient occlusion. In particular, the new Ray-Traced Ambient Occlusion (RTAO) helps compensate for the absence of full global illumination by darkening creases and other surface detail on objects, with fewer artifacts than the otherwise-used Screen-Space Ambient Occlusion (SSAO). SSAO is particularly notorious for producing artifacts, or even poor-resolution renders, when dealing with reflections off smooth, glossy surfaces, be they cuboid or spherical in geometry. RTAO, especially combined with denoising as mentioned above, allows for more subtle effects in real time without any of these problems.