
NVIDIA Turing GeForce RTX Technology & Architecture


The Challenge

The biggest challenge in computer graphics is solving the visibility problem, i.e. determining which objects are visible and which aren't at a given instant in time, for every rendered frame, many times per second. Two approaches exist: rasterization and raytracing.

Images on this page are from Scratchapixel, which provides probably the best lightly technical introduction to rendering techniques available online. Head over to their site if you want to learn more.

Rasterization


Let's start with rasterization, which is the method all games today use for rendering. It all begins with objects that are made up of triangles and get placed in the scene during the generation of each frame. In a later step, all the triangles of these objects are projected onto the screen: they are turned from a 3D representation in the scene into a 2D representation on your screen. This is performed for every single visible triangle in the scene, which can be millions in modern games.
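
To make the projection step concrete, here is a minimal sketch of how a single camera-space vertex can be mapped to 2D screen coordinates with a perspective divide. The function name and the fixed 90-degree field of view are my own assumptions for illustration; a real engine uses full projection matrices and clips triangles against the view frustum first.

#include <cstdio>

struct Vec3 { float x, y, z; };

// Project a camera-space point onto a screen of the given resolution.
// Assumes the camera looks down -z with a 90-degree field of view (same in
// both axes for simplicity); a real rasterizer would use a projection matrix.
void projectToScreen(const Vec3& p, int width, int height, float& sx, float& sy)
{
    // Perspective divide: points farther away move toward the screen center.
    float ndcX = p.x / -p.z;                      // roughly -1..1 for visible points
    float ndcY = p.y / -p.z;
    sx = (ndcX * 0.5f + 0.5f) * width;            // map to pixel coordinates
    sy = (1.0f - (ndcY * 0.5f + 0.5f)) * height;  // flip y: screen origin is top-left
}

int main()
{
    Vec3 vertex = { 1.0f, 0.5f, -4.0f };          // one triangle vertex in camera space
    float sx, sy;
    projectToScreen(vertex, 1920, 1080, sx, sy);
    std::printf("screen position: %.1f, %.1f\n", sx, sy);
}

Repeat this for all three vertices of a triangle and you have its 2D footprint on screen.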


In the next step, all these triangles are filled with the color they're supposed to have. If you imagine the output at this point, it would look like a huge mess of overlapping filled triangles with no regard for their distance from the camera. For a proper view into the world, hidden triangles (and the hidden parts of partially visible triangles) need to be removed wherever a closer triangle obscures them.


The solution to this problem is called Z-Buffering (or Depth Buffering). Every time a pixel on screen gets filled, a second buffer records how far away that triangle is. Each pixel in this buffer is initialized with a very large value. Now, when a pixel gets written to screen, the Z-Buffer is checked first to figure out if it has a value smaller than the current pixel's distance. If that is the case, a previous triangle that's closer to the screen has already filled that pixel, and the current pixel is discarded because it is obscured.
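
As a rough CPU-side sketch of that depth test (the Framebuffer type and its layout are made up for illustration; real GPUs do this in dedicated hardware with compressed depth formats):

#include <vector>
#include <limits>

// A minimal depth-tested framebuffer illustrating the Z-Buffer idea.
struct Framebuffer
{
    int width, height;
    std::vector<unsigned int> color;
    std::vector<float> depth;

    Framebuffer(int w, int h)
        : width(w), height(h), color(w * h, 0),
          depth(w * h, std::numeric_limits<float>::max()) {}  // start "infinitely far away"

    void writePixel(int x, int y, unsigned int rgba, float z)
    {
        int i = y * width + x;
        if (z < depth[i])   // keep the fragment only if it is closer than what's stored
        {
            depth[i] = z;
            color[i] = rgba;
        }
        // otherwise a closer triangle already filled this pixel and the fragment is discarded
    }
};

int main()
{
    Framebuffer fb(1920, 1080);
    fb.writePixel(100, 100, 0xFF0000FFu, 5.0f);  // red fragment at depth 5
    fb.writePixel(100, 100, 0x00FF00FFu, 8.0f);  // green fragment farther away: discarded
}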

All of this can be executed in parallel because the output for each pixel doesn't depend on the output of other pixels on screen, and all triangles can be processed in parallel because the Z-Buffer method will take care of "sorting" the triangles from back to front without ever having to compare two triangles directly.

Raytracing


Raytracing uses a completely different approach. Just like in rasterization, the virtual scene is made up of a large number of triangles that represent objects in the game world, but instead of projecting these triangles onto the screen, raytracing shoots a ray for each monitor pixel out into the game world, looking for points of intersection ("ray hits"). Since the ray propagates from the monitor into the world, it automatically solves the visibility problem: the first thing it intersects is the triangle closest to the screen. If no intersection is found ("ray miss"), the ray never hit an object and the pixel gets a fallback color, for example that of the sky.

The beauty of this approach is that it too can be massively parallelized since each ray is fully independent, and even the triangle-hit-check of each ray doesn't depend on other triangles. You simply collect all the triangle intersections of each ray and then select the closest intersection point.
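
Put together, the inner loop of a toy raytracer could look like the sketch below. The intersection routine is the well-known Möller-Trumbore ray/triangle test, and the two hardcoded triangles exist only to make the example self-contained; this is plain CPU code, while RTX hardware accelerates exactly this kind of search.

#include <cstddef>
#include <cstdio>
#include <vector>
#include <limits>
#include <cmath>

struct Vec3 { float x, y, z; };
Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec3 cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
float dot(Vec3 a, Vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Triangle { Vec3 v0, v1, v2; };

// Möller-Trumbore ray/triangle intersection: returns true and the distance t on a hit.
bool intersect(Vec3 orig, Vec3 dir, const Triangle& tri, float& t)
{
    Vec3 e1 = sub(tri.v1, tri.v0);
    Vec3 e2 = sub(tri.v2, tri.v0);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return false;   // ray is parallel to the triangle
    float inv = 1.0f / det;
    Vec3 s = sub(orig, tri.v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;
    return t > 0.0f;                            // hit must be in front of the ray origin
}

int main()
{
    std::vector<Triangle> scene = {
        { {-1,-1,-5}, {1,-1,-5}, {0,1,-5} },    // near triangle
        { {-2,-2,-9}, {2,-2,-9}, {0,2,-9} },    // far triangle hidden behind it
    };

    Vec3 origin = {0, 0, 0};
    Vec3 dir    = {0, 0, -1};                   // one primary ray for one pixel

    float closest = std::numeric_limits<float>::max();
    int hitIndex = -1;
    for (std::size_t i = 0; i < scene.size(); ++i)  // test every triangle, keep the nearest hit
    {
        float t;
        if (intersect(origin, dir, scene[i], t) && t < closest)
        {
            closest = t;
            hitIndex = (int)i;
        }
    }

    if (hitIndex >= 0) std::printf("hit triangle %d at distance %.2f\n", hitIndex, closest);
    else               std::printf("ray miss: shade with the sky color\n");
}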

I can hear you ask now: "but what about shadows? and reflections?".

Shadows & Reflections in Raytracing


What we just described is the original raytracing algorithm. To add shadows, you cast a "shadow ray" from each intersection point in the scene straight toward each light source. If that ray hits another object on its way to the light source, then light from that source can't reach the origin point because other geometry is in the way, which means the point is shadowed.
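
A rough sketch of that shadow test follows; to keep the intersection math short it uses a sphere as the blocking object, whereas a real tracer would test the same triangle geometry as the primary rays.

#include <cstdio>
#include <cmath>

struct Vec3 { float x, y, z; };
Vec3 sub(Vec3 a, Vec3 b)    { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
float length(Vec3 a)        { return std::sqrt(dot(a, a)); }
Vec3 scale(Vec3 a, float s) { return { a.x*s, a.y*s, a.z*s }; }

struct Sphere { Vec3 center; float radius; };

// Does a ray from 'orig' along normalized 'dir' hit the sphere before distance maxT?
bool occluded(Vec3 orig, Vec3 dir, float maxT, const Sphere& s)
{
    Vec3 oc = sub(orig, s.center);
    float b = dot(oc, dir);
    float c = dot(oc, oc) - s.radius * s.radius;
    float disc = b*b - c;
    if (disc < 0.0f) return false;              // no intersection at all
    float t = -b - std::sqrt(disc);
    return t > 1e-4f && t < maxT;               // blocker sits between the point and the light
}

int main()
{
    Vec3 hitPoint  = { 0, 0, -5 };              // a point found by a primary ray
    Vec3 light     = { 0, 5, -5 };              // a point light above it
    Sphere blocker = { { 0, 2.5f, -5 }, 1.0f }; // geometry in between

    Vec3 toLight = sub(light, hitPoint);
    float dist = length(toLight);
    Vec3 dir = scale(toLight, 1.0f / dist);     // normalized shadow-ray direction

    if (occluded(hitPoint, dir, dist, blocker))
        std::printf("point is in shadow\n");
    else
        std::printf("light reaches the point\n");
}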

This method will give you hard shadows from point lights, but neither the soft shadows nor the area lights that NVIDIA demonstrated in their RTX feature presentation. More on that later in the RTX section.


Reflections are achieved in a similar way. When a light ray hits a surface, depending on the material properties, it either gets reflected or refracted (bent), and this effect can be approximated fairly easily with simple math. You then trace that new ray onward, possibly changing its direction several more times whenever it hits another reflective object, until it lands on a surface that no longer reflects (opaque diffuse), which determines the color of the pixel on screen. The surfaces touched along the way can also contribute to that pixel's color, depending on their material properties.
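
For a perfect mirror, the new ray direction follows directly from the surface normal, as in this small sketch; refraction and glossy materials need more math than shown here, and the 45-degree example values are arbitrary.

#include <cstdio>

struct Vec3 { float x, y, z; };
float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 sub(Vec3 a, Vec3 b)    { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec3 scale(Vec3 a, float s) { return { a.x*s, a.y*s, a.z*s }; }

// Mirror reflection of an incoming direction about a surface normal:
// r = d - 2 (d . n) n, with both vectors normalized.
Vec3 reflect(Vec3 d, Vec3 n)
{
    return sub(d, scale(n, 2.0f * dot(d, n)));
}

int main()
{
    Vec3 incoming = { 0.707f, -0.707f, 0.0f };  // ray coming in at 45 degrees
    Vec3 normal   = { 0.0f, 1.0f, 0.0f };       // flat, upward-facing mirror surface
    Vec3 bounced  = reflect(incoming, normal);

    // A raytracer would continue tracing from the hit point along 'bounced',
    // repeating until it reaches an opaque diffuse surface or a bounce limit.
    std::printf("reflected direction: %.3f %.3f %.3f\n", bounced.x, bounced.y, bounced.z);
}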

Path Tracing


A further refinement of raytracing is called "path tracing", which casts a large number of new rays from each ray hit, simulating how materials behave in real life. In the real world, every surface has a certain amount of roughness, which scatters light in many directions (not just one, as in a mirror reflection). This is the technique movie studios use to create lifelike renderings, but it comes at an immense computational cost because the number of rays is multiplied at every bounce. Here again, the promise of raytracing is that it can simulate these effects in a physically correct way, without the hacks that are the only option when rasterizing. However, due to its computational complexity, real-time path tracing isn't feasible, even on RTX hardware.
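
A sketch of that scattering step is shown below: from one hit point, many random directions are sampled in the hemisphere above the surface, and each of them would spawn a new ray to be traced and averaged into the pixel. The uniform sampling and the sample count of 8 are arbitrary choices for illustration; production path tracers use smarter sampling and hundreds of samples per pixel.

#include <cstdio>
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };
float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Pick a uniformly distributed direction on the hemisphere above the surface normal.
// Path tracers shoot many such scattered rays per hit, which is why the ray count explodes.
Vec3 sampleHemisphere(const Vec3& normal, std::mt19937& rng)
{
    std::uniform_real_distribution<float> uni(-1.0f, 1.0f);
    while (true)
    {
        Vec3 d = { uni(rng), uni(rng), uni(rng) };   // random point in the unit cube
        float len2 = dot(d, d);
        if (len2 > 1.0f || len2 < 1e-6f) continue;   // rejection-sample the unit sphere
        float inv = 1.0f / std::sqrt(len2);
        d = { d.x * inv, d.y * inv, d.z * inv };
        if (dot(d, normal) < 0.0f)                   // flip into the hemisphere above the surface
            d = { -d.x, -d.y, -d.z };
        return d;
    }
}

int main()
{
    std::mt19937 rng(42);
    Vec3 normal = { 0.0f, 1.0f, 0.0f };

    const int samples = 8;
    for (int i = 0; i < samples; ++i)
    {
        Vec3 d = sampleHemisphere(normal, rng);
        // Each direction would spawn a new ray through the scene; the results
        // are averaged into the final pixel color.
        std::printf("scattered ray %d: %.3f %.3f %.3f\n", i, d.x, d.y, d.z);
    }
}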