NVIDIA earlier this month launched its most important GeForce RTX 20-series graphics card, the RTX 2060, on the sidelines of CES 2019. With a list price of $349, this card is designed for affordable 1440p gaming with all details cranked up, including real-time ray-tracing RTX features. We reviewed the de facto reference design, dubbed the Founders Edition, last week. The RTX 2060 is primarily a partner-driven launch, which means there could be dozens of custom-design graphics card models from NVIDIA's various add-in card (AIC) partners.
The RTX 2060 was rumored to come in half a dozen sub-variants based on memory size and type, although in the end, NVIDIA only launched the top-spec variant with 6 GB of GDDR6 memory. Perhaps NVIDIA is saving the other SKUs for when its GTX 1060 inventories have sufficiently cleared the shelves and spring-summer sets in. NVIDIA carved the RTX 2060 out of the same silicon as the RTX 2070, the 12 nm Turing "TU106." This means you very much do get RT cores and Tensor cores, and NVIDIA wants you to enjoy real-time ray-traced gaming on this card with RTX enabled, along with its ambitious new image-quality innovation, DLSS (deep-learning super-sampling).
The RTX 2060 is equipped with 1,920 CUDA cores spread across 30 of the 36 streaming multiprocessors on the "TU106," a huge step up from the GTX 1060 6 GB (1,280). You hence get 30 RT cores and 240 Tensor cores. NVIDIA narrowed the memory bus width of this chip down to 192-bit and equipped it with 6 GB of GDDR6 memory clocked at 14 Gbps, resulting in 336 GB/s of memory bandwidth (on par with a GTX 980 Ti).
EVGA's GeForce RTX 2060 XC Ultra is the company's highest-specced RTX 2060 variant. It comes factory-overclocked with a sizable boost-clock increase to 1830 MHz, the highest of any RTX 2060 announced so far. EVGA extended the length of the card to accommodate a larger cooler, which should help with thermals. The EVGA RTX 2060 XC Ultra will retail for $380.
The "Turing" architecture caught many of us by surprise because it wasn't visible on GPU architecture roadmaps until a few quarters ago. NVIDIA took this roadmap detour over carving out client-segment variants of "Volta" as it realized it had achieved sufficient compute power to bring its ambitious RTX Technology to the client segment. NVIDIA RTX is an all-encompassing real-time ray-tracing model for consumer graphics that seeks to bring a semblance of real-time ray tracing to 3D games.
To enable RTX, NVIDIA has developed an all-new hardware component that sits next to the CUDA cores, called the RT core. An RT core is fixed-function hardware that does what NVIDIA OptiX, the spiritual ancestor of RTX, did on CUDA cores. You input the mathematical representation of a ray, and it will traverse the scene to calculate the point of intersection with any triangle in the scene. This is a computationally heavy task that would have otherwise bogged down the CUDA cores.
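To give a sense of the work being offloaded, here is a minimal CPU-side C++ sketch of a single ray/triangle intersection query, using the well-known Möller-Trumbore test. This is purely an illustration of the kind of query an RT core answers in hardware, not NVIDIA's implementation, and it leaves out the bounding-volume-hierarchy traversal the RT core also handles to narrow down which triangles to test.

```cpp
#include <cmath>
#include <cstdio>
#include <optional>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Möller-Trumbore ray/triangle test: returns the distance t along the ray at
// which it hits triangle (v0, v1, v2), or nothing on a miss.
std::optional<float> intersect(Vec3 origin, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2)
{
    const float kEps = 1e-7f;
    Vec3 e1 = sub(v1, v0);
    Vec3 e2 = sub(v2, v0);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < kEps) return std::nullopt;   // ray is parallel to the triangle
    float invDet = 1.0f / det;
    Vec3 tvec = sub(origin, v0);
    float u = dot(tvec, p) * invDet;
    if (u < 0.0f || u > 1.0f) return std::nullopt;    // outside the triangle
    Vec3 q = cross(tvec, e1);
    float v = dot(dir, q) * invDet;
    if (v < 0.0f || u + v > 1.0f) return std::nullopt;
    float t = dot(e2, q) * invDet;
    if (t > kEps) return t;                            // hit in front of the ray origin
    return std::nullopt;
}

int main()
{
    // A ray fired down the -Z axis at a triangle lying in the Z = 0 plane.
    auto hit = intersect({0.25f, 0.25f, 1.0f}, {0.0f, 0.0f, -1.0f},
                         {0, 0, 0}, {1, 0, 0}, {0, 1, 0});
    if (hit) std::printf("hit at t = %.2f\n", *hit);   // prints: hit at t = 1.00
}
```

An RT core answers queries like this for enormous numbers of rays per frame, which is exactly the workload that would swamp the programmable shaders if done in software.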
The other major introduction is the Tensor core, which made its debut with the "Volta" architecture. These, too, are specialized components, tasked with 4x4x4 matrix multiply-accumulate operations, which speed up the training and inference of deep-learning neural networks. Their relevance to gaming is limited at this time, but NVIDIA is introducing a few AI-accelerated image-quality enhancements that could leverage Tensor operations.
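For illustration, here is a plain C++ sketch of the operation a Tensor core is built around: the fused matrix multiply-accumulate D = A x B + C on small matrix tiles. On the GPU this is exposed through CUDA's warp-level matrix (WMMA) intrinsics and executed in far fewer cycles; the scalar loops below only show the math, not how the hardware does it.

```cpp
#include <array>
#include <cstdio>

using Mat4 = std::array<std::array<float, 4>, 4>;

// D = A * B + C: the fused matrix multiply-accumulate that a Tensor core
// performs on a small tile as one operation (shown here as scalar loops).
Mat4 mma(const Mat4 &A, const Mat4 &B, const Mat4 &C)
{
    Mat4 D{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            float acc = C[i][j];
            for (int k = 0; k < 4; ++k)
                acc += A[i][k] * B[k][j];
            D[i][j] = acc;
        }
    return D;
}

int main()
{
    Mat4 I{};                        // identity matrix
    for (int i = 0; i < 4; ++i) I[i][i] = 1.0f;
    Mat4 C{};                        // all-zero accumulator
    Mat4 D = mma(I, I, C);           // I * I + 0 = I
    std::printf("%.1f\n", D[0][0]);  // prints: 1.0
}
```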
The component hierarchy of a "Turing" GPU isn't much different from that of its predecessors, but the new-generation streaming multiprocessor (SM) is significantly different. It packs 64 CUDA cores, 8 Tensor cores, and a single RT core.
TU106 Graphics Processor
The TU106 is the third-largest chip based on the "Turing" architecture, and as we mentioned earlier, it diverges from chips such as the GP106 in that it has half the number-crunching machinery of the largest chip, the TU102, rather than half that of the TU104. This allows NVIDIA to give the RTX 2070 over three-quarters the CUDA core count of the RTX 2080 without wasting valuable TU104 dies by disabling CUDA cores that are sometimes perfectly functional. The RTX 2060 is carved out of the same silicon by disabling some components.
At the topmost level, the GPU takes host connectivity from PCI-Express 3.0 x16 and connects to its memory across a 192-bit wide GDDR6 memory interface.
The GigaThread engine marshals load between three GPCs (graphics processing clusters). Each GPC has a dedicated raster engine and six TPCs (texture processing clusters). A TPC shares a PolyMorph engine between two SMs. Each SM packs 64 CUDA cores, 8 Tensor cores, and an RT core.
There are, hence, 768 CUDA cores, 96 Tensor cores, and 12 RT cores per GPC, and a grand total of 2,304 CUDA cores, 288 Tensor cores, and 36 RT cores across the TU106 silicon. For the RTX 2060, NVIDIA has disabled a total of six streaming multiprocessors, two per GPC, resulting in 1,920 CUDA cores, 240 Tensor cores, and 30 RT cores. The memory interface is narrowed down to 192-bit, holding 6 GB of memory.
At its memory data rate of 14 Gbps, the RTX 2060 has 336 GB/s of memory bandwidth on tap, the same as the GTX 980 Ti (which had a much wider, but slower, memory bus).
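Both the core counts and the bandwidth figure fall straight out of the numbers above; here is a quick back-of-the-envelope check (the per-SM and layout constants are the published Turing figures, the rest is simple arithmetic):

```cpp
#include <cstdio>

int main()
{
    // Full TU106: 3 GPCs x 6 TPCs x 2 SMs, with 64 CUDA cores, 8 Tensor cores
    // and 1 RT core per SM.
    const int sms_full    = 3 * 6 * 2;     // 36 SMs
    const int sms_rtx2060 = sms_full - 6;  // 6 SMs disabled -> 30

    std::printf("CUDA cores:   %d\n", sms_rtx2060 * 64); // 1920
    std::printf("Tensor cores: %d\n", sms_rtx2060 * 8);  // 240
    std::printf("RT cores:     %d\n", sms_rtx2060 * 1);  // 30

    // Bandwidth = data rate x bus width: 14 Gbps per pin x 192 bits / 8 bits per byte
    const double bandwidth_gb_s = 14.0 * 192 / 8;         // 336 GB/s
    std::printf("Bandwidth:    %.0f GB/s\n", bandwidth_gb_s);
}
```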
Features
Again, we highly recommend you read our article from last year for intricate technical details about the "Turing" architecture feature set, which we briefly summarize here.
NVIDIA RTX is a brave new feature that has triggered a leap in GPU compute power, just like other killer real-time consumer graphics features before it, such as anti-aliasing, programmable shading, and tessellation. It provides a programming model for 3D scenes with ray-traced elements that improve realism. RTX introduces several turnkey effects that game developers can implement for specific sections of their 3D scenes, rather than ray-tracing everything on screen (we're not quite there yet). A plethora of next-generation GameWorks effects could leverage RTX.
Architectural features perhaps more relevant to gamers come in the form of improvements to the GPU's shaders. In addition to concurrent INT and FP32 operations in the SM, "Turing" introduces Mesh Shading, Variable Rate Shading, Content-Adaptive Shading, Motion-Adaptive Shading, Texture-Space Shading, and Foveated Rendering.
Deep Learning Super Sampling (DLSS) is an ingenious new post-processing method that leverages deep neural networks trained to infer how an image should look when upscaled. The networks are trained offline by NVIDIA against ground-truth data of how scenes in supported games should ideally look, and the resulting per-game network data is delivered via driver updates or GeForce Experience. On your GPU, inference runs on the Tensor cores, and the DNN uses what it has learned to reconstruct detail in 3D objects. NVIDIA claims DLSS 2X image quality is comparable to 64x "classic" super sampling.
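To make the data flow concrete, here is a toy C++ sketch of the per-frame idea: shade fewer pixels than the target resolution, then reconstruct upward. The nearest-neighbour "reconstruct" stand-in below is purely illustrative; in the real pipeline that step is the trained network running on the Tensor cores, and the resolutions are example values, not NVIDIA's internal render targets.

```cpp
#include <cstdint>
#include <vector>

// Minimal frame container used for illustration only.
struct Frame {
    int width = 0, height = 0;
    std::vector<uint32_t> pixels;   // RGBA, row-major
};

// Placeholder for the trained DLSS network: here it simply does a
// nearest-neighbour upscale. In the real pipeline, a DNN running on the
// Tensor cores reconstructs the missing detail instead.
Frame reconstruct(const Frame &in, int outW, int outH)
{
    Frame out{outW, outH, std::vector<uint32_t>(size_t(outW) * outH)};
    for (int y = 0; y < outH; ++y)
        for (int x = 0; x < outW; ++x) {
            int sx = x * in.width  / outW;
            int sy = y * in.height / outH;
            out.pixels[size_t(y) * outW + x] = in.pixels[size_t(sy) * in.width + sx];
        }
    return out;
}

int main()
{
    // Shade fewer pixels than the 2560x1440 target, then reconstruct upward.
    Frame lowres{1920, 1080, std::vector<uint32_t>(1920u * 1080u, 0xFF808080u)};
    Frame output = reconstruct(lowres, 2560, 1440);
    return output.width == 2560 ? 0 : 1;
}
```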