NVIDIA launched the GeForce RTX 20-series with the introduction of the GeForce RTX 2080 and RTX 2080 Ti. "Turing" is a moniker that caught us by surprise. A year ago, when everyone assumed "Volta" would succeed "Pascal" in the client segment since the company had already put out the TITAN V, "Turing" was speculated to be a crypto-currency mining chip. Little did we realize that NVIDIA's tribute to Alan Turing wouldn't be limited to the code-breaking work that arguably helped the Allies win the war, but would honor him as the father of artificial intelligence and theoretical computer science.
Turing comes at a time when silicon fabrication technology isn't advancing at the rate it did four years ago, wrecking the architecture roadmaps of several semiconductor giants, including Intel, NVIDIA, AMD, and Qualcomm, and forcing them to design innovative new architectures on existing foundry nodes. Brute-force transistor-count increases, as would have been the case with "Volta," are no longer a viable option, and NVIDIA needed a killer feature to sell new GPUs. That killer feature is RTX Technology, and it is so big for NVIDIA that the company changed the nomenclature of its client-segment graphics cards with the introduction of the GeForce RTX 20-series.
NVIDIA RTX is a near-turnkey real-time ray-tracing model for game developers that lets them fuse ray-traced objects into otherwise rasterized 3D scenes. Ray-tracing an entire scene in real time isn't feasible yet, but even this hybrid approach looks better than anything rasterization alone can achieve. Getting those few ray-traced elements right still demands an enormous amount of compute power, so NVIDIA has deployed purpose-built hardware components, called RT cores, that sit alongside the general-purpose CUDA cores on its GPUs.
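To give a sense of the per-ray workload, here is a minimal C++ sketch of a single ray-sphere intersection test. The scene, names, and math are our illustration only, not NVIDIA's RT core design, which instead accelerates BVH traversal and ray-triangle tests in fixed-function hardware:

```cpp
#include <cmath>
#include <cstdio>
#include <optional>

// Illustration of the per-ray math ray tracing requires. This is NOT
// how RT cores work internally; it only shows the cost of one
// ray-object test, which a frame repeats millions of times.

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns the distance along the ray to the nearest hit, if any.
// Solves |o + t*d - c|^2 = r^2, a quadratic in t.
static std::optional<float> intersect_sphere(Vec3 origin, Vec3 dir,
                                             Vec3 center, float radius) {
    Vec3 oc = sub(origin, center);
    float a = dot(dir, dir);
    float b = 2.0f * dot(oc, dir);
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - 4.0f * a * c;
    if (disc < 0.0f) return std::nullopt;   // ray misses the sphere
    float t = (-b - std::sqrt(disc)) / (2.0f * a);
    if (t < 0.0f) return std::nullopt;      // sphere is behind the ray
    return t;
}

int main() {
    // One ray through one pixel against one object.
    Vec3 eye{0, 0, 0}, dir{0, 0, -1}, center{0, 0, -5};
    if (auto t = intersect_sphere(eye, dir, center, 1.0f))
        std::printf("hit at t = %.2f\n", *t);
    return 0;
}
```

Multiply that by millions of pixels, several rays per pixel, and multiple bounces per ray, sixty times a second, and the case for dedicated silicon becomes obvious.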
NVIDIA invested heavily to stay at the bleeding edge of the hardware that drives pioneering AI research and, over the years, has developed Tensor cores: specialized components tasked with matrix multiplication that speed up the building and training of deep-learning neural nets. Although Turing powers client-segment GPUs meant for gaming, NVIDIA feels GPU-accelerated AI could play an increasingly big role in the company's turnkey GameWorks effects and in a new image-quality enhancement called Deep Learning Super Sampling (DLSS). The chips are hence endowed with Tensor cores, just like the TITAN V; all they lack compared to that $3,000 graphics card from last year is FP64 CUDA cores.
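For the curious, the operation a Tensor core hardwires is a small fused multiply-accumulate, D = A × B + C, over 4×4 tiles of FP16 values (with FP16 or FP32 accumulation). The scalar C++ below is purely our illustration of that arithmetic; a real Tensor core executes the entire tile in a single operation:

```cpp
#include <array>
#include <cstdio>

// Scalar sketch of the Tensor-core primitive D = A * B + C on a
// 4x4 tile. The loops below spell out the 64 multiply-adds that the
// hardware performs as one fused operation.

constexpr int N = 4;
using Mat4 = std::array<std::array<float, N>, N>;

Mat4 mma(const Mat4& A, const Mat4& B, const Mat4& C) {
    Mat4 D{};
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j) {
            float acc = C[i][j];            // start from the accumulator
            for (int k = 0; k < N; ++k)
                acc += A[i][k] * B[k][j];   // multiply-add per element
            D[i][j] = acc;
        }
    return D;
}

int main() {
    Mat4 A{}, B{}, C{};
    for (int i = 0; i < N; ++i) {
        A[i][i] = 1.0f;  // identity
        B[i][i] = 2.0f;  // 2 * identity
        C[i][i] = 0.5f;  // accumulator
    }
    Mat4 D = mma(A, B, C);
    std::printf("D[0][0] = %.1f\n", D[0][0]);  // 1*2 + 0.5 = 2.5
    return 0;
}
```

Neural-net training and inference, DLSS included, reduce almost entirely to long chains of exactly this operation, which is why baking it into silicon pays off.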
NVIDIA GeForce RTX 20-series graphics cards debut at unusually high prices compared to their predecessors, perhaps because NVIDIA doesn't count the GTX 10-series as a predecessor to begin with. These chips pack not just CUDA cores but also RT cores and Tensor cores, which adds to the transistor count and, along with generational performance gains, contributes to scorching 15-70% increases in launch prices over the GTX 10-series. The GeForce RTX 2080 is the second-fastest graphics card of the series and is priced at $700 for the base model.
In this review, we take a look at the Palit GeForce RTX 2080 Gaming Pro OC, the second-fastest RTX 2080 product from Palit. Targeted mainly at gamers who don't care all that much for overclocking, the card combines a dual-fan cooling solution that is just two slots thick (a rarity this generation) with a PCB that closely resembles NVIDIA's reference design. You still get a small but useful 6% factory overclock: the base clock stays at the reference 1515 MHz, while GPU Boost is raised from the reference 1710 MHz to 1815 MHz.