Our exhaustive coverage of the NVIDIA GeForce RTX 20-series "Turing" debut also includes the following reviews:
NVIDIA GeForce RTX 2080 Ti Founders Edition 11 GB |
NVIDIA GeForce RTX 2080 Founders Edition 8 GB |
ASUS GeForce RTX 2080 Ti STRIX OC 11 GB |
ASUS GeForce RTX 2080 STRIX OC 8 GB |
MSI GeForce RTX 2080 Gaming X Trio 8 GB |
MSI GeForce RTX 2080 Ti Gaming X Trio 11 GB |
MSI GeForce RTX 2080 Ti Duke 11 GB |
NVIDIA RTX and Turing Architecture Deep-dive
Palit is using a large triple-slot cooler on their GeForce RTX 2080 Super JetStream. The card also comes with a more powerful VRM, now a 10+2-phase design instead of the 8+2 configuration on the Founders Edition. With an out-of-the-box boost clock of 1860 MHz, the Palit Super JetStream is clocked 60 MHz higher than the FE, a medium-sized overclock. Memory isn't overclocked, even though the chips could certainly handle it, as our manual overclocking tests show.
Thanks to its out-of-the-box overclock, the Palit RTX 2080 Super JetStream runs 2% faster when averaged over our test suite at 4K resolution, which isn't a lot, to be honest. With those results, the RTX 2080 is the perfect choice for 1440p gaming, or for 4K when you are willing to sacrifice some detail settings to achieve 60 FPS. Compared to the RTX 2080 Ti, the 2080 Super JetStream is 26% behind. Compared to the Radeon RX Vega 64, the fastest graphics card AMD has on offer, the performance uplift is 47%.
NVIDIA made only small changes to their Boost 4.0 algorithm compared to what we saw with Pascal. For example, instead of dropping all the way to the base clock when the card reaches its temperature target, there is now a grace zone in which clocks step down slowly toward the base clock, which is reached once a second temperature cut-off point is hit. Temperatures of the Palit RTX 2080 are a bit better than the Founders Edition's, by 2°C. Thermal throttling is a complete non-issue on both cards.
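For illustration, here is a minimal Python sketch of a two-threshold scheme like the one described. The temperature target, cut-off point, and exact interpolation are my assumptions for illustration, not NVIDIA's actual firmware behavior:

```python
# Minimal sketch of the Boost 4.0 grace-zone behavior described above.
# The temperature target and cut-off are assumed values, not NVIDIA's
# actual firmware constants; clocks use the Palit card's rated figures.

def boost_clock(temp_c: float,
                boost_mhz: float = 1860.0,   # rated boost of the review card
                base_mhz: float = 1515.0,    # RTX 2080 base clock
                temp_target: float = 83.0,   # assumed temperature target
                temp_cutoff: float = 88.0) -> float:
    """Return the clock target for a given GPU temperature."""
    if temp_c <= temp_target:
        return boost_mhz                     # below target: full boost
    if temp_c >= temp_cutoff:
        return base_mhz                      # past cut-off: base clock
    # grace zone: clocks step down gradually instead of dropping at once
    t = (temp_c - temp_target) / (temp_cutoff - temp_target)
    return boost_mhz - t * (boost_mhz - base_mhz)

for temp in (80, 84, 86, 88):
    print(f"{temp} °C -> {boost_clock(temp):.0f} MHz")
```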
However, every single Turing card we have tested so far sits at its power limit the whole time during gaming. This means the highest boost clocks are never reached during regular gameplay, in stark contrast to Pascal, where custom designs were almost always running at peak boost clocks. It looks like with Turing, the bottleneck is no longer temperature, but power consumption, or rather, the BIOS-defined limit for it. Manually raising the power limit doesn't stop the card from power-throttling, as it simply settles at the new, higher ceiling, but it does provide additional performance, of course, making this the easiest way to increase FPS besides manual overclocking.
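The following toy sketch shows why a power-limited card settles below its peak boost even after the limit is raised. The linear power model, the boost-bin step, and all numbers are crude assumptions for illustration, not measurements:

```python
# Why a power-limited card never reaches peak boost: whenever board power
# would exceed the BIOS-defined limit, the governor backs the clock off,
# so the card ends up sitting at whatever ceiling is configured. The
# linear power model and all constants below are illustrative assumptions.

def settle_clock(power_limit_w: float,
                 peak_boost_mhz: float = 1995.0,  # assumed peak boost bin
                 step_mhz: float = 15.0,          # assumed boost-bin step
                 watts_per_mhz: float = 0.13) -> float:
    """Step the clock down until modeled board power fits under the limit."""
    clock = peak_boost_mhz
    while clock * watts_per_mhz > power_limit_w:
        clock -= step_mhz
    return clock

print(settle_clock(225.0))  # stock limit: settles well below peak boost
print(settle_clock(250.0))  # raised limit: higher clock, but still capped
```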
NVIDIA has once more made significant improvements in power efficiency with their Turing architecture, which delivers roughly 10%–15% better performance per watt than Pascal. Compared to AMD, NVIDIA is now almost twice as power efficient and twice as fast at the same time! The red team has some catching up to do, as power, which turns into heat, which in turn takes fan noise to get rid of, is now the number one limiting factor in graphics card design.
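As a quick sanity check of the efficiency math, performance per watt is simply FPS divided by board power. The numbers below are hypothetical placeholders chosen to mirror the relative standings above, not our measured results:

```python
# Illustrative performance-per-watt arithmetic with hypothetical numbers:
# a card that renders roughly 47% more FPS while drawing less power ends
# up close to twice as efficient.

def perf_per_watt(fps: float, watts: float) -> float:
    return fps / watts

turing = perf_per_watt(fps=100.0, watts=230.0)  # hypothetical figures
vega = perf_per_watt(fps=68.0, watts=295.0)     # hypothetical figures
print(f"relative efficiency: {turing / vega:.2f}x")  # -> about 1.89x
```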
Palit's thermal solution seems to aim at being slightly better than the Founders Edition cooler, though without the high cost, and the card succeeds at that. Temperatures are 2°C better, and noise levels are improved by 1 dBA in gaming and 2 dBA in idle. From a user's perspective, these numbers are too close to make a meaningful difference in daily use, so both coolers should be considered equal. Maybe the idle noise improvements are noticeable in an otherwise quiet system, but I'd rather Palit had added the super-popular idle-fan-stop feature for completely silent operation during idle, Internet browsing, and light gaming.
Overclocking works just like on other Turing cards and is just as complicated. It looks like the silicon lottery didn't hand us the best GPU overclocker, as our results are a few percent below what we've seen from competing cards, but the differences are small. Memory, on the other hand, overclocks better than average.
NVIDIA GeForce RTX doesn't just give you more performance in existing games. It introduces RT cores, which accelerate ray tracing, a rendering technique that can deliver realism that's impossible with today's rasterization rendering. Unlike in the past, NVIDIA's new technology is designed to work with various APIs from multiple vendors (Microsoft DXR, NVIDIA OptiX, and Vulkan ray tracing), which will make it much easier for developers to get behind ray tracing. At this time, not a single game has RTX support, but the number of titles that will support it is growing by the day. We had the chance to check out a few demos and were impressed by the promise of ray tracing in games.
I mentioned it before, but just to make sure: RTX will not turn games into fully ray-traced experiences. Rather, existing rendering technologies will be used to generate most of the frame, with ray tracing adding specific effects, like lighting, reflections, or shadows, for specific game objects that are tagged as "RTX" by the developer. It is up to the game developers which effects to choose and implement; they may go with one or several as long as they stay within the available performance budget of the RTX engine. NVIDIA clarified to us that games will not just have RTX "on"/"off"; rather, you'll be able to choose between several presets, for example RTX "low", "medium", and "high". Also, unlike GameWorks, developers have full control over what they implement and how. RTX "only" accelerates ray generation, traversal, and hit calculation, which are the fundamentals and the most complicated operations to develop; everything else is up to the developer, so I wouldn't be surprised if we see a large number of new rendering techniques developed over time as studios get more familiar with the technology.
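To make those three stages concrete, here is a toy software ray tracer in Python: it generates one primary ray per pixel, "traverses" a scene consisting of a single sphere instead of a real acceleration structure, and shades on hit. This is purely illustrative and has nothing to do with how RT cores execute these stages in hardware:

```python
# Toy CPU walk through the three stages RT cores accelerate: ray
# generation, traversal (a single-sphere test standing in for a BVH),
# and hit shading. Real RTX work runs in hardware via DXR/OptiX/Vulkan.
import math

def intersect_sphere(origin, direction, center, radius):
    """Return distance along the ray to the sphere, or None on a miss."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c          # direction assumed normalized (a = 1)
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

width, height = 32, 16
for y in range(height):
    row = ""
    for x in range(width):
        # ray generation: one primary ray per pixel from a pinhole camera
        u = (x + 0.5) / width * 2.0 - 1.0
        v = 1.0 - (y + 0.5) / height * 2.0
        d = [u, v, 1.0]
        norm = math.sqrt(sum(k * k for k in d))
        d = [k / norm for k in d]
        # traversal + hit calculation: shade on hit, background on miss
        hit = intersect_sphere([0, 0, 0], d, [0, 0, 3], 1.0)
        row += "#" if hit else "."
    print(row)
```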
The second big novelty of Turing is acceleration for artificial intelligence. While it was at first thought that this won't do much for gamers, the company devised a clever new anti-aliasing algorithm called DLSS (Deep Learning Super-Sampling), which utilizes Turing's artificial intelligence engine. DLSS is designed to achieve quality similar to temporal anti-aliasing (TAA) and to solve some of its shortcomings, while coming with a much smaller performance hit at the same time. We tested several tech demos of the feature and had difficulty telling the difference between TAA and DLSS in most scenes. The difference only became obvious in cases where TAA fails, for example when it estimates motion vectors incorrectly. Under the hood, DLSS renders the scene at a lower resolution (roughly half the pixel count; 2880x1620 for 4K output) and feeds the frame to the tensor cores, which use a predefined deep neural network to enhance that image. For each DLSS game, NVIDIA receives early builds from the game developers and trains the neural network to recognize common forms and shapes of the models, textures, and terrain to build a "ground truth" database that is distributed through Game Ready driver updates. On the other hand, this means that gamers and developers are dependent on NVIDIA to train that network and ship the data with the driver for new games. Apparently, an auto-update mechanism exists that downloads new neural networks from NVIDIA without the need for a reboot or an update to the graphics card driver itself.
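The resolution arithmetic is easy to verify: at 2880x1620, the internal render target is 75% of 4K per axis, which works out to a bit more than half the pixel count:

```python
# Back-of-the-envelope DLSS input-resolution arithmetic for the 4K
# example above: 2880x1620 is 75% of 3840x2160 per axis, i.e. roughly
# half the pixels actually rendered before the tensor cores upscale.
native = (3840, 2160)
internal = (2880, 1620)
scale = internal[0] / native[0]
pixel_ratio = (internal[0] * internal[1]) / (native[0] * native[1])
print(f"linear scale: {scale:.2f}, pixel count: {pixel_ratio:.0%} of native")
# -> linear scale: 0.75, pixel count: 56% of native
```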
With a price of $850, the Palit RTX 2080 Super JetStream is $50 more expensive than the NVIDIA Founders Edition, and I don't think that's justified. Yes, it has an overclock out of the box, a more powerful VRM, and a slightly better cooler, but overall, the differences are too small to justify much of a price increase over the Founders Edition. The NVIDIA Founders Edition is a flashy and costly design; especially its thermal solution drives up the price quite a bit. Palit is using their own cooler, which performs a bit better than the NVIDIA version and is no doubt much cheaper to fabricate. That's why I think the Palit RTX 2080 Super JetStream is a bit too expensive at its current price; a more realistic price point would be $825 or so.