Our exhaustive coverage of the NVIDIA GeForce RTX 20-series "Turing" debut also includes the following reviews:
NVIDIA GeForce RTX 2080 Ti Founders Edition 11 GB |
ASUS GeForce RTX 2080 Ti STRIX OC 11 GB |
ASUS GeForce RTX 2080 STRIX OC 8 GB |
Palit GeForce RTX 2080 Gaming Pro OC 8 GB |
MSI GeForce RTX 2080 Gaming X Trio 8 GB |
MSI GeForce RTX 2080 Ti Gaming X Trio 11 GB |
MSI GeForce RTX 2080 Ti Duke 11 GB |
NVIDIA RTX and Turing Architecture Deep-dive
MSI's GeForce RTX 2080 Ti Gaming X Trio (what a name) is the highest-clocked RTX 2080 Ti custom-design variant out there. Whereas the reference design has a 1545 MHz boost clock and the Founders Edition runs at 1635 MHz, the Trio is rated for 1755 MHz. We measured the average clock frequency in our test suite, and it ended up at 1926 MHz, which is about 100 MHz higher than the 1824 MHz we saw on the Founders Edition. When looking at FPS in 4K, averaged over all our games, this clock increase translates into a 6% performance increase for the Gaming X Trio, making it the fastest RTX 2080 Ti card we have tested so far.
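For those curious how the clock delta lines up with the FPS delta, here is a quick back-of-the-envelope check using only the measured averages quoted above:

```python
# Quick sanity check of the clock-to-performance relationship quoted above.
trio_avg_clock = 1926       # MHz, measured average across our test suite
fe_avg_clock   = 1824       # MHz, Founders Edition average

clock_gain = trio_avg_clock / fe_avg_clock - 1
print(f"{clock_gain:.1%}")  # ~5.6% higher clocks, in line with the ~6% FPS gain at 4K
```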
With those performance numbers, the card is an excellent choice for 4K 60 FPS gaming at the highest details. The card is also more than twice as fast as the Radeon RX Vega 64, the fastest card AMD can offer. Compared to the RTX 2080, the performance uplift is 35%.
NVIDIA has made only small changes to their Boost 4.0 algorithm compared to what we saw with Pascal. For example, instead of dropping all the way to the base clock when the card reaches its temperature target, there is now a grace zone in which clocks drop slowly towards the base clock, which is reached once a second temperature cut-off point is hit. Temperatures of the MSI RTX 2080 Ti Gaming X Trio are good, with only 74°C under load and 75°C after manual overclocking.
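As a rough illustration of that grace zone, here is a minimal sketch assuming a simple linear ramp between two hypothetical temperature points; the thresholds and the base clock used below are placeholder assumptions, not NVIDIA's actual Boost 4.0 parameters:

```python
def boosted_clock(temp_c, boost_mhz=1755, base_mhz=1350,
                  temp_target_c=84, temp_cutoff_c=88):
    """Toy model of a Boost-style grace zone. The temperature thresholds and
    base clock are assumed placeholder values, not NVIDIA's real parameters."""
    if temp_c <= temp_target_c:
        return boost_mhz                                  # below target: full boost
    if temp_c >= temp_cutoff_c:
        return base_mhz                                   # past the cut-off: base clock
    # inside the grace zone: ramp down linearly instead of dropping straight to base
    span = temp_cutoff_c - temp_target_c
    return boost_mhz - (temp_c - temp_target_c) / span * (boost_mhz - base_mhz)

for t in (80, 85, 86, 90):
    print(t, round(boosted_clock(t)))
```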
However, just like every other Turing card we tested today, the MSI Gaming X Trio will sit at its power limit almost all the time during gaming. This means that the highest boost clocks are never reached during regular gameplay, which is in stark contrast to Pascal, where custom designs were almost always running at peak boost clocks. MSI has installed two 8-pin and one 6-pin power inputs on their card, which is in theory good for 450 watts of power draw. In reality, the Gaming X Trio uses "only" 307 W on average, with peaks reaching around 360 W. This is because the BIOS is configured for a 300 W power limit, which can be manually increased to 330 W. Why not all the way to 450 W? Actually, it might be a good thing that power draw isn't that high, since the cooler would otherwise be overwhelmed. Clearly, MSI is winking at the volt-modding community.
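For reference, the 450 W figure is simply the sum of the standard connector ratings plus the PCI-Express slot; a quick sketch of that arithmetic:

```python
# Standard PCI-Express power delivery ratings (per specification)
PCIE_SLOT_W = 75      # motherboard slot
SIX_PIN_W   = 75      # one 6-pin connector
EIGHT_PIN_W = 150     # one 8-pin connector

theoretical_w = PCIE_SLOT_W + SIX_PIN_W + 2 * EIGHT_PIN_W
print(theoretical_w)                      # 450 W of theoretical headroom
print(f"{307 / theoretical_w:.0%}")       # the measured 307 W average uses only ~68% of it
```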
NVIDIA has once more made significant improvements in power efficiency with their Turing architecture, which offers roughly 10%–15% better performance per watt compared to Pascal. Compared to AMD, NVIDIA is now almost twice as power efficient and twice as fast at the same time! The red team has some catching up to do, as power draw, which generates heat, which in turn requires fan noise to get rid of, is now the number one limiting factor in graphics card design.
Fan noise during gaming is alright at 36 dBA, which is just 1 dBA quieter than the Founders Edition. To be honest, I expected lower noise from such an enormous cooler, but it seems all that power, which gets turned into heat, has to be dissipated by the cooler somehow. Basically, the trade-off here is between higher performance and lower noise.
Overclocking the MSI RTX 2080 Ti Gaming X Trio is more complicated than before, as it is on all Turing cards, and we gained an additional 6.7% in performance. Overclocking potential is similar to that of the other Turing cards we tested today, perhaps with the memory being a bit on the lower side, but the differences are small, with most cards reaching around 2100 MHz on the GPU and between 1950 and 2050 MHz on the memory.
NVIDIA GeForce RTX doesn't just give you more performance in existing games. It introduces RT cores, which accelerate ray tracing, a rendering technique that can deliver realism that's impossible with today's rasterization rendering. Unlike in the past, NVIDIA's new technology is designed to work with various APIs from multiple vendors (Microsoft DXR, NVIDIA OptiX, Vulkan RT), which will make it much easier for developers to get behind ray tracing. At this time, not a single game has RTX support, but the number of titles that will support it is growing by the day. We had the chance to check out a few demos and were impressed by the promise of ray tracing in games. I mentioned it before, but just to make sure: RTX will not turn games into fully ray-traced experiences.
Rather, existing rendering technologies will be used to generate most of the frame, with ray tracing adding specific effects, like lighting, reflections, or shadows, for specific game objects that are tagged as "RTX" by the developer. It is up to the game developers which effects to choose and implement; they may go with one or several as long as they stay within the available performance budget of the RTX engine. NVIDIA clarified to us that games will not just have RTX "on"/"off"; rather, you'll be able to choose between several presets, for example RTX "low", "medium", and "high". Also, unlike GameWorks, developers have full control over what they implement and how. RTX "only" accelerates ray generation, traversal, and hit calculation, which are the fundamentals and the most complicated operations to develop; everything else is up to the developer, so I wouldn't be surprised if we see a large number of new rendering techniques developed over time as studios get more familiar with the technology.
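To make that division of labor concrete, here is a minimal Python sketch of the hybrid approach; the scene types, function names, and preset contents are hypothetical placeholders, not actual DXR, OptiX, or Vulkan RT API calls:

```python
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    name: str
    rtx_tagged: bool = False          # the developer decides which objects get RT effects

@dataclass
class Scene:
    objects: list = field(default_factory=list)

# Presets instead of a single on/off switch, as described above (contents are made up)
RTX_PRESETS = {
    "low":    ["reflections"],
    "medium": ["reflections", "shadows"],
    "high":   ["reflections", "shadows", "lighting"],
}

def rasterize(scene):
    # Stand-in for the classic raster pipeline that still produces most of the frame
    return [f"raster:{obj.name}" for obj in scene.objects]

def trace_effect(obj, effect):
    # Stand-in for what RT cores accelerate: ray generation, traversal, hit calculation
    return f"raytraced:{effect}:{obj.name}"

def render_frame(scene, preset="medium"):
    frame = rasterize(scene)
    for obj in scene.objects:
        if not obj.rtx_tagged:
            continue                                   # untagged objects stay raster-only
        for effect in RTX_PRESETS[preset]:
            frame.append(trace_effect(obj, effect))    # ray tracing adds the chosen effects
    return frame

scene = Scene([SceneObject("sports_car", rtx_tagged=True), SceneObject("skybox")])
print(render_frame(scene, preset="high"))
```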
The second big novelty of Turing is acceleration for artificial intelligence. While it was at first thought that it wouldn't do much for gamers, the company devised a clever new anti-aliasing algorithm called DLSS (Deep Learning Super-Sampling), which utilizes Turing's artificial intelligence engine. DLSS is designed to achieve quality similar to temporal anti-aliasing and to solve some of its shortcomings, while coming with a much smaller performance hit at the same time. We tested several tech demos of this feature and had difficulty telling the difference between TAA and DLSS in most scenes. The difference only became obvious in cases where TAA fails; for example, when it estimates motion vectors incorrectly. Under the hood, DLSS renders the scene at a lower resolution (typically 50%, so 2880x1620 for 4K) and feeds the frame to the tensor cores, which use a predefined deep neural network to enhance that image. For each DLSS game, NVIDIA receives early builds from the game developers and trains that neural network to recognize common forms and shapes of the models, textures, and terrain to build a "ground truth" database that is distributed through Game Ready driver updates. On the other hand, this means that gamers and developers are dependent on NVIDIA to train that network and provide the data with the driver for new games. Apparently, an auto-update mechanism exists that downloads new neural networks from NVIDIA without the need for a reboot or an update to the graphics card driver itself.
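Here is a quick sketch of the resolution math behind that 4K example; the 75% per-axis factor is simply what reproduces the 2880x1620 figure quoted above (roughly half the pixel count), and the actual factor NVIDIA uses per game or preset may differ:

```python
def dlss_render_resolution(target_w, target_h, axis_scale=0.75):
    # 0.75 per axis reproduces the 2880x1620 example for 4K quoted above;
    # the real scaling factor is NVIDIA's choice and may vary per title.
    return int(target_w * axis_scale), int(target_h * axis_scale)

w, h = dlss_render_resolution(3840, 2160)
pixel_fraction = (w * h) / (3840 * 2160)
print(w, h)                          # 2880 1620
print(f"{pixel_fraction:.0%}")       # ~56% of the full 4K pixel count is rendered,
                                     # then enhanced by the tensor-core network
```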
With a price of $1,250, the MSI RTX 2080 Ti Gaming X Trio is not cheap. Remember when we were shocked about Titan pricing above $1K? Guess those times are over. If you are in the market for an RTX 2080 Ti and are considering the Founders Edition for $1,200, then the RTX 2080 Ti Gaming X Trio from MSI should be a no-brainer. For just $50 more, you get a much better cooler, more performance, idle fan stop, a better VRM, and a little less gaming noise. The big question is, can you fit this monstrosity of a graphics card into your case?