
NVIDIA GeForce RTX 2080 Founders Edition 8 GB Review


Value and Conclusion

  • The NVIDIA RTX 2080 Founders Edition is available for $799.

Pros:
  • Faster than the GeForce GTX 1080 Ti
  • RTX technology not gimmicky, brings tangible image-quality improvements
  • Deep-learning feature set
  • DLSS an effective new AA method
  • Highly energy efficient
  • Overclocked out of the box
  • Quiet in gaming
  • Backplate included
  • HDMI 2.0b, DisplayPort 1.4, 8K support

Cons:
  • High price
  • No Windows 7 support for RTX, requires the Windows 10 Fall 2018 Update
  • Bogged down by power limits
  • No idle fan-off
  • High non-gaming power consumption (fixable, says NVIDIA)
Our exhaustive coverage of the NVIDIA GeForce RTX 20-series "Turing" debut also includes the following reviews:
NVIDIA GeForce RTX 2080 Ti Founders Edition 11 GB | ASUS GeForce RTX 2080 Ti STRIX OC 11 GB | ASUS GeForce RTX 2080 STRIX OC 8 GB | Palit GeForce RTX 2080 Gaming Pro OC 8 GB | MSI GeForce RTX 2080 Gaming X Trio 8 GB | MSI GeForce RTX 2080 Ti Gaming X Trio 11 GB | MSI GeForce RTX 2080 Ti Duke 11 GB | NVIDIA RTX and Turing Architecture Deep-dive

NVIDIA Turing benchmarks can now finally be published after all the hype, discussion, and drama of the previous weeks. In this review, we looked at the NVIDIA GeForce RTX 2080 Founders Edition, which is the second-fastest Turing product, right behind the GeForce RTX 2080 Ti. Unlike the RTX 2080 Ti, the RTX 2080 uses the TU104 graphics processor, which is smaller and leaner to reach a lower price point. Just like the RTX 2080 Ti, the GeForce RTX 2080 features NVIDIA's full arsenal of new technologies, namely tensor cores for artificial intelligence and RT cores for hardware-accelerated ray tracing.

In terms of performance, the RTX 2080 exceeds the GTX 1080 Ti by 9% at both 1440p and 4K, making it the perfect choice for 1440p gaming, or for 4K when you are willing to sacrifice some detail settings to achieve 60 FPS. Compared to the RTX 2080 Ti, the RTX 2080 is around 30% behind. Compared to the Radeon RX Vega 64, the fastest graphics card AMD has to offer, the performance uplift is 44%.

NVIDIA only made small changes to their Boost 4.0 algorithm compared to what we saw with Pascal. For example, instead of dropping all the way to base clock when the card reaches its temperature target, there is now a grace zone in which clocks drop slowly towards the base clock, which is reached once a second temperature cut-off point is hit. Temperatures of the RTX 2080 Founders Edition are good; at only 72°C under load, the card isn't even close to thermal throttling.
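The two-threshold behavior is easiest to see in code. Here is a minimal Python sketch of the response described above; the clock endpoints are the RTX 2080 FE's rated base clock and our observed peak boost, while the temperature thresholds are illustrative placeholders, not NVIDIA's actual values:

    # Sketch of GPU Boost 4.0's temperature response as described above.
    # Clock endpoints are real RTX 2080 FE figures; the temperature
    # thresholds are illustrative placeholders, not NVIDIA's internal values.
    PEAK_BOOST = 1995    # MHz, highest boost clock we recorded
    BASE_CLOCK = 1515    # MHz, rated base clock of the RTX 2080 FE
    TEMP_TARGET = 84     # deg C, start of the grace zone (placeholder)
    TEMP_CUTOFF = 88     # deg C, where base clock is reached (placeholder)

    def temperature_limited_clock(temp_c: float) -> float:
        if temp_c < TEMP_TARGET:
            return PEAK_BOOST              # heat is not a constraint yet
        if temp_c >= TEMP_CUTOFF:
            return BASE_CLOCK              # second cut-off: base clock
        # Grace zone: ramp down gradually instead of dropping straight
        # to base clock, as described above for Pascal's Boost 3.0.
        t = (temp_c - TEMP_TARGET) / (TEMP_CUTOFF - TEMP_TARGET)
        return PEAK_BOOST - t * (PEAK_BOOST - BASE_CLOCK)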

However, every single Turing card we tested today sits at its power limit the entire time during gaming. This means that the highest boost clocks are never reached during regular gameplay, which is in stark contrast to Pascal, where custom designs were almost always running at peak boost clocks. Just to clarify, the "rated" boost clock on vendor pages is a conservative value that's much lower than the highest reachable boost clock, and lower than what we measured during gaming as well. The rated boost clock for the RTX 2080 FE is 1800 MHz. The peak boost clock we recorded (even if it was active for only a short moment) was 1995 MHz, with the average clock being 1897 MHz. It looks like with Turing, the bottleneck is no longer temperature, but power consumption, or, rather, the BIOS-defined limit for it. Manually adjusting the power limit didn't solve the power-throttling problem, but it did of course provide additional performance, making this the easiest way to increase FPS besides manual overclocking.
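If you want to see this on your own card, clocks and power draw are easy to log. Below is a minimal monitoring sketch using NVIDIA's NVML through the pynvml Python bindings (assumes pynvml is installed; the 98% threshold for flagging the limit is our own arbitrary choice):

    # Log GPU clock and power draw once per second to observe
    # power-limit throttling. Requires: pip install pynvml
    import time
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)            # first GPU
    limit_w = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle) / 1000.0

    try:
        while True:
            mhz = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
            watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0   # mW -> W
            flag = "<- at power limit" if watts >= 0.98 * limit_w else ""
            print(f"{mhz:5d} MHz  {watts:6.1f} / {limit_w:.0f} W  {flag}")
            time.sleep(1.0)
    except KeyboardInterrupt:
        pynvml.nvmlShutdown()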

NVIDIA has once more made significant improvements in power efficiency with their Turing architecture, which delivers roughly 10-15% better performance per watt than Pascal. Compared to AMD, NVIDIA is now almost twice as power efficient and twice as fast at the same time! The red team has some catching up to do, as power draw, along with the heat it generates and the fan noise needed to get rid of it, is now the number one limiting factor in graphics card design.

Our power consumption readings for non-gaming states, like single-monitor and multi-monitor, showed terrible numbers. Multi-monitor power draw especially is a major issue at 40 W, five times that of the GTX 1080. When asked, NVIDIA told us that they are aware of the issue and that it will be fixed in an upcoming driver update. I specifically asked "are you just looking into it, or will it definitely be fixed?", and the answer was that it will definitely be fixed. This update will also reduce fan speed in idle, which will help bring down noise levels. I do wonder why NVIDIA doesn't simply add idle fan-stop to their cards; it's one of the most popular features these days.

Gaming noise levels of the RTX 2080 Founders Edition are comparable to previous-generation Founders Edition cards, which means the cooler has received a long-overdue update: the new cooler handles higher power draw at lower temperatures with similar noise. Still, 35 dBA is not whisper quiet, even though it is very acceptable, especially considering the good performance results and the fact that this is only a dual-slot design. This leaves great opportunity for board partners to design quieter cards; we tested a few of them today with pretty impressive results.

Overclocking has become more complicated once again with this generation. Since the cards are always running into the power limiter, you can no longer just dial in stable clocks for the highest boost state to find the maximum overclock. The biggest issue is that you can't reach that state reliably, so your testing is limited to whatever frequency your test load happens to run at. Nevertheless, we managed to pull through and achieved a decent overclock on our card, which translates into 9% additional real-world performance. Overclocking potential seems quite similar on most cards, with the maximum boost clock being around 2100 MHz and the maximum GDDR6 clock ending up roughly between 1950 and 2050 MHz.
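For those trying anyway, the approach that still works is stepping the clock offset up under a sustained load until instability appears, then backing off. A rough sketch of that loop follows; apply_offset and is_stable are hypothetical placeholders for your tuning tool and stress test:

    # Upward search for a stable core-clock offset. apply_offset() and
    # is_stable() are hypothetical hooks: in practice you would set the
    # offset in a tool such as MSI Afterburner and judge stability by
    # running a stress test while watching for crashes and artifacts.
    def find_max_offset(apply_offset, is_stable, step=15, ceiling=300):
        best = 0
        offset = step
        while offset <= ceiling:
            apply_offset(offset)
            if not is_stable():
                break                  # crashed or artifacted
            best = offset
            offset += step
        apply_offset(best)             # settle on the last known-good offset
        return best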

NVIDIA GeForce RTX doesn't just give you more performance in existing games. It introduces RT cores, which accelerate ray tracing, a rendering technique that can deliver realism that's impossible with today's rasterization rendering. Unlike in the past, NVIDIA's new technology is designed to work with various APIs from multiple vendors (Microsoft DXR, NVIDIA OptiX, Vulkan ray tracing), which will make it much easier for developers to get behind ray tracing. At this time, not a single game has RTX support, but the number of titles that will support it is growing by the day. We had the chance to check out a few demos and were impressed by the promise of ray tracing in games.

I mentioned it before, but just to make sure: RTX will not turn games into fully ray-traced experiences. Rather, the existing rendering technologies will be used to generate most of the frame, with ray tracing adding specific effects, like lighting, reflections, or shadows, for specific game objects that are tagged as "RTX" by the developer. It is up to the game developers which effects to choose and implement; they may go with one or several, as long as they stay within the available performance budget of the RTX engine. NVIDIA clarified to us that games will not just have RTX "on"/"off", but rather, you'll be able to choose between several presets, for example RTX "low", "medium", and "high". Also, unlike GameWorks, developers have full control over what they implement and how. RTX "only" accelerates ray generation, traversal, and hit calculation, which are the fundamentals and the most complicated operations to develop; everything else is up to the developer, so I wouldn't be surprised if we see a large number of new rendering techniques developed over time as studios get more familiar with the technology.
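To make those stages concrete, here is a toy CPU-side Python sketch of ray generation and hit calculation against a single sphere. A real scene traverses an acceleration structure over millions of triangles, which is exactly the work the RT cores offload:

    import math

    # Toy illustration of two of the stages RTX accelerates: ray
    # generation (one camera ray per pixel) and hit calculation
    # (ray/sphere intersection). Traversal is trivial here because
    # there is only one object and no bounding volume hierarchy.

    def generate_ray(px, py, width, height):
        x = 2 * (px + 0.5) / width - 1
        y = 1 - 2 * (py + 0.5) / height
        n = math.sqrt(x * x + y * y + 1)
        return (0.0, 0.0, 0.0), (x / n, y / n, -1 / n)   # origin, direction

    def hit_sphere(origin, direction, center=(0, 0, -3), radius=1.0):
        oc = tuple(o - c for o, c in zip(origin, center))
        b = 2 * sum(d * o for d, o in zip(direction, oc))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4 * c            # a == 1 for a unit direction
        if disc < 0:
            return None                 # miss
        t = (-b - math.sqrt(disc)) / 2
        return t if t > 0 else None     # distance to the hit point

    origin, direction = generate_ray(400, 300, 800, 600)
    print(hit_sphere(origin, direction))  # ~2.0: the sphere is hit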

The second big novelty of Turing is acceleration for artificial intelligence. While it was at first thought that this wouldn't do much for gamers, the company devised a clever new anti-aliasing algorithm called DLSS (Deep Learning Super-Sampling), which utilizes Turing's artificial intelligence engine. DLSS is designed to achieve quality similar to temporal anti-aliasing and to solve some of its shortcomings, while coming with a much smaller performance hit at the same time. We tested several tech demos for this feature and had difficulty telling the difference between TAA and DLSS in most scenes. The difference only became obvious in cases where TAA fails; for example, when it estimates motion vectors incorrectly. Under the hood, DLSS renders the scene at a reduced resolution (roughly half the pixel count; for 4K, that's 2880x1620) and feeds the frame to the tensor cores, which use a predefined deep neural network to enhance that image. For each DLSS game, NVIDIA receives early builds from the game's developers and trains that neural network to recognize common forms and shapes of the models, textures, and terrain to build a "ground truth" database that is distributed through Game Ready driver updates. On the other hand, this means that gamers and developers are dependent on NVIDIA to train that network and provide the data with the driver for new games. Apparently, an auto-update mechanism exists that downloads new neural networks from NVIDIA without the need for a reboot or an update to the graphics card driver itself.
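The arithmetic behind that input resolution is simple; a quick sketch follows (the 0.75 per-axis scale is inferred from the 2880x1620 figure above, as NVIDIA hasn't published exact scaling factors for every mode):

    # DLSS input-resolution math for a 4K target, based on the
    # 2880x1620 figure above. The 0.75 per-axis scale is inferred;
    # NVIDIA has not published exact factors for every mode.
    target_w, target_h = 3840, 2160
    scale = 0.75                                   # per-axis render scale
    render_w, render_h = int(target_w * scale), int(target_h * scale)
    print(render_w, render_h)                      # 2880 1620

    fraction = (render_w * render_h) / (target_w * target_h)
    print(f"{fraction:.0%} of the pixels shaded")  # 56%, roughly half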

At $799 for the Founders Edition and $699 as the baseline price, the GeForce RTX 2080 has a more justifiable price tag than the RTX 2080 Ti, given that the GTX 1080 launched at $699 for the Founders Edition. We still feel the RTX 2080 is overpriced by at least 10%, despite the fact that Turing is "more than a 2017 GPU" on account of its new on-die hardware. Most factory-overclocked custom-design cards could be priced north of $800, which puts them out of reach not just for those looking to upgrade from "Pascal," but also for those coming from "Maxwell" and in actual need of an upgrade. Since the RTX 2080 convincingly beats the GTX 1080 Ti, choosing this card over a GTX 1080 Ti that's hovering at the $700 mark makes abundant sense. Similar leaps in technology in the past did not raise prices to this extent over a single generation. If this is a ploy to get rid of unsold "Pascal" cards, it could backfire for NVIDIA: every "Pascal" customer is one less "Turing RTX" customer for the foreseeable future.
Editor's Choice