NVIDIA's new GeForce GTX Titan X is built on the same Maxwell architecture that impressed us in cards like the GTX 970 and GTX 980, but at its heart sits a brand-new graphics processor, the GM200. Like the GM204, it is built on a 28-nanometer process, yet it is scaled up by roughly 50% in terms of component counts. The GTX Titan X uses every shader physically present on the GM200 silicon, making this a "have it all for $999" product and the fastest single-GPU card ever made. In our testing, it delivers impressive performance, 33% faster than the GTX 980, and, being a single GPU, it does not have to rely on game and driver support the way a GTX 970 SLI setup would. I've still included GTX 970 SLI in our test results as a more cost-optimized alternative; it actually ends up 5% faster than the Titan X, even with titles that don't scale in SLI factored in. AMD's R9 290X is nowhere in sight, trailing the Titan X by about 30%.
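To put those percentages in one frame of reference, here is the arithmetic they imply, with the GTX 980 normalized to 1.00 (an illustrative sketch of the review's own numbers, not additional benchmark data):

```python
# Relative performance implied by the percentages quoted above
# (illustrative arithmetic only, not separate benchmark results).
gtx_980 = 1.00                # baseline
titan_x = gtx_980 * 1.33      # Titan X is 33% faster than the GTX 980
gtx_970_sli = titan_x * 1.05  # GTX 970 SLI ends up 5% faster than the Titan X
r9_290x = titan_x / 1.30      # Titan X leads the R9 290X by about 30%

print(round(titan_x, 2))      # 1.33
print(round(gtx_970_sli, 2))  # 1.4
print(round(r9_290x, 2))      # 1.02
```

In other words, two GTX 970s in SLI land around 40% above a single GTX 980 when scaling cooperates, while the R9 290X sits roughly on par with the GTX 980.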
Single-GPU gaming at 4K is finally possible with the Titan X. The card has enough horsepower to drive all our tested games at more than 30 FPS at 4K resolution. Pair it with a 4K G-Sync monitor and you'll have a stutter-free experience that looks awesome. Why is that so great? Because many games today do not support multi-GPU setups well; forums are full of people who report no scaling, negative scaling, rendering errors, and so on. With the Titan X, there will be no such drama, so just enjoy the game.
The Titan X's cooling holds it back, though. NVIDIA's choice to reuse the original Titan cooler does not seem optimal: in demanding games, the card sits at its 84°C temperature limit almost all the time, which causes Boost clocks to drop. I also wonder why NVIDIA chose such a low temperature limit when a higher one would clearly have made a performance difference.
Another shortcoming of the cooler, which looks fantastic with its black powder-coated finish by the way, is fan noise. While not terribly noisy, the card definitely emits more noise than NVIDIA's other recent releases, roughly matching the Radeon R9 290X's noise output. It was the wonderful Maxwell architecture that brought huge efficiency gains, allowing coolers to be quieter yet still powerful enough to keep temperatures in check. NVIDIA's board partners capitalized on that with fantastic cards such as the ASUS GTX 980 STRIX and MSI GTX 980 Gaming, which are not only very fast, but also extremely quiet even under full gaming load, making them the perfect choice for low-noise gamers. The Titan X cannot compete with that. It almost looks as though NVIDIA's thermal engineers are stuck in the days of the original Titan: there is also no idle-fan-off feature, another recent addition made possible by Maxwell and implemented on many GTX 960/970/980 cards, but not on the Titan X. In terms of noise levels, the Titan X disappoints.
Oh, and there is no backplate, either. Why? According to NVIDIA, GTX 980 SLI users didn't realize they had to remove the small black plate from the back of the card to maximize airflow in SLI, so we are back to no backplate at all.
NVIDIA has equipped the GTX Titan X with a massive 12 GB framebuffer, the biggest of any single-GPU consumer card ever released. But is bigger better? We took a look at the memory consumption of today's games, and most are perfectly happy with just 2-3 GB; even the original Titan offered twice that two years ago, so I'm not sure the future-proofing argument really holds. Today, not a single game in our test suite comes close to using that much memory, which leads me to conclude that 8 GB of VRAM would have been the smarter choice. Of course, you can find or construct scenarios in which 12 GB of VRAM are useful, probably in compute applications, but for the vast majority of users, that much memory provides absolutely no benefit.
The Titan X's power efficiency is fantastic; the card sits near the top of our performance-per-watt charts, topped only by the GTX 980. The GTX 970 is already less efficient, and AMD's cards are far behind at roughly half the Titan X's efficiency.
The Titan X is the first NVIDIA Titan without full double-precision performance enabled, unlike the original Titan, Titan Black, and Titan Z. If you ask me, that's completely irrelevant to you, me, or anyone else reading this review; I don't know of a single consumer application that makes use of double precision. Sure, some researchers and large corporations run custom CUDA code that benefits from it, but they usually have the funds to buy Tesla cards.
Priced at $999, the GTX Titan X follows the original Titan's pricing model. I find the price way too high, but I doubt anyone will be surprised by it nowadays. NVIDIA has the fastest card and is charging accordingly, and they'll still sell a ton of them.