The Radeon RX 6900 XT is AMD's return to the fight for the high-end graphics performance throne. The new Radeon flagship is surprisingly similar to the RX 6800 XT, with the biggest difference being the eight additional compute units, which bring the total shader count to 5,120, up from 4,608 on the RX 6800 XT. Theoretically, this would suggest an 11% performance increase. Increasing the CU count also increases the number of TMUs and ray accelerators because these are part of every CU. All the other specifications are identical: same ROPs, clocks, memory, L3 cache, TDP, and cooler. We confirmed with AMD that the L3 cache is running at the same frequency as on the RX 6800 XT. The only other change we noticed is that the GPU voltage circuitry has an additional power phase. Probably the most important aspect that has remained the same is the 300 W TDP, which matters for power efficiency, heat, and noise.
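The 11% figure is simple shader-count arithmetic; a quick sketch makes it explicit (the helper function is ours, the shader counts come from the spec sheets):

```python
# Back-of-the-envelope check of the theoretical shader-count scaling:
# 5,120 shaders (RX 6900 XT) vs. 4,608 (RX 6800 XT), assuming perfect
# scaling at identical clocks, which real games never achieve.

def theoretical_uplift(shaders_new: int, shaders_old: int) -> float:
    """Ideal percentage gain from a larger shader array."""
    return (shaders_new / shaders_old - 1) * 100

print(f"{theoretical_uplift(5120, 4608):.1f}%")  # 11.1% on paper
```

Real-world scaling lands below this ideal because clocks, memory bandwidth, and front-end throughput are all unchanged.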
Averaged over our test suite at 4K resolution, we find the RX 6900 XT 7% faster than the RX 6800 XT. That's a bit less than AMD led us to expect, and there are several important points to make here. Our test suite is 23 games strong, a large mix of games and engines. Obviously, not all of these use modern APIs like DX12 and Vulkan: ten games are based on DX11, ten use DX12, and three are built around the Vulkan API—a realistic mix, I would claim. AMD has traditionally had more driver overhead in DirectX 11 games than NVIDIA, which is the main reason their scores are getting dragged down a bit. Since the overhead is per-frame and the RX 6900 XT delivers more frames than any other Radeon card, this effect is more noticeable than before. Titles to check out for this are Project Cars 3 and Divinity Original Sin 2. Do look through our games and exclude those not relevant for your buying decision—that's why we present all this data. It is likely that a lot of upcoming titles will use DX12, but I'm sure there will be other important games in the future that still run on DirectX 11. I'll use the Christmas holidays to take a closer look at our test suite, drop old games, and add new ones, like Cyberpunk 2077.
I'll also look at test system options. Surely you'll have noticed that we've been reviewing on an aging Core i9-9900K, which doesn't support PCIe Gen 4. On AMD Zen 3 platforms, the new Radeons can also use the resizable BAR feature of PCI-Express to eke out a little bit more performance. That's why this review includes a separate set of performance results—the gold bar shows FPS results on a Zen 3 Ryzen 9 5900X running DDR4-3800, with Infinity Fabric at 1:1 and SAM enabled. This is the best-case scenario for AMD, although I doubt many users out there are running such a configuration, as Zen 3 is sold out everywhere and many gamers have been buying Intel for years. Comparing Intel vs. AMD results at exactly the same settings with the same drivers, the overclocked i9-9900K might still have some steam left in it; I expected more than a 2% difference at 4K.
Smart Access Memory (SAM), AMD's branding of the resizable BAR feature, is a fascinating new technology. AMD went into more detail in their press briefings and made clear that just mapping the whole GPU memory into the CPU's address space will not magically make games run faster. They have been working on game-specific optimizations for SAM and will continue to do so. NVIDIA has confirmed they are working on resizable BAR support, too, just like Intel for their platforms, but AMD has a head start, and all parties involved will certainly learn a lot of new tricks in the coming months. At lower resolutions, our SAM-enabled benchmarks clearly show that SAM adds a little bit of CPU load, which means it can scale negatively, especially in CPU-limited situations (e.g., Anno 1800 and Borderlands 3). In other games, the gains are phenomenal: +6% in Hitman, for example, at GPU-limited 4K Ultra HD.
What's also important for testing is that we made sure to heat up all our cards properly before recording results. If you take a look at the first chart on page 32, you'll see that the cool RX 6900 XT starts out at 2393 MHz, but clocks drop quickly as the card heats up, down to 2293 MHz steady state—100 MHz lower, or 4%. When testing integrated benchmarks, there is no way to preheat the card, so results will be higher. These tests are quite short, too—the card will never reach maximum temperature and won't show the real performance level you'll experience while playing the game for more than five minutes.
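The clock figures above work out as follows; a small sketch using the review's own numbers (the helper name is ours):

```python
# How much clock speed evaporates once the card is warmed up:
# cold start at 2393 MHz, steady state at 2293 MHz (see page 32).

def throttle_loss_pct(cold_mhz: float, steady_mhz: float) -> float:
    """Percentage of clock speed lost between cold start and steady state."""
    return (cold_mhz - steady_mhz) / cold_mhz * 100

print(f"{throttle_loss_pct(2393, 2293):.1f}%")  # ~4%, hence the preheating
```

That roughly 4% gap is exactly what short, integrated benchmarks overstate when run on a cold card.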
Compared to other cards at 4K, we see the RX 6900 XT 2% in front of the RTX 3080—an important win. AMD has achieved the unthinkable: The RX 6900 XT is twice as fast as last generation's RX 5700 XT and more energy efficient. NVIDIA's flagship, the RTX 3090, is still 8% faster. If we consider the Ryzen 9 5900X + SAM data point, that gap shrinks to 6%, which is still not enough to catch the RTX 3090. If you cherry-pick results and exclude the games where NVIDIA wins, you could come to the conclusion that the RX 6900 XT matches or exceeds the RTX 3090—an assessment I'd still not agree with. While rasterization performance is no doubt important, everyone expects raytracing to be the next big thing, and here, it's just no contest. The RTX 3090 delivers much better raytracing performance than the RX 6900 XT; FPS is 45% higher in Metro Exodus and 78% (!) higher in Control. Two games is a small sample size, of course, and all raytracing games so far have been developed primarily on NVIDIA hardware because that's what developers had access to. This has changed now—the new consoles use the same AMD RDNA 2 architecture as the RX 6900 XT, which means game developers will possibly optimize for this architecture first, fine-tuning the amount of raytracing to the hardware capabilities of the new consoles. Whether that can help make up for so much of a performance difference, nobody knows—guess we'll know more in a year or so. NVIDIA also has DLSS, which improves performance through upscaling with minimal loss in image quality; AMD is working on a competitor but had no updates to share when we asked.
We were impressed by the Radeon RX 6800 XT cooling solution, and the RX 6900 XT is no different. AMD was wise not to cheap out on this important component. The heatsink uses a large vapor-chamber base that sucks up heat from the GPU quickly, after which it is dissipated by the three slow-running fans. AMD has once again found the perfect fan settings for their card. At just 30 dBA, the card is nearly whisper quiet while pumping out over 60 FPS at 4K. The Radeon RX 6900 XT is also significantly quieter than NVIDIA's RTX 3090 Founders Edition—oh, how the tables have turned. Noise levels on the AMD reference card are better than on every single RTX 3090 custom design we've tested, and we tested all the important ones. If you want low noise, you have to go AMD. As we have seen in our RX 6800 XT custom design reviews, AMD's reference card is almost too good. The super-low noise levels are very hard for board partners to match, which makes finding a convincing selling point to justify a more expensive cooler difficult. Just like on NVIDIA's GeForce RTX 30-series lineup, idle fan stop is part of the Radeon RX 6000 series, too; the fans shut down completely when the card is idle at the desktop, running productivity applications, or browsing the Internet.
The secret sauce behind the low noise levels is power efficiency. NVIDIA learned this after the GeForce GTX 480, and AMD has finally taken it to heart. With just 300 W in typical gaming, the RX 6900 XT uses only 20 W more than the RX 6800 XT, which leaves both cards essentially tied in efficiency. Compared to NVIDIA's lineup, the new AMD cards are more energy efficient; the RX 6900 XT is 12% more efficient than NVIDIA's RTX 3090, which runs at 366 W, but offers a bit more performance. This 66 W difference is what allowed AMD to make their cooler whisper-quiet even though it isn't as good as the one on the RTX 3090 Founders Edition.
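As a rough cross-check of the efficiency claim, here is a back-of-the-envelope performance-per-watt comparison. The wattages come from this review; the relative performance numbers (RTX 3090 roughly 8% faster at 4K) are approximations from our summary charts, so this is a ballpark, not our measured efficiency metric:

```python
# Rough FPS-per-watt comparison. Performance is normalized to the
# RX 6900 XT = 100; the RTX 3090 is roughly 8% faster at 4K.

def perf_per_watt(relative_perf: float, watts: float) -> float:
    """Normalized performance divided by typical gaming power draw."""
    return relative_perf / watts

rx_6900_xt = perf_per_watt(100, 300)  # 300 W typical gaming
rtx_3090 = perf_per_watt(108, 366)    # 366 W typical gaming
advantage = (rx_6900_xt / rtx_3090 - 1) * 100
print(f"RX 6900 XT is ~{advantage:.0f}% more efficient")  # ~13%, near our 12%
```

The small deviation from the 12% figure comes from the rounded performance inputs; the measured value uses per-game FPS and power data.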
Some recent leaks got people all hyped up because AMD raised the overclocking limit on the RX 6900 XT to 3 GHz. Firstly, I think people should rather ask why there is an artificial limit at all. My manual overclocking topped out at the 2.8 GHz "maximum clock" set in Radeon Settings, which translated into an actual clock frequency of 2434 MHz—pretty similar to the RX 6800 and RX 6800 XT. What's holding the card back is the power limit, which I had to raise before I saw any gains at all. Even at its maximum, the power limit is not high enough; custom design RX 6800 XT cards with a higher power limit are able to achieve very similar performance results. The watercooled ASUS RX 6800 XT STRIX OC even ran 4.5% faster than the RX 6900 XT—both after manual overclocking to the max.
Not sure if VRAM sizes will be a hot topic for the RX 6900 XT vs. RTX 3090 debate. Technically, the RTX 3090 has +50% VRAM, which I doubt will translate into any meaningful performance for the lifetime of the card. On the other hand, NVIDIA is dangling the RTX 3090 in front of professionals and creators as a low-cost alternative to Quadro. A small subset of those might bite at the prospect of 24 GB VRAM, but given NVIDIA's strong foothold in the professional space, I doubt more VRAM on the RX 6900 XT would make a difference for these people. The new consoles offer around 8 GB of VRAM for next-gen titles; highly unlikely we'll see +100% graphics memory usage on PC vs. console.
AMD has set an MSRP of $999 for the RX 6900 XT, which is a lot of money and new territory for AMD. The RX 5700 XT launched at $400, and the Radeon VII at $700. No doubt, AMD looked at NVIDIA's pricing and thought that with the RX 6900 XT faster than the RTX 3080, it is definitely worth more than $700; and with the RTX 3090, which can't be caught, at $1500, they made it $999, a nice three-figure number. If the market were normal, I'd be unsure about how feasible this price point would be. The RTX 3080 is very similar in rasterization performance, better in raytracing, and runs a bit louder due to higher power draw, all for $300 less—most people would probably go for it. The RTX 3090, on the other hand, offers only minimal gains over RTX 3080, but screams "over the top" due to how NVIDIA positions it as the TITAN replacement—the RX 6900 XT is different. When comparing specification sheets, it looks like a small upgrade because most specifications are identical. I'm also not sure if I would be willing to spend an additional $350 over the RX 6800 XT for roughly 10% higher performance. In all fairness, I absolutely wouldn't spend +$800 for the RTX 3090 over the RTX 3080, either.
But, as we all know, the market isn't like that: there is pretty much no stock of the new graphics cards, which has people willing to pay crazy prices just to jump on the Ampere and RDNA 2 train. Scalpers are making a killing and are probably launching their bots as we speak to gobble up what little stock exists. Merchants are jacking up prices because they know they will sell everything. Looking at recent launches from both AMD and NVIDIA, it seems MSRP pricing is a fantasy that holds true only for the first batch, there to impress potential customers, with actual retail pricing ending up much higher. Let's hope that stock levels will improve in the coming weeks and months so people can actually get their hands on these new cards.
Obviously, the "Recommended" award in this context is not for the average gamer. Rather, it means you should consider this card if you have this much money to spend and are looking for an RX 6900 XT or RTX 3090.