Thursday, February 13th 2025

AMD Radeon RX 9070 XT GPU Specs Spotted in Leaked GPU-Z Screenshot
AMD's Radeon RX 9070 GPU series is due for release next month; a specific date has not been set, but we will likely find out more through official channels at the end of this month. Team Red and its board partners have chosen to remain silent on the subject of RDNA 4's technical makeup; post-CES 2025, hardware news outlets have relied on a steady trickle of Radeon RX 9070 XT and RX 9070-related leaks. Very basic sleuthing pointed to 16 GB VRAM pools for both models, while insiders kept mentioning an unannounced "Navi 48" GPU. The latest leak—courtesy of HKEPC—seems to confirm that the Radeon RX 9070 XT will utilize the aforementioned new RDNA 4 Navi chip. Earlier today, a screenshot was uploaded to social media—the leaker shared graphics card information displayed in a TechPowerUp GPU-Z (v2.62) session. Despite its patch notes not disclosing compatibility, the latest version of GPU-Z is seemingly able to identify key aspects of the alleged "RX 9070 XT (Navi 48)" sample.
The card's name is obscured, but HKEPC and several press outlets believe that it is the genuine article. The fundamental details appear to be: 16 GB of GDDR6 VRAM (Hynix-made), a 256-bit memory bus, 4096 stream processors, and a boost clock of up to 3.1 GHz. Older leaks have indicated that the first wave of RDNA 4 cards will make do with PCI-Express 4.0 x16 interfaces, but the GPU-Z screenshot shows a PCI-Express 5.0 x16 bus interface (detection could be bugged). The driver version was identified as Adrenalin 24.30.01.05. The unnamed card appears to feature a steep factory overclock; industry experts reckon that the sample could be a very high-end AIB model. Past reports suggest that PowerColor's Radeon RX 9070 XT Red Devil card is capable of boosting up to 3060 MHz. HKEPC uploaded another screenshot showcasing performance results produced by Capcom's Monster Hunter Wilds PC benchmark tool. The test system—featuring Intel's Core Ultra 9 285K CPU and 48 GB of RAM—scored 36102 points and achieved a maximum frame rate of 211.71 FPS at 1080p with "Very High" profile settings. The leaker confirmed that FSR and Frame Generation were enabled during the benchmark session.
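For a sense of scale, the leaked figures imply the following theoretical throughput. This is a rough sketch: the two FP32 ops per stream processor per clock and the 20 Gbps GDDR6 data rate are assumptions based on prior RDNA parts, not confirmed specs.

```python
# Rough theoretical numbers implied by the leaked GPU-Z readout.
# Assumes 2 FP32 ops (one FMA) per stream processor per clock, as on
# prior RDNA parts; the GDDR6 data rate (20 Gbps) is a guess, not leaked.
stream_processors = 4096
boost_clock_ghz = 3.1
peak_tflops = stream_processors * 2 * boost_clock_ghz / 1000

bus_width_bits = 256
assumed_gbps_per_pin = 20
bandwidth_gb_s = bus_width_bits * assumed_gbps_per_pin / 8

print(f"Peak FP32: {peak_tflops:.1f} TFLOPS")   # ~25.4 TFLOPS at the leaked boost clock
print(f"Bandwidth: {bandwidth_gb_s:.0f} GB/s")  # 640 GB/s, assuming 20 Gbps modules
```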
Sources:
HKEPC Tweet, VideoCardz, Wccftech
31 Comments on AMD Radeon RX 9070 XT GPU Specs Spotted in Leaked GPU-Z Screenshot
Not bad.
The 5700X3D will give numbers a bit lower than a 285K would, but at least there'll be an idea.
oh and independent benchmark results, don't fall for manufacturer lies
All speculation, of course, off one game. Just fun trying to see if you can figure it out before the launch, I guess.
Rendering resolution is 626p. With TPU's game suite the 285K from the leak was 6% faster than the 5800X3D in 720p. With that assumption, a stock 7900xtx would be some 13% faster than a supposed top 9070xt here.
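That normalization chain can be written out explicitly. A back-of-envelope sketch using only the percentages assumed in the comment above (the ~6% CPU delta and ~13% 7900 XTX advantage are the poster's estimates, not measured data):

```python
# Hypothetical normalization using the comment's own assumptions.
leaked_fps = 211.71   # leaked 1080p max FPS on the Core Ultra 9 285K system
cpu_delta = 1.06      # assumption: 285K ~6% faster than the 5800X3D at low res
xtx_advantage = 1.13  # assumption: stock 7900 XTX ~13% faster than this card

fps_on_5800x3d = leaked_fps / cpu_delta
xtx_estimate = fps_on_5800x3d * xtx_advantage
print(f"~{fps_on_5800x3d:.0f} fps on a 5800X3D; 7900 XTX estimate ~{xtx_estimate:.0f} fps")
```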
7900 XTX is 192 ROPs with 384 TMUs, so that's 2 TMUs per ROP.
A theoretical 192 ROP 9090 XTX (or whatever it may be called) would be a massive 768 TMUs, which should be able to destroy the RTX 5090 in pure rasterization horsepower but would require more energy to do so.
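The ratio arithmetic in the two posts above, spelled out (the 192-ROP flagship and its doubled TMU-per-ROP ratio are the poster's hypotheticals, not known specs):

```python
# Navi 31 (7900 XTX) shipping configuration, per the comment above.
rops_n31, tmus_n31 = 192, 384
tmus_per_rop = tmus_n31 // rops_n31         # 2 TMUs per ROP on RDNA 3

# Hypothetical flagship keeping 192 ROPs but doubling the TMU ratio to 4:
hypothetical_tmus = 192 * tmus_per_rop * 2  # 768 TMUs, as the post suggests
print(tmus_per_rop, hypothetical_tmus)      # 2 768
```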
1080p High preset implies FSR Balanced. FG on: 37984 points with 223.00 fps average.
1080p "High" with FSR Quality, FG on: 37335 pts with 219.06 fps avg.
1080p "High" with FSR Native AA, FG on: 33981 pts with 199.10 fps avg.
1080p "High" with no FSR and FG: 38800 pts with 113.51 fps avg.
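Taking those runs at face value, the relative frame-rate cost of each FSR mode versus Balanced works out as below (numbers copied from the posts; the no-FSR run is excluded since frame generation was off there, so it isn't comparable). A quick sketch, not a controlled benchmark:

```python
# Frame rates quoted in the comment's 1080p "High" + frame-generation runs.
runs = {
    "FSR Balanced":  223.00,
    "FSR Quality":   219.06,
    "FSR Native AA": 199.10,
}
baseline = runs["FSR Balanced"]
deltas = {mode: fps / baseline - 1 for mode, fps in runs.items()}
for mode, delta in deltas.items():
    print(f"{mode}: {delta:+.1%} vs Balanced")
# Quality costs under 2% versus Balanced; Native AA costs roughly 11%.
```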
Real news would be confirmation of the vanilla 9070. Most leaks and rumours suggest a 3584 shader config, but a news article here last week suggested that both the XT and vanilla were getting 4096 shaders which was rather controversial.
This is why most clusters are 1792-2048sp; nVIDIA's are currently 1536sp because they buff up certain other ops to look good, though I'd argue we don't need them at such a high ratio.
IMO 1920sp is optimal, but it's asymmetrical (15x128 instead of the more symmetrical 14 or 16). 7800xt was this configuration. N48 is most likely 2048 (for the extra TMU/RT ratio).
Given it can clock high and is bandwidth-limited, there's no reason to have the extra ROPs.
This was made apparent by the stock clock. As I've said, 8192sp (if not ROP-limited) would be limited to ~2720 MHz @ 20 Gbps.
If limited by 64 ROPs, or around the low ~75xx-sp range, you get a clock not unlike that of the 9070 XT (2970 MHz). Don't worry about the ROP limitation that much, as extra compute can be used for things outside raster.
TMUs can be used for many things...in this case it is clearly for BVH/RT. One does not need more than 1 RT unit per core, but it can be buffed up through the use of a TMU. AMD said ~30%, IIRC.
If I'm not mistaken, this is already used on the XSX (4 ops/CU). FWIW, nvidia does this (what the extra TMUs will do) with fixed-function hw (which is probably not unlike a TAU).
Hope I explained that well enough to make sense. You can read more about it here, if it helps. I wouldn't expect miracles from this chip (as it's mid-range), but probably good for 'low' RT like the 7900xtx.
Essentially 2/3 the chip of N31, but ~30% better RT through extra TMUs, with the clock-speed difference closing the gap. Yeah...yeah...Pretty much exactly that...in theory.
As for the 'ideal' for RT, you can clearly see with nVIDIA it is 8 clusters (where the 4080/5080 are 7), and they are very purposely withholding this (probably until Rubin). AMD likely until the 7900xtx successor.
Think 'high-RT' 1080p->4k upscaled, or 'normal (in some new games)/low (in games with a low/high mode)' RT 1440p->4k upscaled.
This is why you shouldn't invest in RT rn, unless you want to upscale games that require it and/or have a 'low' mode from ~1080p...and even that might be iffy moving forward (bc of 12/16GB RAM limitations).
With AMD we won't know until we (meaning probably I) thoroughly scrutinize the actual real-world capability of the TMUs on N48 wrt RT. Probably safe to assume a ~7900xtx with 2x TMUs would also be good.
I am very curious whether we can get really good RT on (3nm) 192-bit, or whether it will require 256-bit. For nVIDIA it should be possible w/ high clocks (6 clusters), but they may again inch it along to force an upsell.
All the more reason to wait/buy the 3nm 192-bit/256-bit chips, where I would assume if AMD doesn't match nVIDIA per clock (and units), they will likely make up for it (if not exceed it) w/ clockspeed.