Thursday, January 9th 2025
AMD Radeon RX 9070 XT Benchmarked in 3D Mark Time Spy Extreme and Speed Way
Although it has only been a few days since the RDNA 4-based GPUs from Team Red hit the scene, it appears that we have already been granted a first look at the 3DMark performance of the highest-end Radeon RX 9070 XT GPU, and to be perfectly honest, the scores largely live up to our expectations, although ray tracing performance disappoints. Unsurprisingly, the thread has since been deleted over at Chiphell, but folks managed to take screenshots in the nick of time.
The specifics reveal that the Radeon RX 9070 XT will arrive with a substantial TBP of around 330 W, as revealed by a FurMark screenshot, which is noticeably higher than previous estimates. With 16 GB of GDDR6 memory and base and boost clocks of 2520 MHz and 3060 MHz, the Radeon RX 9070 XT scored an impressive 14,591 points in Time Spy Extreme and around 6,345 points in Speed Way. Needless to say, the drivers are likely far from mature, so it would not be outlandish to expect a few more points to be squeezed out of the RDNA 4 GPU.

Going by the scores we currently have, the Radeon RX 9070 XT fails to match the Radeon RX 7900 XTX in either test, although it easily exceeds the GeForce RTX 4080 Super in the non-ray-traced Time Spy Extreme run; considering that it is expected to cost less than half the price of the RTX 4080 Super, that is no small feat. In the ray-traced Speed Way test, however, the RX 9070 XT falls noticeably short of the RTX 4080 Super. Interestingly, an admin at Chiphell commented that those planning on grabbing an RTX 50 card should wait, further hinting that the GPU world has "completely changed". Without context, the interpretation of the statement is debatable, but it does suggest that RDNA 4 might pack price-to-performance impressive enough to give mid-range Blackwell a run for its money.
Sources:
Chiphell, @0x22h
95 Comments on AMD Radeon RX 9070 XT Benchmarked in 3D Mark Time Spy Extreme and Speed Way
It was the most well-rounded enthusiast card, sitting right in the middle of the 7900 stack, and a next-gen flagship should meet it both ways. I'm gonna call -20% of what we expect out of it satisfactory. Mature drivers will very likely deliver the biggest comeback we've ever seen.
Paired with a 5800X3D, a reference 7900 XTX does 14,400 GPU points in TSE and 5,800 in SW. If the leaked results are accurate, the 9070 XT would match it in raster and be about 10% faster in RT.
Given the huge disparity in shader count (6,144 for the 7900 XTX vs. a possible 4,096 for the 9070 XT), it's hard to believe the two would achieve equal raster performance.
OTOH, the 7900 XTX has 96 Ray Accelerators, while the 9070 XT probably has 64, though with a 22% higher boost clock. If the SW score is correct, RDNA 4 would show about a 33% per-unit improvement in RT, which is plausible.
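A quick back-of-the-envelope check of that ~33% figure. Every number here is an unverified leak or an assumption from this thread (64 RAs for the 9070 XT, a ~2500 MHz boost for the 7900 XTX), so treat it as a sanity check, not data:

```python
# Normalize each card's Speed Way score by RT unit count and boost clock
# to estimate the per-unit, per-clock change. All inputs are leaked or
# assumed figures from the discussion above.

def per_unit_per_clock(score, rt_units, boost_mhz):
    """Speed Way score per Ray Accelerator per MHz of boost clock."""
    return score / (rt_units * boost_mhz)

# 7900 XTX: 96 Ray Accelerators, ~2500 MHz boost (assumed), ~5,800 in SW
xtx = per_unit_per_clock(5800, 96, 2500)
# Leaked 9070 XT: assumed 64 Ray Accelerators, 3060 MHz boost, 6,345 in SW
n9070 = per_unit_per_clock(6345, 64, 3060)

print(f"per-unit, per-clock gain: {n9070 / xtx - 1:.0%}")  # → 34%
```

That lands within rounding distance of the ~33% claimed above, so the arithmetic at least hangs together if the leaked specs do.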
Let's compare, cheapest AMD vs cheapest Nvidia:
In the Time Spy Extreme leaderboard, the 100th-place 7900 XTX hits 18,830 and the 100th-place 4080 hits 15,714. This "9070" hits 14,591.
The 7900 XT in 100th hits 15,199 in Extreme and 6,020 in Speed Way, just to give you an idea of where this 9070 XT is gonna land, assuming that's what this is. You're right, it IS beyond tiresome how AMD, now on its third RT generation, cannot meaningfully improve their RT performance, to the point that half a dozen Nvidia cards place above it. Being mid-tier doesn't excuse a total lack of improvement. RT isn't going anywhere, that much has become obvious. It's here to stay, like hardware T&L.
You will probably see minimal RT calculation with ML reconstruction doing the rest of the heavy lifting in the future, especially if textures go the ML algorithm route as well.
If a 9070xt has ~64 ray tracing hardware units, and a 7900xtx has 96, they have SIGNIFICANTLY improved RT performance.
Why would you compare scores from leaderboards where cards are obviously being overclocked to an unreleased card we know very little about. Just look at reference card scores… TPU literally has all that data at hand.
If it has 64 units, and IF they run at the same clock speed as a 7900 XTX, and IF this leak is accurate, then yes, it's a significant improvement.
None of that is confirmed. A single rendering error could easily artificially inflate 3dmark scores. The 9070's drivers are not finalized yet, and AMD themselves have said as much.
We DO know, though, that the PS5 Pro uses RDNA 4 RT hardware, and its RT performance has been... lackluster, to say the least. www.techpowerup.com/gpu-specs/radeon-rx-7900-xtx.c3941
Go ahead, show me where that has the 3DMark scores. 3DMark doesn't have a page in the TPU review anywhere I can find. And I chose the bottom of the top-100 list, on cards that are pretty clearly not OCed. If you have a better source for stock card runs than 3DMark themselves, I'd love to see it.
If they can price this right, it will be a massive hit: 4080 performance for under $500, and I might bite.
If the vanilla RX 9070 is the same or at least close in core clocks and VRAM speed, I'll grab that one - since I'm likely to undervolt it and tune it down to 250W anyway.
People tend to really forget that the biggest GPU gaming market is those 4060/3060/1660 owners; just look at the Steam survey.
A moderate price for a capable 9070 XT with way better AI upscaling (FSR4) and more capable RT is definitely interesting.
Here are 3 reviews of the reference card from TT, KG, and ET
AMD is incapable of doing something like improving performance, only Ngreedia can!
/J
It's strange how such concepts cannot be applied to AMD or anyone else by these people.
And of course, when it's a negative trait, anything and everything will be taken as gospel by them.
I am waiting for real reviews from unbiased places before I make any judgments on these gpus, until then, they are simply rumors.
7800 XT ~ 9,500
4070 Super ~ 9,900
4070 Ti Super ~ 11,500
7900 XT ~ 12,000
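To put those rough TSE figures next to the leaked 9070 XT score, a tiny sketch (all scores are the approximate, unverified numbers quoted in this thread):

```python
# Approximate Time Spy Extreme GPU scores quoted in this thread,
# normalized against the leaked 9070 XT result. Unverified figures.
scores = {
    "7800 XT": 9500,
    "4070 Super": 9900,
    "4070 Ti Super": 11500,
    "7900 XT": 12000,
    "9070 XT (leak)": 14591,
}
base = scores["9070 XT (leak)"]
for card, s in scores.items():
    print(f"{card:16s} {s:6,d}  {s / base:5.0%} of leaked 9070 XT")
```

If the leak holds, even the 7900 XT lands at only ~82% of this result.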
Here's that 100th 7900XTX entry for Time Spy Extreme you mentioned compared with my 100% stock MBA model. Mine shows as being bottom 13% of all submitted results:
Those Top 100 scores are in no way representative of a particular model. They merely show best OCing samples.
Before you ask why I cite the two above comments, it's because ray tracing is about as stupid as TressFX was. It's "better" than the results you get from the other guy doing the same calculations...but completely forgets that 99.9% of games that exist now were made before ray tracing was adopted. You're more than welcome to claim you think it looks more realistic...and someone else is more than able to call it crap. Those are not debatable points, only opinions. The truth is that it's a computationally intensive process that doesn't result in linear or better improvements...and thus will be relegated to the dustbin of history exactly like TressFX. The only difference is that Nvidia has clung to their dead horse for longer because AMD has not competed with them, and thus it's always something they win at. It's always easiest to be the best when nobody else competes.
The only fact that matters is the cost-to-performance number this card will eventually have after a proper review... and hopefully it will be priced competitively. Yesterday's performance in RT, today's performance in raster, and yesterday's pricing would be a great boon. That's especially true when today's pricing is highway robbery, and yesterday's hardware provides enough performance for most people today.
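Since cost-to-performance is the only number that will matter, here's a minimal sketch of that comparison. The $500 price is a guess floated in this thread, $999 is the RTX 4080 Super MSRP, and 15,714 is the (overclocked) 100th-place TSE figure quoted earlier, so if anything it flatters the 4080 Super:

```python
# Illustrative points-per-dollar comparison using the leaked/quoted
# numbers from this thread. None of these figures are confirmed.

def points_per_dollar(score, price_usd):
    """Benchmark points per US dollar spent; purely illustrative."""
    return score / price_usd

# Leaked 9070 XT TSE score at the sub-$500 price floated above
amd = points_per_dollar(14591, 500)
# RTX 4080 Super at $999 MSRP, using the OC'd top-100 TSE figure
nv = points_per_dollar(15714, 999)

print(f"9070 XT: {amd:.1f} pts/$, 4080 Super: {nv:.1f} pts/$")
```

Even with the deck stacked in the 4080 Super's favor, the leaked card would deliver nearly double the points per dollar at that hypothetical price.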
A friend of mine recently bought a 4060 on discount, not because it was a good card, but because he has an extremely cramped mITX system with only a 350 W PSU, and I figured the tiny, 115 W 4060 was his best bet. I could be wrong, but it seemed to me to be the smallest, most power-efficient card around.
He's running a 2560x1080 display, so not even 1440p, and yet two of the three games he upgraded for (Indiana Jones, DA: Veilguard, Space Marine II) require him to turn settings down to avoid stuttering caused by VRAM shortages.
So yeah, the 8 GB 9060 cards need to be 20% cheaper than a 12 GB B580. 8 GB wasn't enough for more than 1080p in 2022, and it sure as hell isn't any better in 2025. I'm just as worried about the 5060, but it will sell like hotcakes anyway because people will probably just believe Jensen when he says something like "a 5060 for $349 matches a 4080. The more you buy, the more you save" or some other hand-wavy nonsense. My experience of DLSS FG and RT on 40-series cards is that all those fake-framed RT effects need a boatload more VRAM than the pure raster codepath, and if the 5060 only gets 8 GB, that's not going to go down well.
but if you compare two cards from the same tier, you'll get same-ish pricing: