Thursday, January 9th 2025

AMD Radeon RX 9070 XT Benchmarked in 3D Mark Time Spy Extreme and Speed Way

Although it has only been a few days since the RDNA 4-based GPUs from Team Red hit the scene, it appears that we have already been granted a first look at the 3DMark performance of the highest-end Radeon RX 9070 XT GPU, and to be perfectly honest, the scores seemingly live up to our expectations, albeit with disappointing ray tracing performance. Unsurprisingly, the thread has been erased over at Chiphell, but folks have managed to take screenshots in the nick of time.

The specifics reveal that the Radeon RX 9070 XT will arrive with a massive TBP in the range of 330 watts, as revealed by a FurMark snap, which is substantially higher than previous estimates. With 16 GB of GDDR6 memory, along with base and boost clocks of 2520 and 3060 MHz, the Radeon RX 9070 XT managed to rake in an impressive 14,591 points in Time Spy Extreme, and around 6,345 points in Speed Way. Needless to say, the drivers are likely far from mature, so it is not outlandish to expect a few more points to get squeezed out of the RDNA 4 GPU.
Regarding the scores we currently have, it appears that the Radeon RX 9070 XT fails to match the Radeon RX 7900 XTX in both tests, although it easily exceeds the GeForce RTX 4080 Super in the non-ray-traced Time Spy Extreme test, which is no small feat considering it costs less than half as much. In the ray-traced Speed Way test, however, the RX 9070 XT falls noticeably short of the RTX 4080 Super. Interestingly, an admin at Chiphell commented that those planning on grabbing an RTX 50 card should wait, further hinting that the GPU world has "completely changed". Considering the lack of context, the interpretation of the statement is debatable, but it does seem RDNA 4 might pack impressive price-to-performance that could give mid-range Blackwell a run for its money.
Sources: Chiphell, @0x22h

95 Comments on AMD Radeon RX 9070 XT Benchmarked in 3D Mark Time Spy Extreme and Speed Way

#26
DaemonForce
docnorthMaybe a comparison to 7900 XT would be more useful :confused:.
GPU-Z appears to mis-identify it as the 7800 XT, so maybe that's the real comparison here. Either way, a 7900 XT comparison, I'm all for it.
It was the most well-rounded enthusiast card, right in the middle of the 7900 stack, and a next-gen flagship should meet it both ways.
GGforeverthe drivers are likely far from mature, so it is not outlandish to expect a few more points to get squeezed out of the RDNA 4 GPU
I'm gonna go with -20% of what we expect out of it for now. Mature drivers will very likely claw back the biggest gains we've ever seen.
Posted on Reply
#27
_roman_
I think we should stick to the topic and skip the "Ngreedia" name-calling. AMD is also not cheap or affordable.
Posted on Reply
#28
QuietBob
I'll just copy my comment from another thread; I didn't notice this news piece :rolleyes:

Paired with a 5800X3D a reference 7900XTX does 14,400 GPU Marks in TSE and 5,800 in SW. If the leaked results were true, the 9070XT would be the same in raster and 10% faster in RT.

Given the huge disparity in shader count -- 6,144 for the 7900XTX vs possible 4,096 for the 9070XT -- it's hard to believe the two would achieve equal raster performance.

OTOH, the 7900XTX has 96 Ray Accelerators, while the 9070XT probably has 64, though with a 22% higher boost clock. If the SW score is correct, RDNA4 would show about a 33% improvement in RT, which is plausible.
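The arithmetic above can be checked in a few lines. This is a back-of-envelope sketch only: the 64-RA count and the 22% clock advantage are the assumptions quoted in this comment, not confirmed specs.

```python
# Back-of-envelope check of the per-Ray-Accelerator RT uplift,
# using the scores and unit counts quoted in this thread.
xtx_tse, xtx_sw, xtx_ras = 14400, 5800, 96   # reference 7900XTX (5800X3D rig)
leak_tse, leak_sw = 14591, 6345              # leaked 9070XT scores
n_ras, clock_adv = 64, 1.22                  # assumed RAs and boost-clock edge

raster_ratio = leak_tse / xtx_tse            # ~1.01: effectively equal raster
rt_ratio = leak_sw / xtx_sw                  # ~1.09: roughly 10% faster in RT

# Normalize the Speed Way score per Ray Accelerator, crediting the
# 9070XT for its higher boost clock.
arch_uplift = (leak_sw / (n_ras * clock_adv)) / (xtx_sw / xtx_ras)

print(f"raster {raster_ratio:.2f}x, RT {rt_ratio:.2f}x, "
      f"per-RA uplift {arch_uplift:.2f}x")
```

Under these assumptions the per-unit uplift comes out around a third, in line with the ~33% figure above.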
Posted on Reply
#29
3valatzy
_roman_Amd is also not cheap or affordable.
Unbelievable.. :kookoo:

Let's compare, cheapest AMD vs cheapest Nvidia:

Posted on Reply
#30
TheinsanegamerN
The 7900xtx in speedway, in 100th place, hits 7250. The 4080's 100th place number is 8082. This "9070" hits 6345.

In the extreme benchmark, the 7900xtx in 100th hits 18830, the 4080 100th hits 15714. This "9070" hits 14591.

The 7900xt in 100th hits 15199 for extreme and 6020 in speed way, just to give you an idea of where this 9070xt is gonna land. Assuming that's what this is.
Neo_MorpheusThis is beyond tiresome.
You're right, it IS beyond tiresome how AMD, now on its 3rd RT generation, cannot meaningfully improve their RT performance, to the point that half a dozen Nvidia cards place above it.
Neo_MorpheusIt's supposed to be a mid tier card and, as others have said, even Ngreedia's 90-class GPUs have issues with RT even after all kinds of trickeries.

And as others have said, its a performance killer that provides nothing to gameplay.

All that AMD need to do is price this damned thing right and stop trying to be greedy.
Being mid tier doesn't excuse a total lack of improvement. RT isn't going anywhere, that much has become obvious. It's here to stay, like hardware T&L.
Posted on Reply
#31
phanbuey
I think RT's current "Let's brute force calculate 200 rays" methodology is not long for this world. You have ML rendering - i.e. you tell an ML renderer on the card "this is what this light should look like, fill it in" - it's already in place to some extent with Cyberpunk's Ray Reconstruction algorithm, where it looks miles better without a performance hit.

You will probably see minimal RT calculation with ML reconstruction doing the rest of the heavy lifting in the future, especially if textures go the ML algorithm route as well.
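The "fill it in" idea can be illustrated with a toy stand-in: trace only a fraction of the pixels, then let a reconstruction pass spread the sparse samples into the gaps. Everything here is illustrative - the 8x8 "scene", the 25% sample rate, and the normalized box filter, which is merely a crude placeholder for the trained network a real ray-reconstruction pipeline would use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "ground truth" lighting for an 8x8 patch (illustrative only).
truth = np.linspace(0.0, 1.0, 64).reshape(8, 8)

# Sparse ray tracing: only ~25% of pixels receive a real ray sample.
mask = rng.random(truth.shape) < 0.25
sparse = np.where(mask, truth, 0.0)

def reconstruct(img, mask, k=2):
    """Spread sparse samples into unsampled pixels with a normalized
    box filter -- a crude stand-in for an ML reconstruction pass."""
    out = np.zeros_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            ys = slice(max(0, y - k), y + k + 1)
            xs = slice(max(0, x - k), x + k + 1)
            n = mask[ys, xs].sum()  # real samples inside the window
            out[y, x] = img[ys, xs].sum() / n if n else 0.0
    return out

filled = reconstruct(sparse, mask)
err = np.abs(filled - truth).mean()
print(f"mean abs error after fill-in: {err:.3f}")
```

The point of the sketch: the reconstructed image is far closer to the full-rate result than the raw sparse buffer, which is exactly the trade that lets a renderer cast far fewer rays per pixel.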
Posted on Reply
#32
rv8000
TheinsanegamerNThe 7900xtx in speedway, in 100th place, hits 7250. The 4080's 100th place number is 8082. This "9070" hits 6345.

In the extreme benchmark, the 7900xtx in 100th hits 18830, the 4080 100th hits 15714. This "9070" hits 14591.

The 7900xt in 100th hits 15199 for extreme and 6020 in speed way, just to give you an idea of where this 9070xt is gonna land. Assuming that's what this is.

You're right, it IS beyond tiresome how AMD, now on its 3rd RT generation, cannot meaningfully improve their RT performance, to the point that half a dozen Nvidia cards place above it.

Being mid tier doesn't excuse a total lack of improvement. RT isn't going anywhere, that much has become obvious. It's here to stay, like hardware T&L.
Huh?

If a 9070xt has ~64 ray tracing hardware units, and a 7900xtx has 96, they have SIGNIFICANTLY improved RT performance.

Why would you compare scores from leaderboards, where cards are obviously being overclocked, to an unreleased card we know very little about? Just look at reference card scores… TPU literally has all that data at hand.
Posted on Reply
#33
Guwapo77
Yesterday's performance at a new lower price.
Posted on Reply
#34
rv8000
Guwapo77Yesterday's performance at a new lower price.
That’s what every generation's 60- and 70-class GPUs have been like from both brands since… forever. It’s not some new concept.
Posted on Reply
#35
TheinsanegamerN
rv8000Huh?

If a 9070xt has ~64 ray tracing hardware units, and a 7900xtx has 96, they have SIGNIFICANTLY improved RT performance.
Lotta IFs there.

If it has 64 units, and IF they run at the same clock speed as a 7900 xtx, and IF this leak is accurate, then yes, it's a significant improvement.

None of that is confirmed. A single rendering error could easily artificially inflate 3dmark scores. The 9070's drivers are not finalized yet, and AMD themselves have said as much.

We DO know, though, that the PS5 Pro uses RDNA4 RT hardware, and its RT performance has been... lackluster, to say the least.
rv8000Why would you compare scores from leaderboards where cards are obviously being overclocked to an unreleased card we know very little about. Just look at reference card scores… TPU literally has all that data at hand.
www.techpowerup.com/gpu-specs/radeon-rx-7900-xtx.c3941

Go ahead, show me where that page has the 3DMark scores. 3DMark doesn't have a section in the TPU review anywhere I can find. And I chose the bottom of the top-100 list, on cards that are pretty clearly not OCed. If you have a better source for stock card runs than 3DMark themselves, I'd love to see it.
Posted on Reply
#36
Marcus L
phanbueyI'm talking about Cyberpunk specifically. I've poured 600+ hours into that game -- the RT is one of the best implementations I've seen, and it still looks like crap (IMO).




It's grainier, blurrier:



Pick the RT shot -- it's the one on the left.
Bottom shot, the right one is better to me. The ones above, I guess RT does look a tiny bit better, but for a 40% performance hit it's completely not worth it, and you won't notice a difference between having it on or off when playing at high settings, apart from the FPS hit!

If they can price this right, it will be a massive hit. 4080 performance for <$500 and I might bite.
Posted on Reply
#37
Chrispy_
This sounds promising, but the 330W TDP does not.

If the vanilla RX 9070 is the same or at least close in core clocks and VRAM speed, I'll grab that one - since I'm likely to undervolt it and tune it down to 250W anyway.
Posted on Reply
#38
Daven
TheinsanegamerNYou're right, it IS beyond tiresome how AMD, now on it's 3rd RT generation, cannot meaningfully improve their RT performance, to the point that half a dozen Nvidia cards place above it.

Being mid tier doesnt excuse a total lack of improvement. RT isnt going anywhere, that much has become obvious. It's here to stay, like hardware T&L.
If RT went away right now, we would ALL be better off. AMD can only be knocked for following Nvidia down this horrible, horrible path of upselling us on less performance for the incorrect promise of better image quality.
Posted on Reply
#39
debido666
Can't wait to see the 9060. Should be a good card for people who don't spend half their rent or more on GPUs.
Posted on Reply
#40
sephiroth117
It all comes down to pricing.

People tend to really forget that the biggest GPU gaming market is those on 4060/3060/1660-class cards; just look at the Steam survey.

A moderate price for a capable 9070XT with way better AI upscaling (FSR4) and more capable RT is definitely interesting.
Posted on Reply
#41
rv8000
TheinsanegamerNLotta IFs there.

If it has 64 units, and IF they run at the same clock speed as a 7900 xtx, and IF this leak is accurate, then yes, it's a significant improvement.

None of that is confirmed. A single rendering error could easily artificially inflate 3dmark scores. The 9070's drivers are not finalized yet, and AMD themselves have said as much.

We DO know, though, that the PS5 Pro uses RDNA4 RT hardware, and its RT performance has been... lackluster, to say the least.

www.techpowerup.com/gpu-specs/radeon-rx-7900-xtx.c3941

Go ahead, show me where that page has the 3DMark scores. 3DMark doesn't have a section in the TPU review anywhere I can find. And I chose the bottom of the top-100 list, on cards that are pretty clearly not OCed. If you have a better source for stock card runs than 3DMark themselves, I'd love to see it.
Lotta ifs? It's one if; we just don’t know the hardware count at this time.

Here are 3 reviews of the reference card from TT, KG, and ET

Posted on Reply
#42
Neo_Morpheus
rv8000they have SIGNIFICANTLY improved RT performance.
Heresy!

AMD is incapable of doing something like improving performance; only Ngreedia can!

/J

It's strange how such concepts cannot be applied to AMD or anyone else by these people.

And of course, when its a negative trait, anything and everything will be taken as gospel by them.

I am waiting for real reviews from unbiased places before I make any judgments on these GPUs; until then, they are simply rumors.
Posted on Reply
#43
Chrispy_
I don't use 3DMark. What are typical scores for a 7800XT, 4070S, 7900XT, or 4070TiS in these tests?
Posted on Reply
#44
rv8000
Chrispy_I don't use 3DMark, what's a 7800XT, 4070S, 7900XT, 4070TiS typical score in these tests?
Time Spy Extreme (all sourced from TT reviews)

7800 XT ~ 9,500
4070 Super ~ 9,900
4070 Ti Super ~ 11,500
7900 XT ~ 12,000
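For context, a quick sketch of where the leaked 14,591 Time Spy Extreme score would land against these approximate reference figures (the review numbers above are rounded, so the percentages are rough):

```python
# Rough placement of the leaked 9070XT Time Spy Extreme score
# against the approximate reference-card scores listed above.
reference_tse = {
    "7800 XT": 9500,
    "4070 Super": 9900,
    "4070 Ti Super": 11500,
    "7900 XT": 12000,
}
leaked = 14591

for card, score in reference_tse.items():
    # e.g. ~+22% over the 7900 XT
    print(f"9070XT vs {card}: {leaked / score - 1:+.0%}")
```

On these numbers the leaked card would sit a clear tier above the 7900 XT in raster, which is why the comparison against the 7900 XTX keeps coming up in this thread.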
Posted on Reply
#45
QuietBob
TheinsanegamerNAnd I chose the bottom of the 100 list, on cards that are pretty clearly not OCed.
Actually, all cards in the 3DMark Top 100 are heavily OC'd and use liquid cooling or better. Most of them are combined with a heavily OC'd CPU as well. It's a competitive benchmark after all.

Here's that 100th 7900XTX entry for Time Spy Extreme you mentioned compared with my 100% stock MBA model. Mine shows as being bottom 13% of all submitted results:



Those Top 100 scores are in no way representative of a particular model. They merely show best OCing samples.
Posted on Reply
#46
rv8000
QuietBobActually, all cards in the 3DMark Top 100 are heavily OC'd and use liquid cooling or better. Most of them are combined with a heavily OC'd CPU as well. It's a competitive benchmark after all.

Here's that 100th 7900XTX entry for Time Spy Extreme you mentioned compared with my 100% stock MBA model:



Those Top 100 scores are in no way representative of a particular model. They merely show best OCing samples.
That’s even more pushed to the edge than I expected, probably running the ASRock Aqua BIOS mod at 450 W+; even on air, the Nitro and Merc bounce around a lot at 3 GHz core.
Posted on Reply
#47
Marcus L
QuietBobActually, all cards in the 3DMark Top 100 are heavily OC'd and use liquid cooling or better. Most of them are combined with a heavily OC'd CPU as well. It's a competitive benchmark after all.

Here's that 100th 7900XTX entry for Time Spy Extreme you mentioned compared with my 100% stock MBA model:



Those Top 100 scores are in no way representative of a particular model. They merely show best OCing samples.
OT: I have a couple of top scores for the same CPU and GPU combination. For example, I bought a GTX 670 ten years after launch, ran it with a modern CPU, and just OC'd the snot out of it to see if I could get the top spot for that CPU/GPU combo. I might have to log in and go have a look; there may be an R9 290/HD 7950 result in there somewhere as well, lol. Obviously all those GPUs were EOL at the time and not many people ran the same CPU/GPU combo, but it was fun at the time trying to eke every bit of performance out of them and take the top spot. Fun times :laugh:
Posted on Reply
#48
lilhasselhoffer
phanbueyI think RT's current "Lets brute force calculate 200 rays" methodology is not long for this world. You have ML rendering - i.e. you tell an ML renderer on the card "this is what this light should look like, fill it in" - it's already in place to some extent with Cyberpunk's Ray Reconstruction algorithm, where it looks miles better without a performance hit.

You will probably see minimal RT calculation with ML reconstruction doing the rest of the heavy lifting in the future, especially if textures go the ML algorithm route as well.
Guwapo77Yesterday's performance at a new lower price.
I...can only say the following...TressFX.

Before you ask why I cite the two above comments, it's because ray tracing is about as stupid as TressFX was. It's "better" than the results you get from the other guy doing the same calculations...but completely forgets that 99.9% of games that exist now were made before ray tracing was adopted. You're more than welcome to claim you think it looks more realistic...and someone else is more than able to call it crap. Those are not debatable points, only opinions. The truth is that it's a computationally intensive process that doesn't result in linear or better improvements...and thus will be relegated to the dustbin of history exactly like TressFX. The only difference is that Nvidia has clung to their dead horse for longer because AMD has not competed with them, and thus it's always something they win at. It's always easiest to be the best when nobody else competes.

The only fact is the cost-to-performance numbers that this card will eventually have after a proper review...and hopefully it will be priced competitively. Yesterday's performance in RT, today's performance in raster, and yesterday's pricing would be a great boon. That's especially true when today's pricing is highway robbery, and yesterday's hardware provides enough performance for most people today.
Posted on Reply
#49
Chrispy_
debido666Can't wait to see the 9060. Should be a good card for people that don't spend half their rent or more on GPU's.
The 9060 series worries me with its 8GB of VRAM.

A friend of mine bought a 4060 on discount recently, not because it was a good card, but because he has an extremely cramped mITX system with only a 350 W PSU, and I figured the tiny 115 W 4060 was his best bet. I could be wrong, but it seemed to me to be the smallest, most power-efficient card around.

He's running a 2560x1080 display, so not even 1440p, and yet two of the three games he upgraded for (Indiana Jones, DA: Veilguard, Space Marine II) require him to turn things down to avoid stuttering because of VRAM shortages.

So yeah, the 8GB 9060 cards need to be 20% cheaper than a 12GB B580. 8GB wasn't enough for more than 1080p in 2022, and it sure as hell isn't any better in 2025. I'm just as worried about the 5060, but it will sell like hotcakes because people will probably just believe Jensen when he says something like "a 5060 for $349 matches a 4080. The more you buy, the more you save" or some other hand-wavy nonsense. My experience of DLSS FG and RT on 40-series cards is that all those fake-framed RT effects need a boatload more VRAM than just the raster codepath, and if the 5060 only gets 8GB, that's not going to go down well.
Posted on Reply
#50
k0vasz
3valatzyUnbelievable.. :kookoo:

Let's compare, cheapest AMD vs cheapest Nvidia:

The 4090 is a different story: it's the absolute best card on the market, and for those, Nvidia can ask for (almost) whatever money they want, as there'll always be people who want the best, no matter what (also, it provides 23% better performance, so it's not like you pay twice as much to get the same performance).

But if you compare two cards from the same tier, you'll get the same-ish pricing:

Posted on Reply