Sunday, January 12th 2025
AMD Radeon RX 9070 XT Tested in Cyberpunk 2077 and Black Myth: Wukong
The recently unveiled AMD Radeon RX 9070 XT has been at the epicenter of a plethora of leaks over the past few weeks. We now have not only a rough idea of the synthetic performance the highest-end RDNA 4 GPU is about to bring to the table, but also of how much the card is expected to cost. Now, yet another leak seemingly sheds light on perhaps the most crucial aspect of any gaming GPU: actual gaming performance.
Hilariously enough, this leak has once again been sourced from a now-deleted forum post on Chiphell. This time around, site admin nApoleon was able to run a few games on the brand-new RDNA 4 card, including Black Myth: Wukong and Cyberpunk 2077. The results have been added as screenshots below, but here is a summary: the RX 9070 XT performs decently well, trading blows with the RTX 4070 Ti Super. In NVIDIA's favorite Cyberpunk 2077 at 4K with ray tracing, the RX 9070 XT managed 26 FPS, neck and neck with the RTX 4070 Ti, whereas the RTX 4080 Super was well ahead with 32 FPS. At 1080p, the RX 9070 XT managed 85 FPS, whereas the RTX 4070 Ti edged past with two more frames. The RTX 4080 Super, unsurprisingly, was well ahead once again with 101 FPS.
The RX 9070 XT appears far more promising in rasterization performance, as revealed by Black Myth: Wukong. At 4K, the RX 9070 XT managed around 30 FPS, defeating the RTX 4070 Ti Super, which managed 28. And at 1080p, the RX 9070 XT raked in 97 FPS, coming shockingly close to the RTX 4080 Super, which managed 99 FPS. The RTX 4070 Ti Super was left behind, managing 'only' 87 FPS.
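Taking the leaked figures at face value, the relative standings are easy to put into percentages. A quick Python sketch using only the numbers quoted above (the exact test settings aren't known, so treat the ratios as rough):

# Leaked FPS figures quoted above; where the RX 9070 XT lands relative to each card.
leaked = {
    "Cyberpunk 2077, 4K + RT":   {"RX 9070 XT": 26, "RTX 4080 Super": 32},
    "Cyberpunk 2077, 1080p":     {"RX 9070 XT": 85, "RTX 4070 Ti": 87, "RTX 4080 Super": 101},
    "Black Myth: Wukong, 4K":    {"RX 9070 XT": 30, "RTX 4070 Ti Super": 28},
    "Black Myth: Wukong, 1080p": {"RX 9070 XT": 97, "RTX 4070 Ti Super": 87, "RTX 4080 Super": 99},
}

for scene, fps in leaked.items():
    amd = fps["RX 9070 XT"]
    for card, value in fps.items():
        if card != "RX 9070 XT":
            print(f"{scene}: RX 9070 XT at {100 * amd / value:.0f}% of the {card}")

By that crude math, the card sits at roughly 80-85% of the RTX 4080 Super with ray tracing enabled, and within a few percent of it in pure rasterization.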
Of course, it all now boils down to how competitively Blackwell is priced, and what kind of performance it brings to the table. DLSS 4, along with all the multi-frame generation AI chicanery, appears set to shake up the way we measure performance, making things even more complicated. Either way, for those who couldn't care less about ray tracing, the RDNA 4 GPU sure does look enticing, if the leaked benchmarks and pricing information are to be believed.
Sources:
GameGPU, VideoCardz
169 Comments on AMD Radeon RX 9070 XT Tested in Cyberpunk 2077 and Black Myth: Wukong
Personally, I just want a graphics card that delivers good performance at 1440 UW native, doesn't cost an arm and a leg, doesn't require a nuclear reactor in my back yard, and works well with Linux. The 9070 XT seems to tick all of these boxes. Whether a card has CUDA, AI, or more fake frames than the last generation doesn't interest me in the slightest. Especially if said card has only 12 GB VRAM for the same price.
Look at this price table - AMD has never been as greedy as it has been recently.
Different question:
Can you stack fake frames? So DLSS 4: 1 -> 5
Lossless Scaling: 5 -> 80-99?
One "real" frame being followed up by 80-99 "fake" frames would be hilarious.
8099 "fake" frames would be hilariousAMD needs a working marketing with a new approach in order to find reasons other than the trivial ones in order to win more sales.
Edit: Besides, you're forgetting about consoles.
If the card releases within a reasonable time frame (and not much later than its competition), with adequate stock (EU shops haven't even had listings for AMD cards for a couple of months), at $550, and with raster and RT equal to or better than the 5070, it will sell like hotcakes. I can easily see it selling as much as the 5070, just like other competitive cards did in the past (RX 470, 480, 570, 580, etc.).
In truth, AMD is on a down cycle: the current console generation is ending with new ones imminent, new markets such as handhelds (Ryzen Z) and high-power SoCs (Strix Halo) are only just beginning, and the company is floundering around in the discrete GPU market. New consoles are coming by the end of this year. We will know in a few weeks whether RDNA 4 is any good. It is too early to know about Strix Halo's popularity.
I am optimistic about all three of the above (consoles, handhelds, and RDNA 4), as well as Intel's efforts, which will bring much-needed competition back to the client GPU space. Data center GPUs are a whole other story which I won't get into.
That said, the argument can be made that most people are looking for good 1920x1080 performance... and that's generally the realm of an xx60 or x700 card. That's pretty much the segment these cards compete in for the attention of most people. In that regard, the 9070 is as much a failure as the 4070, 5070, and anything higher. For 550 USD you can purchase a console that halfway manages 4K... so what we're basically watching is these cards fighting over a premium slice of the market.
All of this said, the 9070 is something I personally think may be reasonable. It looks to pull about as much power as a 3080, perform somewhere between a 4070 and a 4080, be priced similar to a 5070, and have enough VRAM to deliver plenty of performance, without injected frames, for the average high-refresh-rate 2560x1440 display. Being realistic here, if they actually come in closer to the $500 range, then this thing is shaping up to be the best card in the high-priced category. I just have to question why there's so much consternation over Chiphell leaks... which are usually about as likely to be accurate as not. With only a few weeks to go, let's be real: nothing yet should drive anybody to preorder anything on day one. Pretending otherwise is how people end up disappointed by something like the B580... despite it selling like toilet wine in a prison. So we are clear: the B580 is nothing special until you consider it's priced very well in the underserved budget gaming segment.
No casual gaming enthusiast has even heard of CUDA; hell, even professional forumers and hardware enthusiasts know very little about CUDA, let alone have any use for it.
For real, the biggest use I've had of CUDA is with a chess engine on my desktop, which now calculates to depth 30 in 20 seconds instead of 30 seconds. Big deal.
And that's the kind of extreme use case CUDA serves. Plus, AMD has the equivalent in ROCm; yeah, it's not as fully fleshed out as CUDA, but it does 95% of the stuff at the same level, and the remaining 5% is an extreme use case that even professionals barely care about.
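On the ROCm point, a fair amount of GPU-compute code doesn't care which vendor sits underneath. PyTorch's ROCm builds, for instance, expose the same torch.cuda API as the CUDA builds, so a sketch like the one below should run unchanged on either (assuming a PyTorch build with CUDA or ROCm support is installed):

import torch

# The same device-selection path works on NVIDIA (CUDA) and AMD (ROCm)
# builds of PyTorch, because ROCm builds reuse the torch.cuda namespace.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
name = torch.cuda.get_device_name(0) if device.type == "cuda" else "CPU"
print(f"Running on: {name}")

# A small matrix multiply, dispatched to whichever device was found.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
print((a @ b).shape)

The vendor-specific pain tends to show up only in the long tail: custom CUDA kernels, niche libraries, and so on, which is roughly the 5% being described above.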
It's all propaganda and bots pushing a narrative to close off people's minds and project NVIDIA GPUs as somehow better, when they clearly are not. They are garbage value, garbage performance, garbage VRAM, garbage longevity, and they keep getting bigger, heavier, louder, hotter, etc.
The bottom line is you'll still rely on raw performance for a good two to three generations at the very least, especially in the midrange, where raw performance is barely growing. Much like RT and all the other developments in game graphics, this takes time. Far more time than anyone has patience for. Don't early adopt this crap. Right now, it's actively used as a marketing tool. It's not primarily there to make games better; it's there to sell GPUs and pseudo next-gen graphics.
Edit: Also see tfdsaf's response above on whether CUDA is optional.
A lot hinges on UDNA, which should bring them back in the same direction as GCN. That was AMD's first clean-sheet architecture after taking over from ATI, and while it got good reviews at launch and sold well, it never received the praise it deserved, even though it was flat-out better than NVIDIA's architectures of that era (Tesla, Fermi, and Kepler). Maxwell was the one that knocked the wind out of AMD's sails, at a time when they were also struggling financially.
Obviously something like that is wishful thinking, but if AMD can even come close to parity this time around, it'll be a big win for them. They have the resources and capabilities now to design it well and to improve ROCm as well, which should play nicely with UDNA. Time will tell.
I play Helldivers 2, GOW, and pretty much anything 3rd person view on PC with a PS5 Controller.
Street Fighter 6, Mortal Kombat, and anything fighting with a Hori fightpad.
All of my first-person shooter games (Delta Force, Counter-Strike 2, Halo Infinite, etc.) are mouse and keyboard.
Like I said, I'm more interested in upscaling than triple-digit frame rates.
And the GeForce brand is actually pretty strong even among professionals, as Puget Systems realized from their sales data. It's also something I can see when I talk with freelancers, or when you look at the kind of machines art/media schools are using... people tend to overestimate the appeal of "RTX Quadro" for prosumers.
For all the shitty stuff that NVIDIA is pulling off as a company, they've been pretty good at not being caught slacking or off-guard. Unlike Intel, they don't get complacent. While RT can be criticized in gaming, OptiX (CUDA + RT) was absolutely mental for the offline 3D industry: a single GPU can manage work that required a small render farm a decade ago, and now you've got teenagers making short 3D movies in their parents' basement.
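To make that concrete, this is roughly what pointing an offline renderer at that hardware looks like. A sketch of switching Blender's Cycles renderer to the OptiX backend from a script (attribute names follow Blender's Python API as commonly used in headless render scripts; treat it as an illustration rather than a verified recipe, and note it only runs inside Blender):

import bpy

# Select the OptiX (CUDA + RT core) backend for Cycles instead of the CPU.
prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "OPTIX"   # "CUDA" would be the fallback on pre-RTX cards
prefs.get_devices()                   # refresh the detected-device list
for dev in prefs.devices:
    dev.use = True                    # enable every detected GPU

bpy.context.scene.cycles.device = "GPU"
bpy.ops.render.render(write_still=True)  # render the current scene to disk

A frame that would take hours on a CPU render node can come back in minutes this way, which is the render-farm-in-a-basement effect described above.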