Tuesday, December 24th 2024
AMD Radeon RX 9070 XT Alleged Benchmark Leaks, Underwhelming Performance
Recent benchmark leaks have revealed that AMD's upcoming Radeon RX 9070 XT graphics card may not deliver the groundbreaking performance initially hoped for by enthusiasts. According to leaked 3DMark Time Spy results shared by hardware leaker @All_The_Watts, the RDNA 4-based GPU achieved a graphics score of 22,894 points. The benchmark results indicate that the RX 9070 XT performs only marginally better than AMD's current RX 7900 GRE, showing a mere 2% improvement. It falls significantly behind the RX 7900 XT, which maintains almost a 17% performance advantage over the new card. These findings contradict earlier speculation that suggested the RX 9070 XT would compete directly with NVIDIA's RTX 4080.
However, synthetic benchmarks tell only part of the story. The GPU's real-world gaming performance remains to be seen, and rumors indicate that the RX 9070 XT may offer significantly improved ray tracing capabilities compared to its RX 7000 series predecessors. This could be crucial for market competitiveness, particularly given the strong ray tracing performance of NVIDIA's RTX 40 series and the upcoming RTX 50 series cards. The success of the RX 9070 XT depends on how well it can differentiate itself through features like ray tracing while maintaining an attractive price-to-performance ratio in an increasingly competitive GPU market. These scores are unlikely to be the final word on RDNA 4; we will have to wait and see what AMD delivers at CES, and third-party reviews and benchmarks will deliver the final verdict once RDNA 4 reaches the market.
Sources:
@All_The_Watts, @GawroskiT
204 Comments on AMD Radeon RX 9070 XT Alleged Benchmark Leaks, Underwhelming Performance
RT is the single biggest leap in photorealism in real-time rendering, EVER. Either you use AMD and can't enjoy the feature properly without resorting to FSR Performance, or you are simply trolling. It's here to stay. Games are starting to use RT features that can't even be disabled anymore. AMD is lagging behind sorely because of RT, upscaling, and the other bells and whistles. RDNA 2 was very competitive in raster and often faster, and yet it still got beat because the vast majority wants these new features; only a very tiny minority on the Internet moans about RT being a gimmick.
Upscaling has surpassed native in more and more ways ever since DLSS 2.0 launched; if you still get artifacting or shimmering, then you are an FSR user.
And anyone who calls it a gimmick has their head shoved where the sun doesn't shine.
RT, upscaling, and AI are the future, and any company that does not implement better solutions will be left in the dust.
Unfortunately it's very hard to do image quality comparisons between engines and different technologies. It's more subjective than fps... There are games from 5+ years ago that look and run better than what's coming out now imo.
It's not a gimmick, it's snake oil that exists so Nvidia can sell more GPUs. It's as simple as this: if a four-year-old card can run every game at 4K 120+ FPS, no one will have any reason to upgrade.
So they came up with RT and added upscaling to the mix.
Then there's the fact that upscaling reduces image quality, combined with games now shipping with reduced texture quality that looks like vaseline smeared all over the screen, all to push the RT marketing even onto those with low-end cards. Some games are also coming out with RT on by default, or with no option to turn RT off, which makes AMD cards look worse than they really are. I think Nvidia realized they made a mistake with the GTX 10 series: those cards lasted for years without needing an upgrade, so now Nvidia sells software features, some of which are only available if you buy the latest GPU.
I am only asking out of curiosity; I prefer enriching my ignore list to doing this as a part-time job. RT is going to become a necessity really soon, and even people who insist on raster will be forced to reconsider. The way to make that happen is the same as with hardware PhysX: when hardware PhysX came out, programmers suddenly forgot how to program physics effects on CPUs. You either had physics effects with hardware PhysX, or almost nothing at all without it.
Now about RT. In that latest video from Tim of HUB, where he spotted full-screen noise when enabling RT (maybe they rejected his application to work at Nvidia? Strange that no other site, including TPU, investigated these findings), there was at least one comparison in Indiana Jones that completely shocked me. RT was on in both images; the difference was that one was at RT NORMAL and the other at RT FULL.
The difference in lighting is shocking, to say the least. At NORMAL it looks like the graphics engine is malfunctioning, not working as expected, or like there is a huge bug somewhere in the code. In the past, games that targeted both audiences who wanted RT and audiences who were happier with high-fps raster graphics had lighting differences between RT and raster modes, but you had to pause the image and start investigating the lighting to really see them.
Here, in a game where RT is the only option, the difference can be spotted with your eyes closed. Lighting is totally broken when leaving RT at the NORMAL setting, the same as running PhysX on low 15 years ago.
I think Nvidia will try to push programmers into using libraries where only FULL RT gives the gamer correct lighting. With medium, normal, low, or whatever other option the gamer chooses in settings, lighting will be completely broken. That will drive people to start paying four-digit prices just to get the lighting they enjoy for free today.
Maybe TPU would like to investigate it... or maybe not?
Every single game I've seen with CPU PhysX has the same problem on DX12: a stutter fest, all synchronization problems. It uses the same main thread to sync up. GPUs are still many magnitudes faster than CPUs at PhysX; my 2080 Ti is literally four times faster than my 5800X3D in PhysX. The problem is that Nvidia has now taken GPU physics away from the 4000 series. No one noticed until they tried to enable GPU PhysX in an older game that supports it and found it runs like crap on an RTX 4090.
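For what it's worth, the usual way engines avoid that main-thread sync point is to step physics on its own thread at a fixed timestep and let the render thread only copy the latest completed state. A minimal sketch of the idea, using made-up placeholder names (PhysicsState, step_physics) rather than anything from PhysX or a real engine:

    // Hypothetical sketch: physics stepped on a worker thread, render thread
    // copies the latest finished state instead of blocking on the simulation.
    #include <atomic>
    #include <chrono>
    #include <mutex>
    #include <thread>

    struct PhysicsState { /* positions, velocities, ... */ };

    std::mutex        g_stateMutex;
    PhysicsState      g_latest;            // last completed physics step
    std::atomic<bool> g_running{true};

    void step_physics(PhysicsState& s, float dt) { /* integrate, collide, ... */ }

    void physics_thread() {
        const float dt = 1.0f / 120.0f;    // fixed timestep, independent of fps
        PhysicsState local;
        auto next = std::chrono::steady_clock::now();
        while (g_running.load()) {
            step_physics(local, dt);
            {   // publish the finished step; the lock is held only for a copy
                std::lock_guard<std::mutex> lock(g_stateMutex);
                g_latest = local;
            }
            next += std::chrono::microseconds(static_cast<long>(dt * 1e6f));
            std::this_thread::sleep_until(next);
        }
    }

    // Called from the render/main thread once per frame: copy, never wait on the sim.
    PhysicsState snapshot_for_render() {
        std::lock_guard<std::mutex> lock(g_stateMutex);
        return g_latest;
    }

The render thread only ever pays for a lock and a copy, never for a simulation step, which is the stutter the comment is describing.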
Acknowledging Moore's Law as anything more than a tactic to convince investors is simply uninformed.
And second, the slowed rate of transistor density doubling is something that affects ALL COMPANIES who design chip architectures. If anything, NVIDIA has traditionally been one of the companies that have not relied on a superior process node to make their products competitive.
And finally, let's remember that AMD, Intel, and NVIDIA are all American companies. They aren't each other's enemies, and none of them is any less of a greedy corporation than the others.
github.com/NVIDIA-Omniverse/PhysX
en.wikipedia.org/wiki/PhysX?wprov=sfti1#
Edit: I looked it up; Nvidia stopped GPU development of PhysX in 2018 because nobody was using it anymore.
O3DE's fork also has support for GPU acceleration via CUDA.
FWIW, there are many other physics engines nowadays that are really good. And doing those physics calculations on the CPU has become way better given the advancements in SIMD stuff and whatnot.
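As a rough illustration of what that SIMD friendliness buys: keep particle data in flat structure-of-arrays buffers and keep the inner loop branch-free, and the compiler will auto-vectorize the integration step on its own. A made-up sketch, not code from any particular engine:

    // Illustrative only: a semi-implicit Euler step over structure-of-arrays
    // buffers. Contiguous, branch-free loops like this auto-vectorize to
    // SSE/AVX at -O2/-O3 on current compilers.
    #include <cstddef>
    #include <vector>

    struct Particles {
        std::vector<float> px, py, pz;   // positions
        std::vector<float> vx, vy, vz;   // velocities
    };

    void integrate(Particles& p, float dt, float gravity_y) {
        const std::size_t n = p.px.size();
        for (std::size_t i = 0; i < n; ++i) {
            p.vy[i] += gravity_y * dt;   // apply gravity
            p.px[i] += p.vx[i] * dt;     // advance positions
            p.py[i] += p.vy[i] * dt;
            p.pz[i] += p.vz[i] * dt;
        }
    }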
What am I missing?
That it's still locked to the main thread for anything done in gaming, and it's also synced to that same thread.
That edit is funny, because I can play a bunch of games where PhysX shows up when I enable the indicator in the Nvidia Control Panel, and I have games from around 2020 onward that show it. If it's reliant on CUDA, then it won't work on anything but an Nvidia GPU.
So why bother calling it open source when it's not?
That's why I said it only works on the "CPU" for the open-source part, as that isn't limited to CUDA.
You're even free to port the CUDA-specific parts to Vulkan, OpenCL, or whatever you may want.
Many machine learning frameworks are also open source but mostly work on CUDA on the GPU side, because that's what most users use and it's the best API to work with. No one stops calling those "open source" because of that.
The GPU might be faster than the CPU, but PCIe is slower than both of them. GPUs are only fast because all the graphics data lives on the GPU side and never leaves. If data needs to traverse back over PCIe, it slows down to the point of not being worth it at all.
CPUs never were that slow at physics (even if GPUs are better at it); it's a matter of PCIe more than anything else.
Example: waving flags, cloth, or hair that moves on the GPU without ever telling the CPU those positions.
In other words, the game engine was blind to the physics. If you calculate physics on the GPU, you need to leave it on the GPU.
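One way to see the gap being described here is to time a copy that stays in VRAM against the same buffer being read back to system RAM over PCIe. A rough sketch using the CUDA runtime API; the buffer size and the pageable host buffer are arbitrary choices (pinned memory narrows the gap but doesn't close it):

    // Compares VRAM-to-VRAM copy bandwidth with a device-to-host read-back
    // over PCIe for a physics-sized buffer (~48 MB of xyz positions).
    // Build with nvcc; error checking omitted for brevity.
    #include <cuda_runtime.h>
    #include <cstdio>
    #include <vector>

    int main() {
        const size_t n = 1 << 22;                      // ~4 million particles
        const size_t bytes = n * 3 * sizeof(float);    // xyz floats, ~48 MB
        std::vector<float> host(n * 3);

        float *devA = nullptr, *devB = nullptr;
        cudaMalloc((void**)&devA, bytes);
        cudaMalloc((void**)&devB, bytes);

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);
        float ms = 0.0f;

        // Copy that never leaves the GPU: limited by VRAM bandwidth.
        cudaEventRecord(start);
        cudaMemcpy(devB, devA, bytes, cudaMemcpyDeviceToDevice);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);
        cudaEventElapsedTime(&ms, start, stop);
        printf("VRAM to VRAM:       %.1f GB/s\n", bytes / 1e9 / (ms / 1e3));

        // The read-back the comment warns about: limited by PCIe.
        cudaEventRecord(start);
        cudaMemcpy(host.data(), devA, bytes, cudaMemcpyDeviceToHost);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);
        cudaEventElapsedTime(&ms, start, stop);
        printf("VRAM to system RAM: %.1f GB/s\n", bytes / 1e9 / (ms / 1e3));

        cudaFree(devA);
        cudaFree(devB);
        return 0;
    }

On typical hardware the two numbers come out roughly an order of magnitude or more apart, which is why per-frame read-backs of GPU physics results are avoided and the results are kept GPU-side.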