Thursday, January 30th 2025

AMD Radeon RX 9070 XT Rumored to Outpace RTX 5070 Ti by Almost 15%
It would be fair to say that the GeForce RTX 5080 has been quite disappointing, being roughly 16% faster in gaming than the RTX 4080 Super. Unsurprisingly, this gives AMD a lot of opportunity to offer excellent price-to-performance with its upcoming RDNA 4 GPUs, considering that the RTX 5070 and RTX 5070 Ti aren't really expected to pull off any miracles. According to a recent tidbit shared by the renowned leaker Moore's Law is Dead, the Radeon RX 9070 XT is expected to be around 3% faster than the RTX 4080, if AMD's internal performance goals are anything to go by. MLID also notes that RDNA 4's performance is improving by roughly 1% each month, which makes it quite likely that the RDNA 4 cards will exceed those targets.
If it does turn out that way, the Radeon RX 9070 XT, according to MLID, should be roughly 15% faster than its competitor from the Green Camp, the RTX 5070 Ti, and roughly match the RTX 4080 Super in gaming performance. The Radeon RX 9070, on the other hand, is expected to be around 12% faster than the RTX 5070. Of course, these performance improvements are limited to rasterization, and when ray tracing enters the scene, the gains are expected to be substantially more modest, as per tradition. Citing our data for Cyberpunk 2077 at 4K with RT, MLID stated that his sources indicate the RX 9070 XT falls somewhere between the RTX 4070 Ti Super and RTX 3090 Ti, whereas the RX 9070 should likely trade blows with the RTX 4070 Super. Considering AMD's track record with ray tracing, this sure does sound quite enticing.

Of course, it will all boil down to pricing once the RDNA 4 cards hit the scene. If AMD does manage to undercut its competitors from NVIDIA by a reasonable margin, there is no doubt that RDNA 4 will be the better choice for most people. However, NVIDIA's undeniable lead in ray tracing, paired with DLSS 4, will presumably make things more complicated than ever before. It is unclear what AMD has up its sleeve with FSR 4. Recent rumors do point at pretty good compatibility, but as with all rumors, be sure to take any pre-release whispers with a grain of salt.
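For readers trying to keep the overlapping "X% faster than Y" claims straight, it can help to normalize everything onto a single index. Below is a minimal sketch (Python) using only the rumored figures above; the baseline choice is arbitrary, and none of these numbers are measured benchmarks:

```python
# Put the rumored "X% faster than Y" claims on one relative index.
# Baseline (RTX 5070 Ti = 100) is arbitrary; all inputs are rumors, not benchmarks.
perf = {"RTX 5070 Ti": 100.0}
perf["RX 9070 XT"] = perf["RTX 5070 Ti"] * 1.15      # rumored ~15% faster
perf["RTX 4080 Super"] = perf["RX 9070 XT"] * 1.00   # rumored rough parity

for card, score in sorted(perf.items(), key=lambda kv: -kv[1]):
    print(f"{card:>15}: {score:6.1f}")
```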
Source:
MLID via YouTube
304 Comments on AMD Radeon RX 9070 XT Rumored to Outpace RTX 5070 Ti by Almost 15%
By the way, 7900 GRE performance is fine, imo, as long as the price is right.
9070xt will have much better RT performance... But like the 4070/5070, RT will be largely unusable anyway at 1440p+
Without using upscaling, which negates any benefit. Image quality will be worse.
Likely the most depressing GPU generation in a long time...
Worst case scenario: RX 7900 XT raster and RTX 4070S RT.
9070 ~$500, ~10% faster than the 5070, and 16GB of VRAM. The deal of the year for mid-range gamers.
9070XT $650/690, ~10% faster than 5070Ti. Brainwashing customers also sells products; it has since the beginning of time... 9070 for $499 and 9070XT for $699 would actually be a jackpot. The 9070XT is NOT going to cost $549. Forget about it. The 9070 non-XT will be ~$500.
It will be only 10-15% slower than the 5080. People are completely brainwashed and don't pay attention to details. They just throw random numbers around.
i know i'm going to make a lot of people mad, but i hope the amd cards are amazing and cheap so nvidia drops the prices on their cards so i can buy one considerably cheaper
as long as any price difference for similar-performing cards falls under 100eur i will never consider buying an amd card; if it's larger, then i guess i'll have to consider the black screen simulator again, oh boy!
Even if the card is 10-15% slower than a 5080, it will get bashed on not having proprietary Nvidia features. Nvidia doesn't lower their prices, they're too busy only caring about the AI/datacenter market, and they have been giving gamers the finger for years yet people are still loyal.
Intel has had recent driver issues, as has NVIDIA while switching away from its ancient control panel.
NVIDIA App Allegedly Degrades Gaming Performance by Up to 15%, But There Is a Fix | TechPowerUp
I don't use the Nvidia app; I am one of those guys who has never installed it. I would say 98% of people's Nvidia driver problems are related to unstable system settings or incompatible hardware, i.e., memory.
Because we're nerds. We're the people that supported ATi/AMD in the face of ever-increasing nVIDIA marketing and Intel's dominance in the space since forever. Because their choices were logical.
We bought the mobile T-birds and the T-breds instead of Intel and put them in a desktop socket.
We bought the A64 multi-core Opterons instead of Intel. We did pin-mods for cores, ran high-voltage Winbond BH-5 and DFI toaster boxes instead of Sammy TCCD on Intel.
We bought 9700 Pros. Actually, we bought 9500 Pros and flashed them to 9700 Pros. We bought RV770s (and v-modded them). We flashed first-run lower-end models for more shaders. That is your audience.
We notice RAM stutter, and applaud when our cards allocate enough so it doesn't happen. We appreciate when you don't make cards like the 3.5GB-on-one-bus-0.5GB-on-another-bus 970, the 12GB 4070 Ti, or the 16GB 5080.
The REAL ones.
We are the nerds that understand how this shit works and want the actual best common-sense technology, and don't give a shit about what's popular. AMD needs to understand that, and they largely did until the last couple of generations, in which they started clock-gating, power-gating, and locking stuff down like nVIDIA...and now shooting for higher and higher margins. That, unfortunately for some, is not the company that gets our vocal support. They need to return to their roots. I want a part that uses a little more power for higher clocks...but also cool designs that leverage ground-breaking technology (that competition may capitalize on later). Is it a small die (or rather a smaller transistor count), so cheap? Does it have the amount of RAM needed for its intended markets (they haven't lost that one yet, thankfully)? Is a design choice good for both their business and consumers? That's what AMD/ATi customers want...not artificial limitations. We are different than most regular people, or even *some* tech enthusiasts.
Using the A16/M2 design methodology (highest clocks versus pure efficiency) would be a nice start...or rather a return to form.
The philosophy (adding a power connector for higher clocks) on the RV770XT was very smart thinking. The efficiency cost of the low-end using low clocks (but keeping all shaders enabled) for the sake of yields was smart too.
Makes me wonder about N48...is it RV790 (read the last paragraph) in disguise, but using the RV770/RV790 game-plan all in one go (essentially, the 9070XT is really like a 4850)?
I guess I'm the only person that will be surprised if it isn't...and I have been saying it's likely for a long time. Hmm...a ring of decoupling capacitors around the die...Hmm.
I miss Anand's writing; his defection to Apple really hurt our scene. He kept it real.
In some respects so did Dave (originally Beyond3D; read 'The missing MUL' by Rys, Noodle will never be forgotten), Scott, Ryan, Alan, Kyle, etc. IMO not enough people have stepped up to replace them.
It wasn't just Intel/AMD that poached 'our' guys, it was also Asus/eVGA etc. I honestly don't know if they all got hired for their potential usefulness, or rather to keep them from exposing everything.
The Steves do a good job for the most part, but they're also a minority (and kind of at the whim of the algorithm, and it shows sometimes). I largely look to Roman these days, and I'm glad when he does a video.
There is more interesting (clock/voltage) information in that video (that helps you understand the process/power curve and potential improvements) than there is in some huge review chains (unboxing/features/review/OC/etc.).
Large credit goes to Anand; he had a touch for getting those interviews and making sure the engineering information was both shared OTR and shared coherently. That article wasn't the only one (even about ATi/AMD GPG) over the years. It would be nice not only if someone would take his place, but if the companies were willing to share those people and that information again, rather than be so guarded. I think more people would be more invested in the hobby and more knowledgeable about how stuff works, which could lead to greater innovation in the space. Both in testing (like frametimes, etc.) and features. Case in point...
Son, the absolute performance of a typical 7900xt (~2800MHz, because of the bandwidth limitation at 2700MHz/'21600' with tight timings) is similar to that of a 4080...and it has always cost less money. Sometimes a LOT less money.
...with more ram. Does it need 20GB? Well, the 4080 needs 16GB. The 5080 needs more than 16GB (and doesn't have it).
The absolute performance of a 7900xt is 60TF, which I would consider the cut-off for 16GB. So does AMD, apparently, because I don't expect most N48 chips to be running >3663MHz, but maybe some will come close!
Most 4080(s) can't do that...so it's certainly a judgement call, and AMD went with 20GB. What's the typical clock on a 40-series...~2855MHz OC'd? 2855MHz × 9728 or 10240 shaders × 2 FLOPs/clock = 55-58.5TF? For reference, the 4090 is 24GB/90TF.
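For anyone checking the arithmetic: those TF figures come from the standard FP32 throughput formula, shaders × 2 FLOPs per clock (FMA) × clock. A quick sketch; the dual-issue shader count for the 7900 XT and the ~2750MHz 4090 clock are my assumptions to make the quoted figures line up, not numbers from the post:

```python
# FP32 TFLOPS = shaders * 2 FLOPs/clock (FMA) * clock.
def tflops(shaders: int, clock_mhz: float) -> float:
    return shaders * 2 * clock_mhz * 1e6 / 1e12

print(f"RTX 4080        (9728 @ 2855): {tflops(9728, 2855):.1f} TF")   # ~55.5
print(f"RTX 4080 Super (10240 @ 2855): {tflops(10240, 2855):.1f} TF")  # ~58.5
print(f"RX 7900 XT     (10752 @ 2800): {tflops(10752, 2800):.1f} TF")  # ~60.2 (dual-issue counted)
print(f"RTX 4090       (16384 @ 2750): {tflops(16384, 2750):.1f} TF")  # ~90.1
```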
Did AMD also clock that GPU way too low at stock? Yes they did. Was it stupid as fuck given MSRP? Yes it was. Was it so they could replace it later? Probably yes. Should you replace it? No you shouldn't.
You should laugh at people with a 4070Ti when you don't run out of RAM and have 50% more compute.
The 4080 12GB was $800 for 40TF with 12GB of RAM. Eight hundred fucking dollars, and you can't comfortably play some games with high settings at 1440p.
They wanted it to be nine hundred apeshit fucking dollars.
The $250 B580 has 12GB of ram. That's some hard cope there guy.
I would take a 7800XT every day of the week because of the value. OC it (58.4 on this test at stock, which I assume W1zzard uses in tests) and you have the same playable settings as a 4070 Ti 12GB. But...16GB.
Overclock a 4070ti and you have a slightly faster 4070 Ti that makes no discernible difference. Still 12GB. Like a B580.
I guess also a 5070 if you want to go that route, but that's still $550 and only 12GB/40TF. I get that some people have a preference for whatever reason...but the 7800xt is/was a VERY good card for its price at launch.
(So good that they put a power limit on it so it wouldn't overclock even MORE [over 20%] into the NEXT tier, and they could more-or-less respin it as N48 and sell it again; this time at its full/higher clock potential.)
Not $800 fucking dollars for a 12GB 40TF card. No. $500 for a 40TF card. Probably soon ~$400 (price drop/9070). With 16GB...because it's over 40TF (when OC'd). Almost like there are general boundaries.
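The pricing complaint reduces to dollars per teraflop. A rough check using only the prices and TF figures quoted in the posts above (the 7800 XT's ~40TF is its overclocked figure, per the post):

```python
# Dollars per FP32 teraflop, using the figures from the posts above.
for name, price_usd, tf in [
    ("4070 Ti / '4080 12GB'", 800, 40),
    ("7800 XT at launch",     500, 40),
]:
    print(f"{name}: ${price_usd / tf:.1f}/TF")   # $20.0/TF vs $12.5/TF
```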
Heavy, heavy cope...or again, just a sign of the times, where too many damn people have drunk Huang's Kool-Aid. Seriously...why would someone even THINK what that guy wrote? It's insane, but common.
5080/5070ti is/will be the same way.
N48 from below, next-gen 192-bit/18GB from the side (but priced lower), 256-bit/24GB from above (priced similarly)...and those 192-bit cards will be better! N48 is good enough, but cheaper.
I haven't even caught up to this thread yet...but my God. This...this is why I'm sad. I shouldn't have to say this. AMD SHOULD BE SAYING THIS.
On one hand, AMD makes cheap/OCable cards, and I appreciate that they're cheap because they're priced for low-clock yields and better power consumption. They do this so they can price a tier (or more) down.
I appreciate they leave generally-decent power limits and room to overclock them to the next tier. THAT IS LITERALLY WHAT THEY DO. ON PURPOSE. BUT IT USES MORE POWER.
On the other hand some people are very not with the program, and that's a bummer. Can't fix...um...'not understanding.'
GDDR6 market spot prices, the last time I checked, were below $40 for 16GB.
In my opinion, if priced right, it will be awesome for local LLMs.
I doubt there will be much difference for games in 2025, but as the past teaches us, more VRAM is better; see the example of the RX 470 4GB vs 8GB.
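On the local-LLM point, VRAM is the binding constraint. A back-of-envelope sketch of what fits in 16GB, assuming roughly 0.5 bytes per parameter for 4-bit quantization plus a couple of GB of KV-cache/runtime overhead (both assumptions are mine, not from the post):

```python
# Rough check: does an N-billion-parameter model fit in a given VRAM pool?
def fits(params_b: float, vram_gb: float,
         bytes_per_param: float = 0.5,   # ~4-bit quantization (assumed)
         overhead_gb: float = 2.0):      # KV cache, activations, runtime (assumed)
    need = params_b * bytes_per_param + overhead_gb
    return need, need <= vram_gb

for model in (7, 13, 24, 32):
    need, ok = fits(model, 16)
    print(f"{model}B: ~{need:.1f} GB needed -> {'fits' if ok else 'too big'} in 16 GB")
```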
But.. I have been running an AMD CPU and mobo for the last 4 years, shit, almost 5 now dammit.. maybe 5.. to me it's awesome.
Benchmarks say otherwise :rolleyes:
0% chance.
I prefer the previous rumour that the Red Devil flagship (3060MHz turbo) is showing performance within 5% of the RTX 4080S; at least that is more believable.
To be within 5% of the RTX 4080S, it would need a 10% actual performance increase per CU at 3060MHz vs RDNA3, which is a stretch (in my original calculation I had the 4080S performing +10% vs the 9070XT, with a 3025MHz boost and a 6% actual performance increase per CU/clock).
Despite AMD claiming a 17.4% clock-for-clock performance increase per CU pair with RDNA3, in reality it achieved only around 6% on average vs RDNA2, despite doubling the per-clock FP throughput, because it was very game dependent (in some games there was even regression despite having 2X the shading throughput). So this time I didn't want to calculate with more than a 6% average performance increase per CU/clock vs RDNA3.
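The model behind these estimates is simple scaling: relative performance ≈ (CU ratio) × (clock ratio) × (per-CU/clock gain). A minimal sketch using the commenter's 6% per-CU figure; the 64-CU count for N48 is a rumor, and the 80CU/2300MHz RDNA3 reference point is my own assumption, so treat the output as illustrative only:

```python
# perf ~ CUs * clock * per-CU IPC factor (crude; ignores bandwidth, etc.).
def rel_perf(cus: int, clock_mhz: float, ipc: float = 1.0) -> float:
    return cus * clock_mhz * ipc

rdna3_ref = rel_perf(80, 2300)        # assumed 7900 GRE-like RDNA3 reference
n48 = rel_perf(64, 3060, ipc=1.06)    # rumored 64 CUs, commenter's +6% per-CU gain
print(f"N48 vs RDNA3 reference: {n48 / rdna3_ref:.2f}x")   # ~1.13x
```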
One of the problems with RTX 5000 is the low clocks, for example a 2660MHz median for the 5080 reference vs 2790MHz for the 4080 reference; I didn't expect that. The other problem is that in some games the architectural change of the SM vs Ada doesn't bring a uniform performance increase in older titles; it's very game dependent (but logically it will get a little better in the future when developers start exploiting the Blackwell architecture).
Depending on the actual median clock achieved in games, the RTX 5070 Ti is going to be around -6% vs the RTX 4080S (+18% performance/$ vs the 4070 Ti Super) in my calculations, and the RTX 5070 only 1% faster than the RTX 4070S (+10% performance/$ vs the 4070 Super).
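For completeness, those performance-per-dollar deltas follow from dividing the performance ratio by the price ratio. The sketch below uses the commenter's performance estimates (the ~1.11x figure is implied by the -6%-vs-4080S estimate) together with MSRPs I am assuming ($749/$799 and $549/$599 respectively):

```python
# perf/$ gain = (performance ratio) / (price ratio) - 1.
def perf_per_dollar_gain(perf_ratio: float, price_new: int, price_old: int) -> float:
    return perf_ratio / (price_new / price_old) - 1

print(f"5070 Ti vs 4070 Ti Super: {perf_per_dollar_gain(1.11, 749, 799):+.0%}")  # ~+18%
print(f"5070 vs 4070 Super:       {perf_per_dollar_gain(1.01, 549, 599):+.0%}")  # ~+10%
```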