Thursday, November 3rd 2022
AMD Radeon RX 7900 XTX Performance Claims Extrapolated, Performs Within Striking Distance of RTX 4090
AMD on Thursday announced the Radeon RX 7900 XTX and RX 7900 XT RDNA3 graphics cards. With these, the company claims to have repeated its feat of a 50+ percent performance-per-Watt gain over the previous generation, the same kind of gain that propelled the RX 6000-series to competitiveness with NVIDIA's fastest RTX 30-series SKUs. AMD's performance claims for the Radeon RX 7900 XTX put the card anywhere from 50% to 70% faster than the company's current flagship, the RX 6950 XT, when tested at 4K UHD resolution. Digging through these claims and piecing together relevant information from the endnotes, HXL was able to draw an extrapolated performance comparison between the RX 7900 XTX, the real-world-tested RTX 4090, and the previous-generation flagships RTX 3090 Ti and RX 6950 XT.
The graphs put the Radeon RX 7900 XTX menacingly close to the GeForce RTX 4090. In Watch_Dogs Legion, the RTX 4090 is 6.4% faster than the RX 7900 XTX. Cyberpunk 2077 and Metro Exodus see the two cards evenly matched, with a delta under 1%. The RTX 4090 is 4.4% faster in Call of Duty: Modern Warfare II (2022). Accounting for the pinch of salt usually associated with launch-day first-party performance claims, the RX 7900 XTX would end up within 5-10% of the RTX 4090, but pricing changes everything. The RTX 4090 is a $1,599 (MSRP) card, whereas the RX 7900 XTX is $999. Assuming the upcoming RTX 4080 (16 GB) is around 10% slower than the RTX 4090, the main clash for this generation will be between the RTX 4080 and the RX 7900 XTX. Even here, AMD gets ahead on pricing, as the RTX 4080 was announced with an MSRP of $1,199, 20% pricier than the RX 7900 XTX. With the FSR 3.0 Fluid Motion announcement, AMD also blunted NVIDIA's DLSS 3 Frame Generation performance advantage.
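For readers who want to follow the arithmetic behind such an extrapolation, the basic approach is simple: take a measured RX 6950 XT result, scale it by AMD's claimed per-title uplift, and set it against a measured RTX 4090 result. The sketch below does exactly that; every FPS figure in it is a hypothetical placeholder rather than a number from AMD's endnotes or any review.

```python
# Minimal sketch of the extrapolation; every FPS figure below is a hypothetical placeholder.
fps_6950xt = 62.0        # measured RX 6950 XT 4K result for some title (placeholder)
claimed_uplift = 1.54    # AMD endnote-style claim of "+54%" for that title (placeholder)
fps_4090 = 101.0         # independently measured RTX 4090 4K result (placeholder)

fps_7900xtx = fps_6950xt * claimed_uplift        # extrapolated, not measured
gap_pct = (fps_4090 / fps_7900xtx - 1) * 100     # RTX 4090's lead over the extrapolated XTX

# Price per frame at MSRP ($999 vs $1,599) is where the comparison tilts toward AMD.
usd_per_fps_xtx = 999 / fps_7900xtx
usd_per_fps_4090 = 1599 / fps_4090

print(f"RTX 4090 lead: {gap_pct:.1f}% | $/FPS: XTX {usd_per_fps_xtx:.2f} vs 4090 {usd_per_fps_4090:.2f}")
```

With the placeholder numbers above, the extrapolated XTX lands roughly 6% behind the RTX 4090 while costing noticeably less per frame, which is the shape of the comparison in the graphs.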
Source:
harukaze5719 (Twitter)
164 Comments on AMD Radeon RX 7900 XTX Performance Claims Extrapolated, Performs Within Striking Distance of RTX 4090
My interest will be in getting an XTX and tuning it to see if I can run it at 200-250 W without losing too much performance. If the 7900 XTX can't manage that, perhaps the 7800 XT will do. My HTPC is currently rocking a 6700 10 GB, which sips about 125 W under full load after an undervolt, and I really don't think my case or slim furniture can handle anything more than 200-250 W. The irony is that it's right underneath the only 4K display in the house, so it needs the most GPU power.
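For anyone wanting to script that kind of power-capping experiment on Linux, the amdgpu driver exposes a hwmon power1_cap file (in microwatts). The card index, path and wattage below are assumptions and will differ per system; this is only a sketch, not a recommendation of settings.

```python
# Rough sketch, assuming Linux with the amdgpu driver: cap board power via hwmon.
# Card index and target wattage are assumptions; run as root and check
# power1_cap_max on your own system before writing anything.
import glob

TARGET_WATTS = 225  # somewhere in the 200-250 W window mentioned above

caps = glob.glob("/sys/class/drm/card0/device/hwmon/hwmon*/power1_cap")
if not caps:
    raise SystemExit("no amdgpu hwmon power cap found for card0")

with open(caps[0], "w") as f:
    f.write(str(TARGET_WATTS * 1_000_000))  # the interface takes microwatts

print(f"power cap set to {TARGET_WATTS} W via {caps[0]}")
```

On Windows, the power-limit slider in Radeon Software's tuning page is the equivalent control.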
It's the same old claim that AMD (or Intel) will catch up with Nvidia through better software over time, but it never happens. If driver overhead were holding back performance, we would see progressively growing overhead with the higher-tier cards, holding them back to the point where high-end cards become almost pointless. The performance figures we've seen so far do not indicate this, and when reviews arrive, we can probably discredit that claim completely.
And no, (PC) games are not optimized for specific GPU architectures. Games are written using DirectX or Vulkan these days, neither of which is tailored to a specific GPU architecture. Games may have some exclusive features requiring specific API extensions, but these don't skew the benchmark results.
BTW, did anyone catch the review embargo? I agree, I expect some good deals from both makers, so people had better set some price notifications to grab the best deals.
My assumption is that in the 5800X TPU test bed with the current game selection, the RTX 4090 loses around 10% of its potential.
For example:
RTX 4090 theoretical 4K 139%
RTX 4090 realized 4K 125% (-10%)
Full AD103 with the quoted clocks 100%
I may be wrong; we will see what actual performance the current RTX 4080 model config will achieve (304 TMUs/TCs vs 336, and -4% clocked vs my proposed AD103 specs) in relation to the RTX 4090.
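Put as a sketch, the percentages above amount to: theoretical throughput scales with unit count times clock relative to the proposed full AD103 config, and the "realized" figure knocks roughly 10% off for the 5800X test bed. The unit counts and clock delta come from the post above; the 10% scaling loss is an assumption, not a measurement.

```python
# Sketch of the percentages above; nothing here is a measurement.
ad103_full_tc = 336     # proposed full AD103 config (baseline = 100%)
rtx4080_tc = 304        # cut-down RTX 4080 config from the leak
clock_delta = 0.96      # "-4% clocked" versus the proposed full AD103 specs

rtx4090_theoretical = 1.39   # "RTX 4090 theoretical 4K 139%"
testbed_scaling = 0.90       # assumed ~10% lost to the 5800X test bed / game selection

rtx4090_realized = rtx4090_theoretical * testbed_scaling            # ~125%
rtx4080_vs_full_ad103 = (rtx4080_tc / ad103_full_tc) * clock_delta  # ~87%

print(f"RTX 4090 realized ~{rtx4090_realized:.0%}, "
      f"RTX 4080 config ~{rtx4080_vs_full_ad103:.0%} of full AD103")
```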
AMD has a 300 sq. mm die vs Nvidia's 2x larger die. Of course, AMD's product is around 60-70% of the cost of Nvidia's.
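A back-of-the-envelope way to see why die size maps to cost so directly: smaller dies both fit more candidates on a 300 mm wafer and yield better for a given defect density. The defect density and wafer price below are made-up illustrative numbers, and this ignores the 7900 XTX's separate MCD chiplets, packaging and board costs entirely.

```python
# Back-of-the-envelope die cost sketch; defect density and wafer cost are assumptions.
from math import exp, pi, sqrt

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> float:
    """Common approximation for square-ish dies on a round wafer."""
    r = wafer_diameter_mm / 2
    return pi * r**2 / die_area_mm2 - pi * wafer_diameter_mm / sqrt(2 * die_area_mm2)

def poisson_yield(die_area_mm2: float, defects_per_mm2: float) -> float:
    """Simple Poisson yield model: probability a die has zero defects."""
    return exp(-die_area_mm2 * defects_per_mm2)

D0 = 0.0007         # defects per mm^2 -- assumed, purely illustrative
WAFER_COST = 15000  # USD per wafer -- assumed, purely illustrative

for name, area in [("~300 mm^2 GCD", 300.0), ("~600 mm^2 monolithic die", 600.0)]:
    n = dies_per_wafer(area)
    y = poisson_yield(area, D0)
    print(f"{name}: {n:.0f} candidates, {y:.0%} yield, ~${WAFER_COST / (n * y):.0f} per good die")
```

With those made-up inputs, the smaller die works out to well under half the cost per good die, which is the gist of the pricing point above.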
I only hope the AMD GPUs are as fast as advertised. Some reviewers claim it is already a big joke and a cash grab.
The 4080 being 60% of the 4090 means that if the 4080 is 100%, the 4090 is about 167% (1 / 0.6).
NVIDIA GeForce RTX 4080 3DMark TimeSpy scores have been leaked as well - VideoCardz.com
NVIDIA GeForce RTX 4080 Graphics Card Geekbench 5 Benchmark Leaks Out, Up To 15% Faster Than RTX 3090 Ti (wccftech.com)
4070, 4080 and the 4090. Very compelling to see how it all plays out once they have all been released.
It is also highly likely the 4080 16 GB will get repositioned in MSRP. $200 or even $300 extra just for somewhat better RT perf is steep. Too steep, and that's giving Nvidia the benefit of the doubt that the 4080 won't get eclipsed by AMD's 7900 XT (yes, the XT). I honestly think the 4080 is going to be only situationally equal and overall lower in raster perf, and even the 7900 XT will be highly competitive with it, given the tiny gap between the XTX and the XT.
4K struggles with efficiency because you're basically wasting performance on pixels you'll never notice at the supposed ideal view distance (there's a quick pixels-per-degree sketch below). You'll make a choice between a performance sacrifice for no uptick in graphical fidelity, versus sitting closer to see it all and killing your neck/back. At longer view distances, you can make do with a lower resolution for the exact same experience. Another issue is scaling: 4K requires it for small text, or it's just unreadable.
It's a thing to consider ;) Not much more; the fact remains 4K is becoming more mainstream, so there's simply more on offer, specifically also OLED. But the above is where the statement "1440p is enough" really comes from. It's a sweet spot, especially for a regular desktop setting. Couch gaming follows a different ruleset, really. But do consider the advantages too. I can still play comfortably at 3440x1440 on a GTX 1080... (!) 4K is going to absolutely murder this card though. Jumping on 4K means tying yourself to a higher expense to stay current on GPU, or sacrificing more FPS for the wanted IQ. Nvidia has every opportunity to tweak the line-up, and the better half isn't even out... They always ran the risk of misfires because they release first.
Also, AMD already markets an "8K" or 4K-ultrawide experience with the Radeon RX 7900 series GFX.
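To put a rough number on the "pixels you'll never notice" point, angular pixel density is the usual yardstick: around 60 pixels per degree is roughly where 20/20 vision stops resolving extra detail. The screen sizes and viewing distances below are just example figures.

```python
# Angular pixel density sketch; screen sizes and viewing distances are example figures.
from math import atan, degrees

def pixels_per_degree(h_pixels: int, diag_inches: float, distance_cm: float,
                      aspect: float = 16 / 9) -> float:
    width_cm = diag_inches * 2.54 * aspect / (aspect**2 + 1) ** 0.5  # screen width from the diagonal
    h_fov = 2 * degrees(atan(width_cm / (2 * distance_cm)))          # horizontal field of view
    return h_pixels / h_fov

for label, px, diag, dist in [('27" 1440p monitor', 2560, 27, 80),
                              ('27" 4K monitor', 3840, 27, 80),
                              ('55" 4K TV on the couch', 3840, 55, 250)]:
    print(f"{label} at {dist} cm: {pixels_per_degree(px, diag, dist):.0f} PPD")
```

At a typical desk distance, a 27-inch 1440p panel already sits right around that threshold, which is essentially the "1440p is enough" argument in numbers.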
A local store with offers (quantity of offers is in brackets):
And anyway my point wasn't what Nvidia will do, but what it could achieve based on AD103's potential... According to the leak, even an OC'd cut-down RTX 4080 (304 TCs enabled vs the 336 TCs of my higher-clocked full AD103 config...) appears to be only 20% slower than the RTX 4090 in the 3DMark Time Spy Performance preset and 27% slower in the Extreme 4K preset...
You do your math, I will do mine!
For example, the theoretical shading-performance delta alone is useless for extracting the performance difference between two models; it's much more complex than that...
NVIDIA GeForce RTX 4080 16 GB Specs | TechPowerUp GPU Database
These posts make no sense whatsoever. The 4080 isn't in the correct place in that chart, obviously, and "local store offers" tell just about jack shit about where 4K is for gaming. It's marketing; you can find IoT devices like a fridge with "4K support".
Resolution was, is and will always be highly variable, especially now with FSR/DLSS. There is also a resolution for every use case; it's not true that the only way is up. Enough is enough.