Wednesday, May 17th 2023
![AMD Radeon Graphics](https://tpucdn.com/images/news/amdradeon-v1721205152158.png)
Volt-modded RX 7900 XTX Hits 3.46 GHz, Trades Blows with RTX 4090
An AMD Radeon RX 7900 XTX graphics card is capable of trading blows with the NVIDIA GeForce RTX 4090, as overclocker jedi95 found out. With its power limits unlocked, the RX 7900 XTX reached engine clocks as high as 3.46 GHz, significantly beyond the "architected for 3.00 GHz" claim AMD made at its product unveil last fall. At these frequencies, the RX 7900 XTX trades blows with the RTX 4090, a segment above its current rival, the RTX 4080.
Squeezing 3.46 GHz out of the RX 7900 XTX is no child's play. jedi95 used an Elmor EVC2SE module to volt-mod an ASUS TUF Gaming RX 7900 XTX, essentially removing its power limit altogether. He then supplemented the card's power supply so it could draw as much as 708 W (peak) to hold its nearly 1 GHz overclock. A surprising aspect of this feat is that no exotic cooling solution, such as a liquid-nitrogen evaporator, was used; a full-coverage water block and DIY liquid cooling did the job. The feat drops a major hint at how AMD could design the upcoming Radeon RX 7950 XTX despite having maxed out the "Navi 31" silicon with the RX 7900 XTX. The company could re-architect the power-supply design to significantly increase power limits, and possibly even get the GPU to boost to around the 3 GHz mark.
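As a back-of-the-envelope check on those figures, the classic CMOS dynamic-power approximation P ≈ C·V²·f puts the numbers in perspective. The sketch below starts from the card's 355 W reference board power and an approximate 2.5 GHz stock boost clock; the stock and modded voltage values are illustrative assumptions, not figures reported by jedi95.

```python
# Rough sanity check of the reported 708 W peak using the classic
# CMOS dynamic-power approximation: P ~ C * V^2 * f.
# Stock power/clock are approximate board specs; both voltage values
# are assumptions for illustration, not reported numbers.

STOCK_POWER_W = 355.0   # RX 7900 XTX reference total board power
STOCK_CLOCK_GHZ = 2.5   # approximate stock boost clock
STOCK_VOLTAGE = 1.05    # assumed stock core voltage

MOD_CLOCK_GHZ = 3.46    # clock reported by jedi95
MOD_VOLTAGE = 1.20      # assumed volt-modded core voltage

scaled = STOCK_POWER_W * (MOD_CLOCK_GHZ / STOCK_CLOCK_GHZ) \
                       * (MOD_VOLTAGE / STOCK_VOLTAGE) ** 2

print(f"Estimated board power at {MOD_CLOCK_GHZ} GHz: {scaled:.0f} W")
# -> roughly 640 W, in the same ballpark as the observed 708 W peak
# (VRM losses, memory power and transient spikes account for the rest).
```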
Sources:
jedi95 (Reddit), HotHardware
75 Comments on Volt-modded RX 7900 XTX Hits 3.46 GHz, Trades Blows with RTX 4090
I think comparing AD102 and N31 on die area and transistor count isn't going to be accurate in justifying it as a smaller processor. If anything, N31's design might actually be more complex, all things considered.
Not that either of us can actually verify our claims; I just strongly believe that for what they are, and with graphics as the intent, they are much closer together than apart.
I will bring up my second argument. When I see ray tracing in TWWH, XCOM or ARPGs, I will care more. I haven't seen many posts about the ray tracing performance of Diablo 4, but there are plenty of people enjoying that, and the most popular game right now, Age of Wonders 4, has no posts on TPU about ray tracing performance.
There are so many culture-war issues in today's society that the truth is hard to see. I will expand: there have been posts on TPU establishing that TPU users are AMD-centric. I look at it another way. Let's look at the cost of the 4090 vs. the cost of the 7900 XTX: with the difference, you could go from an ASUS Prime B650 to an MSI X670E Ace Max, and from a 7600 to a 7950X. You could also buy the most expensive Cooler Master HAF-whatever case, go full ARGB, and buy into the Corsair ecosystem instead of a Deepcool Matrix 55. Is RT really worth that much?
The thing people who own 7000-series cards appreciate is that you generally don't have to do anything to enjoy 4K high-refresh-rate panels with high, butter-smooth FPS and high fidelity (with Mini LED you can generally turn the contrast and colors way up), in games that have come full circle with the best Japanese titles coming to the PC Master Race. There are more people excited about Armored Core 6 than not. We now get updates for games like CP2077 and Gears 5, when the pool of quality titles is so much bigger than just those. Greedfall, anyone? There are also plenty of older games like Red Faction: Guerrilla that are sweet on modern systems.
Unreal 5 and the open adoption of DX ray tracing are the last piece the RT fans don't give context to. The PS5 and Xbox adopted ray tracing on AM4-era CPUs and 6000-series GPUs. Those 3 months AMD spent updating the 7000 series were also to make sure it worked with the 6000 series and provided the benefits of more bandwidth. We will see in about a year, when RT games look no better than Unreal 5, but there will be tons more Unreal 5 games that have native DX ray tracing and don't need Nvidia's version. The PC has many different software options, but consoles are all about gaming.
If you really step back, you will see there are many similarities to the FreeSync vs. G-Sync debate, and we all know how that turned out. It's like the people who bash FSR, when it's the only thing you have gotten for free from any GPU vendor that gives 10-year-old cards performance improvements. Of course someone is going to tell me how much better DLSS is than FSR, but DLSS is not open. Sure, Nvidia released an SDK to the wild for DLSS, but will that support your 2000-series card or 1080 Ti?
Frankly, this exact experience is what I always loved about Nvidia cards too. Pascal was definitely similar in experience. I don't need it any other way! I'm SO past comparing AA methods and whatnot. It's completely irrelevant, just like every option that isn't universally available in games on any card. I think you and @Dr. Dro are right: in the current state of affairs, Ada is the better one. The point I'm pushing is more about its long-term prospects, its potential if you will. That's an old AMD story... so much potential, but... This time, though, the potential is already proven with Zen. Chiplets work. I think that is why I view RDNA3 as superior. The cost and yield advantages are undeniably there and will bring a much bigger benefit to gaming perf than whatever Ada is.
Some specifics that are quite remarkable:
- perf/W is stellar on RDNA3, despite the presence of an interconnect, i.e. more hardware to facilitate its design
- support for excellent VRAM amount within its TDP bracket
- price/frame substantially better than Ada (see the sketch after this list)
This is a much better list than what Ryzen 1 brought to the table. Chiplets are The Way.
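To make the price/frame point concrete, here is a minimal sketch of the arithmetic. The launch MSRPs are real; the average-FPS figures are placeholders you would substitute from whichever review and resolution you care about.

```python
# Minimal price-per-frame comparison. Launch MSRPs are real; the
# average-FPS figures are placeholders, not review data.

cards = {
    #  name          (USD launch MSRP, assumed avg FPS at 4K)
    "RX 7900 XTX": (999,  100),   # FPS value is a placeholder
    "RTX 4080":    (1199, 100),   # FPS value is a placeholder
}

for name, (price, fps) in cards.items():
    print(f"{name}: ${price / fps:.2f} per frame")

# With equal raster performance assumed, the XTX's lower MSRP alone
# makes it ~17% cheaper per frame.
```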
Chiplets seem to work well for gamers using an X3D CPU... that is the example I was pointing at earlier when I said smaller die, better arch, which you managed to turn into a 7700X comparison ;)
But yeah... that idle usage. Damn. Can't be having that now. Guess you're better off power-limiting a 13th-gen Core instead, yeah... I mean, we don't really need high min. FPS in games, right, with that idle usage. It's much better to use 3x the load power in gaming!
Here's a pro tip against idle usage: turn the PC off. You use less than a single watt, and it boots in 15 seconds.
Not really sure how the number of transistors is relevant here. Transistor count isn't indicative of a product being inherently more expensive or better; it varies widely, as different features on the die require differing amounts of transistors.
Would the 4080 be better without the RT parts? I'm not sure this is relevant, given AMD has RT acceleration built into its cores as well and would theoretically also gain by getting rid of those and replacing them with more raster hardware. On the architectural level, I'm not sure I'd call AMD much slower in RT. If you look at Fortnite's semi-recent UE5 update, you can see the 7900 XTX can easily perform just as well as the 4080. Clearly there is a lot of performance left on the table in many games on RDNA3 cards (which makes sense, given Nvidia sponsors so many of the games that do include RT), so I'd not rush to say AMD's RT is terrible on an architectural level when you cannot ignore the software component of that equation. I can assure you they don't.
At idle, both Intel and AMD consume around the same amount of power.
Chiplets are the future for desktop parts. Nvidia said as much in a 2017 white paper, and Intel is rushing as fast as it can to get there. There are far too many benefits to ignore for any chip larger than ~120 mm². Heck, the idle power usage excuse isn't even a valid one, given AMD and Intel are within 2 watts of each other in that regard.
Intel does not idle at 20+ watts, lol. Mine is streaming 2 videos with around 15 browser tabs open while on a Discord call, at 6 watts (12900K). My 3700X needs around 40 for the same workload. Only the mobile Zen parts get close to Intel in that regard. You misunderstood the point. I used the 4070 Ti replying to a guy saying the XTX beats the 4090 in some games. It's completely irrelevant, exactly as irrelevant as the 70 Ti beating the XTX in some games. Doesn't matter; it's an outlier.
Yeah... so... euhm... let's try and pull a general statement from this, then.
Or this, I especially love that monolithic 13900K there. This one is great for context. See, watts per frame (or per point) is nothing unless you factor in the actual total usage and performance. What it underlines is this: the results are all over the place, chiplet excels in about as many use cases as monolithic, and this is a wildly moving target now as stuff gets optimized for both chiplets and big.LITTLE.
Also, in your graph the 3D still sucks in single-thread efficiency, lol. In order to get the 95 pts shown in your graph, it drops its clock speeds to the point where it's only as fast as a 13400F, and the 13400F at that point is much more efficient, lol (50+% more, to be exact).
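Since the graph itself isn't reproduced in this thread, here is the shape of that efficiency argument with stand-in numbers (both the scores and the wattages below are hypothetical, chosen only to show how equal scores at different package power produce a ~50% efficiency gap).

```python
# Illustration of the single-thread efficiency comparison above.
# All numbers are hypothetical stand-ins for the (not reproduced)
# graph: efficiency = benchmark score / package power.

chips = {
    #  name                    (single-thread score, package watts)
    "7950X3D (clocked down)": (780, 38),   # hypothetical values
    "13400F":                 (780, 25),   # hypothetical values
}

for name, (score, watts) in chips.items():
    print(f"{name}: {score / watts:.1f} pts/W")

# Same score, but 38 W vs 25 W: the lower-power part comes out
# ~50% more efficient, which is the comparison being made above.
```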
Wtaf CPU idle watts have to do with a GPU OC is beyond me.
Maybe make a thread, call it "come get some 4090 love" or something.
And no, the 12900K isn't a power hog. It's actually very, very efficient for my workload. Gaming, it's below 60 W unless I drop to 1080p, in which case it goes up to 100 on the heaviest game in existence right now (TLOU), and in every other workload I do it's sitting below 20, usually below 10. I'm pretty impressed by its efficiency, actually.
My old 6800 XT used to idle around 7-8 watts on a single 34-inch ultrawide display.
On my 7900 XTX, idle is 50 watts on the same display.
W1zzard's cards show less idle wattage, so it varies with your card and monitor combo.
They are both megacorporations interested in earning your money. Neither is your friend, and neither is interested in cutting you a particularly good deal; they just want to dig at each other and make as much money as possible in the process. Buy what suits your needs or corresponds to your ideological bias; at the end of the day, both sides of this pitiless argument are going to be playing the exact same video games, arguably at the same level of fidelity overall. This clubism is no different from console wars, except thrice as pointless. Then again, 3 times 0 is still 0. We are in resounding agreement about that: chiplets are the future, hold the most potential, and MCMs are going to be necessary to continue scaling forward, improving yields, managing costs and scaling performance further. The 7900 XTX's "underwhelming" performance is, IMO, excusable given that it is the very first design of its kind, much like the R9 Fury X and its HBM pitch.
A lot of the anger at words such as "underwhelming" comes from their generally negative connotation, and people are very proud; but as you perfectly stated, it obviously performs exceptionally well even having missed its intended performance targets. That's why AMD did not ask more money for it, and instead pitched it against the 4080, after all. I can still mostly run every game out there at 4K with maximum settings at great frame rates on my RTX 3090, so I just don't see why anyone with a 7900 XT or XTX cannot do the same. Some of the games that use more RT effects require DLSS, but I toggle that to quality and go on my merry way. Same as you would do with FSR on an AMD card. When the alternative is spending $1,600 to $2,000 on a 4090, I'm not complaining.
And here I am rocking a 300 W power limit on my 4090 Suprim Liquid on a daily basis. Oh right, this is purely for benchmarking.