Wednesday, May 17th 2023

Volt-modded RX 7900 XTX Hits 3.46 GHz, Trades Blows with RTX 4090

An AMD Radeon RX 7900 XTX graphics card is capable of trading blows with the NVIDIA GeForce RTX 4090, as overclocker jedi95 found out. With its power limits unlocked, the RX 7900 XTX was found reaching engine clocks as high as 3.46 GHz, significantly beyond the "architected for 3.00 GHz" claim AMD made at its product unveil last Fall. At these frequencies, the RX 7900 XTX trades blows with the RTX 4090, a segment above its current rival, the RTX 4080.

Squeezing 3.46 GHz out of the RX 7900 XTX is no child's play. jedi95 used an Elmor EVC2SE module to volt-mod an ASUS TUF Gaming RX 7900 XTX, essentially removing its power limit altogether. He then supplemented the card's power supply so it could draw as much as 708 W (peak) to hold its nearly 1 GHz overclock. A surprising aspect of this feat is that an exotic cooling solution, such as a liquid-nitrogen evaporator, wasn't used; a full-coverage water block and DIY liquid cooling did the job. The feat drops a major hint at how AMD could design the upcoming Radeon RX 7950 XTX despite having maxed out the "Navi 31" silicon with the RX 7900 XTX: the company could re-architect the power-supply design to significantly increase power limits, and possibly even get the GPU to boost to around the 3 GHz mark.
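For a rough sense of where a 708 W peak comes from: dynamic power in a GPU scales roughly with clock speed multiplied by core voltage squared, so holding a nearly 1 GHz overclock at raised voltage quickly leaves the reference 355 W board power behind. The sketch below is a minimal back-of-the-envelope estimate, assuming the reference card's roughly 355 W TBP and 2.5 GHz boost clock and using purely illustrative voltages, since the source doesn't state what voltage jedi95 actually ran:

```python
# First-order estimate of how board power scales with an overclock.
# Dynamic power scales roughly with frequency * voltage^2; real board power
# also covers memory and VRM losses, so this is a ballpark figure only.

def scaled_power(p_stock_w: float, f_stock_ghz: float, f_oc_ghz: float,
                 v_stock: float, v_oc: float) -> float:
    """Scale stock board power by the clock ratio and the square of the voltage ratio."""
    return p_stock_w * (f_oc_ghz / f_stock_ghz) * (v_oc / v_stock) ** 2

# Reference RX 7900 XTX: ~355 W TBP, ~2.5 GHz boost clock.
# The two voltages below are illustrative guesses, not values from the source.
estimate = scaled_power(355, 2.5, 3.46, 0.95, 1.15)
print(f"Estimated board power at 3.46 GHz: ~{estimate:.0f} W")
# Prints roughly 720 W, in the same ballpark as the 708 W peak reported here.
```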
Sources: jedi95 (Reddit), HotHardware

75 Comments on Volt-modded RX 7900 XTX Hits 3.46 GHz, Trades Blows with RTX 4090

#26
Dr. Dro
Vayra86Architectural superiority?!

Did you consider this?

vs

And let's not forget the 5nm GCD is only 300mm2 as well.

Ada is not superior; it's more dense, a bigger die, and a LOT more expensive to bake. Most other metrics that originate from the hardware are much the same, such as power consumption / W per frame. That 20%-ish gap in raster translates almost directly into the transistor count gap as well. What else has Nvidia got? Featureset. Not hardware or architecture. That's why they push DLSS3 so hard.

This is the equivalent of early-day Ryzen vs Intel monolithic, and it's clear AMD is building a competitive advantage if you look at die size / perf.


Oh lol, missed this one :D
With 25% of the cache (very transistor heavy) plus 12.5% of SMs/CUDA cores disabled, how much of that die area is simply dark on an RTX 4090, and how much transistor logic has AMD simply moved off-die into the MCDs? There's also the consideration of dedicated tensor cores, which add to die area but aren't normally used for regular shader processing.

I think comparing AD102 and N31 on die area and transistor count isn't going to be accurate in justifying it as a smaller processor. If anything, N31's design might actually be more complex, all things considered.

Not that either of us can actually verify our claims; I just strongly believe that for what they are, and with graphics as the intent, they are much closer than they are apart.
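For anyone who wants to sanity-check the headline figures in this back-and-forth, here is a quick sketch using the die sizes and transistor counts as commonly listed in spec databases; it only crunches the raw numbers and says nothing about how either floorplan actually allocates its area, which is exactly the point of contention:

```python
# Back-of-the-envelope numbers for the AD102 vs. Navi 31 comparison above.
# Die sizes and transistor counts are the commonly listed spec figures and
# should be treated as approximate.

ad102_area_mm2 = 608.5            # monolithic TSMC 4N die
ad102_transistors_b = 76.3

navi31_gcd_mm2 = 304.4            # 5 nm graphics compute die
navi31_mcds_mm2 = 6 * 37.5        # six 6 nm memory cache dies
navi31_transistors_b = 57.7       # GCD + MCDs combined

navi31_total_mm2 = navi31_gcd_mm2 + navi31_mcds_mm2

print(f"AD102 density:   {ad102_transistors_b / ad102_area_mm2:.3f} B transistors/mm2")
print(f"Navi 31 density: {navi31_transistors_b / navi31_total_mm2:.3f} B transistors/mm2 (GCD + MCDs)")

# The RTX 4090 ships with 128 of 144 SMs and 72 of 96 MB of L2 enabled,
# so part of the full AD102 die is indeed dark on that particular SKU.
print(f"RTX 4090 enabled SMs: {128 / 144:.1%}, enabled L2: {72 / 96:.1%}")
```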
Posted on Reply
#27
Lionheart
fevgatosUhm, that's silly. The 4070ti beats the 7900xtx in some games as well. Is that a testament to how great a value it is?
Lol what Nvidia Koolaid are you drinking ....
Posted on Reply
#28
fevgatos
LionheartLol what Nvidia Koolaid are you drinking ....
Are you suggesting there isn't a single game in which the 4070ti beats the 7900xtx? What AMD koolaid are you drinking?
Posted on Reply
#29
Hxx
fevgatosAnyways we have the 4080 vs the xtx comparison, amd is achieving much less with much more. Bigger die, bigger bus widths, more vram, more of everything, it just ties the 4080 in raster, loses in rt.
Ignoring everything else but raster, the 7900 XTX is faster than a 4080 in most games, including minimum fps. Depending on which games you prefer, it can be noticeably faster in a few titles. I don't recall seeing a title where the 4080 is noticeably faster (maybe Civilization).
Posted on Reply
#30
fevgatos
HxxIgnoring everything else but raster, the 7900 XTX is faster than a 4080 in most games, including minimum fps. Depending on which games you prefer, it can be noticeably faster in a few titles. I don't recall seeing a title where the 4080 is noticeably faster (maybe Civilization).
The difference is negligible, like less than 5% on average, while the xtx is much bigger in transistor count, bus width etc.
Posted on Reply
#31
kapone32
How many PC Games are out there? How many of those Games support Ray Tracing? If the Ray Tracing uptake was even 5% of the overall library it would matter more than it does now with less than 1% total for all Games you can play on PC. The people arguing that the 4070TI is faster than the 7900XTX are looking at day one numbers and not taking into account that owners of the 7000 series cards are not complaining about the issues that people have blasted all over social media about console ports with cards with less than 12GB.

I will bring my second argument. When I see Ray tracing in TWWH, XCOM or ARPGs I will care more. I know I have not seen many posts about the Ray tracing performance of Diablo 4 but there are plenty of people enjoying that and the most popular Game right now Age of Wonders 4 has no posts on TPU about Ray Tracing performance.

There are so many Culture War issues in today's society that the truth is hard to see. I will expand: there was a post (or posts) on TPU that established that TPU users are AMD-centric. I look at it another way. Let's look at the cost of the 4090 vs the cost of the 7900XTX: with the difference, you could go from the Asus Prime B650 to the MSI X670E Ace Max. You could go from a 7600 to a 7950X. You could also buy the most expensive Cooler Master HAF case, whatever it is. You could go full ARGB and buy into the Corsair ecosystem, coming from a Deepcool Matrix 55. Is RT really worth that much?

The thing about 7000 that people that own them appreciate is you generally don't have to do anything to enjoy 4K High refresh rate panels with high FPS, butter smooth, High Fidelity (With Mini LED you can generally turn the contrast and colors way up) Games that have come full circle with the best Japanese Games coming to the PC Master Race. There are more people excited about Armored Core 6 than not. Now we get updates for Games like CP2077 and Gears 5 when the amount of quality is so much more than just those, Greedfall anyone? There are also plenty of older Games like Red Faction Guerilla that are sweet on modern systems.

Unreal 5 and open adoption of DX Ray Tracing is the last piece that the RT fans don't give context to. If the PS5 and Xbox1 are adopting Ray Tracing on AM4 CPUs and 6000 series GPUs, then those 3 months that AMD spent updating the 7000 series were also to make sure it worked with the 6000 series and provided the benefits of more bandwidth. We will see in about a year, when RT Games look no better than Unreal 5 but there will be tons more Unreal 5 Games that have native DX Ray Tracing and don't need Nvidia's version. The PC has many different software options, but consoles are all about Gaming.

If you really step back you will see that there are many similarities to the Freesync vs Gsync debate and we all know how that turned out. It's like the people that bash FSR, when it's the only thing you have gotten for free from any GPU vendor that gives cards 10 years old performance improvements. Of course someone is going to tell me how much better DLSS is than FSR but DLSS is not open. Sure Nvidia released a SDK to the wild for DLSS but will that support your 2000 series card or 1080TI?
Posted on Reply
#32
Vayra86
kapone32How many PC Games are out there? How many of those Games support Ray Tracing? If the Ray Tracing uptake was even 5% of the overall library it would matter more than it does now with less than 1% total for all Games you can play on PC. The people arguing that the 4070TI is faster than the 7900XTX are looking at day one numbers and not taking into account that owners of the 7000 series cards are not complaining about the issues that people have blasted all over social media about console ports with cards with less than 12GB.

I will bring my second argument. When I see Ray tracing in TWWH, XCOM or ARPGs I will care more. I know I have not seen many posts about the Ray tracing performance of Diablo 4 but there are plenty of people enjoying that and the most popular Game right now Age of Wonders 4 has no posts on TPU about Ray Tracing performance.

There are so many Culture War issues in today's society that the truth is hard to see. I will expand: there was a post (or posts) on TPU that established that TPU users are AMD-centric. I look at it another way. Let's look at the cost of the 4090 vs the cost of the 7900XTX: with the difference, you could go from the Asus Prime B650 to the MSI X670E Ace Max. You could go from a 7600 to a 7950X. You could also buy the most expensive Cooler Master HAF case, whatever it is. You could go full ARGB and buy into the Corsair ecosystem, coming from a Deepcool Matrix 55. Is RT really worth that much?

The thing about 7000 that people that own them appreciate is you generally don't have to do anything to enjoy 4K High refresh rate panels with high FPS, butter smooth, High Fidelity (With Mini LED you can generally turn the contrast and colors way up) Games that have come full circle with the best Japanese Games coming to the PC Master Race. There are more people excited about Armored Core 6 than not. Now we get updates for Games like CP2077 and Gears 5 when the amount of quality is so much more than just those, Greedfall anyone? There are also plenty of older Games like Red Faction Guerilla that are sweet on modern systems.

Unreal 5 and open adoption of DX Ray Tracing is the last piece that the RT fans don't give context to. If the PS5 and Xbox1 are adopting Ray Tracing on AM4 CPUs and 6000 series GPUs, then those 3 months that AMD spent updating the 7000 series were also to make sure it worked with the 6000 series and provided the benefits of more bandwidth. We will see in about a year, when RT Games look no better than Unreal 5 but there will be tons more Unreal 5 Games that have native DX Ray Tracing and don't need Nvidia's version. The PC has many different software options, but consoles are all about Gaming.

If you really step back you will see that there are many similarities to the Freesync vs Gsync debate and we all know how that turned out. It's like the people that bash FSR, when it's the only thing you have gotten for free from any GPU vendor that gives cards 10 years old performance improvements. Of course someone is going to tell me how much better DLSS is than FSR but DLSS is not open. Sure Nvidia released a SDK to the wild for DLSS but will that support your 2000 series card or 1080TI?
Amen to this whole story and perspective man. You really hit the nail when you say 'just simple out of the box perf'. It really is like that. Shit just runs, I dont have to even consider FSR or RT, even if you can turn it on, I dont have to pixel peep to spot differences between settings. Just push every slider to the far right, disable the annoying post fx, 3440x1440 native, and Go.

Frankly this exact experience is what I always loved about Nvidia cards too. Pascal was definitely similar in experience. I dont need it any other way! Im SO past comparing AA methods and whatnot. Its completely irrelevant, just like every option that isnt universally available in games on any card.
fevgatosYou are missing the point. I'm not arguing about the usefulness of rt. Let's for the sake of argument agree that it's completely garbage and no game should ever use it. Still, when you want to compare architectural efficiency you need to take that into account. If half the 4090 is used for RT acceleration (it's not, just an example) then just comparing transistors to raster performance would get you completely flawed results. There are purely rt workloads / benchmarks you can use for such a comparison. 3DMark has such a test, for example.

Regarding your comment about cpus, your argument leads to the 7700x being better architecturally than a 7950x, since they perform similarly in games. Which is just not true; it's just a flawed comparison, exactly like comparing the 4090 purely on raster is flawed.

Anyways we have the 4080 vs the xtx comparison, amd is achieving much less with much more. Bigger die, bigger bus widths, more vram, more of everything, it just ties the 4080 in raster, loses in rt.
I think you and @Dr. Dro are right. In the current state of affairs, Ada is the better one. I think the point Im pushing on is more on the perspective of its long term prospects - its potential if you will. Thats an old AMD story... so much potential, but.... But this time though the potential is proven already with Zen. Chiplet works. I think that is why I view RDNA3 as superior. The cost and yield advantages are undeniably there and will bring a much bigger benefit to gaming perf than whatever Ada is.

Some specifics that are quite remarkable:
- perf/W is stellar on RDNA3, despite presence of interconnect ie more hardware to facilitate its design
- support for excellent VRAM amount within its TDP bracket
- price/frame substantially better than Ada

This is a much better list than what Ryzen 1 brought to the table. Chiplets are The Way.
Posted on Reply
#33
kapone32
Vayra86Amen to this whole story and perspective man. You really hit the nail when you say 'just simple out of the box perf'. It really is like that. Shit just runs, I dont have to even consider FSR or RT, even if you can turn it on, I dont have to pixel peep to spot differences between settings. Just push every slider to the far right, disable the annoying post fx, 3440x1440 native, and Go.

Frankly this exact experience is what I always loved about Nvidia cards too. Pascal was definitely similar in experience. I dont need it any other way! Im SO past comparing AA methods and whatnot. Its completely irrelevant, just like every option that isnt universally available in games on any card.
I don't use it either and have no need. The 2080TI was the fastest card ever made when it was released. They did intro Ray Tracing with that card but there was no doubting the Gaming performance of that card.
Posted on Reply
#34
fevgatos
Vayra86I think you and @Dr. Dro are right. In the current state of affairs, Ada is the better one. I think the point Im pushing on is more on the perspective of its long term prospects - its potential if you will. Thats an old AMD story... so much potential, but.... But this time though the potential is proven already with Zen. Chiplet works. I think that is why I view RDNA3 as superior. The cost and yield advantages are undeniably there and will bring a much bigger benefit to gaming perf than whatever Ada is.

Some specifics that are quite remarkable:
- perf/W is stellar on RDNA3, despite presence of interconnect ie more hardware to facilitate its design
- support for excellent VRAM amount within its TDP bracket
- price/frame substantially better than Ada

This is a much better list than what Ryzen 1 brought to the table. Chiplets are The Way.
I don't like and will never like chiplets. I don't know what you mean "they work". Meaning, you can use them? Sure. Are they better than monolithic? Hell no.. You can see that even on CPUs, AMD has the better core design right now in terms of core transistors / performance and per watt, but they need a stupendous amount of power for simple tasks (or for just ... sitting there idle) compared to Intel chips cause of how much the rest of the die / dies consume. I recently bought a 6900hs laptop and I was astonished to find out it's extremely efficient at idle and light loads, contrary to desktop Zen parts which just stink. Chiplet only works best for the company making them, not for the consumer buying / using them.
Posted on Reply
#35
R0H1T
Except the 6900hs is not a chiplet-based CPU. Also, optimization by the OEM/ODM is a thing; that's why virtually the same laptops can behave very differently if configured (im)properly.
Posted on Reply
#36
Vayra86
fevgatosI don't like and will never like chiplets. I don't know what you mean "they work". Meaning, you can use them? Sure. Are they better than monolithic? Hell no.. You can see that even on CPUs, AMD has the better core design right now in terms of core transistors / performance and per watt, but they need a stupendous amount of power for simple tasks (or for just ... sitting there idle) compared to Intel chips cause of how much the rest of the die / dies consume. I recently bought a 6900hs laptop and I was astonished to find out it's extremely efficient at idle and light loads, contrary to desktop Zen parts which just stink. Chiplet only works best for the company making them, not for the consumer buying / using them.
You do know what I mean with they work, I just literally explained it. Idle usage is a thing now? I guess its a thing for you, it seems awkwardly convenient too given the general stance. I can't help but smell heavy bias, much the same when you posit an example of how a 4070ti can beat a 7900XTX... You don't see me saying a 7900XTX can beat a 4090 in certain games either do you? We both know its an irrelevant outlier. You're entitled to your opinion, let's leave it there.

Chiplet seems to work well for gamers using an X3D CPU... - thát is the example I was pointing at earlier when I said smaller die better arch which you managed to turn into a 7700X comparison ;)
But yeah... that idle usage. Damn. Can't be having that now. Guess you're better off power limiting a 13th gen Core instead, yeah... I mean we don't really need high min. FPS in games right, with that idle usage. Its much better to use 3x the load Power in gaming!

Here's a pro tip against idle usage. Turn the PC off. You use less than a single Watt. And it boots in 15 seconds.
Posted on Reply
#37
evernessince
fevgatosBut the xtx is bigger - has more transistors - and is much slower in rt? How is this not a clear indication of architectural superiority? I mean it's obvious that if the whole die of the 4080 or the 4090 was used for raster performance, the gap would be massive in raster, no?
This is akin to saying the 7950X3D is architecturally inferior to the 13900K because if you add up the size of the two CPU core dies, the IO die, and the cache die it works out to be larger than the 13900K. You are completely ignoring the factors underpinning that total die size, as in one is chiplet based and the other isn't.

Not really sure how number of transistors is relevant here. Number of transistors isn't indicative of a product being inherently more expensive or better. It will vary widely as different features on the die require differing amounts of transistors.

Would the 4080 be better without RT parts? I'm not sure this is relevant given AMD has RT acceleration built into its cores as well and would theoretically gain by getting rid of those and replacing them with more raster. On the architectural level I'm not sure I'd call AMD much slower in RT. If you look at Fortnite's semi-recent UE5 update you can see the 7900 XTX can easily perform just as well as the 4080. Clearly there is a lot of performance left on the table in many games on RDNA3 cards (which makes sense given Nvidia sponsors so many of the games that do include RT), so I'd not rush to say AMD's RT is terrible on an architectural level when you cannot ignore the software component of that equation.
fevgatosbut they need a stupendous amount of power for simple tasks (or for just ... sitting there idle) compared to Intel chips cause of how much the rest of the die / dies consume.
I can assure you they don't:




At idle both Intel and AMD consume around the same amount of power:



Chiplets are the future for desktop parts. Nvidia said as much in a 2017 white paper and Intel is rushing as fast as it can to get them. There are far too many benefits to ignore for any chip larger than 120mm2.
Vayra86You do know what I mean with they work, I just literally explained it. Idle usage is a thing now? I guess its a thing for you, it seems awkwardly convenient too given the general stance. I can't help but smell heavy bias, much the same when you posit an example of how a 4070ti can beat a 7900XTX... You don't see me saying a 7900XTX can beat a 4090 in certain games either do you? We both know its an irrelevant outlier. You're entitled to your opinion, let's leave it there.

Chiplet seems to work well for gamers using an X3D CPU... - thát is the example I was pointing at earlier when I said smaller die better arch which you managed to turn into a 7700X comparison ;)
But yeah... that idle usage. Damn. Can't be having that now. Guess you're better off power limiting a 13th gen Core instead, yeah... I mean we don't really need high min. FPS in games right, with that idle usage. Its much better to use 3x the load Power in gaming!

Here's a pro tip against idle usage. Turn the PC off. You use less than a single Watt. And it boots in 15 seconds.
Heck the idle power usage excuse isn't even a valid one given AMD and Intel are within 2 watts of each other in that regard.
Posted on Reply
#38
Vayra86
evernessinceHeck the idle power usage excuse isn't even a valid one given AMD and Intel are within 2 watts of each other in that regard.
The GPU idle power on RDNA3 is what he refers to I think. I hope, at least I took it that way. Ryzen has 'higher idle' as well, so technically ... I mean that is kind of the level I feel I'm communicating at with him right now. Its too bad, really.
Posted on Reply
#39
fevgatos
evernessinceI can assure you they don't:
Yeah, right



Intel does not idle at 20+ watts, lol. Mine is streaming 2 videos with around 15 browser tabs open while being on discord call at 6 watts (12900k). My 3700x needs around 40 for the same workload. Only the mobile zen parts get close to intel in that regard.
Vayra86you posit an example of how a 4070ti can beat a 7900XTX... You don't see me saying a 7900XTX can beat a 4090 in certain games either do you? We both know its an irrelevant outlier. You're entitled to your opinion, let's leave it there.
You misunderstood the point. I used the 4070ti replying to a guy sayin the xtx beats the 4090 in some games. It's completely irrelevant, exactly as irrelevant it is that the 70ti beats the xtx in some games. Doesn't matter, it's an outlier.
Posted on Reply
#40
Vayra86
fevgatosYeah, right



Intel does not idle at 20+ watts, lol. Mine is streaming 2 videos with around 15 browser tabs open while being on discord call at 6 watts (12900k). My 3700x needs around 40 for the same workload. Only the mobile zen parts get close to intel in that regard.


You misunderstood the point. I used the 4070ti replying to a guy sayin the xtx beats the 4090 in some games. It's completely irrelevant, exactly as irrelevant it is that the 70ti beats the xtx in some games. Doesn't matter, it's an outlier.
This one is much more funny, and recent:



Yeah... so.... euhm... let's try and pull a general statement from this then

Or this, I especially love that monolithic 13900K there. This one is great for context. See, watt per frame/points is nothing unless you factor in the actual total usage and performance. What it underlines is this: the results are all over the place, chiplet excels in about as many use cases as monolithic, and this is a wildly moving target now as stuff gets optimized for both chiplet and big little.

Posted on Reply
#41
fevgatos
Vayra86You do know what I mean with they work, I just literally explained it. Idle usage is a thing now? I guess its a thing for you, it seems awkwardly convenient too given the general stance. I can't help but smell heavy bias, much the same when you posit an example of how a 4070ti can beat a 7900XTX... You don't see me saying a 7900XTX can beat a 4090 in certain games either do you? We both know its an irrelevant outlier. You're entitled to your opinion, let's leave it there.

Chiplet seems to work well for gamers using an X3D CPU... - thát is the example I was pointing at earlier when I said smaller die better arch which you managed to turn into a 7700X comparison ;)
But yeah... that idle usage. Damn. Can't be having that now. Guess you're better off power limiting a 13th gen Core instead, yeah... I mean we don't really need high min. FPS in games right, with that idle usage. Its much better to use 3x the load Power in gaming!

Here's a pro tip against idle usage. Turn the PC off. You use less than a single Watt. And it boots in 15 seconds.
The 13900k also stinks in gaming power draw, that's why I sold mine and went back to the 12900k. But zen cpus for daily use are just horrible in power draw, and that's just a fact. Most of my daily work involves multiple tasks (but not multithreaded), with multiple browsers and stuff, and goddamn any zen part consumes 3-4-5 times more power than the intel equivalent.
Vayra86This one is much more funny, and recent:
There is nothing funny about it, of course if you use the 13900k without limits it's a power hog. But who does that? Who the heck runs 10 hour long blenders at 300+ watts? I bet no one. Just reviewers I guess, to generate clicks from the amd armada.

Also in your graph the 3d still sucks in single thread efficiency, lol. In order to get the 95 pts shown in your graph, it drops its clockspeeds to the point it's as fast as the 13400f. And the 13400f at that point is much more efficient, lol (50+% more to be exact)
Posted on Reply
#42
Vayra86
fevgatosThe 13900k also stinks in gaming power draw, that's why I sold mine and went back to the 12900k. But zen cpus for daily use are just horrible in power draw, and that's just a fact. Most of my daily work involves multiple tasks (but not multithreaded), with multiple browsers and stuff, and goddamn any zen part consumes 3-4-5 times more power than the intel equivalent.
The 12900k is still a power hog, but its good you're living that alternate reality hardcore.
fevgatosThere is nothing funny about it, of course if you use the 13900k without limits it's a power hog. But who does that? Who the heck runs 10 hour long blenders at 300+ watts? I bet no one. Just reviewers I guess, to generate clicks from the amd armada.

Also in your graph the 3d still sucks in single thread efficiency, lol. In order to get the 95 pts shown in your graph, it drops its clockspeeds to the point it's as fast as the 13400f. And the 13400f at that point is much more efficient, lol (50+% more to be exact)
Yes it is funny because I can look beyond the single result and see context and variance, as I pointed out a post earlier shit's moving all over the place. You're just frantically defending a point that was defeated 5 years ago. You're that unique guy with a set of use cases that perfectly fit Intel and Nvidia's architectural choices and product inner workings. It has been an amazing read!
Posted on Reply
#43
TheoneandonlyMrK
fevgatosI don't like and will never like chiplets. I don't know what you mean "they work". Meaning, you can use them? Sure. Are they better than monolithic? Hell no.. You can see that even on CPUs, AMD has the better core design right now in terms of core transistors / performance and per watt, but they need a stupendous amount of power for simple tasks (or for just ... sitting there idle) compared to Intel chips cause of how much the rest of the die / dies consume. I recently bought a 6900hs laptop and I was astonished to find out it's extremely efficient at idle and light loads, contrary to desktop Zen parts which just stink. Chiplet only works best for the company making them, not for the consumer buying / using them.
You're going to love PC in ten years then
fevgatosYeah, right



Intel does not idle at 20+ watts, lol. Mine is streaming 2 videos with around 15 browser tabs open while being on discord call at 6 watts (12900k). My 3700x needs around 40 for the same workload. Only the mobile zen parts get close to intel in that regard.


You misunderstood the point. I used the 4070ti replying to a guy sayin the xtx beats the 4090 in some games. It's completely irrelevant, exactly as irrelevant it is that the 70ti beats the xtx in some games. Doesn't matter, it's an outlier.
Your entire input in this thread's been irrelevant, you and Nguy and your precious 4090's four-foot snake. Just like when Linus pushed said 4090 pointlessly to near 1000 Watts, it's just a look at what's left untapped; it's not a look into the future or a brand fight for the ages, so there is probably no need to go on and on about your precious in every thread.

Wtaf CPU idle watts have to do with a GPU OC is beyond me.

Maybe make a thread, call it come get some 4090 love or something.
Posted on Reply
#44
fevgatos
Vayra86The 12900k is still a power hog, but its good you're living that alternate reality hardcore.


Yes it is funny because I can look beyond the single result and see context and variance, as I pointed out a post earlier shit's moving all over the place. You're just frantically defending a point that was defeated 5 years ago. You're that unique guy with a set of use cases that perfectly fit Intel and Nvidia's architectural choices and product inner workings. It has been an amazing read!
No I'm not, that's why I have an amd laptop. Full amd. Both cpu and gpu.

And no the 12900k isn't a power hog. It's actually very very efficient for my workload. Gaming it's below 60w unless I drop to 1080p in which case it goes up to 100 on the heaviest game in existence right now (tlou), and in every other workload I do it's sitting below 20, usually below 10. I'm pretty impressed by its efficiency actually.
Posted on Reply
#45
evernessince
Vayra86The GPU idle power on RDNA3 is what he refers to I think. I hope, at least I took it that way. Ryzen has 'higher idle' as well, so technically ... I mean that is kind of the level I feel I'm communicating at with him right now. Its too bad, really.
Isn't high idle power consumption only with two different high refresh rate monitors? I believe AMD has it marked as a bug but I haven't kept up with whether they fixed it or not. I think in any case, whether RDNA3 has high idle power or not, I was pointing out with the Zen CPUs that you can have a chiplet based architecture that is efficient. You are probably right with your assumption of what his angle is but I don't see his end goal.
Posted on Reply
#46
Makaveli
evernessinceIsn't high idle power consumption only with two different high refresh rate monitors? I believe AMD has it marked as a bug but I haven't kept up with whether they fixed it or not. I think in any case, whether RDNA3 has high idle power or not, I was pointing out with the Zen CPUs that you can have a chiplet based architecture that is efficient. You are probably right with your assumption of what his angle is but I don't see his end goal.
It also affected single monitors, but it was worse on dual-monitor setups; I believe it will be addressed by a driver update.

My old 6800XT used to idle around 7-8 Watts on a single 34-inch ultrawide display.

On my 7900XTX idle is 50 watts on the same display.

W1zzard's cards show less idle wattage, so it varies with your card and monitor combo.
Posted on Reply
#47
Dr. Dro
LionheartLol what Nvidia Koolaid are you drinking ....
The koolaid that both AMD and NVIDIA diehards drink reeks of sadness and desperation (a little more apparent on the AMD side, as a lot of their fans use red herrings, false equivalences, and rely on outlier statistics to claim their ground, but also pretty obvious on the Nvidia side when they're reminded their 4090's check is in the mail), as if this whole us vs. them thing even mattered.

They are both megacorporations interested in earning your money. Neither are your friends and neither are interested in cutting you a particularly good deal, they just want to dig at each other and make as much money as possible in the process. Buy what suits your needs or corresponds to your ideological bias, end of the day both sides to this pitiless argument are going to be playing the exact same video games, arguably at the same level of fidelity overall. This clubism is no different from console wars, except thrice as pointless. Then again, 3 times 0 is still 0.
Vayra86I think the point Im pushing on is more on the perspective of its long term prospects - its potential if you will. Thats an old AMD story... so much potential, but.... But this time though the potential is proven already with Zen. Chiplet works. I think that is why I view RDNA3 as superior. The cost and yield advantages are undeniably there and will bring a much bigger benefit to gaming perf than whatever Ada is.

Some specifics that are quite remarkable:
- perf/W is stellar on RDNA3, despite presence of interconnect ie more hardware to facilitate its design
- support for excellent VRAM amount within its TDP bracket
- price/frame substantially better than Ada

This is a much better list than what Ryzen 1 brought to the table. Chiplets are The Way.
We are in resounding agreement about that - chiplets are the future, hold the most potential and MCMs are going to be necessary to continue scaling forward to improve yield, manage costs and scale performance further. The 7900 XTX's "underwhelming" performance is - IMO - excusable given that it is the very first design of its kind, much like the R9 Fury X and its HBM pitch.

A lot of the anger over words such as "underwhelming" comes from their generally negative connotation, and people are very proud, but as you perfectly stated, it obviously performs exceptionally well even having missed its intended performance targets. That's why AMD did not ask for more money for it, and instead pitched it against the 4080 after all. I can still mostly run every game out there at 4K with maximum settings at great frame rates on my RTX 3090, so I just don't see why anyone with a 7900 XT or XTX cannot do it either. Some of the games that use more RT effects require DLSS, but I toggle that to quality and go on my merry way. Same as you would do with FSR on an AMD card; when the option is spending between $1600 and $2000 on a 4090, I'm not complaining.
Posted on Reply
#48
Makaveli
Dr. Drowhen the option is spending between $1600 and $2000 on a 4090, I'm not complaining.
Good post and that cost for me would be like $2000-$2500 CAD.
Posted on Reply
#49
Dr. Dro
MakaveliGood post and that cost for me would be like $2000-$2500 CAD.
Yup. Cheapest 4090 on record here in Brazil was a Leadtek WinFast on sale for exactly 10 grand ($2000 USD) this week. I don't earn in dollars, however. :oops:
Posted on Reply
#50
Godrilla
So there is hope for the 7950 XTX to compete with the 4090. @Nvidia, release the Titan.
And here I am, rocking a 300-watt power limit on the 4090 Suprim Liquid on a daily basis. Oh right, this is purely for benchmarking.
Posted on Reply