
ASUS Radeon RX 7900 XTX TUF OC

Without power measurement equipment, you can still use HWiNFO, both at stock and OC'd. Just record either the average or the maximum.
That's the software measurement I mentioned. It's not exactly accurate, but it could be better than nothing. Traditionally I do things right, so I'll figure something out.
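For reference, this is roughly what that software measurement looks like in practice. A minimal sketch, assuming a Linux box where the amdgpu driver exposes board power through hwmon (the sensor path varies per system, and it's the same driver-reported value HWiNFO shows, not a hardware measurement):

```python
import glob
import time

# Assumption: Linux + amdgpu; the hwmon path below varies per system.
SENSOR = glob.glob("/sys/class/drm/card0/device/hwmon/hwmon*/power1_average")[0]

def log_power(duration_s=60.0, interval_s=0.1):
    """Poll the driver-reported board power and return (average, maximum) in watts."""
    samples = []
    end = time.time() + duration_s
    while time.time() < end:
        with open(SENSOR) as f:
            samples.append(int(f.read()) / 1_000_000)  # file reports microwatts
        time.sleep(interval_s)
    return sum(samples) / len(samples), max(samples)

# Run once at stock and once with the OC applied, then compare the two pairs.
avg_w, max_w = log_power()
print(f"avg: {avg_w:.1f} W  max: {max_w:.1f} W")
```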
 
That's the software measurement I mentioned. It's not exactly accurate, but it could be better than nothing. Traditionally I do things right, so I'll figure something out.

A watt-meter is good enough, simple is best :D
At the end of the day, people pay for the electricity of the entire PC, not the GPU alone anyway
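To put numbers on that, a back-of-the-envelope sketch (the wattage, hours, and tariff below are made-up examples):

```python
# Made-up example numbers: a 500 W vs 450 W system, 3 h/day at $0.30/kWh.
def yearly_cost(watts: float, hours_per_day: float, price_per_kwh: float) -> float:
    return watts / 1000 * hours_per_day * 365 * price_per_kwh

print(f"${yearly_cost(500, 3, 0.30):.0f}/year")  # ~$164
print(f"${yearly_cost(450, 3, 0.30):.0f}/year")  # ~$148 -> a 50 W delta is ~$16/year here
```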
 
There are other sites that measure both the PCI-E slot and the PCI-E 12V connectors > https://www.tomshardware.com/reviews/amd-radeon-rx-7900-xtx-and-xt-review-shooting-for-the-top/8

Not sure if you want to put so much detail into a card review either. I bet it's time-consuming ramping up every card like that.
Huh? I do a lot of testing with card-only power, too: https://www.techpowerup.com/review/asus-radeon-rx-7900-xtx-tuf-oc/37.html

Edit: they are using NVIDIA's PCAT, which is too slow imo (only 10 measurements per second)
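A toy illustration of why 10 samples per second is too slow, using an invented power trace with millisecond spikes:

```python
# Invented trace: 300 W baseline with one 5 ms spike to 450 W every second,
# at 1 ms resolution for 10 seconds.
trace = [450 if 37 <= ms % 1000 < 42 else 300 for ms in range(10_000)]

def sampled_peak(trace: list, period_ms: int) -> int:
    """Peak power as seen by a meter sampling once every period_ms milliseconds."""
    return max(trace[::period_ms])

print(sampled_peak(trace, 1))    # 450 -- 1000 samples/s catches every spike
print(sampled_peak(trace, 100))  # 300 -- 10 samples/s misses all of them
```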
 
@W1zzard is there any chance of adding the next-gen Witcher 3 update to the test results? I know it just came out, but it seems to have a very strong impact on GPUs. It would be nice to have a comparison vs. the NVIDIA 40x0 series.
 
@W1zzard is there any chance of adding the next-gen Witcher 3 update to the test results? I know it just came out, but it seems to have a very strong impact on GPUs. It would be nice to have a comparison vs. the NVIDIA 40x0 series.
I will definitely use it in future testing, but for these reviews it's not possible .. going on holiday tomorrow for a week, then it's basically xmas, I have a bunch of other NDA reviews coming up, then going on holiday again around NYE

"adding a game" for me means play through at least most of it, make notes of GPU loading, FPS, where good test scenes are, then pick a scene that's "realistic, demanding, not worst-case"
 
welp, after putting more thought into this, I've come to my personal conclusion that these cards are overpriced, because the 7900 XT is really the 7800 XT in disguise, and the XTX is the real 7900 XT

this 7900 XT is AMD's equivalent of nvidia's 4080 12GB, yet I haven't seen them getting as much shit for it - in fact, most of what I saw was them being praised for being 'pro-consumer' and giving us, unwashed plebs, an X900 card for $100 less than the previous generation 6900 XT's MSRP of $999 - real fuckin clever of AMD, I'll give them that

the 7900 XT is only 33% faster than a 6800 XT - what will that make the "real" 7800 XT? 15% faster? 20% faster? at what MSRP? I bet it'll be higher than the 6800 XT's, which further shows how pathetic this generational leap is - we're back to the R9 200 -> R9 Fury days LOL

this generation is a total bummer - the price of high-end performance goes up while being carefully marketed as a "better deal than last gen's 6900 XT" to give the illusion that they're not just selling the 7800 XT as an X900-tier card

6800 XT MSRP - $650
3080 MSRP - $700
7900 XT (real: 7800 XT) - $900 (+$250 over last gen high end)
4080 - $1200 (+$500 over last gen high end)
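run those numbers as performance per dollar and the regression is plain (the +33% figure is my own estimate from above, so treat the output as approximate):

```python
# Perf-per-dollar check on the list above. The +33% is this thread's own
# estimate, so treat the result as approximate.
cards = {
    "6800 XT": (650, 1.00),   # (MSRP in $, relative performance)
    "7900 XT": (900, 1.33),   # ~33% faster than a 6800 XT
}

base_price, base_perf = cards["6800 XT"]
for name, (price, perf) in cards.items():
    value = (perf / base_perf) / (price / base_price)
    print(f"{name}: {value:.2f}x perf per dollar vs 6800 XT")
# 6800 XT: 1.00x
# 7900 XT: 0.96x -> 33% more performance for 38% more money is a worse deal
```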

no amount of "but it's new tech" cope will change it - as a person looking to buy a graphics card I DON'T GIVE A SHIT if it's got 6,000,000nm chiplet & fishlet tech, I want a better deal than last gen, not a worse deal

if this trend continues, and if dumbfuck gamers continue buying either lineup from AMD or nvidia, you'll see the 5080 for $1600, the 5090 for $2500, and AMD will follow suit, what with majority shareholder companies being the same in both AMD and nvidia, as pointed out by someone on here :)
 
I don't give a crap what AMD or Nvidia call their cards; they could call them the Ford Focus and Toyota Corolla for all I care. Read the reviews, do your comparisons, and buy it if you like, or don't... It's your choice.
 
Seems OC power numbers are really needed, also going by all the comments on Reddit, so I'll probably move the whole OC testing to the power testing rig. I wanted to switch to a different test than Unigine anyway, maybe the new Witcher.

Another option could be to report the card's software power measurements before and after OC.
I think there's a lot of appetite for extended/OC performance-per-watt numbers, specifically in this instance. RDNA3, so far specifically N31, appears to be more power-constrained in reference form than other architectures in recent years, and benefits readily from more power. It could be that rather than being given a power limit at or near the peak of the clocks × voltage × stability curve, the reference cards wound it back to hit the design/marketing goal of 350 W board power, leaving board partners to exploit the headroom.

I'd certainly be very keen to see any testing and analysis you'd be willing to do, tall order 11 sleeps from Christmas though :)
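For what it's worth, a minimal sketch of the metric I mean; all FPS and wattage values here are invented placeholders, the point is the comparison, not the numbers:

```python
# All FPS and wattage values below are invented placeholders.
def perf_per_watt(fps: float, board_power_w: float) -> float:
    """Efficiency as frames per second per watt of board power."""
    return fps / board_power_w

stock = perf_per_watt(fps=95.0, board_power_w=350.0)    # reference 350 W limit
raised = perf_per_watt(fps=103.0, board_power_w=420.0)  # AIB card, raised limit

print(f"stock:  {stock:.3f} FPS/W")   # 0.271
print(f"raised: {raised:.3f} FPS/W")  # 0.245 -> more FPS, worse efficiency
```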
 
6800 XT MSRP - $650 (UK price is £681)
3080 MSRP - $700 (UK price is £743)
7900 XT (real: 7800 XT) - $900 (+$250 over last gen high end)
4080 - $1200 (+$500 over last gen high end)

No amount of "but it's new tech" cope will change it; as a person looking to buy a graphics card, I want a better deal than last gen, not a worse deal
I couldn't agree more.

Like a growing number of PC owners, I am utterly puzzled by NVIDIA/AMD marketing/pricing of "well, it's faster than the old GPU, so you should pay more" b$. To be clear, even the current prices that are at or just below the last-gen flagship MSRPs indicated above are not acceptable either.

I was fortunate with my last 2 GPU upgrades -
  • In 2020 I traded a 1080 Ti (paid £450 in 2018, exchange value £400) for a 3070 (price £610), so the upgrade cost £210.
  • Then 2 years later I traded the 3070 (price £610 in 2020, exchange value in 2022 £400) for a 3080 12 GB (price £610), which again cost £210.
I applied the same trade-in method when I upgraded from LGA1150 to AM4. Adding up all the upgrades, I have carefully spent around £1200 over 4 years. So spending around that amount in one go, on one item - hard pass...!!!
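The trade-in method is just simple arithmetic, which is the point:

```python
# Trade-in arithmetic from the list above: the upgrade cost is the new
# card's price minus the old card's resale value.
def upgrade_cost(new_price: int, trade_in_value: int) -> int:
    return new_price - trade_in_value

print(upgrade_cost(610, 400))  # £210: 1080 Ti -> 3070, and again 3070 -> 3080 12 GB
```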
 
The machine that I test OC on does not have hardware power measurement capability. Moving to a different box just for the sake of getting OC power numbers seems like a lot of work (time is ultra-limited for those reviews), so I've never considered it.

Seems OC power numbers are really needed, also going by all the comments on Reddit, so I'll probably move the whole OC testing to the power testing rig. I wanted to switch to a different test than Unigine anyway, maybe the new Witcher.

Another option could be to report the card's software power measurements before and after OC.
Software power readings are good enough, sir, just make a note that they are software readings.
 
Can someone explain to me why AMD abandoned OC by curve tuning like it's done in Afterburner? It was excellent stuff, and now they've returned to these primitive sliders. Why?
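To illustrate the difference in control granularity (the V/F points below are invented examples): a curve editor lets you offset voltage per frequency point, while a slider applies one global offset:

```python
# Invented V/F points, just to show the difference in control granularity.
vf_curve = [(1800, 0.850), (2200, 0.950), (2600, 1.050)]  # (MHz, volts)

# Slider-style tuning: one global voltage offset for the whole curve.
slider_offset = -0.050
slider_tuned = [(f, round(v + slider_offset, 3)) for f, v in vf_curve]

# Curve-style tuning (Afterburner editor): a separate offset per point,
# e.g. undervolt only the top frequency bins.
per_point_offsets = [0.000, -0.025, -0.075]
curve_tuned = [(f, round(v + dv, 3)) for (f, v), dv in zip(vf_curve, per_point_offsets)]

print(slider_tuned)  # every point shifted by the same amount
print(curve_tuned)   # top of the curve flattened, low clocks left alone
```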
 
Holy smoke:

[attached screenshot]


This card deserves OC performance testing in more games.
 
Holy smoke:

[attached screenshot]

This card deserves OC performance testing in more games.
I am not sure if I should order the card or the water block first. The ASRock Aqua is too expensive for me, so I am getting a block from Alphacool. I wish that, like EK, they provided single-slot backplate adapters. For those of us who want to OC, make sure to get the 3x 8-pin variant. I expect there will be some serious OC profiles for these cards by Feb-Mar. I might add another 360 by getting the new Alphacool Eisbaer. I really like how big the cold plate is, and it has never had to deal with Asetek BS.

I dislike the narrative that OC does nothing to improve ray tracing. As I was playing Grid Legends this morning, I was wondering whether I'd want ray tracing and DLSS, or Unreal Engine 5 and FSR. Then I remembered I was playing a racing game in 4K, so the car model was already stellar at 120 FPS with my 6800 XT. So more FPS in every game? Yes. If they tune this card the same way they have tuned the 6000 series with monthly drivers, we buyers will all be so glad we grabbed one of these. And I didn't want to say it, but if (when) mining becomes a thing again, these cards should be frigging awesome at that too.

Can someone explain to me why AMD abandoned OC by curve tuning like it's done in Afterburner? It was excellent stuff, and now they've returned to these primitive sliders. Why?
I suspect that AMD is not going to fully open the software package until they can work on the issues they have identified.
 
The machine that I test OC on does not have hardware power measurement capability. Moving to a different box just for the sake of getting OC power numbers seems like a lot of work (time is ultra-limited for those reviews), so I've never considered it.

Seems OC power numbers are really needed, also going by all the comments on Reddit, so I'll probably move the whole OC testing to the power testing rig. I wanted to switch to a different test than Unigine anyway, maybe the new Witcher.

Another option could be to report the card's software power measurements before and after OC.
Yes, your idea is what I was talking about, but it was more about having the power limit of each card like in the NVIDIA reviews. For example, the 6900 XT reference was 293 W with the +15%. What I wanted to know is the maximum each card allows after the +15%.
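The arithmetic is simple enough; note the 255 W base below is inferred from the 293 W figure in this post, not an official spec:

```python
# Max power after the slider: base_limit * (1 + slider_pct / 100).
# The 255 W base is inferred from the 293 W figure quoted above.
def max_power(base_limit_w: float, slider_pct: float = 15.0) -> float:
    return base_limit_w * (1 + slider_pct / 100)

print(max_power(255))  # 293.25 -> matches the reference 6900 XT example
```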
 
I think there's a lot of appetite for extended/OC performance-per-watt numbers, specifically in this instance. RDNA3, so far specifically N31, appears to be more power-constrained in reference form than other architectures in recent years, and benefits readily from more power. It could be that rather than being given a power limit at or near the peak of the clocks × voltage × stability curve, the reference cards wound it back to hit the design/marketing goal of 350 W board power, leaving board partners to exploit the headroom.

I'd certainly be very keen to see any testing and analysis you'd be willing to do, tall order 11 sleeps from Christmas though :)
This looks like the Vega OC boom again.

Vega 56/64 was a lot of fun, and the performance gains were huge too.

Power and voltage were limiting those GPUs, but the limit was set such that most cards hit the power wall well before frequencies stopped going up.

So excited to see these cards on water blocks, with volt mods, power limits removed, etc.
 
3090 power consumption with 4080 performance
that's pretty solid

It's surprising to see my 3090 get pushed so far down the lists, so fast
 
Hmm, this card seems waaaayyyyy better than the XFX one. But again, price. I'm drawn to the more efficient 4080, but not at that price point. And this card looks to be heading toward that point too, though the TUF cards are usually cheaper.
I was comparing the two as well. My initial impression is that they're both overwhelmingly similar to the reference design.
The XFX PCB looks shorter; it keeps the 3-phase VRM design for the memory and the vapor chamber on the cooler,
while the ASUS PCB is longer, has 4 phases for the memory, and doesn't have a vapor chamber.
I couldn't find any info about other components or indicators of quality.

What makes the ASUS one way better than the XFX?
 
Slightly faster on OC but also very much quieter in use. Noise is a deal-breaker for me. Shame it's the price it is.
 