
4090 swapped to a 7900XTX and prefers it!?!?

He is just hitting one more nail in the coffin with the already well-known issues like the DPC latency and such, which would be a total non-issue. I can't even notice it, nor do I know how.

The 0.1% lows story was not very credible.

But the alt-tab latency issue got me really bad; now I can't get any sleep over the possibility that it is tediously slow to tab in and out of a game.
 
Hopes are one thing, but NVIDIA could literally do nothing but release a node shrink Ada Lovelace on 3 nm, with GDDR7 instead of G6X, and still beat RDNA4. At this point in time, NVIDIA has an architecture that is between 1.2x and 3x more efficient than RDNA3, depending on load, and is somewhere between 2-4 years ahead in AI and RT.

Can they? If AMD can stuff multiple graphics dies on a single interposer, Nvidia is going to need to innovate. If AMD follows the same strategy as with their recent Zen 4c release, they add more physical resources on the chip while lowering the clock to hit a much higher level of energy efficiency. In fact, if you look at Wendell's recent video on Zen 4c, the IO die takes a mere 2 watts on a server platform and the cores are insanely efficient. You are vastly under-estimating the benefits of chiplet-based design the same way Intel did with Zen 1.

Your power figures are misleading as well. Nvidia is 11% more power efficient:

[attached chart: performance-per-watt comparison]
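For context on where a figure like that 11% comes from: efficiency charts are essentially average framerate divided by average board power. Here's a minimal sketch of the math with made-up numbers (purely illustrative, not taken from any review):

```python
# Hypothetical numbers, NOT real review data: average FPS and average
# board power (watts) measured over the same benchmark run.
cards = {
    "Card A": {"avg_fps": 140.0, "avg_power_w": 300.0},
    "Card B": {"avg_fps": 135.0, "avg_power_w": 320.0},
}

# Efficiency here is simply frames per second per watt (FPS / W).
eff = {name: d["avg_fps"] / d["avg_power_w"] for name, d in cards.items()}

a, b = eff["Card A"], eff["Card B"]
print(f"Card A: {a:.3f} FPS/W, Card B: {b:.3f} FPS/W")
print(f"Card A is {(a / b - 1) * 100:.1f}% more efficient than Card B")
# With these made-up numbers it works out to roughly 11%.
```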


If your 3x value is from OptimumTech's video, I recommend anyone go and read through the issues with how those figures are obtained here: https://www.techpowerup.com/forums/...-consumption-optimum-tech.311063/post-5056918

Multi-monitor power draw has largely been addressed on AMD's side, although it would be misleading to use that figure in the first place given it's a software and not a hardware issue. Some odd monitor configurations will have higher than normal idle power draw, regardless of whether you are using Nvidia or AMD.

2-4 years ahead in RT? The 4080 is 16% faster than the 7900 XTX in RT. That's not even a generation ahead.

Nvidia is ahead in AI? Ahead in sales, but AMD's hardware is more accurate and getting more performant as ROCm improves. Wendell has a video on the topic demonstrating a lower-end AMD GPU vs Nvidia's A100, and it does very well. Moreover, the AMD GPU is more precise, allowing the AI model to have several improvements.
 
He is just hitting one more nail in the coffin with the already well-known issues like the DPC latency and such, which would be a total non-issue. I can't even notice it, nor do I know how.

The 0.1% lows story was not very credible.

But the alt-tab latency issue got me really bad; now I can't get any sleep over the possibility that it is tediously slow to tab in and out of a game.

The 0.1% lows I don't find hard to believe. It might not be credible or scientific, but everyone goes off of feel; if they like the card, then that's the best result for them.

The alt-tab is the least credible claim lmao; fast alt-tab and better handling of true fullscreen is almost a headlining feature of Win 11. On a clean and healthy installation there should be zero problems with either Ada or Navi31, period, ever.

The flickering and blackscreening I think I can corroborate but only on 30 series.

The video states nothing about the test setup's Windows install. I don't think I have to prove any further at this point after my experience with both AD104 and Navi31 that the ideal scenario for any card is a clean Windows.

Still, happy camper is a happy camper. Nothing like being stuck with an expensive card you don't like.
 
It's a well-known fact that system synergy on different HW combinations can give anyone experiences that differ from the majority's. I have not experienced anything he has (on a 4080), where he is saying the Nvidia 40 series (not necessarily the 4090 per se) is not giving him positive experiences on his current rig (incl. display or audio issues). He even acknowledges that this may apply to him and not others. By the same token, others who have been on AMD cards reported similarly, but in reverse, when they went to Nvidia. So it can work both ways. I know someone who had a 7900 XTX and moved to a 4090 and is much happier for it. Again, I put that down to system synergy. The same goes for Intel vs AMD CPUs.

I currently have a multi-display setup (main G-Sync, 144 Hz) and it's working flawlessly, with none of the audio or blank-screen issues he is reporting (ironically, issues more associated with AMD cards). So anyone can have different experiences depending on the differing HW in their systems.
 
They've released one generation of dGPU cards, and since release, W1z and many other people have seen the user experience go from buggy to smooth, and performance go up massively, through relentless and rapid driver releases.

How many generations has AMD had to fix their idle/low load power draw issues? How long did it take for AMD to release a GPU generation with an encoder that worked?
Um, this is not their first dGPU. i740, hello?
 
When someone has already spent that amount on an RTX 4090, I think it's quite difficult to find "reasons" to swap for an RX 7900 XTX. Of course, we all know that these are different products with different target niches - for example, one who prefers higher texture quality in games always stays with Radeon:


While others who care about raw frames per second, DLSS, and ray-tracing framerate would still always prefer GeForces...


I still consider the RX 7900 XTX a disappointment, because I expected it to be close overall to the RTX 4090 in the raw performance tables.
I will wait to see if AMD releases a proper, fixed RX 7950 XTX with at least 20% higher performance...

I will say AV1 encoding quality was noticeably better on the A770.

AMD must improve its AV1 support.

 
Hopes are one thing, but NVIDIA could literally do nothing but release a node shrink Ada Lovelace on 3 nm, with GDDR7 instead of G6X, and still beat RDNA4. At this point in time, NVIDIA has an architecture that is between 1.2x and 3x more efficient than RDNA3, depending on load, and is somewhere between 2-4 years ahead in AI and RT.

I mean, the "maybe next year" trope gets old pretty fast. There's nothing stopping AMD competing this generation, but they choose to have similar prices with an inferior product, so there is no competition. They've got enough performance to offer a competitive product (if you ignore power draw, ray tracing, AI features like frame generation, and inferior upscaling, something possible to do at the right price), but AMD wants to pretend the GPU market isn't a monopoly and their architecture is on the same level.

Intel is my own hope, as others have mentioned. They've got a competent driver/software team and also understand the concept of product testing before release.

Imagine switching to an MCM product design for cost savings (with all the power draw and latency issues that causes) but keeping the same pricing as last generation...
Seriously, how much are you paid to spout this nonsense in EVERY post? 2-4 years ahead? RDNA2 is on par with RTX 3000 in ray tracing (one gen behind), when they released their own RT tech one gen afterwards. And AI?? Are you talking GPUs or data centre? ML/AI is not relevant in gaming, not yet anyway, and even less so than RT. You're coming across as a massive shill; change the bloody record, pal...

When someone has already spent that amount on an RTX 4090, I think it's quite difficult to find "reasons" to swap for an RX 7900 XTX. Of course, we all know that these are different products with different target niches - for example, one who prefers higher texture quality in games always stays with Radeon:


While others who care about raw frames per second, DLSS, and ray-tracing framerate would still always prefer GeForces...


I still consider the RX 7900 XTX a disappointment, because I expected it to be close overall to the RTX 4090 in the raw performance tables.
I will wait to see if AMD releases a proper, fixed RX 7950 XTX with at least 20% higher performance...



AMD must improve its AV1 support.

Considering that, price-wise, the XTX has been comparable to the 4080, not the $2K 4090, it wins out in pure raster.
 
Ah yes, a single SKU example means "at best one generation behind", which is still almost two years, if you don't include the rest of the disparities listed, not just power draw, or the step backwards with MCM RDNA3, which is worse vs Ada than RDNA2 was vs Ampere.

AMD didn't have a good encoder until the second half of RDNA2's run; with RDNA3 they finally have something that's as good as Shadowplay, meanwhile NVIDIA can now encode two streams.

Intel's encoder worked right out of the gate.

I mean, if you want to look at other SKUs:

5700 XT - 6900 XT - 7900 XT, constant improvement

The RTX 2080, meanwhile, does better than the 3080... and then Nvidia solves it big time with the 4080.

You can see the 7900 XT, which is faster than the RTX 3090 Ti, has similar power consumption in this scenario (and a lot better when gaming).
That is just one generation of difference... so yeah, "at best one generation behind".

On the encoder front:

What do you mean "didn't have a good encoder"? What did it not do well that it was supposed to do well?
Until the second half of the release? So you mean to say later-released cards have a better encoder than earlier cards? A 6650 XT has a better encoder than a 6800 XT?

So RDNA3 has parity with Nvidia on quality, but you still mention it? And Nvidia can encode two streams... OK, what is the purpose of that?

And I still don't really have a full answer to the original statement:
"How long did it take for AMD to release a GPU generation with an encoder that worked?"

How long did it take Nvidia to do that? From what point are we going to start counting? And what do you even mean by "worked"?
 
Tis a matter of perspective and we know where this bloke is at. To each their own, right ;) I stopped caring; the field is muddy as hell, with half a dozen technologies in play that still reek strongly of early-adopter bullshit. RT is still going nowhere, three generations in now. Whatever. Even if the entirety of TPU is in agreement on the factual differences between AMD's and Nvidia's latest, we are still discussing these nonsense points.

Cloud's the future anyway, according to these same early adopters; can we take a short pause to consider the irony there? :) Nvidia is pushing always-online hardware. That on its own is enough to give the entire lineup a massive hard pass, even without the major price-point gap.

Fools And Money
 
Still, happy camper is a happy camper. Nothing like being stuck with an expensive card you don't like.

End of the day, it really comes down to this. I factored this above anything else - brand, GPU, SKU tier, etc. - and this is what resulted in the white ROG Strix 4080. A GPU which I adore so far.

It beats the *crap* out of the power constrained TUF 3090 and I never knew a GPU could look so good. Sometimes I think people are way too concerned with having the absolute best or reaching an optimal price/performance ratio while forsaking the one thing that truly matters: having something that they genuinely like and enjoy.

I'll agree with the point that trading a 4090 for an XTX is, in hindsight, stupid at best. But if you consider what I just mentioned, and the fact that, when push comes to shove, whether you have a 4080, a 4090 or an XTX you are going to be playing the same games at fantastic settings and frame rates, this matters a lot less than it initially seems.
 
So you blew 1200 on something one could nearly consider a sidegrade there, well done and thx for illustrating my point.

23%. Enjoy lol
 
So you blew 1200


Wish it was just 1200; consider that it's $1550 on American Amazon. By far the most expensive 4080 you can buy. Converted to USD, it's roughly $2000 if you account for the taxes and prices in my country.

Stellar card. The old one paid for half of it. This one doesn't throttle, runs way colder, and feels snappier. Happy camper. Not to mention it just looks so sexy, mods pls don't ban for 18+

[photo of the card]


I'm actually going to have to buy a proper full-tower case for it sometime; it's too large for my MasterFrame, and I can't install the glass because the 12VHPWR connector is in the way.
 

I'm glad you're happy while I ROFL for free :) This is the cherry on top. Perfect fit indeed, huh? Oops, my case isn't big enough because of a connector a way-overpriced card doesn't even remotely require. Yay!
 
Yay! Now I have an excuse to buy another case. Lovely :D
 
You need to buy a white 12VHPWR adapter as well, because this black POS sticking out halfway up the GPU looks like absolute shit tbh

Make sure it's a quality one too. Y'know
 
So RDNA3 has parity with Nvidia on quality, but you still mention it? And Nvidia can encode two streams... OK, what is the purpose of that?

Both AMD's 7000 series and Nvidia's 4000 series have a dual-encoder setup. The number of physical encoder units doesn't correlate to the max number of streams, so I'm not sure what his point was when he implied that it somehow was an advantage for Nvidia.

You can encode up to 3 streams simultaneously on a 4000 series card, and only 3 because Nvidia artificially limits you to 3 streams. On a 7000 series card there is no limit on the number of simultaneous streams. EposVox had 6 streams running on a 7900 XTX and 7950X, and that's only because his CPU limited him from having more.

And yes, it's a niche use case, for example if you had a Plex server that needs to do a lot of simultaneous encodes. In that case the AMD card would be superior because it can handle more 1080p streams. The Nvidia card is capable of doing more streams but cannot, due to artificial Nvidia software limitations.
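For anyone curious what the multi-stream point looks like in practice, here's a rough, hypothetical sketch of launching several hardware AV1 encodes at once with ffmpeg from Python. The encoder name, input file, and stream count are all assumptions on my part; whether they actually run concurrently on the GPU depends on the driver and, on Nvidia, on the session limit mentioned above.

```python
import subprocess

# Assumptions: ffmpeg is on PATH and was built with the hardware AV1
# encoder named below; "input.mp4" is a placeholder source file.
ENCODER = "av1_amf"   # RDNA3; on an RTX 40 card this would be "av1_nvenc"
NUM_STREAMS = 4       # how many simultaneous encodes to attempt

procs = []
for i in range(NUM_STREAMS):
    cmd = [
        "ffmpeg", "-y",
        "-i", "input.mp4",
        "-c:v", ENCODER,
        "-b:v", "6M",        # target bitrate for each stream
        f"stream_{i}.mkv",
    ]
    # Launch every encode as its own process so they run concurrently.
    procs.append(subprocess.Popen(cmd))

# Wait for all of them; a non-zero return code typically means the driver
# refused another session or the encoder isn't available in this build.
for p in procs:
    p.wait()
print("exit codes:", [p.returncode for p in procs])
```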
 
I Sold My 4090 for a 7900 XTX - 2 Weeks Later:

How can I trust ANYTHING this dude says, with an honest to god porno mustache like that? Dude legit looks like he should be filming his next XXX flick or screwing people over down at his local used car lot, rather than bitching about GPUs on youtube :roll:
 
God only knows where that thumb's been.

Still, I was bored and this helped; I just got a laugh too.
 
5700 XT - 6900 XT - 7900 XT, constant improvement

Radeon RX 5700 XT = 100%.
Radeon RX 6900 XT = +106%.
Radeon RX 7900 XT = +25%.
Radeon RX 7900 XTX = +47%.

Constant improvement, but quite a fail with the 7900 series ;)
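As a side note on how those percentages stack: reading each figure as the gain over the previous flagship (my assumption, the list doesn't say), the compounding works out roughly like this:

```python
# Reading each figure as the gain over the previous generation's flagship
# (my assumption; the percentages are quoted from the post, not re-measured).
gain_6900xt_over_5700xt = 1.0 + 1.06    # +106%
gain_7900xtx_over_6900xt = 1.0 + 0.47   # +47%

total = gain_6900xt_over_5700xt * gain_7900xtx_over_6900xt
print(f"RX 7900 XTX vs RX 5700 XT: {total:.2f}x (+{(total - 1) * 100:.0f}%)")
# About 3.03x overall, but only +47% of it came from the latest generation.
```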


Wish it was just 1200; consider that it's $1550 on American Amazon. By far the most expensive 4080 you can buy. Converted to USD, it's roughly $2000 if you account for the taxes and prices in my country.

Stellar card. The old one paid for half of it. This one doesn't throttle, runs way colder, and feels snappier. Happy camper. Not to mention it just looks so sexy, mods pls don't ban for 18+

I'm actually going to have to buy a proper full-tower case for it sometime; it's too large for my MasterFrame, and I can't install the glass because the 12VHPWR connector is in the way.

30% over the RTX 3090, but also 30% behind the RTX 4090 that you could have bought for the same money.
And don't forget liquid on the card, not only on the CPU o_O
 
Only hear a parrot here
 
You need to buy a white 12VHPWR adapter as well, because this black POS sticking out halfway up the GPU looks like absolute shit tbh

Make sure it's a quality one too. Y'know

Lol yeah it did strike me as odd they'd include a black one :P
 
Seriously, how much are you paid to spout this nonsense in EVERY post? 2-4 years ahead? RDNA2 is on par with RTX 3000 in ray tracing (one gen behind), when they released their own RT tech one gen afterwards. And AI?? Are you talking GPUs or data centre? ML/AI is not relevant in gaming, not yet anyway, and even less so than RT. You're coming across as a massive shill; change the bloody record, pal...
So, remind me when RTX 3000 was released, please. Does it fall in the 2-4 years range? It's currently July 2023, by the way.
[screenshot of the RTX 30 series release date]


And I think you mean RDNA3, not RDNA2, is on par with Ampere ray tracing, in lighter implementations.
 
attempt 3 = FAIL

More of the same. Is there only one track on that record?
 
Seriously, this guy is staff and is a one-trick pony spouting the same bollocks in all of his posts. Guess that's the vibe TPU wants to give off now; he's a massive troll...
 