
RX 7900 XTX vs. RTX 4080

No, what you're saying is delusional, and I'll prove how devastatingly unintelligent your line of thought is, since you couldn't come up with an explanation for your nonsensical claims:

View attachment 289919

Look how much faster the 7900 XTX is than a 4090. Clearly, since this is the largest performance delta between these two GPUs in raster performance that I could find, that must mean TPU's own assessment that the 4090 is 22% faster in raster is irrelevant.

Turns out the 7900 XTX is actually 27% faster than a 4090, obviously, and every other benchmark is wrong/irrelevant/delusional. Would you fully agree with this statement?

I would agree with you that the AMD cards are certainly faster in CPU-bottlenecked scenarios - and by quite a big margin.
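
On the cherry-picking point above: a TPU-style relative-performance summary is effectively an average of per-game ratios, so a single outlier title barely moves it. Here is a minimal sketch of that idea in Python, with made-up FPS figures that are illustrative only and not taken from any review:

```python
from math import prod

# Hypothetical per-game FPS for two cards -- placeholder numbers, not benchmark results.
fps_a = {"Game 1": 120, "Game 2": 95, "Game 3": 140, "Game 4": 80, "Game 5": 110}
fps_b = {"Game 1": 100, "Game 2": 100, "Game 3": 150, "Game 4": 60, "Game 5": 105}

# Per-game ratio of card A over card B (some titles favour A, some favour B).
ratios = {game: fps_a[game] / fps_b[game] for game in fps_a}

# Geometric mean of the ratios: the suite-wide summary a review would quote.
summary = prod(ratios.values()) ** (1 / len(ratios))

print(f"largest single-game ratio: {max(ratios.values()):.2f}x")
print(f"suite-wide summary:        {summary:.2f}x")
```

With these placeholder numbers the best single game shows a 1.33x lead while the suite-wide figure is about 1.08x, which is exactly why one screenshot doesn't override the average.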
 
No, what you're saying is delusional, and I'll prove how devastatingly unintelligent your line of thought is, since you couldn't come up with an explanation for your nonsensical claims:

View attachment 289919

Look how much faster the 7900 XTX is than a 4090. Clearly, since this is the largest performance delta between these two GPUs in raster performance that I could find, that must mean TPU's own assessment that the 4090 is 22% faster in raster is irrelevant.

Turns out the 7900 XTX is actually 27% faster than a 4090, obviously, and every other benchmark is wrong/irrelevant/delusional. Would you fully agree with this statement?
I'm sorry, but you lack basic cognitive skills. What you just said clearly demonstrates that. There is no pattern in raster: some games do in fact perform better on AMD, some on NVIDIA. That's why you take averages across a big selection of games.

That is just not the case with RT. There is a very, very clear pattern with RT games: the more RT effects a game has, the bigger the difference is. Are there any RT games where the 7900 XTX is wiping the floor with the 4080? No.

Can you explain to me why adding RT effects to a game increases the difference? Take Hogwarts Legacy: why is the difference only 6% with one RT effect but 46% with all the RT effects?
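
For what it's worth, the scaling being asked about here is what a simple frame-time split predicts: if a frame is part raster work and part RT work, and one card is slower only on the RT part, the whole-frame gap widens as the RT share grows even though the per-effect speed ratio never changes. A minimal sketch of that model, with assumed millisecond figures used purely for illustration:

```python
def frame_time(raster_ms: float, rt_ms: float, rt_penalty: float) -> float:
    """Total frame time when the RT portion runs `rt_penalty` times slower."""
    return raster_ms + rt_ms * rt_penalty

RASTER_MS = 10.0   # assumed raster cost per frame, taken to be equal on both cards
RT_PENALTY = 2.0   # assumed slowdown of the RT portion on the slower card

for label, rt_ms in (("no RT", 0.0), ("one RT effect", 2.0), ("full RT suite", 10.0)):
    fast = frame_time(RASTER_MS, rt_ms, 1.0)
    slow = frame_time(RASTER_MS, rt_ms, RT_PENALTY)
    print(f"{label:13s}: whole-frame gap {slow / fast - 1:.0%}")
# Prints 0%, 17% and 50%: the RT hardware stays a constant 2x apart,
# but the visible gap grows with how much of the frame is RT work.
```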
 
There is no pattern in raster: some games do in fact perform better on AMD, some on NVIDIA. That's why you take averages across a big selection of games.

That is just not the case with RT.

This is what being delusional actually looks like.
 
This is what being delusional actually looks like.
So you don't want to explain why the difference piles up with more RT effects.

See, that's why I think it's a waste of time. There is no way people don't know the difference is much bigger; they are just playing dumb as a defense.
 
So you don't want to explain why the difference piles up with more RT effects.

Why don't you give an actual explanation that isn't mind-numbing for why it's OK to use averages across multiple games for everything except ray tracing?

And no, "the difference is bigger so you can't use averages" is not an explanation, probably even a preschooler would find your logic laughable.
 
I think anyone who is making a GPU purchasing decision where RT is important really needs to look at the specific games they want to play with it. If you just looked at TPU averages for RT games, you might think a 7900 XTX isn't that far behind in RT, then try to play CP/Metro/Witcher/Control with all the settings maxed and have a really bad day. For me anyway, if I didn't care about RT at all, I would definitely buy a 7000 series GPU.

On a side note, the two games with the heaviest current implementation of RT (CP/Witcher) also work really well with DLSS 3, making a 40 series GPU almost a no-brainer if either of those games is important to you.

UE5 does give me some hope that by the next generation of cards it won't matter whether you choose the 8000 series or the 5000 series; you'll just go with what performs better overall.
 
Why don't you give an actual explanation that isn't mind-numbing for why it's OK to use averages across multiple games for everything except ray tracing?

And no, "the difference is bigger so you can't use averages" is not an explanation, probably even a preschooler would find your logic laughable.
But we are not using averages for everything. For example, if I want to see what the multithreaded difference between CPUs is, I will ONLY use benchmarks that run on all cores. TPU has an application performance average, but that also includes single-threaded or low-threaded workloads, so I won't use that average to judge multithreading performance.

In the same way, I won't use games with minimal RT to compare RT performance, especially when I know for a fact that the more RT effects you add, the bigger the difference is. That would LITERALLY be impossible if the actual difference was 16% like you are claiming. It's a simple concept, and I still can't accept that you don't understand it. If the difference in RT was 16%, then in Hogwarts Legacy, for example, the difference would be 16% regardless of how many RT effects you activate.
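
To make the filtering analogy concrete: which titles you let into the average decides what the "RT performance" number actually measures, the same way including single-threaded apps would dilute a multithreading comparison. A rough sketch, where the titles, gaps, and heavy/light labels are placeholders rather than benchmark data:

```python
# (title, performance gap in % with RT on, RT workload is heavy?) -- placeholder rows only.
results = [
    ("light-RT title A", 5, False),
    ("light-RT title B", 10, False),
    ("heavy-RT title C", 40, True),
    ("heavy-RT title D", 50, True),
]

def mean_gap(rows):
    """Arithmetic mean of the listed gaps."""
    return sum(gap for _, gap, _ in rows) / len(rows)

print(f"average over every RT game:  {mean_gap(results):.0f}%")
print(f"average over heavy-RT games: {mean_gap([r for r in results if r[2]]):.0f}%")
```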
 
But we are not using averages for everything. For example, if I want to see what the multithreaded difference between CPUs is, I will ONLY use benchmarks that run on all cores. TPU has an application performance average, but that also includes single-threaded or low-threaded workloads, so I won't use that average to judge multithreading performance.

In the same way, I won't use games with minimal RT to compare RT performance, especially when I know for a fact that the more RT effects you add, the bigger the difference is. That would LITERALLY be impossible if the actual difference was 16% like you are claiming. It's a simple concept, and I still can't accept that you don't understand it. If the difference in RT was 16%, then in Hogwarts Legacy, for example, the difference would be 16% regardless of how many RT effects you activate.

A bunch of incomprehensible nonsense that you use to cover up the lack of logic in your arguments; there is nobody on here who knows what you are on about besides you.

We look at averages across multiple games because that's the most realistic assessment of the performance you can expect across the board; there is little to no use in looking at either the largest or the smallest performance delta. Some games are heavier than others with ray tracing enabled, that's the reality; if you purposely ignore the games that have the least impact on performance, you are doing nothing except getting a skewed perception of real-world performance.

I really couldn't care less that RT performance is everything for you, but saying that looking at averages across games is delusional is laughable; that's literally the least subjective way of assessing real-world performance.
 
A bunch of incomprehensible nonsense that you use to cover up the lack of logic in your arguments; there is nobody on here who knows what you are on about besides you.

We look at averages across multiple games because that's the most realistic assessment of the performance you can expect across the board; there is little to no use in looking at either the largest or the smallest performance delta. Some games are heavier than others with ray tracing enabled, that's the reality; if you purposely ignore the games that have the least impact on performance, you are doing nothing except getting a skewed perception of real-world performance.

I really couldn't care less that RT performance is everything for you, but saying that looking at averages is delusional is laughable.
I never said RT performance is everything for me, or that it is even important. Let's say RT performance is completely useless and nobody cares about it. Great, now explain why the RT difference gets bigger the more RT effects you add. HOW can that be possible???
 
explain why the RT difference gets bigger the more RT effects you add.

I don't know, nor do I care, but why is that relevant to anything? Like I said, some games are heavier than others; that's why everyone cares about performance across multiple games.
 
I don't know, nor do I care, but why is that relevant to anything? Like I said, some games are heavier than others; that's why everyone cares about performance across multiple games.
You really shouldn't bother; he sounds like an unreasonable hardcore nVidia fanboy. I say this because he completely disregards TPU's benchmark rating, choosing his own set of numbers/percentages which he'd apparently pulled out of his posterior orifice.
 
You really shouldn't bother; he sounds like an unreasonable hardcore nVidia fanboy. I say this because he completely disregards TPU's benchmark rating, choosing his own set of numbers/percentages which he'd apparently pulled out of his posterior orifice.

It's OK, someone's got to make fun of the catastrophically unintelligent thought process of fanboys on here.
 
I don't know, nor do I care, but why is that relevant to anything? Like I said, some games are heavier than others; that's why everyone cares about performance across multiple games.
But I'm not comparing different games. I'm talking about, for example, Hogwarts Legacy. Why does the RT performance difference keep increasing the more RT effects you activate in that specific game?

You really shouldn't bother; he sounds like an unreasonable hardcore nVidia fanboy. I say this because he completely disregards TPU's benchmark rating, choosing his own set of numbers/percentages which he'd apparently pulled out of his posterior orifice.
The only unreasonable people are the ones claiming the RT performance difference is 16%. That's completely delusional.
 
Is the 7900 XTX encoder as good as NVIDIA's at low bitrates, like for streaming on Twitch and stuff?

I am pleased to tell you that after years of complaints, AMD finally did something about it and it has greatly improved. It's functionally on par with Ada, and the quality will match at least Turing/Ampere's NVENC, if not Ada's, too.
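
If anyone would rather check this on their own footage than take anyone's word for it, one rough approach is to encode the same clip with each vendor's hardware H.264 encoder at a Twitch-like bitrate and score both against the source with VMAF. A sketch, assuming an ffmpeg build with NVENC, AMF, and libvmaf support; the file names are placeholders, and in practice each encode has to run on a machine that actually has that GPU:

```python
import subprocess

SOURCE = "gameplay_source.mkv"   # placeholder: a high-bitrate capture of your own gameplay
BITRATE = "6000k"                # roughly the H.264 bitrate ceiling Twitch allows

for codec, out in (("h264_nvenc", "out_nvenc.mp4"), ("h264_amf", "out_amf.mp4")):
    # Encode with the vendor's hardware encoder at a capped bitrate, audio dropped.
    subprocess.run(
        ["ffmpeg", "-y", "-i", SOURCE,
         "-c:v", codec, "-b:v", BITRATE, "-maxrate", BITRATE, "-bufsize", "12M",
         "-an", out],
        check=True,
    )
    # Score the encode against the source; the VMAF result appears in ffmpeg's log output.
    subprocess.run(
        ["ffmpeg", "-i", out, "-i", SOURCE, "-lavfi", "libvmaf", "-f", "null", "-"],
        check=True,
    )
```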
 
This is a clear demonstration of what I'm talking about:

With no RT, the cards are basically identical. With one RT effect, the difference is only 8%. With the full RT package, the difference is 45+%!!

Therefore, when you use games that have minimal RT effects, the difference will be smaller, but the hardware is still there; it's just underutilized.
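
Reading the figures quoted above through the same kind of frame-time split mentioned earlier: if the cards are even with RT off and only the RT portion of the frame runs slower on one of them, a gap that climbs from roughly 0% to 8% to 45% as the RT workload grows is exactly what that model produces. The 2x penalty and the RT shares below are just one illustrative parameter choice consistent with those numbers, not measured data:

```python
RT_PENALTY = 2.0   # assumed: the RT portion of the frame runs 2x slower on one card

def whole_frame_gap(rt_share: float) -> float:
    """Overall slowdown when a fraction `rt_share` of the frame time is RT work."""
    return rt_share * (RT_PENALTY - 1.0)

for label, share in (("no RT", 0.00), ("one RT effect", 0.08), ("full RT package", 0.45)):
    print(f"{label:15s}: RT share {share:.0%} -> whole-frame gap {whole_frame_gap(share):.0%}")
# A constant per-effect speed difference plus a growing RT share reproduces the
# 0% / 8% / 45% pattern without the underlying hardware gap changing at all.
```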
 
I am pleased to tell you that after years of complaints, AMD finally did something about it and it has greatly improved. It's functionally on par with Ada, and the quality will match at least Turing/Ampere's NVENC, if not Ada's, too.
That's great news, I went with NVIDIA mostly for that. Any idea how the RX 6000 series encoder compares with Turing? I might go with the 7000 series for my next GPU, but I was tempted to go with a 6800 XT or an RTX 3070 / Ti (I am on a 2060 Super now).
 
That's great news, I went with NVIDIA mostly for that. Any idea how the RX 6000 series encoder compares with Turing? I might go with the 7000 series for my next GPU, but I was tempted to go with a 6800 XT or an RTX 3070 / Ti (I am on a 2060 Super now).
Not a good idea if you're considering the RTX 3070 Ti. It IS a powerful card, but nVidia saw to it that it'd have limited usability in some recent AAA games like Hogwarts Legacy and RE4R. 8 GB of VRAM is beginning to come up short, even at 1440p...
 
That's great news, I went with NVIDIA mostly for that. Any idea how the RX 6000 series encoder compares with Turing? I might go with the 7000 series for my next GPU, but I was tempted to go with a 6800 XT or an RTX 3070 / Ti (I am on a 2060 Super now).

I would ignore any card with 8 GB of VRAM that costs over 350 USD.
 
That's great news, I went with NVIDIA mostly for that. Any idea how the RX 6000 series encoder compares with Turing? I might go with the 7000 series for my next GPU, but I was tempted to go with a 6800 XT or an RTX 3070 / Ti (I am on a 2060 Super now).

Turing and Ampere have an identical 5th-gen NVENC; on Ampere only NVDEC was updated to add AV1 decode. It walks all over everything on RDNA 2 and earlier, even after recent software-side improvements from AMD (the AMF stack was largely unmaintained for a long time). If you want to purchase an AMD card and care about this, buy at least the 7900 XT right now.
 
Then what data is relevant, and why? And explain how averaging out performance across various games is irrelevant. That's literally what we do for any benchmark ever: CPU benchmarks, GPU benchmarks, everything.

Why would RT performance be the only one that differs?
The relevant data is the actual FPS in the actual games people want to play in the near future / when buying their new GPU. It's that simple. If you go by averages on a benchmark suite, you get an average. If you go on a per-title basis, the differences are huge and can indeed represent the gap between playable and not playable. This is where Nvidia's TLC to RT games pays off. They control the narrative. AMD just tosses a chip at it and prays it'll work out for them.

CPUs and gaming are much the same. If you go by averages, reviews will tell you just about nothing, but if you pick the outliers on both ends of the spectrum, you get interesting views on CPU capability in specific scenarios. This is especially true in the case of the X3Ds. They don't exactly pay off in every game, but when they do, it's a difference that matters.

You and I may see and frown upon RT's current state on any GPU, but others can have a different view. I can totally get the argument of wanting to experience new technology, and in doing so, also having it at playable frame rates. It's just not for me. But if it were ten years ago, I might have fallen for it too, who knows. Back then I upgraded bi-yearly to keep up with the latest...

Also, we might have a view on where RT is going, but that's not the view of the average customer, or just people who want to play games. They really don't know that UE's latest might change the landscape a bit. And let's be honest here, neither do we; we're just making an assumption based on experience. At the same time... if multiple RT effects strain AMD's hardware harder while they don't strain Nvidia's implementation, why would that picture change in the near future? Because it comes out of UE and is more hardware-agnostic? Are we really thinking Nvidia won't special-sauce their special cores to make them work just as well? Is there an AMD push towards the engines used for console game development that nudges them towards AMD's approach, or at least tries to optimize for it?
 
It doesn't really matter.

If you want to use RT, get NVIDIA; there's little reason to buy AMD at any tier if you care about RT. Raster performance is more than sufficient with either card, and efficiency/feature set swings towards NVIDIA.

If you want 900 FPS instead of 850 FPS in CS, get the 7900 XTX.

Fortnite uses a hybrid RT approach that's intentionally light, because the vast majority of players use consoles, with relatively weak hardware. It looks good, and is impressive, but is by no means a demanding RT workload.

Hopefully AMD will actually become a software company too, take parity seriously, and start innovating in feature set again, but judging by the progress of their equivalents, they seem to perpetually be playing catch-up, in hardware too. Whatever your opinion on the current state of RT, it's the future, and all game engines will be moving towards a more complete RT basis. Even with a process advantage, AMD has only ever been on par at best in the GPU arena (at least in the past five or so generations), and it's pretty shocking to see Intel come out swinging in their first GPU generation with an RT implementation that's on par or better, excellent encode/decode hardware, and workable raster performance; AMD has had issues with the first two for three generations now. It shows what a motivated and funded team can do if their leadership sets the right goals.

AMD needs to make a decision: be the perpetual underdog with 10% market share but actually offer cheap pricing (unlike whatever their current plan is), OR commit to software and hardware parity (on time, not a year or two after NVIDIA releases something, which is then on gen 2 or 3 by the time AMD puts out their equivalent) and stop releasing products that are "almost as good" for a little bit cheaper.

I mean, for crying out loud, AMD's software team is so small they can't even seem to work on drivers for their current and previous generations concurrently.
 
Fortnite uses a hybrid RT approach that's intentionally light, because the vast majority of players use consoles, with relatively weak hardware. It looks good, and is impressive, but is by no means a demanding RT workload.

Whether it's demanding or not, who cares if it looks fine? That game proves it's possible to add all the RT features you want and still get playable framerates.
 
Whether it's demanding or not, who cares if it looks fine? That game proves it's possible to add all the RT features you want and still get playable framerates.
It's nowhere near all the RT features you want, and their software-based approach to maintain compatibility with the RDNA2-based consoles is actually pretty taxing considering the RT level offered, which is minimal. Bear in mind Epic Games has a lot of money to throw at the problem of most of their player base having marginal RT hardware but a relatively powerful CPU; this isn't the case for your average game studio, which basically has to choose between no RT and some level of real RT.

It's OK, I get it. You have an AMD GPU so for you it's gonna be a "who cares".
 
It's OK, I get it. You have an AMD GPU.
No, you're just wrong.

It's a software implementation only on consoles; on PC it is hardware-accelerated. It has ray-traced reflections, AO, and GI; what more do you want? It's the exact same feature set a game like Cyberpunk 2077 currently has. I know the Nvidia mindshare is devastating, but is it really that hard to accept that RT can be done on both vendors without running like crap when it's hardware-agnostic?
 