RX 7900 XTX vs. RTX 4080

Not so sure about that, looks like they're now pushing for the full path tracing stuff which is even heavier than regular ray tracing.

It looks like their end game is to make everything literally unplayable without upscaling and frame interpolation.

Well, at least until the 6090 comes out, a card I will be buying for its name alone. :laugh:
 
Not so sure about that, looks like they're now pushing for the full path tracing stuff which is even heavier than regular ray tracing.

It looks like their end game is to make everything literally unplayable without upscaling and frame interpolation.
I would not care as long as RT in a game can be switched off.
 
The 7900 XTX's RT performance is good. It's comparable to Nvidia's 30 series.
That's fine.
I like the XTX, but it's priced poorly. AMD doesn't seem to realize that you can't compete with the 4080 when you can't offer the 4080's performance AND features at similar levels.

Better raster numbers alone don't mean much to me when I want RT, CUDA, NVENC, DLSS 3, etc.
It's April and FSR 3 is still in the oven...
 
I like the XTX, but it's priced poorly. AMD doesn't seem to realize that you can't compete with the 4080 when you can't offer the 4080's performance AND features at similar levels.

It is cheaper, is it not? I really don't get what you people expect from AMD. Make the thing 50% cheaper because the 4080 is 16% faster in RT?
 
It is cheaper, is it not? I really don't get what you people expect from AMD. Make the thing 50% cheaper because the 4080 is 16% faster in RT?

No, because they don't offer CUDA, NVENC, DLSS/DLSS 3, or comparable average performance in professional apps.
And it's not 16%. It's way more than that.
 
No, because they don't offer CUDA, NVENC, DLSS/DLSS 3, or comparable average performance in professional apps.

No what?

What do you use CUDA for? I don't know of any professional app that does not support AMD cards; they're typically hardware agnostic, even if they have a CUDA backend. AMD's VCN is also comparable to NVENC.

And it's not 16%. It's way more than that.
That's not what TPU's benchmarks say. And regardless, things like UE's implementation of ray tracing prove that the RT acceleration hardware really isn't that different in capability under a vendor-agnostic implementation.
 
No, because they don't offer CUDA, NVENC, DLSS/DLSS 3, or comparable average performance in professional apps.
And it's not 16%. It's way more than that.
As you said, in the professional world you don't use consumer cards! You use professional GPUs, lol
 
No, because they don't offer CUDA, NVENC, DLSS/DLSS 3, or comparable average performance in professional apps.
And it's not 16%. It's way more than that.
How is AMD going to use CUDA or DLSS 3 or NVENC? These are NV proprietary and cannot be used on AMD hardware.
Heck, you can't even use DLSS 3 on 3000 series NV cards, and you want AMD to be able to use it?
It's like saying AMD should use Raptor Lake cores in their processors. LOL
 
As you said, in the professional world you don't use consumer cards! You use professional GPUs, lol
Absolutely not true, actually. The vast (really vast) majority of content creators use the RTX line of GPUs.

No, because they don't offer CUDA, NVENC, DLSS/DLSS 3, or comparable average performance in professional apps.
And it's not 16%. It's way more than that.
Whoever claims that the difference in RT is 16% should just be automatically ignored. I'm pretty convinced they don't believe it themselves; they're doing damage control.

They're in the same boat as people who want to compare CPUs at 8K resolution.
 
Whoever claims that the difference in RT is 16% should just be automatically ignored. I'm pretty convinced they don't believe it themselves; they're doing damage control.

So TPU is lying in their benchmarks?
 
So TPU is lying in their benchmarks?
Would I be lying if I told you the difference in games between a 7950X3D and an i3-8100 was 1%? Not really; it's just that I was testing at 8K.

Not lying doesn't make the data relevant. When you're using games with minimal RT effects, the difference in RT will be minimal. I don't believe you don't understand this simple concept; I think you're trolling on purpose. I don't accept that there's a single human being out there who doesn't get that.
 
As you said, in the professional world you don't use consumer cards! You use professional GPUs, lol
We have Quadros and RTX Axxxx cards at the company I work for.
Sometimes I use my desktop for processing as well. And yes, the software supports CUDA acceleration, and it makes a big difference in most workloads (but not all).

How is AMD going to use CUDA or DLSS 3 or NVENC? These are NV proprietary and cannot be used on AMD hardware.
Heck, you can't even use DLSS 3 on 3000 series NV cards, and you want AMD to be able to use it?
It's like saying AMD should use Raptor Lake cores in their processors. LOL

They should have a CUDA alternative. They don't. (They do have one, but it's not supported by the software we use.)
They don't have a DLSS 3 alternative yet, and it's been months since the release.
They don't have an NVENC alternative as such, though they did add AV1 in the 7000 series, same as Nvidia.
 
Not lying doesn't make the data relevant.
Then what data is relevant, and why? And explain how averaging performance across various games is irrelevant. That's literally what we do for every benchmark ever: CPU benchmarks, GPU benchmarks, everything.

Why would RT performance be the only metric that's treated differently?
 
What data is relevant, and why?
Ideally, if you want to measure just RT performance, then obviously use something that runs just RT. There is a benchmark for that, and the differences are vast in that one.

Just to clarify: it is true that the average difference in RT games is 16%, and I have no problem with that statement. But that's different from claiming the difference between the cards is 16%; that's an absolute lie. Say game X is released tomorrow with the full RT pack. Easy money would be on the difference being 50%, not 16%. Would you bet your money on 16%?
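To put rough numbers on that point, here's a quick sketch with made-up per-game deltas (hypothetical figures only, not TPU data), showing how an average dominated by light-RT games can land around 16% while the full-RT titles sit near 50%:

```python
# Hypothetical per-game RT performance deltas (4080 ahead of 7900 XTX), in percent.
# All figures are made up for illustration -- not TPU data.
light_rt = [3, 4, 5, 6, 5, 4, 5, 5, 5]   # games with only a few RT effects
heavy_rt = [45, 50, 55]                  # "full RT pack" titles

all_games = light_rt + heavy_rt
mean_all = sum(all_games) / len(all_games)
mean_heavy = sum(heavy_rt) / len(heavy_rt)

print(f"average over all games tested: {mean_all:.0f}%")   # 16%
print(f"average over heavy-RT titles:  {mean_heavy:.0f}%")  # 50%
```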
 
Ideally, if you want to measure just RT performance, then obviously use something that runs just RT. There is a benchmark for that, and the differences are vast in that one.

Why aren't we doing this for everything else, then? Why don't we ignore every game benchmark ever and simply look at whatever the heaviest GPU or CPU synthetic benchmark is?

Say game X is released tomorrow with the full RT pack. Easy money would be on the difference being 50%, not 16%. Would you bet your money on 16%?

My money would be on a figure closer to 16% than to 50%, because that's going to be closer to the mean.
 
They should have a CUDA alternative. They don't. (They do have one, but it's not supported by the software we use.)
They don't have a DLSS 3 alternative yet, and it's been months since the release.
They don't have an NVENC alternative as such, though they did add AV1 in the 7000 series, same as Nvidia.
That is not what you said previously. You said AMD does not offer CUDA, DLSS 3 and NVENC, which basically means you want AMD to have those things as they are, implemented into AMD's products. That is simply impossible.
NVENC is proprietary, and AV1 is the commonly used open alternative. It is like G-Sync and adaptive sync: you can't use G-Sync if you don't have a monitor supporting it and hardware in the GPU to support G-Sync. The alternative to that is FreeSync. That is the simplest way I can explain it.
DLSS 3 is proprietary to Nvidia and Ada GPUs only. AMD is working on an alternative, just like they did with DLSS 2 and FSR 2. From my standpoint, DLSS 3 is a lost cause, because you don't even know if it will be used on the next generation of graphics cards from Nvidia. What if you get DLSS 4 for new cards only? Do you think devs will implement DLSS 2 and DLSS 3 and DLSS 4 then? If they do, you and everyone else will pay an additional cost for it when you purchase the game. Do you want that? Why was Nvidia able to release DLSS 3 so quickly? Well, only a fraction of Nvidia GPUs can use it, so optimizing for one generation of cards is probably easier than for all of them, or a broader range. Still, not many Ada GPUs have been sold, due to their price in comparison with other generations. Why should AMD even bother rushing something out as soon as possible when they don't even know whether DLSS 3 will keep working on future GPUs or be replaced with something else?
With CUDA, OpenCL was the alternative, but it was abandoned. Why was it abandoned? Because AMD did not see any reason to continue with it. You don't need CUDA to get the job done; AMD can do the same things without that acceleration. Plus, what is CUDA to you and a computer you game on? Do you really need it, or are you buying into NV marketing? Game devs don't use it. Even if you do have applications using CUDA, it is not a game changer whether you have it or not.
Just be realistic and stop citing technologies that are proprietary to one product line and irrelevant across the board.
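For what it's worth, a lot of modern compute stacks already paper over the vendor split. A minimal sketch, assuming a PyTorch install (the ROCm build exposes AMD GPUs through the same torch.cuda API as the CUDA build, so the same code runs on either vendor):

```python
import torch

# The same code path targets whichever GPU the installed build exposes.
# On the CUDA build that's an Nvidia card; on the ROCm build,
# torch.cuda.is_available() reports AMD GPUs through the HIP backend.
device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b  # matrix multiply runs on whichever device was selected

name = torch.cuda.get_device_name(0) if device == "cuda" else "CPU"
print(f"ran on {device} ({name})")
```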
 
Why aren't we doing this for everything else, then? Why don't we ignore every game benchmark ever and simply look at whatever the heaviest GPU or CPU synthetic benchmark is?



My money would be on a figure closer to 16% than to 50%, because that's going to be closer to the mean.
No it's not. The average in games with the full RT pack is definitely not 16%, lol.

I mean, look at Hogwarts or Cyberpunk. There is a very clear pattern; I can't accept that you don't see it. More RT effects = a bigger difference. How do you account for that if the difference is just 16%?
 
No it's not. The average in games with the full RT pack is definitely not 16%, lol.

Games with "full RT" pack are not the norm. Plus how do you explain that in a game like Fortnite which also has RT reflections + RT GI and AO like cyberpunk the difference is nowhere near 50%.

But anyway, you've just said there's no point in looking at games and that ideally you'd just look at benchmarks that use the most RT, so games like Hogwarts or Cyberpunk constitute irrelevant data. And you still haven't explained why we aren't doing the same for everything else: why is ray tracing the only metric where we should ignore games and focus on synthetics?
 
Games with "full RT" pack are not the norm. Plus how do you explain that in a game like Fortnite which also has RT reflections + RT GI and AO like cyberpunk the difference is nowhere near 50%.

But anyway, you've just said there's no point in looking at games and that ideally you'd just look at benchmarks that use the most RT, so games like Hogwarts or Cyberpunk constitute irrelevant data. And you still haven't explained why we aren't doing the same for everything else: why is ray tracing the only metric where we should ignore games and focus on synthetics?
I remember the NV fan club always bringing up the argument "can you play 3DMark?" when AMD had the upper hand, or "nobody plays Strange Brigade" (among other games) when AMD did well. Nowadays it's back to synthetics, since NV has the upper hand in those, and to selective games as usual, since those are the 'real RT' games. Not convinced, to be fair, but maybe that's just me.
The catch is that most of those games 'we should only look at' are NV sponsored. Bummer.
 
I simply took a look at fevgatos' system specs, saw what GPU he has, and immediately understood why he's a proponent of 'Full RT' (whatever that means to him).
 
I simply took a look at fevgatos' system specs, saw what GPU he has, and immediately understood why he's a proponent of 'Full RT' (whatever that means to him).
I've seen him in a lot of threads and I can't take anything he says seriously now.
 
The 4080 is more power efficient, and its ray tracing performance is simply superior, with the 7900 XTX being in line with the almost three-year-old RTX 3090. That is to say, it's by no means bad, but it will lose the showdown against Ada Lovelace. Nvidia's drivers are also in much better shape than AMD's right now, especially for this new hardware.

On the flip side, the 7900 XTX is cheaper, has a lot more memory, more raster performance, and superior Linux drivers. Note I did not mention AMD's traditionally poor video encoder performance: AMD has managed to get that in order, so it's really not a concern any longer. If you do heavy encoding work, the dual NVENC encoders on the 4080 might be a plus, but for someone who's recording gameplay and streaming, the 7900 XTX has closed the gap.

If you're a Windows gamer and can spare the extra money, the RTX 4080 is a no-brainer: it's just a better product. If you're a Linux guy, don't even look at Nvidia; you'll dread the day you decided to purchase the card. Either way you go, you will have a great experience in newer games.
Is the 7900 XTX encoder as good as Nvidia's at low bitrates, like for streaming on Twitch and stuff?
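If you want to compare the two encoders yourself, here's a minimal sketch of a side-by-side test, assuming an ffmpeg build with both vendors' hardware H.264 encoders compiled in (the clip name and bitrate below are placeholders):

```python
import subprocess

# Hypothetical side-by-side encode: same source clip, same target bitrate,
# one pass through each vendor's hardware H.264 encoder. Encoder names are
# the ones recent ffmpeg builds use; "gameplay.mp4" is a placeholder input.
ENCODERS = {
    "nvidia": "h264_nvenc",  # Nvidia NVENC
    "amd": "h264_amf",       # AMD VCN, exposed through AMF
}

def encode(src: str, encoder: str, out: str, bitrate: str = "6M") -> None:
    """Encode src to out at a fixed bitrate using the given hardware encoder."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-c:v", encoder, "-b:v", bitrate, "-an", out],
        check=True,
    )

for vendor, enc in ENCODERS.items():
    encode("gameplay.mp4", enc, f"out_{vendor}.mp4")
```

Comparing the two outputs by eye, or with a quality metric tool, is how most low-bitrate streaming comparisons are done.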
 
I simply took a look at fevgatos' system specs, saw what GPU he has, and immediately understood why he's a proponent of 'Full RT' (whatever that means to him).
I'm not a proponent of full RT at all. I'm saying it's delusional to think the difference is 16%.

If you think RT is complete trash, sure, I won't argue. But let's not pretend the difference is small, because it just isn't.
 
I'm saying it's delusional to think the difference is 16%.

No, what you're saying is delusional, and I'll prove how devastatingly unintelligent your line of thought is, since you couldn't come up with an explanation for your nonsensical claims:

(attached benchmark screenshot)


Look how much faster the 7900 XTX is than a 4090. Clearly, since this is the largest raster performance delta between these two GPUs that I could find, it means TPU's own assessment that the 4090 is 22% faster in raster is irrelevant.

Turns out the 7900 XTX is actually 27% faster than a 4090, obviously, and every other benchmark is wrong/irrelevant/delusional. Would you fully agree with this statement?
 