
AMD RDNA4 Architecture to Build on Features Relevant to Gaming Performance, Doesn't Want to be Baited into an AI Feature Competition with NVIDIA

How about TPU's numbers? Let's ignore those as well, right?
We could try not to turn every thread into an us-vs-them brand debate, but you're not into that, eh?
 
The problem is, the 7900xt should be beating the 4070Ti across the board
It does, by roughly 10%; look up the reviews on TPU. Sometimes it's a lot more at 4K, as shown in that example. So of course it's more expensive; it also has a lot more VRAM, and that stuff isn't given out for free.

But "muh RT", you will say. Sure, it's a bit faster in RT, but it doesn't really matter, because in order to get playable framerates you'll need upscaling anyway.

it also consumed a truckload more power.
not only consuming 30% more power
30W isn't a "truckload" nor is it 30% more you mathematical prodigy, though I am sure in your view even 1W more would be a truckload because you are obsessively trying to harp on AMD over any insignificant difference.



Now that I realize it, Nvidia somehow managed to make a chip that has 20 billion fewer transistors on a newer node pull almost as much power as Navi 31. Amazing stuff.
 
It does, by roughly 10%; look up the reviews on TPU. Sometimes it's a lot more at 4K, as shown in that example. So of course it's more expensive; it also has a lot more VRAM, and that stuff isn't given out for free.

But "muh RT", you will say. Sure, it's a bit faster in RT, but it doesn't really matter, because in order to get playable framerates you'll need upscaling anyway.



30W isn't a "truckload" nor is it 30% more you mathematical prodigy, though I am sure in your view even 1W more would be a truckload because you are obsessively trying to harp on AMD over any insignificant difference.


Now that I realize it, Nvidia somehow managed to make a chip that has 20 billion fewer transistors on a newer node pull almost as much power as Navi 31. Amazing stuff.
The fact that you are misquoting numbers on power draw tells me all I need to know. Maximum power draw is completely useless. In games, as per TPU, the 7900 XT draws 50 watts more. In basic video playback it consumes 400% (lol) more power. 400 freaking percent. That number is insane.

(Chart: ray tracing power consumption)
 
I use Nvidia RTX and I don't give a f about ray tracing: it gimps performance while delivering pretty much nothing you will actually see when you play instead of standing still. DLSS and DLDSR are the best features of RTX for me, and let's be honest, they could probably have been done without dedicated hardware.

RT is mostly good for screenshots, because without the absolute most expensive card people won't be using it anyway, unless they think 30 fps is great. Hell, in some games it even feels like there's additional processing lag when RT is enabled, even when the fps is "decent". I think it's a gimmick and I hope AMD will be very competitive in raster perf going forward. A LOT of people don't care about RT, and 1440p is still the sweet spot and will be for a long time; this is where AMD shines as well. The 7900 XTX already bites the 4090 in the butt in some games with raster only at 1440p. This is why I'm considering going AMD next time.

And FYI, AMD has 15-16% dGPU market share, and that is on Steam, so it's probably more, plus 100% of the console market.

You are assuming you are the reference for the entire market? :roll:

And FYI https://overclock3d.net/news/gpu_di..._all-time_low_in_q3_2022_-_nvidia_dominates/1
 
Well it doesn't really matter whether you think it's worth it or not. Why is it okay for the 7900xt not only to be 15% more expensive at launch, not only consuming 30% more power, but also getting its ass handed to it in rt?
Actually, it does matter what people think. Why, you ask? Because we are the ones buying these cards. You need to look closer at those numbers you have given, because they do not seem correct. You keep arguing about RT as if it makes raster irrelevant, omitting raster from your calculations. Rasterization is literally the core of gaming nowadays, not RT.
Also, there are games with RT where the 7900 XTX is faster than a 4070 Ti, like Forspoken and The Callisto Protocol, both at 4K, but I'm guessing these games are not good for evaluating RT performance, right?
 
Maximum power draw is completely useless.
Lmao, no it's not. What a laughable statement.

In games, as per TPU, the 7900 XT draws 50 watts more.
No, it's not "in games", it's in ray tracing games, as it's clearly labeled on the chart. Knowing how I typically prove you wrong on every occasion you didn't thought I'd notice that ? Here is the correct chart for that, not that it matters much, you're still wrong, 273 to 321 is 17% more not 30%, just say you're bad at math, it's understandable.

(Chart: gaming power consumption)
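For anyone who wants to sanity-check the arithmetic instead of taking either of our words for it, here is a minimal Python sketch using only the wattage figures already quoted in this thread (the 273/321 pair above and the 284/320 gaming-load pair mentioned elsewhere); nothing here comes from TPU's raw data:

```python
# Percentage-increase check on the power figures quoted in this thread.
def pct_increase(base: float, new: float) -> float:
    """How much bigger `new` is than `base`, in percent."""
    return (new - base) / base * 100.0

print(f"{pct_increase(273, 321):.1f}% more")  # ~17.6%, nowhere near 30%
print(f"{pct_increase(284, 320):.1f}% more")  # ~12.7% for the gaming-load figures
```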


Also, do you want to know why the 4070 Ti draws less power in ray tracing games and the 7900 XT doesn't? It's because it's clearly bottlenecked by something else, likely memory bandwidth. What a great, well-balanced architecture Nvidia designed lol.

In basic video playback it consumes 400% (lol) more power. 400 freaking percent. That number is insane.

Yeah bro maximum power draw is useless but I am sure everyone is picking a 4070ti over a 7900XT because of the video playback power draw, duh, now that's important stuff right there. Do you know what the power draw is when you use notepad ? I reckon that's even more important.

Clutching at straws much ?

The Callisto Protocol, both at 4K, but I'm guessing these games are not good for evaluating RT performance, right?
Duh, obviously.
 
Can't see the problem. Here in Australia, the 7900XT seems to be cheaper than the 4070ti, and as far as TPU's own reviews of the two cards are concerned, I can't see the 7900XT being destroyed by the 4070ti anywhere - in fact, the average gaming framerate chart at the end shows the 7900XT above the overclocked 4070ti at every resolution (raster)...

...unless you meant RT, or video compression, or DLSS, or something else - but you didn't say any of that.
I have a 7900 XT and do not miss my 6800 XT in any way. Take that for what it is. Before people talk about the XT: is $400 worth it to you? Not to me.
 
Beats it by 10% but until a few days ago it was 14% more expensive, while massively losing in rt and consuming 30% more power. Great deal my man
It has always been pretty much the same price where I am. 7900XT is like 5% more expensive, but has ~10% more performance and twice the memory with much higher bandwidth.

Most people don't really care about RT at all. I am using RTX, but I would take more raster perf any day over RT perf, which is a joke in most games unless you buy the absolute most expensive GPU; otherwise you will be looking at crappy framerates with RT on anyway.


The 4070 Ti's low bandwidth shows the higher the resolution goes; 504 GB/s is pretty low for a $799 GPU, and 12GB of VRAM is also low in 2023.

It does not consume 30% more, haha. It consumes 12% more. 284 vs 320 watts in gaming load. And when you actually overclock those cards, 7900XT will perform even better, because Nvidia always maxes out their cards and chips so overclocking will net you a few percent performance at most.

7900 XT will age better for sure, you will see in 1-2 years. In some games, it's on par with 4080, just like 7900XTX beats 4090 in some games. In RT, no, but RASTER yes. Once again, pretty much no-one cares about ray tracing at this point. It's a gimmick and I will take 100% higher fps any day over RT.

Why are you even talking about power consumption when you have a 4090, which spikes at 500+ watts? It has terrible performance per watt and laughable performance per dollar.

Actually 4090 is the worst GPU ever in terms of perf per dollar.


:roll:

And a 4080 Ti and 4090 Ti will probably release very soon, making the 4090 irrelevant, just like last time.
 
Let me put this to you:

What if I were to combine AI art generation and ChatGPT's natural language interface with something like Unreal Engine 5? (We really are not far away from this at all; all the pieces exist, it just takes somebody to bring it all together.)

What if you could generate entire environments just by telling an AI to "show me the bridge of the Enterprise"?
If you can't see the potential and the way the winds are shifting, may our soon-to-exist AI god have mercy on your fleshy soul.
A lot of people are excited about "AI democratizing creative/technical jobs", but don't realize that it's also going to oversaturate the market with low-effort content. We already find faults in stuff that takes a lot of money and willpower to make; AI-generated content is just going to produce more of them.

We need to be careful about how we use that tool (which is becoming more than a tool). A few generations down the line, we might just end up with a society addicted to instant results and less interested in learning things. Studies show that Gen Z is really not that tech literate... because they don't need to understand how something actually works to use it; it's been simplified that much.
So in that sense I like AMD's statement: we don't need to use AI for every little thing. It's a wonderful thing for sure, but overusing it might also have bad consequences.
 
AMD:
AMD said that it didn't believe that image processing and performance-upscaling is the best use of the AI-compute resources of the GPU

Also AMD:

AMD FidelityFX™ Super Resolution (FSR) uses cutting-edge upscaling technologies to help boost your framerates in select titles and deliver high-quality, high-resolution gaming experiences, without having to upgrade to a new graphics card.



I assume they mean they no longer want us to have high quality without having to upgrade to a new GPU. From a sales perspective it makes sense.
 
twice the memory with much higher bandwidth.
Twice the memory, and it's still not utterly obliterating the 4070 Ti? What is going on here? It's as if bandwidth and memory size are massively overhyped (mainly by the AMD crowd).
 
Twice the memory, and it's still not utterly obliterating the 4070 Ti? What is going on here? It's as if bandwidth and memory size are massively overhyped (mainly by the AMD crowd).

Twice the memory, for longevity

3080 10GB is already in trouble for 3440x1440 users

Nvidia gimping makes people upgrade faster, smart tactic

You are using 2060 6GB which barely does 1080p today, 2060 "Super" came out for a reason, now with 8GB VRAM :laugh:

You also bought into "Intel 6C/6T is enough", I see; sadly, 6C/6T chips are choking only a few years later. 6C/12T is the bare minimum for proper gaming, just like 8GB VRAM is the bare minimum for 1440p and up.

AMD generally gives you better longevity than both Nvidia and Intel, wake up and stop being milked so hard


 
Twice the memory, and it's still not utterly obliterating the 4070 Ti? What is going on here? It's as if bandwidth and memory size are massively overhyped (mainly by the AMD crowd).

If only you had some technical knowledge on the matter, you'd understand how this works.

The 4070 Ti has less memory bandwidth but a lot more L2 cache. L2 is going to be faster than the L3 AMD has on its GPUs, but then AMD also increased the memory bandwidth this generation as well. In other words, bandwidth absolutely does matter; that's why Nvidia had to increase the L2 cache in the first place. However, more cache is not a complete substitute for VRAM bandwidth: the 4070 Ti falls off at 4K relative to lower resolutions, and the only explanation for that is in fact the lack of memory bandwidth, and possibly memory capacity as well, depending on the game.
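To put rough numbers on the cache-versus-bandwidth point, here is a toy "effective bandwidth" sketch in Python. The hit rates are illustrative guesses (not measurements), the 504 GB/s figure is the one quoted earlier in the thread, and the 800 GB/s for the 7900 XT is that card's spec as I understand it:

```python
# Toy model: with a large on-die cache, VRAM only has to service the misses,
# so the same bus can feed more total traffic. Hit rates tend to fall as the
# working set grows with resolution, which is where a narrow bus hurts most.
def effective_bandwidth(vram_gbps: float, hit_rate: float) -> float:
    return vram_gbps / (1.0 - hit_rate)

CARDS = {"4070 Ti": 504.0, "7900 XT": 800.0}  # raw VRAM bandwidth in GB/s

for res, hit in [("1440p", 0.55), ("4K", 0.35)]:  # assumed hit rates, illustration only
    summary = ", ".join(f"{name}: ~{effective_bandwidth(bw, hit):.0f} GB/s"
                        for name, bw in CARDS.items())
    print(f"{res} (assumed hit rate {hit:.0%}): {summary}")
```

The exact numbers don't matter; the point is that the benefit of the big cache shrinks as the hit rate drops, so the card with the narrower bus loses more headroom at 4K.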
 
Lmao, no it's not. What a laughable statement.


No, it's not "in games", it's in ray tracing games, as it's clearly labeled on the chart. Knowing how I typically prove you wrong on every occasion you didn't thought I'd notice that ? Here is the correct chart for that, not that it matters much, you're still wrong, 273 to 321 is 17% more not 30%, just say you're bad at math, it's understandable.


Also, do you want to know why the 4070 Ti draws less power in ray tracing games and the 7900 XT doesn't? It's because it's clearly bottlenecked by something else, likely memory bandwidth. What a great, well-balanced architecture Nvidia designed lol.



Yeah bro maximum power draw is useless but I am sure everyone is picking a 4070ti over a 7900XT because of the video playback power draw, duh, now that's important stuff right there. Do you know what the power draw is when you use notepad ? I reckon that's even more important.

Clutching at straws much ?


Duh, obviously.
Of course maximum power draw is absolutely useless. Card A has 400w max power draw and 200w average, card B has 280w max and 250w average. Card A is clearly absolutely unarguably better at power draw. You can't even argue that.

So you proved me wrong by agreeing with me that the XT draws a lot more power. Great, and yes that's usually the case, you prove me wrong every single time by admitting that everything I said is absolutely the case. Gj, keep it up

Twice the memory, for longevity

3080 10GB is already in trouble for 3440x1440 users

Nvidia gimping makes people upgrade faster, smart tactic

You are using 2060 6GB which barely does 1080p today, 2060 "Super" came out for a reason, now with 8GB VRAM :laugh:

You also bought into "Intel 6C/6T is enough", I see; sadly, 6C/6T chips are choking only a few years later. 6C/12T is the bare minimum for proper gaming, just like 8GB VRAM is the bare minimum for 1440p and up.

AMD generally gives you better longevity than both Nvidia and Intel, wake up and stop being milked so hard


Yeah, that 6C/12T CPU that AMD launched in 2023 for $350 gives you great longevity over the 14 cores Intel offers. LOL
 
Thanks to Xilinx, AMD has the potential not only to match Nvidia in AI, but also to consume much less power and use less silicon (lower cost).

(Slide: AMD VCK5000)





What is being said is that they don't want to build this into the GPUs and force ordinary users to pay a lot more for something that can be adapted to run in regular shaders.
 
Card A has 400w max power draw and 200w average, card B has 280w max and 250w average.
That doesn't happen in the real world. Both AMD and Nvidia have very strict power limits; the 7900 XT has a 300W TBP limit, and its average and maximum power draw are, surprise surprise, about the same. Matter of fact, both the 4070 Ti and the 7900 XT have maximum and average power readings close to their respective limits.

Actually, if you'd use your head for a second, you'd realize that what you're saying is complete nonsense anyway. A GPU under load is typically facing 100% utilization; it makes no sense that a GPU with, let's say, a 300W TDP limit would ever average out at 200W with 400W maximum readings. It just wouldn't happen. As usual, your complete lack of understanding of how these things work prohibits you from ever making a coherent point.

But as I keep saying, none of that matters; you're just wrong, it doesn't use 30% more power. Do you not know how to read, or are you purposely ignoring this?

So you proved me wrong by agreeing with me that the XT draws a lot more power.
Completely delusional.
 
If only you had some technical knowledge on the matter, you'd understand how this works.

The 4070 Ti has less memory bandwidth but a lot more L2 cache. L2 is going to be faster than the L3 AMD has on its GPUs, but then AMD also increased the memory bandwidth this generation as well. In other words, bandwidth absolutely does matter; that's why Nvidia had to increase the L2 cache in the first place. However, more cache is not a complete substitute for VRAM bandwidth: the 4070 Ti falls off at 4K relative to lower resolutions, and the only explanation for that is in fact the lack of memory bandwidth, and possibly memory capacity as well, depending on the game.

That doesn't happen in the real world. Both AMD and Nvidia have very strict power limits; the 7900 XT has a 300W TBP limit, and its average and maximum power draw are, surprise surprise, about the same. Matter of fact, both the 4070 Ti and the 7900 XT have maximum and average power readings close to their respective limits.

Actually, if you'd use your head for a second, you'd realize that what you're saying is complete nonsense anyway. A GPU under load is typically facing 100% utilization; it makes no sense that a GPU with, let's say, a 300W TDP limit would ever average out at 200W with 400W maximum readings. It just wouldn't happen. As usual, your complete lack of understanding of how these things work prohibits you from ever making a coherent point.

But as I keep saying, none of that matters; you're just wrong, it doesn't use 30% more power. Do you not know how to read, or are you purposely ignoring this?


Completely delusional.
Absolutely wrong. My 4090 has a power limit of 520 watts. It can actually supposedly draw that much, but average in games is way lower than that. And of course that insane power draw on the 7900xt while just watching videos is irrelevant to you. 400% - 4 times as much power to play a YouTube video, no biggie I guess lol
 
AMD:
AMD said that it didn't believe that image processing and performance-upscaling is the best use of the AI-compute resources of the GPU

Also AMD:

AMD FidelityFX™ Super Resolution (FSR) uses cutting-edge upscaling technologies to help boost your framerates in select titles and deliver high-quality, high-resolution gaming experiences, without having to upgrade to a new graphics card.



I assume they mean they no longer want us to have high quality without having to upgrade to a new GPU. From a sales perspective it makes sense.

FSR does not use AI-processing resources.

You're welcome.
 
Of course maximum power draw is absolutely useless. Card A has 400w max power draw and 200w average, card B has 280w max and 250w average. Card A is clearly absolutely unarguably better at power draw. You can't even argue that.

So you proved me wrong by agreeing with me that the XT draws a lot more power. Great, and yes that's usually the case, you prove me wrong every single time by admitting that everything I said is absolutely the case. Gj, keep it up


Yeah, that 6C/12T CPU that AMD launched in 2023 for $350 gives you great longevity over the 14 cores Intel offers. LOL
The gaming power draw of the 7900 XT is 36W more, not a massive amount as you claim; a card that has more VRAM, higher bandwidth, and is faster will of course draw a bit more power.
I'm not sure why you even brought up CPUs, but launch prices are pointless; only people who always buy the latest thing care about launch prices. The 7600X has 6 performance cores and now sells for less than $250, while Intel is still charging over $300 for 6 performance cores, and you also get fewer upgrades from an Intel board.
 
My 4090 has a power limit of 520 watts. It can actually supposedly draw that much, but average in games is way lower than that.
Then it's not utilized 100%, it's as simple as that.

If you have a 300W average, a 300W limit and a 400W power maximum, then that means the contribution of that 400W maximum figure to the average is basically none whatsoever, and the power limit is doing its job as it's supposed to. That's why your example is dumb and nonsensical; this isn't a matter of opinion, you just don't know math.

There isn't a single card on those charts with a disparity between average and maximum that big. Nonetheless, going by the maximum is still preferable for some things, for instance choosing a power supply. You think it's useless because you simply don't know what you're talking about.
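If anyone wants to see why one brief spike barely moves the average, here is a trivial sketch with hypothetical 1 ms power samples (made-up numbers, purely to show the arithmetic):

```python
# A card parked at its ~300 W limit for one second, with a single 1 ms excursion to 400 W.
samples_w = [300.0] * 999 + [400.0]
print(f"max = {max(samples_w):.0f} W, average = {sum(samples_w) / len(samples_w):.1f} W")
# -> max = 400 W, average = 300.1 W
```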
 
Who really cares; they're both right and wrong. Besides upscaling, the ML hardware accelerators really are worthless for the consumer space, and at the same time they won't be used for anything else any time soon.




You're both beyond utterly wrong though; over 3 billion in revenue is not insignificant by any stretch of the imagination.


They've always made a huge chunk of their money from consumer products. Sadly for Nvidia, marketing and sponsorship deals don't work very well outside of the consumer market. You can't buy your way to success as easily, and you actually have to provide extremely competitive pricing, because ROI is critical to businesses as opposed to regular consumers, so you can't just price everything to infinity and provide shit value.

That was in 2021, when gaming GPU sales were being boosted significantly by crypto. Look at Nvidia's numbers for Q3 2022: gaming sales are only half of that. Gaming contributes less and less towards Nvidia's revenue.
 
I'm not sure I understand what this means.
He said "with the company introducing AI acceleration hardware with its RDNA3 architecture, he hopes that AI is leveraged in improving gameplay—such as procedural world generation, NPCs, bot AI, etc; to add the next level of complexity; rather than spending the hardware resources on image-processing."
Isn't that something that's up to the game and the CPU entirely?
Does it mean that they will now go towards a more brute-force approach, like full raster performance, instead of going down the AI path like Nvidia?
I'm confused
 
Then it's not utilized 100%, it's as simple as that.

If you have a 300W average, a 300W limit and a 400W power maximum, then that means the contribution of that 400W maximum figure to the average is basically none whatsoever, and the power limit is doing its job as it's supposed to. That's why your example is dumb and nonsensical; this isn't a matter of opinion, you just don't know math.

There isn't a single card on those charts with a disparity between average and maximum that big. Nonetheless, going by the maximum is still preferable for some things, for instance choosing a power supply. You think it's useless because you simply don't know what you're talking about.
Next thing, he is going to tell you he uses Vsync or a 60 fps frame cap. I've seen those users who claim that the 4090 is very efficient and uses very little power with Vsync or a frame cap enabled. Then they measure power consumption, and according to their calculation it is very efficient. Utter crap, but it is what it is. Countless posts like that everywhere.
Or even better: downclock it to 2000 MHz and then measure. But when they check how fast it can render, obviously there are no limits, and then they do not bring up the power consumption since it is irrelevant. :laugh:
 
Let me guess, when AMD introduced a 128MB stack of L3 ("Infinity Cache") to cushion the reduction in bus width and bandwidth, you hailed it as a technological breakthrough.
When Nvidia does the exact same thing with 48MB/64MB/72MB L2, you consider it "making the wrong bet". Okay.



In case you haven't noticed, a 530mm^2 aggregation of chiplets and an expensive new interconnect didn't exactly pass along the savings to gamers any more than Nvidia's 295mm^2 monolithic product did.
False arguments used: strawman, loaded question, black-or-white, ambiguity.

  • Strawman: presenting AMD's L3 cache as the same thing as Nvidia's L2.
  • Loaded question: claiming that I somehow "hailed" a performance enhancement to try to make me appear to either blindly support it or shy away, thus somehow "retracting" the statement about Nvidia's L2 cache.
  • Black-or-white: implying that my criticisms of Nvidia automatically make me blindly agree with any of AMD's actions.
  • Ambiguity: ignoring that AMD's L3 cache is cheaper to implement versus Nvidia's L2 cache using direct die space.
In short: AMD's implementations are more generic and better suited for long-term performance, whereas Nvidia's is a knee-jerk reaction to do everything it can to have the "best of the best" in order to play off the mindless sports mentality of "team green" versus "team red".
 