
AMD RDNA4 Architecture to Build on Features Relevant to Gaming Performance, Doesn't Want to be Baited into an AI Feature Competition with NVIDIA

Joined
Mar 10, 2010
Messages
11,878 (2.20/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
How about TPU's numbers? Let's ignore those as well, right?
We could try not to turn every thread into an us-vs-them brand debate, but you're not into that, eh?
 
Joined
Jan 8, 2017
Messages
9,505 (3.27/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
The problem is, the 7900 XT should be beating the 4070 Ti across the board.
It does, by roughly 10%; look up the reviews on TPU. Sometimes it's a lot more at 4K, as shown in that example. So of course it's more expensive; it also has a lot more VRAM, and that stuff isn't given out for free.

But "muh RT", you will say. Sure, it's a bit faster in RT, but it doesn't really matter, because you'll need upscaling to get playable framerates anyway.

it also consumed a truckload more power.
not only consuming 30% more power
30 W isn't a "truckload", nor is it 30% more, you mathematical prodigy. Though I'm sure that in your view even 1 W more would be a truckload, because you're obsessively trying to harp on AMD over any insignificant difference.

[attached chart: TPU maximum power consumption]


Now that I think about it, Nvidia somehow managed to make a chip with 20 billion fewer transistors on a newer node pull almost as much power as Navi 31. Amazing stuff.
 
Joined
Jun 14, 2020
Messages
3,530 (2.14/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
It does, by roughly 10%; look up the reviews on TPU. Sometimes it's a lot more at 4K, as shown in that example. So of course it's more expensive; it also has a lot more VRAM, and that stuff isn't given out for free.

But "muh RT", you will say. Sure, it's a bit faster in RT, but it doesn't really matter, because you'll need upscaling to get playable framerates anyway.



30 W isn't a "truckload", nor is it 30% more, you mathematical prodigy. Though I'm sure that in your view even 1 W more would be a truckload, because you're obsessively trying to harp on AMD over any insignificant difference.

[attached chart: TPU maximum power consumption]

Now that I think about it, Nvidia somehow managed to make a chip with 20 billion fewer transistors on a newer node pull almost as much power as Navi 31. Amazing stuff.
The fact that you are misquoting numbers on power draw tells me all I need to know. Maximum power draw is completely useless. In games, as per TPU, the 7900 XT draws 50 more watts. In basic video playback it consumes 400% (lol) more power. 400 freaking percent. That number is insane.

 
Joined
Oct 4, 2017
Messages
706 (0.27/day)
Location
France
Processor RYZEN 7 5800X3D
Motherboard Aorus B-550I Pro AX
Cooling HEATKILLER IV PRO , EKWB Vector FTW3 3080/3090 , Barrow res + Xylem DDC 4.2, SE 240 + Dabel 20b 240
Memory Viper Steel 4000 PVS416G400C6K
Video Card(s) EVGA 3080Ti FTW3
Storage XPG SX8200 Pro 512 GB NVMe + Samsung 980 1TB
Display(s) Dell S2721DGF
Case NR 200
Power Supply CORSAIR SF750
Mouse Logitech G PRO
Keyboard Meletrix Zoom 75 GT Silver
Software Windows 11 22H2
I use Nvidia RTX and I don't give a damn about ray tracing: it gimps performance while delivering pretty much nothing you'll actually notice while playing, as opposed to standing still. DLSS and DLDSR are the best features of RTX for me, and let's be honest, they could probably have been done without dedicated hardware.

RT is mostly good for screenshots, because without the absolute most expensive card people won't be using it anyway, unless they think 30 fps is great. Hell, in some games it even feels like there's additional processing lag when RT is enabled, even when the fps is "decent". I think it's a gimmick, and I hope AMD will be very competitive in raster performance going forward. A lot of people don't care about RT, and 1440p is still the sweet spot and will be for a long time; this is where AMD shines as well. The 7900 XTX already bites the 4090 in the butt in some games at 1440p raster. This is why I'm considering going AMD next time.

And FYI, AMD has 15-16% dGPU market share, and that's on Steam; it's probably more, plus 100% of the console market.

You are assuming you are the reference for the entire market? :roll:

And FYI https://overclock3d.net/news/gpu_di..._all-time_low_in_q3_2022_-_nvidia_dominates/1
 
Joined
May 31, 2016
Messages
4,440 (1.42/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 32GB G.Skill Flare X 3200 CL14 @3800Mhz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1v@2400Mhz
Storage M.2 Samsung 970 Evo Plus 500MB/ Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtec 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
Well it doesn't really matter whether you think it's worth it or not. Why is it okay for the 7900 XT not only to be 15% more expensive at launch, not only to consume 30% more power, but also to get its ass handed to it in RT?
Actually, it does matter what people think. Why, you ask? Because we are the ones buying these cards. You need to look closer at those numbers you've given, because they don't seem correct. You keep arguing that RT makes raster irrelevant, and raster gets omitted from your calculations. Rasterization is literally the core of gaming these days, not RT.
Also, there are games with RT where the 7900 XTX is faster than a 4070 Ti, like Forspoken and The Callisto Protocol, both at 4K, but I'm guessing those games are not good for evaluating RT performance, right?
 
Joined
Jan 8, 2017
Messages
9,505 (3.27/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
Maximum power draw is completely useless.
Lmao, no it's not. What a laughable statement.

In games, as per TPU, the 7900 XT draws 50 more watts.
No, it's not "in games", it's in ray tracing games, as it's clearly labeled on the chart. Knowing how I typically prove you wrong at every opportunity, did you think I wouldn't notice that? Here is the correct chart, not that it matters much; you're still wrong, 273 W to 321 W is 17% more, not 30%. Just say you're bad at math, it's understandable.

[attached chart: TPU gaming power consumption]


Also, do you want to know why the 4070 Ti draws less power in ray tracing games while the 7900 XT doesn't? It's because it's clearly bottlenecked by something else, likely memory bandwidth. What a great, well-balanced architecture Nvidia designed, lol.

In basic video playback it consumes 400% (lol) more power. 400 freaking percent. That number is insane.

Yeah bro, maximum power draw is useless, but I'm sure everyone is picking a 4070 Ti over a 7900 XT because of the video playback power draw, duh, now that's important stuff right there. Do you know what the power draw is when you use Notepad? I reckon that's even more important.

Clutching at straws much?

The Callisto Protocol, both at 4K, but I'm guessing those games are not good for evaluating RT performance, right?
Duh, obviously.
 
Joined
Jun 2, 2017
Messages
9,373 (3.39/day)
System Name Best AMD Computer
Processor AMD 7900X3D
Motherboard Asus X670E E Strix
Cooling In Win SR36
Memory GSKILL DDR5 32GB 5200 30
Video Card(s) Sapphire Pulse 7900XT (Watercooled)
Storage Corsair MP 700, Seagate 530 2Tb, Adata SX8200 2TBx2, Kingston 2 TBx2, Micron 8 TB, WD AN 1500
Display(s) GIGABYTE FV43U
Case Corsair 7000D Airflow
Audio Device(s) Corsair Void Pro, Logitch Z523 5.1
Power Supply Deepcool 1000M
Mouse Logitech g7 gaming mouse
Keyboard Logitech G510
Software Windows 11 Pro 64 Steam. GOG, Uplay, Origin
Benchmark Scores Firestrike: 46183 Time Spy: 25121
Can't see the problem. Here in Australia, the 7900 XT seems to be cheaper than the 4070 Ti, and as far as TPU's own reviews of the two cards are concerned, I can't see the 7900 XT being destroyed by the 4070 Ti anywhere - in fact, the average gaming framerate chart at the end shows the 7900 XT above the overclocked 4070 Ti at every resolution (raster)...

...unless you meant RT, or video compression, or DLSS, or something else - but you didn't say any of that.
I have a 7900 XT and do not miss my 6800 XT in any way. Take that for what it is. Before people pile on about the XT: is $400 worth it to you? Not to me.
 

las

Joined
Nov 14, 2012
Messages
1,693 (0.38/day)
System Name Meh
Processor 7800X3D
Motherboard MSI X670E Tomahawk
Cooling Thermalright Phantom Spirit
Memory 32GB G.Skill @ 6000/CL30
Video Card(s) Gainward RTX 4090 Phantom / Undervolt + OC
Storage Samsung 990 Pro 2TB + WD SN850X 1TB + 64TB NAS/Server
Display(s) 27" 1440p IPS @ 360 Hz + 32" 4K/UHD QD-OLED @ 240 Hz + 77" 4K/UHD QD-OLED @ 144 Hz VRR
Case Fractal Design North XL
Audio Device(s) FiiO DAC
Power Supply Corsair RM1000x / Native 12VHPWR
Mouse Logitech G Pro Wireless Superlight + Razer Deathadder V3 Pro
Keyboard Corsair K60 Pro / MX Low Profile Speed
Software Windows 10 Pro x64
Beats it by 10%, but until a few days ago it was 14% more expensive, while massively losing in RT and consuming 30% more power. Great deal, my man.
It has always been pretty much the same price where I am. The 7900 XT is like 5% more expensive, but it has ~10% more performance and twice the memory with much higher bandwidth.

Most people don't really care about RT at all. I'm using RTX, but I would take more raster performance any day over RT performance, which is a joke in most games unless you buy the absolute most expensive GPU; otherwise you'll be looking at a crappy framerate with RT on anyway.


The low bandwidth of the 4070 Ti shows the higher the resolution goes; 504 GB/s is pretty low for a $799 GPU, and 12 GB of VRAM is also low in 2023.

It does not consume 30% more, haha. It consumes 12-13% more: 284 vs. 320 watts under gaming load. And when you actually overclock those cards, the 7900 XT will perform even better, because Nvidia always maxes out its cards and chips, so overclocking will net you a few percent more performance at most.

The 7900 XT will age better for sure; you'll see in 1-2 years. In some games it's on par with the 4080, just like the 7900 XTX beats the 4090 in some games. Not in RT, but in raster, yes. Once again, pretty much no one cares about ray tracing at this point. It's a gimmick, and I'll take 100% higher fps any day over RT.

Why are you even talking about power consumption when you have a 4090 that spikes at 500+ watts? It has terrible performance per watt and laughable performance per dollar.

Actually, the 4090 is the worst GPU ever in terms of performance per dollar.


:roll:

And a 4080 Ti and 4090 Ti will probably release very soon, making the 4090 irrelevant, just like last time.
 
Joined
Oct 28, 2012
Messages
1,195 (0.27/day)
Processor AMD Ryzen 3700x
Motherboard asus ROG Strix B-350I Gaming
Cooling Deepcool LS520 SE
Memory crucial ballistix 32Gb DDR4
Video Card(s) RTX 3070 FE
Storage WD sn550 1To/WD ssd sata 1To /WD black sn750 1To/Seagate 2To/WD book 4 To back-up
Display(s) LG GL850
Case Dan A4 H2O
Audio Device(s) sennheiser HD58X
Power Supply Corsair SF600
Mouse MX master 3
Keyboard Master Key Mx
Software win 11 pro
Let me put this to you:

What if I were to combine AI art generation with ChatGPT's natural language interface and something like Unreal Engine 5? (We really are not far away from this at all; all the pieces exist, it just takes somebody to bring it all together.)

What if you could generate entire environments just by telling an AI to "show me the bridge of the Enterprise"?
If you can't see the potential and the way the winds are shifting, may our soon-to-exist AI god have mercy on your fleshy soul.
A lot of people are excited about "AI democratizing creative/technical jobs" without realizing that it's also going to oversaturate the market with low-effort content. We already find faults in work that takes a lot of money and willpower to produce; AI-generated content is just going to create more of them.

We need to be careful about how we use that tool (which is becoming more than a tool). A few generations down the line, we might end up with a society addicted to instant results and less interested in learning. Studies show that Gen Z is really not that tech-literate, because they don't need to understand how something actually works to use it; it has been simplified that much.
So in that sense I like AMD's statement: we don't need to use AI for every little thing. It's a wonderful thing for sure, but overusing it might also have bad consequences.
 
Joined
May 17, 2021
Messages
3,043 (2.31/day)
Processor Ryzen 5 5700x
Motherboard B550 Elite
Cooling Thermalright Perless Assassin 120 SE
Memory 32GB Fury Beast DDR4 3200Mhz
Video Card(s) Gigabyte 3060 ti gaming oc pro
Storage Samsung 970 Evo 1TB, WD SN850x 1TB, plus some random HDDs
Display(s) LG 27gp850 1440p 165Hz 27''
Case Lian Li Lancool II performance
Power Supply MSI 750w
Mouse G502
AMD:
AMD said that it didn't believe that image processing and performance-upscaling is the best use of the AI-compute resources of the GPU

Also AMD:​

AMD FidelityFX™ Super Resolution (FSR) uses cutting-edge upscaling technologies to help boost your framerates in select titles and deliver high-quality, high-resolution gaming experiences, without having to upgrade to a new graphics card.



I assume they mean they no longer want us to have high quality without having to upgrade to a new GPU. From a sales perspective, it makes sense.
 
Joined
Dec 6, 2018
Messages
342 (0.15/day)
Location
Hungary
Processor i5-9600K
Motherboard ASUS Prime Z390-A
Cooling Cooler Master Hyper 212 Black Edition PWM
Memory G.Skill DDR4 RipjawsV 3200MHz 16GB kit
Video Card(s) Asus RTX2060 ROG STRIX GAMING
Display(s) Samsung Odyssey G7 27"
Case Cooler Master MasterCase H500
Power Supply SUPER FLOWER Leadex Gold 650W
Mouse BenQ Zowie FK1+-B
Keyboard Cherry KC 1000
Software Win 10
twice the memory with much higher bandwidth.
Twice the memory, and it's still not utterly obliterating the 4070 Ti? What is going on here? It's as if bandwidth and memory size are massively overhyped (mainly by the AMD crowd).
 

las

Joined
Nov 14, 2012
Messages
1,693 (0.38/day)
System Name Meh
Processor 7800X3D
Motherboard MSI X670E Tomahawk
Cooling Thermalright Phantom Spirit
Memory 32GB G.Skill @ 6000/CL30
Video Card(s) Gainward RTX 4090 Phantom / Undervolt + OC
Storage Samsung 990 Pro 2TB + WD SN850X 1TB + 64TB NAS/Server
Display(s) 27" 1440p IPS @ 360 Hz + 32" 4K/UHD QD-OLED @ 240 Hz + 77" 4K/UHD QD-OLED @ 144 Hz VRR
Case Fractal Design North XL
Audio Device(s) FiiO DAC
Power Supply Corsair RM1000x / Native 12VHPWR
Mouse Logitech G Pro Wireless Superlight + Razer Deathadder V3 Pro
Keyboard Corsair K60 Pro / MX Low Profile Speed
Software Windows 10 Pro x64
Twice the memory, and it's still not utterly obliterating the 4070 Ti? What is going on here? It's as if bandwidth and memory size are massively overhyped (mainly by the AMD crowd).

Twice the memory, for longevity.

The 3080 10GB is already in trouble for 3440x1440 users.

Nvidia's gimping makes people upgrade faster; smart tactic.

You are using a 2060 6GB, which barely does 1080p today; the 2060 "Super" came out for a reason, now with 8GB VRAM :laugh:

You also bought into "Intel 6C/6T is enough", I see. Sadly, 6C/6T chips are choking only a few years on; 6C/12T is the bare minimum for proper gaming, just like 8GB VRAM is the bare minimum for 1440p and up.

AMD generally gives you better longevity than both Nvidia and Intel; wake up and stop being milked so hard.


 
Joined
Jan 8, 2017
Messages
9,505 (3.27/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
Twice the memory, and it's still not utterly obliterating the 4070 Ti? What is going on here? It's as if bandwidth and memory size are massively overhyped (mainly by the AMD crowd).

If you had some technical knowledge on the matter, you'd understand how this works.

The 4070 Ti has less memory bandwidth but a lot more L2 cache. L2 is going to be faster than the L3 AMD has on its GPUs, but AMD also increased the memory bandwidth this generation. In other words, bandwidth absolutely does matter; that's why Nvidia had to increase the L2 cache in the first place. However, more cache is not a complete substitute for VRAM bandwidth: the 4070 Ti loses more ground at 4K than it does at lower resolutions, and the only explanation for that is, in fact, the lack of memory bandwidth, and possibly memory capacity as well, depending on the game.
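To put the cache-versus-bandwidth argument in rough numbers, here is a toy model; the 504 GB/s comes from the 4070 Ti spec mentioned in this thread, while the cache bandwidth and hit rates are purely my own illustrative assumptions, not measurements:

Code:
# Toy model: effective bandwidth = hit_rate * cache_bw + (1 - hit_rate) * vram_bw.
# A fixed-size cache hits less often as the working set grows with resolution.
def effective_bandwidth(hit_rate, cache_bw_gbs, vram_bw_gbs):
    return hit_rate * cache_bw_gbs + (1.0 - hit_rate) * vram_bw_gbs

VRAM_BW = 504    # GB/s, 192-bit 4070 Ti-class bus (from the spec sheet)
CACHE_BW = 3000  # GB/s, hypothetical on-die cache bandwidth (assumption)

for res, hit in [("1080p", 0.60), ("1440p", 0.50), ("4K", 0.35)]:  # assumed hit rates
    print(f"{res}: ~{effective_bandwidth(hit, CACHE_BW, VRAM_BW):.0f} GB/s effective")
# The effective figure drops fastest at 4K, which is exactly where the 4070 Ti loses ground.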
 
Joined
Jun 14, 2020
Messages
3,530 (2.14/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
Lmao, no it's not. What a laughable statement.


No, it's not "in games", it's in ray tracing games, as it's clearly labeled on the chart. Knowing how I typically prove you wrong on every occasion you didn't thought I'd notice that ? Here is the correct chart for that, not that it matters much, you're still wrong, 273 to 321 is 17% more not 30%, just say you're bad at math, it's understandable.

View attachment 284697

Also do you want to know why the 4070ti is drawing less power in ray tracing games and the 7900XT doesn't ? It's because it's clearly bottlenecked by something else, likely memory bandwidth, what a great well balanced architecture Nvidia designed lol.



Yeah bro maximum power draw is useless but I am sure everyone is picking a 4070ti over a 7900XT because of the video playback power draw, duh, now that's important stuff right there. Do you know what the power draw is when you use notepad ? I reckon that's even more important.

Clutching at straws much ?


Duh, obviously.
Of course maximum power draw is absolutely useless. Card A has a 400 W maximum power draw and a 200 W average; card B has a 280 W maximum and a 250 W average. Card A is clearly, unarguably better on power draw. You can't even argue that.

So you proved me wrong by agreeing with me that the XT draws a lot more power. Great, and yes, that's usually the case: you prove me wrong every single time by admitting that everything I said is absolutely the case. GJ, keep it up.

Twice the memory, for longevity.

The 3080 10GB is already in trouble for 3440x1440 users.

Nvidia's gimping makes people upgrade faster; smart tactic.

You are using a 2060 6GB, which barely does 1080p today; the 2060 "Super" came out for a reason, now with 8GB VRAM :laugh:

You also bought into "Intel 6C/6T is enough", I see. Sadly, 6C/6T chips are choking only a few years on; 6C/12T is the bare minimum for proper gaming, just like 8GB VRAM is the bare minimum for 1440p and up.

AMD generally gives you better longevity than both Nvidia and Intel; wake up and stop being milked so hard.
Yeah, that 6C/12T that AMD launched in 2023 for $350 gives you great longevity over the 14 cores Intel offers. LOL
 
Joined
Oct 6, 2021
Messages
1,605 (1.37/day)
Thanks to Xilinx, AMD has the potential not only to match Nvidia in AI, but also to consume much less power and use less silicon (lower cost).

[attached image: AMD VCK5000 slide]





What is being said is that they don't want to build this into their GPUs and force ordinary users to pay a lot more for something that can be adapted to run on regular shaders.
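For what it's worth, the "regular shaders" point is that inference workloads are dominated by matrix multiplies, which generic shader ALUs can execute, just with less throughput than dedicated matrix units. A minimal sketch of such a layer, with NumPy standing in for shader code (illustrative only, not any actual AMD or Nvidia code path):

Code:
import numpy as np

# One fully connected layer with ReLU: the core operation is a matrix multiply,
# which can run on general-purpose shader ALUs or, faster, on matrix units.
def dense_relu(x, weights, bias):
    return np.maximum(x @ weights + bias, 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 256), dtype=np.float32)    # input activations
w = rng.standard_normal((256, 128), dtype=np.float32)  # layer weights
b = np.zeros(128, dtype=np.float32)

print(dense_relu(x, w, b).shape)  # (1, 128)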
 
Joined
Jan 8, 2017
Messages
9,505 (3.27/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
Card A has a 400 W maximum power draw and a 200 W average; card B has a 280 W maximum and a 250 W average.
That doesn't happen in the real world. Both AMD and Nvidia have very strict power limits; the 7900 XT has a 300 W TBP limit, and its average and maximum power draw are, surprise, surprise, about the same. Matter of fact, both the 4070 Ti and the 7900 XT have maximum and average power readings close to their respective limits.

Actually, if you'd use your head for a second you'd realize that what you're saying is complete nonsense anyway. A GPU is typically facing 100% utilization; it makes no sense that a GPU with, say, a 300 W power limit would ever average out at 200 W with 400 W maximum readings. It just wouldn't happen. As usual, your complete lack of understanding of how these things work prevents you from ever making a coherent point.

But as I keep saying, none of that matters; you're just wrong, it doesn't use 30% more power. Do you not know how to read, or are you purposely ignoring this?

So you proved me wrong by agreeing with me that the XT draws a lot more power.
Completely delusional.
 
Joined
Jun 14, 2020
Messages
3,530 (2.14/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
If you had some technical knowledge on the matter, you'd understand how this works.

The 4070 Ti has less memory bandwidth but a lot more L2 cache. L2 is going to be faster than the L3 AMD has on its GPUs, but AMD also increased the memory bandwidth this generation. In other words, bandwidth absolutely does matter; that's why Nvidia had to increase the L2 cache in the first place. However, more cache is not a complete substitute for VRAM bandwidth: the 4070 Ti loses more ground at 4K than it does at lower resolutions, and the only explanation for that is, in fact, the lack of memory bandwidth, and possibly memory capacity as well, depending on the game.

That doesn't happen in the real world. Both AMD and Nvidia have very strict power limits; the 7900 XT has a 300 W TBP limit, and its average and maximum power draw are, surprise, surprise, about the same. Matter of fact, both the 4070 Ti and the 7900 XT have maximum and average power readings close to their respective limits.

Actually, if you'd use your head for a second you'd realize that what you're saying is complete nonsense anyway. A GPU is typically facing 100% utilization; it makes no sense that a GPU with, say, a 300 W power limit would ever average out at 200 W with 400 W maximum readings. It just wouldn't happen. As usual, your complete lack of understanding of how these things work prevents you from ever making a coherent point.

But as I keep saying, none of that matters; you're just wrong, it doesn't use 30% more power. Do you not know how to read, or are you purposely ignoring this?


Completely delusional.
Absolutely wrong. My 4090 has a power limit of 520 watts. It can supposedly draw that much, but the average in games is way lower than that. And of course that insane power draw on the 7900 XT while just watching videos is irrelevant to you. 400%, four times as much power to play a YouTube video, no biggie I guess, lol.
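"N times as much" and "N hundred percent more" aren't the same thing, which is part of why this argument keeps going in circles. A quick illustration with round, made-up playback wattages (check TPU's chart for the real figures):

Code:
# "4 times as much" vs "400% more" are different claims.
# The wattages below are round hypothetical examples, not TPU measurements.
playback_a = 12.0  # W, card A during video playback (assumption)
playback_b = 48.0  # W, card B during video playback (assumption)

times_as_much = playback_b / playback_a                      # 4.0
percent_more = (playback_b - playback_a) / playback_a * 100  # 300.0
print(f"{times_as_much:.0f}x as much = {percent_more:.0f}% more")
# 4x as much power is 300% more; "400% more" would mean 5x as much.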
 
Joined
Sep 10, 2015
Messages
530 (0.16/day)
System Name My Addiction
Processor AMD Ryzen 7950X3D
Motherboard ASRock B650E PG-ITX WiFi
Cooling Alphacool Core Ocean T38 AIO 240mm
Memory G.Skill 32GB 6000MHz
Video Card(s) Sapphire Pulse 7900XTX
Storage Some SSDs
Display(s) 42" Samsung TV + 22" Dell monitor vertically
Case Lian Li A4-H2O
Audio Device(s) Denon + Bose
Power Supply Corsair SF750
Mouse Logitech
Keyboard Glorious
VR HMD None
Software Win 10
Benchmark Scores None taken
AMD:
AMD said that it didn't believe that image processing and performance-upscaling is the best use of the AI-compute resources of the GPU

Also AMD:​

AMD FidelityFX™ Super Resolution (FSR) uses cutting-edge upscaling technologies to help boost your framerates in select titles and deliver high-quality, high-resolution gaming experiences, without having to upgrade to a new graphics card.



I assume they mean they no longer want us to have high quality without having to upgrade to a new GPU. From a sales perspective, it makes sense.

FSR does not use AI-processing resources.

You're welcome.
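Right: FSR 1.x is a hand-tuned spatial upscaler and FSR 2.x is temporal; neither needs dedicated AI hardware. As a trivial illustration that upscaling as such doesn't require ML, here is a plain bilinear upscale in NumPy; this is not FSR's actual EASU/RCAS algorithm, just the general idea of spatial upscaling:

Code:
import numpy as np

def bilinear_upscale(img, scale):
    """Naive bilinear upscale of an (H, W, C) float image. Illustrative only."""
    h, w, _ = img.shape
    new_h, new_w = int(h * scale), int(w * scale)
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = (ys - y0)[:, None, None], (xs - x0)[None, :, None]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

frame = np.random.rand(540, 960, 3).astype(np.float32)  # frame "rendered" at 960x540
print(bilinear_upscale(frame, 2.0).shape)                # (1080, 1920, 3)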
 
Joined
Dec 1, 2022
Messages
254 (0.34/day)
Of course maximum power draw is absolutely useless. Card A has a 400 W maximum power draw and a 200 W average; card B has a 280 W maximum and a 250 W average. Card A is clearly, unarguably better on power draw. You can't even argue that.

So you proved me wrong by agreeing with me that the XT draws a lot more power. Great, and yes, that's usually the case: you prove me wrong every single time by admitting that everything I said is absolutely the case. GJ, keep it up.


Yeah, that 6C/12T that AMD launched in 2023 for $350 gives you great longevity over the 14 cores Intel offers. LOL
The gaming power draw of the 7900 XT is 36 W more, not the massive amount you claim; a card that has more VRAM, higher bandwidth, and is faster naturally draws a bit more power.
I'm not sure why you even brought up CPUs, but launch prices are pointless; only people who always buy the latest thing care about launch prices. The 7600X has 6 performance cores and now sells for less than $250, while Intel is still charging over $300 for 6 performance cores, and you also get fewer upgrades from an Intel board.
 
Joined
Jan 8, 2017
Messages
9,505 (3.27/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
My 4090 has a power limit of 520 watts. It can supposedly draw that much, but the average in games is way lower than that.
Then it's not utilized 100%, it's as simple as that.

If you have a 300 W average, a 300 W limit and a 400 W maximum, then the contribution of that 400 W maximum figure to the average is basically none whatsoever, and the power limit is doing its job as it's supposed to. That's why your example is dumb and nonsensical; this isn't a matter of opinion, you just don't know the math.

There isn't a single card on those charts with a disparity between average and maximum that big. Nonetheless, knowing the maximum is still worthwhile; for instance, it's useful when choosing a power supply. You think it's useless because you simply don't know what you're talking about.
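To put numbers on the average-versus-maximum point: with a hard board power limit, short transients above the limit contribute almost nothing to the average, so a fully loaded card averages near its limit. A toy trace with made-up figures, just to show the arithmetic:

Code:
# Toy trace: a card held at a 300 W limit with a brief 400 W transient every 6 s.
# All figures are invented purely to illustrate the averaging math.
samples_per_s = 100                     # 10 ms samples
trace = [300.0] * (60 * samples_per_s)  # one minute pinned at the power limit
for i in range(0, len(trace), 6 * samples_per_s):
    trace[i] = 400.0                    # a single 10 ms spike every 6 seconds

average = sum(trace) / len(trace)
print(f"max = {max(trace):.0f} W, average = {average:.2f} W")
# -> max = 400 W, average = 300.17 W: the spikes barely move the average.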
 
Joined
Mar 24, 2012
Messages
533 (0.11/day)
Who really cares; they're both right and wrong. Besides upscaling, the ML hardware accelerators really are worthless in the consumer space, and at the same time they won't be used for anything else any time soon.




You're both beyond utterly wrong though; over 3 billion in revenue is not insignificant by any stretch of the imagination.
[attached chart: Nvidia revenue figures]


They've always made a huge chunk of their money from consumer products. Sadly for Nvidia, marketing and sponsorship deals don't work very well outside of the consumer market. You can't buy your way to success as easily, and you actually have to provide extremely competitive pricing, because ROI is critical to businesses, as opposed to regular consumers, so you can't just price everything to infinity and provide shit value.

That is from 2021, when gaming GPU sales were being boosted significantly by crypto. Look at Nvidia's numbers for Q3 2022: gaming sales are only half of that. Gaming contributes less and less toward Nvidia's revenue.
 
Joined
Nov 10, 2020
Messages
21 (0.01/day)
Processor Core i9 10900k @ 5.1 Ghz
Motherboard Asus ROG Strix Z490-E
Cooling DH15 3x fans
Memory 4x16 Crucial 3600 CL16
Video Card(s) 3090 FE
Storage 2x 970 Evo Plus 2Tb + WD Gold 14Tb 2x
Display(s) Dell AW3418DW + BenQ PD2700Q
Case BeQuiet Dark Pro 900v² + 2 fans
Audio Device(s) Ext
Power Supply Be Quiet Dark Power 13 1000w
Mouse Zowie FK2 for Quake/Logitech G600 + Artisan Hien soft XL
I'm not sure I understand what this means.
He said "with the company introducing AI acceleration hardware with its RDNA3 architecture, he hopes that AI is leveraged in improving gameplay—such as procedural world generation, NPCs, bot AI, etc; to add the next level of complexity; rather than spending the hardware resources on image-processing."
Isn't that something that's up to the game and the CPU entirely?
Does it mean that they will now go for a more brute-force approach, like full raster performance, instead of going down the AI path like Nvidia?
I'm confused.
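As I read the quote, "procedural world generation" just means generating content at runtime from rules or noise rather than hand-authoring it; today that typically runs on the CPU or in regular shaders, and the idea would be to drive it with the GPU's AI hardware instead. A toy, non-AI heightmap generator purely as an illustration of the kind of workload being talked about (my own sketch, not anything AMD has shown):

Code:
import numpy as np

# Toy "procedural world generation": a heightmap from a few octaves of random noise.
# Real engines use Perlin/Simplex noise, erosion passes, etc.; this just shows the idea.
def toy_heightmap(size=128, octaves=4, seed=0):
    rng = np.random.default_rng(seed)
    height = np.zeros((size, size))
    for o in range(octaves):
        cells = 2 ** (o + 2)                           # coarse-to-fine grids
        coarse = rng.random((cells, cells))
        layer = np.kron(coarse, np.ones((size // cells, size // cells)))
        height += layer / (2 ** o)                     # finer octaves weigh less
    return height / height.max()

terrain = toy_heightmap()
print(terrain.shape, round(float(terrain.mean()), 3))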
 
Joined
May 31, 2016
Messages
4,440 (1.42/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 32GB G.Skill Flare X 3200 CL14 @3800Mhz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1v@2400Mhz
Storage M.2 Samsung 970 Evo Plus 500MB/ Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtec 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
Then it's not utilized 100%, it's as simple as that.

If you have a 300 W average, a 300 W limit and a 400 W maximum, then the contribution of that 400 W maximum figure to the average is basically none whatsoever, and the power limit is doing its job as it's supposed to. That's why your example is dumb and nonsensical; this isn't a matter of opinion, you just don't know the math.

There isn't a single card on those charts with a disparity between average and maximum that big. Nonetheless, knowing the maximum is still worthwhile; for instance, it's useful when choosing a power supply. You think it's useless because you simply don't know what you're talking about.
Next thing, he's going to tell you he uses Vsync or a frame cap at 60. I've seen those users who claim the 4090 is very efficient and uses very little power with Vsync enabled or a frame cap; then they measure power consumption and, according to their calculation, it's very efficient. Utter crap, but it is what it is. Countless posts like that everywhere.
Or even better: downclock it to 2000 MHz and then measure. But when they check how fast it can render, then obviously there are no limits; yet they don't bring up the power consumption then, since it's suddenly irrelevant. :laugh:
 
Joined
May 13, 2015
Messages
632 (0.18/day)
Processor AMD Ryzen 3800X / AMD 8350
Motherboard ASRock X570 Phantom Gaming X / Gigabyte 990FXA-UD5 Revision 3.0
Cooling Stock / Corsair H100
Memory 32GB / 24GB
Video Card(s) Sapphire RX 6800 / AMD Radeon 290X (Toggling until 6950XT)
Storage C:\ 1TB SSD, D:\ RAID-1 1TB SSD, 2x4TB-RAID-1
Display(s) Samsung U32E850R
Case be quiet! Dark Base Pro 900 Black rev. 2 / Fractal Design
Audio Device(s) Creative Sound Blaster X-Fi
Power Supply EVGA Supernova 1300G2 / EVGA Supernova 850G+
Mouse Logitech M-U0007
Keyboard Logitech G110 / Logitech G110
Let me guess, when AMD introduced a 128MB stack of L3 ("Infinity Cache") to cushion the reduction in bus width and bandwidth, you hailed it as a technological breakthrough.
When Nvidia does the exact same thing with 48MB/64MB/72MB L2, you consider it "making the wrong bet". Okay.



In case you haven't noticed, a 530mm^2 aggregation of chiplets and an expensive new interconnect didn't exactly pass along the savings to gamers any more than Nvidia's 295mm^2 monolithic product did.
False arguments used: strawman, loaded question, black-or-white, ambiguity.

  • Strawman: presenting AMD's L3 cache as the same thing as Nvidia's L2.
  • Loaded question: claiming that I somehow "hailed" a performance enhancement, to make me appear to either blindly support it or shy away and thereby "retract" the statement about Nvidia's L2 cache.
  • Black-or-white: implying that my criticisms of Nvidia automatically make me blindly agree with any of AMD's actions.
  • Ambiguity: ignoring that AMD's L3 cache is cheaper to implement than Nvidia's L2 cache, which uses direct die space.
In short: AMD's implementation is more generic and better suited for long-term performance, whereas Nvidia's is a knee-jerk reaction to do everything it can to have the "best of the best", playing off the mindless sports mentality of "team green" versus "team red".
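On the cost point specifically: Navi 31 puts its Infinity Cache on separate 6 nm MCDs while the compute die stays on 5 nm, whereas AD104's L2 sits on the same leading-edge die. A back-of-the-envelope comparison with entirely hypothetical per-mm² prices (real wafer and packaging costs aren't public):

Code:
# Rough sketch: SRAM on a cheaper trailing node (chiplet) vs on the leading-edge die.
# Every number below is a made-up placeholder; actual pricing is not public.
sram_area_mm2 = 80        # hypothetical total cache + PHY area
cost_5nm_per_mm2 = 0.17   # hypothetical $/mm^2, leading-edge node
cost_6nm_per_mm2 = 0.10   # hypothetical $/mm^2, trailing node
packaging_overhead = 3.0  # hypothetical extra cost for the chiplet packaging

on_die_cost = sram_area_mm2 * cost_5nm_per_mm2
chiplet_cost = sram_area_mm2 * cost_6nm_per_mm2 + packaging_overhead
print(f"on-die: ${on_die_cost:.2f}, chiplet: ${chiplet_cost:.2f}")
# Whether the chiplet route actually wins depends on the packaging overhead,
# which is exactly the number nobody outside AMD has.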
 