
AMD RDNA 4 GPU Memory and Infinity Cache Configurations Surface

Joined
Aug 21, 2013
Messages
1,889 (0.46/day)
Who came up with MCM GPUs, and failed miserably? Yeah, AMD. Going MCM and STILL losing in performance per watt and scalability was an utter fail.
Nvidia beats AMD with ease using monolithic; no need to go MCM.
So coming 18% below 4090 with a card that costs only 60% as much is failing miserably now?
I have a feeling that even if AMD were faster and cheaper you'd make up some crap about their "faults".
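Quick sanity check on that, in Python; both percentages are the round numbers from this thread, so treat the result as illustrative:

```python
# Value check using only the round figures quoted above:
# ~18% slower than an RTX 4090, at ~60% of its price.
rel_perf = 1.00 - 0.18   # 7900 XTX performance relative to the 4090
rel_price = 0.60         # 7900 XTX price relative to the 4090

print(f"Performance per dollar vs the 4090: {rel_perf / rel_price:.2f}x")
# -> ~1.37x: about 37% more performance per dollar, traded for a lower peak
```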
Yeah, AMD used HBM first and failed big time as well. 4GB on the Fury series, DOA before they even launched, and the 980 Ti absolutely wrecked the Fury X. Especially with OC: the 980 Ti gained massive performance while the Fury X barely gained 1% and its power draw exploded. The worst GPU release ever. Lisa Su even called the Fury X an overclocker's dream, which has to be the biggest joke ever. I still laugh hard when I watch the video.
Yes, 4GB was too little. That being said, the 980 Ti was 6GB, not exactly earth-shattering capacity either; I guess at that point it was deemed enough.
The 900 series were good cards. They improved over the 700 series on the same node. Unfortunately, this was also the last gen Nvidia allowed BIOS editing; after this they locked it down.
RDNA4 will be a joke, just wait and see. AMD spent no money developing it; it's merely an RDNA3 bugfix with improved ray tracing, which is pointless since AMD can't do ray tracing, and FSR/Frame Gen won't help them here either, because it's mediocre as well.
Oh, I will wait and see, believe me. AMD's current cards can do RT as well as a 3090 Ti, so you're effectively telling me that the 3090 Ti can't do RT.
AMD even does RT on consoles, something I thought was impossible this soon in the generation on that hardware.
Like I showed earlier, their FG is pretty good; it's you who keeps denying reality. Yes, the upscaling part is not as good, but as we've already covered, that's beside the point here. As an Nvidia fanboy you can't accept that anyone but Nvidia can be competent or make a competitive product.
AMD thinks a 110C hotspot temp is acceptable, so yeah, AMD runs hotter and also uses more power. Low demand means low resale value. You save nothing buying an AMD GPU in the end.
Show me one AMD card that actually reaches it. TPU's latest 7900 XTX review clearly shows that most cards reach around 80C: https://www.techpowerup.com/review/xfx-radeon-rx-7900-xtx-magnetic-air/37.html
All GPUs and CPUs have max temp limits near 100C or higher, as do capacitors and VRMs (theirs are even higher). Using this as some sort of "own" against AMD shows you have zero clue what that number actually represents, and that in the real world no one actually reaches it.

The age-old "AMD runs hotter / uses much more power" myth refuses to die because dimwits like you don't bother reading a couple of reviews.
4090 hotspot: ~75C.
7900 XTX hotspot: ~80C.
Both are well within air-cooling limits. As for power: the XTX draws 360W, the 4090 over 400W, and even the 4080S over 300W (quick perf-per-watt check below).
Again, both are acceptable for high-end cards. It's Nvidia that has a 600W BIOS for the 4090 and was planning (then canceled) a massive cinder-block cooler for its 600W+ monstrosity. But AMD uses 360W - oh noes.
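Here's the efficiency sketch. All inputs are the thread's round numbers (18% performance gap, ~400 W vs 360 W), not measured review data, so purely illustrative:

```python
# Perf-per-watt sketch built only from the round numbers in this thread;
# not measured data, so treat the result as a ballpark.
cards = {
    "RTX 4090":    {"rel_perf": 1.00, "gaming_w": 400},
    "RX 7900 XTX": {"rel_perf": 0.82, "gaming_w": 360},
}
eff = {name: c["rel_perf"] / c["gaming_w"] for name, c in cards.items()}
print(f"4090 efficiency advantage: {eff['RTX 4090'] / eff['RX 7900 XTX']:.2f}x")
# -> ~1.10x with these inputs: a real gap, but hardly "hot and power hungry"
```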
You are the fanboy here, obviously.
Ah yes. The one using actual, factual sources for their arguments is the fanboy, but the one spewing nonsensical, laughable arguments is not. Sure, sure.
Everything I state is fact.
I have already exposed several of your lies in this thread. You seem to be well short of "facts" to back up your fanboyish comments here.
Just ten-year-old BS arguments that have since been mostly resolved.
I use an AMD CPU. Why? Because they make good CPUs. I don't use an AMD GPU. Why? Because their GPUs are crap, worse than ever, pretty much. I miss ATi.
And you don't see the hypocrisy in this statement? You say AMD is hot, power hungry, that its drivers are bad, etc., and then you bring up ATI, which was far worse in all those areas. Shows you have zero clue about history.

Usually MCM has better performance per watt
Wrong again. Idle power especially is higher on all MCM designs, because energy has to be spent moving data between dies.
And as was said before, MCM is absolutely about making smaller dies and getting lower defect rates (see the yield sketch below).
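Since the smaller-dies point keeps getting waved away, a minimal yield sketch using the classic Poisson model, yield = exp(-area x defect density). The defect density is an assumed illustrative value; the GCD/MCD areas are the commonly cited Navi 31 figures, and the monolithic area is a hypothetical equivalent:

```python
import math

# Classic Poisson die-yield model: yield = exp(-area * defect_density).
D0 = 0.1  # defects per cm^2 (assumed illustrative value)

def die_yield(area_mm2: float, d0: float = D0) -> float:
    return math.exp(-(area_mm2 / 100.0) * d0)

for label, area in [("Monolithic ~530 mm^2 (hypothetical)", 530),
                    ("Navi 31 GCD ~300 mm^2", 300),
                    ("Navi 31 MCD ~37 mm^2", 37)]:
    print(f"{label}: {die_yield(area):.1%} of dies defect-free")
# ~59% vs ~74% vs ~96%: smaller dies lose far fewer candidates to random
# defects, which is the cost/yield argument for chiplets in numbers.
```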
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.67/day)
Location
Ex-usa | slava the trolls
So coming 18% below 4090 with a card that costs only 60% as much is failing miserably now?

Yes, but the chiplet design failed, and the Radeon is 20-30% slower than it would be if it were monolithic.

Show me one AMD card that actually reaches it.

Radeon RX 5700 XT.

[Attached screenshot: RX 5700 XT temperature readings]

https://www.reddit.com/r/overclocking/comments/15yo3ng
 

las

Joined
Nov 14, 2012
Messages
1,693 (0.39/day)
System Name Meh
Processor 7800X3D
Motherboard MSI X670E Tomahawk
Cooling Thermalright Phantom Spirit
Memory 32GB G.Skill @ 6000/CL30
Video Card(s) Gainward RTX 4090 Phantom / Undervolt + OC
Storage Samsung 990 Pro 2TB + WD SN850X 1TB + 64TB NAS/Server
Display(s) 27" 1440p IPS @ 360 Hz + 32" 4K/UHD QD-OLED @ 240 Hz + 77" 4K/UHD QD-OLED @ 144 Hz VRR
Case Fractal Design North XL
Audio Device(s) FiiO DAC
Power Supply Corsair RM1000x / Native 12VHPWR
Mouse Logitech G Pro Wireless Superlight + Razer Deathadder V3 Pro
Keyboard Corsair K60 Pro / MX Low Profile Speed
Software Windows 10 Pro x64
AMD's current cards can do RT as well as a 3090 Ti, so you're effectively telling me that the 3090 Ti can't do RT. [...]
The 7900 XTX doesn't do RT as well as the 3090 Ti, LMAO, and path tracing completely destroys the 7900 XTX.

[Embedded benchmark videos]
You sound like a true fanboy, with $500 ready to buy RDNA4 on release.

Sadly, it will be another joke release from AMD.

Nothing from AMD will be worth buying until maybe RDNA5: a completely new arch, on 3nm or better, in late 2025 or early 2026.

RDNA4 is nothing but RDNA3 refined with slightly better RT performance. No one really cares.
 
Joined
Aug 21, 2013
Messages
1,889 (0.46/day)
Yes, but the chiplet design failed, and the Radeon is 20-30% slower than it would be if it were monolithic.
We don't have a monolithic 7900 XTX to compare against, and I'm not sure where this 20-30% number comes from.
For the record, the 7800 XT is chiplet-based too; the only monolithic RDNA3 die is Navi 33 in the 7600 series.
And the 30% performance difference between the 7800 XT and the 7900 XT tracks their compute-unit counts, 60 vs 84, not some chiplet penalty (quick check below).
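The arithmetic; the CU counts are the published specs and the ~30% gap is the figure above:

```python
# Does the 7900 XT vs 7800 XT gap need a "chiplet penalty" to explain it?
cu_7900xt, cu_7800xt = 84, 60     # published compute-unit counts
observed_gap = 1.30               # ~30% faster, per the post above

cu_ratio = cu_7900xt / cu_7800xt  # 1.40x the shader hardware
print(f"CU ratio {cu_ratio:.2f}x, observed {observed_gap:.2f}x "
      f"-> scaling efficiency {observed_gap / cu_ratio:.0%}")
# ~93%: ordinary sub-linear GPU scaling at similar clocks,
# no extra 20-30% penalty required to explain the gap.
```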
Wow, you dug up a five-year-old card. AMD must be doing well if you had to go back five years to find one example.
And if we're talking about old cards, there was also the FX 5800 "leaf blower" and the GTX 480 "Jensen's grill".
The 7900 XTX doesn't do RT as well as the 3090 Ti, LMAO, and path tracing completely destroys the 7900 XTX. [...]
Ah yes, every Nvidia fanboy's favorite tech demo. Well, here's one that isn't custom-made for Nvidia's hardware:

[Embedded benchmark video]

4% faster than the 3090, and 6% behind the 3090 Ti on average. Not bad for a card that supposedly can't even do RT.
You also forgot to mention that this test destroys the 4090 itself: 50 fps at 1440p, and 40 fps with PT at 1440p? That's an unplayable slideshow on a $1700+ card.
But as long as AMD is below 30 fps and 10 fps respectively in this test, it doesn't really matter to a fanboy, does it?
 
Joined
Jan 14, 2019
Messages
12,334 (5.80/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
If AMD were a normal company, Lisa Su's "head" would have rolled a long, long time ago, precisely because the GPU department isn't working.
AMD must be a GPU-centric, GPU-first company in order to generate the money it should.
Stupid, stupid...
What have you been smoking? I want some! :roll:

Despite their GPUs not selling in as great numbers as Nvidia's, AMD is still a profitable company, mainly due to CPUs.

Let's also not forget that a smaller company needs to sell lower quantities to stay profitable. Please don't tell me that your burger van has to match McDonald's sales numbers to stay in business. :laugh:

MCM is about scalability, always has been. AMD said this officially. Usually MCM has better performance per watt; AMD's GPUs don't, but their CPUs do.
Better performance per watt is due to the architecture, not to MCM. Otherwise we wouldn't see the 8000G series being as efficient as they are.

My CPU's low power consumption is mainly due to low clock speeds; 3D cache is fragile. It has nothing to do with MCM, since it's a single CCD. I wanted the best gaming chip, and sadly for AMD, the 7800X3D beats both the 7900X3D and the 7950X3D here. Dual-CCD is just not very good for gaming due to latency issues, and it doesn't help that only one CCD has 3D cache either. The 7900X3D in particular is bad, since it has only 6 cores with 3D cache.
Check your idle power consumption. ;)

RDNA4 is nothing but RDNA3 refined with slightly better RT performance. No one really cares.
What's wrong with that? Why do you think no one cares? I had a 7800 XT, which is a fine card. My only issue was the video playback power consumption; if AMD improves on that, I'll be interested.

Yes, but the chiplet design failed, and the Radeon is 20-30% slower than it would be if it were monolithic.
Yes, because the 7600 is clearly 30% faster than the 6600 XT. Oh wait... :slap:
 

las

Joined
Nov 14, 2012
Messages
1,693 (0.39/day)
So coming 18% below 4090 with a card that costs only 60% as much is failing miserably now? [...]

The 4090 smashes the 7900 XTX in pretty much all new and demanding games, especially when not looking at raster only. The 7900 XTX competes with the 4080 at best, and barely; in many games it performs closer to the 4070 Ti / 4070 Ti SUPER. TechPowerUp has plenty of game tests showing this.

Let's have a look at their two recent ones:

[Links to two recent TPU game performance tests]

The 4090 absolutely wrecks the 7900 XTX.

More than 50% faster in pure raster, and even further ahead with RT testing added. DLSS/DLAA destroys FSR with ease, and Nvidia's Frame Gen is far superior to AMD's Frame Gen too.

In a nutshell, you get what you pay for.

The 4080 uses 300 watts on average in gaming. The 7900 XTX uses 360 watts, with custom cards peaking at 400+, which is 4090 territory, and the 4090 performs way, way better.



At least AMD fixed the massive power spikes the Radeon 6800/6900 series suffered from.



Check your idle power consumption. ;) [...]

Idle power consumption on AMD is crap; nothing new.

AMD GPUs have much higher power draw than Nvidia in multiple scenarios: idle, multi-monitor, video playback, and more.

Also, AMD GPUs generally suck in a lot of games, especially competitive ones.

AMD GPUs also suck for emulation, in betas, in early-access titles, and in less popular games in general. AMD often doesn't have drivers ready when new games launch; Nvidia always has game-ready drivers on day one, often many days before.

So, in the end, you save absolutely nothing buying an AMD GPU when you consider the much lower resale value and higher power draw.

RDNA4 will change nothing.

RDNA5 might, but it's not even close; 2026, probably.

News: RDNA4 looks to be even more disappointing, with the 8700 XT set to be the top card.


And launch is still something like half a year away. Reveal at CES 2025.
 
Joined
Jan 14, 2019
Messages
12,334 (5.80/day)
Location
Midlands, UK
The 4090 smashes the 7900 XTX in pretty much all new and demanding games, especially when not looking at raster only. [...] In a nutshell, you get what you pay for.
A GPU that costs 50-60% more performs better? The outrage! :eek:

Idle power consumption on AMD is crap; nothing new.

Are you sure? Look at the 8500G in that chart compared to any other AM5 CPU. ;)

It's only MCM CPUs that suck a lot of power at idle (the IO die and the Infinity Fabric eat up to 30 W); the 8000G series doesn't have that problem.
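To put that 30 W in perspective, a back-of-the-envelope sketch; the idle hours and electricity price are assumptions, not data from this thread:

```python
# Rough yearly cost of a ~30 W idle-power delta (the IO die + Infinity
# Fabric overhead mentioned above). Hours and price are assumed.
delta_w = 30
idle_hours_per_day = 8   # assumed desktop-idle time
price_per_kwh = 0.30     # assumed currency units per kWh

yearly_kwh = delta_w / 1000 * idle_hours_per_day * 365
print(f"{yearly_kwh:.0f} kWh/year -> {yearly_kwh * price_per_kwh:.2f} per year")
# ~88 kWh, i.e. roughly 26 a year at these assumptions: real money over a
# part's lifetime, but not a deal-breaker on its own.
```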
 
Joined
Oct 30, 2020
Messages
249 (0.17/day)
That's the minimum it has to do, and if it fails to do it on the whole stack, it's a ridiculous release. The top end is dominated by better products from the competition, in any way you look at it, and the low end is a sidegrade or outright downgrade. Great job! You just failed the top end users that aren't married to the brand, and the value and low budget chasers that are the bulk of your sales.

Gotta love forgetting about the 7600 XT and 7700 XT while at it.

So by that definition, Ada provides no uplift over Ampere because the 4060 sucks royal balls?

No, it doesn't, because Ada is fine as an architecture. Your statement was that RDNA3 provides no uplift over RDNA2, which is false. Also, the 7700 XT is 25% faster than the 6700 XT; the point is you can't cherry-pick a single model and extrapolate that to the whole architecture.

Who came up with MCM GPUs, and failed miserably? Yeah, AMD. Going MCM and STILL losing in performance per watt and scalability was an utter fail.
Nvidia beats AMD with ease using monolithic; no need to go MCM.

Yeah, AMD used HBM first and failed big time as well. 4GB on the Fury series, DOA before they even launched, and the 980 Ti absolutely wrecked the Fury X. Especially with OC: the 980 Ti gained massive performance while the Fury X barely gained 1% and its power draw exploded. The worst GPU release ever. Lisa Su even called the Fury X an overclocker's dream, which has to be the biggest joke ever. I still laugh hard when I watch the video.

AMD seems to be focusing on CPUs, like they should; they are a CPU company first. They barely make a dime on consumer GPUs and target AI and enterprise now, yet Nvidia is king of AI. AMD wants a piece of that pie; they don't care about gaming GPUs, which shows. They're already below 10% dGPU market share, and their offerings are meh.

RDNA4 will be a joke, just wait and see. AMD spent no money developing it; it's merely an RDNA3 bugfix with improved ray tracing, which is pointless since AMD can't do ray tracing, and FSR/Frame Gen won't help them here either, because it's mediocre as well.

AMD thinks a 110C hotspot temp is acceptable, so yeah, AMD runs hotter and also uses more power. Low demand means low resale value. You save nothing buying an AMD GPU in the end.

MCM was for cost savings due to higher yields; they took a hit to performance in the process, not the other way around. I forget who did the analysis, but they'd easily get another 10% performance at the same power if they hadn't gone MCM.

HBM on Fury was a necessity due to the power consumption of GCN scaled to the max. But it's interesting you mention the 900 series, because in that generation the 970 was supposed to be the one that crushed the 290X/390X. I have both, and the 290X aged far, far better than the 970. What was once the GTX 780's competitor ended up trading blows with the 780, then the 780 Ti, then the 970, and then the GTX 980. Point being, it's not always the case that there's no point in buying AMD; it depends on pricing and deals, and regardless of resale value you can save a good chunk at times. Not to mention, the 970 specifically sucked when it came to longevity.

What AMD "thinks" is what they've tested and validated with regard to hotspot temperatures. If a chip runs hotter than another, it makes zero difference: the heat it dumps into your room is the same whether the die reads 40°C or 100°C, as long as the power is the same. If a chip is validated for a certain temperature, there's really no issue as long as it's stable.

Where are you getting your sources about RDNA4 being a bugfix? What bug was there to fix? It's clear you have no idea about RDNA4 and are spouting stuff that doesn't really make sense, but stating it anyway.
 
Joined
Jan 14, 2019
Messages
12,334 (5.80/day)
Location
Midlands, UK
So by that definition, Ada provides no uplift over Ampere because the 4060 sucks royal balls?

No, it doesn't, because Ada is fine as an architecture. Your statement was that RDNA3 provides no uplift over RDNA2, which is false. Also, the 7700 XT is 25% faster than the 6700 XT; the point is you can't cherry-pick a single model and extrapolate that to the whole architecture.
The topic was comparing architectures, not models. You can only compare RDNA 3 to RDNA 2 by looking at it shader-to-shader. Sure, the 7700 XT is faster than the 6700 XT, but it has more shader cores, too, so it's not a valid comparison. The 7600 vs 6650 XT, or the 7800 XT vs 6800, is more like what was discussed above. Interestingly, the x600 cards perform on par, but the 7800 XT is clearly faster while being MCM (maybe due to clock speed differences, maybe the architecture; I don't know, but probably the former). A sketch of that normalization follows.
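A minimal sketch of the shader-to-shader normalization. Shader counts and boost clocks are the published specs; the 1.20x relative-performance figure is a placeholder, so substitute TPU's measured numbers before drawing conclusions:

```python
# Normalize relative performance by (shader count x boost clock) to compare
# architectures shader-to-shader. rel_perf is illustrative, not measured.
specs = {
    "RX 6800":    {"shaders": 3840, "boost_mhz": 2105, "rel_perf": 1.00},
    "RX 7800 XT": {"shaders": 3840, "boost_mhz": 2430, "rel_perf": 1.20},
}
base = specs["RX 6800"]
for name, c in specs.items():
    hw = (c["shaders"] * c["boost_mhz"]) / (base["shaders"] * base["boost_mhz"])
    print(f"{name}: {c['rel_perf'] / hw:.2f}x per-shader, per-clock throughput")
# With these inputs the 7800 XT lands at ~1.04x: nearly all of the uplift
# would be clock speed rather than architecture.
```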

MCM was for cost savings due to higher yields, they took a hit to performance in the process and not the other way around. I forgot who did the analysis but they'd easily get another 10% performance with the same power if they didn't go MCM.
I don't think anyone can prove there's a performance hit without testing an MCM and a non-MCM version of the same GPU, a pair which, unfortunately, doesn't exist.

Where are you getting your sources about RDNA4 being a bugfix? What bug was there to fix? It's clear you have no idea about RDNA4 and are spouting stuff that don't really make sense but stating them anyway.
From here: TechPowerUp article (link).
 
Joined
Oct 2, 2015
Messages
3,104 (0.93/day)
Location
Argentina
System Name Ciel / Akane
Processor AMD Ryzen R5 5600X / Intel Core i3 12100F
Motherboard Asus Tuf Gaming B550 Plus / Biostar H610MHP
Cooling ID-Cooling 224-XT Basic / Stock
Memory 2x 16GB Kingston Fury 3600MHz / 2x 8GB Patriot 3200MHz
Video Card(s) Gainward Ghost RTX 3060 Ti / Dell GTX 1660 SUPER
Storage NVMe Kingston KC3000 2TB + NVMe Toshiba KBG40ZNT256G + HDD WD 4TB / NVMe WD Blue SN550 512GB
Display(s) AOC Q27G3XMN / Samsung S22F350
Case Cougar MX410 Mesh-G / Generic
Audio Device(s) Kingston HyperX Cloud Stinger Core 7.1 Wireless PC
Power Supply Aerocool KCAS-500W / Gigabyte P450B
Mouse EVGA X15 / Logitech G203
Keyboard VSG Alnilam / Dell
Software Windows 11
So by that definition, Ada provides no uplift over Ampere because the 4060 sucks royal balls?

No, it doesn't, because Ada is fine as an architecture. Your statement was that RDNA3 provides no uplift over RDNA2, which is false. [...]
Ada is an amazing architecture but it's a terrible product line. Anything under the 4070 SUPER is garbage, with the 4070 being mediocre at best.

RDNA3 is neither: it added nothing, regressed mid-range performance and pricing, and is in no way technically superior to the architecture it replaced.

If RDNA4 is more of the same, it has to be a Polaris move, else it will just not sell at all.
 
Joined
Nov 26, 2021
Messages
1,624 (1.51/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
Former, they are targeting RTX 4080 performance at the power footprint of a 7800 XT or 4070 Ti.

I think they'll be great cards if they can pull it off and if the price is right, but the high end will go uncontested.
Given the leaked specifications, I believe it will be in the ballpark of the 7900 XT, not the RTX 4080. Ray tracing performance may be in the region of the 7900 XTX or perhaps even higher, but that remains to be seen.
Interestingly, the x600 cards perform on par, but the 7800 XT is clearly faster while being MCM (maybe due to clock speed differences, maybe the architecture; I don't know, but probably the former). [...]
I think the architecture and the clock speeds contribute roughly equally to the 7800 XT's performance increase over the RX 6800. At stock, the 7800 XT doesn't clock as high as its bigger siblings. Comparing TPU's numbers, at least for Cyberpunk, the 7800 XT doesn't clock much higher than the RX 6800 (the overclocked SKU gains 1% fps over stock). Looking at other reviews, it seems like a 10% clock-speed gap at best, which is significantly less than the 21% gap between the two in average fps at 1440p; the split is worked out below.
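Working those two figures through (both from the post above); gains multiply rather than add, so the architecture's share is the total divided by the clock gain:

```python
# Split the 7800 XT's uplift over the RX 6800 into clocks vs architecture,
# using the two figures from the post above.
total_gain = 1.21   # ~21% average fps gap at 1440p
clock_gain = 1.10   # ~10% clock-speed gap at best

print(f"Residual (architecture) gain: {total_gain / clock_gain:.2f}x")
# -> ~1.10x: roughly 10% clocks x 10% everything else, which supports
# the "both contribute" reading.
```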
 
Joined
Oct 30, 2020
Messages
249 (0.17/day)
RDNA3 is neither: it added nothing, regressed mid-range performance and pricing, and is in no way technically superior to the architecture it replaced.
But it is technically superior to the arch it replaced, and the performance numbers are there to back it up.

It might not have been as good as you'd hoped, but it's pointless to keep saying 'it adds nothing, not superior in any way' as that's just incorrect.
 
Joined
Oct 2, 2015
Messages
3,104 (0.93/day)
Location
Argentina
A smaller node with the same die size MUST be faster. Besides that, nothing was gained.
 
Joined
Oct 30, 2020
Messages
249 (0.17/day)
A smaller node with the same die size MUST be faster. Besides that, nothing was gained.

Yeah, but the arch is also faster clock for clock and has more features to boot. What's this "nothing" of yours?
 
Joined
Oct 2, 2015
Messages
3,104 (0.93/day)
Location
Argentina
Yep, the products under the top end are a sidegrade.

Only AMD can look at the 4060 and say "I'll do worse, hold my beer"
 
Joined
Dec 25, 2020
Messages
6,596 (4.66/day)
Location
São Paulo, Brazil
System Name "Icy Resurrection"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard ASUS ROG MAXIMUS Z790 APEX ENCORE
Cooling Noctua NH-D15S upgraded with 2x NF-F12 iPPC-3000 fans and Honeywell PTM7950 TIM
Memory 32 GB G.SKILL Trident Z5 RGB F5-6800J3445G16GX2-TZ5RK @ 7600 MT/s 36-44-44-52-96 1.4V
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 500 GB WD Black SN750 SE NVMe SSD + 4 TB WD Red Plus WD40EFPX HDD
Display(s) 55-inch LG G3 OLED
Case Pichau Mancer CV500 White Edition
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Classic Intellimouse
Keyboard Generic PS/2
Software Windows 11 IoT Enterprise LTSC 24H2
Benchmark Scores I pulled a Qiqi~
Yeah, but the arch is also faster clock for clock and has more features to boot. What's this "nothing" of yours?

Dude, there is absolutely nothing on the table for RDNA 2 owners with RDNA 3. That's @GoldenX's primary point. He owns a Radeon; he's not a hater. RDNA 3 simply didn't pan out. One could even argue that at a technical level there are respects in which it is well thought out and decently architected, but it blew up on the field, and it just doesn't measure up to Ada Lovelace as an architecture. AMD's luck is that the RTX 40 series is so badly fragmented and positioned so horribly in the market ladder, with obnoxious launch pricing, that it allowed them to fill in the blanks. If Ada were cheap and the RTX 4090 sold for $999, the 4080 for $650, and the 4070 for $500, there wouldn't be a Radeon dGPU division left today.

All improvements are nominal, and wherever it matters, the architectural regressions are very much real. The fact that the 6900 XT is generally outperformed by the 7900 XTX is merely due to the scale of the 7900 XTX, which was clearly and painfully obviously designed to be a competitor to the RTX 4090. The 384-bit bus, the six MCDs, the high clocks: it was all obviously targeted at the 4090, until something went wrong along the way and they just didn't get the core and clock scaling they wanted out of the processor. From that point, fine, just lower the TGP and release it anyway; the thing is that nobody was expecting the 4090 to be that powerful despite it being cut-down hardware. Remember that the 4090 and 4080 launched first; in that situation, all AMD could do was dig into their margins and release the product line with the flagship targeting the RTX 4080, which is a smaller processor with a cheaper BoM.

Think: why were the MSRPs set at $999 for the 7900 XTX and $900 for the 7900 XT? Because they were both meant to be higher, and AMD just wasn't sure how much cheaper they could make them at the time. The 7900 XT at $900 was probably one of the worst value-for-money GPUs in recent history, and that's including the RTX 4080 at $1200. This is all before we even dig into the ever-problematic driver situation, the fact that Nvidia simply gives you more on that front for your investment, etc.
 
Joined
Jan 14, 2019
Messages
12,334 (5.80/day)
Location
Midlands, UK
Dude, there is absolutely nothing on the table for RDNA 2 owners with RDNA 3. [...] The 384-bit bus, the six MCDs, the high clocks: it was all obviously targeted at the 4090. [...]
If AMD originally targeted the 7900 XTX against the 4090, then how do you think they still make a profit on it today when it's slowly inching towards the £900 mark? What you're saying is speculation at best.
 
Joined
Oct 2, 2015
Messages
3,104 (0.93/day)
Location
Argentina
The report is out, Radeon is losing money, and RDNA3 doesn't sell.
 
Joined
Jan 14, 2019
Messages
12,334 (5.80/day)
Location
Midlands, UK
The report is out, Radeon is losing money, and RDNA3 doesn't sell.
That's because of the product itself (mainly due to RDNA 2 having been successful, and 3 offering not much on top), and not its pricing. That is, they're losing money on the whole lot, not on individual units sold.
 
Joined
Dec 25, 2020
Messages
6,596 (4.66/day)
Location
São Paulo, Brazil
If AMD originally targeted the 7900 XTX against the 4090, then how do you think they still make a profit on it today when it's slowly inching towards the £900 mark? What you're saying is speculation at best.

The last earnings call stated that Radeon is currently AMD's lowest-performing business.

The entire point is that Nvidia's margins on the RTX 4090 are extreme; they're probably taking $1200+ per card sold. That's previously unheard of in consumer-grade products. If there were any real pressure, Nvidia could still make an insane amount of money after dropping the recommended price of the 4090 by $800, or even $1000. I sincerely don't think it costs Nvidia more than $500 to manufacture an RTX 4090.
 
Joined
Jan 14, 2019
Messages
12,334 (5.80/day)
Location
Midlands, UK
The last earnings call stated that Radeon is currently AMD's lowest-performing business.
That may very well be due to the low number of units sold, not necessarily their price.

The entire point is that Nvidia's margins on the RTX 4090 are extreme; they're probably taking $1200+ per card sold. [...]
Still speculation, but plausible.
 

las

Joined
Nov 14, 2012
Messages
1,693 (0.39/day)
The newest RDNA4 leaks make it look like an utter failure, bringing nothing new to the table; let's hope RDNA5 won't disappoint as much.

AMD probably spent sub-1% of its R&D funds developing it. A DOA release.

Even the AMD-biased YouTubers are disappointed :laugh:

RDNA5 can't come soon enough.
 
Joined
Oct 30, 2020
Messages
249 (0.17/day)
Dude, there is absolutely nothing on the table for RDNA 2 owners with RDNA 3. [...] The 384-bit bus, the six MCDs, the high clocks: it was all obviously targeted at the 4090. [...]

At this point we have to agree to disagree, because what I was arguing against is the claim that there's "absolutely nothing" RDNA3 brings over RDNA2 (no, the 7600 doesn't matter, because it's a gimped RDNA3), which I believe is untrue. Clock for clock there are decent gains over the previous gen, especially in RT but also in raster, and coupled with a bigger chip, a greater-than-40% performance increase over the previous flagship isn't what I'd call nothing. That performance figure almost made me upgrade, but I skipped it and the 4090 altogether.

Yes, I know there are things that didn't meet AMD's internal expectations, but they absolutely weren't drastically off. The 7900 XTX wasn't meant to be a 4090 competitor, and none of the things you listed show otherwise: the 384-bit bus was a necessity to feed the cores and compensate for the reduced cache (as tested), and the clocks weren't really much higher than RDNA2's at all, pretty close actually. These are standard architectural progressions and have nothing to do with the 4090. It's been a while and I can't remember which interview it was, but AMD knew they were incurring a performance penalty by not going monolithic (and by reducing the cache, since it sits on the MCDs now). This was done for yields, but also as an experiment, a "tech demo" of sorts, to see how far they could push the links between the dies, and it wasn't an easy feat. The fact that they were intentionally taking a performance hit for yields shows they weren't really targeting the 4090, because they knew full well they'd need every percent they could get to compete with it.

There were a few other reasons as well. For example, the dual-issue SIMDs they introduced in RDNA3 can't extract much performance yet because of quite a few bottlenecks, but there's potential, and the best way to improve the situation is to ship hardware with the functionality in it, which is exactly what they've done. The toy model below shows why the pairing rate is the whole game.
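A toy throughput model of that bottleneck: the doubled FP32 rate only materializes for the fraction of instructions the compiler can actually co-issue. The pairing fractions below are assumptions for illustration, not measured RDNA3 numbers:

```python
# Toy model of dual-issue FP32 throughput. Paired instructions share one
# issue slot; everything else takes one slot each. Fractions are assumed.
def effective_throughput(pairable: float) -> float:
    slots_per_op = pairable / 2 + (1 - pairable)
    return 1 / slots_per_op

for frac in (0.0, 0.3, 0.6, 1.0):
    print(f"pairable {frac:.0%}: {effective_throughput(frac):.2f}x single-issue")
# At 30% pairable you get only ~1.18x, which is why doubled "paper" FLOPS
# showed up as modest real-world gains.
```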

I think this article outlines most of the changes from RDNA2 to RDNA3 in detail. If you look closely, the 7900 XTX's GCD+MCDs have about the same transistor count as the 4080, at slightly higher transistor density. I don't think AMD was thinking "hey, let's compete with the 4090 on the same transistor budget as the 4080 while also taking a hit by going chiplet." The conclusion alludes to the same; have a read.

The newest RDNA4 leaks make it look like an utter failure, bringing nothing new to the table; let's hope RDNA5 won't disappoint as much. [...]

I think at this point it would do you well to stop reading those leaks and wait for the release. None of these leaks is accurate, and you've been told as much earlier, yet you keep spamming the same thing every couple of days like a bot.

Edit: since you seem really into leaks, have a read of something that isn't one. You'll at least learn a thing or two, unlike from the silly leaks.
 

las

Joined
Nov 14, 2012
Messages
1,693 (0.39/day)
I think at this point it would do you well to stop reading those leaks and wait for the release. [...]

RDNA4 is merely an RDNA3 bugfix with better RT.

Those silly leaks will be the official specs; you'll see in a few months.

I will be very impressed if they even come close to the 7900 XT (while charging $500 tops). With way fewer cores on the top RDNA4 chip and a smaller bus, I bet it will only be around 7900 GRE level.

It will be a forgettable release, with a low-to-mid-end focus. Meh.

RDNA5 = brand-new arch on TSMC 3nm or better. RDNA4 is a stop-gap solution and nothing else. Maybe they can take back some market share if the price is low enough; nothing else is really going to matter. AMD is at something like sub-10% dGPU market share now.
 
Joined
Oct 30, 2020
Messages
249 (0.17/day)
RDNA4 is merely an RDNA3 bugfix with better RT. [...]

Did you click on the link I sent you? What's this bug and what's there to fix?
 