Wednesday, August 28th 2024

AMD RDNA 4 GPU Memory and Infinity Cache Configurations Surface

AMD's next-generation RDNA 4 graphics architecture will see the company focus on the performance segment of the market. The company is rumored not to be making an RDNA 4 successor to the enthusiast-segment "Navi 21" and "Navi 31" chips, and will instead focus on improving performance and efficiency in the highest-volume segments, much like the original RDNA-powered generation, the Radeon RX 5000 series. Two chips in the new RDNA 4 generation have hit the rumor mill: the "Navi 48" and the "Navi 44." The "Navi 48" is the faster of the two, powering the top SKUs in this generation, while the "Navi 44" is expected to be the mid-tier chip.

According to Kepler_L2, a reliable source of GPU leaks, and VideoCardz, which connected the tweet to the RDNA 4 generation, the top "Navi 48" silicon is expected to feature a 256-bit wide GDDR6 memory interface, so there is no upgrade to GDDR7. The top SKU based on this chip, the "Navi 48 XTX," will feature a memory speed of 20 Gbps, for 640 GB/s of memory bandwidth. The next-best SKU, codenamed "Navi 48 XT," will feature a slightly lower 18 Gbps memory speed on the same bus width, for 576 GB/s of memory bandwidth. The "Navi 44" chip has a respectable 192-bit wide memory bus, and its top SKU will feature a 19 Gbps memory speed, for 456 GB/s of bandwidth on tap.
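For reference, peak memory bandwidth is simply the bus width in bytes multiplied by the per-pin data rate. A minimal Python sketch reproducing the rumored figures (the function is ours, purely illustrative):

```python
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bytes times per-pin data rate."""
    return (bus_width_bits / 8) * data_rate_gbps

# Rumored RDNA 4 configurations from this article:
print(peak_bandwidth_gb_s(256, 20))  # "Navi 48 XTX": 640.0 GB/s
print(peak_bandwidth_gb_s(256, 18))  # "Navi 48 XT": 576.0 GB/s
print(peak_bandwidth_gb_s(192, 19))  # "Navi 44" top SKU: 456.0 GB/s
```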
Another set of rumors from the same sources points to the Infinity Cache sizes of these chips. "Navi 48" comes with 64 MB, available on both the "Navi 48 XTX" and the "Navi 48 XT," while the "Navi 44" silicon comes with 48 MB. We are hearing from multiple sources that the "Navi 4x" GPU family will stick to traditional monolithic silicon designs, and not venture into chiplet disaggregation the way the company did with "Navi 31" and "Navi 32."

Yet another set of rumors, these from Moore's Law is Dead, suggests that AMD's design focus with RDNA 4 is to ace performance, performance-per-Watt, and the performance cost of ray tracing in the segments of the market where NVIDIA moves the most volume, if not earns the most margin. MLID points to the likelihood of the ray tracing performance improvements riding on not one but two ray accelerators per compute unit, with a greater degree of fixed-function acceleration for the ray tracing workflow (i.e., less of it delegated to the programmable shaders).
Sources: Kepler_L2 (memory speeds), Wccftech, VideoCardz (memory speeds), Kepler_L2 (cache size), VideoCardz (cache size), Moore's Law is Dead (YouTube)

104 Comments on AMD RDNA 4 GPU Memory and Infinity Cache Configurations Surface

#76
ARF
Tomorrow: So coming 18% below 4090 with a card that costs only 60% as much is failing miserably now?
Yes, but the chiplet design failed, and the Radeon is 20-30% slower than it would be if it were monolithic.
Tomorrow: Show me one AMD card that actually reaches it.
Radeon RX 5700 XT.

overclocking/comments/15yo3ng
#77
las
Tomorrow: So coming 18% below 4090 with a card that costs only 60% as much is failing miserably now?
I have a feeling that even if AMD were faster and cheaper you'd make up some crap about their "faults".

Yes, 4GB was too little. That being said, the 980 Ti was 6GB. Not exactly earth-shattering capacity there either. I guess at that point it was deemed enough.
The 900 series were good cards. They improved over the 700 series on the same node. Unfortunately, this was also the last gen where they allowed BIOS editing. After that they locked it down.

Oh I will wait and see, believe me. AMD's current cards can do RT as well as a 3090 Ti. So you're effectively telling me that the 3090 Ti can't do RT.
AMD even does RT on consoles. Something I thought was impossible this soon in this generation on that hardware.
Like I proved earlier, their FG is pretty good. It's you who keeps on denying reality. Yes, the upscaling part is not as good, but as we've proved already, it does not matter how good it is. As an Nvidia fanboy you can't accept that anyone but Nvidia can be competent or make a competitive product.

Show me one AMD card that actually reaches it. TPU's latest review of the 7900 XTX clearly shows that most cards reach around 80°C: www.techpowerup.com/review/xfx-radeon-rx-7900-xtx-magnetic-air/37.html
All GPUs and CPUs have max temp limits near 100°C or higher. As do capacitors and VRMs - even higher. You using this as some sort of "own" against AMD shows you have zero clue what that number actually represents, and that in the real world no one actually reaches it.

The age-old "AMD hotter/much power" myth refuses to die because dimwits like you don't bother reading a couple of reviews.
4090 hotspot: ~75°C.
7900 XTX hotspot: ~80°C.
Both well within air cooling limits. As for power: 360W. The 4090 uses over 400W. Even the 4080S uses over 300W.
Again, both are acceptable for high-end cards. It's Nvidia who has a 600W BIOS for the 4090 and was planning (and subsequently canceled) a massive cinder-block cooler for its 600W+ monstrosity. But AMD uses 360W - oh noes.

Ah yes. The one using actual, factual sources for its arguments is the fanboy, but the one spewing nonsensical, laughable arguments is not. Sure, sure.

I have already exposed multiple of your lies in this thread. You seem to be well short of "facts" to back up your fanboyish comments here.
Just ten-year-old BS arguments that have since been mostly resolved.

And you don't see the hypocrisy in this statement? You say AMD is hot, power hungry, that its drivers are bad, etc., and then you bring up ATI, who was way worse in those areas. Shows you have zero clue about history.


Wrong again. Idle power especially is higher on all MCM designs due to the energy needed to move data around between dies.
And as was said before - MCM is absolutely about making smaller dies and lowering defect rates.
The 7900XTX doesn't do RT as well as the 3090 Ti LMAO, and Path Tracing completely destroys the 7900XTX.

www.techpowerup.com/review/cyberpunk-2077-phantom-liberty-benchmark-test-performance-analysis/6.html

You sound like a true fanboy, with 500 dollars ready to buy RDNA4 on release.

Sadly it will be another joke release from AMD.

Nothing from AMD will be worth buying until maybe RDNA5: a completely new arch, on 3nm or better, in late 2025 or early 2026.

RDNA4 is nothing but RDNA3 refined, with slightly better RT performance. No one really cares.
#78
Tomorrow
ARF: Yes, but the chiplet design failed, and the Radeon is 20-30% slower than it would be if it were monolithic.
We don't have a monolithic 7900 XTX to compare against. Not sure where this 20-30% number comes from.
Also, if that's correct, then why did the monolithic 7800 XT not outperform this "failed" 7900 XT?
Their performance difference is 30%. If your theory is correct, shouldn't the 7800 XT perform as well as the 7900 XT, because it's monolithic?
ARF: Radeon RX 5700 XT.

overclocking/comments/15yo3ng
Wow, you dug up a five-year-old card. AMD must be doing well if you had to go back five years to find one example.
And if we're talking about old cards then there was the FX 5800 "leaf blower" and the GTX 480 "Jensen's Grill" too.
las: The 7900XTX doesn't do RT as well as the 3090 Ti LMAO, and Path Tracing completely destroys the 7900XTX.

www.techpowerup.com/review/cyberpunk-2077-phantom-liberty-benchmark-test-performance-analysis/6.html

Ah yes, every Nvidia fanboy's favorite tech demo. Well, here's one that is not custom-made for Nvidia's hardware:
4% faster than the 3090. 6% slower than the 3090 Ti on average. Not bad for a card that supposedly can't even do RT.
You also forgot to mention that this test destroys the 4090 itself. 50fps at 1440p? 40fps with PT at 1440p? An unplayable slideshow on a $1700+ card.
But as long as AMD is below 30fps and 10fps respectively in this test, it doesn't really matter to a fanboy, does it?
#79
AusWolf
ARF: If AMD were a normal company, Lisa Su's "head" would have rolled a long time ago, precisely because the GPU department is not delivering.
AMD must be a GPU-centric and GPU-first company in order to generate the money it should.
Stupid, stupid..
What have you been smoking? I want some! :roll:

Despite their GPUs not selling in as great numbers as Nvidia's, AMD is still a profitable company, mainly due to CPUs.

Let's also not forget the fact that a smaller company needs to sell lower quantities to stay profitable. Please don't tell me that your burger van has to compete with McDonald's in terms of sales numbers to stay competitive. :laugh:
las: MCM is about scalability, always has been. AMD said this officially. Usually MCM has better performance per watt; AMD's GPUs don't - their CPUs do.
Better performance per watt is due to the architecture, not to MCM. Otherwise we wouldn't see the 8000G series being as efficient as they are.
las: My CPU's low power consumption is mainly due to low clock speeds; 3D cache is fragile. It has nothing to do with MCM, since it's a single CCD. I wanted the best gaming chip, and sadly for AMD, the 7800X3D beats both the 7900X3D and the 7950X3D here. Dual CCD is just not very good for gaming due to latency issues, and it doesn't help that only one CCD has 3D cache either. The 7900X3D in particular is bad, since it has only 6 cores with 3D cache.
Check your idle power consumption. ;)
las: RDNA4 is nothing but RDNA3 refined, with slightly better RT performance. No one really cares.
What's wrong with that? Why do you think no one cares? I had a 7800 XT, which is a fine card. My only issue was the video playback power consumption; if AMD can improve on that, I'll be interested.
ARF: Yes, but the chiplet design failed, and the Radeon is 20-30% slower than it would be if it were monolithic.
Yes, because the 7600 is clearly 30% faster than the 6600 XT. Oh wait... :slap:
#80
las
Tomorrow: So coming 18% below 4090 with a card that costs only 60% as much is failing miserably now?
The 4090 smashes the 7900XTX in pretty much all new and demanding games, especially when not looking at raster only. The 7900XTX competes with the 4080 at best, and barely; in many games the 7900XTX performs closer to the 4070 Ti/4070 Ti SUPER. TechPowerUp has plenty of game tests showing that.

Let's have a look at their two recent ones:

www.techpowerup.com/review/star-wars-outlaws-fps-performance-benchmark/5.html

www.techpowerup.com/review/black-myth-wukong-fps-performance-benchmark/5.html

4090 absolutely wrecks 7900XTX.

More than 50% faster in pure raster, and way more when adding RT testing. Plus, DLSS/DLAA destroys FSR with ease, and Nvidia Frame Gen is highly superior to AMD Frame Gen too.

In a nutshell, you get what you pay for.

The 4080 uses 300 watts on average in gaming. The 7900XTX uses 360 watts, with custom cards peaking at 400+, which is the same as the 4090 - a card that performs way, way better.

At least AMD fixed the massive power spikes the Radeon 6800/6900 series suffered from.

AusWolf: Check your idle power consumption. ;)
Idle power consumption on AMD is crap, nothing new.

AMD GPUs have much higher power draw than Nvidia in multiple scenarios: idle, multi-monitor, video playback, and more.

Also, AMD GPUs generally suck in a lot of games, especially competitive ones.

AMD GPUs also suck for emulation, in betas, in early access titles, and in less popular games in general. AMD often doesn't have drivers ready for new game launches. Nvidia always has game-ready drivers on day one, often many days before.

So, in the end, you save absolutely nothing buying an AMD GPU when you consider the much lower resale value and higher power draw.

RDNA4 will change nothing.

RDNA5 might, yet it's not even close - 2026, probably.

News: RDNA4 looks to be even more disappointing; the 8700XT is going to be the top card.

videocardz.com/newz/amd-rdna4-radeon-gpus-rumored-to-mirror-rdna1-in-product-positioning

And launch is like half a year away still. Reveal at CES 2025.
#81
AusWolf
las: The 4090 smashes the 7900XTX in pretty much all new and demanding games, especially when not looking at raster only. The 7900XTX competes with the 4080 at best, and barely; in many games the 7900XTX performs closer to the 4070 Ti/4070 Ti SUPER.
A GPU that costs 50-60% more performs better? The outrage! :eek:
las: Idle power consumption on AMD is crap, nothing new.

Are you sure? Look at the 8500G in that chart compared to any other AM5 CPU. ;)

It's only MCM CPUs that suck a lot of power at idle (because the IO die and the Infinity Fabric eat up to 30 W); the 8000G series doesn't have that trouble.
#82
mkppo
GoldenX: That's the minimum it has to do, and if it fails to do it across the whole stack, it's a ridiculous release. The top end is dominated by better products from the competition any way you look at it, and the low end is a sidegrade or outright downgrade. Great job! You just failed the top-end users that aren't married to the brand, and the value and low-budget chasers that are the bulk of your sales.

Gotta love forgetting about the 7600 XT and 7700 XT while at it.
So by that definition, Ada provides no uplift over Ampere because the 4060 sucks royal balls?

No it doesn't, because it's fine as an architecture. Your statement was that RDNA3 provides no uplift over RDNA2, which is false. Also, the 7700 XT is 25% faster than the 6700 XT, but the point is you can't pick a single model and extrapolate that to the whole architecture.
las: Who came up with MCM GPUs, and failed miserably? Yeah, AMD. Going MCM and STILL losing in performance per watt and scalability was an utter fail.
Nvidia beats AMD with ease using monolithic designs; no need to go MCM.

Yeah, AMD used HBM first and failed big time as well. 4GB on the Fury series: DOA before they even launched, and the 980 Ti absolutely wrecked the Fury X. Especially with OC; the 980 Ti gained massive performance there, while the Fury X barely gained 1% and its watt usage exploded. The worst GPU release ever. Lisa Su even called the Fury X an overclocker's dream, which has to be the biggest joke ever. I still laugh hard when I watch the video.

AMD seems to be focusing on CPUs like they should. They are a CPU company first. They barely make a dime on consumer GPUs and target AI and enterprise now, yet Nvidia is king of AI. AMD wants a piece of the pie here; they don't care about gaming GPUs. Which shows. Already below 10% dGPU market share, and their offerings are meh.

RDNA4 will be a joke, just wait and see. AMD spent no money developing it; it's merely an RDNA3 bugfix with improved ray tracing, which is pointless since AMD can't do ray tracing, and FSR/Frame Gen won't help them here either, because it's mediocre as well.

AMD thinks a 110°C hotspot temp is acceptable, so yeah, AMD is hotter and also uses more power. Low demand means low resale value. You save nothing buying an AMD GPU in the end.
MCM was for cost savings due to higher yields; they took a hit to performance in the process, not the other way around. I forget who did the analysis, but they'd easily get another 10% performance at the same power if they didn't go MCM.

HBM for Fury was a necessity due to the power consumption of GCN when scaled to the max. But it's interesting you mention the 9xx generation, because in that generation the 970 was supposed to be the one that crushed the 290X/390X. I have both, and the 290X aged far, far better than the 970. What was once supposed to be the GTX 780's competitor went on to compete with the 780, then the 780 Ti, then the 970, and then the GTX 980. Point being, it's not always the case that there's no point in buying AMD; it just depends on pricing and deals, and regardless of the resale value, you can save a good chunk at times. Not to mention, the 970 specifically sucked when it came to longevity.

What AMD 'thinks' is what they've tested and validated with regard to hotspot temperatures. If a chip runs hotter than another, it makes zero difference, because the heat it dumps is the same whether it's at 40°C or 100°C, as long as the power is the same. If a chip is validated for a certain temperature, there's really no issue with that, as long as it's stable.

Where are you getting your sources about RDNA4 being a bugfix? What bug was there to fix? It's clear you have no idea about RDNA4 and are spouting stuff that doesn't really make sense, but stating it anyway.
#83
AusWolf
mkppo: So by that definition, Ada provides no uplift over Ampere because the 4060 sucks royal balls?

No it doesn't, because it's fine as an architecture. Your statement was that RDNA3 provides no uplift over RDNA2, which is false. Also, the 7700 XT is 25% faster than the 6700 XT, but the point is you can't pick a single model and extrapolate that to the whole architecture.
The topic was comparing architectures, not models. You can only compare RDNA 3 to RDNA 2 by looking at them shader-to-shader. Sure, the 7700 XT is faster than the 6700 XT, but it has more shader cores, too, so it's not a valid comparison. The 7600 vs the 6650 XT, or the 7800 XT vs the 6800, is more like what was discussed above. Interestingly, the x600 cards perform on par, but the 7800 XT is clearly faster while being MCM (maybe due to clock speed differences, maybe it's the architecture, I don't know, but probably the former).
mkppo: MCM was for cost savings due to higher yields; they took a hit to performance in the process, not the other way around. I forget who did the analysis, but they'd easily get another 10% performance at the same power if they didn't go MCM.
I don't think anyone can prove whether there's a performance hit without testing an MCM and a non-MCM version of the same GPU, which pair, unfortunately, doesn't exist.
mkppo: Where are you getting your sources about RDNA4 being a bugfix? What bug was there to fix? It's clear you have no idea about RDNA4 and are spouting stuff that doesn't really make sense, but stating it anyway.
From here: TechPowerUp article (link).
#84
GoldenX
mkppo: So by that definition, Ada provides no uplift over Ampere because the 4060 sucks royal balls?

No it doesn't, because it's fine as an architecture. Your statement was that RDNA3 provides no uplift over RDNA2, which is false.
Ada is an amazing architecture but it's a terrible product line. Anything under the 4070 SUPER is garbage, with the 4070 being mediocre at best.

RDNA3 is neither. It added nothing, regressed mid-range performance and pricing, and is in no way technically superior to the architecture it replaced.

If RDNA4 is more of the same, it has to be a Polaris move, else it will just not sell at all.
#85
AnotherReader
Dr. Dro: Former; they are targeting RTX 4080 performance at the power footprint of a 7800 XT or 4070 Ti.

I think they'll be great cards if they can pull it off and if the price is right, but the high end will go uncontested.
Given the leaked specifications, I believe it will be in the ballpark of the 7900 XT, not the RTX 4080. Ray tracing performance may be in the region of the 7900 XTX or perhaps even higher, but that remains to be seen.
AusWolf: The topic was comparing architectures, not models. You can only compare RDNA 3 to RDNA 2 by looking at them shader-to-shader. Sure, the 7700 XT is faster than the 6700 XT, but it has more shader cores, too, so it's not a valid comparison. The 7600 vs the 6650 XT, or the 7800 XT vs the 6800, is more like what was discussed above. Interestingly, the x600 cards perform on par, but the 7800 XT is clearly faster while being MCM (maybe due to clock speed differences, maybe it's the architecture, I don't know, but probably the former).
I think the architecture and the clock speeds contribute about equally to the performance increase of the 7800 XT over the RX 6800. At stock, the 7800 XT doesn't clock as high as its bigger siblings. Comparing TPU's numbers, at least for Cyberpunk, the 7800 XT doesn't clock much higher than the RX 6800 (an overclocked SKU showed a 1% fps increase over stock). Looking at other reviews, it seems like a 10% clock speed gap at best. That is significantly less than the 21% gap between the two in average fps at 1440p.
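As a rough illustration of that split, here is a back-of-the-envelope Python sketch using the figures above; it assumes performance scales linearly with clock speed, which is only an approximation:

```python
# Rough decomposition of the 7800 XT's lead over the RX 6800,
# using the approximate figures quoted above.
clock_gain = 1.10  # ~10% higher effective clock speed, at best
fps_gain = 1.21    # ~21% higher average fps at 1440p

# Whatever the clocks do not explain is left over for the architecture
# and memory subsystem (assuming linear scaling with clock speed).
arch_gain = fps_gain / clock_gain
print(f"clock contribution: ~{(clock_gain - 1) * 100:.0f}%")        # ~10%
print(f"architecture contribution: ~{(arch_gain - 1) * 100:.0f}%")  # ~10%
```

The residual ~10% is why the two factors look like roughly equal contributors.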
#86
mkppo
GoldenX: RDNA3 is neither. It added nothing, regressed mid-range performance and pricing, and is in no way technically superior to the architecture it replaced.
But it is technically superior to the arch it replaced, and the performance numbers are there to back it up.

It might not have been as good as you'd hoped, but it's pointless to keep saying 'it adds nothing, not superior in any way' as that's just incorrect.
#87
GoldenX
A smaller node with the same die size MUST be faster. Besides that, nothing was gained.
#88
mkppo
GoldenX: A smaller node with the same die size MUST be faster. Besides that, nothing was gained.
Yeah, but the arch is also faster clock for clock and has more features to boot. What's this 'nothing' of yours?
#89
GoldenX
Yep the products under the top end are a sidegrade.

Only AMD can look at the 4060 and say "I'll do worse, hold my beer"
#90
Dr. Dro
mkppo: Yeah, but the arch is also faster clock for clock and has more features to boot. What's this 'nothing' of yours?
Dude, there is absolutely nothing on the table for RDNA 2 owners with RDNA 3. That's @GoldenX's primary point. He owns a Radeon; he's not a hater. RDNA 3 simply didn't pan out. One could even argue that, at a technical level, there are respects in which it is well thought out and decently architected, but it blew up on the field, and it just doesn't measure up to Ada Lovelace as an architecture. AMD's luck is that the RTX 40 series is so badly fragmented and positioned so horribly in the market ladder, with obnoxious launch pricing, that it allowed them to fill in the blanks - if Ada were cheap and the RTX 4090 sold for $999, the 4080 for $650, and the 4070 for $500, there wouldn't be a Radeon dGPU division left today.

All improvements are nominal, and wherever it's a concern, the architectural regressions are very much real. The fact that the 6900 XT is generally outperformed by the 7900 XTX is merely due to the scale of the 7900 XTX - which was clearly and painfully obviously designed to be a competitor to the RTX 4090. The 384-bit bus, the six MCDs, the high clocks - it was all obviously targeted at the 4090, until something went wrong along the way and they just didn't get the core and clock scaling they wanted out of the processor. At that point, fine, just lower the TGP and release it anyway - the thing is that nobody was expecting the 4090 to be that powerful despite it being cut-down hardware. You should remember that the 4090 and 4080 launched first; in that situation all AMD could do was dig into their margins and release the product line with the flagship targeting the RTX 4080, which is a smaller processor with a cheaper BoM.

Think: why were the MSRPs for the 7900 XTX at $999 and the 7900 XT at $900? Because they were both meant to be higher, and they just weren't sure how much cheaper they could make them at the time. The 7900 XT at $900 was probably one of the worst value-for-money GPUs in recent history, and that's including the RTX 4080 at $1200. This is all before we even dig into the ever-problematic driver situation, the fact that Nvidia simply gives you more on that front for your investment, etc.
#91
AusWolf
Dr. Dro: The fact that the 6900 XT is generally outperformed by the 7900 XTX is merely due to the scale of the 7900 XTX - which was clearly and painfully obviously designed to be a competitor to the RTX 4090.
If AMD originally targeted the 7900 XTX against the 4090, then how do you think they still make a profit on it today when it's slowly inching towards the £900 mark? What you're saying is speculation at best.
#92
GoldenX
The report is out, Radeon is losing money, and RDNA3 doesn't sell.
#93
AusWolf
GoldenX: The report is out, Radeon is losing money, and RDNA3 doesn't sell.
That's because of the product itself (mainly due to RDNA 2 having been successful, and 3 offering not much on top), and not its pricing. That is, they're losing money on the whole lot, not on individual units sold.
#94
Dr. Dro
AusWolf: If AMD originally targeted the 7900 XTX against the 4090, then how do you think they still make a profit on it today when it's slowly inching towards the £900 mark? What you're saying is speculation at best.
The last earnings call stated that Radeon is currently AMD's lowest-performing business.

The entire point is that Nvidia's margins on the RTX 4090 are extreme - they're probably taking $1200+ per card sold. That's something previously unheard of in consumer-grade products. If there were any real pressure, Nvidia could still make an insane amount of money by dropping the recommended price of the 4090 by $800, $1000 even. I sincerely don't think it costs Nvidia more than $500 to manufacture an RTX 4090.
#95
AusWolf
Dr. Dro: The last earnings call stated that Radeon is currently AMD's lowest-performing business.
That may very well be due to the low number of units sold, not necessarily their price.
Dr. Dro: The entire point is that Nvidia's margins on the RTX 4090 are extreme - they're probably taking $1200+ per card sold.
Still speculation, but plausible.
#96
las
The newest RDNA4 leaks make it look like an utter failure, not bringing anything new to the table. Let's hope RDNA5 won't disappoint as much.

AMD probably spent sub-1% of its R&D funds developing it. DOA release.

Even the AMD-biased YouTubers are disappointed :laugh:

RDNA5 can't come soon enough.
#97
mkppo
Dr. Dro: Dude, there is absolutely nothing on the table for RDNA 2 owners with RDNA 3. That's @GoldenX's primary point.
At this point we have to agree to disagree, because what I was arguing about is the claim that there's 'absolutely nothing' that RDNA3 brings over RDNA2 (no, the 7600 doesn't matter, because it's a gimped RDNA3), which I believe isn't true. Clock for clock there are decent gains over the previous gen, especially in RT but also in raster, and coupled with a bigger chip, a greater-than-40% performance increase over the previous flagship isn't what I would call nothing. It's that performance figure that almost made me upgrade to it, but I skipped both it and the 4090 altogether.

Yes, I know there are things that didn't meet AMD's expectations internally, but they absolutely weren't drastically off. But the 7900XTX wasn't meant to be a 4090 competitor. None of the things you said matter - the 384-bit bus was a necessity to feed the cores and deal with the reduced cache (as tested). High clocks weren't really much higher than RDNA2 at all - pretty close, actually. These two things are standard architectural progressions and have nothing to do with the 4090. It's been a while and I can't remember which interview it was, but AMD knew they were incurring a performance penalty by not going monolithic (and by reducing the cache, because it's on the MCDs now). This was done for yields, but also as an experiment or a 'tech demo' of sorts to see how far they could push the links between the dies, and it wasn't an easy feat. The fact that they were intentionally taking a performance hit for yields shows they weren't really targeting the 4090, because they knew full well they needed every % they could get in order to compete with it.

There were a few other reasons as well: the dual-issue SIMDs they introduced in RDNA3 can't extract much performance yet because of quite a few bottlenecks, but there's potential. The best way to improve that? Releasing hardware with the functionality, which is exactly what they've done.

I think this article outlines most of the changes in detail when going from RDNA2 to RDNA3. If you look closely, the 7900XTX's GCD+MCDs have about the same transistor count as the 4080 at slightly higher transistor density. I don't think AMD was thinking 'hey, let's compete with the 4090 with the same transistor budget as the 4080 while also taking a hit by going chiplet'. The conclusion sort of alludes to the same; have a read.
las: The newest RDNA4 leaks make it look like an utter failure, not bringing anything new to the table. Let's hope RDNA5 won't disappoint as much.
I think at this point you'd do well to stop reading those leaks and wait for the release. None of these leaks are accurate, and you've been told as much earlier, yet you keep spamming the same thing every couple of days like you're a bot.

edit: Since you seem really into leaks, have a read of something that isn't one. You'll at least learn a thing or two, unlike from the silly leaks.
#98
las
mkppo: I think at this point you'd do well to stop reading those leaks and wait for the release.
RDNA4 is merely an RDNA3 bugfix with better RT.

Those silly leaks will be the official specs; you will see in a few months.

I will be very impressed if they even come close to the 7900 XT (while charging 500 dollars tops). Way fewer cores on the top RDNA4 chip and a smaller bus; I bet it will only be around the 7900 GRE.

It will be a forgettable release. Low-to-mid-range focus. Meh.

RDNA5 = a brand-new arch on TSMC 3nm or better. RDNA4 is a stopgap solution and nothing else. Maybe they can take back some market share if the price is low enough. Nothing else is really going to matter. AMD is at like sub-10% dGPU market share now.
#99
mkppo
las: RDNA4 is merely an RDNA3 bugfix with better RT.
Did you click on the link I sent you? What's this bug and what's there to fix?
#100
las
mkppo: Did you click on the link I sent you? What's this bug and what's there to fix?
Google: RDNA4 is merely a bug fix for RDNA3

RDNA4 will bring absolutely nothing new to the table. Let's hope for slightly better perf per watt and per dollar.

RDNA5 is the next brand-new arch, and it's not even close; expect late 2025 or even 2026. It probably won't have high-end SKUs either, though, since AMD has officially left the high-end market. Let's hope they can compete with the 5070, 5070 Ti, and 5080 at least.