Wednesday, August 28th 2024
AMD RDNA 4 GPU Memory and Infinity Cache Configurations Surface
AMD's next-generation RDNA 4 graphics architecture will see the company focus on the performance segment of the market. The company is rumored not to be making an RDNA 4-based successor to the enthusiast-segment "Navi 21" and "Navi 31" chips, and will instead focus on improving performance and efficiency in the highest-volume segments, much like the original RDNA-powered generation, the Radeon RX 5000 series. Two chips in the new RDNA 4 generation have hit the rumor mill: the "Navi 48" and the "Navi 44." The "Navi 48" is the faster of the two, powering the top SKUs in this generation, while the "Navi 44" is expected to be the mid-tier chip.
According to Kepler_L2, a reliable source for GPU leaks, and VideoCardz, which connected the tweet to the RDNA 4 generation, the top "Navi 48" silicon is expected to feature a 256-bit wide GDDR6 memory interface—so there's no upgrade to GDDR7. The top SKU based on this chip, the "Navi 48 XTX," will feature a memory speed of 20 Gbps, for 640 GB/s of memory bandwidth. The next-best SKU, codenamed "Navi 48 XT," will feature a slightly lower 18 Gbps memory speed at the same bus width, for 576 GB/s of memory bandwidth. The "Navi 44" chip has a respectable 192-bit wide memory bus, and its top SKU will feature a 19 Gbps speed, for 456 GB/s of bandwidth on tap.
Another set of rumors from the same sources also points to the Infinity Cache sizes of these chips. "Navi 48" comes with 64 MB of it, which will be available on both the "Navi 48 XTX" and "Navi 48 XT," while the "Navi 44" silicon comes with 48 MB of it. We are hearing from multiple sources that the "Navi 4x" GPU family will stick to traditional monolithic silicon designs, and not venture out into chiplet disaggregation like the company did with the "Navi 31" and the "Navi 32."
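The quoted bandwidth figures follow directly from bus width and effective data rate: peak bandwidth in GB/s equals the bus width in bits, divided by 8, multiplied by the per-pin data rate in Gbps. A minimal Python sketch to sanity-check the rumored numbers (the SKU labels reflect the leak, not confirmed specifications):
```python
# Peak GDDR6 bandwidth: bus width (bits) / 8 bits per byte * effective data rate (Gbps)
def gddr6_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Return peak memory bandwidth in GB/s for a given memory configuration."""
    return bus_width_bits / 8 * data_rate_gbps

print(gddr6_bandwidth_gb_s(256, 20))  # rumored "Navi 48 XTX": 640.0 GB/s
print(gddr6_bandwidth_gb_s(256, 18))  # rumored "Navi 48 XT":  576.0 GB/s
print(gddr6_bandwidth_gb_s(192, 19))  # rumored "Navi 44" top SKU: 456.0 GB/s
```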
Yet another set of rumors, these from Moore's Law is Dead, talks about how AMD's design focus with RDNA 4 will be to ace performance, performance-per-Watt, and the performance cost of ray tracing, in the segments of the market where NVIDIA moves the most volume, if not earns the most margin. MLID points to the likelihood of the ray tracing performance improvements riding on there being not one, but two ray accelerators per compute unit, with a greater degree of fixed-function acceleration for the ray tracing workflow (i.e., less of it will be delegated to the programmable shaders).
Sources:
Kepler_L2 (memory speeds), Wccftech, VideoCardz (memory speeds), Kepler_L2 (cache size), VideoCardz (cache size), Moore's Law is Dead (YouTube)
104 Comments on AMD RDNA 4 GPU Memory and Infinity Cache Configurations Surface
RT in RDNA 3 is still quite meh in my opinion. I hope rumours about doubling the RT cores per shader in RDNA 4 are true, and we'll see some actual uplift that's worth talking about. Yes, that's a pretty poor upgrade from AMD. With that mindset, though, you can pick good and bad examples from practically everywhere.
They have to develop so that the improvements can be seen everywhere, not to be load/app/support dependent.
Though it's still weird if they're not gonna compete in the high end; the 6800/6900 and 7900 series have been great.
Personally, I just call RDNA 3 an RDNA 2 refresh (just like Ada is an Ampere refresh).
bad - bad - bad word -> Mindfactory = racists. Alternate has the same joke, with alternate.at showing different prices to alternate.de. Austria and Germany have a common border in "Central Europe". Different inventory; .de refuses to sell to .at.
Proshop also distinguish between countries.
nbb.com
I'd prefer if you had written that Mindfactory is a retailer in Germany, not Europe.
There are more countries in Europe than just Germany.
RDNA5 is going to be the next somewhat exciting release, but it's not even close: late 2025 or even 2026.
AMD probably spends 2% of its R&D funds on consumer GPUs, tops; they are getting worse and worse, sadly. MCM pretty much failed here. RDNA4 looks to be monolithic.
Ray Tracing should not be a focus for AMD. Improving FSR and Frame Gen should be the prime focus; it's the reason Nvidia cards are selling like hotcakes. DLDSR, DLSS, DLAA, and Frame Gen are the true magic of RTX, not Ray Tracing or Path Tracing, and no, AMD is not even close. I've tried tons of AMD cards, including several SKUs in both the 6000 and 7000 series, and had a 6800XT as my primary card before I got a 4090.
AMD needs to release some market share-grabbing cards, meaning top performance per dollar with FSR and Frame Gen close to or matching Nvidia's solutions. AMD lags behind on all features today. Anyone who denies this has not tried both recently.
www.tomshardware.com/pc-components/gpus/nvidias-grasp-of-desktop-gpu-market-balloons-to-88-amd-has-just-12-intel-negligible-says-jpr
Remove iGPUs and AMD is already below 10% dGPU market share; Nvidia dominates without even trying (AI focus).
I guess AMD and Intel can fight for the low-end dGPU market, maybe mid-range, but the high end was already lost long ago.
Having a halo card certainly has its benefits in terms of marketing and the overall image of a series, but financially it's pretty expensive and getting more expensive every year. I hope so too, but given AMD's recent pricing, they always seem to shoot themselves in the foot with bad pricing and thus negative reviews, only for the price to fall immediately after launch. How are they getting worse and worse? At least they tried. I would not say they failed. They fell short of their own expectations and of those who thought this was going to be a "4090 killer," but they're still competitive in most aspects from the 4080S and down. You speak as if the two are mutually exclusive. They are not. Why can't they improve both RT in hardware and FSR/FG in software at the same time?
It's not like one is taking resources away from the other. Engineers who design RT units in hardware are not coding FSR/FG the next day, and vice versa.
I would also say FSR FG was pretty good right out of the gate (despite being late), with wider compatibility, even with Nvidia's own 20 and 30 series cards, which were deprived of a feature that clearly could have worked on those cards (despite what Nvidia said). Even reviewers critical of FSR's upscaling portion praised FSR FG as nearly indistinguishable from DLSS FG. Nvidia lagged behind in terms of the driver control panel for a long time; AMD's was unified and modern whereas Nvidia's was fragmented and disjointed.
Only last year did they start developing the Nvidia App, which, while still in beta, has shown great progress towards unification.
You also say performance per dollar but then lambast AMD for not releasing a halo card, yet a halo card is almost never top performance per dollar. Nvidia also dominates largely because of "old fat," i.e. older cards like the 30 series, not their latest and greatest.
Clearly you are out of touch with the actual market. I do B2B sales for a living; Nvidia completely crushes AMD in terms of GPU sales. Gaming, AI, enterprise, it doesn't matter, Nvidia is the king.
Techpowerup has like 50+ FSR vs DLSS/DLAA tests and Nvidia wins every time. They also have superior Frame Gen without artifacts and ghosting.
Nvidia has superior drivers, by far. Nvidia runs flawlessly no matter which game you open: early access, betas, emulation, Nvidia does it all without a problem. AMD has wonky drivers, and I know this for sure since I am coming from a 6800XT and have built 100+ mid to high-end rigs in the last 5 years, minimum. 9 out of 10 people want Nvidia, that's the hard reality for you.
AMD GPUs have gotten worse and worse over the last few generations; their focus has shifted away from dGPUs, which shows.
Nvidia dominates because 9 out of 10 want Nvidia, it's as simple as that. Many tried AMD at some point but came rushing back to Nvidia.
AMD is cheaper for a reason. If they were actually good, they would gain market share, not lose it, year after year. They have improved nothing in the last many generations. Rushed features that are cheap knockoffs of Nvidia's tech are what they do.
DLDSR beats VSR
DLSS/DLAA beats FSR
Nvidia Frame Gen beats AMD Frame Gen.
Reflex beats Anti Lag+ (and AL+ got people steam banned haha)
Nvidia have longer support, even GTX 600 series from 15 years ago still get drivers, meanwhile AMD pulled Polaris and Vega support
Nvidia cards can use RT and even Path Tracing
ShadowPlay beats ReLive
Nvidia invented every single feature and AMD tried to copy it, but failed.
Also, AMD uses more power and has lower resale value; you save nothing by going with an AMD GPU in the end.
That's why AMD GPUs are cheaper, and still don't sell.
Exhibit B: Vega 56 to 5700 XT. From $399 to $399 at 121% of the performance. Most specs were downgrades, but performance actually increased.
Both were cases where AMD released mid-range cards after high-end cards failed. Fury failed against the Maxwell-based 900 series and Vega failed against the Pascal-based 10 series.
And history is about to repeat for the third time. But sure, you believe what you want to believe, in the hope that there's no way history would repeat itself so soon, or ever. And is this crushing based exclusively on 4090 sales?
I was not arguing that people don't want, or don't buy, Nvidia.
I was arguing that most people want cheaper cards with better performance, not faster cards at even higher prices. I was not talking about upscaling. I was talking about Frame generation.
From TPU's own conclusion on FSR FG vs DLSS FG: Spoken like a fanboy. Not a single card, no matter how "superior" its drivers are, runs "flawlessly" in every game.
Just open the Nvidia forums and you'll see plenty of people with driver problems. It's true that Nvidia has fewer issues than AMD or especially Intel, but I never claimed otherwise.
Nvidia lists known issues in their driver releases every time, and often they stay there for months on end before finally (I presume) getting fixed.
Nvidia historically has also had worse drivers on Linux. You know, the OS most of the world uses (in enterprise, embedded, smartphones, etc.)?
Only recently have they started to improve their Linux drivers by opening up more previously closed-source code. I too have AMD boxes in addition to Nvidia, and I've yet to see these "wonky drivers" you speak of. Granted, I only use WHQL versions.
I have friends who have AMD cards and they don't complain to me about "wonky drivers".
If you search the internet, there are plenty of driver problems with every product, no matter the manufacturer. Again I ask: how? You speak about drivers; I assume you mean those? Or is it features? Again, spoken like a fanboy failing to see any progress from "cheaper" competitors who no doubt are worse and keep getting worse every year. Keep this positive outlook going, buddy... Have I said they don't?
Most of those features are also exclusive to Nvidia's own cards or even their latest series, screwing over their previous series customers.
Longer support? Looking at the latest drivers, quarterly driver releases for Vega are not "pulling support". This is a myth that started to spread and keeps spreading. People who keep repeating this lie never actually bother to visit AMD's site and check for themselves, because that would be too hard and disrupt their narrative.
Nvidia with their current drivers actively supports 900 series and newer. Released in 2014.
AMD supports 400 series and newer. Released in 2016.
The difference is 10 versus 8 years.
So Nvidia has active support for 2 years more, not 4 years like you claim.
Vega series has very recent drivers from March of this year as does 400/500 series. Only the very old R9 200/300/Fury series are using legacy drivers from a few years back.
R9 200/300/Fury:
Adrenalin 22.6.1 WHQL
Release Date: 2022-06-23
Radeon VII/Vega 56 & 64 + RX 400/500:
Adrenalin 24.3.1 WHQL
Release Date: 2024-03-20
So it seems Nvidia supports their oldest series for up to ten years. Meaning 900 series support will likely be dropped next year. AMD seems to support their older series for 7-8 years.
Next time educate yourself, instead of spouting random nonsense you might have read or heard on the internet.
Hilarious that you say Nvidia can use RT, and AMD can't?
PT is a total non-issue (how many games actually use it?), as even a 4090 struggles with it and needs every performance-enhancing toggle enabled to get playable framerates. People who buy a $1,700+ card to play at 60 fps with upscaling and FG enabled in a handful of games are idiots.
PT is essentially a tech demo of what will one day be possible. Today it's a tech demo. AMD seems to be focusing more on hardware, not software features.
Who came up with MCM GPUs first? Nvidia has not even tried to copy it yet. Arguably they don't need to, but one day they will have to by necessity, as making huge monolithic chips on ever more expensive wafers is a big loss if a die has any defects. They already do it to some degree with Blackwell, where two big dies are joined together by a high-speed interconnect, not too dissimilar to AMD's Infinity Fabric. It's only a matter of time before all three manufacturers move to MCM GPUs, at least for high-end cards.
Who introduced ReBAR first and who copied it?
Historically, AMD has also been the first to use a new generation of VRAM. They did it with GDDR4, they did it with HBM and HBM2, etc.
AMD cards are also more forward-looking (in terms of hardware) with more VRAM out of the box, newer display outputs, hardware scheduling, async compute, etc. Is this the old "AMD is hot and loud" argument again? I thought this had died in the R9 300 era, but apparently not.
I see AMD cards reselling for quite some money. If you were right, I should be able to pick up high-end cards for pennies.
I think they'll be great cards if they can pull it off and if the price is right, but the high end will go uncontested.
So get cracking at it AMD
NVIDIA is expensive, everyone knows that; the point is that AMD copying those prices without offering anything in return made RDNA3 a terrible release.
Leaving the RT discussion aside, AMD is at a disadvantage in encoding and decoding, compute software quality, stability and hardware support for it, a tensor-core equivalent and the software that takes advantage of it, system stability (particularly in Linux, with the lack of hardware support for GPU resets), etc.
It's a product that can only game well; it's lower quality at anything else, and that merits a lower price. AMD themselves know this, so this is how they intend to tackle the problem.
For all of Nvidia's faults... it could be a lot worse. Their competition is completely misguided as usual, and Arc isn't quite there yet. Things will heat up once BMG arrives, but Alchemist is a done deal at this point.
Boring release yada yada that's fine. But it offers a substantial uplift at the top end
Nvidia beats AMD with ease using monolithic designs; no need to go MCM.
Yeah, AMD used HBM first and failed big time as well: 4 GB on the Fury series, DoA before they even launched, and the 980 Ti absolutely wrecked the Fury X. Especially with OC, the 980 Ti gained massive performance while the Fury X barely gained 1% and its power usage exploded. The worst GPU release ever. Lisa Su even called Fury X an overclocker's dream, which has to be the biggest joke ever. I still laugh hard when I watch the video.
AMD seems to be focusing on CPUs, like they should. They are a CPU company first. They barely make a dime on consumer GPUs and target AI and enterprise now, yet Nvidia is the king of AI. AMD wants a piece of the pie here; they don't care about gaming GPUs, which shows. Already below 10% dGPU market share, and their offerings are meh.
RDNA4 will be a joke, just wait and see. AMD spent no money developing it; it's merely an RDNA3 bugfix with improved ray tracing, which is pointless since AMD can't do ray tracing, and FSR/Frame Gen won't help them here either, because they're mediocre as well.
AMD thinks a 110C hot spot temp is acceptable, so yeah, AMD runs hotter and also uses more power. Low demand means low resale value. You save nothing buying an AMD GPU in the end.
You are the fanboy here, obviously. Everything I state is fact. AMD's features are mediocre, AMD's drivers are wonky, game support is meh. AMD spends most of their time improving performance in games that get benchmarked, so they look decent in reviews; that's why most early access games, betas, and just less popular games in general tend to run like crap on AMD GPUs. Zero focus from AMD. Zero focus from devs, because 9 out of 10 use Nvidia.
I use an AMD CPU, why? Because they make good CPUs. I don't use an AMD GPU, why? Because their GPUs are crap. Worse than ever, pretty much. I miss ATi.
Gotta love forgetting about the 7600 XT and 7700 XT while at it.
AMD must be a GPU-centric and GPU-first company in order to generate money as it should.
Stupid, stupid..
My CPU's low power consumption is mainly due to low clock speeds; 3D cache is fragile. It has nothing to do with MCM since it's a single CCD. I wanted the best gaming chip, and sadly for AMD, the 7800X3D beats both the 7900X3D and 7950X3D here. Dual CCD is just not very good for gaming due to latency issues, and it does not help that only one CCD has 3D cache either. The 7900X3D in particular is bad, since it has only 6 cores with 3D cache.
I have a feeling that even if AMD were faster and cheaper, you'd make up some crap about their "faults". Yes, 4 GB was too little. That being said, the 980 Ti was 6 GB. Not exactly earth-shattering capacity there either. I guess at that point it was deemed enough.
The 900 series were good cards. They improved over the 700 series on the same node. Unfortunately, this was also the last gen where they allowed BIOS editing; after that they locked it down. Oh, I will wait and see, believe me. AMD's current cards can do RT as well as a 3090 Ti. So you're effectively telling me that a 3090 Ti can't do RT.
AMD even does RT on consoles. Something I thought was impossible so soon in this generation on that hardware.
Like I proved earlier, their FG is pretty good. It's you who keeps on denying reality. Yes, the upscaling part is not as good, but as we've established already, it does not matter how good it is. As an Nvidia fanboy you can't accept that anyone but Nvidia can be competent or make a competitive product. Show me one AMD card that actually reaches it. TPU's latest review of the 7900 XTX clearly shows that most cards reach around 80c: www.techpowerup.com/review/xfx-radeon-rx-7900-xtx-magnetic-air/37.html
All GPUs and CPUs have max temp limits near 100c or higher, as do capacitors and VRMs (even higher). You using this as some sort of "own" against AMD shows you have zero clue what that number actually represents, and that in the real world no one actually reaches it.
The age-old "AMD is hotter and uses much more power" myth refuses to die because dimwits like you don't bother reading a couple of reviews.
4090 hotspot ~75c.
7900 XTX hotspot ~80c.
Both are well within air cooling limits. As for power, the 7900 XTX draws 360W, the 4090 uses over 400W, and even the 4080S uses over 300W.
Again, both are acceptable for high-end cards. It's Nvidia who has a 600W BIOS for the 4090 and was planning (then subsequently canceled) a massive cinder-block cooler for its 600W+ monstrosity. But AMD uses 360W, oh noes. Ah yes, the one using actual, factual sources for its arguments is the fanboy, but the one spewing nonsensical, laughable arguments is not. Sure, sure. I have already exposed multiple of your lies in this thread. You seem to be well short of "facts" to prove your fanboyish comments here.
Just ten-year-old BS arguments that have since been mostly resolved. And you don't see the hypocrisy in this statement? You say AMD is hot and power hungry, that its drivers are bad, etc., and then you bring up ATI, which was way worse in those areas. Shows you have zero clue about history. Wrong again. Idle power, especially, is higher on all MCM designs due to the need to spend energy moving data around between dies.
And like was said before - MCM is absolutely about making smaller dies and lower defect rates.