Wednesday, April 24th 2024
AMD's RDNA 4 GPUs Could Stick with 18 Gbps GDDR6 Memory
Today, we have the latest round of leaks suggesting that AMD's upcoming RDNA 4 graphics cards, expected to arrive as the RX 8000 series, might continue to rely on GDDR6 memory modules. According to Kepler on X, AMD's next-generation GPUs are expected to feature 18 Gbps GDDR6 memory, marking the fourth consecutive RDNA architecture to employ this memory standard. While GDDR6 may not offer the same bandwidth as the newer GDDR7 standard, this decision does not necessarily imply that RDNA 4 GPUs will be slow performers. AMD's choice to stick with GDDR6 is likely driven by factors such as meeting specific memory bandwidth targets and cost optimization for PCB designs. However, if the rumor of 18 Gbps GDDR6 memory proves accurate, it would represent a slight step back from the 18-20 Gbps GDDR6 used in AMD's current RDNA 3 offerings, such as the RX 7900 XT and RX 7900 XTX.
AMD's first-generation RDNA used GDDR6 at 12-14 Gbps, RDNA 2 came with GDDR6 at 14-18 Gbps, and the current RDNA 3 uses 18-20 Gbps GDDR6. Without a jump to a newer memory generation, speeds would stay at around 18 Gbps. However, it is crucial to remember that leaks should be treated with skepticism, as AMD's final memory choices for RDNA 4 could change before the official launch. The decision to use GDDR6 versus GDDR7 could have significant implications in the upcoming battle between AMD's, NVIDIA's, and Intel's next-generation GPU architectures. If AMD indeed opts for GDDR6 while NVIDIA pivots to GDDR7 for its "Blackwell" GPUs, it could create a disparity in memory bandwidth between the competing products. All three major GPU manufacturers (AMD, NVIDIA, and Intel with its "Battlemage" architecture) are expected to unveil their next-generation offerings in the fall of this year. As we approach these highly anticipated releases, more concrete details on specifications and performance will emerge, providing a clearer picture of the competitive landscape.
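To put those per-pin speeds into perspective, total memory bandwidth follows directly from the data rate and the bus width (bandwidth in GB/s = data rate in Gbps x bus width in bits / 8). The short Python sketch below applies that standard formula to the rumored 18 Gbps modules; the bus widths are illustrative assumptions only, since RDNA 4 memory configurations have not been confirmed.

# Rough GDDR bandwidth math: GB/s = Gbps per pin * bus width in bits / 8.
# 18 Gbps is the rumored RDNA 4 speed; the bus widths are illustrative
# assumptions, not confirmed RDNA 4 specifications.
def gddr_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Total memory bandwidth in GB/s for a given per-pin rate and bus width."""
    return data_rate_gbps * bus_width_bits / 8

for bus in (128, 192, 256):
    print(f"18 Gbps on a {bus}-bit bus: {gddr_bandwidth_gb_s(18, bus):.0f} GB/s")
# Prints 288, 432, and 576 GB/s respectively.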
Sources:
@Kepler_L2 (on X), via Tom's Hardware
114 Comments on AMD's RDNA 4 GPUs Could Stick with 18 Gbps GDDR6 Memory
People spending $1,000+ generally already go for the best. Nvidia has for years been selling its cards on mindshare and software more than hardware.
RX 7900 XT-level performance will not be reached with ~500 GB/s of memory bandwidth. Forget it. See the latest TPU graphics card reviews: www.techpowerup.com/review/?category=Graphics+Cards&manufacturer=&pp=25&order=date
Cited because they show the current state of affairs.
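For context on that ~500 GB/s figure, here is a quick back-of-the-envelope comparison against the RX 7900 XT (20 Gbps GDDR6 on a 320-bit bus, per its published specifications); the 18 Gbps / 256-bit configuration is a hypothetical RDNA 4 setup used only to illustrate the gap.

# RX 7900 XT: 20 Gbps GDDR6 on a 320-bit bus (published spec).
# The 18 Gbps / 256-bit line is a hypothetical configuration for comparison only.
def bandwidth_gb_s(rate_gbps: float, bus_bits: int) -> float:
    return rate_gbps * bus_bits / 8

xt = bandwidth_gb_s(20, 320)            # 800 GB/s
hypothetical = bandwidth_gb_s(18, 256)  # 576 GB/s
print(f"RX 7900 XT: {xt:.0f} GB/s; hypothetical 18 Gbps/256-bit card: "
      f"{hypothetical:.0f} GB/s ({hypothetical / xt:.0%} of the XT's bandwidth)")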
My worst experience was with Nvidia during their bumpgate scandal, where my 8800 GTS 320 kept dying and had to be revived in an oven, albeit temporarily. It was also a second-hand EVGA product, so I had no warranty either. Currently I'm on a 2080 Ti that I managed to buy for a reasonable price before the latest crypto boom sent prices sky-high. I also made more than $1k on it by mining on the side. If I had to buy a new card now, it would likely be AMD, as my modded games require more VRAM and I despise the new power connector Nvidia mandates even for the 4070 Super, a <250 W card that could easily be powered by a single 8-pin.
My fear when buying Nvidia is the next feature they lock me out of when they release their next series. I've already been locked out of ReBAR, which the 30 series introduced, and DLSS Frame Generation, which the 40 series introduced. I'm sure the 50 series will widen the gap further.
That is exactly what happened to me, but the kicker was that they did not even inform me when they disabled SLI on the GTS 450. Imagine how stupid I felt after I had sold them to a friend.
When we start to get laptops with just these APUs in them, I expect they will sell well too. Acer has one that they announced at $599 with an 8700G laptop chip.
I would hope Intel focuses on desktop/datacenter for Celestial, and then with Druid/E-series parts we see a funnel down into iGPU power/efficiency.
Look at how Alchemist has performed and developed. I am near 100% sure there is a massive accidental bottleneck in the hardware, and I would guess it sits in the scheduler/load-store functions: moving from 1080p to 1440p in most games on Arc costs only single percentage points of performance, RT on or off. Yet on nearly all other manufacturers' cards you see a respectable drop in performance, or should I say a respectable gain from dropping the resolution.
Get that fixed for Celestial/Druid and they have a real contender in the iGPU space.
I suspect that with RDNA4 and the now-cancelled top-end offering, AMD either went too far on the chiplet design and realised it would need a full rework (RDNA5/successor arch?), or had intended for HBM3/e to be used on the top-end parts, similar to the MI300, only for the AI craze to price them out of that market again.
So I don't see any trouble leaving that tiny segment to ultra-rich kids and selfish manchildren, considering that among all premium products AMD makes more profit from enterprise anyway. There's no point in selling many premium GPUs when they can sell workstation cards instead of top-tier "gamer" counterparts, to people who need them and will gladly pay the premium. And gamers can get by with something akin to what's used in consoles (RX 6700) anyway.
But most of the gamer segment comes from low/mid-end GPUs anyway. There's no point investing in something that is basically a placeholder. And even if there's no RDNA5 successor, AMD can live with just such low-cost cards until they feel able to release something at the top. They did it with the RX 580, Radeon VII (Vega II) and RX 5700, until they made the RX 6800 XT/RX 6900 XT, which sold like hotcakes and were basically on par with their NVIDIA rivals.
Thus, there's absolutely no point in putting expensive VRAM into such temporary, low-cost products. It's more reasonable to save the newest GDDR7 for breakthrough solutions that may or may not be RDNA5.
As for Intel, I can't say they are absolutely hopeless, though it's not guaranteed that they will achieve great things with Battlemage. However, as much as I don't like Intel, I must admit they have already made significant progress in the GPU division. I dare say even bigger progress than AMD made in a decade, but within a couple of years. Of course, they have a far bigger R&D budget, but still. From what I've seen and read, Intel's encoders/decoders are miles better than AMD's, even on the lowest-end cards. Their RTRT is also better. And this while Intel is in a huge decline, selling assets left and right. AMD, on the other hand, is blooming, but is still reluctant to invest in consumer areas like Radeon, because they went all-in on enterprise, which doesn't rely as heavily on AMD's drivers and doesn't need the streaming/decoding capabilities anyway. So AMD can invest less while getting more. At this point AMD seems even greedier than Nvidia. They are lacking in every area, but still have the hubris to ask a premium for absent features/options.
The package is bad.
Intel made FSR look like a joke in their first attempt.
I would sacrifice the performance crown for a better package overall.
It took years to change the mindset from Intel Core to AMD Ryzen but it did happen.
That's why most of us have Ryzens now.
It may happen on the GPU side if NVIDIA continues asking $1,000+ for midrange cards.
I totally agree that we need to get mid-range back into the $500 range and high-end to $1,000 and under, but that cat seems to be out of the bag now and may never go back in.
And with both companies shifting their focus more toward AI and putting more resources into it, we may continue to see a squeeze on discrete GPUs, with pricing going up.
Remember all the talk about engineer CEOs? Well, Nvidia still has an engineer as CEO, and not only that, he is the one who founded the company! That's as if Intel still had Gordon Moore, Robert Noyce, or Andy Grove, men who are still considered legendary.
Nvidia's engineering IS really good. They consistently push out reticle-sized dies. Sure, they make mistakes, but over the long run nowhere near as badly as AMD and Intel. Nvidia has made a handful of mistakes while AMD and Intel stumbled like they were peg-legged. Remember, too, that Nvidia has one of the highest, if not the highest, employee satisfaction ratings. No wonder they are successful!
That's part of why AMD's GPU division is struggling and its CPU division is not.
Battlemage should in theory be a lot better even if it places itself in the same relative position to competitors as Alchemist. They can fix the idle/low-load power consumption issue, the ReBAR issue, and hardware quirks from lack of experience such as abnormal resolution/detail scaling. ReBAR is a big one, as it automatically rules out or discourages most older systems, which is counterintuitive considering how affordable Arc cards are. And ReBAR doesn't just affect older systems: recently there was a bug where some systems had half-working ReBAR with the Vulkan API. So random-ass low performance in modern games might come down to missing ReBAR, which has a great impact on Arc (where it's negligible on competitors' cards).
I know from tracking Intel GPUs for a long time that what was thought to be software/driver problems often turned out to be hardware problems. No doubt such problems exist on Alchemist. In fact, even where driver bugs exist, they might be easier to fix on Battlemage and its successors.
Prices will never return to what used to be the relative norm; people just finance everything from what I've seen and are probably drowning in debt if everyone and their mother is buying a 7900/4080/4090. I've said it before, but we're continuously moving towards GPUs of any kind being a luxury and gaming on PC being largely unaffordable for the average person.
What Nvidia is good at is making something and making people want it, even though it might be in 1% of games. The narrative then picks it up and it becomes a feature. Look at how Frame Gen was received and how that morphed into a good thing. The key, though, is that a lot of the talk about AMD's response is real but gets treated as snake oil by the community. I remember how people used to say G-Sync was much better than FreeSync because it was a hardware module. Sound familiar?
2. Yes, AMD aren't trying to compete. If we don't count Germany and a couple of other countries where AMD products actually sell, we're in a 99:1 NVIDIA-win situation, simply because AMD GPUs at the same price match or barely exceed the raster performance and lose in everything else.
3. Prices will stabilise; the bubble isn't going to grow forever. The most recent example is the real estate crisis of the late '00s.
4. I disagree with "Intel made FSR look like a joke." FSR looks like a joke by itself; it required no help from the competition. I bought my GPU more than a year ago and FSR is still in the same shape it was in when I bought the card, give or take two games where things became ever so slightly better after the introduction of FSR 2.2. FSR 3.1 would've been late to the party even on the first day of 2023; yet it's almost mid-2024 and 3.1 is absolutely nowhere to be seen.
5. "6900 XT is on par with Ampere" is a deranged statement. It barely outperforms 3080 at 4K, sometimes even loses to it, also lacking any DLSS and RT performance whatsoever, whilst being far more expensive. More VRAM doesn't mean anything if framerate is still lower.
6. "4060 for $150 and 4080 for $500." I mean, these are exactly as cut-down as it gets. Halving their MSRP would've represented reasonable pricing. $220 and $620 respectively would be completely fine.
7. We don't need to beat an NV halo GPU, but we do need a price war. The 7900 XTX is a great GPU by itself; it's just that $1000 is beyond schizophrenic for it. $570-ish would've struck hard, leading to a much more pleasant market. Never happened, though.
Initially, GDDR7 was supposed to launch at 32 Gbps and go up to 36 Gbps, but it's actually going to start at 28 Gbps, probably to reduce costs.
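To put those GDDR7 numbers next to the rumored 18 Gbps GDDR6, here is the same bandwidth arithmetic on an assumed 256-bit bus; the bus width is an assumption chosen purely for comparison.

# Bandwidth on an assumed 256-bit bus for the per-pin rates mentioned above.
BUS_BITS = 256
for label, rate in (("GDDR6 @ 18 Gbps", 18), ("GDDR7 @ 28 Gbps", 28), ("GDDR7 @ 32 Gbps", 32)):
    print(f"{label}: {rate * BUS_BITS / 8:.0f} GB/s")
# GDDR6 @ 18 Gbps: 576 GB/s
# GDDR7 @ 28 Gbps: 896 GB/s
# GDDR7 @ 32 Gbps: 1024 GB/s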
Some users: let them have cake, I'm fine with this situation.
Congrats on the rest of your post; you couldn't have written a better Nvidia advert if you tried. Altogether, the GRE is not a bad product. I tend to look at a GPU as the sum of its parts, and not at the bottleneck that one of its parts may or may not have. It's not like you can upgrade your VRAM, after all.