Wednesday, April 24th 2024
AMD's RDNA 4 GPUs Could Stick with 18 Gbps GDDR6 Memory
Today, we have the latest round of leaks suggesting that AMD's upcoming RDNA 4 graphics cards, expected to launch as the RX 8000 series, might continue to rely on GDDR6 memory modules. According to Kepler on X, AMD's next-generation GPUs are expected to feature 18 Gbps GDDR6 memory, which would make RDNA 4 the fourth consecutive RDNA architecture to employ this memory standard. While GDDR6 cannot match the bandwidth of the newer GDDR7 standard, this decision does not necessarily imply that RDNA 4 GPUs will be slow performers. AMD's choice to stick with GDDR6 is likely driven by factors such as meeting specific memory bandwidth targets and keeping PCB designs cost-optimized. However, if the rumor of 18 Gbps GDDR6 memory proves accurate, it would represent a slight step back from the fastest GDDR6 in AMD's current RDNA 3 lineup, which reaches 20 Gbps on cards such as the RX 7900 XT and RX 7900 XTX.
AMD's first-generation RDNA used GDDR6 at 12-14 Gbps, RDNA 2 came with GDDR6 at 14-18 Gbps, and the current RDNA 3 uses 18-20 Gbps GDDR6. Without a jump in memory generation, speeds would stay roughly flat at 18 Gbps. However, it is crucial to remember that leaks should be treated with skepticism, as AMD's final memory choices for RDNA 4 could change before the official launch. The decision to use GDDR6 versus GDDR7 could have significant implications in the upcoming battle between AMD's, NVIDIA's, and Intel's next-generation GPU architectures. If AMD indeed opts for GDDR6 while NVIDIA pivots to GDDR7 for its "Blackwell" GPUs, it could create a disparity in memory bandwidth between the competing products. All three major GPU manufacturers (AMD, NVIDIA, and Intel with its "Battlemage" architecture) are expected to unveil their next-generation offerings in the fall of this year. As we approach these highly anticipated releases, more concrete details on specifications and performance will emerge, providing a clearer picture of the competitive landscape.
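To put those numbers in context, peak memory bandwidth scales linearly with per-pin data rate and bus width: bandwidth in GB/s equals the data rate in Gbps multiplied by the bus width in bits, divided by eight. The short Python sketch below is a back-of-the-envelope illustration only; the 256-bit bus widths and the 28 Gbps GDDR7 data rate are assumptions chosen for comparison, not confirmed specifications.

# Rough GDDR bandwidth math: peak bandwidth (GB/s) = per-pin data rate (Gbps) * bus width (bits) / 8.
# Bus widths and the GDDR7 data rate below are illustrative assumptions; RDNA 4 and "Blackwell" configs are unconfirmed.
def peak_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    return data_rate_gbps * bus_width_bits / 8

configs = {
    "18 Gbps GDDR6, 256-bit (hypothetical RDNA 4)":      (18, 256),  # -> 576 GB/s
    "20 Gbps GDDR6, 384-bit (RX 7900 XTX)":               (20, 384),  # -> 960 GB/s
    "28 Gbps GDDR7, 256-bit (hypothetical GDDR7 card)":   (28, 256),  # -> 896 GB/s
}
for label, (rate, width) in configs.items():
    print(f"{label}: {peak_bandwidth_gb_s(rate, width):.0f} GB/s")

Under those assumptions, an 18 Gbps card on a 256-bit bus would land at 576 GB/s versus the RX 7900 XTX's 960 GB/s, which is why bus width and on-die cache matter at least as much as module speed when judging these rumors.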
Sources:
@Kepler_L2 (on X), via Tom's Hardware
114 Comments on AMD's RDNA 4 GPUs Could Stick with 18 Gbps GDDR6 Memory
According to the GN poll, 31% of people didn't have an RT-capable card and a further 41% didn't even bother enabling it. This is among the enthusiast community; the numbers are going to be even more against RT in the broader PC gaming population.
Who are "most gamers"? You mean the mobile gamers? The esports gamers?
So out of the ~34% of gamers surveyed, 6.5% have a good chance of regularly running ray tracing. Still not amazing market penetration for a tech that is now nearly 6 years old.
Now I will admit it's only in the 40xx and upcoming 50xx series that we have really had the RT performance to turn it on without asking what major downgrades the rest of the settings need just to make it work regularly in AAA games.
I would argue that most of the people who have the capability to turn it on do it to see what it looks like, but then either turn it right down or off so it doesn't impact performance.
Also, all the hype around DLSS/FSR/XeSS etc. is just a complete crutch for game developers to skip optimising their games to the same level as before and to remove the need for optimisation tweaks in patches, so they can focus on DLC/skins/microtransactions from Day 0. Like NVIDIA coming out with marketing materials going "LOOK AT THIS 5X PERFORMANCE UPLIFT GEN TO GEN" and then, in 0.5 font, "When using DLSS 3.5".
It's like a car manufacturer coming out and saying "We have now released a 155 mph capable car while maintaining 100+ MPG on your daily trips", and then you look at the fine print to notice it's while traversing the side of Mount Everest for the gravity assist or some other bullshit.
You're not wrong about it being niche; of the people I've helped build systems for in the last 12 months, I've had to explain ray tracing to them in about 2/3 of the cases.
I really want to know what AMD (and this forum) thinks will tempt me to 'upgrade' from a 6800 XT.
The only reason I'm thinking about replacing my 7800 XT is that it eats almost as much power playing a video as my CPU does under full load. Its performance is more than enough and I don't care about RT or DLSS.
I've said this before, but if it isn't 7900 XTX-ish performance with better power consumption and at a healthy price, I won't be interested. Scarily, I could even consider moving to team green, which I'd prefer not to do.
We've still got a long way to go, and brute-forcing the whole scene like they're doing now isn't the way; the real deal will come from engine developments like Nanite that are software-based before being hardware-specific. In the end, hand-crafting will still matter, much like it did before. Lots of new games on new engines don't look all that great. There's no love in them, just high-fidelity assets. That alone won't make the scene or the game. Evidently, graphics don't make the game.
Ray Traced effects and RTX are almost as bad as the PhysX nonsense. Modern hardware and consoles alike simply aren’t powerful enough for real “ray” (path) tracing.
I tried RT AO and reflections. Had to drop to DLSS Performance, but even with that the game was hitching a lot. Doesn't seem VRAM-related, as usage was between 10 and 11 GB. Could be CPU-related, but that's still just a bad implementation (all CPUs have terrible 1% lows in this game with RT).
RT reflections look good; I love that they don't disappear. But RTAO doesn't look much better, and there are still some disocclusion artifacts. It's absolutely not worth the hit to performance.
RT is pretty much pointless on anything but the top-end GPU. I can't imagine what happens when the 5090 comes out and they start "optimizing" games for that. But it is kind of like future-proofing: when you play the game 5 or 10 years later, you can enable all the eye candy.
But I would still rather get more shaders than RT cores. I do want DLSS, though; it's extremely useful on a 4K TV.
Edit: Which video is your screenshot from?