Monday, July 21st 2025

AMD's Upcoming UDNA / RDNA 5 GPU Could Feature 96 CUs and 384-bit Memory Bus
According to Kepler_L2, one of the most reliable AMD leakers, AMD's upcoming UDNA (or RDNA 5) GPU generation will reintroduce higher-end GPU configurations with up to 96 Compute Units (CUs) in the top-end Navi 5X SKU, paired with a 384-bit memory bus. We still don't know what type of memory AMD will ultimately use, but an early assumption is that GDDR7 is on the table. In the middle of the stack, AMD plans a GPU with 64 CUs and a 256-bit memory bus, along with several variants with different CU counts and memory capacities around it. For the entry-level models, AMD could deliver a 32 CU configuration paired with a 128-bit memory bus. So far, memory capacities are unknown and may change as AMD finalizes its GPU lineup.
After the RDNA 4 generation, which left AMD without a top-end contender and fighting for mid-range market share, UDNA / RDNA 5 will be a welcome addition. We look forward to seeing what the UDNA design is capable of and what microarchitectural changes AMD has made. Mass production of these GPUs is expected in Q2 2026, so availability is anticipated in the second half of 2026. A clearer picture of the exact memory capacity and type will emerge as launch approaches.
Source:
Kepler_L2
119 Comments on AMD's Upcoming UDNA / RDNA 5 GPU Could Feature 96 CUs and 384-bit Memory Bus
People who usually care about DLSS are those who CAN play with such tools.
150-200 CUs please.
But for the next year at least, I'm happy with my 7900XTX factory OC card.
------------------------------
I'm also very glad some people are amused by my post, expected no less. I guess to me it's equally amusing that a lot of folks still operate on the "fake frames" & "upscaling bad" argument level in 2025 ;)
Still, that is a halo product, above high end and then some. Calling anything below it mid-range isn't quite right.
The main issue with FSR 4 is that, because of how few cards actually support it, it's not going to get traction. Now imagine UDNA launching with yet another new version of FSR that is exclusive to the new GPUs. No thanks, I'd stick to Nvidia.
that's pretty much exactly the middle. But on the other hand, the TPU database calls the 5060 Ti a "High End" GPU...:roll:
In a non-clown world where people don't justify getting ripped off and defend stagnation (not meaning you... just in general), we would have had a 5060 Ti with the specs of a 5080 for 450 bucks a long time ago. Instead, we get the "too good to throw away" AI accelerator binning rejects.
And UDNA is the same strategy as Nvidia's:
we just make a single architecture specifically for AI accelerators and sell the trash to the gamers for exorbitant amounts of money, because in their eyes the full die is no longer a $1,000 product but a $10,000 one.
Here's the TOTL datacenter Blackwell vs. the TOTL desktop Blackwell. Notice the massive difference in ROP count, FP64 units, INT32 units, and FP16 units relative to the number of shading units. GB202 is not a cut-down GB200. Blackwell for the desktop is optimized for gaming.
It's not even like "Gaming only" RDNA had an advantage in performance or efficiency over Nvidia "desktop and HPC architecture".
But then again if it's not a chiplet design I might not, I like the chiplets purely because it's a cool idea :) Still, make it happen AMD! Give us a monstrous card to buy and show off again!
Where it can.
But in most games(*) it won't work that way. Ever.
In the best case, some anti-cheat will show you the middle finger.
In the worst... your account will be banned. Forever.
(*) All online competitive multiplayer games, all gacha games and, for good measure, some single-player games. DLSS Swapper in most cases does not trip anti-cheat.
Optiscaler does.
And core count increases, of course: the RX 470 came in 2016 with 2048 cores and 8 GB VRAM. Fast forward to 2025 and we have the 9060 XT with 2048 cores and 8 GB for 300-320€, and 16 GB for 360-400€.
It's like AMD has been pulling a core-stagnation strategy for a decade now, like Intel used to do on the CPU front with their 4-core i5s/i7s.
As for upscaling, the best-case scenario would be a card that doesn't need to use it at all (but seeing how stingy HW makers are these days, I don't see this happening). Failing that, Microsoft should get their sh*t together, release DirectSR already, and make its use mandatory, so game devs and NV/AMD/Intel can't pull crap on us, the latest and greatest upscalers are available in all games from all vendors, and everyone can select the best upscaler for their current GPU brand.
And as I also said before, but I see you avoid giving a straight answer: you make it look like someone needs to invest a couple of hours per game to use Optiscaler. So, are you saying that someone needs to spend two hours messing with stuff, with the help of Optiscaler, just to make one game work with a different upscaler?

They tried it for over a decade. People were laughing in AMD's face and kept buying Nvidia cards anyway:
"Please AMD, release something good enough but much cheaper than Nvidia, so that Nvidia lowers prices and I can buy cheaper Nvidia GPUs"
combined with
"AMD cards are trash, they are for poor people and they have the worst drivers imaginable".
I guess the age of gifts has reached its end. AMD sometimes offers better GPUs at lower prices, and people will still choose Nvidia. The same is happening now with people buying the 8GB RTX 5050 and RTX 5060 over the faster 8GB RX 9060 XT.