He states the chip will have an RDNA module, not UDNA; presumably an RDNA 5 chip, given the APU's release date (mid-2027).
One must consider that APU graphics architecture lags behind the actual discrete GPU architecture, sometimes significantly, perhaps depending on whether the new architecture makes sense wrt power, die space, or other reasons.
It would appear they are not worked on in tandem (the GPU arch and the APU using it); rather, a module of a completed GPU architecture is bolted onto a newer APU when it is designed. It's long been this way.
You'll find APUs constructed with Vega when we had RDNA 2 on desktop; that is just how it appears to work at AMD. If they could cut that lag it would be great, but as I said, there are probably reasons.
Things likely won't truly shift until the GPU is an MCM chiplet design (at some point), and to me that's what I call 'UDNA', even if that's not the actual definition. It would appear that won't happen until post-2027.
At least in APUs. We could see it in other markets earlier, perhaps even as APUs continue to use a more centralized design (which could continue to be called RDNAx).
Consider that UDNA may mostly be defined by that movement to chiplets in general (used in both the MI series and discrete graphics; probably also eventually MCM APUs), either in conjunction with CPU cores or not.
I have absolutely no idea when this will be ready; it may be with the next GPU series, it may not. Whether they choose to call the next thing RDNA 5 or UDNA (perhaps in conjunction with a new programming model), IDK. The fact that his contact referred to RDNA 5 does make me wonder if, instead of chiplets, we get a couple of monolithic dies (at least at first) on 3nm, with the change to chiplets coming later.
At the end of the day, I don't think things will change that much from RDNA 4 regardless. The cache ratio might change a little, and it might be (mostly) external w/ UDNA, but I think the general architecture will still be similar.
I've been thinking about next-gen a lot, given that while it *does* make sense to make an efficient 384-bit chip on 3nm (say something like 18432sp @ 3780MHz / 36Gbps), and chiplets yield better, other options also exist.
And monolithic could still make sense for a company like AMD. They don't *have* to attack that highest-end market (and likely won't on monolithic given cost/risk) head-on with a high-unit setup.
For instance, there are absolutely perf targets you can hit with a high-clocked 12288sp part (over 4GHz / 40Gbps instead of 3.7GHz / 36Gbps) versus using a more-efficient lower-clocked design with similar units.
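To put rough numbers on those two hypothetical configs, here's a quick back-of-the-envelope sketch. Every spec here is my speculation from above, and the 256-bit bus on the 12288sp part is purely my own assumption for illustration (no leak says that):

```python
# Back-of-the-envelope FP32 throughput and memory bandwidth for the two
# hypothetical configs discussed above. All specs are speculative; the
# 256-bit bus on the smaller part is an assumption, not from any leak.

def fp32_tflops(shaders: int, clock_ghz: float) -> float:
    # 2 FLOPs per shader per clock (FMA)
    return shaders * 2 * clock_ghz / 1000

def mem_bw_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    # bus width in bytes times per-pin data rate
    return bus_bits / 8 * gbps_per_pin

big = (fp32_tflops(18432, 3.78), mem_bw_gbs(384, 36))   # efficient wide chip
small = (fp32_tflops(12288, 4.0), mem_bw_gbs(256, 40))  # high-clocked narrow chip

print(f"18432sp @ 3.78GHz, 384-bit/36Gbps: {big[0]:.1f} TFLOPS, {big[1]:.0f} GB/s")
print(f"12288sp @ 4.00GHz, 256-bit/40Gbps: {small[0]:.1f} TFLOPS, {small[1]:.0f} GB/s")
```

That works out to roughly 139 vs. 98 TFLOPS; the point is the narrow part still lands in striking distance of real perf targets while being a much cheaper die.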
nVIDIA may use the latter approach to sell a higher-end part versus the lower-end one (which otherwise may satisfy many 4k / 1440p RT [even upscaled to 4k] scenarios), especially if DLSS becomes more costly.
Theoretically the smaller chip (even w/ high clocks) would still be able to yield well and not be priced absurdly, which I do believe is AMD's main goal (chiplets or not).
Part of me wonders if this indeed will happen; if AMD will let nVIDIA make their 18432sp part to replace the 5090, and they'll happily just make a better 4090 w/ more perf than a '6080'.
That way they would be staying in their known markets and not venturing into the very small market of >$1000 GPUs, while for all intents and purposes covering the same realistic goals (and majority of folks).
I guess it would depend on how well a super-high-end part would accomplish native 4k RT, or maybe 1440p PT (upscaled to 4k). I don't think AMD will target more than 1080p (upscaled to 4k) PT.
If you look at the performance of the 4090 (where it is now ~55fps), you could imagine (increasing) scenarios where an (efficiently-clocked) 12288sp part would not necessarily hold that setting, even w/ an OC.
But perhaps if AMD shoots for higher clocks it may in fact hold those scenarios. It may use more power, but perhaps serve the overwhelming majority of the market better. IDK if that will happen, but it's possible.
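A naive way to sanity-check that clock argument: scale the 4090's ~55fps by raw FP32 throughput. This assumes perfectly linear scaling (which path tracing never actually achieves, so treat it as an upper bound) and uses the 4090's public specs (16384 SPs at ~2.52GHz boost); the 12288sp clocks are the hypotheticals from above:

```python
# Naive fps projection by FP32 throughput ratio vs. a 4090 baseline.
# Linear scaling is optimistic; this is an upper bound, not a prediction.

def tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz / 1000  # 2 FLOPs/shader/clock (FMA)

baseline_fps = 55.0            # ~4090 in the PT scenario mentioned above
rtx4090 = tflops(16384, 2.52)  # ~82.6 TFLOPS from public boost-clock specs

for clock in (3.7, 4.0):       # efficient vs. high-clocked 12288sp part
    part = tflops(12288, clock)
    print(f"12288sp @ {clock}GHz: ~{baseline_fps * part / rtx4090:.0f} fps (naive)")
```

Even under this optimistic math, the efficient clock only barely clears 60fps, while the 4GHz variant has some headroom; that's the gap higher clocks would be buying.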
I've got to believe information on whatever they are launching in 2026 will leak soon (and how that coincides with what the next-gen consoles will use), given tape-out is expected by EOY and mass production early next year.
And hopefully we get some insight into the long-term of what's coming after that as well (as in whether there are indeed monolithic chips coming first and then an eventual chiplet design in 2027 or later).
TBH, the most exciting part of that video is the idea of AMD using N2X for CPU chiplets; I've wondered if they'd do something like that for a long time. Glad to hear it appears they are. Yay.
Now imagine them doing something like that for UDNA, where the I/O die and cache are perhaps 3/4/5nm, but the GPU chiplets use a heavily-advanced process, even if it could only yield small chips. Exciting!
That's exactly what they have to do to compete with nVIDIA imho: they need to find a way to deliver highest-end performance at lower cost/risk, as they currently find themselves a tough sell for those users.
First they're going for the kill wrt Intel, apparently...but I could absolutely see them doing the same in GPUs. And TBH, they need to do that imho.