Wednesday, March 19th 2025

AMD "Medusa Point" APU with Zen 6 Confirmed to Use RDNA 3.5, RDNA 4 Reserved for Discrete GPUs

AMD's next-generation Zen 6-based "Medusa Point" mobile APUs will not feature RDNA 4 graphics as previously speculated, according to recent code discoveries in AMD's GPUOpen drivers on GitHub. The "GfxIp12" identifier associated with the RDNA 4 architecture is reserved exclusively for discrete GPUs, meaning the current Radeon RX 9000 series will remain the only implementation of AMD's latest graphics architecture. Current technical documentation indicates AMD will instead extend RDNA 3.5 beyond the Zen 5 portfolio, potentially positioning UDNA as the successor technology for integrated graphics.

The chiplet-based Medusa Point design will reportedly pair a single 12-core Zen 6 CCD manufactured on a TSMC 3 nm-class node with a mobile client I/O die likely built on N4P, a significant departure from current monolithic mobile solutions. Earlier speculation suggests the Medusa Point platform may support 3D V-Cache variants, leveraging the same vertical stacking methodology employed in current Zen 5 implementations. The mobile processor's memory controllers and neural processing unit are expected to receive substantial updates. However, the absence of RDNA 4 silicon raises compatibility concerns with AMD's latest graphics features, such as FSR 4. The Zen 6-powered Medusa Point processor family is scheduled for release in 2026, targeting premium mobile computing applications with a performance profile that builds upon AMD's current Strix Halo positioning.
Sources: Kepler_L2, via Wccftech

15 Comments on AMD "Medusa Point" APU with Zen 6 Confirmed to Use RDNA 3.5, RDNA 4 Reserved for Discrete GPUs

#1
mrnagant
Never really understood why AMD keeps iGPUs behind. Like when RDNA was a thing, they still used GCN for a while in the iGPU space.

Ryzen 5000 series 3 years after the last discrete GPU, and the Ryzen 7000 series 5 years after the last discrete GPU got a couple models with GCN.
Posted on Reply
#2
Dragokar
They probably upgrade the "igpus" next time with UDNA. Smart move tbh.
Posted on Reply
#3
Daven
Will the differences between RDNA 3.5 and 4 even fit inside an SoC, even on a smaller node? Also, I'm guessing a lot of the differences are AI- and RT-related. SoCs have NPUs, and I doubt anyone could actually play games with RT enabled on an SoC.

Finally, isn't RDNA 3.5 + an NPU + media decoding/encoding units close to RDNA 4 anyway? I think there needs to be more context than an architecture number difference.
Posted on Reply
#4
Assimilator
mrnagant: Never really understood why AMD keeps iGPUs behind. Like when RDNA was a thing, they still used GCN for a while in the iGPU space.

Ryzen 5000 series 3 years after the last discrete GPU, and the Ryzen 7000 series 5 years after the last discrete GPU got a couple models with GCN.
Product segmentation.
Posted on Reply
#5
outlw6669
mrnagant: Never really understood why AMD keeps iGPUs behind. Like when RDNA was a thing, they still used GCN for a while in the iGPU space.

Ryzen 5000 series 3 years after the last discrete GPU, and the Ryzen 7000 series 5 years after the last discrete GPU got a couple models with GCN.
In this case, I can understand AMD keeping RDNA 3.5 for the new IGPs.
Specifically here, RDNA 3.5 has been highly optimized for power- and bandwidth-constrained mobile scenarios.
While it might make us feel better to have RDNA 4 (and FSR 4 would be very welcome in mobile!), unless they have the resources free to make an optimized 'RDNA 4.5', it would probably be a step backwards.
Posted on Reply
#6
SL2
mrnagant: Never really understood why AMD keeps iGPUs behind.
COST?

Historically, for every Phoenix or Hawk Point sold to someone who actually cares about that, there are hundreds sold to businesses or someone else who doesn't. AMD doesn't have unlimited resources, and APUs are clearly not their cash cow.

Even if they did go for RDNA 4, what's the point? Putting all that extra work into something that won't do rays in a meaningful way anyway, because it's too slow to begin with.
Posted on Reply
#7
Tigerfox
SL2: Even if they did go for RDNA 4, what's the point? Putting all that extra work into something that won't do rays in a meaningful way anyway, because it's too slow to begin with.
Nonsense. You know that the RX 9070 is about 27-30% faster than the 7800 XT in rasterization, too? It has ~7% fewer shaders but clocks up to 7% higher and has only slightly more bandwidth, so you can assume that's the difference between RDNA 3 and RDNA 4.
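The shader-versus-clock arithmetic above can be sanity-checked quickly. A minimal sketch, using only the approximate figures cited in this thread (not official specifications):

```python
# Rough raw-throughput parity check: RX 9070 vs RX 7800 XT,
# using the approximate ratios cited in this thread (illustrative only).
shaders_ratio = 0.93   # 9070 has ~7% fewer shaders than the 7800 XT
clock_ratio = 1.07     # ...but clocks ~7% higher

raw_throughput_ratio = shaders_ratio * clock_ratio
print(f"Raw throughput ratio: {raw_throughput_ratio:.3f}")  # ~1.0, i.e. parity

# If raw throughput is roughly equal but measured raster performance is
# ~27-30% higher, the architectural (RDNA 3 -> RDNA 4) gain accounts for
# essentially the whole uplift:
measured_uplift = 1.28  # midpoint of the 27-30% range cited above
arch_gain = measured_uplift / raw_throughput_ratio
print(f"Implied architectural gain: {arch_gain:.2f}x")
```

In other words, with shader count and clock speed nearly cancelling out, the measured uplift is attributable almost entirely to the architecture change.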
Now who wouldn't want the 890M iGPU or even the 8060S to be 30% faster? And who wouldn't want FSR 4 on mobile? The whole point of Strix Halo (and Medusa Halo) is iGPU performance. It's a shame already that Strix Halo is not RDNA 4.

Of course, the reason is cost. With only slightly more shaders, the same IF cache and memory controller, but an upgraded architecture, double the L2 cache, and PCIe Gen 5, Navi 48 has nearly double the transistor count of Navi 32 and nearly as many as Navi 31, and can only stay about as big as Navi 32 due to the more expensive N4 process and greatly increased packing density. So upgrading iGPUs to RDNA 4 would make them quite a lot more costly to produce, which isn't an option yet. However, just as I would prefer a 9070 XT over a 7900 XT, I would prefer a 9050S with 32 CUs of RDNA 4 over an 8060S with 40 CUs of RDNA 3.5.
Posted on Reply
#8
bushlin
For those speculating as to why the iGPU is RDNA 3.5 and not 4, the answer is simple: die space
Posted on Reply
#9
Squared
I don't understand why companies keep releasing new products loaded with old tech. RDNA 4 will probably have longer-lived driver support, which alone is reason to use it. If it's too expensive, reduce the CUs from 16 to 12. Since the CU count can be reduced, I don't think die size alone can explain this.
Posted on Reply
#10
igormp
Tigerfox: The whole point of Strix Halo (and Medusa Halo) is iGPU performance. It's a shame already that Strix Halo is not RDNA 4.
The article is about Medusa Point, which would be the successor to Strix Point. The iGPU performance is nice, but it's not the main feature of this product range.
I haven't seen talk about a "Medusa Halo" other than minor things from MLID, and I don't consider that a credible source at all.
Posted on Reply
#11
SL2
Tigerfox: Nonsense. You know that the RX 9070 is about 27-30% faster than the 7800 XT in rasterization, too? It has ~7% fewer shaders but clocks up to 7% higher and has only slightly more bandwidth, so you can assume that's the difference between RDNA 3 and RDNA 4.
Oh, you want to talk about nonsense? That's cute.

I bring a valid reason and it's nonsense. Meanwhile, you bring up the fact that the 9070 XT does more work than a 7800 XT (which is RDNA 3, not 3.5). You forgot about power draw, how convenient.

Power draw is everything for mobile devices, and the 9070 XT and the 7900 XTX have identical power draw, about the same performance, and about the same efficiency. The hardware is vastly different and so is the manufacturing cost, but that's a given.

So tell me again, how does an RDNA 4 GPU with 3% better efficiency than the RDNA 3 GPU, which is negligible, become a better choice than an RDNA 3.5 GPU, IF it means a lot of extra work for AMD?

Whatever comes after RDNA 4 might be a worthy upgrade for mobile, but this is not it.
Posted on Reply
#12
ymdhis
mrnagantNever really understood why AMD keeps iGPUs behind. Like when RDNA was a thing, they still used GCN for a while in the iGPU space.
Because the newer GPUs are not yet available (in small enough versions) when they start development on the APUs. Any CPU takes several years to get to market. RDNA 4 came out JUST NOW and doesn't even have a smaller version yet. Medusa Point has been mentioned for a few years now, so they could not have started developing it with RDNA 4, which was not yet finished.
And it's not possible - well, possible, but not a good idea - to develop both a new CPU and a new GPU at the same time. So they just develop a new CPU and slap on whatever existing, proven GPU they have.

The better question is, why does it take so much time to release new desktop APUs? It took 3 years to go from Cezanne to Phoenix. Strix Point is already out, but there's no news about any new desktop APUs, not even on leaked roadmaps. Will we have to wait until 2027 for an 8600G successor (which will most likely just be a Strix Point port)?
Posted on Reply
#13
Fouquin
Once again...
Fouquin: People saying RDNA3.5 is outdated don't understand how divergent architectures work. RDNA3.5 is not RDNA3, it's also not RDNA4. It's a focused divergent architecture between the two that is concurrent with both generations. It puts a focus on power and density efficiency that neither of the bigger designs needs to bother with to hit their targets.
Posted on Reply
#14
Tigerfox
SL2: Meanwhile, you bring up the fact that the 9070 XT does more work than a 7800 XT. You forgot about power draw, how convenient.
I did not compare the 7800 XT with the 9070 XT, but with the 9070 non-XT, which offers computing power identical to the 7800 XT (as stated above, 7% fewer CUs but 7% higher clocks) but is still 27-30% faster in raster and 43% faster in RT. The 7800 XT isn't even an efficient RDNA 3 card; others have a much better fps-per-watt ratio, but the 9070 is 43% more efficient than the 7800 XT (43% more fps per watt) and 22% more efficient than the 9070 XT. The 9070 XT is very much optimized for performance, not efficiency. The 9070 is by far the most efficient card compared to the 4070, the 7800 XT, and everything above.
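Taken together, the performance and efficiency ratios above imply a lower power draw for the 9070. A minimal sketch, using only the approximate figures cited in this post (illustrative, not measured specs):

```python
# fps-per-watt comparison, using the approximate ratios cited in this post
perf_ratio = 1.28         # 9070 ~27-30% faster than the 7800 XT in raster
efficiency_ratio = 1.43   # 9070 ~43% more fps per watt (as claimed)

# Efficiency = performance / power, so the implied board-power ratio is
# performance / efficiency:
power_ratio = perf_ratio / efficiency_ratio
print(f"Implied power draw vs 7800 XT: {power_ratio:.2f}x")  # ~0.90x
```

That is, the 9070 would deliver the cited uplift while drawing roughly 10% less power than the 7800 XT, if both ratios hold.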

But I agree that this might not be enough to justify upgrading to RDNA 4, since it nearly doubles the transistor count and thus makes the costly N4 node necessary. Yet we know nothing about UDNA, so we can only hope it will be more of an upgrade than RDNA 4.
ymdhis: Any CPU takes several years to get to market. RDNA 4 came out JUST NOW and doesn't even have a smaller version yet. Medusa Point has been mentioned for a few years now, so they could not have started developing it with RDNA 4, which was not yet finished.
And it's not possible - well, possible, but not a good idea - to develop both a new CPU and a new GPU at the same time.
That's not entirely true. Phoenix with RDNA 3 came out less than five months after the RX 7900 XT(X), so they were developed rather close together. It's risky, but necessary not to fall behind; AMD has some catching up to do GPU-wise.
ymdhis: The better question is, why does it take so much time to release new desktop APUs? It took 3 years to go from Cezanne to Phoenix. Strix Point is already out but there's no news about any new APUs, not even on leaked roadmaps. Will we have to wait until 2027 for an 8600G successor (which will most likely just be a Strix Point port)?
The answer is that desktop APUs aren't a big market, and it's even less important how fast they are. Their only selling point is low price, and that's only true for the 8400F and 8500G. Even though RAM-OC records are being set with Phoenix APUs right now, there is no real reason to game on an 8700G and invest in fast RAM, because a 7500F and anything above an RX 6400 will be much faster. The only reason is if you really don't have space for anything but the tiniest ITX cases.

What I don't get is why AMD willingly makes themselves look worse than Intel in everything but halo laptops by always recycling old APU generations once or twice. While only the cheapest, lowest-end Intel laptops come with last generation's CPUs, it is commonplace that many otherwise identical AMD variants use a refresh of the last generation. Compare the last generations of Lenovo ThinkBooks and ThinkPad E- and L-series as well as HP ProBook 400 and EliteBook 600 between Intel and AMD.
While Intel did the same thing this time around - recycling Alder Lake and Raptor Lake as Core 200 without Ultra (Core i 14000 was at least a refresh with increased clocks and, partly, core counts) - it seems they will not be used nearly as much as the second Phoenix recycling, called AMD Ryzen 200U/H.

They did the right thing by developing a separate, smaller, and cheaper die with the same tech as Strix Point in Krackan Point, but now it seems many laptops will be available with two or even three different dies.
Posted on Reply
#15
Squared
Tigerfox: Yet we know nothing about UDNA, so we can only hope it will be more of an upgrade than RDNA4.
We know this:
  • Prior to RDNA was GCN. A major difference between the two is that RDNA removed a lot of logic intended for compute so that more transistors could be allocated to gaming tasks.
  • Now AMD says this is a problem: RDNA couldn't (until recently) run ROCm, so very few developers with AMD graphics had access to ROCm, and so almost no one uses it. AMD's stated solution is to make ROCm more widely available and to make a new graphics architecture (UDNA) with more compute-focused resources than RDNA.
  • Supporting more features usually means more transistors are needed.
Posted on Reply