
AMD Debuts Ryzen AI Max Series "Strix Halo" SoC: up to 16 "Zen 5" cores, Massive iGPU

Since RDNA 3.5 is mobile-only, and most of those chips come with NPUs anyway, does RDNA 3.5 even still have AI accelerators? If so, what's the point?

It depends on what you consider AI accelerators on a GPU. AMD doesn't have anything like tensor cores; some of their materials mention "AI accelerators" per compute unit, but I don't know exactly what that means, probably just a dp4a implementation and the like. NPUs, on the other hand, use a different instruction set (which one? beats me), so they can't simply replace the GPU implementing basic stuff like dp4a.

Maybe someone more knowledgeable can shed some light on this. In my opinion, the best-case scenario would be for RDNA 3.5, for better or worse, to behave just like a regular GPU and have the NPU as an extra to meet the "Copilot PC" BS requirements (it's cool to have, just not because of Microsoft's Copilot PC requirements).
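For what it's worth, dp4a itself is nothing exotic: it's a packed 4-wide INT8 dot product accumulated into INT32, which is the basic building block of quantized inference. A rough C++ sketch of what the instruction computes (my own illustration of the semantics, not actual GPU or AMD code):

```cpp
#include <cstdint>

// dp4a semantics: each 32-bit operand packs four signed 8-bit values;
// the instruction multiplies them pairwise and adds all four products
// to a 32-bit accumulator in a single step.
int32_t dp4a(uint32_t a_packed, uint32_t b_packed, int32_t acc) {
    for (int i = 0; i < 4; ++i) {
        // Extract byte i from each operand and sign-extend it to int8.
        int8_t a = static_cast<int8_t>((a_packed >> (8 * i)) & 0xFF);
        int8_t b = static_cast<int8_t>((b_packed >> (8 * i)) & 0xFF);
        acc += static_cast<int32_t>(a) * static_cast<int32_t>(b);
    }
    return acc;  // INT8 x INT8 -> INT32 dot product
}
```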
 
NPUs are meant to do basic inference with quantized models, often INT4 or INT8, and don't have many extra capabilities beyond that.
The extra "AI accelerators" in GPUs (like XMX in Intel, Tensor cores in Nvidia, or the WMMA instructions in RDNA) are meant to do large matmuls similar to the ones in NPUs, but faster and with many different data types, such as FP16, FP8, BF16, FP32, etc., which allows for higher performance and quality, and also for training models.
 
I'm laughing pretty hard here.

So AMD shows their best iGPU laptop CPU versus Intel, and AMD is so far ahead that some silly guys get upset here.

Yeah, Intel is far behind, very very behind.

---------------

Yes, they are not that far apart in TDP. Expect 45W AMD parts to compete very well against the 37W Intel part.

That Intel part btw is found in $2700 CAD Dell laptops, the Dell XPS 13.

A perfectly valid comparison. Expect a new Dell laptop with Strix Halo.
 
This 37W Intel part is already nonsense; it doubles the TDP of the rest of the lineup for an extra 100 MHz of boost clock. All the other LNL models are 17W parts.
You won't be seeing Strix Halo in a laptop like the XPS 13 either, since its base TDP is way higher, which would both reduce battery life and make cooling harder, especially given that the XPS is a premium thin-and-light product.

I don't think this is a fair comparison, unless you also consider a comparison between a 5090 and AMD's supposed 9070 to be fair as well; in that case we can agree to disagree.
 
What's the difference with this CCD placement?

I'm guessing that it works like in a 9950X, only with all the chips placed right next to each other? Or is there some difference in how they're connected?
 
AMD's comparison with Intel was a mic drop that will get louder as the year goes on.
 
On the CCD placement question: it's still a chiplet design like the desktop Ryzen parts, but it seems they've placed the dies right next to each other.
 
Don't forget all those GPU cores.
 

It's more like the RDNA 3 style of chiplets: the dies sit on an interposer rather than a plain PCB substrate. "High-performance fanout bridge" is the name.
 