
AMD Debuts Ryzen AI Max Series "Strix Halo" SoC: up to 16 "Zen 5" cores, Massive iGPU

Since RDNA 3.5 is mobile-only, and most of these chips come with NPUs, does RDNA 3.5 even still have AI accelerators? If so, what is the point?

It depends on what you consider an AI accelerator on a GPU. AMD doesn't have something like tensor cores; they mention in some materials having some AI accelerators per compute unit, but I don't know what that means, probably just a dp4a implementation and the like. On the other hand, NPUs use a different instruction set (which one? beats me), so they can't simply replace a GPU that implements the basic stuff like dp4a.

Maybe someone more knowledgeable can shed some light on this. In my opinion, the best-case scenario would be for this RDNA 3.5, for better or worse, to be just like a regular GPU, with the NPU as an extra to meet the "Copilot+ PC" bs requirements (it's cool to have, just not because of Microsoft's Copilot PC requirements).
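For reference, dp4a is just a four-way INT8 dot product accumulated into an INT32, one instruction per step of a larger dot product. A minimal numpy sketch of what it computes (this is a scalar model, not any vendor's actual intrinsic):

```python
import numpy as np

def dp4a(a: np.ndarray, b: np.ndarray, c: int) -> int:
    """Scalar model of dp4a: dot product of four INT8 values,
    accumulated into an INT32."""
    assert a.dtype == np.int8 and b.dtype == np.int8 and a.size == b.size == 4
    # Widen to int32 before multiplying so the products don't overflow int8.
    return int(np.dot(a.astype(np.int32), b.astype(np.int32))) + c

acc = dp4a(np.array([1, -2, 3, 4], dtype=np.int8),
           np.array([5, 6, -7, 8], dtype=np.int8), 0)
print(acc)  # 1*5 + (-2)*6 + 3*(-7) + 4*8 = 4
```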
 
NPUs are meant to do basic inference with quantized models, often INT4 or INT8, and don't have many extra capabilities beyond that.
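To make "quantized" concrete, here's a minimal numpy sketch of symmetric per-tensor INT8 quantization, one common scheme models are converted to before running on an NPU (the scheme and function names are generic illustrations, not any specific NPU toolchain):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor INT8 quantization: map floats onto [-127, 127]."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)      # stand-in for weights
q, s = quantize_int8(w)
err = np.abs(w - dequantize_int8(q, s)).max()
print(err)  # rounding error, at most about scale/2
```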
The extra "AI accelerators" in GPUs (like Intel's XMX, Nvidia's Tensor cores, or the WMMA instructions in RDNA) are meant to do large matmuls similar to the ones in NPUs, but faster and with many more data types, such as FP16, FP8, BF16, and FP32, which allows for higher performance and quality, and also for training models.
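The pattern all of those matrix units share is mixed precision: low-precision inputs with the products accumulated at higher precision. A plain numpy emulation of one such tile (the tile size is illustrative; this is not any vendor's actual WMMA/Tensor core API):

```python
import numpy as np

# Emulation of one matrix-multiply-accumulate tile, the pattern behind
# XMX / Tensor cores / WMMA: low-precision inputs, higher-precision accumulator.
M = K = N = 16  # 16x16x16 is a common tile size; purely illustrative here

a = np.random.randn(M, K).astype(np.float16)   # FP16 inputs
b = np.random.randn(K, N).astype(np.float16)
c = np.zeros((M, N), dtype=np.float32)         # FP32 accumulator

# D = A @ B + C, with products summed in FP32 to limit rounding error.
d = a.astype(np.float32) @ b.astype(np.float32) + c
print(d.dtype)  # float32
```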
 