Friday, April 19th 2024

AMD "Strix Halo" Zen 5 Mobile Processor Pictured: Chiplet-based, Uses 256-bit LPDDR5X

Enthusiasts on the ChipHell forum scored an alleged image of AMD's upcoming "Strix Halo" mobile processor, and set out to create some highly plausible, though speculative, schematic slides. While "Strix Point" is the mobile processor that succeeds the current "Hawk Point" and "Phoenix" processors, "Strix Halo" is in a category of its own: it aims to offer gaming experiences comparable to discrete GPUs in ultraportable form-factors where powerful discrete GPUs are generally not possible. "Strix Halo" also goes head-on against Apple's M3 Pro and M3 Max processors powering the latest crop of MacBook Pros, and as a single-chip solution, it has the same advantages as the M3 Max.

The "Strix Halo" silicon is a chiplet-based processor, although very different from "Fire Range". The "Fire Range" processor is essentially a BGA version of the desktop "Granite Ridge" processor—it's the same combination of one or two "Zen 5" CCDs that talk to a client I/O die, and is meant for performance-thru-enthusiast segment notebooks. "Strix Halo," on the other hand, use the same one or two "Zen 5" CCDs, but with a large SoC die featuring an oversized iGPU, and 256-bit LPDDR5X memory controllers not found on the cIOD. This is key to what AMD is trying to achieve—CPU and graphics performance in the league of the M3 Pro and M3 Max at comparable PCB and power footprints.
The iGPU of the "Strix Halo" processor is based on the RDNA 3+ graphics architecture, and features a massive 40 RDNA compute units. These work out to 2,560 stream processors, 80 AI accelerators, 40 Ray accelerators, 160 TMUs, and an unknown number of ROPs (we predict at least 64). The slide predicts an iGPU engine clock as high as 3.00 GHz.

Graphics is an extremely memory-sensitive application, and so AMD is using a 256-bit (quad-channel, or octa-subchannel) LPDDR5X-8533 memory interface, for an effective cached bandwidth of around 500 GB/s. The memory controllers are cushioned by a 32 MB L4 cache located on the SoC die. The way we understand this cache hierarchy, the CCDs (CPU cores) can treat it as a victim cache, while the iGPU treats it like an L2 cache (similar to the Infinity Cache found in RDNA 3 discrete GPUs).
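For context, the raw DRAM bandwidth of that interface is straightforward to compute, and the ~500 GB/s "effective cached" figure only makes sense once L4 hits are blended in. A rough sketch, where the L4 hit rate and cache bandwidth are purely illustrative assumptions on our part, not numbers from the leak:

```python
# Raw bandwidth of a 256-bit LPDDR5X-8533 interface.
bus_width_bits = 256
transfer_rate_gtps = 8.533                        # giga-transfers per second

raw_gbs = (bus_width_bits / 8) * transfer_rate_gtps
print(f"~{raw_gbs:.0f} GB/s raw DRAM bandwidth")  # ~273 GB/s

# Blending in the 32 MB L4: if roughly a third of memory traffic hits
# the cache at, say, 1,000 GB/s (both values illustrative), the
# effective figure lands near the slide's ~500 GB/s claim.
l4_hit_rate = 1 / 3
l4_gbs = 1000.0
effective_gbs = l4_hit_rate * l4_gbs + (1 - l4_hit_rate) * raw_gbs
print(f"~{effective_gbs:.0f} GB/s blended effective bandwidth")  # ~515 GB/s
```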

The iGPU isn't the only logic-heavy, memory-sensitive device on the SoC die; there's also an NPU. From what we gather, this is the exact same NPU found in "Strix Point" processors, with a performance of around 45-50 AI TOPS, and is based on the XDNA 2 architecture developed by AMD's Xilinx team.
The SoC I/O of "Strix Halo" isn't as comprehensive as "Fire Range," because the chip has been designed on the idea that the notebook will use its large iGPU. It has PCIe Gen 5, but only a total of 12 Gen 5 lanes—4 toward an M.2 NVMe slot, and 8 to spare for a discrete GPU (if present), although these can be used to connect any PCIe device, including additional M.2 slots. There's also integrated 40 Gbps USB4, and 20 Gbps USB 3.2 Gen 2.

As for the CPU, since "Strix Halo" uses one or two "Zen 5" CCDs, its CPU performance will be similar to "Fire Range." You get up to 16 "Zen 5" CPU cores, with 32 MB of L3 cache per CCD, or 64 MB of total CPU L3 cache. The CCDs are connected to the SoC die either using conventional IFOP (Infinity Fabric over Package), just like "Fire Range" and "Granite Ridge," or, possibly, using Infinity Fanout links like those on some of AMD's chiplet-based RDNA 3 discrete GPUs.
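The CPU-side totals fall out of the CCD count directly; a quick sketch using the known "Zen 5" CCD layout from "Granite Ridge" (8 cores and 32 MB of L3 per CCD):

```python
# Per-CCD figures for the "Zen 5" CCD as used in "Granite Ridge".
CORES_PER_CCD = 8
L3_MB_PER_CCD = 32

for ccds in (1, 2):
    print(f"{ccds} CCD(s): {ccds * CORES_PER_CCD} cores, "
          f"{ccds * L3_MB_PER_CCD} MB of L3 cache")
# -> 1 CCD(s): 8 cores, 32 MB of L3 cache
# -> 2 CCD(s): 16 cores, 64 MB of L3 cache
```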
Lastly, there are some highly speculative performance predictions for the "Strix Halo" iGPU, which put it competitive with the GeForce RTX 4060M and RTX 4070M.
Sources: ChipHell Forums, harukaze5719 (Twitter)

103 Comments on AMD "Strix Halo" Zen 5 Mobile Processor Pictured: Chiplet-based, Uses 256-bit LPDDR5X

#1
Daven
Do you really need 16 cores in such a beast? Or put another way, how are they fitting all this into such a small package?
#2
Chaitanya
Daven: Do you really need 16 cores in such a beast? Or put another way, how are they fitting all this into such a small package?
There already is a 16-core beast in the form of the 7945HX, so a successor is a normal thing.
#3
tabascosauz
Ah there's the Infinity Link we've all been waiting for...now for the actual innovation
Daven: Do you really need 16 cores in such a beast? Or put another way, how are they fitting all this into such a small package?
Why limit yourself? It's not going to be going into affordable products, and so far that new interconnect of theirs hasn't proven to be efficiency-oriented the way Foveros in MTL is. It's not a small substrate and it's not some cost-effective existing design, so why only put 1 CCD in there (on a design-potential level; obviously it's not an impossibility that there can be 1-CCD parts)?

I think you underestimate what size of package Dragon Range presently dictates (FL1). This isn't that far-fetched. It's not going to be a drop-in replacement for something on a small substrate like Hawk Point on FP7/8.
#4
Daven
Chaitanya: There already is a 16-core beast in the form of the 7945HX, so a successor is a normal thing.
Fire Range is the successor to the 7945HX. Strix Halo is a completely new product family with no predecessors. Typically, AMD pairs very few GPU CUs with its high core-count SKUs, as it's assumed they will be coupled with a discrete GPU.
#5
AnarchoPrimitiv
THIS....this is what I've been waiting years for....a consumer APU with the graphical horsepower of a current gen console (or more)....this will finally give us a legitimate way of skipping the dGPU and I personally believe that APUs like this have the potential to be a bigger threat to the dGPU market, and by extension Nvidia, than any top tier dGPU competitor that AMD could make.

...oh, and the SFF builds will be legendary!
#6
Daven
AnarchoPrimitiv: THIS....this is what I've been waiting years for....a consumer APU with the graphical horsepower of a current gen console (or more)....this will finally give us a legitimate way of skipping the dGPU and I personally believe that APUs like this have the potential to be a bigger threat to the dGPU market, and by extension Nvidia, than any top tier dGPU competitor that AMD could make.

...oh, and the SFF builds will be legendary!
I would like to add: a consumer, non-Apple APU with the graphical horsepower of consoles. But yes, this has been a long time coming, and Apple made it happen first, which might have forced AMD and Intel to finally make one themselves.
#7
wNotyarD
AnarchoPrimitiv: THIS....this is what I've been waiting years for....a consumer APU with the graphical horsepower of a current gen console (or more)....this will finally give us a legitimate way of skipping the dGPU and I personally believe that APUs like this have the potential to be a bigger threat to the dGPU market, and by extension Nvidia, than any top tier dGPU competitor that AMD could make.

...oh, and the SFF builds will be legendary!
And then the market which could benefit from it the most (laptops) will always come with an Nvidia dGPU, insanely priced, just because.
#8
Kohl Baas
Daven: I would like to add: a consumer, non-Apple APU with the graphical horsepower of consoles. But yes, this has been a long time coming, and Apple made it happen first, which might have forced AMD and Intel to finally make one themselves.
I kinda feel betrayed by AMD for not having something like this since Fusion...
wNotyarD: And then the market which could benefit from it the most (laptops) will always come with an Nvidia dGPU, insanely priced, just because.
Because AMD can't make drivers. /s
#10
Denver
tabascosauz: Ah there's the Infinity Link we've all been waiting for...now for the actual innovation
Why limit yourself? It's not going to be going into affordable products, and so far that new interconnect of theirs hasn't proven to be efficiency-oriented the way Foveros in MTL is. It's not a small substrate and it's not some cost-effective existing design, so why only put 1 CCD in there (on a design-potential level; obviously it's not an impossibility that there can be 1-CCD parts)?
I think you underestimate what size of package Dragon Range presently dictates (FL1). This isn't that far-fetched. It's not going to be a drop-in replacement for something on a small substrate like Hawk Point on FP7/8.
Because the TDP is shared, dissipating heat remains a challenge inside a laptop. If the design is aimed at gaming, 8 cores are sufficient.

16 cores could and should be restricted to workstations.
#11
user556
It can't possibly be used for more than a single function, after all.
#12
phints
Daven: Do you really need 16 cores in such a beast? Or put another way, how are they fitting all this into such a small package?
TSMC is how. Probably just using a newer node.
#13
SRB151
Kohl Baas: I kinda feel betrayed by AMD for not having something like this since Fusion...
Because AMD can't make drivers. /s
I've always wondered about this statement. I ran Nvidia cards for years until they pulled that GeForce Partner Program, when I switched to AMD (also as much for the price/performance ratio; I can't afford $1,500-2K for a GPU). I can't think of a single bug that was really a showstopper with either of them. The most annoying problem I ever had was the power draw with dual monitors, and eventually that got fixed.
#14
N/A
phints: TSMC is how. Probably just using a newer node.
N4 is an offshoot of N5. The "Zen 4" CCD is 71 mm², vs. an estimated 80-85 mm² for "Zen 5".
#15
Darmok N Jalad
Daven: Do you really need 16 cores in such a beast? Or put another way, how are they fitting all this into such a small package?
It doesn't necessarily have to be a gaming APU. If it's aiming at M3 Pro/Max, then it needs to have a healthy core count. There's also an NPU included, so while it should play games really well, it would also make for a very powerful mobile workstation product.
#16
Daven
SRB151: I've always wondered about this statement. I ran Nvidia cards for years until they pulled that GeForce Partner Program, when I switched to AMD (also as much for the price/performance ratio; I can't afford $1,500-2K for a GPU). I can't think of a single bug that was really a showstopper with either of them. The most annoying problem I ever had was the power draw with dual monitors, and eventually that got fixed.
The bad AMD driver quality misinformation is an internet myth perpetuated by bad actors. There is a lot of speculation on who these bad actors are, ranging from viral Nvidia marketing to brand loyalists. But rest assured, as you have found out, there is no truth to it.

There's also thinking out there that if company A does something better than company B, then company B must have bad quality control or be ignorant of how to make good products. This relates to supersampling and ray tracing in the current discussion. These are two features which Nvidia simply does better; it has no relationship to drivers or driver quality. If these features are not important to you, paying the extra premium priced into Nvidia products for said features would be a waste of money.
#17
Kohl Baas
SRB151: I've always wondered about this statement. I ran Nvidia cards for years until they pulled that GeForce Partner Program, when I switched to AMD (also as much for the price/performance ratio; I can't afford $1,500-2K for a GPU). I can't think of a single bug that was really a showstopper with either of them. The most annoying problem I ever had was the power draw with dual monitors, and eventually that got fixed.
My only (still existing) problem is the driver's occasional habit of crashing when I run a game with something else using GPU acceleration. In my case, that's Chrome and Discord alongside the games, and there were times/games when I had to disable hardware acceleration if I wanted a straight hour of play without a CTD.
Daven: The bad AMD driver quality misinformation is an internet myth perpetuated by bad actors. There is a lot of speculation on who these bad actors are, ranging from viral Nvidia marketing to brand loyalists. But rest assured, as you have found out, there is no truth to it.

There's also thinking out there that if company A does something better than company B, then company B must have bad quality control or be ignorant of how to make good products. This relates to supersampling and ray tracing in the current discussion. These are two features which Nvidia simply does better; it has no relationship to drivers or driver quality. If these features are not important to you, paying the extra premium priced into Nvidia products for said features would be a waste of money.
I read on a forum that it evolved from the "nVidia is faster but ATI has better image quality" parable, which itself originated in the pre-DX9 era, when some rendering/imaging methods were not yet standardized and the manufacturers did their own separate solutions (don't ask, I wasn't really into this back then; it was around the DX10-11 era when I read it). It held on, relatively falsely, during DX9 because of the different HDR profiling they used.
#18
illli
Daven: I would like to add: a consumer, non-Apple APU with the graphical horsepower of consoles. But yes, this has been a long time coming, and Apple made it happen first, which might have forced AMD and Intel to finally make one themselves.
The i7-8809G doesn't count? It even had a nugget of HBM on the chip
www.notebookcheck.net/AMD-Radeon-RX-Vega-M-GH-GPU.278680.0.html
#19
SL2
Daven: Do you really need 16 cores in such a beast?
No?
"Strix Halo," on the other hand, use the same one or two "Zen 5" CCDs,
As for the CPU, since "Strix Halo" is using one or two "Zen 5" CCDs,
Daven: Or put another way, how are they fitting all this into such a small package?
Chiplets are small.

I'm just waiting for the first "Imagine a handheld with this!11" comment..
#20
Daven
illli: The i7-8809G doesn't count? It even had a nugget of HBM on the chip
www.notebookcheck.net/AMD-Radeon-RX-Vega-M-GH-GPU.278680.0.html
Oh I forgot about that one. Yep that counts. Nice post/reminder.
SL2: I'm just waiting for the first "Imagine a handheld with this!11" comment..
Lol. I was going to edit my post with that, but then I said nah. Of course your post ends the wait. :)
#21
Noyand
Daven: Do you really need 16 cores in such a beast? Or put another way, how are they fitting all this into such a small package?
AMD chiplets are tiny. If the schematics are accurate, it's going to be longer, but narrower, than Dragon Range. My guess is that AMD is trying to position this as a relatively thin-and-light mobile workstation solution. As a whole, you gain a ton of space on the PCB with that massive SoC that uses shared memory, compared to a dGPU solution.
#22
bug
SRB151: I've always wondered about this statement. I ran Nvidia cards for years until they pulled that GeForce Partner Program, when I switched to AMD (also as much for the price/performance ratio; I can't afford $1,500-2K for a GPU). I can't think of a single bug that was really a showstopper with either of them. The most annoying problem I ever had was the power draw with dual monitors, and eventually that got fixed.
It's not as bad as it seems, but there is a reason the term "AMD FineWine™" was coined.
AMD usually takes longer to extract peak performance from their GPUs. Also, if you want to use AMD on Linux while using compute and HDMI audio, that's always a lot of fun.

But the cards themselves and drivers are certainly usable for day-to-day tasks.
#23
progste
That GPU looks BEEFY!
Would be interesting to see this in a portable gaming machine like the Steam Deck, but maybe it's too power-hungry for that?
A thin gaming laptop with this could turn out very nice.
#24
wolf
40 CUs and 256-bit memory, now that's what I'm talking about.

Can't wait to see this in action!