Friday, April 19th 2024

AMD "Strix Halo" Zen 5 Mobile Processor Pictured: Chiplet-based, Uses 256-bit LPDDR5X

Enthusiasts on the ChipHell forum scored an alleged image of AMD's upcoming "Strix Halo" mobile processor, and set out to create some highly plausible, though speculative, schematic slides. While "Strix Point" is the mobile processor that succeeds the current "Hawk Point" and "Phoenix" processors, "Strix Halo" is in a category of its own—it aims to offer gaming experiences comparable to discrete GPUs in the ultraportable form-factor, where powerful discrete GPUs are generally not possible. "Strix Halo" also goes head-on against Apple's M3 Max and M3 Pro processors powering the latest crop of MacBook Pros, and as a single-chip solution, it enjoys the same advantages as the M3 Max.

The "Strix Halo" silicon is a chiplet-based processor, although very different from "Fire Range". The "Fire Range" processor is essentially a BGA version of the desktop "Granite Ridge" processor—it's the same combination of one or two "Zen 5" CCDs that talk to a client I/O die, and is meant for performance-thru-enthusiast segment notebooks. "Strix Halo," on the other hand, use the same one or two "Zen 5" CCDs, but with a large SoC die featuring an oversized iGPU, and 256-bit LPDDR5X memory controllers not found on the cIOD. This is key to what AMD is trying to achieve—CPU and graphics performance in the league of the M3 Pro and M3 Max at comparable PCB and power footprints.
The iGPU of the "Strix Halo" processor is based on the RDNA 3+ graphics architecture, and features a massive 40 RDNA compute units. These work out to 2,560 stream processors, 80 AI accelerators, 40 Ray accelerators, 160 TMUs, and an unknown number of ROPs (we predict at least 64). The slide predicts an iGPU engine clock as high as 3.00 GHz.

Graphics is an extremely memory-sensitive application, and so AMD is using a 256-bit (quad-channel or octa-subchannel) LPDDR5X-8533 memory interface, good for roughly 273 GB/s of raw bandwidth and an effective cached bandwidth of around 500 GB/s. The memory controllers are cushioned by a 32 MB L4 cache located on the SoC die. The way we understand this cache hierarchy, the CCDs (CPU cores) can treat it as a victim cache, while the iGPU treats it like an L2 cache (similar to the Infinity Cache found in RDNA 3 discrete GPUs).
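The raw-bandwidth figure is simple arithmetic, sketched below; the 128-bit LPDDR5X-7500 comparison point is our assumption of a typical current-generation thin-and-light configuration, not something taken from the leak:

```python
# Theoretical peak memory bandwidth: transfer rate (MT/s) x bus width (bytes).
def peak_bandwidth_gbs(bus_width_bits: int, transfer_rate_mts: int) -> float:
    return transfer_rate_mts * (bus_width_bits // 8) / 1000

# "Strix Halo" as leaked: 256-bit LPDDR5X-8533
print(f"256-bit LPDDR5X-8533: {peak_bandwidth_gbs(256, 8533):.0f} GB/s")  # ~273 GB/s

# A conventional 128-bit LPDDR5X-7500 notebook platform, for comparison
print(f"128-bit LPDDR5X-7500: {peak_bandwidth_gbs(128, 7500):.0f} GB/s")  # ~120 GB/s
```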

The iGPU isn't the only logic-heavy, memory-sensitive device on the SoC die; there's also an NPU. From what we gather, this is the exact same NPU found in "Strix Point" processors, with a performance of around 45-50 AI TOPS, and is based on the XDNA 2 architecture developed by AMD's Xilinx team.
The SoC I/O of "Strix Halo" isn't as comprehensive as "Fire Range," because the chip has been designed on the idea that the notebook will use its large iGPU. It has PCIe Gen 5, but only a total of 12 Gen 5 lanes—4 toward an M.2 NVMe slot, and 8 to spare for a discrete GPU (if present), although these can be used to connect any PCIe device, including additional M.2 slots. There's also integrated 40 Gbps USB4, and 20 Gbps USB 3.2 Gen 2.

As for the CPU, since "Strix Halo" uses one or two "Zen 5" CCDs, its CPU performance will be similar to "Fire Range." You get up to 16 "Zen 5" CPU cores, with 32 MB of L3 cache per CCD, or 64 MB of total CPU L3 cache. The CCDs are connected to the SoC die either over conventional IFOP (Infinity Fabric over package), just like "Fire Range" and "Granite Ridge," or possibly over Infinity Fanout links like the ones on some of AMD's chiplet-based RDNA 3 discrete GPUs.
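As a trivial sketch of how that scales, assuming the standard "Zen 5" CCD layout of 8 cores and 32 MB of L3 per chiplet (which matches the figures above):

```python
# Core, thread, and L3 totals for one- and two-CCD "Strix Halo" variants,
# assuming the standard 8-core, 32 MB-L3 "Zen 5" CCD.
CORES_PER_CCD = 8
L3_PER_CCD_MB = 32

for ccds in (1, 2):
    cores = ccds * CORES_PER_CCD
    print(f"{ccds} CCD(s): {cores} cores / {cores * 2} threads, "
          f"{ccds * L3_PER_CCD_MB} MB L3")
# 1 CCD(s): 8 cores / 16 threads, 32 MB L3
# 2 CCD(s): 16 cores / 32 threads, 64 MB L3
```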
Lastly, there are some highly speculative performance predictions for the "Strix Halo" iGPU, which put it as competitive with the GeForce RTX 4060M and RTX 4070M.
Sources: ChipHell Forums, harukaze5719 (Twitter)

109 Comments on AMD "Strix Halo" Zen 5 Mobile Processor Pictured: Chiplet-based, Uses 256-bit LPDDR5X

#76
AusWolf
Super Firm Tofu: This is rock solid?



I guess you just have a different definition. :shrug:

My issue is that as long as people won't admit there are problems, AMD has no incentive to fix their Windows drivers.
I forgot to add that I later reinstalled the driver, and now it works fine. I'm not saying that AMD is the same kind of plug-and-play experience as Nvidia, but it's nowhere near as dramatic as some people say.
Posted on Reply
#77
Chrispy_
wolf: 40 CUs and 256-bit memory, now that's what I'm talking about.

Can't wait to see this in action!
We've had promising on-paper configurations before, and this is what happens:
  • The only silicon configuration with all the CUs will be the insanely-expensive flagship variant(s)
  • Because they're flagships, they'll only appear in
    • Behemoth overpriced gaming laptops with dGPUs, rendering the IGP pointless
      or
    • Impossibly thin, overpriced ultraportables that compromise on cooling to look "sexy and thin" so hard that they throttle hard within 60 seconds of any real GPU load and are therefore unusable for gaming.
  • The sort of configuration that will appear in a half-decent, everyday $1000 laptop is going to be a 6-core with 20 CUs and lacking the LPDDR5X because manufacturers are cheapskates and the LPDDR5X is probably "optional"...
I really hope I'm wrong!
Posted on Reply
#78
AnotherReader
Chrispy_: We've had promising on-paper configurations before, and this is what happens:
  • The only silicon configuration with all the CUs will be the insanely-expensive flagship variant(s)
  • Because they're flagships, they'll only appear in
    • Behemoth overpriced gaming laptops with dGPUs, rendering the IGP pointless
      or
    • Impossibly thin, overpriced ultraportables that compromise on cooling to look "sexy and thin" so hard that they throttle hard within 60 seconds of any real GPU load and are therefore unusable for gaming.
  • The sort of configuration that will appear in a half-decent, everyday $1000 laptop is going to be a 6-core with 20 CUs and lacking the LPDDR5X because manufacturers are cheapskates and the LPDDR5X is probably "optional"...
I really hope I'm wrong!
I don't expect laptops featuring this to be cheap, but it wouldn't make any sense to pair this with a dGPU.
Posted on Reply
#79
mechtech
"This is key to what AMD is trying to achieve—CPU and graphics performance in the league of the M3 Pro and M3 Max at comparable PCB and power footprints.

The iGPU of the "Strix Halo" processor is based on the RDNA 3+ graphics architecture, and features a massive 40 RDNA compute units. These work out to 2,560 stream processors, 80 AI accelerators, 40 Ray accelerators, 160 TMUs, and an unknown number of ROPs (we predict at least 64). The slide predicts an iGPU engine clock as high as 3.00 GHz."

Interesting................

Drop all the AI and RT (and the transistors and price that go with them), make sure it has a full media engine, and you'd have a nice HTPC chip.
Posted on Reply
#80
ghazi
Kohl Baas: I kinda feel betrayed by AMD for not having something like this since Fusion...



Because AMD can't make drivers. /s
Really I think the memory bandwidth was the final limiting factor, HBM is out of the question with this product range so what can you do? 256-bit LPDDR5X at 8.4Gbps gives 269GB/s bandwidth which is less than the RX 7600 and in line with the tired old RX 480. It's actually also in line with the HD 7970 which had 264GB/s. And this just became available, in the days of LPDDR4X you'd be looking at half that bandwidth.
Posted on Reply
#81
Darmok N Jalad
AnotherReader: Moreover, if it is anything like Phoenix, then the CPU complex might not have a wide enough link to the memory controller to use that bandwidth. The cores themselves are capable of using that bandwidth, but I doubt that they would be allowed to access even 50% of it.
Apple's CPU cores are similar. They just don't use the large amount of bandwidth they are provided. The GPU will for sure, and perhaps the NPU?
stimpy88: 3DFX ring a bell? They were better than nGreedia, and were killed because of it.
3dfx suffered self-inflicted wounds. They didn't compete well in the end and had to close shop. I had a Voodoo3 2000, and, while it was good in proprietary Glide, it was not as competitive in OpenGL. AGP did nothing for performance, and it maxed out at 16-bit color. Voodoo5 resolved that, but it leaned too much on SLI and multiple GPUs on one board to compete. Their last single GPU card couldn't hold its own, which basically knocked them right out of the market. Even ATi was better, and they weren't exactly known for gaming power until the 7500/8500 series.
Posted on Reply
#82
RimbowFish
This Is an APU Done Right

I think if the GPU is downclocked to around 2 GHz, this APU could be put in a handheld gaming console
Posted on Reply
#83
GodisanAtheist
Chrispy_: We've had promising on-paper configurations before, and this is what happens:
  • The only silicon configuration with all the CUs will be the insanely-expensive flagship variant(s)
  • Because they're flagships, they'll only appear in
    • Behemoth overpriced gaming laptops with dGPUs, rendering the IGP pointless
      or
    • Impossibly thin, overpriced ultraportables that compromise on cooling to look "sexy and thin" so hard that they throttle hard within 60 seconds of any real GPU load and are therefore unusable for gaming.
  • The sort of configuration that will appear in a half-decent, everyday $1000 laptop is going to be a 6-core with 20 CUs and lacking the LPDDR5X because manufacturers are cheapskates and the LPDDR5X is probably "optional"...
I really hope I'm wrong!
-Yep.

AMD's laptop strategy has been schizophrenic and they've really been crowded out by Intel and NV on that front.

For every 1 AMD laptop out there, you'll find 10 Intel/NV options.

It's good to see AMD finally leveraging its IP to provide a part that neither Intel nor NV can deliver, but I question whether the part will be delivered in volume and whether it would be enough to go against the current brand perceptions.
Posted on Reply
#84
Tek-Check
btarunr: The SoC I/O of "Strix Halo" isn't as comprehensive as that of "Fire Range," because the chip has been designed around the idea that the notebook will use its large iGPU. It has PCIe Gen 5, but only a total of 12 Gen 5 lanes—4 toward an M.2 NVMe slot, and 8 to spare for a discrete GPU (if present), although these can be used to connect any PCIe device, including additional M.2 slots. There's also integrated 40 Gbps USB4, and 20 Gbps USB 3.2 Gen 2x2.
This diagram from the 'leaker' is a plagiarised copy of the one in TechPowerUp's CPU database.
Posted on Reply
#85
Kohl Baas
ghazi: Really I think the memory bandwidth was the final limiting factor, HBM is out of the question with this product range so what can you do? 256-bit LPDDR5X at 8.4Gbps gives 269GB/s bandwidth which is less than the RX 7600 and in line with the tired old RX 480. It's actually also in line with the HD 7970 which had 264GB/s. And this just became available, in the days of LPDDR4X you'd be looking at half that bandwidth.
Since it's gonna be a BGA solution anyway, GDDR6 would be okay with me... The only concern is capacity. How much shared RAM would you need for 16 cores and 40 CUs if gaming is not the only thing you care about?
Posted on Reply
#86
AusWolf
Chrispy_: We've had promising on-paper configurations before, and this is what happens:
  • The only silicon configuration with all the CUs will be the insanely-expensive flagship variant(s)
  • Because they're flagships, they'll only appear in
    • Behemoth overpriced gaming laptops with dGPUs, rendering the IGP pointless
      or
    • Impossibly thin, overpriced ultraportables that compromise on cooling to look "sexy and thin" so hard that they throttle hard within 60 seconds of any real GPU load and are therefore unusable for gaming.
  • The sort of configuration that will appear in a half-decent, everyday $1000 laptop is going to be a 6-core with 20 CUs and lacking the LPDDR5X because manufacturers are cheapskates and the LPDDR5X is probably "optional"...
I really hope I'm wrong!
I want this on a mini-ITX board, either in soldered or socketed form, I don't care. I can dream on, I guess. :rolleyes:
Posted on Reply
#87
stimpy88
bitsandboots: At this point of dominance, Windows market share can only go down, and the only question is to what limit and at what speed. But honestly the web has taken over, with so many things being web apps or Electron apps these days, that if you wanted to only use a browser on Linux, that's most people's activities now anyway, which is great for choosing the right OS.

Somehow Valve managed to make the Steam Deck a success despite putting Arch of all distros on it, so it's not as bad as you'd think!

As a developer though, I noticed a funny thing about Windows... the I/O is such that if a program is ported in a naive way, though it may work, it will have worse performance than on Linux. "stat" is a fast command on Linux, but not so on Windows. I just do not do any JS development on Windows anymore, for example, because of how atrociously tools like npm and webpack handle thousands of tiny files. And text searching them is no better.
Do you remember the days when software was written to be as small and fast as possible? Apps written in assembler to target the CPU directly for the fastest possible execution. It's sad that today a simple command or app that performs limited functions can be dozens of megabytes, where a little 1 or 2 kB program could have done the same thing.

Could you imagine the performance of Windows written in pure assembly, targeted at a specific range of hardware? Shame Microsoft fired every decent and competent coder they had years ago.

If you remember the days of the Amiga, Windows 11 is like a game programmed in AMOS or BASIC vs a game programmed in assembly. Today our computers are so fast that it's almost impossible not to simply brute-force good performance. Today Windows is coded in high-level languages not far off HTML5 code.
Posted on Reply
#88
R0H1T
Kohl Baas: How much shared RAM would you need for 16 cores and 40 CUs if gaming is not the only thing you care about?
1280GB sounds about enough, also just add an extra 8GB in there for a specific kind of middle finger to Apple & their stupid fanbase :D
Posted on Reply
#89
stimpy88
Darmok N Jalad: Their last single GPU card couldn't hold its own, which basically knocked them right out of the market. Even ATi was better, and they weren't exactly known for gaming power until the 7500/8500 series.
Do you remember the first time you saw a game running on a 3DFX card? I had a Matrox Millennium, the best 2D card at the time, and I bought my first Diamond 3D Voodoo card and connected it up using that awful loop-through cable that made your monitor slightly blurry! That moment was the most profound moment I ever had with a PC. I still remember feeling like I owned a Silicon Graphics workstation in my bedroom, and all my friends were in awe because it was like having a Sega Rally or Ridge Racer arcade cabinet at home! Man, when you fired up a game and that 3DFX logo showed, you knew you were in for an experience!

It was tragic what 3DFX did to themselves, but it was even sadder when nGreedia killed them off.
Posted on Reply
#90
AusWolf
stimpy88: Do you remember the first time you saw a game running on a 3DFX card? I had a Matrox Millennium, the best 2D card at the time, and I bought my first Diamond 3D Voodoo card and connected it up using that awful loop-through cable that made your monitor slightly blurry! That moment was the most profound moment I ever had with a PC. I still remember feeling like I owned a Silicon Graphics workstation in my bedroom, and all my friends were in awe because it was like having a Sega Rally or Ridge Racer arcade cabinet at home! Man, when you fired up a game and that 3DFX logo showed, you knew you were in for an experience!

It was tragic what 3DFX did to themselves, but it was even sadder when nGreedia killed them off.
It was an easy kill, to be fair. They only had to put the last nail in the coffin.

Sure, the graphics weren't bad for the time, but as far as I've seen, the whole 3DFX ownership experience wasn't that great because of things like the aforementioned fuzzy image from the loop cable, or the fact that the card didn't do 2D, which bumped up the cost of a new build. Sadly, I've never owned one, so I can only comment based on the YouTube nostalgia videos I've seen.
Posted on Reply
#91
Kohl Baas
R0H1T: 1280GB sounds about enough, also just add an extra 8GB in there for a specific kind of middle finger to Apple & their stupid fanbase :D
I don't believe it's possible to put 1280 GB of GDDR6 on a single card without needing the approval of the FAA. :p
Posted on Reply
#92
Chrispy_
AusWolf: I want this on a mini-ITX board, either in soldered or socketed form, I don't care. I can dream on, I guess. :rolleyes:
You probably won't see it on a mini-ITX board, but based on current trends, the prebuilt, bespoke mini-PCs from companies like Minisforum and Beelink are likely to pick them up eventually. They're your best bet this generation if you want a decent-spec mobile CPU like the 7840HS or 7940HS in a desktop SFF.

Just a couple of examples....
Posted on Reply
#93
Darmok N Jalad
stimpy88: Do you remember the first time you saw a game running on a 3DFX card? I had a Matrox Millennium, the best 2D card at the time, and I bought my first Diamond 3D Voodoo card and connected it up using that awful loop-through cable that made your monitor slightly blurry! That moment was the most profound moment I ever had with a PC. I still remember feeling like I owned a Silicon Graphics workstation in my bedroom, and all my friends were in awe because it was like having a Sega Rally or Ridge Racer arcade cabinet at home! Man, when you fired up a game and that 3DFX logo showed, you knew you were in for an experience!

It was tragic what 3DFX did to themselves, but it was even sadder when nGreedia killed them off.
I think their investors killed them off by wanting them to sell their assets, and NVIDIA bought up their IP.
Posted on Reply
#94
Wirko
Kohl Baas: I don't believe it's possible to put 1280 GB of GDDR6 on a single card without needing the approval of the FAA. :p
Given that you'd need two industrial-grade octacopters, moored down to the PC, to cool the memory and the associated processor, you're probably right here.
Posted on Reply
#95
Zareek
R-T-B: I just switched to Linux this weekend because the open-source drivers are superior to the Windows ones, yes.

I was dual booting until literally yesterday.

And I wouldn't use the word "crap." But there is a difference. It's been improving yes, but I am impatient. :laugh:
Fortunately or unfortunately, I can't speak for the RX 7000 series yet. Maybe that's why my experiences are so good. I am rarely an early adopter these days. I think Ryzen 3800x/x570 was my last early adoption. That was a bit rough at launch. Back in the day, I was a launch day buyer for a lot of graphics cards, but I think the Radeon HD 3870 was my last launch day video card. I usually wait three to six months so everyone else can work out all the driver bugs these days.
Posted on Reply
#96
R-T-B
Zareek: Fortunately or unfortunately, I can't speak for the RX 7000 series yet. Maybe that's why my experiences are so good. I am rarely an early adopter these days. I think Ryzen 3800x/x570 was my last early adoption. That was a bit rough at launch. Back in the day, I was a launch day buyer for a lot of graphics cards, but I think the Radeon HD 3870 was my last launch day video card. I usually wait three to six months so everyone else can work out all the driver bugs these days.
To be fair, I think some of it may be chiplet-technology growing pains on the GPU side of things. The cost of innovation, I guess.
Posted on Reply
#97
AnotherReader
Kohl Baas: Since it's gonna be a BGA solution anyway, GDDR6 would be okay with me... The only concern is capacity. How much shared RAM would you need for 16 cores and 40 CUs if gaming is not the only thing you care about?
GDDR6 is too power hungry to be a viable alternative to LPDDR5X for this application.
Posted on Reply
#98
Yashyyyk
ghazi: Really I think the memory bandwidth was the final limiting factor, HBM is out of the question with this product range so what can you do? 256-bit LPDDR5X at 8.4Gbps gives 269GB/s bandwidth which is less than the RX 7600 and in line with the tired old RX 480. It's actually also in line with the HD 7970 which had 264GB/s. And this just became available, in the days of LPDDR4X you'd be looking at half that bandwidth.
HBM... could have been used, see 8809G / Vega M GH

Ironically, it performs about as well as a 780M, but at 3x the power draw, and 4-5 years earlier. No new drivers though.

(See post #53)

I think I like the shared memory more though, no need to be VRAM limited (I like Skyrim mods)
G777: So it's configurable up to 150 or 175 W; that's about the same as (actually it's more than) the max total power draw of a 2024 Zephyrus G14 laptop, and it's projected to achieve about the same performance GPU-wise. Even the lowest power configuration at 45 W is more than the standard for ultrabooks.

This product seems quite niche to me. I suppose you can make slightly more compact high-performance laptops by saving some space on the dGPU and the associated components, but given the high idle power consumption AMD's chiplet-based processors tend to have, battery life may not be great. It may also be used in the most premium desktop-replacement laptops just so they can have the best components, but then those would come with the downside of soldered RAM. I suspect that it will be too power-hungry to compete against the M3/M4 Max.
It is very possible to do ~60 W in a 13" laptop if the OEM is willing and competent.

Phawx on YT has shown (gaming only) the 8840/780M fitting into like ~17 W; I think it is possible for it to scale into a (very) large handheld/smaller laptop.

(I don't buy a large APU in an Xbox handheld though; there is a $ budget/limit as well as a power limit/budget)

But I'm not sure if all the extra cores will have high idle drain (i.e. impractical to use unless plugged in). And 60 W will be limiting for the CPU either way

I have some discussion/speculation in my 4070 Ti at ~70 W video, if you're interested
Cheeseball: I still think this is possibly going in the purported PS5 Pro. The embedded and shared LPDDR5X RAM kinda gives it away.

The specs are direct upgrades from the Zen 2/Oberon APU of the PS5 and Zen 2/Scarlett APU of the Series X too.

However, I too really wish this goes in some sort of ultra-portable or lightweight gaming laptop.


I wish, but anything above 100W TDP in a handheld is going to push it out of its category for sure.
It's too expensive / CPU is too "excessive"

Unless you're playing CPU-bound games (basically only sims/>60fps), the benefit of having a better CPU (bin/efficiency) is to use less power and give it to the GPU (think laptop or even handheld).

Instead, why not just give more power/$ (cores/bandwidth) to the GPU?

Also, as mentioned above, it is very possible to do ~60 W in a 13" laptop if the OEM is willing and competent.
Posted on Reply
#99
Minus Infinity
AnotherReader: I don't expect laptops featuring this to be cheap, but it wouldn't make any sense to pair this with a dGPU.
Indeed, that is what Dragon Range is for currently and Fire Range ultimately will be: high-end desktop parts with a piss-weak iGPU, modified for laptop usage and meant to be paired with a dGPU. Halo is not meant for high-end gaming laptops at all. It will allow laptops using it to be substantially thinner, since there's no need for a chassis that can handle the size of a dGPU and the extra thermals.

For me, as someone who doesn't really care too much about gaming on a laptop at all, I'm interested in what both Strix Point and Halo bring to the market. Zero interest in Fire Range.
Posted on Reply
#100
Lewzke
Put in 8 cores and a bigger GPU die; we need good integrated graphics. The low-end and midrange GPU segments should be all integrated in the future; there is no point anymore in keeping the lower-segment discrete graphics market.
Posted on Reply