
AMD "Strix Halo" a Large Rectangular BGA Package the Size of an LGA1700 Processor

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
47,288 (7.53/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
Apparently the AMD "Strix Halo" processor is real, and it's large. The chip is designed to square off against the likes of the Apple M3 Pro and M3 Max, giving ultraportable notebooks powerful graphics performance. A chiplet-based processor, not unlike the desktop socketed "Raphael" and the mobile BGA "Dragon Range," "Strix Halo" consists of one or two CCDs containing CPU cores, wired to a large die that is technically the cIOD (client I/O die), but which also contains an oversized iGPU and an NPU. The point of "Strix Halo" is to eliminate the need for a performance-segment discrete GPU and conserve its PCB footprint.

According to leaks by Harukaze5719, a reliable source for AMD leaks, "Strix Halo" comes in a BGA package dubbed FP11, measuring 37.5 mm x 45 mm, which is significantly larger than the 25 mm x 40 mm FP8 BGA package that the regular "Strix Point," "Hawk Point," and "Phoenix" mobile processors are built on. It is even larger in area than the 40 mm x 40 mm FL1 BGA package of the "Dragon Range" and upcoming "Fire Range" gaming notebook processors. "Strix Halo" features one or two of the same 4 nm "Zen 5" CCDs found on the "Granite Ridge" desktop and "Fire Range" mobile processors, but connected to a much larger I/O die, as we mentioned.



At this point, the foundry node of the "Strix Halo" I/O die is not known, but it's unlikely to be the same 6 nm node as the cIOD that AMD has been using on its other client processors based on "Zen 4" and "Zen 5." It wouldn't surprise us if AMD is using the same 4 nm node for this I/O die as it did for "Phoenix." The main reason an advanced node is warranted is the oversized iGPU, which features a whopping 20 workgroup processors (WGPs), or 40 compute units (CUs), worth 2,560 stream processors, 80 AI accelerators, and 40 Ray accelerators. This iGPU is based on the latest RDNA 3.5 graphics architecture.
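These counts follow directly from the RDNA 3.x building blocks: one WGP holds two CUs, and each CU carries 64 stream processors, two AI accelerators, and one Ray accelerator. A quick sketch to verify the leaked figures:

```python
# Derive RDNA 3.x shader-engine counts from a WGP count.
# Ratios: 1 WGP = 2 CUs; 1 CU = 64 stream processors,
# 2 AI accelerators, 1 Ray accelerator.
def rdna3_counts(wgps: int) -> dict:
    cus = wgps * 2
    return {
        "compute_units": cus,
        "stream_processors": cus * 64,
        "ai_accelerators": cus * 2,
        "ray_accelerators": cus,
    }

# "Strix Halo" iGPU: 20 WGPs
print(rdna3_counts(20))
# {'compute_units': 40, 'stream_processors': 2560, 'ai_accelerators': 80, 'ray_accelerators': 40}
```

The same math applied to 8 WGPs gives 16 CUs and 1,024 stream processors, matching the figures quoted for the regular "Strix Point" iGPU.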

For perspective, the iGPU of the regular 4 nm "Strix Point" processor has 8 WGPs (16 CUs, 1,024 stream processors). Then there's the NPU. AMD is expected to carry over the same 50 TOPS-capable XDNA 2 NPU from the regular "Strix Point" to the I/O die of "Strix Halo," giving the processor Microsoft Copilot+ capabilities.

The memory interface of "Strix Halo" has long been a mystery. Logic dictates that it's a terrible idea to have 16 "Zen 5" CPU cores and a 40-compute-unit GPU share even a regular dual-channel DDR5 memory interface at the highest possible speeds, as both the CPU and iGPU would be severely bandwidth-starved. Then there's also the NPU to consider, as AI inferencing is a memory-sensitive application.

We have a theory that besides an LPDDR5X interface for the CPU cores, the "Strix Halo" package has wiring for discrete GDDR6 memory. Even a relatively narrow 128-bit GDDR6 memory interface running at 20 Gbps would give the iGPU 320 GB/s of memory bandwidth, which is plenty for performance-segment graphics. This would mean that besides the LPDDR5X chips, there would be four GDDR6 chips on the PCB. The iGPU even has 32 MB of on-die Infinity Cache, which agrees with our theory of a 128-bit GDDR6 interface exclusively for the iGPU.
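The bandwidth arithmetic behind that theory is straightforward: peak bandwidth is simply the bus width in bytes multiplied by the per-pin data rate. A quick sanity check:

```python
# Peak theoretical DRAM bandwidth in GB/s:
# (bus width in bits / 8 bits per byte) * per-pin data rate in Gbps.
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

# The 128-bit GDDR6 interface at 20 Gbps theorized above:
print(peak_bandwidth_gb_s(128, 20.0))  # 320.0
```

Four 32-bit GDDR6 chips together make up the assumed 128-bit bus, which is why exactly four packages would appear on the PCB.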

View at TechPowerUp Main Site | Source
 
Joined
Jan 3, 2021
Messages
3,586 (2.48/day)
Location
Slovenia
Processor i5-6600K
Motherboard Asus Z170A
Cooling some cheap Cooler Master Hyper 103 or similar
Memory 16GB DDR4-2400
Video Card(s) IGP
Storage Samsung 850 EVO 250GB
Display(s) 2x Oldell 24" 1920x1200
Case Bitfenix Nova white windowless non-mesh
Audio Device(s) E-mu 1212m PCI
Power Supply Seasonic G-360
Mouse Logitech Marble trackball, never had a mouse
Keyboard Key Tronic KT2000, no Win key because 1994
Software Oldwin
This is a GPU with a CPU hanging off the side. Could this concept be usable for building discrete GPUs too? "Reverse accelerated processing unit"?
 
Joined
Dec 6, 2022
Messages
437 (0.59/day)
Location
NYC
System Name GameStation
Processor AMD R5 5600X
Motherboard Gigabyte B550
Cooling Artic Freezer II 120
Memory 16 GB
Video Card(s) Sapphire Pulse 7900 XTX
Storage 2 TB SSD
Case Cooler Master Elite 120
Could this be some variation of the chip used in the current Xbox/PS5 consoles?

And since Ngreedia is getting away with murder (pricing wise) with the halo 4090, I wonder if AMD is going for the same “motif” given the Halo word in the name?

I hope it's not strictly a gaming device and is instead also used in more professional segments/devices.
Man I want this in a desktop socket.
Or at the very least, in a mini PC, from Minisforum, for example.
 
Joined
Jan 3, 2021
Messages
3,586 (2.48/day)
Location
Slovenia
Processor i5-6600K
Motherboard Asus Z170A
Cooling some cheap Cooler Master Hyper 103 or similar
Memory 16GB DDR4-2400
Video Card(s) IGP
Storage Samsung 850 EVO 250GB
Display(s) 2x Oldell 24" 1920x1200
Case Bitfenix Nova white windowless non-mesh
Audio Device(s) E-mu 1212m PCI
Power Supply Seasonic G-360
Mouse Logitech Marble trackball, never had a mouse
Keyboard Key Tronic KT2000, no Win key because 1994
Software Oldwin
Man I want this in a desktop socket.

Also earlier rumors said the memory was DDR5 256bit.
Only the Threadripper socket could be a candidate for that.
 
Joined
Dec 12, 2016
Messages
1,939 (0.66/day)
Man I want this in a desktop socket.

Also earlier rumors said the memory was DDR5 256bit.
The rumors still say 256-bit LPDDR5X-8000. Not sure why TPU now says 128-bit because of the socket size.

Edit: Oh, the article is saying 128-bit GDDR6, not LPDDR5X. Now that would be cool.
 
Joined
Feb 20, 2019
Messages
8,331 (3.91/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
16-core Zen5 with GPU performance in the ballpark of a 6750XT? (laptop TDP limitations will likely offset any RDNA 3.5 advantages, since RDNA3.5 is just RDNA3 with an NPU bolt-on, and RDNA3 gave us very little IPC over RDNA2!)

That's still a very CPU-weighted config for gamers who honestly won't want to overpay for 8-10 cores they'll never use, and most likely the only configuration that will include the full-fat 40CU GPU component.

The sensible Ryzen7 or Ryzen5 variants that have 6-8 cores games need will likely come with crippled 32CU or 28CU GPUs in them which is acceptable, but not worth much - 6600S (28CU RDNA2 dGPU) laptops were occupying the sub-$1000 entry-level bargain bin 18 months ago. They're fine for casual gaming and esports but hardly what I'd call bleeding edge and already struggling in plenty of modern titles at 1080p.

Non-gamers likely aren't interested because no matter how good AMD's GPU compute performance is, they don't use CUDA which is a massive gatekeeper for the entire productivity industry, and 4060 laptops are cheap, even available in thin-and-lights that pull a mere 120W from the wall outlet. I don't see Strix Halo competing well with those, especially since the slides here indicate a potential 175W power draw so that's definitely not going to be a thin-and-light laptop.
 
Joined
Dec 12, 2016
Messages
1,939 (0.66/day)
16-core Zen5 with GPU performance in the ballpark of a 6750XT? (laptop TDP limitations will likely offset any RDNA 3.5 advantages, since RDNA3.5 is just RDNA3 with an NPU bolt-on, and RDNA3 gave us very little IPC over RDNA2!)

That's still a very CPU-weighted config for gamers who honestly won't want to overpay for 8-10 cores they'll never use, and most likely the only configuration that will include the full-fat 40CU GPU component.

The sensible Ryzen7 or Ryzen5 variants that have 6-8 cores games need will likely come with crippled 32CU or 28CU GPUs in them which is acceptable, but not worth much - 6600S (28CU RDNA2 dGPU) laptops were occupying the sub-$1000 entry-level bargain bin 18 months ago. They're fine for casual gaming and esports but hardly what I'd call bleeding edge and already struggling in plenty of modern titles at 1080p.

Non-gamers likely aren't interested because no matter how good AMD's GPU compute performance is, they don't use CUDA which is a massive gatekeeper for the entire productivity industry, and 4060 laptops are cheap, even available in thin-and-lights that pull a mere 120W from the wall outlet. I don't see Strix Halo competing well with those, especially since the slides here indicate a potential 175W power draw so that's definitely not going to be a thin-and-light laptop.
You're picking and choosing the markets for the failure of this product by meandering around DIY desktop gamers and CUDA workstation users. Of course, these two user groups won't buy Strix Halo. But DIY desktop gamers represent < 1% of the desktop/laptop market. CUDA workstation users are a different market altogether and should never be mentioned here.

Let's see how Strix Halo performs and which products are built around it. I have my own ideas of how a fat CPU/GPU SoC can be used but the market is way more creative.
 
Joined
Sep 1, 2020
Messages
2,387 (1.52/day)
Location
Bulgaria
Yes well, already many myths and legends have spread about this product. There must already be misled people who would drop thousands of dollars to own it because they read someone's comments. And what if the product turns out to be mediocre?
 

mikesg

New Member
Joined
Jun 1, 2024
Messages
26 (0.13/day)
Yes well, already many myths and legends have spread about this product. There must already be misled people who would drop thousands of dollars to own it because they read someone's comments. And what if the product turns out to be mediocre?

Strix Point (RDNA 3.5) / 890M with 16 CUs (in a GPD Duo) has scored in the region of an RTX 3050 in Time Spy.

With more than double the CUs and TDP headroom, it's easily a 4050/4060/5050 competitor.

Strix Point is more for thin laptops/mini PCs. Strix Halo would suit desktops a lot more. Within a day you'd be shopping for the biggest cooler.
 
Joined
Sep 8, 2009
Messages
1,077 (0.19/day)
Location
Porto
Processor Ryzen 9 5900X
Motherboard Gigabyte X570 Aorus Pro
Cooling AiO 240mm
Memory 2x 32GB Kingston Fury Beast 3600MHz CL18
Video Card(s) Radeon RX 6900XT Reference (amd.com)
Storage O.S.: 256GB SATA | 2x 1TB SanDisk SSD SATA Data | Games: 1TB Samsung 970 Evo
Display(s) LG 34" UWQHD
Audio Device(s) X-Fi XtremeMusic + Gigaworks SB750 7.1 THX
Power Supply XFX 850W
Mouse Logitech G502 Wireless
VR HMD Lenovo Explorer
Software Windows 10 64bit
We have a theory that besides an LPDDR5X interface for the CPU cores, the "Strix Halo" package has wiring for discrete GDDR6 memory. Even a relatively narrow 128-bit GDDR6 memory interface running at 20 Gbps would give the iGPU 320 GB/s of memory bandwidth, which is plenty for performance-segment graphics. This would mean that besides the LPDDR5X chips, there would be four GDDR6 chips on the PCB. The iGPU even has 32 MB of on-die Infinity Cache, which agrees with our theory of a 128-bit GDDR6 interface exclusively for the iGPU.

Strix Halo's memory controller has been shown to be 256-bit LPDDR5X since the first leak. Probably LPDDR5X-8000, so 256 GB/s unified.

There's no GDDR6 there, but there's 32MB Infinity Cache for the iGPU.
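The 256 GB/s figure quoted here checks out with the same simple bandwidth arithmetic (bus width in bytes times data rate), assuming a 256-bit bus at LPDDR5X-8000 speeds:

```python
# 256-bit LPDDR5X-8000: peak unified bandwidth sanity check.
bus_bytes = 256 // 8       # 256-bit bus moves 32 bytes per transfer
data_rate_gt_s = 8.0       # LPDDR5X-8000 = 8000 MT/s = 8 GT/s
print(bus_bytes * data_rate_gt_s)  # 256.0 (GB/s, shared by CPU, iGPU, and NPU)
```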


Man I want this in a desktop socket.
Why? It'll probably be cheaper and faster to get a 12/16-core Ryzen 9 with a discrete GPU. Especially if you wait for RDNA4.


With more than double the CUs and TDP headroom, it's easily a 4050/4060/5050 competitor.
It's expected to have RTX 4070 Laptop performance (the desktop RTX 4060 Ti chip) but without the 8 GB VRAM limitation.
In fact, Strix Halo is probably only going to appear in expensive laptops with 32 GB of LPDDR5X or more, as there have been shipping manifests with Strix Halo test units carrying 128 GB of RAM.
 
Joined
Dec 6, 2022
Messages
437 (0.59/day)
Location
NYC
System Name GameStation
Processor AMD R5 5600X
Motherboard Gigabyte B550
Cooling Artic Freezer II 120
Memory 16 GB
Video Card(s) Sapphire Pulse 7900 XTX
Storage 2 TB SSD
Case Cooler Master Elite 120
Mediocre how?
Simple, by being an AMD product.
/s

I simply don't understand the automatic hate that all AMD products get, accompanied by false statements.
 
Joined
Aug 3, 2006
Messages
141 (0.02/day)
Location
Austin, TX
Processor Ryzen 6900HX
Memory 32 GB DDR4LP
Video Card(s) Radeon 6800m
Display(s) LG C3 42''
Software Windows 11 home premium
As much as I am hyped for Strix Halo, it does make me wonder: why not make an APU that is 60 CUs and 12 cores? Or 80 CUs and 8 cores? Do we really need 16 cores/32 threads for gaming? They could easily market a 'gamer 3D APU' that is 8 cores, 60 CUs, with 3D cache and Infinity Cache. The mini PC market would go absolutely bonkers.
This is a GPU with a CPU hanging off the side. Could this concept be usable for building discrete GPUs too? "Reverse accelerated processing unit"?
 
Joined
Feb 12, 2021
Messages
220 (0.16/day)
As much as I am hyped for Strix Halo, it does make me wonder: why not make an APU that is 60 CUs and 12 cores? Or 80 CUs and 8 cores? Do we really need 16 cores/32 threads for gaming? They could easily market a 'gamer 3D APU' that is 8 cores, 60 CUs, with 3D cache and Infinity Cache. The mini PC market would go absolutely bonkers.
The first answer is chip size and power consumption, and therefore cooling. As noted in this article, the package is already massive, and the die that contains the graphics already dwarfs the chiplets that contain the CPU cores, so you get an idea of how little space you would save: nowhere near enough to add that much more in the way of GPU, which in turn would need more memory bandwidth.

Put all of this together and you soon see why AMD ended up with the design it did: one that isn't going to be insanely expensive, so it will actually have mass-market appeal, while doing a solid job as a product designed specifically to not need a discrete GPU alongside it, thus eliminating the sale of an NVIDIA GPU, and beating anything Intel has to offer.

This is looking like a good product to me, and much as I had assumed already, leaks are suggesting more and more that this is the first in a whole line of products. The 256-bit-bus laptop/desktop CPUs/APUs are just over the horizon, and IMHO this is why Strix Halo is the product I am most interested in seeing this year: the very promising use cases, what OEMs will do with it, what mini desktop PCs will look like on the inside, what the public wants to see from version 2, and ultimately where AMD decides to take versions 2, 3, etc.

I almost forgot to say that there will be variants with fewer than 16 cores and fewer than 40 CUs of GPU. There are already lots of people calling for a single-CCD version, ideally with 3D V-Cache if that is possible, and a fully enabled 40 CU GPU, because that would be fantastic for gaming; others are looking for a fully enabled (16-core, 40 CU) Strix Halo laptop with 256 GB of RAM because they simply need it.

Also remember that this is essentially a new market and AMD has some choices to make; no doubt they are even reading comments like this in forums to get an idea of what people want and expect. As much as people (myself included) often laugh at marketing, it is important to launch the right product in the right segment at the right price, keeping customers happy and buying by providing the products people actually want, which are rarely the top models.
 
Last edited:
Joined
May 7, 2020
Messages
145 (0.09/day)
As much as I am hyped for Strix Halo, it does make me wonder: why not make an APU that is 60 CUs and 12 cores? Or 80 CUs and 8 cores? Do we really need 16 cores/32 threads for gaming? They could easily market a 'gamer 3D APU' that is 8 cores, 60 CUs, with 3D cache and Infinity Cache. The mini PC market would go absolutely bonkers.
If they put only one CCD, the chip would be less mechanically robust; corners are weak spots, and while a plain rectangle has 4 corners, a big I/O die adjacent to a single CCD would have 6.
 
Joined
Feb 20, 2019
Messages
8,331 (3.91/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
You're picking and choosing the markets for the failure of this product by meandering around DIY desktop gamers and CUDA workstation users. Of course, these two user groups won't buy Strix Halo. But DIY desktop gamers represent < 1% of the desktop/laptop market. CUDA workstation users are a different market all together and should never be mentioned here.

Let's see how Strix Halo performs and which products are built around it. I have my own ideas of how a fat CPU/GPU SoC can be used but the market is way more creative.
My experience and exposure is laptop gamers, and laptop creatives who work with Adobe PS/Premiere, DaVinci, and the many various AI tools now popping up that need CUDA. I'm sure there are many more use cases than that, but every single one of those demographics will be better off with an NVIDIA GPU from either a performance-per-watt or API-compatibility perspective. If you need multi-threaded CPU compute but not CUDA, then it's a demographic I'm not familiar with; not to say that it doesn't exist. The music industry is one, but it's more concerned with DPC latency than with multi-threaded workloads needing a Ryzen 9 instead of a Ryzen 5 or 7.

The number of things that use OpenCL to any success these days is dwindling by the day, and ROCm support is a noble attempt but its influence so far on the software market is somewhere between "absolute zero" and "too small to measure". It's why Nvidia is now the most valuable company on earth, bar none. I certainly don't like that fact, but it's the undeniable truth.
 
Joined
May 13, 2008
Messages
762 (0.13/day)
System Name HTPC whhaaaat?
Processor 2600k @ 4500mhz
Motherboard Asus Maximus IV gene-z gen3
Cooling Noctua NH-C14
Memory Gskill Ripjaw 2x4gb
Video Card(s) EVGA 1080 FTW @ 2037/11016
Storage 2x512GB MX100/1x Agility 3 128gb ssds, Seagate 3TB HDD
Display(s) Vizio P 65'' 4k tv
Case Lian Li pc-c50b
Audio Device(s) Denon 3311
Power Supply Corsair 620HX
TPU's speculative analysis of the 128-bit/128-bit split between DDR and GDDR is quite astute, and while I've never seen it mentioned before, it makes a ton of sense. I hope this turns out to be the case.

I kept wondering how this product makes any sense, given the bw is so low and the cache does not appear to have changed versus RDNA3.

FWIW, I really like the sporadic "TPU has a theory" last paragraph inclusion that has been included in some news articles as of late.

If it were me, I too would include a last paragraph with an editorial (perhaps italicized and/or with an asterisk). I think this is very good writing that encompasses both available info and what we know it needs.

Thanks for the personal insight (along with the news).

Don't be afraid to keep writing the correctly-compartmentalized editorials! This is, ofc, (a very large part of) what makes TPU special.


TLDR: Keep up the good work, btarunr (in both regards to news and analysis), and don't be afraid to continue to show us that our News Guy actually has an innate understanding of the stuff they are reporting.

:lovetpu:


Edit: was using 7600 math (2048sp) in former calculation (~2.8ghz), not 2560sp. Wasn't completely awake yet when I wrote that (or even as I write this, for that matter).

Erased that JIC anyone caught it. :laugh:

Still, the GDDR/LPDDR theoretical split for bw makes sense wrt VERY efficient clocks at perfect yield (2560sp) or higher-clocked (but still on the voltage/yield curve) using a lesser-enabled part (such as 2048sp).

I really should have had another cup of coffee before writing anything; apologies for my blunder. :oops:
 
Last edited:
Joined
Apr 12, 2013
Messages
7,563 (1.77/day)
TPU's speculative analysis of the 128-bit/128-bit split between DDR and GDDR is quite astute, and while I've never seen it mentioned before, it makes a ton of sense. I hope this turns out to be the case.
What do you mean by split? It has 256bit LPDDR5x support & "possibly" separate support for GDDR6 ~ you can't split memory interfaces like that IIRC.
 
Joined
Nov 26, 2021
Messages
1,702 (1.52/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
What do you mean by split? It has 256bit LPDDR5x support & "possibly" separate support for GDDR6 ~ you can't split memory interfaces like that IIRC.
Moreover, LPDDR5X, along with HBM3, is the most power efficient DRAM type. Opting for GDDR6 would increase system power consumption without a commensurate performance increase.
 
Joined
Nov 14, 2021
Messages
138 (0.12/day)
And it still has infinity cache. That is neat.

One thing I have been curious about, with this path AMD has been going down. Why have GPU AI accelerators and a dedicated NPU? Is the space the GPU accelerators take up mostly insignificant? What kind of capability overlap do they have? What makes them unique?

RDNA 3.5 is being used in packages like this and won't be offered on a dedicated discrete card where those AI accelerators make sense, as you'd likely not have an NPU. It seems like if your package has an NPU, you could have designed RDNA 3.5 to not have those AI accelerators at all. But AMD chose to leave them there for a reason. I wonder what that reasoning is.
 
Joined
May 13, 2008
Messages
762 (0.13/day)
System Name HTPC whhaaaat?
Processor 2600k @ 4500mhz
Motherboard Asus Maximus IV gene-z gen3
Cooling Noctua NH-C14
Memory Gskill Ripjaw 2x4gb
Video Card(s) EVGA 1080 FTW @ 2037/11016
Storage 2x512GB MX100/1x Agility 3 128gb ssds, Seagate 3TB HDD
Display(s) Vizio P 65'' 4k tv
Case Lian Li pc-c50b
Audio Device(s) Denon 3311
Power Supply Corsair 620HX
What do you mean by split? It has 256bit LPDDR5x support & "possibly" separate support for GDDR6 ~ you can't split memory interfaces like that IIRC.

I apologize. You're correct...I don't know what I was thinking. Again, I spoke before my brain was fully firing on this one.

Just forget I said anything.

I'm feeling pretty foolish at the moment (outside the commending the observation of possible GDDR6).

I usually think about things a lot before I post; I don't know what I was thinking with that one. I would just delete it, but I won't hide that everyone makes mistakes, myself included.
 
Joined
Apr 12, 2013
Messages
7,563 (1.77/day)
That's just fine, it's all speculation after all ~ as for the OP I don't really expect GDDR6 support for the same chips going into laptops! Although it is possible.

AMD would probably want to minimize the die size & having multiple memory interfaces supported will do the opposite, the only way GDDR6 support is plausible is if any of these chips go into a console or something!
 