Thursday, September 7th 2023

Intel's Meteor Lake CPU Breaks Ground with On-Package LPDDR5X Memory Integration

During a recent demonstration, Intel showcased its cutting-edge packaging technologies, EMIB (embedded multi-die interconnect bridge) and Foveros, unveiling the highly anticipated Meteor Lake processor with integrated LPDDR5X memory. This move appears to align with Apple's successful integration of LPDDR memory into its M1 and M2 chip packages. At the heart of Intel's presentation was the quad-tile Meteor Lake CPU, leveraging Foveros packaging for its chiplets and boasting 16 GB of Samsung's LPDDR5X-7500 memory. Although the specific CPU configuration remains undisclosed, the 16 GB of integrated memory delivers a remarkable peak bandwidth of 120 GB/s, outperforming traditional memory subsystems using DDR5-5200 or LPDDR5-6400.
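For reference, the arithmetic behind those figures is straightforward. Here is a minimal sketch, assuming a 128-bit memory interface (a bus width the quoted 120 GB/s number implies, not one Intel has confirmed):

```python
# Peak DRAM bandwidth = transfer rate (MT/s) x bytes moved per transfer.
# The 128-bit bus width is an assumption implied by the 120 GB/s figure.
def peak_bandwidth_gbs(transfer_rate_mts: int, bus_width_bits: int = 128) -> float:
    """Return theoretical peak bandwidth in GB/s."""
    return transfer_rate_mts * (bus_width_bits // 8) / 1000

for name, rate in [("LPDDR5X-7500", 7500), ("LPDDR5-6400", 6400), ("DDR5-5200", 5200)]:
    print(f"{name}: {peak_bandwidth_gbs(rate):6.1f} GB/s")
# LPDDR5X-7500:  120.0 GB/s
# LPDDR5-6400:   102.4 GB/s
# DDR5-5200:      83.2 GB/s
```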

Nevertheless, this approach comes with trade-offs, such as the potential for system-wide failure if a memory chip malfunctions, limited upgradeability in soldered-down configurations, and the need for more advanced cooling solutions to manage CPU and memory heat. While Apple pioneered on-package LPDDR memory integration in client CPUs, Intel has a history of using package-on-package DRAM with its Atom-branded CPUs for tablets and ultrathin laptops. While this approach simplifies manufacturing, enabling slimmer notebook designs, it curtails configuration flexibility. It remains to be seen whether big laptop makers such as Dell, HP, and Asus adopt this design in the coming months.
Sources: Intel, via Tom's Hardware

55 Comments on Intel's Meteor Lake CPU Breaks Ground with On-Package LPDDR5X Memory Integration

#26
bug
AssimilatorIt's Intel's iGPU, though, so bandwidth is irrelevant because nobody sane is going to try to use that POS for gaming. The only thing you use an Intel iGPU for is driving 2D displays, not rendering 3D graphics, given that it's barely capable of the former at the best of times.
Not for gaming, but there are a lot of other things a GPU can do these days. I used to fire up KDE with all sorts of effects enabled on Intel's IGPs over 10 years ago; they can absolutely handle all that. And these days there's software that will use OpenCL for various tasks, software like Photoshop or, if I'm not mistaken, even Excel. Not sure any of that would be bandwidth-starved, though.
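As an illustration of that GPGPU point, a minimal sketch (assuming the third-party pyopencl package and an installed OpenCL runtime) that lists the compute devices such software would target; an Intel iGPU shows up here like any other device:

```python
# Enumerate OpenCL platforms and devices; an Intel iGPU appears as a
# regular compute device that OpenCL-accelerated software can use.
# Requires the third-party pyopencl package and an OpenCL driver.
import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices():
        print(f"{platform.name}: {device.name}, "
              f"{device.max_compute_units} compute units")
```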
#27
Nattsun
AssimilatorIt's Intel's iGPU, though, so bandwidth is irrelevant because nobody sane is going to try to use that POS for gaming. The only thing you use an Intel iGPU for is driving 2D displays, not rendering 3D graphics, given that it's barely capable of the former at the best of times.
Xe 96 EU is nothing to scoff at, staying relatively close to (though not quite surpassing) AMD's capable 680M graphics.
#28
TheinsanegamerN
AssimilatorIt's Intel's iGPU, though, so bandwidth is irrelevant because nobody sane is going to try to use that POS for gaming. The only thing you use an Intel iGPU for is driving 2D displays, not rendering 3D graphics, given that it's barely capable of the former at the best of times.
You seem to have confused Intel HD Graphics with the Iris core.

The current 96 EU Iris is comparable to Radeon Vega 7 graphics in the 5000-series APUs.

A larger 128 EU second-gen design may encroach on 780M performance, and people absolutely ARE playing games on that.
#29
ToTTenTranz
Do these Meteor Lake models have stacked last level cache shared by GPU and CPU?
When is the release date?
#30
pressing on
ToTTenTranzWhen is the release date?
Intel is expected to make a full announcement about Meteor Lake and the release date in about two weeks from today.
chrcolukYeah slimmer equals less cooling, less durability, smaller battery, and probably other compromises as well.
All modern laptops look very slim to me, including the one in the Cinebench 2024 benchmark. I guess users accept, live with, or are unaware of the compromises caused by it.
#31
Assimilator
TheinsanegamerNYou seem to have confused Intel HD Graphics with the Iris core.

The current 96 EU Iris is comparable to Radeon Vega 7 graphics in the 5000-series APUs.

A larger 128 EU second-gen design may encroach on 780M performance, and people absolutely ARE playing games on that.
No, I didn't. My point was that in the small and light laptops that the majority of the world uses, you aren't going to get maybe-nearly-as-good-as-780M Iris; you're going to get the shitty "(U)HD Graphics" that are completely worthless for anything except 2D. It's only in the high-end models that you might get Iris, and guess what: most of those models come with much faster dGPUs anyway, making Iris (or 780M) irrelevant there.

Until Intel and AMD get their priorities straight and start bringing decent iGPUs to all laptop market segments, not just the most expensive models, their high-powered iGPUs may as well not exist.
#32
bug
ToTTenTranzDo these Meteor Lake models have stacked last level cache shared by GPU and CPU?
Probably not; it wouldn't make much sense, if any.
#33
JustBenching
AssimilatorNo, I didn't. My point was that in the small and light laptops that the majority of the world uses, you aren't going to get maybe-nearly-as-good-as-780M Iris; you're going to get the shitty "(U)HD Graphics" that are completely worthless for anything except 2D. It's only in the high-end models that you might get Iris, and guess what: most of those models come with much faster dGPUs anyway, making Iris (or 780M) irrelevant there.

Until Intel and AMD get their priorities straight and start bringing decent iGPUs to all laptop market segments, not just the most expensive models, their high-powered iGPUs may as well not exist.
Lots of small and light laptops have the 680M (and Iris), which is basically the same performance as the 780M.
#34
phanbuey
randomUserI don't have the newest and greatest Intel+Win laptop. But I have a 3-year-old ultrabook with an 8250U.
Usual scenario: I open the lid and it keeps on doing something, drawing 30-45W of power for like 10-15 mins. Then it sits at around 8W with frequent spikes up to 20W.

Now I also have an M1 MacBook Pro 13" (which is 1.5 years old now), which is the same form factor as the Win ultrabook. When I open the lid, it draws 3W. Some spikes up to 10 watts, but rarely. It pretty much always stays in the range of 3-5W power consumption.

The Intel laptop is considerably weaker, taking longer than the MacBook to do the same tasks.

I have become so sceptical of Intel and Windows-based laptops that I would be afraid to buy one now, seeing how Intel stagnated for decades doing the same chips over and over again.
The MacBook and ultrabook cost the same at the time I bought them.
But you're comparing a 3-year-old 14nm x86 processor (an ISA that runs the large majority of software ever written) to a specialized RISC-based 5nm M1 chip that only runs a very narrow range of applications, and gets terrible performance if it emulates x86...

They're not doing the 'same tasks' even though it may appear that way. You're comparing tasks you can do on your M1 to the same thing on your Wintel. There are so many things that 3-year-old x86 machine can run that are just not even available for the M1.

My iPad gets much better efficiency than my desktop doing tasks that the iPad can do. But I'm not running a dev Postgres server or an interface engine on it, pushing images into the cloud, or using any kind of serious office apps/data tools while also being able to interface with legacy business systems.
#35
londiste
AssimilatorNo, I didn't. My point was that in the small and light laptops that the majority of the world uses, you aren't going to get maybe-nearly-as-good-as-780M Iris; you're going to get the shitty "(U)HD Graphics" that are completely worthless for anything except 2D. It's only in the high-end models that you might get Iris, and guess what: most of those models come with much faster dGPUs anyway, making Iris (or 780M) irrelevant there.

Until Intel and AMD get their priorities straight and start bringing decent iGPUs to all laptop market segments, not just the most expensive models, their high-powered iGPUs may as well not exist.
Current-gen AMD Ryzen 7/9 have the 12 CU 780M; Ryzen 5 has the 8 CU 760M.
Current-gen Intel i9/i7 have 96 EU, i5 has 80 EU, and i3 has 64 EU.
#36
Luke357
AssimilatorNo, I didn't. My point was that in the small and light laptops that the majority of the world uses, you aren't going to get maybe-nearly-as-good-as-780M Iris; you're going to get the shitty "(U)HD Graphics" that are completely worthless for anything except 2D. It's only in the high-end models that you might get Iris, and guess what: most of those models come with much faster dGPUs anyway, making Iris (or 780M) irrelevant there.

Until Intel and AMD get their priorities straight and start bringing decent iGPUs to all laptop market segments, not just the most expensive models, their high-powered iGPUs may as well not exist.
For Intel, the i5 and above have more than serviceable iGPUs for playing some older AAAs or esports titles. The i3 models are less than ideal, but there are some things they can play. Newer AMD APUs are a similar story. And everything except Athlon and "Intel Processor" (Pentium spec) can play 3D games just fine.
#37
Assimilator
phanbueyBut you're comparing a 3-year-old 14nm x86 processor (an ISA that runs the large majority of software ever written) to a specialized RISC-based 5nm M1 chip that only runs a very narrow range of applications, and gets terrible performance if it emulates x86...

They're not doing the 'same tasks' even though it may appear that way. You're comparing tasks you can do on your M1 to the same thing on your Wintel. There are so many things that 3-year-old x86 machine can run that are just not even available for the M1.

My iPad gets much better efficiency than my desktop doing tasks that the iPad can do. But I'm not running a dev Postgres server or an interface engine on it, pushing images into the cloud, or using any kind of serious office apps/data tools while also being able to interface with legacy business systems.
x86's backwards compatibility, AKA its unique selling point, is only a necessity when you're dealing with crusty old apps written decades ago that you can't fix the source of. Most dev tools are up to date, so all they need to be Arm-compatible is a recompile to spit out Arm-specific binaries. At that stage you absolutely don't need x86, so why not go for something that weighs nothing and sips power, like a MacBook?
londisteCurrent-gen Intel i9/i7 have 96 EU, i5 has 80 EU, and i3 has 64 EU.
The 12th-gen Celerons and Pentiums drop down to 48 EUs - still, that's far better than their 11th-gen counterparts (e.g. the i7-11850H I'm typing this on has a mere 32 EUs). So Intel is moving in the right direction, but (again) only since AMD gave them a kick in the arse.
Luke357For Intel, the i5 and above have more than serviceable iGPUs for playing some older AAAs or esports titles.
"Serviceable" isn't good enough.
#38
Luke357
Assimilator"Serviceable" isn't good enough.
When I say serviceable I mean "good enough". If it can play Fortnite at medium/low (not mobile graphics but true PC level) @ 60 FPS 1080p, then that is "serviceable/good enough". Newer iGPUs are equivalent to a GTX 950 in a lot of cases, and while that's not quite fast enough for today's AAA titles, it can play lots of worthwhile games.
#39
FeelinFroggy
randomUserI have become so sceptical of Intel and Windows-based laptops that I would be afraid to buy one now, seeing how Intel stagnated for decades doing the same chips over and over again.
The MacBook and ultrabook cost the same at the time I bought them.
Well, I've been skeptical of MacBooks ever since they started charging $500 for a soldered SSD with 250 GB of storage. Nothing costs more than an Apple, and it is closed from end to end. They are for people who have a lot of money and don't know a thing about computers.
#40
persondb
Assimilatorx86's backwards compatibility, AKA its unique selling point, is only a necessity when you're dealing with crusty old apps written decades ago that you can't fix the source of. Most dev tools are up to date, so all they need to be Arm-compatible is a recompile to spit out Arm-specific binaries. At that stage you absolutely don't need x86, so why not go for something that weighs nothing and sips power, like a MacBook?
You would be surprised at how much of the world is still heavily dependent on software from the '80s/'90s. It's literally held together by spit and tape, and the tape ran out some years ago.
#41
Nanochip
FeelinFroggyWell, I've been skeptical of MacBooks ever since they started charging $500 for a soldered SSD with 250 GB of storage. Nothing costs more than an Apple, and it is closed from end to end. They are for people who have a lot of money and don't know a thing about computers.
Windows 11 is a hot piece of garbage. macOS might have its quirks, and it is certainly not without fault, but in my use case, it is far more stable than Windows.

Macs are expensive, yes, but so are some laptops, like ROG or Alienware or Razer gaming stuff. Also, many Windows laptops have soldered RAM and SSD storage, so it is not unique to the Macintosh. MacBooks used to be upgradable (RAM and storage), but that was in the Steve Jobs days.

Apple Silicon is super power efficient for the performance you get in return. When my MacBook is sitting idle or I'm just browsing the web or watching a video or movie, it consumes just 6W! And no more than 20-25W under load. I never hear the fan, ever. I can't say the same about my Windows-based laptop for work, which sounds like a jet engine from time to time, especially during video calls on Zoom or Teams.

And your complaint about soldered SSD storage is true, but one way around that is to add Thunderbolt-based NVMe storage. With how cheap SSDs are these days, and the declining cost of Thunderbolt/USB4 enclosures, it is easy to add terabytes of storage for far less than $500. You can pay $200 to $250 tops for 2 TB of Thunderbolt-based storage ($100 for the NVMe enclosure, and $100 to $150 for the 2 TB drive if based on PCIe 4, and even cheaper if based on PCIe 3).

Yes, Thunderbolt 3/4 is based on PCIe 3.0 x4 with a real-world top speed of 2800 MB/s. But that is more than fast enough when weighed against the astronomical cost of adding the same amount of internal storage. Finally, with non-Intel USB4 controllers now coming to market, USB4 NVMe enclosures such as the Zike Drive (which uses ASMedia's USB4 controller) top out at around 3800 MB/s. Not bad at all.
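For anyone checking the math on those speeds, a rough sketch of the link budget (the ~22 Gb/s usable PCIe-tunnel ceiling for Thunderbolt 3/4 is the commonly cited figure, an assumption not taken from this thread):

```python
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding.
pcie3_lane_gbit = 8 * 128 / 130           # ~7.88 Gb/s usable per lane
pcie3_x4_gbyte = 4 * pcie3_lane_gbit / 8  # ~3.94 GB/s for an x4 link

# Thunderbolt 3/4 reserve part of the 40 Gb/s link for display traffic;
# ~22 Gb/s is the commonly cited cap for PCIe data tunneling (assumption).
tb_tunnel_gbyte = 22 / 8                  # ~2.75 GB/s

print(f"PCIe 3.0 x4 raw:       {pcie3_x4_gbyte:.2f} GB/s")   # ~3800 MB/s USB4 ceiling
print(f"TB3/4 PCIe tunnel cap: {tb_tunnel_gbyte:.2f} GB/s")  # ~2800 MB/s real-world
```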
#42
Minus Infinity
TheinsanegamerNYou seem to have confused Intel HD Graphics with the Iris core.

The current 96 EU Iris is comparable to Radeon Vega 7 graphics in the 5000-series APUs.

A larger 128 EU second-gen design may encroach on 780M performance, and people absolutely ARE playing games on that.
Also, the Meteor Lake iGPU is said to be using an enhanced version of Alchemist that shares some Battlemage improvements, very much like Strix Point using RDNA 3.5. If Meteor Lake can deliver on the claims circulating the net about a 40% reduction in power compared to Raptor Lake, and with a much stronger iGPU, it could surpass Phoenix, at least for efficiency.
#43
JustBenching
NanochipWhen my MacBook is sitting idle or I'm just browsing the web or watching a video or movie, it consumes just 6W
WOW, that's insane. That's how much my desktop 12900K consumes browsing while streaming 2 videos, rofl
#44
londiste
AssimilatorThe 12th-gen Celerons and Pentiums drop down to 48 EUs - still, that's far better than their 11th-gen counterparts (e.g. the i7-11850H I'm typing this on has a mere 32 EUs). So Intel is moving in the right direction, but (again) only since AMD gave them a kick in the arse.
Intel actually seems to have a fairly reasonable lineup this time around (never thought I'd say that).

AMD also has their low-range stuff. The good parts are the Phoenix-based R9/R7/R5, and things get messy elsewhere.
In Ryzen 3 and some Ryzen 5 you can get the Radeon 610M (2 CU RDNA 2), 6 CU Vega, Radeon 660M (4 CU RDNA 2), or Radeon 740M (4 CU RDNA 3).
And there is also the entire high-end Dragon Range, which comes with the 610M on the IO die (on R9/R7/R5).
Assimilatorx86's backwards compatibility, AKA its unique selling point, is only a necessity when you're dealing with crusty old apps written decades ago that you can't fix the source of. Most dev tools are up to date, so all they need to be Arm-compatible is a recompile to spit out Arm-specific binaries. At that stage you absolutely don't need x86, so why not go for something that weighs nothing and sips power, like a MacBook?
This is a discussion for some other topic, but I wholeheartedly disagree. x86 (with its current extensions) is a remarkably stable and complete ISA, especially compared to ARM. ARM is getting there, but not quite yet, and it was a hot mess not so long ago. Plus there are all the proprietary concerns, even with an independent ARM and all that. The vision of software being actively (re)developed, or at least supported with newer platform/ISA-specific versions, is nice and cute... but not rooted in reality.
#45
tabascosauz
NanochipApple Silicon is super power efficient for the performance you get in return. When my MacBook is sitting idle or I'm just browsing the web or watching a video or movie, it consumes just 6W! And no more than 20-25W under load. I never hear the fan, ever. I can't say the same about my Windows-based laptop for work, which sounds like a jet engine from time to time, especially during video calls on Zoom or Teams.
Isn't 6W kinda... terrible? 5W or so is the benchmark for most iGPU-only ultrabooks these days, Intel or AMD. And 8-core 12 CU Rembrandt/Phoenix don't really exceed 20-25W anyway under load when on battery.

When I'm idle on my 6900HS G14 I'm also at 6W, and 8-9W in video playback and office tasks. And that's with a SO-DIMM slot, a dGPU (albeit MUXed), and a 144Hz screen at high brightness driving up the power.

Still, I'm not denying M1 and M2's merits; they are definitely efficient in day-to-day tasks.
Assimilator"Serviceable" isn't good enough.
80/96 EU Iris is fine; if Vega 8 is serviceable then Iris is too. But as with the 680M/780M, you need high-frequency LPDDR5 to maximize performance, and it's less common to find Intel laptops with that config.
#46
JustBenching
tabascosauzWhen I'm idle on my 6900HS G14 I'm also at 6W, and 8-9W in video playback and office tasks.
That's high. Do you have Asus services running? Try G-Helper and turn off CPU boost for the silent profile. Consumption should drop to around 4W while browsing.
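For anyone wanting to spot-check these wattage figures themselves, a minimal sketch for Linux (battery sysfs paths and attributes vary by machine; on Windows or macOS, monitoring tools report the equivalent reading):

```python
# Read battery discharge power from Linux sysfs. power_now is in
# microwatts where present; otherwise derive it from current_now (uA)
# and voltage_now (uV). BAT0 is the usual name but varies by machine.
from pathlib import Path

bat = Path("/sys/class/power_supply/BAT0")

if (bat / "power_now").exists():
    watts = int((bat / "power_now").read_text()) / 1e6
else:
    ua = int((bat / "current_now").read_text())
    uv = int((bat / "voltage_now").read_text())
    watts = ua * uv / 1e12

print(f"Battery draw: {watts:.1f} W")
```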
#47
claes
M2 should be around 5W at idle, 10W in its max config (M2 Ultra)
#48
chrcoluk
pressing onIntel is expected to make a full announcement about Meteor Lake and the release date in about two weeks from today.


All modern laptops look very slim to me, including the one in the Cinebench 2024 benchmark. I guess users accept, live with, or are unaware of the compromises caused by it.
Yep

My old laptops have screws to remove and easy access for drive and RAM swaps.

My friend's new slim laptop is like opening a Steam Deck: you have to pry the covers off (I had to remove the keyboard to access his SSD) and navigate to the components. His battery doesn't just detach externally like mine either; it's internal now.
#49
bug
chrcolukYep

My old laptops have screws to remove and easy access for drive and RAM swaps.

My friend's new slim laptop is like opening a Steam Deck: you have to pry the covers off (I had to remove the keyboard to access his SSD) and navigate to the components. His battery doesn't just detach externally like mine either; it's internal now.
It could be worse; your friend could have a unibody design :D

I still value being able to swap an SSD and stuff like that. But for most people, a laptop or ultrabook is just a fashion accessory. They wouldn't be able to tell an M.2 SSD from a RAM stick anyway.
#50
ToTTenTranz
bugProbably not; it wouldn't make much sense, if any.
So the Adamantine cache isn't coming to these models?

Aren't these Broadwell's "spiritual successors", APUs with unified iGPU+CPU LLC and a larger iGPU with the full Alchemist instruction set?