
Intel's Meteor Lake CPU Breaks Ground with On-Package LPDDR5X Memory Integration

"While this approach simplifies manufacturing, enabling slimmer notebook designs"

Unnecessary; it's better to have adequate cooling and stable performance.
Yeah, slimmer equals less cooling, less durability, a smaller battery, and probably other compromises as well.
 
It's Intel's iGPU, though, so bandwidth is irrelevant because nobody sane is going to try to use that POS for gaming. The only thing you use an Intel iGPU for is driving 2D displays, not rendering 3D graphics, given that it's barely capable of the former at the best of times.
Not for gaming, but there are a lot of other things a GPU can do these days. I used to fire up KDE with all sorts of effects enabled on Intel's IGPs over 10 years ago; they can absolutely handle all that. And these days, there's software that will use OpenCL for various tasks, software like Photoshop or, if I'm not mistaken, even Excel. Not sure any of that would be bandwidth starved, though.
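If you're curious what your iGPU actually exposes to that kind of software, here's a minimal sketch that enumerates OpenCL platforms and devices (assuming a Linux-style OpenCL runtime/ICD is installed; build with something like gcc list_cl.c -lOpenCL):

```c
/* List OpenCL platforms and devices, e.g. to confirm an Intel iGPU
 * shows up as a compute device. Assumes an OpenCL ICD is installed
 * and the Linux header path (macOS uses <OpenCL/opencl.h>). */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platforms[8];
    cl_uint nplat = 0;
    if (clGetPlatformIDs(8, platforms, &nplat) != CL_SUCCESS || nplat == 0) {
        fprintf(stderr, "No OpenCL platforms found\n");
        return 1;
    }
    for (cl_uint p = 0; p < nplat; p++) {
        char pname[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof pname, pname, NULL);
        printf("Platform: %s\n", pname);

        cl_device_id devices[8];
        cl_uint ndev = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devices, &ndev) != CL_SUCCESS)
            continue;
        for (cl_uint d = 0; d < ndev; d++) {
            char dname[256];
            cl_ulong mem = 0;
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof dname, dname, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_GLOBAL_MEM_SIZE, sizeof mem, &mem, NULL);
            printf("  Device: %s (%llu MB global memory)\n",
                   dname, (unsigned long long)(mem >> 20));
        }
    }
    return 0;
}
```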
 
It's Intel's iGPU, though, so bandwidth is irrelevant because nobody sane is going to try to use that POS for gaming. The only thing you use an Intel iGPU for is driving 2D displays, not rendering 3D graphics, given that it's barely capable of the former at the best of times.
Xe 96EU is nothing to scoff at, staying relatively close to (though not quite surpassing) AMD's capable 680M graphics.
 
It's Intel's iGPU, though, so bandwidth is irrelevant because nobody sane is going to try to use that POS for gaming. The only thing you use an Intel iGPU for is driving 2D displays, not rendering 3D graphics, given that it's barely capable of the former at the best of times.
You seem to have confused Intel HD graphics with the Iris core.

The current 96 EU Iris is comparable to the Radeon Vega 7 graphics in the 5000-series APUs.

A larger 128 EU second-gen design may encroach on 780M performance, and people absolutely ARE playing games on that.
 
Do these Meteor Lake models have a stacked last-level cache shared by the GPU and CPU?
When is the release date?
 
When is the release date?
Intel is expected to make a full announcement about Meteor Lake and the release date about two weeks from today.

Yeah slimmer equals less cooling, less durability, smaller battery, and probably other compromises as well.
All modern laptops look very slim to me, including the one in the Cinebench 2024 benchmark. I guess users accept, live with, or are unaware of the compromises this causes.
 
You seem to have confused Intel HD graphics with the Iris core.

The current 96 EU Iris is comparable to the Radeon Vega 7 graphics in the 5000-series APUs.

A larger 128 EU second-gen design may encroach on 780M performance, and people absolutely ARE playing games on that.
No I didn't. My point was that in the small and light laptops that the majority of the world uses, you aren't going to get maybe-nearly-as-good-as-780M Iris; you're going to get the shitty "(U)HD Graphics" that are completely worthless for anything except 2D. It's only in the high-end models that you might get Iris and guess what, most of those models come with much faster dGPUs anyway, making Iris (or 780M) irrelevant there.

Until Intel and AMD get their priorities straight and start bringing decent iGPUs to all laptop market segments, not just the most expensive models, their high-powered iGPUs may as well not exist.
 
No I didn't. My point was that in the small and light laptops that the majority of the world uses, you aren't going to get maybe-nearly-as-good-as-780M Iris; you're going to get the shitty "(U)HD Graphics" that are completely worthless for anything except 2D. It's only in the high-end models that you might get Iris and guess what, most of those models come with much faster dGPUs anyway, making Iris (or 780M) irrelevant there.

Until Intel and AMD get their priorities straight and start bringing decent iGPUs to all laptop market segments, not just the most expensive models, their high-powered iGPUs may as well not exist.
Lots of small and light laptops have the 680M (and Iris), which is basically the same performance as the 780M.
 
I don't have the newest and greatest Intel+Windows laptop, but I have a 3-year-old ultrabook with an 8250U.
Usual scenario: I open the lid and it keeps doing something, drawing 30-45W of power for like 10-15 minutes. Then it sits at around 8W with frequent spikes up to 20W.

Now I also have an M1 MacBook Pro 13" (which is 1.5 years old now), the same form factor as the Windows ultrabook. When I open the lid, it draws 3W, with some spikes up to 10 watts, but rarely. It pretty much always stays in the 3-5W range.
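(For anyone wondering where numbers like these come from: tools like powertop read the same battery sensor. A rough sketch of sampling it yourself on Linux, assuming the battery exposes power_now under /sys/class/power_supply/BAT0; the node name varies by machine, and some batteries only expose current_now/voltage_now instead:)

```c
/* Rough sketch: sample battery discharge power on Linux via sysfs.
 * Assumes the battery reports power_now in microwatts at BAT0. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    for (;;) {
        FILE *f = fopen("/sys/class/power_supply/BAT0/power_now", "r");
        if (!f) { perror("power_now"); return 1; }
        long uw = 0;
        fscanf(f, "%ld", &uw);
        fclose(f);
        printf("%.1f W\n", uw / 1e6);  /* microwatts -> watts */
        sleep(2);                      /* sample every 2 seconds */
    }
}
```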

The Intel laptop is considerably weaker, taking longer than the MacBook to do the same tasks.

I have become so skeptical of Intel and Windows-based laptops that I would be afraid to buy one now, seeing how Intel stagnated for decades making the same chips over and over again.
The MacBook and the ultrabook cost the same at the time I bought them.

But you're comparing a 3-year-old x86 processor on 14nm, an ISA that runs the large majority of software ever written, to a specialized RISC-based 5nm M1 chip that only runs a very narrow range of applications, and which gets terrible performance if it emulates x86...

They're not doing the 'same tasks' even though it may appear that way. You're comparing tasks you can do on your M1 to the same thing on your Wintel machine. There are so many things that 3-year-old x86 machine can run that just aren't even available for the M1.

My iPad gets much better efficiency than my desktop doing tasks that the iPad can do. But I'm not running a dev Postgres server or an interface engine on it, pushing images into the cloud, or using any kind of serious office apps/data tools while also being able to interface with legacy business systems.
 
No I didn't. My point was that in the small and light laptops that the majority of the world uses, you aren't going to get maybe-nearly-as-good-as-780M Iris; you're going to get the shitty "(U)HD Graphics" that are completely worthless for anything except 2D. It's only in the high-end models that you might get Iris and guess what, most of those models come with much faster dGPUs anyway, making Iris (or 780M) irrelevant there.

Until Intel and AMD get their priorities straight and start bringing decent iGPUs to all laptop market segments, not just the most expensive models, their high-powered iGPUs may as well not exist.
Current-gen AMD Ryzen 7/9 have the 12 CU 780M; Ryzen 5 has the 8 CU 760M.
Current-gen Intel i9/i7 have 96 EU, i5 have 80 EU, and i3 have 64 EU.
 
No I didn't. My point was that in the small and light laptops that the majority of the world uses, you aren't going to get maybe-nearly-as-good-as-780M Iris; you're going to get the shitty "(U)HD Graphics" that are completely worthless for anything except 2D. It's only in the high-end models that you might get Iris and guess what, most of those models come with much faster dGPUs anyway, making Iris (or 780M) irrelevant there.

Until Intel and AMD get their priorities straight and start bringing decent iGPUs to all laptop market segments, not just the most expensive models, their high-powered iGPUs may as well not exist.
For Intel, the i5 and above have more than serviceable iGPUs for playing some older AAAs or esports titles. The i3 models are less than ideal, but there are some things they can play. Newer AMD APUs are a similar story. And everything except Athlon and "Intel Processor" (Pentium spec) can play 3D games just fine.
 
But you're comparing a 3-year-old x86 processor on 14nm, an ISA that runs the large majority of software ever written, to a specialized RISC-based 5nm M1 chip that only runs a very narrow range of applications, and which gets terrible performance if it emulates x86...

They're not doing the 'same tasks' even though it may appear that way. You're comparing tasks you can do on your M1 to the same thing on your Wintel machine. There are so many things that 3-year-old x86 machine can run that just aren't even available for the M1.

My iPad gets much better efficiency than my desktop doing tasks that the iPad can do. But I'm not running a dev Postgres server or an interface engine on it, pushing images into the cloud, or using any kind of serious office apps/data tools while also being able to interface with legacy business systems.
x86's backwards compatibility, AKA its unique selling point, is only a necessity when you're dealing with crusty old apps written decades ago that you can't fix the source of. Most dev tools are up to date, so all they need to be Arm-compatible is a recompile to spit out Arm-specific binaries. At that stage you absolutely don't need x86, so why not go for something that weighs nothing and sips power, like a MacBook?
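To make the "just a recompile" point concrete, here's a trivial sketch: portable C with nothing ISA-specific in it, where targeting Arm is purely a matter of which toolchain you invoke (the toolchain names below are examples and vary by system):

```c
/* hello.c: nothing here is x86-specific, so targeting Arm really is
 * just a recompile. Example invocations (names vary by system):
 *   macOS:  clang -arch arm64 hello.c -o hello_arm64
 *   Linux:  aarch64-linux-gnu-gcc hello.c -o hello_arm64
 *   native: cc hello.c -o hello */
#include <stdio.h>

int main(void) {
#if defined(__aarch64__) || defined(__arm64__)
    const char *isa = "Arm64";
#elif defined(__x86_64__)
    const char *isa = "x86-64";
#else
    const char *isa = "some other ISA";
#endif
    printf("Hello from %s\n", isa);
    return 0;
}
```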

Current-gen Intel i9/i7 have 96 EU, i5 have 80 EU, and i3 have 64 EU.
The 12th-gen Celerons and Pentiums drop down to 48 EUs - still, that's far better than their 11th-gen counterparts (e.g. the i7-11850H I'm typing this on has a mere 32 EUs). So Intel is moving in the right direction, but (again) only since AMD gave them a kick in the arse.

For Intel, the i5 and above have more than serviceable iGPUs for playing some older AAAs or esports titles.
"Serviceable" isn't good enough.
 
"Serviceable" isn't good enough.
When I say serviceable I mean "good enough". If it can play Fortnite at medium/low (not mobile graphics but true PC level) at 60 FPS 1080p, then that is "serviceable/good enough". Newer iGPUs are equivalent to a GTX 950 in a lot of cases, and while that's not quite fast enough for today's AAA titles, they can play lots of worthwhile games.
 
I have become so skeptical of Intel and Windows-based laptops that I would be afraid to buy one now, seeing how Intel stagnated for decades making the same chips over and over again.
The MacBook and the ultrabook cost the same at the time I bought them.
Well, I've been skeptical of MacBooks ever since they started charging $500 for a soldered SSD with 250GB of storage. Nothing costs more than an Apple, and it is closed from end to end. They are for people who have a lot of money and don't know a thing about computers.
 
x86's backwards compatibility, AKA its unique selling point, is only a necessity when you're dealing with crusty old apps written decades ago that you can't fix the source of. Most dev tools are up to date, so all they need to be Arm-compatible is a recompile to spit out Arm-specific binaries. At that stage you absolutely don't need x86, so why not go for something that weighs nothing and sips power, like a MacBook?
You would be surprised at how much of the world is still heavily dependent on software from the 80s/90s. It's literally held together by spit and tape, and the tape ran out some years ago.
 
Well, I've been skeptical of MacBooks ever since they started charging $500 for a soldered SSD with 250GB of storage. Nothing costs more than an Apple, and it is closed from end to end. They are for people who have a lot of money and don't know a thing about computers.
Windows 11 is a hot piece of garbage. macOS might have its quirks, and it is certainly not without fault, but in my use case, it is far more stable than Windows.

Macs are expensive, yes, but so are some laptops, like gaming ROG or Alienware or Razer stuff. Also, many Windows laptops have soldered RAM and SSD storage, so it is not unique to the Macintosh. MacBooks used to be upgradable (RAM and storage), but that was in the Steve Jobs days.

Apple Silicon is super power efficient for the performance you get in return. When my MacBook is sitting idle or I'm just browsing the web or watching a video or movie, it consumes just 6W! And no more than 20-25W under load. I never hear the fan, ever. I can't say the same about my Windows-based laptop for work, which sounds like a jet engine from time to time, especially during video calls on Zoom or Teams.

And your complaint about soldered SSD storage is true, but one way around that is to add Thunderbolt-based NVMe storage. With how cheap SSDs are these days, and the declining cost of Thunderbolt/USB4 enclosures, it is easy to add terabytes of storage for far less than $500. You can pay $200 to $250 tops for 2TB of Thunderbolt-based storage ($100 for the NVMe enclosure, and $100 to $150 for the 2TB drive if based on PCIe 4, and even cheaper if based on PCIe 3).

Yes, Thunderbolt 3/4 tunnel PCIe 3.0 x4, with a real-world top speed of 2800 MB/sec. But that is more than fast enough compared to the astronomical cost of adding the same amount of internal storage. Finally, with non-Intel USB4 controllers now finally coming to market, USB4 NVMe enclosures such as the Zike Drive (which uses ASMedia's USB4 controller) top out at around 3800 MB/sec. Not bad at all.
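If you want to sanity-check an enclosure's real-world sequential speed yourself, here's a crude sketch (the file path is a placeholder; use a file bigger than your RAM, or drop the page cache first, so caching doesn't inflate the result):

```c
/* Crude sequential-read throughput test, e.g. for a Thunderbolt/USB4
 * NVMe enclosure. Usage: ./readtest /path/to/big_file_on_drive */
#include <stdio.h>
#include <time.h>

int main(int argc, char **argv) {
    if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    enum { CHUNK = 1 << 20 };           /* read in 1 MiB chunks */
    static char buf[CHUNK];
    long long total = 0;
    size_t n;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    while ((n = fread(buf, 1, CHUNK, f)) > 0)
        total += (long long)n;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    fclose(f);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.1f MB read in %.2f s = %.0f MB/s\n",
           total / 1e6, secs, total / 1e6 / secs);
    return 0;
}
```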
 
You seem to have confused Intel HD graphics with the Iris core.

The current 96 EU Iris is comparable to the Radeon Vega 7 graphics in the 5000-series APUs.

A larger 128 EU second-gen design may encroach on 780M performance, and people absolutely ARE playing games on that.
Also, the Meteor Lake iGPU is said to use an enhanced version of Alchemist that shares some Battlemage improvements, much like Strix Point using RDNA 3.5. If Meteor Lake can deliver on the claims circulating the net about a 40% power reduction compared to Raptor Lake, and with a much stronger iGPU, it could surpass Phoenix, at least in efficiency.
 
When my MacBook is sitting idle or I'm just browsing the web or watching a video or movie, it consumes just 6W
WOW, that's insane. That's how much my desktop 12900K consumes browsing while streaming two videos, rofl.
 
The 12th-gen Celerons and Pentiums drop down to 48 EUs - still, that's far better than their 11th-gen counterparts (e.g. the i7-11850H I'm typing this on has a mere 32 EUs). So Intel is moving in the right direction, but (again) only since AMD gave them a kick in the arse.
Intel actually seems to have a fairly reasonable lineup this time around (never thought I'd say that).

AMD also has their low-range stuff. The good parts are the Phoenix-based R9/R7/R5; things get messy elsewhere.
In Ryzen 3 and some Ryzen 5 you can get the Radeon 610M (2 CU RDNA2), 6 CU Vega, the Radeon 660M (4 CU RDNA2), or the Radeon 740M (4 CU RDNA3).
And there is also the entire high-end Dragon Range, which comes with the 610M on the I/O die (on R9/R7/R5).

x86's backwards compatibility, AKA its unique selling point, is only a necessity when you're dealing with crusty old apps written decades ago that you can't fix the source of. Most dev tools are up to date, so all they need to be Arm-compatible is a recompile to spit out Arm-specific binaries. At that stage you absolutely don't need x86, so why not go for something that weighs nothing and sips power, like a MacBook?
This is a discussion for some other topic, but I wholeheartedly disagree. x86 (with its current extensions) is a remarkably stable and complete ISA, especially when compared to ARM. ARM is getting there, but not quite yet, and was a hot mess not so long ago. Plus all the proprietary concerns, even with independent ARM and all that. The vision of software being actively (re)developed, or at least supported with newer platform/ISA-specific versions, is nice and cute... but not rooted in reality.
 
Apple Silicon is super power efficient for the performance you get in return. When my MacBook is sitting idle or I'm just browsing the web or watching a video or movie, it consumes just 6W! And no more than 20-25W under load. I never hear the fan, ever. I can't say the same about my Windows-based laptop for work, which sounds like a jet engine from time to time, especially during video calls on Zoom or Teams.

Isn't 6W kinda... terrible? 5W or so is the benchmark for most iGPU-only ultrabooks these days, Intel or AMD. And 8-core, 12 CU Rembrandt/Phoenix parts don't really exceed 20-25W under load anyway when on battery.

When I'm idle on my 6900HS G14 I'm also at 6W and 8-9W in video playback and office tasks. And that's with a SO-DIMM slot, dGPU (albeit MUXed) and 144Hz screen at high brightness driving up the power.

Still, not denying M1 and M2's merits, they are definitely efficient in day to day tasks.

"Serviceable" isn't good enough.

80/96 EU Iris is fine; as long as Vega 8 is serviceable, then Iris is too. But like with the 680M/780M, you need high-frequency LPDDR5 to maximize performance, and it's less common to find Intel laptops with that config.
 
When I'm idle on my 6900HS G14 I'm also at 6W and 8-9W in video playback and office tasks.
That's high. Do you have ASUS services running? Try G-Helper and turn off CPU boost for the Silent profile. Consumption should drop to around 4W while browsing.
 
The M2 should be around 5W at idle, and around 10W in its max config (M2 Ultra).
 
Intel is expected to make a full announcement about Meteor Lake and the release date about two weeks from today.


All modern laptops look very slim to me, including the one in the Cinebench 2024 benchmark. I guess users accept, live with, or are unaware of the compromises this causes.
Yep

My old laptops have screws to remove and easy access for drive and RAM swaps.

My friend's new slim laptop is like opening a Steam Deck: you have to pry the covers off (I had to remove the keyboard to access his SSD) and navigate to the components. Also, his battery doesn't just detach externally like mine; it's internal now.
 
Yep

My old laptops have screws to remove and easy access for drive and RAM swaps.

My friend's new slim laptop is like opening a Steam Deck: you have to pry the covers off (I had to remove the keyboard to access his SSD) and navigate to the components. Also, his battery doesn't just detach externally like mine; it's internal now.
It could be worse; your friend could have a unibody design :D

I still value being able to swap an SSD and stuff like that. But for most people, a laptop or ultrabook is just a fashion accessory. They wouldn't be able to tell an M.2 SSD from a RAM stick anyway.
 