Wednesday, April 10th 2024

Intel Confirms Core Ultra "Lunar Lake" Packs 45 TOPS NPU, Coming This Year

Intel, at its VISION conference, confirmed that its next-generation processor for the ultraportable and thin-and-light segments, the Core Ultra "Lunar Lake," will feature an over four-fold increase in NPU performance, rated at up to 45 TOPS. This is a significant figure, as Microsoft recently announced that Copilot will perform several tasks locally (on the device), provided the machine has an NPU capable of at least 40 TOPS. The current AI Boost NPU found in Core Ultra "Meteor Lake" processors is no faster than 10 TOPS, and the current AMD Ryzen 8040 series features a Ryzen AI NPU with 16 TOPS on tap. AMD's upcoming Ryzen "Strix Point" processor is rumored to feature 40 TOPS-class NPU performance similar to "Lunar Lake."

Intel also confirmed that notebooks powered by Core Ultra "Lunar Lake" processors will hit the shelves by Christmas 2024. These notebooks will feature not just the 45 TOPS NPU, but will also debut Intel's Arc Xe2 "Battlemage" graphics architecture as the processor's integrated graphics solution. With Microsoft pushing hard to standardize AI assistants, the new crop of notebooks could also feature Copilot as a fixed-function button on their keyboards, similar to the Win key that brings up the Start menu.
Source: HotHardware

11 Comments on Intel Confirms Core Ultra "Lunar Lake" Packs 45 TOPS NPU, Coming This Year

#1
jlmcr87
Still too weak an NPU, if the minimum is 40 TOPS
Posted on Reply
#2
Vya Domus
jlmcr87Still too weak NPU
Too weak for what ?
Posted on Reply
#3
AnarchoPrimitiv
So, how long until data brokers use that AI "assistant" to violate privacy and line their pockets?
Posted on Reply
#4
FeelinFroggy
Can someone educate me on why it's important to run Copilot locally instead of in the cloud? In my opinion, we have only been getting closer and closer to cloud computing, but this seems to be going the other direction. Are Microsoft servers not better than an ultraportable laptop CPU?
Posted on Reply
#5
ThrashZone
FeelinFroggyCan someone educate me on why it's important to run Copilot locally instead of in the cloud? In my opinion, we have only been getting closer and closer to cloud computing, but this seems to be going the other direction. Are Microsoft servers not better than an ultraportable laptop CPU?
Hi,
Local AI would be best in a company environment, searching its own closed data for employees.

Copilot in general is just another word for web search, so I have no idea what "local" means in that respect, seeing as I doubt anyone would want AI storing web searches in bulk on their machine. Although MS doesn't allow delete-on-close in Edge, so they do love storing things locally, same for OS usage data hehe

Bottom line: "cloud" is just a buzzword, like "AI" is, unless it's doing something to improve photos/videos/gaming at the user's request.
Posted on Reply
#6
ScaLibBDP
>>...NPU performance, which will be as fast as 45 TOPS...

The article doesn't say which data type that figure is rated at. My guess is INT8.
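For context, a peak-TOPS rating is usually just arithmetic: vendors count one multiply-accumulate (MAC) as two operations, so peak TOPS ≈ MAC units × clock × 2. A minimal sketch with purely illustrative numbers (the MAC count and clock below are assumptions, not Intel's published figures):

```python
def peak_tops(mac_units: int, clock_ghz: float, ops_per_mac: int = 2) -> float:
    """Peak throughput in TOPS: operations per cycle times cycles per second.

    mac_units * ops_per_mac gives ops per cycle; multiplying by the clock in
    GHz yields giga-ops per second, and /1000 converts to tera-ops (TOPS).
    """
    return mac_units * clock_ghz * ops_per_mac / 1000.0

# Hypothetical configuration: 12,000 INT8 MACs at ~1.9 GHz lands near 45 TOPS.
print(f"{peak_tops(12_000, 1.9):.1f} TOPS")  # → 45.6 TOPS
```

This is also why the data type matters: hardware that packs two INT4 operations into each INT8 lane can advertise double the TOPS at the lower precision, so vendor numbers aren't comparable unless the precision is stated.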
Posted on Reply
#7
duraz0rz
FeelinFroggyCan someone educate me on why it's important to run Copilot locally instead of in the cloud? In my opinion, we have only been getting closer and closer to cloud computing, but this seems to be going the other direction. Are Microsoft servers not better than an ultraportable laptop CPU?
Pretend you asked Copilot to "open all PDFs related to hardware in my Documents folder," and that it can handle this query. If the query were executed in the cloud, you would need to send MS the list of files in that folder, plus the list of programs that can handle PDFs. It may have to respond "Which program do you want these files to open in?" if you have multiple programs that can handle PDFs. All of this costs bandwidth and CPU cycles (the cloud has to wait for your response), and adds the complexity of running over a network, resource contention, and the security/privacy risk of sending your personal data over the Internet.

Running Copilot locally on a dedicated AI processor circumvents all of those issues. It also helps train it on the things you specifically do. The model they ship to your PC is trained against an initial data set, probably with a lot of the most common tasks or requests they've gathered through their metrics or feedback channels. There are scenarios the model hasn't encountered and needs help to be trained on. This training data could be sent back to Microsoft anonymously and incorporated into the more general model that's then shipped out to everyone else.
Posted on Reply
#8
Vayra86
duraz0rzPretend you asked Copilot to "open all PDFs related to hardware in my Documents folder", and that it can handle this query. If this query was executed in the cloud, you would need to send MS the list of files in that folder plus any programs that can handle PDFs. It may have to respond "What program do you want these files to open in?" if you have multiple programs that can handle PDFs. All of this costs bandwidth, CPU cycles (the cloud has to wait for your response), complexity of running over a network, resource contention, security/privacy issues with sending your personal data over the Internet.

Running Copilot locally on a dedicated AI processor circumvents all of those issues. It also helps train it to things you specifically do. The model they ship to your PC is initially trained against an initial data set, probably with a lot of the most common tasks or requests they've gathered through their metrics or feedback channels. There are scenarios that the model hasn't encountered and needs help to be trained on how to do it. This training data could be sent back to Microsoft anonymously to incorporate into their more general model that's then shipped out to everyone else.
In a TL DR translation: We're gonna be folding@home for MS :)
Posted on Reply
#9
Darmok N Jalad
FeelinFroggyCan someone educate me on why it's important to run Copilot locally instead of in the cloud? In my opinion, we have only been getting closer and closer to cloud computing, but this seems to be going the other direction. Are Microsoft servers not better than an ultraportable laptop CPU?
duraz0rzPretend you asked Copilot to "open all PDFs related to hardware in my Documents folder", and that it can handle this query. If this query was executed in the cloud, you would need to send MS the list of files in that folder plus any programs that can handle PDFs. It may have to respond "What program do you want these files to open in?" if you have multiple programs that can handle PDFs. All of this costs bandwidth, CPU cycles (the cloud has to wait for your response), complexity of running over a network, resource contention, security/privacy issues with sending your personal data over the Internet.

Running Copilot locally on a dedicated AI processor circumvents all of those issues. It also helps train it to things you specifically do. The model they ship to your PC is initially trained against an initial data set, probably with a lot of the most common tasks or requests they've gathered through their metrics or feedback channels. There are scenarios that the model hasn't encountered and needs help to be trained on how to do it. This training data could be sent back to Microsoft anonymously to incorporate into their more general model that's then shipped out to everyone else.
Also, there are programs today that already use the NPU to run AI tasks on local files. DxO's software uses it for RAW files, so doing that "in the cloud" would mean uploading all those RAW files and then pulling them back down. RAW files are about a MB per megapixel, and today's cameras are anywhere from 20 to 100 MP, maybe more. Even with some really fast internet, that's going to turn a 4-second task into something considerably longer. Some stuff is better in the cloud, no doubt, but I suspect anything manipulating local files is ideally performed locally.
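The bandwidth math above is easy to sketch. A rough estimate with hypothetical file sizes and link speeds (none of these numbers come from the article, and protocol overhead and latency are ignored):

```python
def transfer_seconds(file_mb: float, count: int, link_mbps: float) -> float:
    """Seconds to move `count` files of `file_mb` MB over a `link_mbps` link.

    1 MB = 8 megabits, so total megabits / link speed = seconds.
    """
    return file_mb * count * 8 / link_mbps

# Assumption: ~1 MB per megapixel, so a 45 MP camera yields ~45 MB RAW files.
batch, raw_mb = 50, 45
up = transfer_seconds(raw_mb, batch, link_mbps=100)    # upload on home fiber
down = transfer_seconds(raw_mb, batch, link_mbps=300)  # pulling results back
print(f"round trip ≈ {up + down:.0f} s")               # → round trip ≈ 240 s
```

Four minutes of transfer time for a batch a local NPU could chew through in seconds illustrates why file-local AI workloads favor on-device processing.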
Posted on Reply
#10
Dr. Dro
I'm done with Intel mobile processors. It's just not worth buying them until they tell Dell, HP and Lenovo to piss off with the external embedded controllers overriding the CPU for throttling. Except that's not gonna happen.

My Ryzen laptop is such bliss. Completely throttle free. Works perfectly to AMD's specification. That's how a good design is done.
Posted on Reply
#11
ThrashZone
Vayra86In a TL DR translation: We're gonna be folding@home for MS :)
Hi,
Not so far-fetched, actually, seeing as they already did this with Win-10 updates when it first came out:
sharing or receiving updates from other PCs on the internet instead of just from MS servers hehe

Sure, on a household local network that can be nice, but the fact is they enabled sending and receiving by default at one point without notice.
Posted on Reply