
Intel Confirms Core Ultra "Lunar Lake" Packs 45 TOPS NPU, Coming This Year

btarunr

Editor & Senior Moderator
Staff member
Intel, at its VISION conference, confirmed that its next-generation processor for the ultraportable and thin-and-light segments, the Core Ultra "Lunar Lake," will feature an over four-fold increase in NPU performance, to 45 TOPS. This is a significant figure, as Microsoft recently announced that Copilot will perform several tasks locally (on the device), provided the machine has an NPU capable of at least 40 TOPS. The AI Boost NPU in current Core Ultra "Meteor Lake" processors tops out around 10 TOPS, while AMD's current Ryzen 8040 series features a Ryzen AI NPU with 16 TOPS on tap. AMD's upcoming Ryzen "Strix Point" processor is rumored to offer similar 40 TOPS-class NPU performance to "Lunar Lake."

Intel also confirmed that notebooks powered by Core Ultra "Lunar Lake" processors will hit the shelves by Christmas 2024 (December). These notebooks will feature not just the 45 TOPS NPU, but also debut Intel's Arc Xe2 "Battlemage" graphics architecture as the processor's integrated graphics solution. With Microsoft's serious push for standardizing AI assistants, the new crop of notebooks could also feature Copilot as a fixed-function button on their keyboards, similar to the Win key that brings up the Start menu.



View at TechPowerUp Main Site | Source
 
So, how long until data brokers use that AI "assistant" to violate privacy and line their pockets?
 
Can someone educate me on why it's important to run Copilot locally instead of in the cloud? In my opinion, we have only been getting closer and closer to cloud computing, but this seems to be going in the other direction. Aren't Microsoft's servers better than an ultraportable laptop CPU?
 
Can someone educate me on why it's important to run Copilot locally instead of in the cloud?
Hi,
Local AI would be best in a company environment, searching the firm's own closed data for employees.

Copilot in general is just another word for web search; I have no idea what "local" means in that respect, since I doubt anyone would want AI storing web searches in bulk on their machine. Although MS doesn't allow deleting on close in Edge, so they do love storing things locally, same for OS usage hehe

Bottom line: "cloud" is just a buzzword, like "AI" is, unless it's doing something to improve photos/videos/gaming at the user's request.
 
>>...NPU performance, which will be as fast as 45 TOPs...

The article doesn't say which data type that figure is measured at, but it's most likely INT8.
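For reference, "INT8" means 8-bit integer math: vendors typically quote peak NPU TOPS at INT8, since quantized 8-bit weights and activations are what the hardware runs fastest. A minimal sketch of symmetric INT8 quantization, assuming NumPy (the scale and clipping choices here are illustrative, not any vendor's actual scheme):

```python
# Illustrative sketch: what "INT8" means for NPU TOPS figures.
# Float weights/activations get mapped onto 8-bit integers before the
# NPU runs them. Minimal symmetric quantization with one shared scale:
import numpy as np

def quantize_int8(x: np.ndarray):
    """Map float values to int8 using a single symmetric scale."""
    scale = np.abs(x).max() / 127.0  # int8 range is [-128, 127]
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float values from the int8 codes."""
    return q.astype(np.float32) * scale

weights = np.array([0.5, -1.2, 0.03, 1.27], dtype=np.float32)
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Round-trip error stays within one quantization step:
assert np.allclose(weights, restored, atol=scale)
```

The point: an NPU's 45 TOPS at INT8 would be a much smaller number at FP16 or FP32, which is why the data type matters when comparing marketing figures.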
 
Can someone educate me on why it's important to run Copilot locally instead of in the cloud?
Pretend you asked Copilot to "open all PDFs related to hardware in my Documents folder", and that it can handle this query. If the query were executed in the cloud, you would need to send MS the list of files in that folder, plus the list of programs that can handle PDFs. It may have to respond "What program do you want these files to open in?" if you have multiple PDF handlers. All of this costs bandwidth and CPU cycles (the cloud has to wait for your response), and adds the complexity of running over a network, resource contention, and security/privacy issues from sending your personal data over the Internet.

Running Copilot locally on a dedicated AI processor circumvents all of those issues. It also helps tailor the model to the things you specifically do. The model shipped to your PC is trained against an initial data set, probably covering the most common tasks and requests gathered through Microsoft's metrics and feedback channels. There will be scenarios the model hasn't encountered and needs further training to handle; that training data could be sent back to Microsoft anonymously and incorporated into the more general model that then ships to everyone else.
 
This training data could be sent back to Microsoft anonymously to incorporate into their more general model that's then shipped out to everyone else.
TL;DR translation: we're gonna be Folding@home for MS :)
 
Running Copilot locally on a dedicated AI processor circumvents all of those issues.
Also, there are programs that already exist today that use the NPU to run AI tasks on local files. DxO's software uses it for RAW files, so doing that "in the cloud" would mean uploading all those RAW files and then pulling them back down. RAW files are about a MB per megapixel, and today's cameras are anywhere from 20 to 100 MP, maybe more. Even with really fast internet, that's going to turn a 4-second task into something considerably longer. Some stuff is better in the cloud, no doubt, but I suspect anything manipulating local files is ideally performed locally.
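A quick back-of-envelope check on that, assuming roughly 1 MB per megapixel and a generous 100 Mbit/s symmetric link (both round numbers for illustration, not measurements):

```python
# Rough transfer-time estimate for round-tripping a RAW file to the cloud.
# Assumptions (illustrative): ~1 MB per megapixel, symmetric link speed,
# and a result file about the same size as the input.
def transfer_seconds(megapixels: float, link_mbit_s: float) -> float:
    """Time to upload a RAW and download a similar-sized result."""
    raw_megabytes = megapixels * 1.0           # rule of thumb: ~1 MB/MP
    one_way = raw_megabytes * 8 / link_mbit_s  # MB -> megabits / link rate
    return 2 * one_way                         # upload + download

# A 45 MP RAW over 100 Mbit/s: 2 * (45 * 8 / 100) = 7.2 s of pure transfer
# before the cloud does any processing -- already past the ~4 s local figure,
# and a 100 MP file pushes it to 16 s.
```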
 
I'm done with Intel mobile processors. It's just not worth buying them until they tell Dell, HP and Lenovo to piss off with their external embedded controllers overriding the CPU's own throttling. Except that's not gonna happen.

My Ryzen laptop is such bliss. Completely throttle free. Works perfectly to AMD's specification. That's how a good design is done.
 
In a TL DR translation: We're gonna be folding@home for MS :)
Hi,
Not so far fetched actually, seeing as they already pulled this with Windows 10 updates when it first came out:
sharing or receiving updates from other machines on the internet instead of just from MS servers hehe

Sure, sharing within the household's local network can be nice, but the fact is they enabled sending and receiving by default at one point without notice.
 