Wednesday, March 27th 2024

Microsoft Copilot to Run Locally on AI PCs with at Least 40 TOPS of NPU Performance

Microsoft, Intel, and AMD are attempting to jumpstart demand in the PC industry under the aegis of the AI PC—devices with native acceleration for AI workloads. Both Intel and AMD ship mobile processors with on-silicon NPUs (neural processing units), designed to accelerate the first wave of AI-enhanced client experiences on Windows 11 23H2. Microsoft's vehicle for democratizing AI has been Copilot, built as a licensee of OpenAI's GPT-4, GPT-4 Turbo, DALL-E, and other generative AI tools from the OpenAI stable. Copilot is currently Microsoft's most heavily invested application, with much of its capital and its best minds mobilized to make it the most popular AI assistant. Microsoft has even pushed the AI PC designation onto PC OEMs, requiring them to include a dedicated Copilot key akin to the Start key (we'll see how anti-competition regulators deal with that).

The problem with Microsoft's tango with Intel and AMD to push AI PCs is that Copilot doesn't really use an NPU, not even at the edge—you enter a query or prompt, and Copilot hands it over to a cloud-based AI service. This is about to change, with Microsoft announcing that Copilot will be able to run locally on AI PCs. Microsoft has identified several kinds of Copilot use cases that an NPU can handle on-device, which should speed up response times to Copilot queries, but this requires the NPU to deliver at least 40 TOPS of performance. That is a problem for the current crop of processors with NPUs: Intel's Core Ultra "Meteor Lake" has an AI Boost NPU with 10 TOPS on tap, while the Ryzen 8040 "Hawk Point" is only slightly faster with its 16 TOPS Ryzen AI NPU. AMD has already revealed that the XDNA 2-based 2nd Generation Ryzen AI NPU in its upcoming "Strix Point" processors will deliver over 40 TOPS, and it stands to reason that the NPUs in Intel's "Arrow Lake" and "Lunar Lake" processors will be comparable, which should enable on-device Copilot.
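To put "run locally" in concrete terms, the sketch below shows the general pattern an application can use to steer inference onto an NPU when one is present, using ONNX Runtime's execution-provider mechanism. It is purely illustrative: Microsoft has not detailed how Copilot will dispatch work to the NPU, and the provider list, model file name, and fallback behavior here are assumptions for the example.

# Illustrative only, not Microsoft's Copilot code: prefer an on-device NPU
# backend via ONNX Runtime and fall back to the CPU when none is available.
import onnxruntime as ort

# Execution providers that typically map to NPU/accelerator hardware;
# which of them exist depends on the onnxruntime build and installed drivers.
NPU_PROVIDERS = [
    "QNNExecutionProvider",       # Qualcomm Hexagon NPUs
    "VitisAIExecutionProvider",   # AMD Ryzen AI (XDNA) NPUs
    "OpenVINOExecutionProvider",  # Intel NPUs via OpenVINO
    "DmlExecutionProvider",       # DirectML (GPU/accelerator) on Windows
]

def make_local_session(model_path: str) -> ort.InferenceSession:
    """Create an inference session that uses an NPU provider when present."""
    available = ort.get_available_providers()
    preferred = [p for p in NPU_PROVIDERS if p in available]
    # CPU is the last-resort fallback; a real assistant could instead hand
    # the request off to a cloud service at this point.
    return ort.InferenceSession(model_path, providers=preferred + ["CPUExecutionProvider"])

session = make_local_session("local_assistant_model.onnx")  # hypothetical model file
print("Inference backend:", session.get_providers()[0])

The 40 TOPS floor is essentially about making that preferred on-device path fast enough for interactive use; below it, the work stays in the cloud.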
Source: Tom's Hardware

25 Comments on Microsoft Copilot to Run Locally on AI PCs with at Least 40 TOPS of NPU Performance

#1
dir_d
Why the hell would anyone buy a separate AI Microsoft license on a laptop to run their own AI when you can use so many online? As far as I know, only the new laptop SoCs from Intel and AMD carry NPUs.
Posted on Reply
#2
MrDweezil
dir_dWhy the hell would anyone buy a separate AI Microsoft license on a laptop to run their own AI when you can use so many online? As far as I know, only the new laptop SoCs from Intel and AMD carry NPUs.
Response time? Not sending your data away?
Posted on Reply
#3
watzupken
MrDweezilResponse time? Not sending your data away?
I doubt you're not sending data away. The app may be local, but it's almost guaranteed that it will send data back to MS or OpenAI for more learning.
Having spent so much money on AI hardware, they now need to try and monetize it. So I won't be surprised when big firms start to charge for use; others will eventually follow along with some sort of subscription model.
Posted on Reply
#4
ThrashZone
Hi,
Just another name for telemetry and narrative training hehe
Glad I'm done with upgrading!
Posted on Reply
#5
enb141
Now the question is whether consumer-grade NPU add-on cards will enter the market.
Posted on Reply
#6
Broken Processor
This is just more bloat I have to remove from Windows. I only use Windows for gaming; everything else is done on Linux. I can't wait for the day I can ditch this turd.
Posted on Reply
#7
Bwaze
enb141Now the question is whether consumer-grade NPU add-on cards will enter the market.
You can already accelerate AI learning at home with (strong) GPUs. For Stable Diffusion they recommend an RTX 4080 or 4090. :p
Posted on Reply
#8
Lewzke
But the local model is much smaller than the one used for online inference, so this is only useful when you have no internet connection, and that scenario is just surreal. I smell marketing here...
When the PC can run full-size GPT inference, the tables can turn... but for now this is just some good-sounding stickers on the laptop.
Posted on Reply
#10
Vayra86
'Run locally' seems like a desperate attempt to avoid the Bitcoin pitfall of 'look at the energy cost of a transaction'. Now you've distributed that energy cost to the user and hidden it among all other usage, and all is well.
Posted on Reply
#11
ThrashZone
Hi,
To me, anyway, the term 'run locally' sounds more like a massive local AI search database being installed hehe
Posted on Reply
#12
dir_d
After all the replies so far I still don't see why anyone would buy a separate AI license for Windows to run this on a laptop. I could see it for a workstation computer, but for a laptop I'm just stumped.
Posted on Reply
#13
ThrashZone
dir_dAfter all the replies so far I still don't see why anyone would buy a separate AI license for Windows to run this on a laptop. I could see it for a workstation computer, but for a laptop I'm just stumped.
Hi,
With most things nowadays, buying is not an option; only yearly subscriptions are allowed.
Posted on Reply
#14
dir_d
ThrashZoneHi,
With most things nowadays, buying is not an option; only yearly subscriptions are allowed.
Fair enough but who is this for?
Posted on Reply
#15
ThrashZone
dir_dFair enough but who is this for?
Hi,
For people naive enough to think AI isn't web search results hehe
Posted on Reply
#16
Darmok N Jalad
dir_dWhy the hell would anyone buy a separate AI Microsoft license on a laptop to run their own AI when you can use so many online? As far as I know, only the new laptop SoCs from Intel and AMD carry NPUs.
Apple claims that all their AI stuff is handled on-system through their Neural Engine. The rub, of course, is that Siri is way behind other AIs (which lends some credence to their claim, as more telemetry would help make Siri better), but for things like image processing, the NE does reasonably well. No doubt MS wants to give the user local AI as at least a competitive bullet point. It could also be preparing for future mandates when governments get involved in AI and data privacy legislation, which we all know will happen.
watzupkenI doubt you're not sending data away. The app may be local, but it's almost guaranteed that it will send data back to MS or OpenAI for more learning.
Having spent so much money on AI hardware, they now need to try and monetize it. So I won't be surprised when big firms start to charge for use; others will eventually follow along with some sort of subscription model.
Yes, with as much telemetry as MS collects on all its products, I suspect that the processing will be done locally, but telemetry will be sent back "to help improve the service." Someday they may have to lock it down due to legislation, but not before they get tons of data back first.
Posted on Reply
#17
enb141
BwazeYou can already accelerate AI learning at home with (strong) GPUs. For Stable Diffusion they recommend an RTX 4080 or 4090. :p
Microsoft said that they won't use the GPU for these tasks. In this case an NPU is needed because they want to free the GPU for other tasks, and also because a GPU consumes a lot more power than an NPU.
Posted on Reply
#18
Noyand
Vayra86'Run locally' seems like a desperate attempt to avoid the Bitcoin pitfall of 'look at the energy cost of a transaction'. Now you've distributed that energy cost to the user and hidden it among all other usage, and all is well.
It will depend on how user habits evolve, TBH. If Copilot is mainly used as a web search engine, most of the processing will stay cloud-based. My guess is that local processing will involve stuff like text correction, text recognition in a picture, voice recognition, maybe image generation, and so on. The kind of thing that even an iPhone 11 processes locally. Virtual assistants like Siri and Google Assistant also became partially "running on device"; the PC is just following the same trend. A lot of what is happening now is stuff that phones and tablets already achieved years ago.

In that context, I find the comparison misplaced. The NPU is not meant to be a big ML/AI workhorse eating watts 24/7 like there's no tomorrow, but low-power silicon used for stuff that really doesn't require that much processing power (or to assist the GPU in some cases). A GPU is faster, but also uses more power.

Posted on Reply
#19
ThrashZone
Hi,
Hell, I thought most were against mining, saying it's a waste of power blah...
AI is mining too, and it's using your resources to do it, and wants you to pay a subscription for the pleasure.

AI mines user activities and web data, so where is the outrage hehe :slap:
Posted on Reply
#20
Noyand
ThrashZoneHi,
Hell, I thought most were against mining, saying it's a waste of power blah...
AI is mining too, and it's using your resources to do it, and wants you to pay a subscription for the pleasure.

AI mines user activities and web data, so where is the outrage hehe :slap:
There's a distinction between the AI run by end users and the AI running in the data center, and it also depends on what the AI is trying to achieve. When apps talk about AI, they mean a simplified model that's been figured out by a massive data center. Our computers don't really do much training, besides simple stuff like speech patterns for autocorrect.

The quality of the data that the AI is trained on is too valuable; from what I read, controlled and curated content is favoured. AI training on broad user data has been done before, and it didn't end well :D (Microsoft trained an AI with Twitter users' data, and it became a real piece of shit).

They have better ways to exploit us: when you solve a captcha, you contribute to training an AI. It's digital labour, and that's the currency being used for the "free" stuff (which isn't really free).

ML/AI doesn't gather as many complaints as mining because in some applications the end user does benefit from it. Stuff like real-time translation, subject detection, better autofocus in cameras, etc.
Posted on Reply
#21
Makaveli
enb141Microsoft said that they won't use the GPU for these tasks. In this case an NPU is needed because they want to free the GPU for other tasks, and also because a GPU consumes a lot more power than an NPU.
And for a laptop that makes sense. However, a desktop with a powerful GPU and ample power won't have those issues, so let's hope they do indeed release it for GPUs too at some point.
Posted on Reply
#22
Darmok N Jalad
NoyandAI training on broad user data has been done before, and it didn't end well :D (Microsoft trained an AI with Twitter users' data, and it became a real piece of shit).
You leave Tay out of this! :D
Posted on Reply
#23
ThrashZone
NoyandThere's a distinction between the AI run by end users and the AI running in the data center, and it also depends on what the AI is trying to achieve. When apps talk about AI, they mean a simplified model that's been figured out by a massive data center. Our computers don't really do much training, besides simple stuff like speech patterns for autocorrect.

The quality of the data that the AI is trained on is too valuable; from what I read, controlled and curated content is favoured. AI training on broad user data has been done before, and it didn't end well :D (Microsoft trained an AI with Twitter users' data, and it became a real piece of shit).

They have better ways to exploit us: when you solve a captcha, you contribute to training an AI. It's digital labour, and that's the currency being used for the "free" stuff (which isn't really free).

ML/AI doesn't gather as many complaints as mining because in some applications the end user does benefit from it. Stuff like real-time translation, subject detection, better autofocus in cameras, etc.
Hi,
People who mine actually make money; although not much daily, it does add up enough to pay for the equipment... eventually.
As far as captchas go, there's an app to get around that nonsense. Did AI make it? lol :laugh:

I was just pointing out the irony that AI does mine too, and frankly lots of people will reject it, just as you point out with the manipulated shit show already attempted :cool:
Posted on Reply
#24
Carillon
Usually new tech comes out as hardware first, and a few decades later we start seeing the first programs to use it. Why is this new tech out as software first, with no hardware available?
Posted on Reply
#25
R-T-B
ThrashZoneHi,
To me, anyway, the term 'run locally' sounds more like a massive local AI search database being installed hehe
Maybe because that's exactly what it is and claims to be?
Posted on Reply