"
Some of the stuff listed here can be done more efficiently with AI acceleration"
Which ones tho?
View attachment 384494
On this:
View attachment 384495
Image generation is a totally different thing you brought up; for that an NPU is quite useful, I never questioned that!
But this bullet-point list is pure marketing BS... none of it makes sense
- Offline support and lower latency are in the context of local AI vs. cloud AI. Cloud-based AI is almost never used for real-time applications because there's too much latency between the input and the output result; they're not talking about system latency.
- Digital avatars / auto subject framing / eye correction require subject detection, plus finer tracking of the movement of said subject (since digital avatars are not static, but animated according to the facial movements of a person). I've read a lot about that subject: computers are really bad at perceiving the world like we do, deep learning improved that a lot, and the computer needs to apply that ML model on the fly (see the face-landmark sketch after this list).
The Limits of Computer Vision, and of Our Own | Harvard Medicine Magazine
- In the same spirit, posture correction also requires the computer to recognize what's a bad posture and a good posture in the first place, which makes use of computer vision/subject recognition.
- Lighting enhancement is for video calls; it's there to make up for low light. Some implementations specifically brighten the speaker while also trying to compensate for the noise present in low-light situations, so subject detection and image reconstruction are involved.
- Auto translation can include real-time audio translation, which needs real-time speech/language recognition; I don't think they only mean text translation (see the speech-to-text sketch after this list).
- Accessibility features are mostly for people who have issues with their eyesight, so you use subject recognition to describe what's happening on the screen, even when people don't make content with description tags. It's also about improving speech-based input, because again, computers are by default bad at recognizing sounds and pictures like we do. To this day I'm still seeing examples of a computer making mistakes in audio transcription.
- Blue light dimming is just a behavior-based, automated warm mode to reduce eye strain. Instead of only using it at a set time, it's going to do it several times a day based on your behavior. That's the theory; I haven't seen that one implemented yet, but imo that sounds annoying.
- Longer battery life because there are more and more applications making use of on-device ML algorithms, so running them on the NPU can increase battery life. Improved performance also because the NPU leaves the CPU/GPU available for other stuff (see the execution-provider sketch after this list).
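To give an idea of what "applying an ML model on the fly" looks like for the avatar/framing/eye-correction point, here's a minimal sketch using MediaPipe's face mesh, one commonly used landmark model. The webcam index and the landmark I print are just for illustration, not anyone's actual implementation:

```python
# Rough sketch: per-frame face landmark detection, the kind of workload
# behind digital avatars, auto subject framing and eye correction.
import cv2
import mediapipe as mp

cap = cv2.VideoCapture(0)  # default webcam, adjust the index as needed
with mp.solutions.face_mesh.FaceMesh(max_num_faces=1, refine_landmarks=True) as face_mesh:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB, OpenCV delivers BGR
        results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_face_landmarks:
            # ~468 normalized landmarks per face; index 1 is roughly the nose tip
            nose = results.multi_face_landmarks[0].landmark[1]
            print(f"nose tip at ({nose.x:.2f}, {nose.y:.2f})")
cap.release()
```
Doing that 30+ times per second, on top of the video call itself, is exactly the kind of sustained workload an NPU is meant to take off the CPU/GPU.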
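Same idea for the translation point: the hard part is the speech recognition, not the text translation. A minimal speech-to-text sketch with a local Whisper model (not real-time streaming, and the file name is made up):

```python
# Rough sketch: local speech -> English text, the building block behind
# real-time audio translation. Whisper's "translate" task turns
# non-English speech directly into English text.
import whisper

model = whisper.load_model("small")  # small multilingual model, runs locally
result = model.transcribe("meeting_clip.wav", task="translate")  # hypothetical file
print(result["text"])
```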
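And on the battery life / performance point, this is roughly how an application would push its model onto the NPU instead of the CPU/GPU with ONNX Runtime. Which execution providers actually exist depends on the vendor and on how onnxruntime was built, and the model file is just a placeholder:

```python
# Rough sketch: prefer an NPU-backed execution provider, fall back to the CPU.
import onnxruntime as ort

preferred = ["QNNExecutionProvider",       # e.g. Qualcomm NPUs
             "OpenVINOExecutionProvider",  # e.g. Intel NPUs/iGPUs
             "CPUExecutionProvider"]       # fallback
available = [p for p in preferred if p in ort.get_available_providers()]

session = ort.InferenceSession("subject_detection.onnx", providers=available)  # placeholder model
print("running on:", session.get_providers())
```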
I feel there's a misconception that AI is mostly about generating stuff, when it's also extensively used to make computers "see", "hear", and "understand" things that would be a pain to do with classic programming.