
Femtosense Launches AI-ADAM-100, a System in Package (SiP) for Consumer Applications

Nomad76

News Editor
Staff member
Femtosense, in partnership with ABOV Semiconductor, today launched the AI-ADAM-100, an artificial intelligence microcontroller unit (AI MCU) built on sparse AI technology to enable on-device AI features such as voice-based control in home appliances and other products. On-device AI provides immediate, low-latency responses with low power consumption, improved security, and operational stability, at lower cost than GPU- or cloud-based AI.

The AI-ADAM-100 integrates the Femtosense Sparse Processing Unit 001 (SPU-001), a neural processing unit (NPU), and an ABOV Semiconductor MCU to provide deep learning-powered AI voice processing and voice-cleanup capabilities on-device at the edge. With language processing, appliances can implement "say what you mean" voice interfaces that allow users to speak naturally and express their intent freely in multiple ways. For example, "Turn the lights off," "Turn off the lights," and "Lights off" all convey the same intent and are understood as such.
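
To make the "say what you mean" idea concrete, here is a minimal Python sketch; the names and phrase lists are illustrative assumptions, not Femtosense's software. In the actual product a deep learning language model handles free-form phrasing, but the toy lookup below shows how several utterances collapse to a single intent.

```python
# Toy illustration (not Femtosense's API) of the "say what you mean" idea:
# several natural phrasings resolve to the same intent label.

INTENTS = {
    "lights_off": {"turn the lights off", "turn off the lights", "lights off"},
    "lights_on": {"turn the lights on", "turn on the lights", "lights on"},
}

def resolve_intent(utterance):
    """Return the intent label for a recognized utterance, or None if unknown."""
    normalized = utterance.lower().strip().rstrip(".!?")
    for intent, phrases in INTENTS.items():
        if normalized in phrases:
            return intent
    return None

print(resolve_intent("Turn off the lights!"))  # -> lights_off
print(resolve_intent("Lights off"))            # -> lights_off
```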

Voice/audio cleanup processes data before it is sent to the cloud, improving reliability and accuracy while reducing the volume of data sent, thus reducing backend infrastructure costs. "With sparsity integrated throughout the AI development stack, the AI-ADAM-100 is the first device on the market to fully unlock the advantages of sparse AI," said Sam Fok, CEO, Femtosense. "Our sparsity-enabling technology allows our customers to deliver compact, efficient AI processing to a growing variety of markets and products, including home appliances as well as small form factor, battery-operated devices like high-fidelity hearing aids, industrial headsets, and consumer earbuds."
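
As a rough sketch of why on-device cleanup shrinks cloud traffic (purely illustrative Python, not Femtosense code): the device gates the stream on voice activity and uploads only denoised speech segments instead of raw continuous audio. The `is_speech` and `denoise` callables are hypothetical stand-ins for the on-device models.

```python
# Illustrative only: upload just the denoised speech frames instead of
# streaming raw audio continuously. `is_speech` and `denoise` are
# hypothetical placeholders, not part of any Femtosense SDK.

def frames_to_upload(frames, is_speech, denoise):
    """Yield cleaned frames only while speech is present."""
    for frame in frames:
        if is_speech(frame):      # voice-activity gate: silence never leaves the device
            yield denoise(frame)  # noise-suppressed payload for the cloud backend
```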

On top of the AI-ADAM-100, Femtosense provides a customizable range of AI software application products, from full turnkey solutions to tool-driven applications to fully custom implementations using a manufacturer's own AI models, whether dense or sparse.

The Sparse AI Advantage
Sparse AI reduces the cost of AI inferencing by zeroing out irrelevant portions of an algorithm and then allocating hardware memory and compute resources only to the remaining nonzero, relevant portions. A system that stores and computes only nonzero weights can deliver up to a 10x improvement in speed, efficiency, and memory footprint. Similarly, a system that computes only when a neuron's output is nonzero can deliver up to another 10x increase in speed and efficiency. Those two factors of 10 multiply. Consequently, sparse AI enables manufacturers to implement deep learning-based AI models of up to 100x the power/complexity of previous MCUs without adversely impacting speed, efficiency, memory footprint, or performance.
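
To make the "10x times 10x" arithmetic concrete, here is a minimal Python sketch of the principle: store only nonzero weights and skip work when an input activation is zero. This illustrates sparse compute in general, not the SPU-001's actual implementation, which does this in dedicated hardware.

```python
# Illustrative sketch of the sparse-compute idea: store only nonzero weights
# and skip work when an input activation is zero.

def sparse_matvec(sparse_rows, x):
    """sparse_rows[i] is a list of (column_index, weight) pairs with weight != 0."""
    y = [0.0] * len(sparse_rows)
    for i, row in enumerate(sparse_rows):
        acc = 0.0
        for j, w in row:
            if x[j] != 0.0:          # activation sparsity: skip zero inputs
                acc += w * x[j]      # weight sparsity: only nonzero weights stored
        y[i] = acc
    return y

# If 90% of weights are pruned and 90% of activations are zero, the layer needs
# roughly 10x * 10x fewer multiply-accumulates than its dense equivalent.
weights = [[(0, 0.5), (3, -1.2)], [(1, 2.0)]]   # 2x4 layer, 3 nonzeros out of 8
x = [1.0, 0.0, 0.0, 4.0]                        # 2 nonzero activations out of 4
print(sparse_matvec(weights, x))                # -> [-4.3, 0.0]
```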

While many edge applications can benefit from AI, they often lack the price or power budget for a GPU or cloud connectivity, or the volume to justify a dedicated silicon solution. This has limited the adoption of edge AI. With the introduction of the AI-ADAM-100, manufacturers can implement voice language interfaces at the edge even for devices that are not connected to the cloud.

Many existing AI systems process continuously and consume power even when the task is easy, such as when the environment is quiet. Purely cloud-based voice processing requires continuous throughput, leading to high infrastructure costs. The AI-ADAM-100 resolves tasks on-device to significantly reduce power consumption and backend cloud load. Specifically, the AI-ADAM-100 enables home appliance manufacturers to implement sophisticated wake-up and control functionality, allowing other system controllers and connectivity modules to drop into sleep mode and consume substantially less power when a user is not interacting with the system. This capability can be used to listen until a user's voice command is received, and then either process the command on-device or wake the system to send the command to the cloud.
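
The wake-up flow described above amounts to a simple control loop. The sketch below (Python, with hypothetical function names, not an actual AI-ADAM-100 SDK) shows the idea: the AI MCU listens while the rest of the system sleeps, handles simple commands itself, and only wakes the host to forward anything that needs the cloud.

```python
# Hedged sketch of the wake-up/control flow described above. All function
# names are hypothetical stand-ins, not an AI-ADAM-100 API; only the control
# flow matters: listen cheaply, act locally when possible, wake the rest of
# the system only when the cloud is actually needed.

LOCAL_INTENTS = {"lights_off", "lights_on", "start", "stop"}

def control_loop(listen_for_intent, handle_locally, wake_host, forward_to_cloud):
    while True:
        intent = listen_for_intent()      # host SoC and connectivity stay asleep
        if intent is None:
            continue                      # nothing recognized, keep listening
        if intent in LOCAL_INTENTS:
            handle_locally(intent)        # resolved entirely on-device
        else:
            wake_host()                   # power up controller / connectivity module
            forward_to_cloud(intent)      # send only the recognized command
```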

A Product of Partnership
Femtosense and ABOV developed the AI-ADAM-100 MCU in strategic collaboration, leveraging the core strengths of each partner. "The AI-ADAM-100 is the best-optimized AI MCU solution for voice and audio-based AI applications and enables a variety of on-device AI applications for consumer electronics and standalone devices," said Choi Won, CEO of ABOV Semiconductor. "Together with Femtosense, we will continue to develop the most cost- and power-efficient AI MCUs for global customers."

ABOV has verified the AI-ADAM-100's top-notch voice command recognition performance under multiple noise conditions, meeting the requirements of leading customers. Global home appliance makers are working to reduce the number of buttons on their devices and streamline the user experience; AI-based voice commands can accelerate this trend.

Availability
Engineering samples of the AI-ADAM-100 are available now with commercial mass production targeted for later this year. Development support includes software tools, evaluation boards, and demo AI models, including Smart Home Appliance Wake-up and Control.

View at TechPowerUp Main Site | Source
 
"Turn the lights off", "Turn off the lights," and "Lights off" all convey the same intent and are understood as such"
Except Lights off, could be a statement of fact, not a request.
 