News Posts matching #Compute Stick


Intel Unveils the Neural Compute Stick 2

Intel is hosting its first artificial intelligence (AI) developer conference in Beijing on Nov. 14 and 15. The company kicked off the event with the introduction of the Intel Neural Compute Stick 2 (Intel NCS 2), designed for building smarter AI algorithms and prototyping computer vision applications at the network edge. Based on the Intel Movidius Myriad X vision processing unit (VPU) and supported by the Intel Distribution of OpenVINO toolkit, the Intel NCS 2 affordably speeds the development of deep neural network inference applications while delivering a performance boost over the previous-generation neural compute stick. The Intel NCS 2 enables deep neural network testing, tuning, and prototyping, so developers can move from prototype to production by leveraging a range of Intel vision accelerator form factors in real-world applications.
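In practice, developers target the NCS 2 from OpenVINO by selecting the MYRIAD device plugin. The following is a minimal sketch, not Intel's official sample, assuming the circa-2020 OpenVINO Inference Engine Python API and placeholder model files ("model.xml"/"model.bin" produced by the Model Optimizer):

```python
# Minimal sketch: running inference on the Intel NCS 2 through OpenVINO's
# Inference Engine Python API (circa 2020-2021 releases). Paths are placeholders.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
# Read an IR model produced by the OpenVINO Model Optimizer.
net = ie.read_network(model="model.xml", weights="model.bin")
# "MYRIAD" is the device plugin for the Movidius VPU (NCS / NCS 2).
exec_net = ie.load_network(network=net, device_name="MYRIAD")

input_name = next(iter(net.input_info))
output_name = next(iter(net.outputs))

# Feed a dummy frame matching the network's expected input shape (NCHW).
shape = net.input_info[input_name].input_data.shape
dummy = np.random.rand(*shape).astype(np.float32)

result = exec_net.infer(inputs={input_name: dummy})
print(output_name, result[output_name].shape)
```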

"The first-generation Intel Neural Compute Stick sparked an entire community of AI developers into action with a form factor and price that didn't exist before. We're excited to see what the community creates next with the strong enhancement to compute power enabled with the new Intel Neural Compute Stick 2," said Naveen Rao, Intel corporate vice president and general manager of the AI Products Group.

The Laceli AI Compute Stick Is Here to Compete Against Intel's Movidius

Gyrfalcon Technology Inc., an emerging AI chip maker in Silicon Valley, CA, has launched its Laceli AI Compute Stick, following Intel Movidius' announcement of its deep learning Neural Compute Stick in July of last year. Built around the company's first ultra-low-power, high-performance AI processor, the Lightspeeur 2801S, the Laceli AI Compute Stick delivers 2.8 TOPS within 0.3 W of power, roughly 90 times the efficiency of the Movidius USB stick (0.1 TOPS within 1 W of power).
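The efficiency figure follows directly from the quoted numbers, as a quick check shows:

```python
# Sanity-check the quoted efficiency claim using the article's own numbers.
laceli_tops_per_watt = 2.8 / 0.3    # ≈ 9.33 TOPS/W (Lightspeeur 2801S)
movidius_tops_per_watt = 0.1 / 1.0  # 0.10 TOPS/W (Movidius USB stick)
print(laceli_tops_per_watt / movidius_tops_per_watt)  # ≈ 93.3, i.e. ~90x
```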

Lightspeeur is based on Gyrfalcon Technology Inc.'s APiM (AI Processing in Memory) architecture, which uses memory itself as the AI processing unit. This eliminates the bulk data movement between memory and processor that drives high power consumption. The architecture features true on-chip parallelism and in-situ computing, eliminating memory bottlenecks. It has roughly 28,000 parallel computing cores and does not require external memory for AI inference.
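The power argument is about memory traffic: in a conventional accelerator, weights must be streamed in from external DRAM, whereas an in-memory design keeps them resident in the compute array. A toy accounting of that trade-off follows; the energy figures are illustrative assumptions (an off-chip access costing on the order of 100x a multiply-accumulate is a commonly cited ratio), not Gyrfalcon specifications:

```python
# Toy energy accounting for data movement (illustrative assumptions, not GTI specs).
E_MAC = 1.0      # arbitrary energy unit per multiply-accumulate
E_DRAM = 100.0   # assumed relative cost of one off-chip weight fetch

macs = 1_000_000  # MACs for one inference layer (toy figure)

conventional = macs * (E_MAC + E_DRAM)  # weight fetched from DRAM per MAC
in_memory = macs * E_MAC                # weights resident in the compute array
print(conventional / in_memory)         # ~101x: movement, not math, dominates
```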