Syntiant Aims New AI Chip at ‘Always-On’ Edge
Addressing power constraints for “always-on” edge devices, the latest neural processor from Syntiant Corp. is designed to run multiple applications simultaneously while staying within tight energy budgets.
The custom AI silicon startup said this week its second-generation neural decision processor is aimed at low-power audio and sensor processing tasks. Those include beamforming, echo cancellation, noise suppression and speech enhancement, among others.
The NDP120 chip is based on the AI startup’s “Core 2” tensor processor architecture, which Syntiant promotes as a low-power embedded framework capable of delivering performance comparable to that of connected edge devices.
“We took years of real-world, low-power edge deep learning experience to develop this architecture into a scalable design optimized to bring neural processing to power-constrained deployments,” Syntiant CEO Kurt Busch noted in a blog post. “We want AI to be pervasive and unobtrusive.”
The Core 2 inference engine is based on an ultra-low-power deep neural network with a configurable audio front-end interface. Among the edge and sensor fusion applications are infrared and motion detection.
Syntiant’s embedded approach also seeks to evolve power-hungry microcontrollers by implementing data collection and deep learning training in silicon on a parallel neural network.
The startup, based in Irvine, Calif., announced a $35 million funding round in August 2020 led by Microsoft’s M12 venture arm. Intel Capital is another early backer, targeting AI processors that promise to push machine learning from the cloud to the edge. So far, Syntiant has raised about $65 million in venture funding.
Its latest tensor processing core incorporates an Arm Cortex-M0, among the chip IP vendor’s smallest, along with a HiFi 3 audio/voice digital signal processor. That combination supports more than 7 million parameters while enabling the Core 2 to handle multiple simultaneous networks.
The design further reduces power consumption in edge devices while upping performance by a factor of 25. That, the startup claims, would move neural processing technology into “always-on domains.”
Among other advances, the performance increase is attributed to native support for graph-based execution along with neural compression. The former is built around tensor-based memory and processing that allows edge devices to run multiple networks; the latter mixes precision across tasks, from 1 bit to 8 bits, with a reserve capability of 16 bits for higher precision.
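The neural-compression idea above is the general technique of mixed-precision weight quantization: each network or task stores its weights on a signed-integer grid whose width (1 to 8 bits, or 16 for higher precision) can vary. The sketch below is purely illustrative of that general trade-off, not Syntiant’s actual on-chip implementation; the function names and sample weights are invented for the example.

```python
# Illustrative mixed-precision quantization sketch (NOT Syntiant's design):
# narrower grids shrink memory and power at the cost of larger rounding error.

def quantize(weights, bits):
    """Uniformly map floats onto a signed `bits`-wide integer grid."""
    levels = 2 ** (bits - 1) - 1 if bits > 1 else 1  # e.g. 127 for 8-bit
    scale = max(abs(w) for w in weights) / levels or 1.0
    q = [max(-levels, min(levels, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer grid."""
    return [v * scale for v in q]

if __name__ == "__main__":
    w = [0.42, -0.17, 0.93, -0.66]  # toy layer weights
    for bits in (1, 4, 8):
        q, s = quantize(w, bits)
        restored = dequantize(q, s)
        err = max(abs(a - b) for a, b in zip(w, restored))
        print(f"{bits}-bit: q={q} max_error={err:.4f}")
```

Running the sketch shows the reconstruction error falling as the bit width grows, which is why a design that can assign 1-bit precision to tolerant tasks and 8- or 16-bit precision to demanding ones saves memory traffic and energy without giving up accuracy where it matters.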
Syntiant said this week that earlier versions of its neural processor are shipping in high volumes. The NDP120 is sampling now, and the chip maker said it expects to begin volume shipments this summer.
Syntiant’s speech-recognition processors are used in battery-powered edge devices ranging from smartphones and smart speakers to laptops and earbuds. They are also used in sensor platforms. Early applications include “wake” and command words, speaker identification and event detection.
George Leopold has written about science and technology for more than 30 years, focusing on electronics and aerospace technology. He previously served as executive editor of Electronic Engineering Times. Leopold is the author of "Calculated Risk: The Supersonic Life and Times of Gus Grissom" (Purdue University Press, 2016).