BrainChip unveils ultra-low power NPU
BrainChip has introduced the Akida Pico, a co-processor for compact, ultra-low-power, portable intelligent devices that brings sensor-integrated AI to wearables and to consumer, healthcare, IoT, defence and wake-up applications.
“It can run in the microwatt to milliwatt power range, and below the 2 milliamp curve,” says Steve Brightfield, BrainChip’s Chief Marketing Officer.
“It’s that sweet spot that the Pico can address. It fits a unique need for high-availability, always-on AI: the always-on use cases that, frankly, we don’t see any other AI solution out there doing.”
Akida Pico accelerates compact, use-case-specific neural network models in an ultra-energy-efficient, purely digital architecture. It enables secure personalization for applications including voice wake detection, keyword spotting, speech noise reduction, audio enhancement, presence detection, personal voice assistants, automatic doorbells, wearable AI, appliance voice interfaces and more.
“So what if you have an AA battery, or an AAA battery, or a hearing aid battery?” asks Brightfield. “How do you put AI into that? This is really where we go to what we call an ultra-low-power neural processing core, and that’s what we are bringing to the marketplace.”
“So we have this blueprint of a neuromorphic processor we call Akida. That’s the brand name for our processor, and it’s a very scalable architecture that has what we call a neuron, which is effectively a processing element,” continues Brightfield. “Think of it like a neuromorphic CPU. It actually is a complex set of computational engines around communications and memory. And we can tile these in a two-dimensional fashion, and they’re connected with a mesh network.”
“Pico takes size and power to the extreme minimum of requirements. Instead of starting with the data, writing software running on some hardware and then measuring the power out of it, we start with: what’s your power requirement?”
“Then we configure the hardware to meet that,” explains Brightfield. “Then we work on special software algorithms to achieve that, and then you can figure out how much data it can process. So it’s tipping the problem upside down: looking at your constraints first, rather than looking at your data and saying, OK, let’s figure out how to pack this into a cube.”
The co-processor provides a power-efficient footprint for waking up microcontrollers or larger system processors, using a neural network to filter out false alarms and minimize power consumption until a genuine event is detected.
It is ideally suited to sensor hubs, or to systems that must monitor continuously on battery power alone with only an occasional need for additional processing from a host.
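The wake-up pattern described here can be sketched in a few lines. This is an illustrative example only, not BrainChip's API: a trivial energy heuristic stands in for the always-on neural network, which screens sensor frames and wakes the host only when something looks like a real event.

```python
# Illustrative sketch of the always-on wake-up pattern (not BrainChip's API).
# A tiny always-on filter screens sensor frames; the host processor is only
# woken for frames that pass, so false alarms cost almost no power.

def looks_like_event(frame, threshold=0.5):
    """Stand-in for the NPU's neural network: score one sensor frame.

    Here a simple mean-energy heuristic is used; on hardware like the
    Akida Pico this would be a compiled neural network classifier.
    """
    energy = sum(x * x for x in frame) / len(frame)
    return energy > threshold

def sensor_hub_loop(frames, wake_host):
    """Stay in low power; call wake_host only for frames that pass the filter."""
    woken = 0
    for frame in frames:
        if looks_like_event(frame):   # false alarms are rejected here
            wake_host(frame)          # heavyweight processing happens on the host
            woken += 1
    return woken

# Example: only the second (loud) frame wakes the host.
frames = [[0.01, 0.02, 0.01], [0.9, 1.1, 0.8], [0.0, 0.01, 0.0]]
events = []
wake_count = sensor_hub_loop(frames, events.append)
```

In a real deployment the filter runs continuously on the co-processor while the host sleeps, which is what keeps the average draw in the microwatt-to-milliwatt range the article describes.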
BrainChip’s exclusive MetaTF software flow enables developers to compile and optimize their specific Temporal-Enabled Neural Networks (TENNs) on the Akida Pico.
With MetaTF’s support for models created with TensorFlow/Keras and PyTorch, users avoid learning a new machine learning framework while rapidly developing and deploying AI applications for the edge.
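As a rough sketch of that workflow: the model below is defined with the standard TensorFlow/Keras API, the kind of small keyword-spotting network the article mentions. The MetaTF conversion step at the end is an assumption about BrainChip's cnn2snn package and is shown commented out; check the current MetaTF documentation for the exact calls.

```python
# A minimal keyword-spotting-style model in standard Keras.
# The MetaTF conversion at the end is an ASSUMPTION about the cnn2snn
# API and is left commented out; verify against BrainChip's MetaTF docs.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(49, 10, 1)),        # e.g. MFCC audio features
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),  # e.g. 4 keywords
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Assumed MetaTF flow (hypothetical names -- verify before use):
# from cnn2snn import convert
# akida_model = convert(model)   # compile the Keras model for Akida hardware
```

The point of the flow is that the training-side tooling stays entirely in the frameworks developers already use; only the final compile step targets the NPU.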
“Like all of our Edge AI enablement platforms, Akida Pico was developed to further push the limits of AI on-chip compute with the low latency and low power required of neural applications,” said Sean Hehir, CEO at BrainChip. “Whether you have limited AI expertise or are an expert at developing AI models and applications, Akida Pico and the Akida Development Platform provide users with the ability to create, train and test the most power- and memory-efficient temporal event-based neural networks more quickly and reliably.”
BrainChip’s Akida is an event-based compute platform ideal for early-detection, low-latency solutions that do not demand massive compute resources, in robotics, drones, automotive and traditional sense-detect-classify-track applications. BrainChip provides a range of software, hardware and IP products that can be integrated into existing and future designs, with a roadmap for customers to deploy multi-modal AI models at the edge.