Moving smartness from the cloud into edge devices
Imagination Technologies has announced two neural network cores, the AX2185 and AX2145, designed to deliver high-performance neural network computation at very low power consumption in minimal silicon area. The cores are based on Imagination’s neural network accelerator (NNA) architecture, PowerVR Series2NX, which moves ‘smartness’ from the cloud into edge devices for greater efficiency and real-time responsiveness.
The Series2NX AX2185 targets the high-end smartphone, smart surveillance, and automotive markets, where neural network acceleration has a significant impact in areas such as image categorisation and driver assistance systems. Featuring eight full-width compute engines, the AX2185 provides 2,048 MACs/clock (4.1 tera operations per second).
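For context, the quoted throughput follows from the MAC count if each multiply-accumulate is counted as two operations and the core runs at roughly 1 GHz; the clock figure here is an assumption for illustration, not part of the announcement. A minimal Kotlin sketch of the arithmetic:

```kotlin
// Back-of-the-envelope check of the quoted throughput figure.
// Assumption (not from the announcement): each MAC counts as two
// operations and the core runs at a nominal 1 GHz clock.
fun teraOpsPerSecond(macsPerClock: Int, clockHz: Double, opsPerMac: Int = 2): Double =
    macsPerClock * opsPerMac * clockHz / 1e12

fun main() {
    // 2,048 MACs/clock at ~1 GHz comes out to roughly 4.1 TOPS
    println("AX2185: %.1f TOPS".format(teraOpsPerSecond(2048, 1.0e9)))
}
```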
The AX2145 targets the mid-range smartphone, DTV/set-top box, smart camera and consumer security markets, which are increasingly adopting neural network acceleration for a range of tasks. The PowerVR AX2145’s streamlined architecture delivers performance-efficient neural network inferencing for ultra-low-bandwidth systems.
Both cores fully support the Android Neural Networks API (NNAPI), used by developers to bring neural network capabilities to Android-based mobile devices.
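As an illustration of what NNAPI support means in practice, the sketch below shows one common route an Android developer might take: running a TensorFlow Lite model through the NNAPI delegate so the framework can dispatch inference to whatever on-device accelerator is available. The model buffer, input shape and 1,000-class output are placeholders, not details from the announcement.

```kotlin
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.nnapi.NnApiDelegate
import java.nio.MappedByteBuffer

// Minimal sketch: route TensorFlow Lite inference through NNAPI so Android
// can dispatch it to an on-device accelerator. Model and shapes are placeholders.
fun classify(modelBuffer: MappedByteBuffer, input: FloatArray): FloatArray {
    val nnApiDelegate = NnApiDelegate()
    val interpreter = Interpreter(modelBuffer, Interpreter.Options().addDelegate(nnApiDelegate))
    try {
        val output = Array(1) { FloatArray(1000) }  // hypothetical 1,000-class classifier output
        interpreter.run(arrayOf(input), output)     // NNAPI chooses where the work actually runs
        return output[0]
    } finally {
        interpreter.close()
        nnApiDelegate.close()
    }
}
```

With the delegate attached, the same application code runs unchanged whether inference lands on the CPU, the GPU or a dedicated accelerator such as an NNA.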
Jeff Bier, founder of the Embedded Vision Alliance, said: "Visual intelligence brings compelling capabilities to a variety of applications, but its computation demands are challenging. Imagination's two new PowerVR Series2NX neural network accelerator cores, optimised for performance and memory bandwidth, are welcome options for chip designers deploying demanding deep learning-based computer vision algorithms on embedded and mobile devices."
The PowerVR Series2NX architecture was designed to provide hardware acceleration for efficient neural network inference in mobile and embedded platforms. Its flexible, per-layer bit-depth support for weights and data means the Series2NX can maintain high inference accuracy while reducing bandwidth and power requirements.
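To make the bandwidth argument concrete, the sketch below compares the weight storage of a small hypothetical network at a uniform 16-bit depth against mixed per-layer depths; the layer names, weight counts and chosen depths are invented for illustration and are not Imagination's figures.

```kotlin
// Illustrative only: how per-layer bit depths trade precision for storage and
// bandwidth. Layer names, weight counts and bit depths are invented examples.
data class Layer(val name: String, val weightCount: Int, val bitsPerWeight: Int)

fun weightBytes(layers: List<Layer>): Long =
    layers.sumOf { it.weightCount.toLong() * it.bitsPerWeight / 8 }

fun main() {
    val uniform16 = listOf(
        Layer("conv1", 9_408, 16),
        Layer("conv2", 1_179_648, 16),
        Layer("fc", 2_048_000, 16)
    )
    val perLayer = listOf(
        Layer("conv1", 9_408, 8),      // early layers often tolerate 8-bit weights
        Layer("conv2", 1_179_648, 5),  // narrower depths where accuracy allows
        Layer("fc", 2_048_000, 4)
    )
    println("16-bit everywhere: ${weightBytes(uniform16)} bytes")
    println("per-layer depths:  ${weightBytes(perLayer)} bytes")
}
```

Because every byte of weights and activations has to cross the memory bus, shrinking the stored bit depth per layer translates directly into lower bandwidth and power for the same network.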