embedded world: Arm packs more performance into NPU
Arm has unveiled the Arm Ethos-U85, its highest-performing and most efficient Ethos NPU to date, at embedded world in Nuremberg (April 9-11).
Delivering a 4x performance uplift and 20% higher power efficiency than its predecessor, and scaling from 128 to 2048 MAC units (4 TOPS at 1 GHz), the Ethos-U85 addresses applications where Arm is seeing even greater performance demands, such as factory automation and commercial or smart home cameras. The Ethos-U85 offers the same consistent toolchain as earlier Ethos NPUs, so partners can leverage existing investments for a seamless developer experience. Importantly, it provides support for AI frameworks such as TensorFlow Lite and PyTorch.
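The article does not spell out the deployment flow, but as an illustration of what TensorFlow Lite support typically means in practice, the sketch below shows a standard post-training int8 quantization step using the public TensorFlow Lite converter API. The tiny Keras model and random calibration data are placeholders, not anything Arm-specific; the final stage of mapping the quantized .tflite file onto an Ethos-U NPU is normally handled by Arm's Vela compiler, whose exact Ethos-U85 configuration options depend on the tooling version and are not shown here.

```python
import numpy as np
import tensorflow as tf

# Placeholder model: a tiny CNN stands in for a real vision workload.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Representative dataset used to calibrate int8 quantization ranges.
# Random data here purely for illustration; real calibration uses sample inputs.
def representative_data():
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# Restrict the graph to int8 built-in ops, the form NPU toolchains expect.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

From this point the quantized model would typically be passed through Arm's Vela compiler so that supported operators are offloaded to the NPU, with any unsupported operators falling back to the host CPU.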
The Ethos-U85 supports Transformer networks as well as Convolutional Neural Networks (CNNs) for AI inference. Transformer networks will drive new applications, particularly in vision and generative AI use cases, for tasks like understanding videos, filling in missing parts of images, or analyzing data from multiple cameras for image classification and object detection.
With microprocessors being deployed in more high-performance IoT systems for use cases such as industrial machine vision, wearables and consumer robotics, Arm has designed the Ethos-U85 to work with its leading Armv9 Cortex-A CPUs, accelerating ML tasks and bringing power-efficient edge inference to a broader range of higher-performing devices.
The Ethos family of NPUs has been licensed by more than 20 partners to date, and early adopters of the new Ethos-U85 include Alif and Infineon.
At embedded world, Arm also introduced the Arm Corstone-320 IoT Reference Design Platform, bringing together its highest-performance Arm Cortex-M85 CPU, the Mali-C55 Image Signal Processor and the brand-new Ethos-U85 NPU.
Says Paul Williamson, Senior Vice President and General Manager, IoT Line of Business at Arm: “This reference design platform delivers the performance required to span the broad range of edge AI applications for voice, audio, and vision, such as real-time image classification and object recognition, or enabling voice assistants with natural language translation on smart speakers.”
“The platform includes software, tools, and support including Arm Virtual Hardware,” adds Williamson. “This combination of hardware and software will accelerate product timelines by enabling software development to start ahead of silicon being available, rapidly improving time to market for these increasingly complex edge AI devices.”