
2nd gen neural network software framework extends support for AI

27th June 2016
Daisy Stapley-Bunten

CEVA has introduced the industry’s first software framework for embedded systems to automatically support networks generated by TensorFlow. The CDNN2 (CEVA Deep Neural Network) is a 2nd gen neural network software framework for machine learning.

CDNN2 enables localised, deep learning-based video analytics on camera devices in real time. This significantly reduces data bandwidth and storage compared to running such analytics in the cloud, while lowering latency and increasing privacy. Coupled with the CEVA-XM4 intelligent vision processor, CDNN2 offers significant time-to-market and power advantages for implementing machine learning in embedded systems for smartphones, advanced driver assistance systems (ADAS), surveillance equipment, drones, robots and other camera-enabled smart devices.

CDNN2 builds on the successful foundations of CEVA’s 1st gen neural network software framework (CDNN), which is already in design with multiple customers and partners. It adds support for TensorFlow, Google’s software library for machine learning, as well as offering improved capabilities and performance for the latest and most sophisticated network topologies and layers. CDNN2 also supports fully convolutional networks, allowing any given network to work with any input resolution.
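
To make the fully convolutional point concrete, the sketch below uses plain TensorFlow (not CDNN2 code) to build a network from convolutional, pooling and upsampling layers only; because no layer fixes the spatial dimensions, the same graph and weights accept frames of any resolution.

    # Plain TensorFlow illustration, not CDNN2 code: a fully convolutional
    # network leaves height and width unspecified, so one trained model
    # can process inputs of any resolution.
    import tensorflow as tf

    inputs = tf.keras.Input(shape=(None, None, 3))   # height/width left open
    x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.MaxPool2D()(x)
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.UpSampling2D()(x)
    outputs = tf.keras.layers.Conv2D(2, 1, activation="softmax")(x)  # per-pixel scores
    fcn = tf.keras.Model(inputs, outputs)

    # The same model runs unchanged on different frame sizes.
    print(fcn(tf.zeros([1, 224, 224, 3])).shape)   # (1, 224, 224, 2)
    print(fcn(tf.zeros([1, 480, 640, 3])).shape)   # (1, 480, 640, 2)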

Pete Warden, lead of the TensorFlow Mobile/Embedded team at Google, commented: “It's great to see CEVA adopting TensorFlow. Power efficiency is key to successfully harnessing the potential of deep learning in embedded devices. CEVA's low-power vision processors and CDNN2 framework could help a wide variety of developers get TensorFlow working on their devices.”

Using a set of enhanced APIs, CDNN2 improves overall system performance, including direct offloading of various neural network-related tasks from the CPU to the CEVA-XM4. These enhancements, combined with the “push-button” capability that automatically converts pre-trained networks to run seamlessly on the CEVA-XM4, underpin the significant time-to-market and power advantages that CDNN2 offers for developing embedded vision systems. The end result is that CDNN2 generates an even faster network model for the CEVA-XM4 imaging and vision DSP, consuming significantly less power and memory bandwidth than CPU- and GPU-based systems.
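
The article does not spell out the CDNN2 calls themselves, so the fragment below is a purely hypothetical host-side pattern (all names are invented for illustration, not the CDNN2 API) showing the general shape of such an offload: the CPU hands per-frame inference to a vision-DSP runtime and only collects results.

    # Hypothetical offload pattern; the class and method names are invented
    # for illustration and are not the CDNN2 API.
    class VisionDspRuntime:
        """Placeholder for a vendor runtime that executes a converted network on a DSP."""
        def load(self, converted_model_path):
            self.model_path = converted_model_path
        def infer(self, frame):
            # In a real system this call would dispatch to the DSP and return
            # detections; here it is only a stub.
            raise NotImplementedError("backed by the vendor runtime in practice")

    def process_stream(frames, runtime):
        # The CPU loop stays lightweight: per-frame inference is offloaded.
        for frame in frames:
            yield runtime.infer(frame)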

Jeff Bier, Founder of the Embedded Vision Alliance, commented: “Today, designers of many types of systems – from cars to drones to home appliances – are incorporating embedded vision into their products to enhance safety, autonomy and functionality. I applaud CEVA’s initiative in enabling low-cost, low-power implementations of visual intelligence using deep neural networks.”

Eran Briman, Vice President of Marketing at CEVA, commented: “The enhancements we have introduced in our 2nd gen Deep Neural Network framework are the result of extensive in-the-field experience with CEVA-XM4 customers and partners. They are developing and deploying deep learning systems utilising CDNN for a broad range of end markets, including drones, ADAS and surveillance. In particular, the addition of support for networks generated by TensorFlow is a critical enhancement that ensures our customers can leverage Google’s powerful deep learning system for their next-gen AI devices.”

CDNN2 is intended for object recognition, advanced driver assistance systems (ADAS), artificial intelligence (AI), video analytics, augmented reality (AR), virtual reality (VR) and similar computer vision applications. The CDNN2 software library is supplied as source code, extending the CEVA-XM4’s existing Application Developer Kit (ADK) and computer vision library, CEVA-CV. It is flexible and modular, capable of supporting either complete CNN implementations or specific layers for a wide breadth of networks. These networks include AlexNet, GoogLeNet, ResidualNet (ResNet), SegNet, VGG (VGG-19, VGG-16, VGG_S) and Network-in-Network (NIN), among others.

CDNN2 supports the most advanced neural network layers, including convolution, deconvolution, pooling, fully connected, softmax, concatenation and upsampling, as well as various inception models. All network topologies are supported, including multiple-input-multiple-output networks, multiple layers per level and fully convolutional networks, in addition to linear networks (such as AlexNet).
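
As a rough reference for how those layer types fit together, the fragment below (again plain TensorFlow, not CDNN2 code) builds an inception-style block in which several layers sit at the same level and their outputs are concatenated:

    # Plain TensorFlow illustration, not CDNN2 code: convolution, pooling and
    # concatenation combined in a branching, inception-style block
    # ("multiple layers per level").
    import tensorflow as tf

    x = tf.keras.Input(shape=(56, 56, 64))
    branch1 = tf.keras.layers.Conv2D(32, 1, padding="same", activation="relu")(x)
    branch2 = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    branch3 = tf.keras.layers.MaxPool2D(pool_size=3, strides=1, padding="same")(x)
    merged = tf.keras.layers.Concatenate()([branch1, branch2, branch3])
    block = tf.keras.Model(x, merged)
    block.summary()   # output shape: (None, 56, 56, 128)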

A key component within the CDNN2 framework is the offline CEVA Network Generator, which converts a pre-trained neural network to an equivalent embedded-friendly network in fixed-point math at the push of a button. CDNN2 deliverables include a hardware-based development kit, which allows developers not only to run their network in simulation, but also to run it with ease on the CEVA development board in real time.
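
CEVA does not describe the Network Generator’s internals here, but the underlying idea of mapping trained floating-point weights into fixed-point form can be sketched with textbook symmetric quantization; the snippet below is a generic illustration, not CEVA’s actual conversion algorithm.

    # Generic fixed-point (int8) weight quantization sketch; illustrative only,
    # not the CEVA Network Generator's method.
    import numpy as np

    def quantize_to_int8(weights):
        """Map float weights onto signed 8-bit integers plus a reconstruction scale."""
        max_abs = float(np.max(np.abs(weights)))
        scale = max_abs / 127.0 if max_abs > 0 else 1.0
        q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
        return q, scale

    w = np.random.randn(3, 3, 16, 32).astype(np.float32)     # example conv kernel
    q, scale = quantize_to_int8(w)
    print(np.max(np.abs(w - q.astype(np.float32) * scale)))  # small rounding error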
