Robotics

Robots of tomorrow with smart visual capabilities

9th February 2017
Enaie Azambuja

The ability to perceive and understand the dynamics of the real world is critical for the next generation of robots. An EU initiative explored vision, which is essential for most robotic tasks. Robots need a way to adaptively select relevant information in a given scene for further processing.

They require prior common-sense knowledge about where to find a target, as well as some idea of its size, shape, colour or texture. Robots also need attention mechanisms to determine which parts of the sensory array to process: attention means selecting the most relevant information from multi-sensory inputs so that a target search can be performed efficiently.

The EU-funded REAL-TIME ASOC (Real-time understanding of dexterous deformable object manipulation with bio-inspired hybrid hardware architectures) project focused on the development of new mechanisms for visual attention.

REAL-TIME ASOC employed a specialised camera called a dynamic vision sensor (DVS), which is suitable for robotic applications that require short latencies to operate in real time. It captures everything that changes in a scene at very high temporal resolution, on the order of microseconds. The DVS delivers the equivalent of about 600 000 frames per second while reducing the amount of information by discarding a scene's static areas.
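As a minimal sketch of the event-based principle behind a DVS (an illustration, not the sensor's actual circuitry), each pixel fires an event `(x, y, t, polarity)` only when its intensity changes by more than a threshold, so static regions produce no data at all. The function name and threshold below are assumptions for the example.

```python
import numpy as np

def frame_diff_events(prev, curr, t, threshold=0.1):
    """Emit DVS-style events for pixels whose intensity changed."""
    diff = curr - prev
    ys, xs = np.nonzero(np.abs(diff) > threshold)
    polarity = np.sign(diff[ys, xs]).astype(int)  # +1 brighter, -1 darker
    return [(int(x), int(y), t, int(p)) for x, y, p in zip(xs, ys, polarity)]

# A 64x64 scene where only a small 4x4 patch changes between frames.
prev = np.zeros((64, 64))
curr = np.zeros((64, 64))
curr[10:14, 20:24] = 1.0  # the patch appears here

events = frame_diff_events(prev, curr, t=0.000001)  # microsecond timestamp
print(len(events), "events vs", prev.size, "pixels per full frame")
```

Only 16 events are produced instead of 4 096 pixel values, which is the data reduction the article describes: static areas simply never appear in the output.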

Project partners began by using the DVS sensor to extract contours and boundary ownership from event information only. Since events are solely triggered at major luminance changes, most events occur at the boundary of objects.

Detecting these contours is a key step towards further processing. They introduced an approach that identifies the location of contours and their border ownership using features representing motion, timing, texture and spatial orientations. The contour detection and boundary assignment were then demonstrated in a proto-segmentation of the scene.
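Because events cluster at luminance boundaries, even a crude accumulation of events over a short window already yields a rough contour map that later stages can refine with motion, timing, texture and orientation features. The sketch below illustrates this idea only; it is not the project's algorithm, and the event list is a hypothetical square outline.

```python
import numpy as np

def event_count_map(events, shape):
    """Accumulate (x, y, t, polarity) events into a per-pixel count image."""
    counts = np.zeros(shape, dtype=int)
    for x, y, _, _ in events:
        counts[y, x] += 1
    return counts

# Events along the top and bottom edges of a moving square object.
events = [(x, 5, 0.0, 1) for x in range(5, 15)] + \
         [(x, 14, 0.0, 1) for x in range(5, 15)]

contours = event_count_map(events, (20, 20)) > 0
print(contours.sum(), "candidate contour pixels")
```

Every pixel flagged here lies on an object boundary, which is why boundary ownership and proto-segmentation can be built directly on the event stream.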

Scientists worked on algorithms to estimate image motion from asynchronous event-based information, and a field programmable gate array to compute visual attention. Lastly, they produced a dataset that provides both frame-free event data and classic image, motion and depth data.
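One common way to estimate motion from asynchronous events (assumed here for illustration, not taken from the project's publications) exploits the fact that an edge sweeping across the sensor leaves a ramp of timestamps: fitting a line to t(x) recovers the speed as the inverse slope, with no frames involved.

```python
import numpy as np

def edge_speed_from_events(events):
    """Fit t = a*x + b over events from a vertical edge; speed = 1/a px/s."""
    xs = np.array([e[0] for e in events], dtype=float)
    ts = np.array([e[2] for e in events], dtype=float)
    a, _b = np.polyfit(xs, ts, 1)
    return 1.0 / a

# An edge moving right at 1000 px/s: pixel x fires at t = x / 1000.
events = [(x, 0, x / 1000.0, 1) for x in range(10)]
print(round(edge_speed_from_events(events)), "px/s")
```

Because each event carries a microsecond timestamp, this kind of fit can run at far shorter latencies than any frame-based optical-flow pipeline, which is what makes the approach attractive for an FPGA implementation.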

This helps to evaluate different event-based methods and compare them with conventional frame-based computer vision. REAL-TIME ASOC demonstrated how tomorrow's robots will visually select and process scenes much as humans do.

© Copyright 2024 Electronic Specifier