Making robots more autonomous

10th August 2015
Siobhan O'Gorman

Scientists at the University of Glasgow have joined forces with British and American colleagues in a project that may ultimately help robots become more autonomous through recognising and understanding everyday scenes.

Glasgow is part of a group that has been awarded £724,000 by the US Office of Naval Research, in a joint venture with the Engineering and Physical Sciences Research Council, and will contribute its expertise in visual scene processing to advance ‘deep scene understanding’ in machines.

The ultimate goal is to develop machines that can recognise their environment and the behaviour of the people within it, and respond accordingly. Such a development could have a host of benefits in everyday life.

Professor Philippe Schyns, Director of the Institute of Neuroscience and Psychology, who is leading the Glasgow contribution to the project, said: “If the robots of science fiction are to become reality, they will need to be much more aware of their surroundings and be able to adapt to situations accordingly; to be more human, essentially.

“Humans are amazingly fast and efficient when it comes to recognising scenes. We can quickly recognise a scene from its simple layout, without necessarily having time to process the details of the objects within it; the difference between a mountain and a motorway, for instance.

“We can also draw inferences from detailed information within a scene. For example, if there is a man in a kitchen pouring milk into a bowl, and there are ingredients such as flour and eggs on a table, it indicates the man is probably starting to bake a cake.

“A human could recognise and draw that inference almost immediately; a robot can’t. But if we were able to give a companion robot, for example, the ability to recognise that scene and respond appropriately, the robot could assist humans in everyday tasks such as baking the cake.

“If you want robots to have deep scene understanding they need to know how humans do it and how it’s dependent on the task. This kind of deep understanding is a very old goal of artificial intelligence.”
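To make the idea concrete, the kind of inference Schyns describes can be caricatured as matching detected objects and actions against activity "schemas". The Python sketch below is purely illustrative and is not the project's actual system; the schema names, cue lists and scoring rule are all invented assumptions, standing in for the far richer joint parsing and lifelong learning the project proposes.

```python
# Toy illustration: infer a likely activity from objects/actions detected in a
# scene by scoring hand-written activity schemas. Everything here (schemas,
# cues, scoring) is invented for illustration, not the project's method.

from dataclasses import dataclass


@dataclass
class ActivitySchema:
    name: str
    cues: set[str]  # objects/actions that support this activity


# Two toy schemas; a real system would learn such structures from data.
SCHEMAS = [
    ActivitySchema("baking", {"flour", "eggs", "bowl", "milk", "pouring"}),
    ActivitySchema("cleaning", {"sponge", "sink", "detergent", "scrubbing"}),
]


def infer_activity(detections: set[str]) -> tuple[str, float]:
    """Return the schema whose cues best match the detections,
    scored as the fraction of that schema's cues observed."""
    best_name, best_score = "unknown", 0.0
    for schema in SCHEMAS:
        score = len(detections & schema.cues) / len(schema.cues)
        if score > best_score:
            best_name, best_score = schema.name, score
    return best_name, best_score


# The article's kitchen example: milk poured into a bowl, with flour and
# eggs on the table, suggests the man is probably baking.
print(infer_activity({"man", "kitchen", "milk", "bowl", "pouring", "flour", "eggs"}))
# -> ('baking', 1.0) under these toy schemas
```

The gap the project aims to close is exactly what this sketch leaves out: humans build such schemas flexibly, for any task, from a lifetime of experience rather than from a fixed hand-written list.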

The ‘Understanding Scenes and Events through Joint Parsing, Cognitive Reasoning and Lifelong Learning’ project is being led by UCLA, with partners from Stanford, Carnegie Mellon University, the University of Illinois, MIT and Yale in the US, and Oxford, Glasgow, Birmingham and Reading in the UK.
