Automatic programming makes swarm robots more reliable
Researchers from Sheffield Robotics have applied a novel method of automatically programming and controlling a swarm of up to 600 robots to complete a specified set of tasks simultaneously. This reduces human error, and therefore many of the bugs that can creep into programs, making the approach more user-friendly and reliable than previous techniques. This could be particularly advantageous in areas where the safe use of robotics is a concern, for example in driverless cars.
The team of researchers from the University of Sheffield applied an automated programming method previously used in manufacturing to experiments using up to 600 of their 900-strong robot swarm, one of the largest in the world, in research published in the March issue of the journal Swarm Intelligence.
Swarm robotics studies how large groups of robots can interact with each other in simple ways to solve relatively complex tasks cooperatively.
Previous research has used 'trial and error' methods to automatically program groups of robots, which can result in unpredictable and undesirable behaviour. Moreover, the resulting source code is time-consuming to maintain, which makes it difficult to use in the real world.
Supervisory control theory, used for the first time with a swarm of robots in Sheffield, reduces the need for human input and therefore for error. The researchers used a graphical tool to define the tasks they wanted the robots to achieve; a machine then automatically translated this specification into programs for the robots.
The generated program uses a form of formal language, comparable to the alphabet of English. The robots use their own alphabet to construct words, with the 'letters' of these words corresponding to what the robots perceive and to the actions they choose to perform.
The supervisory control theory helps the robots to choose only those actions that eventually result in valid 'words'. Hence, the behaviour of the robots is guaranteed to meet the specification.
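As a rough illustration of the idea (not the Sheffield team's actual tool or code), the sketch below uses an invented event alphabet and specification: the supervisor is a small finite-state automaton that only ever offers the robot actions keeping its event sequence, its 'word', inside the specified language.

```python
# Minimal sketch of supervisory control for one robot, with an
# illustrative event alphabet and specification (all names invented).

# Alphabet: 'letters' the robot can perceive (uncontrollable events)
# or choose to perform (controllable events).
UNCONTROLLABLE = {"object_seen", "object_lost"}
CONTROLLABLE = {"move_random", "approach", "grasp", "release"}

# Specification as a finite-state automaton: from each state, only the
# listed events may occur, so every event sequence (a 'word') the robot
# produces stays inside the specified language.
SPEC = {
    "searching": {"move_random": "searching", "object_seen": "tracking"},
    "tracking":  {"approach": "tracking", "object_lost": "searching",
                  "grasp": "carrying"},
    "carrying":  {"move_random": "carrying", "release": "searching"},
}

def enabled_actions(state):
    """Controllable events the supervisor permits in this state."""
    return [e for e in SPEC[state] if e in CONTROLLABLE]

def step(state, event):
    """Advance the automaton; illegal events are simply never offered."""
    return SPEC[state][event]

# Example run: the robot may only pick from enabled_actions(state), so
# it can never, say, 'grasp' before an object has been seen.
state = "searching"
for event in ["move_random", "object_seen", "approach", "grasp", "release"]:
    assert event in UNCONTROLLABLE or event in enabled_actions(state)
    state = step(state, event)
print(state)  # -> 'searching'
```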
We are increasingly reliant on software and technology, so machines that can program themselves, yet behave in predictable ways within parameters set by humans, are less error-prone and therefore safer and more reliable.
In the experiments, up to 600 robots each had to make decisions independently in order to gather together, manipulate objects and organise themselves into logical groups.
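A loose sketch of that decentralised execution, reusing the illustrative supervisor above (the sensing function and robot count here are placeholders, not the real system):

```python
import random

def sense(robot_id):
    # Placeholder for local perception: in reality each robot reads its
    # own sensors; here we simply emit random uncontrollable events.
    return random.choice([None, "object_seen", "object_lost"])

# Each of the (up to 600) robots keeps its own supervisor state and
# decides independently; there is no central coordinator.
states = {rid: "searching" for rid in range(600)}

for _ in range(100):                      # simulation ticks
    for rid, state in states.items():
        event = sense(rid)
        if event in SPEC[state]:          # react to an observed event
            state = step(state, event)
        choices = enabled_actions(state)  # supervisor-approved actions only
        if choices:
            state = step(state, random.choice(choices))
        states[rid] = state
```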
This could be used in a situation where a team is needed to tackle a problem and each individual robot is capable of contributing a particular element, which could be hugely beneficial in a range of contexts, from manufacturing to agricultural environments.
Dr Roderich Gross, Department of Automatic Control and Systems Engineering at Sheffield, said: "Our research poses an interesting question about how to engineer technologies we can trust - are machines more reliable programmers than humans after all? We, as humans, set the boundaries of what the robots can do so we can control their behaviour, but the programming can be done by the machine, which reduces human error."
Reducing human error in programming also has potentially significant financial implications. The global cost of debugging software is estimated at $312 billion annually, and on average software developers spend 50% of their programming time finding and fixing bugs.
The research at Sheffield is an important step forward in the area of swarm robotics. The next stage will focus on finding ways for humans to collaborate with swarms of robots, so that communication is two-way and each can learn from the other.