
Google DeepMind goes head-to-head with humans at table tennis

20th August 2024
Harry Fowle

Google DeepMind has announced a significant breakthrough in robotics: it has successfully trained a robot to play table tennis at an amateur competitive level against human opponents. This achievement marks the first instance of a robot being trained to compete in a sport with humans at a comparable skill level.

In the experiments, a robotic arm equipped with a 3D-printed paddle managed to win 13 out of 29 games against human players of varying skill levels. The results of this research were documented in a paper published on arXiv.

The robot's performance, while impressive, was not flawless. It consistently defeated beginner-level players and won 55% of its matches against amateur opponents. However, it struggled against advanced players, losing all its games against them. Despite these limitations, the progress made by the robot was noteworthy.

“Even a few months back, we projected that realistically the robot may not be able to win against people it had not played before. The system certainly exceeded our expectations,” commented Pannag Sanketi, a senior staff software engineer at Google DeepMind and the project's lead. He added that the robot's ability to outmanoeuvre even strong opponents was remarkable.

Beyond its entertainment value, this research represents an important step towards developing robots capable of performing tasks safely and effectively in real-world settings, such as homes and warehouses. The approach used by Google DeepMind to train this robot has potential applications in various other areas within the robotics field, according to Lerrel Pinto, a computer science researcher at New York University, who was not involved in the project.

Pinto expressed his enthusiasm for the project: “I'm a big fan of seeing robot systems actually working with and around real humans, and this is a fantastic example of this. It may not be a strong player, but the raw ingredients are there to keep improving and eventually get there.”

To train the robot to play table tennis, the researchers had to overcome significant challenges. Table tennis requires excellent hand-eye coordination, quick decision-making, and rapid movement—all of which are difficult for robots to master. Google DeepMind employed a two-phase approach: first, they used computer simulations to develop the robot's hitting skills; then, they fine-tuned its abilities using real-world data, allowing the robot to continually improve.
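The two-phase approach described above can be sketched as a toy training loop: many cheap rollouts in simulation first, then a smaller number of slower, noisier real-world rallies for fine-tuning. This is a minimal illustration only; all names and the reward logic are assumptions, not DeepMind's actual training stack.

```python
# Hedged sketch of a two-phase sim-to-real loop (illustrative, not DeepMind's API).
import random

random.seed(0)  # fixed seed so the toy run is repeatable

class Policy:
    """Toy stand-in for a learned hitting policy."""
    def __init__(self):
        self.skill = 0.0

    def update(self, reward):
        # Placeholder for a real RL update: nudge skill toward higher reward.
        self.skill += 0.1 * reward

def simulate_rally(policy):
    # Phase 1: fast, cheap rollouts in a simulated environment.
    return 1.0 if random.random() < 0.5 + policy.skill else 0.0

def real_rally(policy):
    # Phase 2: slower, noisier real-world play used for fine-tuning.
    return 1.0 if random.random() < 0.4 + policy.skill else 0.0

policy = Policy()
for _ in range(1000):           # phase 1: learn basic hitting in simulation
    policy.update(simulate_rally(policy))
for _ in range(50):             # phase 2: fine-tune on real-world rallies
    policy.update(real_rally(policy))
```

The point of the split is cost: simulated rallies are essentially free, so the bulk of learning happens there, and the expensive real-world games only close the sim-to-real gap.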

The researchers created a dataset that included detailed information about the table tennis ball's state, such as position, spin, and speed. This data was used to simulate a realistic table tennis environment, where the robot learned to perform actions like returning serves and executing forehand and backhand shots. Because the robot is unable to serve the ball, real-world matches were adapted to work around this limitation.
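A ball-state record of the kind such a dataset might contain could look like the sketch below; the field names and units are assumptions for illustration, not the paper's actual schema.

```python
# Illustrative ball-state record; field names/units are assumed, not the paper's schema.
from dataclasses import dataclass

@dataclass
class BallState:
    position: tuple[float, float, float]  # x, y, z over the table, in metres
    velocity: tuple[float, float, float]  # m/s; magnitude gives the ball's speed
    spin: tuple[float, float, float]      # angular velocity (rad/s) about each axis

# One hypothetical sample: a ball mid-table with heavy topspin about the y-axis.
dataset = [
    BallState((0.3, 1.2, 0.25), (-4.0, 0.1, 1.5), (0.0, 120.0, 0.0)),
]
```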

During matches, the robot collected data on its own performance, which it used to refine its skills. It tracked the ball's position with cameras and monitored its opponent's playing style through a motion capture system equipped with LEDs on the opponent's paddle. The robot then fed this data back into the simulation, creating a continuous feedback loop that allowed it to test and develop new skills to improve its gameplay.
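The collect-and-refeed loop described above can be sketched in a few lines: play a real match, log observations of the ball and the opponent's paddle, fold them back into the simulation data, and retrain. Every name here is hypothetical; the sketch only shows the shape of the feedback cycle.

```python
# Hedged sketch of the real-to-sim feedback loop; all names are illustrative.

def play_match(policy):
    """Play one real match and return logged observations
    (camera-tracked ball positions, motion-captured opponent paddle)."""
    return [{"ball": (0.1, 1.0, 0.3), "opponent_paddle": (0.9, 2.6, 0.2)}]

def update_simulation(sim_data, new_observations):
    """Fold fresh real-world observations back into the simulated environment."""
    sim_data.extend(new_observations)
    return sim_data

def retrain(policy, sim_data):
    """Placeholder for retraining / skill selection in simulation."""
    return policy

sim_data, policy = [], object()
for _ in range(3):  # continuous loop: play, observe, refeed, retrain
    observations = play_match(policy)
    sim_data = update_simulation(sim_data, observations)
    policy = retrain(policy, sim_data)
```

The design choice the article describes is that the loop never stops: each real match becomes new training data, so the simulation stays anchored to how humans actually play against the robot.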

This feedback system enabled the robot to adjust its tactics and behaviour dynamically, enhancing its performance throughout a match and over time. However, the system faced difficulties in certain scenarios. The robot struggled when the ball was hit very fast, beyond its field of vision (more than six feet above the table), or very low. It also found it challenging to handle spinning balls, as it could not directly measure spin—an aspect that advanced players exploited.

Chris Walti, Founder of Mytra and former head of Tesla’s robotics team, highlighted the difficulties in training robots in simulated environments: “It's very, very difficult to actually simulate the real world because there's so many variables, like a gust of wind, or even dust [on the table],” he said. “Unless you have very realistic simulations, a robot’s performance is going to be capped.”

Google DeepMind acknowledged these limitations and suggested potential solutions, such as developing predictive AI models to better anticipate the ball’s trajectory and improving collision-detection algorithms.

Importantly, the human participants enjoyed playing against the robotic arm, even the advanced players who defeated it. They found the experience fun and engaging and saw potential for the robot to serve as a dynamic practice partner to help them improve their skills. One participant expressed enthusiasm for the robot's potential: “I would definitely love to have it as a training partner, someone to play some matches from time to time.”

© Copyright 2024 Electronic Specifier