Google collaborates with researchers on AI safety
The Google Research Blog posted a message from Chris Olah of Google Research. He confirmed the publication of "a technical paper, Concrete Problems in AI Safety, a collaboration among scientists at Google, OpenAI, Stanford and Berkeley." This is welcome news for anyone concerned about what limits AI systems might overstep in carrying out their actions, and about whether we should prepare for cases where an AI system does not behave according to the purpose its human designers intended.
Olah said, "We believe it's essential to ground concerns in real machine learning research, and to start developing practical approaches for engineering AI systems that operate safely and reliably."
Google is calling for "open, cross-institution work on how to build machine learning systems that work as intended," adding, "We're eager to continue our collaborations with other research groups to make positive progress on AI."
The paper is a collaboration among Google (Google Brain), Stanford University, the University of California at Berkeley and OpenAI, the latter a non-profit artificial intelligence research company.
Olah and his team outlined five problems they consider important as AI is applied in more general circumstances. "These are all forward thinking, long-term research questions—minor issues today, but important to address for future systems," he said.
The company is laying out "five unsolved challenges that need to be addressed if smart machines such as domestic robots are to be safe." Tom Simonite wrote about the blog post's use of a cleaning robot to illustrate some of the five points.
"One area of concern is in preventing systems from achieving their objectives by cheating. For example, the cleaning robot might discover it can satisfy its programming to clean up stains by hiding them instead of actually removing them," wrote Simonite.
Another problem is how an AI machine can explore its environment safely. To use the cleaning example, a cleaning robot should be able to experiment with mopping strategies, "but clearly it shouldn't try putting a wet mop in an electrical outlet," commented Olah.
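One way to picture that constraint (again a hypothetical sketch, not anything proposed in the paper) is exploration that samples only from actions not on a known-unsafe list; the action names and the blocklist below are purely illustrative:

# Hypothetical sketch of "safe exploration": try mopping strategies at
# random, but never sample an action flagged as unsafe.
import random

ACTIONS = ["mop_left_to_right", "mop_in_circles", "mop_near_outlet_wet"]
UNSAFE = {"mop_near_outlet_wet"}   # e.g. a wet mop near an electrical outlet

def explore(actions, unsafe):
    # Restrict random exploration to actions not on the unsafe list.
    safe_actions = [a for a in actions if a not in unsafe]
    return random.choice(safe_actions)

for _ in range(3):
    print(explore(ACTIONS, UNSAFE))  # never prints "mop_near_outlet_wet"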