Could street sign graffiti confuse autonomous vehicles?
Depending on your point of view, graffiti can be either an act of vandalism that is a blight on city streets or a legitimate, albeit underground, art form. For autonomous cars, however, graffiti can represent something altogether different: a genuine safety hazard.
Author: Sam Chase, The Connected Car
A paper recently published by Yoshi Kohno and fellow researchers at the University of Washington, entitled 'Robust Physical-World Attacks on Machine Learning Models', demonstrated that autonomous vehicle systems can be fooled by street signs that have been altered in appearance.
This isn't entirely surprising: autonomous vehicles rely on machine-learning classifiers to recognise street signs, and alterations to a sign can confuse those classifiers. More shocking, however, was the precise nature of these alterations and their consequences. By doctoring street signs to deliberately manipulate an autonomous vehicle's algorithms, Kohno and his team were able to make self-driving cars behave inappropriately and dangerously.
"We physically realised and evaluated two attacks," according to the report. "One that causes a stop sign to be misclassified as a speed limit sign in 100% of the testing conditions, and one that causes a right turn sign to be misclassified as either a stop or added lane sign in 100% of the testing conditions."
In each attack, stickers designed to confuse the automobiles in a specific way were affixed to street signs. In the first, the test cars consistently identified stop signs as 45mph speed limit signs, causing them to accelerate through intersections on a test track. In the second, vehicles went straight where they were supposed to turn right.
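For readers curious about the mechanics, the underlying idea is standard in the adversarial machine learning literature: a classifier's own gradients reveal which small pixel changes most efficiently flip its decision. Below is a minimal sketch of that idea using the fast gradient sign method (FGSM) in PyTorch. The toy network, class labels and perturbation budget are hypothetical stand-ins, and the UW team's actual technique, which optimises printable sticker-shaped perturbations to survive changes in distance and viewing angle, is considerably more involved.

```python
import torch
import torch.nn as nn

# Toy stand-in for a sign classifier: three classes, e.g. stop /
# speed limit / right turn. (Hypothetical; not the UW pipeline.)
classifier = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 3),
)

sign_image = torch.rand(1, 3, 32, 32)  # placeholder photo of a sign
true_label = torch.tensor([0])         # class 0 = "stop sign" here

# FGSM: backpropagate the loss to the input pixels, then step each
# pixel a small amount in the direction that increases the loss.
sign_image.requires_grad_(True)
loss = nn.functional.cross_entropy(classifier(sign_image), true_label)
loss.backward()

epsilon = 0.1  # perturbation budget: how far each pixel may move
adversarial = (sign_image + epsilon * sign_image.grad.sign()).clamp(0, 1)

# With an untrained toy network the prediction may or may not flip;
# against a trained classifier, a small epsilon often suffices.
with torch.no_grad():
    print("clean prediction:    ", classifier(sign_image).argmax(dim=1).item())
    print("perturbed prediction:", classifier(adversarial).argmax(dim=1).item())
```

The physical-world version of this attack additionally confines the perturbation to sticker-shaped regions and optimises it across many photographs of the sign, which is what allows it to keep working outside laboratory conditions.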
"We hypothesise that given the similar appearance of warning signs, small perturbations are sufficient to confuse the classifier," reads the report. "In future work, we plan to explore this hypothesis with targeted classification attacks on other warning signs."
The implications of Kohno's team's discoveries are obvious, and scary. Someone with access to or deep knowledge of an autonomous vehicle system, and an equally deep supply of malicious intent, could set up passengers in self-driving cars for some incredibly dangerous situations. Presumably, a self-driving car's safety systems would lessen the potential danger of such a 'hack' by causing the car to stop or swerve once the danger became apparent. However, there's probably only so much these systems could do to avoid a T-bone collision in an intersection when multiple vehicles are converging on one another at high speed.
Hiring cyber security experts has become a top priority for OEMs in the autonomous vehicle space, as the devastating outcomes that could result from any sort of hack have certainly kept more than a few engineers awake at night. Rather than shy away from the thought, researchers like Kohno have leaned into the darkest possibilities of our autonomous vehicle future. And every time they do, those possibilities move a step further from becoming reality.