
When AI goes bad

16th April 2018
Anna Flockett

The Future of Humanity Institute recently followed up on last year’s survey about AI exceeding human performance in a variety of tasks with a report titled ‘The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.’

Guest blog written by Mychal McCabe.

The report focuses on three macro changes to the threat landscape: expansion of existing threats, introduction of new threats, and a change in the character of existing threats.

Fundamental to this discussion is the notion that AI is a ‘dual use’ technology: “AI systems and the knowledge of how to design them can be put toward both civilian and military uses, and more broadly, toward beneficial and harmful ends.”

The popular imagination is concerned with AI at large in the world, or a world overrun with indifferent or malevolent autonomous systems. Such systems tend to be viewed as either science projects or science fiction, but a look across multiple market segments suggests that we’re entering the early majority phase of the technology adoption lifecycle for systems moving from automatic to autonomous. Consider that DARPA’s first Autonomous Vehicle Challenge took place in 2004, that Google and Amazon have been working on their autonomous drone fleets since 2012, and that IBM began lobbying the FDA to let Watson assess cancer screening scans in 2013.

AI in particular is emerging as a mainstream capability for everything from marketing automation to smart factories, but that doesn’t mean it is well understood. Consider Facebook’s unplugging of an AI project in which two computers began to communicate with one another in a language that wasn’t understood by the humans assigned to the project; mainstream media described this story as a ‘creepy preview of our potential future.’ Only it didn’t really happen, at least not in the way suggested by the headlines.

More likely than an incomprehensible, creepily pervasive and indifferent AI entity at large in the world are threats such as data poisoning, adversarial examples, and the exploitation of the goal orientation of autonomous systems.

As the Future of Humanity Institute points out, ‘these are distinct from traditional software vulnerabilities (e.g. buffer overflows) and demonstrate that while AI systems can exceed human performance in many ways, they can also fail in ways that a human never would.’
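
To make the adversarial-example failure mode concrete, the short sketch below trains a toy linear classifier and then nudges every feature of a correctly classified input slightly against the sign of the learned weights, in the spirit of the fast gradient sign method. The data, model and numbers are entirely synthetic and invented for illustration, not taken from the report; the point is simply that no single feature changes by more than the data’s own noise level, yet the accumulated effect flips the prediction – a mistake a human reviewer would be unlikely to make.

```python
# Minimal, self-contained sketch of an adversarial example against a toy
# linear classifier. All data, parameters and thresholds are synthetic.
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 400                        # feature count and samples per class

# Two-class data: class 0 centred at -0.2 per feature, class 1 at +0.2,
# with per-feature noise much larger than that gap.
X = np.vstack([rng.normal(-0.2, 0.7, (n, d)), rng.normal(+0.2, 0.7, (n, d))])
y = np.hstack([np.zeros(n), np.ones(n)])

# Fit a plain logistic regression by gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

predict = lambda v: int(v @ w + b > 0)

# Take a prototypical class-1 input and nudge every feature slightly
# against the sign of the model's weights (the fast-gradient-sign idea).
x = np.full(d, 0.2)                   # clean input, confidently class 1
eps = 0.3                             # per-feature nudge, below the noise level
x_adv = x - eps * np.sign(w)

print("clean prediction:      ", predict(x))      # 1
print("adversarial prediction:", predict(x_adv))  # flips to 0, although no
                                                  # single feature moved much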

The clear trend, across multiple critical infrastructure sectors, from systems characterised by automatic operation to those characterised by autonomous operation will usher AI into Operational Technologies (OT), including but not limited to autonomous vehicles, control and process domains, and other systems with safety-critical requirements. System architectures and certification approaches must evolve with these requirements in mind.

Separating workloads whose outputs can vary with the response of an AI-capable system from workloads with fixed, deterministic outcomes should be an essential consideration for those architecting such systems.
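
As a minimal sketch of that separation, consider a hypothetical speed-control scenario (the planner, supervisor and limits below are invented for illustration and do not describe any particular Wind River design): the AI workload only proposes an action, while a small, fixed-outcome supervisor enforces hard safety limits on its own.

```python
# Sketch only: the (possibly non-deterministic) AI planner proposes a command,
# and a deterministic supervisor with auditable, fixed behaviour bounds it.
import random

def ai_planner(sensor_speed_mps: float) -> float:
    """Stand-in for an AI workload whose output may vary from run to run."""
    return sensor_speed_mps + random.uniform(-5.0, 10.0)  # proposed new speed

def deterministic_supervisor(proposed_mps: float) -> float:
    """Fixed-outcome logic: the same input always yields the same output."""
    HARD_LIMIT_MPS = 25.0                                  # illustrative limit
    if proposed_mps != proposed_mps or proposed_mps < 0.0:  # NaN or negative
        return 0.0                                          # fail safe: stop
    return min(proposed_mps, HARD_LIMIT_MPS)

def control_step(sensor_speed_mps: float) -> float:
    proposal = ai_planner(sensor_speed_mps)        # variable-outcome workload
    return deterministic_supervisor(proposal)      # deterministic workload

print(control_step(20.0))   # never exceeds 25.0, whatever the planner proposes
```

Keeping the supervisor in a separately certified, deterministic workload means the AI component can be updated or retrained without re-opening the safety argument for the limiting logic.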

Consolidating and separating workloads with differing levels of safety criticality and performance criteria is an area where Wind River has deep expertise across multiple industries.

Similarly, the ability to understand expected and desired outcomes at the system level and to identify deltas in real time will be critical. Simulation and digital twin technologies, including Wind River Simics, have a role to play in setting such behavioural baselines and monitoring against them over time.
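
The sketch below illustrates the behavioural-baselining idea in its simplest form: an expected trajectory is recorded offline (for example from a simulated “known good” run) and live readings are compared against it cycle by cycle. The baseline, tolerance and readings are synthetic, and this is not a Simics API, only an outline of the monitoring pattern.

```python
# Sketch of behavioural baselining: compare live readings against an
# expected trajectory and flag deltas that exceed a tolerance.
import numpy as np

# Expected sensor readings per control cycle, e.g. captured from a
# simulated reference run (synthetic values here).
baseline = np.sin(np.linspace(0, 2 * np.pi, 100))
tolerance = 0.15                      # acceptable delta, chosen for illustration

def check_cycle(cycle: int, observed: float) -> bool:
    """Return True if the observed value stays within the baseline envelope."""
    delta = abs(observed - baseline[cycle])
    if delta > tolerance:
        return False                  # deviation: raise an alert upstream
    return True

# Live readings that drift away from the baseline partway through the run.
live = baseline + np.where(np.arange(100) < 60, 0.02, 0.4)
alerts = [c for c in range(100) if not check_cycle(c, live[c])]
print(f"{len(alerts)} cycles deviated from the behavioural baseline")
```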

Wind River looks forward to working with system operators, integrators, manufacturers and the broader ecosystem of innovators driving the automatic-to-autonomous trend, to ensure that the software-defined autonomous world of the future is a safe and secure reality.

Courtesy of Wind River.
