To get a handle on ethical AI, we must first secure the Edge

15th December 2020
Lanna Deamer

As technology advances, Artificial Intelligence (AI) is becoming a tangible part of our everyday lives, assisting us with transportation and healthcare, protecting the environment and even helping us at home. In the coming decades, AI applications will go much further.

Guest blog written by Lars Reger, NXP Semiconductors

Combined with a multitude of sensors and devices, AI will extend into the core functions of society, including education, social care, scientific research, law and defence.

Although the majority of AI applications will be built to benefit humanity, the technology can also be put to more nefarious purposes, such as cyberattacks, social engineering and mass surveillance. As machines begin to take autonomous actions that can endanger the safety of humans, the need to develop a universally recognised code of ethics to govern the development of AI becomes more pressing; a task made all the more daunting because ethics often differ vastly across cultural groups, and even among individuals with similar socioeconomic backgrounds.

The US and EU have already started work on developing policies and laws targeting AI, taking a ‘human-centric’ approach to AI ethics. A number of tech companies and other organisations (including the Vatican) have also joined forces to discuss the development of an ethical code of conduct for AI centred around the principles of transparency, fairness, safety and privacy.

However, the process of making AI ethical is not simply limited to coding the perfectly virtuous machine. As with a number of existing industries, AI applications and devices need robust frameworks and support to ensure the safest, most ethical decision-making, but they also need to be equipped with physical fail-safes in the event that things do go wrong.

Now that the Internet of Things (IoT) is fast maturing into a fully functional ecosystem with explosive growth in edge devices, AI already forms a big part of edge computing. Deloitte estimated that more than 750 million AI chips would be sold in 2020, a figure it expects to exceed 1.5 billion by 2024. Once dependent on the kind of processing power that only data centres could offer, AI chips have become an integral part of smartphones, smart speakers and security cameras, enabling these edge devices to run real-time machine learning and reducing their reliance on an internet connection to perform AI/ML functions.
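
To make that concrete, the sketch below shows roughly what on-device inference looks like in Python, assuming a pre-trained TensorFlow Lite model (the file name model.tflite and the random input frame are placeholders for illustration): once the model is on the device, no cloud round-trip is needed.

    # Rough sketch of on-device ML inference of the kind edge devices run
    # locally. Assumes a pre-trained TensorFlow Lite model; "model.tflite"
    # and the random input frame are placeholders for illustration.
    import numpy as np
    from tflite_runtime.interpreter import Interpreter  # lightweight edge runtime

    interpreter = Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()
    input_info = interpreter.get_input_details()[0]
    output_info = interpreter.get_output_details()[0]

    # Build one input frame with the shape/dtype the model expects
    # (e.g. a camera image or a window of sensor samples).
    frame = np.random.random_sample(input_info["shape"]).astype(input_info["dtype"])

    interpreter.set_tensor(input_info["index"], frame)
    interpreter.invoke()  # inference runs entirely on the device
    print(interpreter.get_tensor(output_info["index"]))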

Designing trustworthy AI/ML requires a focus on the design, development and deployment of AI systems that learn from and collaborate with humans in a deep, meaningful way. Security and privacy must be taken into account at the very beginning of a new system architecture; they cannot be added as an afterthought. The highest appropriate level of security and data protection must be applied to all hardware and software, pre-configured into the design, functionalities, processes, technologies, operations, architectures and business models. This also requires risk-based methodologies and verification to be implemented as baseline requirements across the entire supply chain. The Charter of Trust, a cyber security initiative for the IoT, has already provided an excellent template for this.
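
As one small, concrete illustration of what such a baseline verification requirement can look like in practice, the hedged sketch below (using the Python cryptography package; the key handling and firmware blob are simplified placeholders) shows a device refusing to boot any image whose signature does not verify against a public key fixed at manufacture.

    # Illustrative sketch of one "security by design" baseline: a device
    # refuses to run a firmware image unless its signature verifies against
    # a public key baked in at manufacture. Key handling is simplified here.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # At the factory / build server: sign the firmware image.
    signing_key = Ed25519PrivateKey.generate()
    firmware = b"\x7fELF...device firmware image..."  # placeholder blob
    signature = signing_key.sign(firmware)

    # On the device: the public key is immutable (e.g. in ROM or eFuses).
    trusted_pubkey = signing_key.public_key()

    def boot(image: bytes, sig: bytes) -> bool:
        """Verify the image before executing it; reject anything unsigned."""
        try:
            trusted_pubkey.verify(sig, image)
            return True   # signature valid: safe to boot
        except InvalidSignature:
            return False  # tampered or unsigned: refuse to boot

    print("boot allowed:", boot(firmware, signature))
    print("tampered image allowed:", boot(firmware + b"\x00", signature))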

Once we have identified a set of underlying principles that govern the development of AI, how do we then ensure that these ethical AI systems do not become compromised? Machine learning can be used to monitor data streams and detect anomalies, but it can also be used by hackers to enhance the effectiveness of their cyberattacks. The integrity and security of AI systems are therefore just as important as the ethical programming of the AI itself.
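
As a minimal illustration of the monitoring side, the sketch below flags readings in a sensor stream that stray too far from a rolling baseline; the window size and threshold are arbitrary illustration values, and a production system would use a trained model rather than a simple z-score.

    # Minimal sketch of anomaly monitoring on a data stream: flag any reading
    # that deviates too far from a rolling baseline. The window size and
    # threshold are arbitrary illustration values, not tuned recommendations.
    from collections import deque
    from statistics import mean, stdev

    def monitor(stream, window=50, threshold=4.0):
        """Yield (index, value) for readings that deviate from the baseline."""
        history = deque(maxlen=window)
        for i, x in enumerate(stream):
            if len(history) >= 10:  # wait until a minimal baseline exists
                mu, sigma = mean(history), stdev(history)
                if sigma > 0 and abs(x - mu) / sigma > threshold:
                    yield i, x      # reading is >threshold sigmas from baseline
            history.append(x)

    # Example: a steady sensor with one injected spike.
    readings = [20.0 + 0.1 * (i % 5) for i in range(200)]
    readings[120] = 35.0            # the anomaly a fault or attacker might cause
    for idx, val in monitor(readings):
        print(f"anomaly at sample {idx}: {val}")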

It is imperative for AI systems to process input data while still respecting user privacy. This can be achieved by encrypting all communications, ensuring the confidentiality and integrity of data as well as authentication. Edge AI systems are starting to use some of the most advanced cryptographic techniques available, including homomorphic encryption and attribute-based encryption.
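
Homomorphic encryption is worth unpacking, because it allows computation on data that is never decrypted. The toy Paillier-style sketch below (pure Python, with deliberately tiny hard-coded primes; real systems use vetted libraries and far larger keys) shows how a gateway could add two readings without ever seeing them.

    # Toy demonstration of additively homomorphic encryption (the Paillier
    # scheme): multiplying ciphertexts adds their plaintexts, so a gateway can
    # aggregate readings without decrypting them. The tiny hard-coded primes
    # are for illustration only; real keys are 2048 bits or more.
    import math
    import random

    p, q = 293, 433                      # toy primes, never this small in practice
    n, n2 = p * q, (p * q) ** 2
    g = n + 1                            # standard simplified generator choice
    lam = math.lcm(p - 1, q - 1)

    def L(x):
        return (x - 1) // n

    mu = pow(L(pow(g, lam, n2)), -1, n)  # precomputed decryption constant

    def encrypt(m):
        r = random.randrange(1, n)
        while math.gcd(r, n) != 1:
            r = random.randrange(1, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        return (L(pow(c, lam, n2)) * mu) % n

    # Homomorphic property: multiplying ciphertexts adds the plaintexts.
    c1, c2 = encrypt(20), encrypt(22)
    aggregate = (c1 * c2) % n2           # computed without any decryption
    print("decrypted sum:", decrypt(aggregate))  # -> 42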

In order to prevent or defend against attacks on AI that aim to extract sensitive information from secure systems, the focus must be on how to leverage hardware security to improve overall system security and data privacy. Sophisticated secure devices must be equipped with countermeasures that repel a broad range of logical and physical attacks, such as side-channel or template attacks.
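
Most such countermeasures live in silicon, but the principle carries over to software. In the illustrative sketch below, a secret comparison always performs the same amount of work regardless of where the first mismatching byte occurs, so its timing leaks nothing, whereas a naive early-exit compare would reveal how many leading bytes an attacker has guessed correctly.

    # Software illustration of one side-channel countermeasure: constant-time
    # comparison of a secret value such as a MAC tag.
    import hmac

    def naive_compare(a: bytes, b: bytes) -> bool:
        if len(a) != len(b):
            return False
        for x, y in zip(a, b):
            if x != y:
                return False  # early exit: timing depends on the secret
        return True

    def constant_time_compare(a: bytes, b: bytes) -> bool:
        if len(a) != len(b):
            return False
        diff = 0
        for x, y in zip(a, b):
            diff |= x ^ y     # accumulate differences, never exit early
        return diff == 0

    secret_tag = b"\x9a\x11\xc4\x02\x77\xe8\x3b\x5d"
    guess = b"\x9a\x11\x00\x00\x00\x00\x00\x00"
    print(constant_time_compare(secret_tag, guess))  # False, in fixed time
    print(hmac.compare_digest(secret_tag, guess))    # stdlib equivalent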

Perhaps the biggest challenge is that the AI ecosystem is not homogenous; it can currently be likened to a patchwork quilt made up of contributions from various creators that in the end must be stitched together to make a single blanket. At this point in time, the share of accountability and levels of trust between these various actors are not the same across the board. Allowing even the smallest holes in the ‘security and privacy by design’ principle could cause the entire ecosystem to collapse if they are found and exploited by attackers. It is therefore essential that all actors in the development and operation of AI work towards interoperable and assessable security.

It will take some time for AI stakeholders to agree on a universally recognised code of ethics, and it will take even more time for people to be able to trust that machines can act in the best interests of humanity. But it is not enough to own the soul of AI; we must also guard it against corruption.

There is a lot of groundwork to be done in preparation: safety and security provisions must be standardised across the entire edge. The certification of silicon, connectivity and transactions must therefore be the central focus for chipmakers and customers alike as we collaborate to form the building blocks for the secure and trustworthy AI systems of the future.

To underscore our commitment to the ethical development of AI components and systems, we have published a whitepaper entitled 'The Morals of Algorithms', which details our comprehensive framework of AI principles: non-maleficence, human autonomy, explicability, continued attention and vigilance, and privacy and security by design.
