
How can we build trust in AI?

19th November 2020
Lanna Deamer

The pandemic’s disruption may have led to a profound shift in our attitudes towards the exchange of data. Previously, individuals would have been unlikely to hand over sensitive personal information to the Government.

Written by Kalliopi Spyridaki, Chief Privacy Strategist, Europe and Asia Pacific, SAS

But in the midst of a global pandemic, people have become more willing to share their data with government test, trace and isolate programmes, wanting to play their part in overcoming the current crisis.

For many reasons, some countries have been more effective at containing the virus than others. But for all the differences in each country’s national response, the successes noted across Europe and Asia Pacific find common ground in a commitment to upholding data protection standards. It is likely that the General Data Protection Regulation (GDPR) has had a domino effect, driving confidence in data sharing across Europe and among its trading partners.

This unprecedented openness towards data sharing driven by the pandemic may provide a valuable example of how to build trust in Artificial Intelligence (AI) technologies going forward. Notably, to lay the foundations for trusted AI, we need to develop robust and secure technology, create a culture of digital trust that encourages data sharing, and design a regulatory framework that promotes the responsible use of AI.

The changing data landscape

Europe is moving towards a data-agile economy. The region is seeking to address many of the weaknesses that have limited the competitiveness of European companies, most notably the lack of access to large quantities of high-quality data. Such data is an integral asset in the race to develop powerful AI solutions that can greatly enhance business insight and efficiency.

Under the recently adopted European Data Strategy, the European Union (EU) will propose a Data Governance Act by the end of 2020 and a Data Act in 2021, which will aim to foster business-to-government data sharing in the public interest as well as to support business-to-business data sharing. The aspiration is to create a genuine single market for data and common data pools that organisations can tap for growth and innovation.

Core to the strategy remains a continued respect for citizens’ rights and freedoms. Consistent with Europe’s stance on the protection of fundamental rights including privacy, the new data ecosystem is unlikely to mandate data sharing as a general rule. The new requirements will need to take into account the existing body of consumer rights and are likely to enhance organisations’ responsibility for keeping customer data secure.

In parallel, the EU will propose legislation in early 2021 that aims to drive a horizontal, risk-based and precautionary approach to the development and use of AI. While the specifics are still taking shape, the legislation will advance transparency, accountability and consumer protection. This is likely to be achieved by requiring organisations to adhere to robust AI governance and data quality requirements.

Just as citizens’ digital trust has contributed to the success of many test and trace programmes, the upcoming AI legislation will likely help entrench this trust in the realm of AI. Notably, European legislation on data and AI is likely to have implications across the world.

Much as the GDPR prompted other nations to enact similar data protection laws, the new data and AI legislation may create a global ripple effect. As the UK develops its own strategies for data sharing and AI development, lawmakers will surely be keeping a close eye on Europe.

Employing a framework to build trust

An active and inclusive culture of data sharing between governments, tech giants, startups and consumers is critical to creating tomorrow’s AI applications, and digital trust is the necessary foundation for it. In their management of data and development of AI, organisations should strive to build confidence with consumers beyond merely complying with applicable standards.

Policymakers have the power and responsibility to facilitate this process. But the task is not easy. Regulatory intervention needs to be balanced so that it does not stifle AI innovation and adoption. At the same time, it must give clear, consistent and flexible guidance on how to develop and use trustworthy, safe and accountable AI.

Balanced regulation also helps to incentivise the market: the arrival of the GDPR has seen companies compete on their ability to uphold consumer privacy. But this new privacy culture is also being driven by external factors beyond regulation. The current crisis, as well as the changing attitudes spurred on by this year’s diversity and inclusion movements, has accentuated the importance of ethics in all walks of life, including the implementation of emerging technologies.

Thus, incorporating ethics into a legal framework will help to build trust in AI and drive forward its full adoption. Legislators have the opportunity to develop flexible rules that will help build a culture of responsible AI, with an emphasis on fairness, accountability and transparency. The design and deployment of responsible and trusted AI will enable businesses, governments and individuals to reap the benefits of using data for good and to harness the AI opportunity for sustainable growth and prosperity.
