Safety of advanced AI under the spotlight in new report
New research, supported by over 30 nations as well as representatives from the EU and the UN, has highlighted the risks AI could pose if governments and society do not deepen their collaboration on AI safety.
The first iteration of the International Scientific Report on the Safety of Advanced AI has been published, following its launch at the AI Safety Summit. This report stems from the landmark Bletchley Declaration, a key outcome of the Bletchley Park discussions.
Initially launched as the State of Science report last November, this comprehensive study unites a global team of AI experts, including an Expert Advisory Panel from 30 leading AI nations, and representatives from the UN and the EU. The report aims to provide policymakers worldwide with a single source of information on AI capabilities and risks to inform their approaches to AI safety.
This report acknowledges the positive uses of advanced AI, such as boosting wellbeing, prosperity, and scientific breakthroughs in fields like healthcare, drug discovery, and climate change mitigation. However, it also warns of potential harms, including disinformation campaigns, fraud, and scams by malicious actors. Future AI developments could pose wider risks, including labour market disruption and economic inequalities.
The report notes a lack of universal agreement among AI experts on topics such as current AI capabilities and their potential evolution. It explores differing opinions on extreme risks, such as large-scale unemployment, AI-enabled terrorism, and loss of control over the technology. The report underscores the need for improved understanding, emphasising that the future decisions of societies and governments will have a significant impact.
Secretary of State for Science, Innovation, and Technology, Michelle Donelan, stated: “AI is the defining technology challenge of our time, but I have always been clear that ensuring its safe development is a shared global issue. When I commissioned Professor Bengio to produce this report last year, I was clear it had to reflect the enormous importance of international cooperation to build a scientific evidence-based understanding of advanced AI risks. This is exactly what the report does.
“Building on the momentum we created with our historic talks at Bletchley Park, this report will ensure we can capture AI’s incredible opportunities safely and responsibly for decades to come.
“The work of Yoshua Bengio and his team will play a substantial role in informing our discussions at the AI Seoul Summit next week, as we continue to build on the legacy of Bletchley Park by bringing the best available scientific evidence to bear in advancing the global conversation on AI safety.”
This interim publication focuses on advanced ‘general-purpose’ AI, which includes systems capable of producing text and images and of making automated decisions. The final report, expected to be published in time for the AI Action Summit hosted by France, will incorporate feedback from industry, civil society, and the AI community to ensure it remains up to date with the latest research and developments.
Professor Yoshua Bengio, Chair of the International Scientific Report on the Safety of Advanced AI, said: “This report summarises the existing scientific evidence on AI safety to date, and the work led by a broad swath of scientists and panel members from 30 nations, the EU and the UN over the past six months will now help inform the next chapter of discussions of policy makers at the AI Seoul Summit and beyond.
“When used, developed, and regulated responsibly, AI has incredible potential to be a force for positive transformative change in almost every aspect of our lives. However, because of the magnitude of impacts, the dual use, and the uncertainty of future trajectories, it is incumbent on all of us to work together to mitigate the associated risks in order to be able to fully reap these benefits.
“Governments, academia, and the wider society need to continue to advance the AI safety agenda to ensure we can all harness AI safely, responsibly, and successfully.”
Professor Andrew Yao from the Institute for Interdisciplinary Information Sciences, Tsinghua University, noted: "A timely and authoritative account on the vital issue of AI safety."
Marietje Schaake, International Policy Director at Stanford University Cyber Policy Centre, remarked: "Democratic governance of AI is urgently needed, on the basis of independent research, beyond hype. The Interim International Scientific Report catalyses expert views about the evolution of general-purpose AI, its risks, and what future implications are. While much remains unclear, action by public leaders is needed to keep society informed about AI, and to mitigate present day harms such as bias, disinformation and national security risks, while preparing for future consequences of more powerful general purpose AI systems."
Nuria Oliver, PhD, Director of ELLIS Alicante, the Institute of Humanity-centric AI, stated: "This must-read report – which is the result of a collaborative effort of 30 countries – provides the most comprehensive and balanced view to date of the risks posed by general purpose AI systems and showcases a global commitment to ensuring their safety, such that together we create secure and beneficial AI-based technology for all."
This year promises significant advancements in AI technology, with increasingly capable AI models expected to hit the market. The speed of AI’s development is a focal point of today’s report, which notes rapid progress but acknowledges disagreement on current capabilities and uncertainty about the long-term sustainability of this pace.
The UK has rapidly established itself as a leader in AI safety, supported by the establishment of the AI Safety Institute, backed by £100 million in funding. The Institute has already formed an alliance with the United States on AI safety and published its approach to model safety evaluations earlier this year.
The AI Seoul Summit this month represents an important opportunity to cement AI safety’s place on the international agenda. Attendees will use the interim report to further discussions initiated at November’s AI Safety Summit. A final edition of the report is expected ahead of the next round of discussions on AI safety, to be hosted by France.