Expertise drives international AI safety report
The international AI safety report is getting a boost as a gathering of international experts drives the project forward.
A raft of international talent and expertise from 30 leading AI nations, as well as representatives of the EU and UN, will drive forward the first edition of the International Scientific Report on Advanced AI Safety, bringing together the best scientific research on the capabilities and risks of frontier AI. Following a meeting in Canada, Technology Secretary Michelle Donelan and Yoshua Bengio, one of the godfathers of AI, have unveiled this crack team of global talent who will now play a crucial role in advising on the report’s development and content.
The report was first unveiled as the State of the Science Report at the UK AI Safety Summit in November, and will help inform discussions at future AI Safety Summits and wider policy-making around the world. Building on the legacy of Bletchley Park and underlining the critical importance of continuing the global conversation on AI safety, the flagship report is today rebranded as the International Scientific Report on Advanced AI Safety.
Its new international Expert Advisory Panel features nominees from nations invited to the UK’s AI Safety Summit, as well as representatives of the EU and the UN. 32 prominent international figures, such as Dr Hiroaki Kitano (CTO of Sony, Japan), Amandeep Gill (UN Envoy on Technology), and the UK’s Chief Scientific Adviser Dame Angela McLean, will now set to work advising on the report’s development. The panel will engage regularly throughout the report’s development to build a broad consensus on global AI safety research, as the report looks to improve understanding of powerful AI systems, their capabilities, and the associated risks worldwide. The report will be published in two iterations: initial findings are due to be released ahead of the Republic of Korea’s AI Safety Summit in the spring, with a second publication to coincide with talks due to be hosted by France.
Secretary of State for Science, Innovation and Technology Michelle Donelan said: “The International Scientific Report on Advanced AI Safety will be a landmark publication, bringing the best scientific research on the risks and capabilities of frontier AI development under one roof.
“The report is one part of the enduring legacy of November’s AI Safety Summit, and I am delighted that countries who agreed the Bletchley Declaration will join us in its development.
“The International Expert Advisory Panel will ensure a diverse range of opinions are contributing to the report, as we continue to lead the global conversation on the safe development of AI.”
Professor Yoshua Bengio said: “I’m delighted to confirm the breadth of international talent who will be working on the International Scientific Report on Advanced AI Safety.
“The publication will be an important tool in helping to inform the discussions at AI Safety Summits being held by the Republic of Korea and France later this year, bringing together the best scientific research on advanced AI safety.
“Countries who agreed to the Bletchley Declaration will all have a hand in its writing, building on the legacy of November’s summit at Bletchley Park and ensuring discussions on AI safety will continue to be an international endeavour.”
The announcement comes as the Technology Secretary visits Montréal for talks with Yoshua Bengio at Quebec’s AI Institute Mila, on the final leg of a three-day trip to Canada. The Secretary of State has also taken part in engagements in Toronto and Ottawa earlier this week, meeting her Canadian counterpart, Minister Champagne, and announcing a deepening of science and innovation ties between the two countries.
Also announced today are the guiding principles which will help shape the report, inspired by best practices in similar initiatives such as those of the IPCC. Its drafting will be underpinned by comprehensiveness, objectivity, transparency, and scientific assessment, a framework which will ensure a thorough, robust, and balanced examination of the risks of AI.