
AI Safety Institute releases safety evaluation platform

Global AI safety evaluations are set to be enhanced as the UK AI Safety Institute’s evaluations platform is made available to the global AI community, paving the way for safe innovation of AI models.

After establishing the world’s first state-backed AI Safety Institute, the UK is continuing the drive towards greater global collaboration on AI safety evaluations with the release of the AI Safety Institute’s homegrown Inspect evaluations platform. By making Inspect available to the global community, the Institute is helping accelerate the work on AI safety evaluations being carried out across the globe, leading to better safety testing and the development of more secure models. This will allow for a consistent approach to AI safety evaluations around the world.

Inspect is a software library that enables testers, from startups, academia and AI developers to international governments, to assess specific capabilities of individual models and then produce a score based on their results. Inspect can be used to evaluate models in a range of areas, including their core knowledge, ability to reason, and autonomous capabilities. Released under an open-source licence, Inspect is now freely available for the AI community to use.
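Inspect is distributed as a Python package. As a rough illustration only, a minimal evaluation might look like the sketch below: it defines a small question-answering task, asks a model for completions, and scores each answer against a reference. The package and function names (inspect_ai, Task, Sample, generate, match) reflect the library's published Python API, but the sample questions and model name are placeholders, and details may differ from the released version.

    # A minimal sketch of an Inspect evaluation (assumes the inspect_ai
    # Python package; dataset contents and model name are placeholders).
    from inspect_ai import Task, task, eval
    from inspect_ai.dataset import Sample
    from inspect_ai.scorer import match
    from inspect_ai.solver import generate

    @task
    def core_knowledge():
        return Task(
            # Each Sample pairs a prompt with its expected answer.
            dataset=[
                Sample(input="What is the capital of France?", target="Paris"),
                Sample(input="How many bits are in a byte?", target="8"),
            ],
            solver=[generate()],  # ask the model for a completion
            scorer=match(),       # compare the completion to the target
        )

    # Run the task against a chosen model; Inspect scores each sample
    # and aggregates the results into an overall score for the task.
    eval(core_knowledge(), model="openai/gpt-4")

In this sketch the scorer produces a per-sample result and Inspect aggregates those into the task-level score the article describes; in practice, evaluations can swap in larger datasets, multi-step solvers, and custom scorers.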

The platform is available now, marking the first time that an AI safety testing platform spearheaded by a state-backed body has been released for wider use.

Created by some of the UK’s leading AI minds, Inspect is released at a crucial time in AI development, as more powerful models are expected to hit the market over the course of 2024, making the push for safe and responsible AI development more pressing than ever.

Secretary of State for Science, Innovation and Technology Michelle Donelan said: “As part of the constant drumbeat of UK leadership on AI safety, I have cleared the AI Safety Institute’s testing platform, called Inspect, to be open-sourced. This puts UK ingenuity at the heart of the global effort to make AI safe and cements our position as the world leader in this space.

“The reason I am so passionate about this, and why I have open-sourced Inspect, is because of the extraordinary rewards we can reap if we grip the risks of AI. From our NHS to our transport network, safe AI will improve lives tangibly - which is what I came into politics for in the first place.”

AI Safety Institute Chair Ian Hogarth said: “As Chair of the AI Safety Institute, I am proud that we are open-sourcing our Inspect platform.

“Successful collaboration on AI safety testing means having a shared, accessible approach to evaluations, and we hope Inspect can be a building block for AI Safety Institutes, research organisations, and academia.

“We have been inspired by some of the leading open-source AI developers, most notably projects like GPT-NeoX, OLMo and Pythia, which all have publicly available training data and OSI-licensed training and evaluation code, model weights, and partially trained checkpoints. This is our effort to contribute back.

“We hope to see the global AI community using Inspect to not only carry out their own model safety tests, but to help adapt and build upon the open source platform so we can produce high-quality evaluations across the board.”

Alongside the launch of Inspect, the AI Safety Institute, the Incubator for AI (i.AI) and Number 10 will bring together leading AI talent from a range of areas to rapidly test and develop new open-source AI safety tools. Open-source tools are easier for developers to integrate into their models, giving them a better understanding of how those models work and how they can be made as safe as possible. Further details will be announced in due course.

Simon Baxter, Principal Analyst at TechMarketView, adds: “The launch of the Inspect platform by the AI Safety Institute has come quicker than I think many were expecting, but really typifies the speed at which the UK government is moving when it comes to both driving AI innovation and ensuring it is developed in a safe and responsible manner. It is a very positive step, as was the agreement made with the US in April regarding AI testing, recognising that this is an international challenge, particularly with so many of the AI foundation model providers based outside the UK.

“However, we now have hundreds of foundation models and AI-related product releases on an almost weekly basis. It is not practical for any government to test every AI model, so there is going to be a lot of onus on the creators of AI solutions to be responsible for what is developed and how it is used.

“From the perspective of organisations looking to use the latest AI innovations, many understand the need to make sure AI is implemented correctly and responsibly, but have to balance that need with the pace of technological change, and the potential benefits AI can bring to business productivity and enhancing the customer experience. It is much easier to lose customer trust than to gain it, a lesson many have learned the hard way and one all still need to be cognizant of as they seek to apply AI.”
