UN advisory board unveils proposals to plug AI governance gaps
A United Nations (UN) advisory board on artificial intelligence (AI) has unveiled seven key recommendations aimed at addressing gaps in AI governance and ensuring the safe and responsible development of AI systems worldwide.
Formed last year, the 39-member advisory body was tasked with tackling the complex issues surrounding the global regulation of AI. Its recommendations will be a focal point of discussion at the upcoming UN summit this September.
One of the primary proposals put forward is the establishment of a dedicated panel to provide impartial and credible scientific advice on AI. This panel would help bridge the information divide between AI research laboratories and the broader global community, ensuring that AI developments are transparent and accessible.
The rapid advancement of AI technology has sparked growing concerns about its potential misuse, including the spread of misinformation and fake news, and the infringement of copyright.
The European Union has already made strides in regulating AI with the introduction of its comprehensive AI Act, which has set a standard for other regions to follow. However, the UN warns that the global development of AI is largely controlled by a handful of multinational corporations, raising the risk that the technology will be deployed without sufficient public input or oversight.
To address this, the advisory body has also recommended initiating a global policy dialogue on AI governance, establishing an AI standards exchange, and creating a global AI capacity development network. These initiatives are designed to enhance governance capabilities and promote the responsible use of AI technology.
Jay Limburn, Chief Product Officer at Ataccama, welcomed the UN’s efforts: “Bridging gaps in AI governance is a positive step forward, especially as progress in developing AI safeguards has been limited, aside from the EU AI Act, since last year’s Safety Summit. While it’s crucial to mitigate the risks posed by AI, regulation should be balanced to avoid stifling innovation. A clear governance structure for AI is an important move to oversee its development and applications.”
Limburn further emphasised the importance of data quality in responsible AI use: “Responsible AI relies on high-quality data. Clean, consolidated data inputs are essential for producing trustworthy outcomes and actionable insights. As part of the AI governance framework, there should be a strong focus on ensuring organisations work with accurate data to improve efficiency and reduce the risk of poor outputs. While regulation is necessary, we must be cautious not to overregulate in a way that stifles innovation. Instead, guidelines for AI development, safety testing, and governance should be implemented to support AI’s growth.”
Among other significant proposals, the UN advisory body has advocated for the creation of a global AI fund to promote international collaboration and address capacity gaps. Additionally, it suggests establishing a global AI data framework to ensure transparency and accountability in the use of AI systems.
Libero Raspa, Director of adesso UK, commented on the importance of governing AI’s development: “Regulating AI is essential for guiding how businesses develop and adopt AI systems, but overregulation could hinder the potential of ground-breaking AI solutions. AI is a key driver of global innovation, helping to accelerate automation and boost efficiencies, giving businesses a competitive edge. Regulators should collaborate with business leaders and AI experts to instil confidence in AI adoption while addressing concerns and minimising risks.”