
The EU AI Act: A new dawn for AI regulation

18th July 2024
Sheryl Miles

The Artificial Intelligence Act (AIA) is a legislative framework adopted by the European Parliament and the Council of the EU to regulate the use of AI within the European Union.

Its primary objectives are to promote the development and uptake of human-centric and trustworthy AI, ensuring a high level of protection for health, safety, and fundamental rights while supporting innovation.

The AIA applies to AI providers and deployers within the EU, as well as those outside the EU whose AI systems produce outputs used within the EU. This broad scope ensures the regulation has wide-reaching impact and creates a uniform legal framework across member states.

The AIA's risk-based approach categorises AI systems according to their potential risk to health, safety, and fundamental rights. High-risk AI systems, such as those used in critical infrastructure, education and vocational training, employment, essential services, law enforcement, and migration management, are subject to stringent requirements. These include rigorous testing, comprehensive documentation, and ongoing monitoring to ensure compliance and safety.

Notably, the AIA prohibits several AI practices outright. These include AI systems that manipulate human behaviour subliminally or exploit vulnerabilities based on age, disability, or socioeconomic status. Additionally, AI systems used for social scoring, predictive policing based solely on profiling, and creating or expanding facial recognition databases through untargeted scraping are banned under the new regulation.

Transparency is another cornerstone of the AIA. AI systems that interact with humans, perform emotion recognition, or engage in biometric categorisation must clearly disclose their nature to users. This measure ensures that individuals are aware when they are interacting with AI, promoting transparency and trust.
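The tiered structure described above can be summarised as a simple lookup. The sketch below is purely illustrative, a hypothetical mapping (the names and function are invented, not part of the Act) that restates the example obligations mentioned in this article; it is not legal guidance.

```python
# Illustrative only: a hypothetical mapping of the AIA's risk tiers to the
# example obligations described in the article. Tier names and this helper
# are assumptions for illustration, not terms defined by the regulation.
RISK_TIERS = {
    "prohibited": ["banned outright (e.g. social scoring, untargeted facial recognition scraping)"],
    "high": ["rigorous testing", "comprehensive documentation", "ongoing monitoring"],
    "limited": ["disclose AI nature to users (transparency)"],
    "minimal": ["no specific obligations"],
}

def obligations_for(tier: str) -> list[str]:
    """Return the illustrative obligations for a given risk tier."""
    return RISK_TIERS.get(tier, ["unknown tier"])
```

As Phil Burr notes below, most deployed AI falls into the minimal or low-risk tier, where the compliance burden is small.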

Governance and enforcement of the AIA will be overseen by a newly established European Artificial Intelligence Board. This body, along with national competent authorities in each Member State, will supervise and ensure compliance with the regulation. The AI Office will support these efforts, developing Union expertise and capabilities in AI.

To support innovation, particularly among small and medium-sized enterprises (SMEs) and startups, the AIA includes measures such as AI regulatory sandboxes. These controlled testing environments will allow new AI technologies to be safely developed and refined, ensuring that innovative solutions can flourish within a compliant framework.

Phil Burr, Head of Product at Lumai, emphasises the importance of the AIA for businesses: "When it comes to the EU AI Act, the biggest risk for business will be to ignore it. The good news is that the Act takes a risk-based approach and, given that the vast majority of AI will be minimal or low-risk, the requirements on businesses using AI will be relatively small. It’s likely to be far less than the effort required to implement the GDPR regulations, for example."

Burr also addresses concerns that the Act might deter businesses from deploying AI: "Another risk is that businesses are put off from deploying AI because of the Act. They shouldn’t be! Again, for the majority of businesses, the requirements are small, yet the benefits can be transformational."

However, Burr notes that compliance will require ongoing diligence: "The biggest problem for compliance is the need to document and then perform regular assessments to ensure that the AI risks – and therefore requirements – haven’t changed. For the majority of businesses, there won’t be a change in risk, but businesses at least need to remember to perform these."

The AIA in summary:

  • Scope and application: Applies to AI providers and deployers within the EU, and those outside the EU whose AI systems produce outputs used within the EU.
  • Risk-based approach: Categorises AI systems based on their potential risk to health, safety, and fundamental rights.
  • High-risk AI systems: Subject to stringent requirements including rigorous testing, documentation, and ongoing monitoring.
  • Prohibited AI practices: Bans manipulation of behaviour, exploitation of vulnerabilities, social scoring, and untargeted facial recognition scraping.
  • Transparency requirements: AI systems interacting with humans or performing biometric categorisation must disclose their nature.
  • Governance and enforcement: Overseen by the European Artificial Intelligence Board and national competent authorities.
  • Support for innovation: Includes AI regulatory sandboxes to support SMEs and startups.
  • General-purpose AI models: Specific provisions for AI models with systemic risks, requiring strict guidelines and Commission notification.

The Artificial Intelligence Act is an important step forward in the regulation of AI within the European Union. By balancing the need for innovation with the protection of public interests, the AIA aims to create a trustworthy AI ecosystem that benefits all stakeholders. Businesses, while facing new compliance challenges, have a clear path to integrating AI in ways that can drive transformative benefits.

© Copyright 2024 Electronic Specifier