EU Commission proposes AI liability changes
The European Commission has proposed rules to help those harmed by products controlled by artificial intelligence (AI) and digital devices including drones.
Self-driving cars, voice assistants and search engines could also fall under this category. In a 2020 survey, liability ranked amongst the top three barriers to the use of AI by European companies. It was cited as the most relevant obstacle by companies that were planning to adopt AI but had not yet done so.
The European Commission has set out plans to update EU law so that, by default, a causal link is presumed between the fault of AI system providers and the output produced by their AI systems, creating a legal framework fit for the digital age. The plans are contained in a proposed AI Liability Directive, designed to help consumers bring damages claims when something goes wrong with the operation of an AI system.
The AI Liability Directive proposal covers national liability claims mainly based on the fault of any person, with a view to compensating any type of damage and any type of victim. These measures complement one another to form an overall effective civil liability system.
Under the proposed new AI Liability Directive, the presumption of causality will apply only if claimants can satisfy three core conditions.
- The fault of an AI system provider or user has been demonstrated, or is at least presumed by a court
- It can be considered reasonably likely, based on the circumstances of the case, that the fault has influenced the output produced by the AI system or the failure of the AI system to produce an output
- The claimant has demonstrated that the output produced by the AI system or the failure of the AI system to produce an output gave rise to the damage
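Purely as an illustration of how these conditions combine (this is not language from the proposal, and the names used here are hypothetical), the cumulative test can be sketched as a simple check in which all three elements must hold:

```python
from dataclasses import dataclass


@dataclass
class CausalityClaim:
    """Hypothetical container for the three conditions a claimant would need to show."""
    fault_established: bool               # provider/user fault demonstrated, or presumed by a court
    fault_likely_influenced_output: bool  # reasonably likely the fault influenced the output (or lack of one)
    output_caused_damage: bool            # the output (or failure to produce one) gave rise to the damage


def presumption_of_causality_applies(claim: CausalityClaim) -> bool:
    """The presumption applies only if all three conditions are satisfied."""
    return (
        claim.fault_established
        and claim.fault_likely_influenced_output
        and claim.output_caused_damage
    )


# Example: fault is shown and linked to the output, but no damage is traced to that output
example = CausalityClaim(True, True, False)
print(presumption_of_causality_applies(example))  # False
```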
Taken together, these rules aim to promote trust in AI and other digital technologies by ensuring that victims are effectively compensated if damage occurs despite the preventative requirements of the AI Act and other safety rules.
If passed, the Commission’s rules could run alongside the EU’s proposed Artificial Intelligence Act – the first law of its kind to set limits on how and when AI systems can be used.
The proposal aims to ensure manufacturers can be held liable for changes they make to products already on the market, including changes triggered by software updates or machine learning.
The AI Liability Directive will introduce a “presumption of causality” for those claiming injuries caused by AI-enabled products. What does this mean for victims? They won’t have to untangle complicated AI systems to prove their case, so long as a causal link between the product’s AI performance and the associated harm can be shown. The aim is to alleviate the burden of proof in complex cases.
EU rules on AI liability were trailed alongside the EU’s digital future strategy and AI white paper published in 2020.
The UK’s position differs, as no bespoke AI liability rules have been envisaged. In early 2022, however, the government set out its plans for sectoral, risk-based regulation of AI, an approach that diverges from the one proposed in the EU with the AI Act.