
Blog: What are the EU’s orientations and envisaged choices for the regulation of liability and artificial intelligence?

22.05.2020

Dr. Jan de Bruyne & Orian Dheu (CiTiP KU Leuven)

Artificial intelligence (AI) is a key driver of economic development. Although the use of AI has benefits for a variety of sectors, legal challenges remain. Against this background, several initiatives have been taken by institutions of the European Union (EU). A recurring topic is liability for damage caused by AI-systems.

This blog is based on a previous analysis and evaluates some actions on liability and AI proposed at the EU level. It will do so by analysing recent documents such as the Report on Liability for AI and other Emerging Digital Technologies and the European Commission White Paper on AI as well as the accompanying Report on the Safety and Liability Implications of AI, the Internet of Things and Robotics. We will also analyse the European Parliament’s Draft Report with Recommendations to the Commission on a Civil Liability Regime for AI. Some final thoughts are provided in a conclusion.

Expert Group’s Report on Liability for AI and other Emerging Digital Technologies

Liability and AI is clearly an area of concern for the European Union. The Commission set up an expert group (the ‘New Technologies Formation’) to explore the liability implications of AI. The expert group published its Report on Liability for AI and other Emerging Digital Technologies in November 2019.

The report contains several interesting recommendations:

  • it rejects the idea of granting legal personhood to AI-systems (p. 37);
  • it proposes that certain operators of emerging digital technologies should be subject to strict liability (those who operate in a ‘non private environment’ which could cause ‘significant harm’ - p. 39);
  • it also suggests amending the Product Liability Directive (PLD). According to the PLD, a producer will not be held liable if he proves that it is probable that the defect which caused the damage did not exist at the time when the product was put into circulation or that this defect came into being afterwards. In this regard, the report stresses that product liability should also apply to producers for defects in ‘products or digital content incorporating emerging digital technology’, even if the ‘defect appeared after the product was put into circulation’ (p. 42). It also suggests that the development risk defence should not be made available to producers in cases where ‘it was predictable that unforeseen developments might occur’ (p. 43). Finally, it proposes to reverse the burden of proof in light of the victim’s anticipated difficulties in relying on the product liability regime (p. 44);
  • it suggests that both operators and producers of such new technologies should comply with a range of obligations. For instance, the operator should choose the ‘right system for the right task and skills’, monitor the system and maintain it appropriately (p. 44). Failure to comply with such duties may trigger fault liability regardless of whether the operator may also be strictly liable for the risk created by the technology (p. 44). The producer would have to ‘design, describe and market products in a way effectively enabling operators to comply with (their) duties’ and ‘adequately monitor the product after putting it into circulation’ (p. 44);
  • it proposes that producers equip their technology ‘with means of recording information’ (logging by design) and that failure to ‘give the victim reasonable access to the information should trigger a rebuttable presumption that the condition of liability to be proven by the missing information is fulfilled’ (p. 47);
  • it suggests alleviating the burden of proving causation in cases where certain factors are met (such as the technology’s potential for harm or the existence of multiple possible causes of the damage). However, by default, the burden of proving causation should continue to rest on the victim (p. 49). It also suggests that in cases where it becomes nearly impossible to prove fault, the burden of proof should be reversed (p. 52);
  • it suggests mandating compulsory third-party insurance for certain emerging technologies which carry high risks (p. 61-62). However, the insurer would keep a right of recourse against the liable tortfeasors. It also considers putting compensation funds into place for victims who could not effectively claim compensation because the tortfeasor is difficult to identify or because the technology is uninsured (p. 62).

European Commission White Paper and Accompanying Report on Safety and Liability

The importance of liability is also highlighted in the Commission’s recent White Paper on AI and its associated Report on safety and liability.

The European Commission stresses that any regulatory intervention regarding AI should be targeted and proportionate. That is why it does not want to regulate all AI-systems but only high-risk AI-systems. Systems that are not considered high-risk should only be covered by more general legislation, for example on data protection, consumer protection and product safety/liability. These rules may, however, need some targeted modifications to effectively address the risks created by AI-systems. High-risk AI-systems, by contrast, will have to comply with several specific requirements (e.g. adequate record and data keeping requirements). Strangely, accountability is not mentioned as one of them.

Although the White Paper does not extensively address the issue of liability and AI, it acknowledges that:

  • the legal framework could be improved to address the ‘uncertainty regarding the allocation of responsibilities’ between different economic actors in the supply chain (p. 14);
  • the features of AI-systems may challenge aspects of liability frameworks and could reduce their ‘effectiveness’ (p. 15). For instance, AI technologies’ characteristics would make it harder for victims to ‘trace the damage back to a person’, which can be required for fault-based liability schemes (p. 15);
  • people who have been injured or suffered damage as the result of an AI-system should benefit from an equal level of protection as those having suffered harm caused by other technologies (p. 15);
  • the PLD may need to be amended, while a targeted harmonisation of national liability rules is suggested as well. According to the PLD, a producer is liable for damage caused by a defect in its product. A product is defective when it does not provide the safety which a person is entitled to expect, taking all circumstances into account (cf. criterion of legitimate expectations). It still remains unclear whether software can be qualified as a ‘product’ and especially when it will be considered ‘defective’.

The accompanying Report on safety and liability goes a bit further. After a brief assessment of the legal framework, it considers several points such as:

  • clarifying the scope of the PLD (p. 14), inter alia by considering a revision of the notion of putting a product into circulation (p. 15);
  • reversing or alleviating the burden of proof required by national rules ‘for damage caused by the operation of AI-systems, through an appropriate EU initiative’ and facilitating the burden of proof for victims under the product liability directive (p. 14);
  • establishing a strict liability regime for AI-systems with a ‘specific risk profile’ (e.g. those with a high risk) and coupling it with a mandatory insurance requirement (p. 16);
  • examining whether or not to adapt the burden of proof regarding fault and causation for other AI-systems (p. 16). The Commission thus considers a differentiated liability approach depending on the level of risk posed by AI-systems.

European Parliament Draft Report with Recommendations to the Commission on a Civil Liability Regime for AI

The White Paper identifies the importance and need to adopt a common approach at the EU level. Against this background, the JURI Committee of the European Parliament made available its Draft Report with Recommendations to the Commission on a Civil Liability Regime for AI. The latest draft proposes a framework that could serve as a basis for a future legislative initiative by the Commission. As with its previous Recommendations on Civil Law Rules on Robotics of 2017, this draft could elicit further discussion on future developments regarding liability and AI.

The draft report’s key recommendations for a liability framework can be summarised as follows:

  • a twofold liability regime could be created depending upon the risk of the AI-system. High-risk systems would be subject to a strict liability regime in which the deployer of the system is liable without fault (article 4.1). Low-risk systems would remain subject to fault-based liability, again only targeting the deployer (article 8.1). The deployer is the person ‘who decides on the use of the AI-system, exercises control over the associated risk and benefits from its operation’ (article 3(d));
  • the Annex lists AI-systems that pose a high risk as well as critical sectors where they are being deployed (e.g. transportation). The Commission could amend the list, for instance by including new sectors;
  • a deployer of a high-risk AI-system would not be able to exonerate him/herself except for force majeure (article 4.3). A liability insurance covering compensation would be required for the deployers (article 4.4);
  • the liability of the deployer for high-risk AI-systems would be capped at a maximum of €10 million in the event of death or harm to a person’s health or physical integrity and of €2 million for damage to property (article 5.1);
  • limitation periods would be provided for high-risk systems depending upon the type of damage (article 7);
  • when it comes to low-risk systems, the deployer would be subject to a fault-based liability regime. The deployer would not be able to escape liability on the ground that the harm was caused by an autonomous activity, device or process driven by the AI system. However, the deployer can refute liability when proving that the harm or damage was caused without his/her fault, relying on the following grounds: ‘(a) the AI system was activated without his knowledge and all reasonable and necessary measures to avoid such activation were taken or (b) if due diligence was observed by selecting a suitable AI-system for the right task and skills, putting the AI-system duly into operation, monitoring the activities and maintaining the operational reliability by regularly installing all available updates’. He would not be liable in case of force majeure (article 8.2);
  • the deployer of a low-risk system would be liable for the payment of compensation if damage results from a third party that interfered with the AI-system and is untraceable or impecunious (article 8.3). This seems to refer to cybersecurity threats. Interestingly, the deployer may request the producer to collaborate to prove that he/she acted without fault (article 8.4). National provisions on compensation and limitation periods with regard to fault liability would remain applicable (article 9);
  • there are also several rules on the apportionment of liability for damages caused by AI-systems as well as recourse actions. For instance, if there is more than one deployer, they would be held jointly and severally liable (article 11).

Although we welcome this initiative, the draft report suffers from some shortcomings that may require further attention.

  1. It proposes a horizontal European liability framework for AI-systems based on their risk level. However, it does not seem to take into account existing (supra)national sectoral liability regimes. Each sector has its own specificities, which could warrant a more granular approach instead of a one-size-fits-all framework.
  2. The report continues to refer to national law for several aspects as well as for the interpretation of concepts such as ‘force majeure’, ‘reasonable measures’ or ‘due diligence’. The aim of creating a harmonised framework thus seems undermined.
  3. The draft report seems to omit questions in relation to the PLD and does not really tackle that regime’s implementation difficulties. Several amendments to the current PLD are necessary for claims to be effective. However, the draft report seems to suggest that the current regime is more or less adequate and effective for AI-systems.
  4. Several problems remain regarding some provisions in the report itself. For instance, the caps may in some cases be low considering the potential severity and magnitude of damages that could result from some AI-systems and the number of parties involved in the operation of the same high-risk AI-system (e.g. damage following a collision of an unmanned aircraft with a building). Moreover, the Annex does not mention healthcare even though the White Paper mentions it as a sector where significant risks can occur. The notion of deployer is also rather broad and at the same time unclear as to who is and who is not covered. This could create legal uncertainty and an overexposure to liability.

Conclusions

This blog gave an overview of the proposed adjustments and innovations to supranational frameworks on liability and AI. Whereas the Commission especially identified (potential) shortcomings in the PLD, the Parliament’s draft report focused on the liability of the ‘deployer’ of AI. The initiatives show that things are moving ahead at a European level. Nevertheless, some shortcomings remain and several key issues still need to be examined in the future, such as the question whether software can be seen as a ‘product’ and when exactly an AI-system will be qualified as ‘defective’.

About the authors:

  • Dr. Jan De Bruyne works as a senior academic researcher on legal and ethical aspects of AI at the Knowledge Centre for Data & Society. He is a lecturer in e-contracts and a postdoctoral researcher on liability and AI at CiTiP. He also works as a postdoctoral researcher on AI, liability and certification at the Ghent University Faculty of Law and Criminology.
  • Orian Dheu works as a doctoral researcher on legal aspects of AI and autonomous systems at the KU Leuven Centre for IT & IP Law (CiTiP). He is part of the Marie Skłodowska-Curie Actions ETN project Safer Autonomous Systems (SAS). This research has received funding from the EU’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement n° 812.788 (MSCA-ETN-SAS). The publication reflects only the author’s view, exempting the EU from any liability.