policy monitor

Council of Europe – Possible Elements of a Legal Framework on AI

To conclude its work, the Ad hoc Committee on Artificial Intelligence (CAHAI) of the Council of Europe has adopted a report outlining the elements that could be included in a legal framework on AI. Such a framework should revolve around a risk classification mechanism and entail both direct positive rights for individuals and obligations on states to introduce certain rules in their domestic legal orders.

What: Policy-orienting document

Impact score: 3

For whom: policymakers, non-profit organisations (e.g. digital rights activists), IT sector organisations, AI developers/providers.

URL:

Summary

Background

The report is the result of the work of the Council of Europe (CoE) Ad hoc Committee on Artificial Intelligence (CAHAI) on the potential elements of a legal framework for the development, design and application of artificial intelligence. As a first step, CAHAI published a feasibility study, which was subsequently the subject of a multi-stakeholder consultation. Taking into account the outcomes of that consultation, CAHAI drafted this report.

Content

The main recommendation is that the CoE should adopt a legally binding treaty. Such a treaty could be complemented by sectoral (non-)legally binding instruments and guidance, and should facilitate accession by non-CoE member states.

The object and purpose of the treaty should be to ensure that the development, design and application of AI systems respect human rights, democracy and the rule of law, irrespective of whether these activities are undertaken by private or public actors. In that regard, the report explicitly states that the treaty should set minimum standards, meaning that member states remain free to impose additional rules (e.g. under the EU AI Act).

Principles and risk classification

More concretely, the report suggests that such a framework should include the following elements:

  • The treaty should define certain key notions (e.g. 'AI system', 'AI user', …) but, interestingly, the report itself does not propose any wording.
  • It should contain certain fundamental principles which should apply to both public and private development, design, and application of AI systems and revolve around the notion of human dignity. These principles could include:
    • the freedom of development and design of, and research into, AI systems, in compliance with CoE standards on human rights. (Note that 'application' is not mentioned.)
    • a duty to prevent unlawful harm as a consequence of AI-related activities.
    • a duty to guarantee equal treatment and non-discrimination of individuals in the context of AI-related activities, so as to avoid unjustified bias being built into AI systems and the use of AI systems leading to discriminatory effects.
    • a duty to ensure gender equality and that the rights of people in vulnerable groups and situations, including children, are upheld throughout the lifecycle of AI systems.
  • The treaty could contain both direct positive rights for individuals and obligations on states to introduce certain rules in their domestic legal orders.
  • At the core of the framework, there should be a risk classification methodology for AI systems with an emphasis on human rights, democracy, and the rule of law. A two-step approach is proposed, whereby an initial risk review determines whether a full HUDERIA (Human Rights, Democracy and Rule of Law Impact Assessment) is required (see the illustrative sketch after this list).
    • The concrete requirements for the initial risk review are not entirely clear. Based on the report, it seems that such a review should take into account certain criteria (e.g. the context and purpose of the AI system, its level of autonomy, its underlying technology, …) and should make it possible to determine whether an AI system poses a ‘low risk’, ‘high risk’ or ‘unacceptable risk’.
    • The proposed requirements, or model, for a HUDERIA are more concrete:
      • a HUDERIA should only be performed if “there are clear and objective indications of relevant risks emanating from the application of an AI system”. (Note that only application is mentioned, not development or design.)
      • a HUDERIA should consist of four stages: (i) risk identification, (ii) impact assessment (for which the abovementioned criteria should be used; see par. 51-52 of the report for the full list), (iii) governance assessment and (iv) mitigation and evaluation.
      • Stakeholder involvement should be assured throughout the impact assessment.
      • Performing a HUDERIA should be an iterative process, carried out on a regular basis. Using an AI system in a different context or for a different purpose, as well as making substantial changes to the system, may be specific reasons to perform a new HUDERIA.
      • Interestingly, CAHAI proposes that this model should not be legally binding, which may allow member states to adapt it to their specific situation.
      • It also stresses that a HUDERIA should not stand alone but be supplemented by other compliance mechanisms, such as certification and quality labelling, audits, regulatory sandboxes and regular monitoring.
  • Regarding prohibited applications of AI, the report considers that it should be possible to put a full or partial moratorium or ban on the application of AI systems that present an unacceptable risk of interfering with the enjoyment of human rights, the functioning of democracy, and the observance of the rule of law. How these notions should be interpreted in practice remains unclear for the moment. Such a moratorium or ban should also be considered for the research and development of unacceptable AI systems.
    • Notably, CAHAI explicitly considers AI systems using biometrics to identify, categorise or infer characteristics or emotions of individuals, in particular if they lead to mass surveillance, and AI systems used for social scoring to determine access to essential services, as applications that may require particular attention in this context.
    • Review procedures should be put in place to enable the reversal of a ban or moratorium if risks are sufficiently reduced, or appropriate mitigation measures become available, such that the AI system, on an objective basis, no longer poses an unacceptable risk.
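To make the proposed two-step approach more tangible, the following minimal Python sketch models an initial risk review feeding into the four HUDERIA stages. It is purely illustrative: the criteria shown are only three of the report's examples, and all function names, scores and thresholds are our own assumptions, since the report prescribes no scoring method.

    from enum import Enum

    class RiskLevel(Enum):
        LOW = "low risk"
        HIGH = "high risk"
        UNACCEPTABLE = "unacceptable risk"

    # The four HUDERIA stages named in the report.
    HUDERIA_STAGES = [
        "risk identification",
        "impact assessment",      # uses the criteria listed in par. 51-52
        "governance assessment",
        "mitigation and evaluation",
    ]

    def initial_risk_review(context_and_purpose: int,
                            autonomy_level: int,
                            underlying_technology: int) -> RiskLevel:
        """Score three of the report's example criteria (0 = negligible,
        3 = severe per criterion). Scoring and thresholds are illustrative
        assumptions; the report prescribes no scoring method."""
        total = context_and_purpose + autonomy_level + underlying_technology
        if total >= 8:
            return RiskLevel.UNACCEPTABLE
        if total >= 4:
            return RiskLevel.HIGH
        return RiskLevel.LOW

    def assess(context_and_purpose: int, autonomy_level: int,
               underlying_technology: int) -> None:
        level = initial_risk_review(context_and_purpose, autonomy_level,
                                    underlying_technology)
        print(f"Initial review outcome: {level.value}")
        # We read "clear and objective indications of relevant risks" as any
        # non-low outcome; that mapping is also our assumption.
        if level is not RiskLevel.LOW:
            for stage in HUDERIA_STAGES:
                print(f"  HUDERIA stage: {stage}")

    # Example: a highly autonomous system used in a sensitive context.
    assess(context_and_purpose=3, autonomy_level=3, underlying_technology=2)

In the report's terms, this example would warrant a full HUDERIA and, given the 'unacceptable risk' outcome, consideration of a moratorium or ban.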

General and specific rules

Related to this risk classification, the report suggests establishing both general minimum rules applying to all AI systems and specific rules applying to higher-risk AI systems.

General rules can include obligations regarding:

  • Data governance, robustness, safety and cybersecurity, sustainability, auditability, transparency, explainability and accountability. The last three notions are considered to be of ‘paramount importance’.
  • Human oversight over AI systems and their outputs throughout their whole lifecycle
  • The establishment of regulatory sandboxes by member states
  • Evidence-based public deliberations on and inclusive engagement with this topic, enabled by member states
  • Measures to increase digital literacy and skills among the general public and civil servants

Specific rules may apply with regard to:

  • The use of AI to unlawfully or unduly interfere in democratic processes. Election manipulation such as micro-targeting, profiling, and manipulation of content (including so-called “deep fakes”) could be dealt with in sectoral instruments.
  • The use of AI for the purpose of taking or informing decisions impacting the legal rights and other significant interests of individuals and legal persons. In this regard, the following safeguards should be put in place:
    • a right to an effective remedy before a national authority (including judicial authorities) against such decisions
    • a right to be informed about the application of an AI system in the decision-making process
    • a right to know that one is interacting with an AI system rather than with a human
    • a right to choose interaction with a human in addition to or instead of an AI system
    • The modalities for exercising these rights should be laid down in national law. Legitimate exceptions to these rights may be provided for by law, where necessary and proportionate in a democratic society.

The report furthermore states that the treaty should also include a provision on the protection of whistle-blowers.

Enforcement

With regard to enforcement, CAHAI considers that the treaty should include provisions obliging parties to:

  • take all necessary and appropriate measures to ensure effective compliance with the instrument, in particular through the establishment of compliance mechanisms and standards
  • establish or designate national supervisory authorities, define their powers, tasks and functioning, ensure their expertise, independence and impartiality in performing their functions, and allocate sufficient resources and staff
  • cooperate and provide mutual legal and other assistance, including exchange of data and other forms of information
  • establish a “committee of the parties” to support the implementation of the instrument

Public sector use of AI

The report also contains specific sections on the use of AI systems in the public sector. Here too, it focuses on AI systems which can interfere with human rights, democracy or the rule of law. More specifically, it recommends that the envisaged treaty should focus on the use of AI systems for the purposes of law enforcement, the administration of justice, and public administration. With regard to the latter, it does, however, state that provisions should be limited to general prescriptions about the responsible use of AI systems in public administration.

In the context of public sector use of AI, the future framework should at least provide for the following:

  • access to an effective remedy for impacted actors
  • a mandatory right to human review of decisions taken or informed by an AI system except where competing legitimate overriding grounds exclude this
  • an obligation for public authorities to implement adequate human review for processes which are informed or supported by AI systems
  • an obligation to provide individuals or legal persons with meaningful information concerning the role of AI systems in taking or informing decisions relating to them, except where competing legitimate overriding grounds exclude or limit such review or disclosure
  • an obligation on member states to ensure that adequate and effective guarantees against arbitrary and abusive practices due to the application of an AI system in the public sector are afforded by their domestic law
  • various provisions setting out specific requirements relating to the design, procurement, development and deployment of an AI system by a public entity (see par. 58-61)

Finally, the report proposes that additional (non-)legally binding instruments be adopted at sectoral level (e.g. healthcare or education) with the aim of clarifying topics such as transparency, fairness, responsibility, accountability, explainability, and redress, so as to ensure the responsible use of AI in the public sector.