policy monitor

Council of the EU - Compromise Text AI Act

The new compromise text of the Artificial Intelligence Act (AIA) by the Council of the EU has been published. The text, presented by the Czech presidency, follows and builds on the previous (partial) compromises under the Slovenian and French presidencies. In this update, we briefly highlight some of the new, important elements included in this latest compromise text.

What: Legislative proposal

Impact score: 2

For whom: policy makers, sector organisations, AI companies and users

URL: AIA – CZ – 4th Proposal (19 Oct 22)

Summary

1. Fundamentals: definition of AI and ‘user’

Since the publication of the AIA by the European Commission (EC) in April 2021, the definition of AI has been one of the most debated issues. This debate will likely only intensify further due to the most recent addition by the Council. More specifically, the Council has now included in the definition (art. 3 §1 AIA) that an AI system should operate ‘with elements of autonomy’. According to recital 6, the concept of autonomy of an AI system relates to the degree to which such a system functions without human involvement. However, what should be understood by ‘with elements of’ is entirely unclear. Does this refer to a quantitative or a qualitative threshold, or both? If quantitative, how much autonomy constitutes one element? How many elements are required to pass the threshold? If qualitative, how should the level of autonomy be assessed? How should or could this assessment differ per type of AI system?

Another fundamental amendment relates to the confusing notion of users (who should rather be called deployers), as art. 2 §8 AIA now explicitly excludes the purely personal, non-professional use of AI systems from its scope of application, with the exception of the transparency obligations included in art. 52 (see below). Or at least, it is believed that this exclusion is the intention of art. 2 §8 AIA, because what is literally stated is that the AI Act will not apply to “the obligations of” purely personal non-professional users. It remains unclear why the Council chose this formulation and did not opt for, e.g., the formulation included in recital 58 (“These obligations […] should not apply where the use is made in the course of a personal non-professional activity”). Intriguingly, this exclusion would mean that, under the AIA, non-professional users would not have to use AI systems in accordance with the instructions of use (art. 2 §8 juncto art. 29 AIA).

2. General Purpose AI systems (GPAIs)

The absence of GPAIs was an important drawback of the initial EC proposal, and the Council has chosen to include these systems in the scope of the AIA (Title Ia). However, the Council essentially leaves it to the EC to establish the specific rules applicable to GPAIs (through the use of implementing acts, art. 4b §1 AIA) and limits the scope to GPAIs that would fall under the category of high-risk AI systems (which is further weakened by the exception in art. 4c AIA). From a democratic point of view, this delegation to the EC should be reconsidered, especially because GPAIs are probably the AI systems whose societal risks and challenges are the most difficult to assess, meriting a thorough democratic debate.

3. High-risk AI systems

Apart from the product/safety-component regime, AI systems are considered high-risk if they feature in the list of Annex III, “unless the output of the system is purely accessory in respect of the relevant action or decision to be taken” (amended art. 6 §3 AIA). This latter part was added by the Council in the latest version. Recital 32 clarifies ‘purely accessory’ as follows: “the output of the AI system has only negligible or minor relevance for human action or decision”. Unfortunately, under this wording, it cannot be ruled out that systems will be on the market that, according to their manual or in theory, require human verification or validation (and where the AI output is therefore purely accessory), but where in practice, partly because of automation bias, no actual human validation takes place, making them high-risk applications in fact. This amendment risks eroding the entire Annex III list. Conversely, what is still missing from art. 6 AIA is a list of (meta-)criteria used to determine what makes a particular AI system a high-risk system (e.g. by further developing the criteria included in art. 7 AIA). This exercise would very likely show that AI systems such as GPAIs, emotion recognition or deep fakes should fall under a higher risk category. Moreover, the Council maintained art. 7 §1.a AIA, which restricts the possibility of adding new high-risk AI systems to the areas already listed in Annex III and is therefore not future-proof.

4. Transparency obligations

Another interesting amendment by the Council can be found in art. 52 §3 AIA. This paragraph now explicitly states that AI-generated (audiovisual) output does not have to disclose its artificial nature “where the content is part of an evidently creative, satirical, artistic or fictional work or programme”. In other words, artists using e.g. Dall-E would not have to disclose that they are wielding artificially intelligent tools. This is, however, very important from a copyright perspective, as AI-generated art cannot enjoy such protection under the current European copyright regime. The desirability of that exclusion should therefore be reconsidered. More generally speaking, one wonders whether the AI systems currently featured under art. 52 AIA (i.e. chatbots, emotion recognition and deep fake technology) should not be regarded as high-risk AI systems.

5. Regulatory sandboxes

The potential of regulatory sandboxes as an innovation facilitator had been discussed long before the proposal for the AIA became a reality. The general consensus, and the direction taken by the EC, was that sandboxes were fit for the purpose of bringing legal certainty and mitigating the possible negative societal effects of disruptive technologies such as AI.

Unsurprisingly, regulatory sandboxes were introduced in Title V of the initial AIA proposal as measures in support of innovation. Nonetheless, the EC remained quite conservative in its approach, describing them as “a controlled environment that facilitates the development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan”. The rest of art. 53 was dedicated to all the guarantees and restrictions envisioned inside the sandbox, which attracted much criticism, especially from industry, which had shown great interest in participating in this new form of regulation.

The current compromise text appears to reflect the growing interest of European society in more agile regulation in the form of regulatory sandboxes. Unleashing their potential was deemed an important issue by both the Internal Market and Consumer Protection (IMCO) and Civil Liberties, Justice and Home Affairs (LIBE) Committees of the European Parliament.

Unfortunately, the vivid discussion on regulatory sandboxes produced mixed results, which might be due to the impressive number of contributions on the topic by various stakeholders. Some of these are to be found in the updated text of several recitals (71, 72, 72-a, 72a) in the latest version. These amendments seem to disregard, however, the purpose of recitals as supplementary normative items and turn them into some kind of policy report, with all the negative consequences this entails for legal certainty.

Aside from the additional recitals, the current version contains a definition of an AI regulatory sandbox, which according to art. 3 §52 AIA is a “concrete framework set up by a national competent authority which offers providers or prospective providers of AI systems the possibility to develop, train, validate and test, where appropriate in real world conditions, an innovative AI system, pursuant to a specific plan for a limited time under regulatory supervision”. It should be noted that this definition is rather different from the initial proposal, which did not feature a separate definition in art. 3 AIA. Moreover, it seems that the primary purpose of regulatory sandboxes, which used to be reducing the time to market of innovative products and services, has been replaced by a focus mostly on ensuring compliance not only with the provisions of the AIA but also with other relevant legal instruments (as evidenced by art. 53 §1b AIA and recital 72). This legal interoperability is not new; nevertheless, it remains uncertain how it could be operationalised between various regulators and/or governmental agencies, not only from a practical point of view but also because of the different jurisdictions they might have, especially in countries with more complicated systems such as Belgium, Germany, Spain, etc.

Article 53 itself went through a significant number of changes, which led to the current version containing several main points:

  • First, national competent authorities may establish regulatory sandboxes, whereas the previous formulation imposed their establishment.
  • Second, the potential sandboxes cover much more than just testing and validation; they can also include development and training, including in “real world conditions”.
  • Third, the sandboxes are to involve a variety of stakeholders, which could prove beneficial and actually increase the incentives for companies to participate.
  • Fourth, the transparency concerns were addressed by attempting to strengthen the learning and information-sharing benefits of a sandbox, including through mandatory exit reports.

On a more general level, the lack of structure of art. 53 should be noted, as well as the fact that the EC is yet again entitled to adopt implementing acts to determine the modalities and conditions for the establishment and operation of regulatory sandboxes, raising questions of both legal certainty and democracy.

To end on a positive note, the new art. 54a is a daring and unusually brave attempt to facilitate testing and boost innovation. It regulates the testing of high-risk AI systems in real world conditions outside AI regulatory sandboxes. Although still very restrictive (e.g. in light of the conditions of art. 54a §4, such as approval from the market surveillance authority), this approach would actually increase the efficiency of testing and allow a form of integration testing, not only with other systems but also with real-life situations and data. Even though this approach needs more work with regard to proper guarantees for all participants in the testing, it is closer in spirit to real regulatory sandboxes as they were first intended.

This post is authored by Katerina Yordanova (researcher at CiTiP) and Thomas Gils (Knowledge Centre Data & Society).