23.05.2024

Defining AI in the AI Act: pin the tail on the system

Even after the AI Act’s adoption by Parliament, the law’s definition of AI systems still lacks clarity. This blogpost introduces the much-debated definition of AI systems and briefly notes its shortcomings stemming from ambiguous language, optional definitional criteria, and a lack of linkage between core elements such as autonomy and adaptiveness, all of which may hinder the law’s effective implementation.

What is an AI?

Up to this point, the definition of an AI system has been heavily debated and has varied across the initial Commission proposal and the texts put forward by the Parliament and the Council. Following the adoption of the Artificial Intelligence Act (AI Act) by the European Parliament on 13 March 2024, the new Regulation’s text is not expected to undergo further substantive changes. Is it finally time, then, to learn what an AI system actually is?

According to Art. 3(1) AI Act, an AI system is a “machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” From this, the main functional criteria that distinguish an AI system’s behavior from other machine-based systems are revealed to be (i) autonomy, (ii) adaptiveness, and (iii) inference.

To provide further guidance on the definitional criteria, Rec. 12 AI Act elaborates that “autonomy” indicates “some degree of independence of actions from human involvement and of capabilities to operate without human intervention”, whereas “adaptiveness” refers to “self-learning capabilities, allowing the system to change while in use.” Furthermore, Rec. 12 also clarifies that “inference” refers to “the process of obtaining the outputs, such as predictions, content, recommendations, or decisions, which can influence physical and virtual environments and to a capability of AI systems to derive models and/or algorithms from inputs/data.” The techniques that enable inference for AI are said in Rec. 12 to “include machine learning approaches that learn from data how to achieve certain objectives; and logic- and knowledge-based approaches that infer from encoded knowledge or symbolic representation of the task to be solved”, with either implementation going “beyond basic data processing” and enabling “learning, reasoning or modelling.”
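To make Rec. 12’s two families of techniques concrete, consider the minimal sketch below (in Python; all names, rules, and figures are hypothetical illustrations of the distinction, not drawn from the Act). A knowledge-based system infers a decision from rules a human has encoded, while a machine-learning system derives its own model from the data it receives; both produce outputs that go beyond basic data processing.

```python
# Illustrative sketch of Rec. 12's two inference-enabling families of
# techniques. All names and numbers here are hypothetical.

# (1) Logic-/knowledge-based approach: the "model" is encoded by a human
# as explicit rules; the system infers a decision from that knowledge.
def rule_based_credit_decision(income: float, debt: float) -> str:
    """Infers a decision from encoded expert knowledge (hand-written rules)."""
    if debt == 0 or income / debt > 3.0:
        return "approve"
    return "reject"

# (2) Machine-learning approach: the system derives its own model
# (here, the slope and intercept of a line) from input data.
def fit_linear_model(xs: list[float], ys: list[float]) -> tuple[float, float]:
    """Derives a model (slope, intercept) from data via least squares."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    slope = num / den
    return slope, mean_y - slope * mean_x

# Both go beyond merely storing or retrieving data: each generates a
# prediction or decision from the input it receives.
print(rule_based_credit_decision(income=60_000, debt=15_000))  # "approve"
slope, intercept = fit_linear_model([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
print(slope * 5 + intercept)  # prediction for a new, unseen input
```

On the text of Rec. 12 quoted above, either implementation style can underpin an AI system; neither amounts to mere storage, retrieval, or reformatting of data.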

Is this definition reliable?

Although it is modeled after the OECD’s conceptualization of AI, the latest version of the AI Act’s definition still raises issues. Core notions and thresholds are left ambiguous, the first of which relates to the stipulation that AI has varying levels of autonomy. Further guidance will be needed to specify the minimum level of autonomy a system must display for the criterion to be fulfilled in practice. Furthermore, Rec. 12 seems to conflate “autonomy” with “automation” via an overly mechanistic definition. An articulated industrial robot on an assembly line is able to carry out actions (mechanical movements) independently of human intervention once started, without anything that would conventionally be called “AI”. It is thus automated rather than autonomous.

To further illustrate the conceptual divide between autonomy and automation that the AI Act sadly neglects, we can take Kant as an example. According to Kant, autonomy consists in self-legislation, in authoring the laws that bind the will. To be truly autonomous, the articulated industrial robot from before would have to self-legislate by altering its own operating parameters via adaptation. In this light, autonomy and adaptiveness are intertwined, and autonomy requires adaptiveness in order to be realized. In the AI Act’s definition of AI, however, autonomy and adaptiveness are divorced. Under the AI Act, adaptiveness is a soft, optional feature of AI – one that “may” (Art. 3(1)) or “could” (Rec. 12) be exhibited, rather than a core component of every AI system. The utility of this entirely optional adaptiveness criterion in the legal definition is therefore called into question, since it offers scant interpretative guidance at best and considerable legal ambiguity at worst. In sum, adaptiveness as a definitional criterion in the AI Act is rather toothless, which by extension also lessens the usefulness of autonomy as a criterion.
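A toy sketch can make this divide tangible (hypothetical Python; the thermostat example and all names are assumptions of this illustration, not taken from the Act). The automated system acts without human intervention but only ever executes a parameter fixed by its designer; the adaptive one alters its own operating parameter while in use.

```python
# Toy illustration (all names hypothetical) of the automation/adaptiveness
# divide discussed above.

class AutomatedThermostat:
    """Automated: runs without human intervention, but its operating
    parameter (the setpoint) is fixed by its designer and never changes."""
    def __init__(self, setpoint: float):
        self.setpoint = setpoint

    def heater_on(self, temperature: float) -> bool:
        return temperature < self.setpoint

class AdaptiveThermostat(AutomatedThermostat):
    """Adaptive in Rec. 12's sense: a self-learning capability that lets
    the system change while in use, here by nudging its own setpoint
    toward what the user keeps overriding it to."""
    def record_override(self, user_setpoint: float, rate: float = 0.1):
        # The system "self-legislates": it rewrites its own parameter.
        self.setpoint += rate * (user_setpoint - self.setpoint)

fixed = AutomatedThermostat(setpoint=20.0)
adaptive = AdaptiveThermostat(setpoint=20.0)
for _ in range(10):
    adaptive.record_override(user_setpoint=22.0)
print(fixed.setpoint, round(adaptive.setpoint, 2))  # 20.0 vs ~21.3
```

Both systems run without human intervention; only the second changes while in use – and that is precisely the capability the Act’s definition treats as optional.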

Therefore, the central mark of AI remains the capacity to “infer”, enabled by either machine learning approaches or knowledge-based approaches. Here looms the Bayesian bugbear that the AI Act’s initial proposal invited. In Annex I of the original Commission proposal, statistical approaches (Bayesian estimation in particular) were identified as a third pathway to AI beyond machine learning and knowledge-based approaches. While any mention of statistical modeling has thankfully been scrubbed from the current definition and recital, it is worth considering how the AI Act may apply to automated statistical modeling: after all, Rec. 12’s list of techniques enabling inference is non-exhaustive. The issue is that separating machine learning approaches from statistical approaches on a technical level may prove challenging, as, for example, Bayesian inference was a statistical technique before it was a “machine learning” technique. Consequently, edge cases in advanced statistical modeling may challenge our inference-focused understanding of AI. As Gelman once wrote, “A statistical procedure is a sort of machine that can run for awhile on its own, but eventually needs maintenance and adaptation to new conditions.” A pity, then, that the AI Act has defanged the self-adaptation criterion.
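To see how fine that line is, consider the minimal Bayesian updating routine below (a hypothetical Python sketch; the Beta-Binomial example and all names are illustrative assumptions, not taken from the Act or the Commission proposal). Read one way, it is textbook statistics; read another, it is a machine that “derives a model from inputs/data” in Rec. 12’s sense.

```python
# Minimal Beta-Binomial updating (hypothetical sketch). By pedigree this
# is classical Bayesian statistics; described functionally, it derives
# and refines a model from the inputs it receives.

def update_beta(alpha: float, beta: float, observations: list[int]):
    """Updates a Beta(alpha, beta) prior on a success probability with
    a stream of 0/1 observations, returning the posterior parameters."""
    successes = sum(observations)
    failures = len(observations) - successes
    return alpha + successes, beta + failures

# Start from a uniform prior and let the procedure run on incoming data.
alpha, beta = 1.0, 1.0
for batch in ([1, 1, 0], [1, 0, 1, 1]):
    alpha, beta = update_beta(alpha, beta, batch)

# The "model" the system has derived: its posterior-mean estimate of p.
print(alpha / (alpha + beta))  # ~0.67 after 7 observations
```

Nothing in this routine is “machine learning” by lineage, yet it derives a model from inputs/data – the very behavior Rec. 12 names as the mark of inference.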

The AI Act’s pedigree and future

The AI Act’s confusing concept of an AI system may be juxtaposed with the OECD’s definition of an AI system, which the EU legislature adapted. The OECD defines an AI system as a “machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.” The OECD’s definition is less tentative and more reliable: some level of both autonomy and adaptiveness is assumed to be inherent in every AI system. There is furthermore no consideration of how an AI system is “designed” to operate. Under the AI Act, the notion of “designed” represents a strange intentionality requirement that raises the question whether an AI system is still an AI system if its architect did not intend it to act as one, or vice versa. In contrast, the OECD definition relates only to a putative AI system’s objective functioning in practice.

While defining AI is not an easy task, it is striking how all the changes made by the AI Act to its inspirational material, the OECD definition, seem to have only muddied the waters and introduced further legal ambiguities. As a consequence, the AI Act’s concept of an AI system will have to crystallize through case law informed by guidance from the AI Office and national competent authorities. As this blogpost has discussed, particular attention will have to be devoted to clarifying the substance of and relationships between the notions of “autonomy”, “adaptiveness”, and “inference”.

About

This blog was written by Tervel Bobev and translated by Arno Cuypers, legal researchers at CiTiP-KULeuven. The author thanks Thomas Gils for his feedback.

The original, English version of the blog is available via the link below: https://www.law.kuleuven.be/ci...

This blog was made possible in part by the research projects RE4DY (Grant Agreement no. 101058384) and MOZAIK (FWO - SBO file S003321N).

Illustration by Matt Cole via Vecteezy.com

Authors

Tervel Bobev (author) and Arno Cuypers (translator), legal researchers at CiTiP-KULeuven