Although the latest version of the AI Act’s definition is modeled after the OECD’s conceptualization of AI, issues still remain. Core notions and thresholds are left ambiguous, the first of which relates to the stipulation that AI has varying levels of autonomy. Further guidance will be needed to specify which of these varying levels constitutes the minimum that must be met to fulfill the autonomy criterion in practice. Furthermore, Rec. 6 seems to conflate “autonomy” with “automation” via an overly mechanistic definition. An articulated industrial robot on an assembly line can carry out actions (mechanical movements) independently of human intervention once started, without anything that would conventionally be called “AI”. It is thus automated, yet hardly autonomous.
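To make the distinction concrete, a minimal sketch of such a controller might look as follows. The sketch is purely illustrative: the names and the hardware interface are hypothetical assumptions, not any real robot’s API.

```python
# Illustrative sketch of a fixed-program pick-and-place cycle.
# Once started, it runs "independently of human intervention",
# yet nothing here infers, learns, or adapts: it merely replays
# hard-coded steps.

WAYPOINTS = [
    (0.30, 0.10, 0.25),  # hover above the part
    (0.30, 0.10, 0.05),  # descend to grip height
    (0.70, 0.40, 0.25),  # carry the part toward the bin
    (0.70, 0.40, 0.05),  # lower and release
]

def run_cycle(move_to, set_gripper):
    """One assembly-line cycle; the control flow is fully predetermined."""
    move_to(WAYPOINTS[0])
    move_to(WAYPOINTS[1])
    set_gripper(closed=True)   # grip the part
    move_to(WAYPOINTS[2])
    move_to(WAYPOINTS[3])
    set_gripper(closed=False)  # release the part
```

Such a program satisfies Rec. 6’s mechanistic gloss on independence from human intervention, which is exactly why that gloss fails to pick out anything distinctive about AI.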
To further illustrate the conceptual divide between autonomy and automation that the AI Act sadly neglects, we can take Kant as an example. According to Kant, autonomy consists in self-legislation, in authoring the laws that bind the will. To be truly autonomous, the articulated industrial robot from before would have to self-legislate by altering its own operating parameters via adaptation. In this light, autonomy and adaptiveness are intertwined: autonomy requires adaptiveness to be realized. In the AI Act’s definition of AI, however, autonomy and adaptiveness are divorced. Under the AI Act, adaptiveness is a soft, optional feature of AI, one that “may” (Article 3(1)) or “could” (Recital 12) be exhibited, rather than a core component of every AI system. The utility of the entirely optional adaptiveness criterion in the legal definition is therefore called into question: it offers scant interpretative guidance at best and creates considerable legal ambiguity at worst. In sum, adaptiveness as a definitional criterion in the AI Act is rather toothless, which by extension also lessens the usefulness of autonomy as a criterion.
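By contrast, a minimally “self-legislating” version of the earlier controller would revise its own operating parameters in light of experience. The following toy sketch shows the shape of the idea; the update rule is an assumption chosen purely for illustration, not a production adaptation law.

```python
class AdaptiveController:
    """Toy adaptive proportional controller (purely illustrative)."""

    def __init__(self, gain=1.0, learning_rate=0.05):
        self.gain = gain                    # the operating parameter
        self.learning_rate = learning_rate  # how aggressively to adapt

    def act(self, target, measured):
        """Issue a control command, then revise the controller's own gain."""
        error = target - measured
        command = self.gain * error
        # Toy update rule (an assumed, illustrative adaptation step):
        # the system rewrites the very parameter that will govern
        # its future behavior.
        self.gain += self.learning_rate * error
        return command
```

The difference from the fixed-program sketch lies not in what the system does to the world but in the fact that it rewrites the rule by which it acts, the closest mechanical analogue of Kantian self-legislation.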
Therefore, the central mark of AI remains the capacity to “infer”, enabled by either machine learning approaches or knowledge-based approaches. Here looms the Bayesian bugbear that the AI Act’s initial proposal invited. In Annex I of the original Commission proposal, statistical approaches (Bayesian estimation among them) were listed as a third pathway to AI beyond machine learning and knowledge-based approaches. While any mention of statistical modeling has thankfully been scrubbed from the current definition and recital, it is worth considering how the AI Act may apply to automated statistical modeling. After all, Rec. 12’s list of techniques enabling inference is non-exhaustive. The issue is that separating machine learning approaches from statistical approaches on a technical level may prove challenging: Bayesian inference, for example, was a statistical technique before it was a “machine learning” technique. Consequently, edge cases in advanced statistical modeling may challenge our inference-focused understanding of AI. As Gelman once wrote, “A statistical procedure is a sort of machine that can run for awhile on its own, but eventually needs maintenance and adaptation to new conditions.” A pity, then, that the AI Act has defanged the self-adaptation criterion.
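A minimal sketch shows how thin the technical line is. Conjugate Beta-Bernoulli updating is a textbook piece of statistics that long predates the “machine learning” label; the code and its names below are purely illustrative.

```python
def update_beta(alpha, beta, observations):
    """Posterior Beta(alpha, beta) after a sequence of 0/1 outcomes."""
    for x in observations:
        alpha += x       # tally successes
        beta += 1 - x    # tally failures
    return alpha, beta

# Start from a uniform prior Beta(1, 1) and "learn" from data:
alpha, beta = update_beta(1, 1, [1, 0, 1, 1, 0, 1])
posterior_mean = alpha / (alpha + beta)  # 5 / 8 = 0.625 here
```

Nothing in this procedure announces itself as either “statistics” or “machine learning”: it simply infers a parameter from the inputs it receives, which is precisely why such edge cases strain the Act’s inference-focused definition.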