European AI Office: guidelines on the AI system definition
Since the EU AI Act applies to "AI systems", it is crucial to understand what this term encompasses. To this end, the EU AI Office released its first set of guidelines, providing detailed insights into which systems qualify as AI systems under Article 3(1) of the AI Act. These guidelines elaborate on previously somewhat ambiguous terms, such as "machine-based system", "autonomy", "adaptiveness", "AI system objectives", "inferencing to generate outputs using AI techniques", and "generation of outputs that can influence physical or virtual environments and interact with them".
What: policy orienting document
Impact score: 2
For who: AI providers/importers/manufacturers, supervising authorities/national competent authorities, Belgian/European legislators
1. Purpose of the Guidelines
The AI Act entered into force on 1 August 2024. Its goal is to boost AI innovation while ensuring a high level of protection of health, safety and fundamental rights. The Act only applies to systems that meet the definition of an ‘AI system’ as specified in Article 3(1). Discussions emerged regarding the interpretation of what constitutes an ‘AI system’. Therefore, the EU AI Office has developed guidelines on this definition, after consultations with stakeholders and the European AI Board, in order to help providers and other relevant persons determine whether a system qualifies as an ‘AI system’. These guidelines still have to be formally adopted and are not binding; the authoritative interpretation rests with the Court of Justice of the EU.
2. Main elements of the AI system definition
Article 3(1) of the AI Act states: “‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;”. This definition consists of seven main elements. According to the guidelines, a system does not need to exhibit all these elements continuously throughout its entire lifecycle to fall within the scope of the definition.
- Machine-based system
An AI system must be machine-based. This element underlines that the system must be computationally driven. The guidelines generally state that all AI systems are machine-based, as their lifecycle relies on machines that include hardware or software components. This term covers a wide variety of computational systems, even those that are biological or organic, as long as they possess computational capacity.
- Autonomy
‘Autonomy’ refers to a system’s ability to function at varying levels of independence from human intervention. According to the AI Office, systems should be designed to operate with ‘some reasonable degree of independence of actions’ in order to be considered AI systems. Relevant systems should operate with limited or no human involvement or intervention. Systems that require full manual control and constant human input are excluded from this definition. For instance, an expert system that produces recommendations on its own based on input provided by a human would be deemed to have this independence/autonomy.
- Adaptiveness
The third element of the AI system definition is its potential to exhibit adaptiveness after deployment, meaning it may have self-learning capabilities that allow its behavior to change over time. While autonomy and adaptiveness are related, they are distinct concepts. Importantly, a system does not need to be adaptive to qualify as an AI system. The self-learning element is optional rather than a defining requirement.
- AI system objectives
The fourth element of the definition concerns the AI system’s objectives. AI systems operate based on one or more objectives, which can be explicitly encoded by developers or implicitly derived from system behavior or underlying assumptions of the system. These internal objectives may differ from the intended purpose in a specific context. According to the guidelines, the objectives of an AI system are internal to the system, focusing on the goals of the tasks and desired outcomes. In contrast, the intended purpose is external, relating to the system's deployment context and operational requirements.
- Inferencing how to generate outputs using AI techniques
The capability to infer, from the input it receives, how to generate outputs is a key characteristic that sets AI systems apart from traditional software, which operates solely on predefined human rules.
Recital 12 clarifies that the ability of an AI system to generate outputs based on inputs mainly occurs during the use phase, while the capability to derive models or algorithms from data primarily relates to the building phase. The AI Office states that ‘infer how to’, as used in Article 3(1), should not be limited to a narrow interpretation as the ability to derive outputs from given inputs. However, the Office additionally states that ‘infers, how to generate output’ should be understood as referring to the building phase. This is a highly confusing passage which requires further clarification.
Various AI techniques facilitate inference, such as machine learning approaches and logic- and knowledge-based approaches. Conversely, other systems, although having a limited ability to infer, fall outside the definition of an AI system due to their restricted capacity to analyze patterns and autonomously adjust their output, such as systems for improving mathematical optimization, basic data processing, systems based on classical heuristics and simple prediction systems. These (debatable) notions are further elaborated in the guidelines. An especially problematic excerpt can be found in paragraph 42, which seemingly implies that consolidated use of a system over a number of years can serve as an indication that said system does not transcend basic data processing (i.e. systems currently in use will/should be considered basic data processing in the future).
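The contrast the guidelines draw, between software that merely executes predefined human rules and a system that infers its own decision rule from data, can be made concrete with a minimal, purely illustrative sketch (the pricing rule and the data below are hypothetical, not taken from the guidelines):

```python
def rule_based_price(hour: int) -> float:
    """Fixed, human-authored rule: the output is fully predetermined,
    so there is no inference in the sense of Article 3(1)."""
    return 2.0 if 17 <= hour <= 20 else 1.0


def fit_linear(xs, ys):
    """'Infers' a model (slope, intercept) from example data via
    least squares, rather than applying a rule a human wrote down."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (
        sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        / sum((x - mean_x) ** 2 for x in xs)
    )
    intercept = mean_y - slope * mean_x
    return slope, intercept


# The learned rule depends on the data it was given, not on explicit logic;
# different training data would yield a different decision rule.
slope, intercept = fit_linear([1, 2, 3, 4], [2.1, 4.2, 5.9, 8.0])
predicted = slope * 5 + intercept
```

On the guidelines' reading, only the second kind of system derives how to generate its output, which is the capability that distinguishes an AI system from traditional rule-based software.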
- Generation of outputs that can influence physical or virtual environments and interaction with the environment
AI systems, such as those based on machine learning and knowledge-based approaches, differ from non-AI systems in their ability to produce outputs like predictions, content, recommendations, and decisions. This is because AI can handle complex relationships and patterns in data, enabling it to generate more nuanced results. These outputs can vary in the degree of human involvement. The last element is the ability to ‘influence physical or virtual environments’. This entails that AI systems should actively impact the environments in which they are deployed. The guidelines are very brief on this aspect.
3. Concluding remarks: scope of application
Since it isn’t possible to list all potential AI systems exhaustively, the guidelines only discuss the various elements of the definition. This should also guarantee the flexibility to adapt to rapid technological developments in this field. Whether a system qualifies as an AI system depends on its specific individual characteristics, considering the aforementioned criteria. Only AI systems that give rise to significant risks to fundamental rights and freedoms are subject to regulatory obligations under the AI Act. According to the AI Office, the vast majority of AI systems will therefore not be subject to regulatory requirements under the AI Act.
This post was written by Angelina Galimova, intern at the Knowledge Centre Data & Society.