Policy Monitor

European AI Office: guidelines on prohibited AI systems

On 4 February 2025, two days after the first batch of AI Act provisions became applicable (including the rules on prohibitions and AI literacy), the EU AI Office published its Guidelines on prohibited AI systems. The guidelines (135 pages) clarify the scope and objective of the prohibitions laid down in Article 5 AI Act and the related recitals, and address the eight categories of AI practices banned in the EU. The document provides interpretations, practical insights and examples to explain these practices. As highlighted by the AI Office, the guidelines will serve as a key resource for market surveillance authorities, which must be designated by EU member states by 2 August 2025. Just as importantly, they should help providers and deployers of AI systems understand and comply with the AI Act.

What: policy-orienting document

Impact score: 2

For whom: AI providers/importers/manufacturers, Belgian legislator

URL: https://digital-strategy.ec.eu...

Methodology, structure and scope of the Guidelines

The methodology used in the guidelines rightly points out that, before delving into any analysis of the prohibitions, the scope of application of the AI Act must first be checked. After all, certain activities are excluded from its scope (e.g., scientific research, national security, non-professional use). If an AI system or related activity falls outside the AI Act’s scope, none of its provisions, including those concerning prohibitions, apply. Keep in mind that the scope assessment is relevant not only for prohibited AI practices but also for the other risk categories (high-risk AI systems and AI systems to which transparency obligations apply). For more information on the scope of application of the AI Act, please also see our policy brief.
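To make this assessment order concrete, here is a minimal sketch in Python (all names are hypothetical, and the exclusions listed are illustrative examples rather than the full Article 2 test): the scope check acts as a gate, and only in-scope systems proceed to any risk-category analysis.

```python
from enum import Enum, auto

class ScopeExclusion(Enum):
    """Illustrative (non-exhaustive) AI Act scope exclusions."""
    SCIENTIFIC_RESEARCH = auto()
    NATIONAL_SECURITY = auto()
    NON_PROFESSIONAL_USE = auto()

def assess(exclusions: set[ScopeExclusion]) -> str:
    # Step 1: scope check. If any exclusion applies, no AI Act provision
    # applies -- including the Article 5 prohibitions.
    if exclusions:
        return "out of scope: no AI Act provision applies"
    # Step 2: only now are the risk categories examined (prohibited,
    # high-risk, transparency obligations).
    return "in scope: proceed to Article 5 and other risk-category checks"

print(assess({ScopeExclusion.NATIONAL_SECURITY}))  # out of scope: ...
print(assess(set()))                               # in scope: ...
```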

The guidelines emphasize the importance of a case-by-case analysis when determining whether an AI system falls within one of the prohibitions. This analysis should consider the specific use of the AI system, assess whether all cumulative criteria outlined for each category of prohibitions under Article 5 AI Act are met, and determine whether any exceptions apply. Furthermore, the guidelines provide examples of practices that resemble the prohibited ones but should be considered legitimate. By showing what is not prohibited under the AI Act, these examples are intended to increase the usability of the guidelines and help stakeholders delineate the scope of the prohibitions correctly. However, such legitimate AI practices, or those falling under the exceptions, may still be classified as "high-risk" or may have to comply with specific transparency requirements (e.g., chatbots). Importantly, the guidelines stress that the prohibitions should be interpreted narrowly: “Since violations of the prohibitions in Article 5 AI Act interfere the most with the freedoms of others and give rise to the highest fines, their scope should be interpreted narrowly.” (para. 57, page 18). Similarly, they stress that any exceptions to the prohibitions should also be strictly interpreted to ensure maximum protection of fundamental rights. Finally, interpretation should be carried out “in a manner that does not allow circumvention of the prohibition,” which is also relevant to other requirements of the AI Act.
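The cumulative-criteria logic can be sketched in the same illustrative spirit (the criteria below are hypothetical placeholders, not the actual Article 5 tests): a practice falls under a prohibition only if every criterion of that category is met and no narrowly read exception applies, and escaping the prohibition does not mean escaping the AI Act.

```python
def is_prohibited(criteria_met: list[bool], exception_applies: bool) -> bool:
    """A practice falls under an Article 5 category only if ALL cumulative
    criteria are met AND no (strictly interpreted) exception applies."""
    return all(criteria_met) and not exception_applies

# Hypothetical example for one category: a single unmet criterion
# defeats the prohibition...
print(is_prohibited(
    [True, True, False],     # e.g., the significant-harm criterion is not met
    exception_applies=False,
))  # False
# ...but a non-prohibited practice may still be high-risk or subject
# to transparency obligations under the AI Act.
```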

Where relevant to each category of prohibitions, the guidelines highlight the relevance of data protection law (e.g., the General Data Protection Regulation, ‘GDPR’), anti-discrimination law, and consumer protection law, particularly Directive 2005/29/EC (the Unfair Commercial Practices Directive, or ‘UCPD’), among others. In this regard, providers and deployers must ensure their AI systems also comply with these overlapping laws, even if their systems are not prohibited under the AI Act.

Furthermore, the guidelines rely on interpretations and rulings of the Court of Justice of the European Union (CJEU) to clarify key notions within the prohibition clauses, including concepts such as harm and profiling. They also establish clear links with pivotal CJEU case law on data protection and automated decision-making tools, most notably the C-817/19 Passenger Name Record (PNR) and the C-634/21 SCHUFA credit-scoring cases. By drawing strong connections to the data protection domain, the guidelines closely align with the European Data Protection Board’s (EDPB) opinions and guidance on biometric technologies and automated decision-making. Lastly, the guidelines incorporate insights from several academic contributions, reflecting the state of the art in the AI domain. Through these combined legal and scholarly sources, along with guidance from supervisory authorities, the guidelines steer stakeholders toward interpreting and applying the AI Act’s provisions in line with relevant state-of-the-art developments, rather than in isolation.

From a strictly legal perspective, the guidelines are neither binding nor authoritative; authorities and courts have the final say on interpretation (para. 5). Nevertheless, the guidelines offer invaluable insights and interpretations that help stakeholders navigate the AI Act’s rules, making them an essential resource for enforcement and compliance efforts.

What is next: Enforcement, Fines and Other Developments

Violations of the AI Act's prohibitions can result in hefty fines of up to €35 million or 7% of a company's total worldwide annual turnover, whichever is higher (for SMEs, whichever is lower). Notably, the AI Act chapters concerning governance (i.e., market surveillance authorities), enforcement, and fines will only take effect on 2 August 2025. This means that while the prohibitions have been mandatory since 2 February 2025, the mechanisms for monitoring compliance and imposing penalties will not be operational until August 2025. During this interim period, however, providers and deployers are still obliged to ensure that their AI practices are not prohibited under Article 5 AI Act. The guidelines note that even without active enforcement bodies, the prohibitions have direct effect, allowing affected parties to seek enforcement through national courts, including the possibility of interim injunctions against non-compliant practices. Providers and deployers therefore cannot ignore their compliance obligations, even before the fining regime officially takes effect.
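As a worked illustration of the penalty arithmetic only (a sketch with a hypothetical function name; actual fines are set by the competent authorities within these caps):

```python
def article5_fine_cap_eur(worldwide_annual_turnover_eur: float,
                          is_sme: bool = False) -> float:
    """Cap on fines for Article 5 violations: EUR 35 million or 7% of total
    worldwide annual turnover, whichever is higher (lower for SMEs)."""
    fixed_cap = 35_000_000.0
    turnover_cap = 0.07 * worldwide_annual_turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# A company with EUR 1bn turnover: 7% = EUR 70m > EUR 35m, so the cap is EUR 70m.
print(f"{article5_fine_cap_eur(1_000_000_000):,.0f}")               # 70,000,000
# An SME with the same turnover faces the lower of the two caps: EUR 35m.
print(f"{article5_fine_cap_eur(1_000_000_000, is_sme=True):,.0f}")  # 35,000,000
```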

Separately, the EU AI Office also published guidance on the definition of an AI system under the AI Act on 6 February 2025. This guidance is crucial in determining whether a given system qualifies as an AI system under the AI Act, which in turn affects the AI Act’s overall applicability. The definition guidance should therefore be read alongside the guidelines on prohibitions to assess correctly whether the AI Act applies in the first place. See here for a summary of the guidance on the definition.

Overview of Prohibited AI Practices

Article 5(1)(a) – Harmful manipulation and deception: AI systems that deploy subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective or the effect of distorting behaviour, causing or reasonably likely to cause significant harm.

Article 5(1)(b) – Harmful exploitation of vulnerabilities: AI systems that exploit vulnerabilities due to age, disability or a specific social or economic situation, with the objective or the effect of distorting behaviour, causing or reasonably likely to cause significant harm.

Article 5(1)(c) – Social scoring: AI systems that evaluate or classify natural persons or groups of persons based on social behaviour or personal or personality characteristics, with the social score leading to detrimental or unfavourable treatment when the data comes from unrelated social contexts or such treatment is unjustified or disproportionate to the social behaviour.

Article 5(1)(d) – Individual criminal offence risk assessment and prediction: AI systems that assess or predict the risk of people committing a criminal offence based solely on profiling or personality traits and characteristics; except to support a human assessment based on objective and verifiable facts directly linked to a criminal activity.

Article 5(1)(e) – Untargeted scraping to develop facial recognition databases: AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or closed-circuit television (‘CCTV’) footage.

Article 5(1)(f) – Emotion recognition: AI systems that infer emotions in the workplace or in education institutions; except for medical or safety reasons.

Article 5(1)(g) – Biometric categorisation: AI systems that categorise people based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation; except for the labelling or filtering of lawfully acquired biometric datasets, including in the area of law enforcement.

Article 5(1)(h) – Real-time remote biometric identification (‘RBI’): AI systems for real-time remote biometric identification in publicly accessible spaces for the purposes of law enforcement; except if necessary for the targeted search of specific victims, the prevention of specific threats including terrorist attacks, or the search of suspects of specific offences (further procedural requirements, including for authorisation, are outlined in Article 5(2)-(7) AI Act).


This post was written by Abdullah Elbi, researcher at CiTiP-KU Leuven.