Policy Monitor

EU - EU AI Act: entry into force & overview of main provisions

Twenty days after its publication in the Official Journal of the European Union, the EU AI Regulation entered into force on 1 August 2024. The actual application of the Regulation will be phased in gradually, with the first application date in February 2025 and the final one in August 2027. With this entry into force, the countdown has begun for the actors targeted by the AI Regulation, including providers, importers, distributors, manufacturers, and users of AI.

What: European regulation

Impact score: 1

For whom: citizens, AI providers/importers/manufacturers, Belgian legislators

URL: https://eur-lex.europa.eu/legal-content/...

Key takeaways for Flanders:

On 1 August 2024, the AI Act entered into force. From February 2025 onwards, its provisions will gradually become applicable. Not only must Flemish companies prepare for the application of the AI Act; the Belgian legislator will also have to take action, e.g. by designating a national competent authority.

The focus of the obligations lies with (Flemish) providers, manufacturers, distributors and users who work with high-risk AI systems. In addition, companies must take into account the prohibition of AI systems posing an unacceptable risk.

The European AI Regulation (AIA) entered into force on 1 August 2024, twenty days after its publication in the Official Journal of the EU. The application of this regulation will be phased, with the first significant date set for February 2025. The countdown to its full application has officially begun.

This summary provides a high-level overview of the AIA’s key content and upcoming deadlines. It is not an exhaustive analysis of the AIA’s implications. For more detailed discussions, such as the role of the European AI Office, the definition of AI systems under the AI Act, or the transparency requirements, further resources are available through our website (such as our Policy Prototyping report or parts 1 and 2 of our AI Act webinars).

The AIA is a legislative tool designed by the European Union which aims to enhance the functioning of the internal market, promote the adoption of human-centric and trustworthy AI, and ensure a high level of protection of health, safety and fundamental rights. At its core, the AIA is a market regulation instrument.

The regulation categorises AI systems into four risk levels, which determine the obligations that actors must follow. Several categories may apply cumulatively to the same AI system.

The strictest category involves a ban within the EU on systems that pose an “unacceptable” risk, such as those used for social scoring based on behaviour or emotion detection in the workplace.

The second category, and also the primary focus of the AIA, comprises the high-risk AI systems (HR AIS). These systems are permitted, provided they meet specific requirements and undergo a conformity assessment. An AI system qualifies as an HR AIS when it

  1. is used as a safety component of a product, or is itself a product, covered by the EU laws listed in Annex I of the AIA and is required to undergo a third-party conformity assessment (e.g. medical devices, industrial machinery, toys, radio equipment, etc.); or
  2. is applied in specific areas listed in Annex III AIA. These areas include biometrics, critical infrastructure, education, employment, access to essential services (both public and private), law enforcement, immigration and administration of justice and democratic processes.

There are exceptions to these classifications. For example, AI systems used for military, defence, national security, scientific research and development, or purely personal, non-professional purposes do not fall under the scope of application of the AI Act. Moreover, if an AI system is high-risk under Annex III of the AIA but poses no significant risk to the health, safety, or fundamental rights of individuals, such as when it performs a narrow procedural task, it may not be classified as an HR AIS. However, any HR AIS from Annex III that performs profiling of individuals is not subject to this exception and is still considered an HR AIS.
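For illustration only (this is a simplification, not legal advice), the classification logic described above can be sketched as a small decision function. The attribute names are hypothetical shorthand for the Act's criteria, not legal terminology:

```python
# Illustrative sketch of the high-risk classification logic described above.
# Attribute names are hypothetical simplifications of the AIA's criteria.

from dataclasses import dataclass


@dataclass
class AISystem:
    safety_component_annex_i: bool  # safety component of (or itself) an Annex I
                                    # product requiring third-party assessment
    annex_iii_area: bool            # deployed in an Annex III area (biometrics,
                                    # education, employment, law enforcement, ...)
    significant_risk: bool          # poses a significant risk to health, safety
                                    # or fundamental rights
    performs_profiling: bool        # profiles natural persons


def is_high_risk(s: AISystem) -> bool:
    """Approximate the Annex I / Annex III high-risk test from the text above."""
    if s.safety_component_annex_i:
        return True
    if s.annex_iii_area:
        # Annex III systems escape the high-risk label only if they pose no
        # significant risk; the profiling carve-out removes that escape route.
        if s.performs_profiling:
            return True
        return s.significant_risk
    return False
```

For example, an Annex III system that performs only a narrow procedural task with no significant risk (`AISystem(False, True, False, False)`) is not classified as high-risk, while the same system with profiling (`AISystem(False, True, False, True)`) is.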

The AIA also distinguishes between AI systems with minimal to no risk and those that require adherence to specific transparency rules based on risks of impersonation or deception (such as chatbots, generative AI for synthetic content, deepfakes and emotion recognition systems). These specific transparency rules include information obligations and mandatory marking of the generated content.

Overall, the majority of the AI Act’s obligations focus on high-risk AI systems. The regulation mainly targets AI providers and deployers, who now face more clearly defined requirements and obligations for AI development, marketing and use.

In addition to these categories, the AIA imposes specific obligations for general-purpose AI (GPAI) models. These include additional documentation requirements, creating a copyright policy and summarising the training content of the model. GPAI models with a systemic risk are subject to additional obligations such as model evaluations, mitigating systemic risks, tracking and reporting serious incidents, and ensuring cybersecurity protection.

The AIA’s phased application timeline extends until 2027:

  • 2 February 2025: The prohibition on AI systems deemed to pose an unacceptable risk takes effect.
  • 2 August 2025:
    • Member States must designate national competent authorities to oversee AIA implementation and market surveillance. At the EU level, the AI Office will handle enforcement of the obligations on GPAI models. Sanctions will also start applying.
    • Rules for general-purpose AI models will come into force.
  • 2 August 2026: Most AIA rules take effect, except for those with specific later dates.
  • 2 August 2027: Classification rules for high-risk AI systems from Annex I and their obligations will be fully enforced.

Non-compliance with the AIA can result in substantial fines: up to €35 million or 7% of a company’s total worldwide annual turnover for the preceding financial year, whichever is higher, for prohibited practices. Other violations may incur fines of up to €15 million or 3% of annual turnover.
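The fine mechanism is "whichever is higher": the fixed cap or the percentage of worldwide annual turnover. A worked example (the turnover figure is hypothetical):

```python
# Illustrative calculation of the maximum fines described above:
# the higher of a fixed cap and a percentage of worldwide annual turnover.

def max_fine(turnover_eur: float, cap_eur: float, pct: float) -> float:
    """Return the higher of the fixed cap and pct * turnover."""
    return max(cap_eur, pct * turnover_eur)


# Hypothetical company with EUR 1 billion worldwide annual turnover:
fine_prohibited = max_fine(1_000_000_000, 35_000_000, 0.07)  # 7% of 1 bn = 70 m
fine_other = max_fine(1_000_000_000, 15_000_000, 0.03)       # 3% of 1 bn = 30 m
```

For smaller companies the fixed cap dominates: with EUR 100 million turnover, 7% is only EUR 7 million, so the ceiling remains EUR 35 million for prohibited practices.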