Image: a person holding a stack of papers, overlaid with a white banner bearing the European Commission logo, indicating that this policy monitor item concerns an administrative decision in the European Union.
15.07.2025

European Commission - The General-Purpose AI Code of Practice

This voluntary Code of Practice aims to align GPAI models with the principles of the EU AI Act. It provides guidance for model providers to comply with the Act’s obligations, adhere to EU copyright law, and assess and mitigate potential systemic risks. The Code introduces a two-tier system for model providers: all providers must meet transparency and copyright compliance obligations, while stricter compliance measures apply to providers whose models pose systemic risks to society. This is the final version of the General-Purpose AI Code of Practice; an extensive process preceded its drafting.

What: policy-oriented document

Impact score: 3 – voluntary, yet likely to shape compliance strategies for major providers

For whom: policymakers, businesses, model providers, right-holders, researchers and supervisory authorities

URL: https://digital-strategy.ec.europa.eu/en/policies/ai-code-practice 

The General-Purpose AI Code of Practice has been designed under the framework of the EU AI Act. Under Article 3(44) of the EU AI Act, "general-purpose AI" refers to models that can be used for multiple purposes and perform generally applicable functions such as image/video generation, audio/text manipulation, code generation, or pattern recognition. The Act applies to any provider placing these models on the EU market or putting them into service in the EU, regardless of their established location (Article 2). The Code aims to serve as a dynamic and adaptable framework to ensure compliance with the Act, align AI innovation with European values, and stimulate collaboration among diverse stakeholders, including AI providers, policymakers, businesses, and civil society. The obligations do not apply to models released under a free and open-source license if their parameters, architecture, and usage information are publicly available. This exemption does not apply to general-purpose AI models with systemic risk (Article 53(2)).

Key Features

The Code is built on high-level principles that embody EU values, are future-proof, and promote the support and growth of the AI safety ecosystem. Proportionality is an important feature here: it should ensure that obligations imposed on AI providers align with the risks and capacities of different actors, such as SMEs versus larger organizations. The Code also incorporates flexibility to adapt to ongoing technological advancements.

A central aspect of the Code is its taxonomy of systemic risks. According to the text, general-purpose AI models with systemic risk can potentially cause significant large-scale negative impacts due to their capabilities, deployment contexts, or usage. The taxonomy includes cybersecurity threats, risks related to persuasion and manipulation, large-scale discrimination, and challenges posed by rapid, unregulated technological advancements. It also addresses the nature of systemic risks, detailing factors such as whether harm is intentional and how novel the risk is. This taxonomy serves as a foundation for providers to assess and address the potential negative impacts of their AI models.

Targeted Commitments

The Code sets out specific commitments for AI providers. Providers of general-purpose AI models are required to ensure the following:

1. Transparency
 

  • Document for Oversight: Maintain detailed technical documentation for submission to the AI Office or competent authorities upon request. This Model Documentation Form (AI Act Annexes XI and XII) should include information about the model’s design, training process, testing outcomes, intended applications, energy use, and acceptable use policies (a minimal sketch of such a record follows this list).
  • Inform Downstream Providers: Provide downstream AI system developers with sufficient information to understand the capabilities, limitations, and compliance requirements of the model. Documentation must include licensing terms, acceptable use policies, and distribution methods.
  • Public Transparency: Encourage disclosure of certain non-sensitive information to the public to promote accountability and trust.
  • Ensure quality, integrity and security of information: Documented information should be controlled for quality and integrity, retained as evidence of compliance with the obligations of the AI Act and protected from unintended alterations.
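
As a concrete illustration of the kind of record this commitment implies, the sketch below models a Model Documentation entry as a simple data structure. All field names and values are illustrative assumptions, not the official Annex XI/XII schema.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelDocumentation:
    """Hypothetical record of the kind of information the Model
    Documentation Form asks providers to keep. Field names are
    assumptions, not the official Annex XI/XII schema."""
    model_name: str
    provider: str
    design_summary: str                 # architecture and design choices
    training_process: str               # data sources, methods, compute
    testing_outcomes: str               # evaluation results and known limits
    intended_applications: list[str]    # uses the provider supports
    energy_use_kwh: float               # estimated training energy use
    acceptable_use_policy_url: str      # published acceptable use policy
    licensing_terms: str                # terms passed to downstream developers

def export_for_ai_office(doc: ModelDocumentation) -> str:
    """Serialise the record so it can be handed over upon request."""
    return json.dumps(asdict(doc), indent=2)

doc = ModelDocumentation(
    model_name="example-gpai-1",
    provider="Example AI BV",
    design_summary="Decoder-only transformer, 7B parameters.",
    training_process="Web text filtered for lawfully accessible sources.",
    testing_outcomes="Benchmarked on public suites; limits documented.",
    intended_applications=["text generation", "code assistance"],
    energy_use_kwh=1.2e6,
    acceptable_use_policy_url="https://example.com/aup",
    licensing_terms="Research and commercial use under provider licence.",
)
print(export_for_ai_office(doc))
```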
     

2. Rules Related to Copyright
 

  • Internal Copyright Policies: Develop, keep up to date, and enforce internal policies that adhere to EU copyright law, ensuring all training data and outputs respect intellectual property rights.
  • Scraping: Reproduce and extract only lawfully accessible copyright-protected content, and exclude pirate websites from web crawling. A dynamic list of hyperlinks to such websites will be published on an EU website.
  • Upstream Compliance: Conduct due diligence when sourcing datasets from third parties; verify compliance with copyright restrictions.
  • Downstream Compliance: Mitigate risks of copyright infringement by downstream users or applications, such as overfitting models on copyrighted content.
  • Respect for Text and Data Mining (TDM) Exceptions: Ensure lawful access to copyrighted material and compliance with rights reservations expressed through machine-readable means, such as robots.txt files or other appropriate machine-readable opt-out mechanisms (a minimal crawler sketch follows this list).
  • Transparency in Copyright Measures: Publish details about how copyright compliance is achieved, including information about data sources, authorizations, and handling of complaints.
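
To make the scraping and TDM commitments concrete, here is a minimal sketch of a crawler gate that skips domains on an exclusion list and honours robots.txt rights reservations before fetching. The domain names and bot name are hypothetical; the robots.txt check uses Python’s standard urllib.robotparser.

```python
from urllib import robotparser
from urllib.parse import urlparse

# Hypothetical stand-in for the dynamic EU-published list of pirate sites.
EXCLUDED_DOMAINS = {"pirate-example.org", "infringing-example.net"}

def may_crawl(url: str, user_agent: str = "ExampleGPAIBot") -> bool:
    """Return True only if the URL is outside the exclusion list and
    its robots.txt does not reserve rights against this crawler."""
    domain = urlparse(url).netloc
    if domain in EXCLUDED_DOMAINS:
        return False  # pirate site: never reproduce or extract

    # Honour machine-readable opt-outs expressed via robots.txt.
    rp = robotparser.RobotFileParser()
    rp.set_url(f"https://{domain}/robots.txt")
    try:
        rp.read()
    except OSError:
        return False  # fail closed if the reservation file is unreachable
    return rp.can_fetch(user_agent, url)

if __name__ == "__main__":
    for candidate in ("https://example.com/articles/1",
                      "https://pirate-example.org/dump"):
        print(candidate, "->", "crawl" if may_crawl(candidate) else "skip")
```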

Considerations for providers of GPAI models with systemic risk

Providers of General-Purpose AI Models with Systemic Risk must adhere to the following:

  1. Governance & Framework
    1. Safety & Security Framework: Draft, implement and update a comprehensive risk-management framework and notify the AI Office. Providers should, notably, define trigger points for extra evaluations and map risk tiers & safety margins. (Commitment 1)
    2. Governance / Responsibility: Assign clear roles for oversight, ownership, monitoring & assurance up to board level (SMEs may combine roles). (Commitment 8)
       
  2. Systemic-Risk Assessment Cycle
    1. Systemic-risk identification: Follow a structured process to list all potential systemic risks and draft detailed scenarios. (Commitment 2)
    2. Systemic-risk analysis: Gather evidence, run state-of-the-art model evaluations, model risk pathways, and estimate probability/severity. (Commitment 3)
    3. Risk-acceptance determination: Decide, using predefined risk tiers, whether each risk and the overall risk are “acceptable”; the model should be stopped or adjusted if the risks are deemed unacceptable (a minimal sketch of this decision step follows this list). (Commitment 4)
       
  3. Mitigations and controls
    1. Safety mitigations: Implement robust, adversary-resistant safety controls across the model’s lifecycle to minimize risks associated with dangerous model capabilities, such as misuse in cybersecurity or manipulation. Providers should, notably, introduce data filtering, refusal tuning, staged release and quantitative guarantees. (Commitment 5)
    2. Security mitigations: Achieve an adequate Security Goal against external and insider threats. Providers should follow the detailed controls mentioned in Appendix 4, including weight encryption, access controls, and red-teaming for vulnerabilities. (Commitment 6)
       
  4. Reporting, monitoring and transparency
    1. Safety and Security Model Report: File a comprehensive Model Report with the AI Office before placement and keep it updated. (Commitment 7)
    2. Serious-incident Reporting: Track, investigate and report incidents within 2–15 days depending on impact (critical infrastructure disruption, cybersecurity breach, death, etc.). (Commitment 9)
    3. Extra documentation and transparency: Keep detailed technical files (architecture, evaluation code, mitigations) for 10 years. Publish summaries when needed. (Commitment 10)
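
To illustrate the risk-acceptance determination referenced in the list above (Commitment 4), the sketch below maps estimated probability and severity onto predefined risk tiers and derives an overall go/no-go decision. The tier names, thresholds, and scores are invented assumptions for illustration; the Code does not prescribe numeric values.

```python
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    ACCEPTABLE = "acceptable"
    ACCEPTABLE_WITH_MITIGATIONS = "acceptable with extra mitigations"
    UNACCEPTABLE = "unacceptable: stop or adjust the model"

@dataclass
class SystemicRisk:
    name: str
    probability: float  # estimated likelihood in [0, 1]
    severity: float     # estimated large-scale impact in [0, 1]

def determine_tier(risk: SystemicRisk) -> Tier:
    """Assign a predefined risk tier. Thresholds are illustrative
    assumptions, not values taken from the Code."""
    score = risk.probability * risk.severity
    if score < 0.05:
        return Tier.ACCEPTABLE
    if score < 0.2:
        return Tier.ACCEPTABLE_WITH_MITIGATIONS
    return Tier.UNACCEPTABLE

risks = [
    SystemicRisk("large-scale manipulation", probability=0.1, severity=0.3),
    SystemicRisk("cyber-offence uplift", probability=0.5, severity=0.6),
]
for r in risks:
    print(f"{r.name}: {determine_tier(r).value}")

# The overall determination fails if any single risk is unacceptable.
overall_ok = all(determine_tier(r) is not Tier.UNACCEPTABLE for r in risks)
print("overall risk acceptable:", overall_ok)
```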

Enforcement

Although the Code is formally voluntary, it is designed to function as an evidentiary safe-harbour: providers must maintain granular internal records (e.g. ten-year technical files and Model Documentation) that can be handed to the AI Office or national authorities whenever they ask, and they must deliver updated, un-redacted “Model Reports” to the AI Office within five business days of each confirmed update, plus at least every six months for their most capable models.

Day-to-day accountability is reinforced by user-facing tools: downstream developers can compel missing documentation within a hard 14-day service window, while right-holders get a digital complaints desk that providers must handle diligently and non-arbitrarily. For systemic-risk models, the safety chapter layers on incident pipelines and protects whistleblowers who report hidden risks. All supporting evidence (e.g. architecture descriptions, evaluation results, mitigation logs) must be archived for at least ten years. If a provider misses these deadlines, withholds information, or its documentation reveals gaps, the AI Office can escalate to the binding enforcement toolkit of the AI Act (compliance orders, model suspensions, or turnover-based fines), so the practical incentive is reputational at first, but regulatory and financial once the Code’s transparency hooks expose non-compliance.
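
The enforcement mechanics above boil down to a handful of clocks. The sketch below computes illustrative due dates for the deadlines named in this section (five business days for an updated Model Report, the 14-day documentation window, roughly six-monthly refreshes); only the time spans come from the text, everything else is an assumption.

```python
from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    """Advance `days` business days, skipping weekends (holidays ignored)."""
    current = start
    while days > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday..Friday
            days -= 1
    return current

today = date(2025, 7, 15)  # illustrative reference date
print("Updated Model Report due:", add_business_days(today, 5))
print("Downstream documentation due:", today + timedelta(days=14))
print("Next periodic Model Report:", today + timedelta(days=182))  # ~6 months
```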

Next steps

The Code was published on July 10, and guidelines on key concepts related to general-purpose AI models will be published soon. The Code must still be endorsed by Member States and the Commission; following this, model providers are invited to sign and comply with it.

Deepen your knowledge of the AI Act

Do you want to learn more about regulation on data and AI? The Knowledge Centre Data & Society/Centre for IT and IP Law offers an on-demand course in which we'll guide you through the legal landscape related to data, algorithms and AI.