European Commission - First Draft of the General-Purpose AI Code of Practice
This Code aims to align general-purpose AI (GPAI) models with the principles of the EU AI Act. It provides guidance for model providers to comply with the Act’s obligations, adhere to EU copyright law, and assess and mitigate potential systemic risks. The Code introduces a two-tier system for model providers. All providers are required to meet transparency and copyright compliance obligations. In addition, stricter compliance measures apply to providers whose models pose systemic risks to society. This draft represents the first of four planned drafting rounds. The finalized Code is expected to be published by May 2025.
What: policy-oriented document
Impact score: 2
For whom: policymakers, businesses, researchers, supervisory authorities
URL: https://digital-strategy.ec.europa.eu/en/library/first-draft-general-purpose-ai-code-practice-published-written-independent-experts
The General-Purpose AI Code of Practice was developed under the framework of the EU AI Act. Under Article 3(44) of the EU AI Act, "general-purpose AI" refers to models that can be used for multiple purposes and perform generally applicable functions such as image/video generation, audio/text manipulation, code generation, or pattern recognition. The Act applies to any provider placing these models on the EU market or putting them into service in the EU, regardless of where the provider is established (Article 2). The Code aims to serve as a dynamic and adaptable framework that ensures compliance with the Act, aligns AI innovation with European values, and fosters collaboration among diverse stakeholders, including AI providers, policymakers, businesses, and civil society. The obligations do not apply to models released under a free and open-source license whose parameters, architecture, and usage information are publicly available; this exemption does not extend to general-purpose AI models with systemic risk (Article 53(2)).
Key Features
The Code is built on high-level principles that embody EU values, are intended to be future-proof, and support the growth of the AI safety ecosystem. Proportionality is a central feature: the obligations imposed on AI providers are meant to scale with the risks and capacities of different actors, such as SMEs versus larger organizations. The Code also builds in flexibility to keep pace with ongoing technological advancements.
A central aspect of the Code is its taxonomy of systemic risks. According to the text, general-purpose AI models with systemic risk can potentially cause significant, large-scale negative impacts due to their capabilities, deployment contexts, or usage. The taxonomy includes cybersecurity threats, risks related to persuasion and manipulation, large-scale discrimination, and challenges posed by rapid, unregulated technological advancement. It also addresses the nature of systemic risks, covering attributes such as whether harm is intentional and how novel the risk is. This taxonomy serves as a foundation for providers to assess and address the potential negative impacts of their AI models.
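To make the taxonomy concrete, the following is a minimal sketch of how a provider's internal tooling might encode these categories and "nature" dimensions. The enum and field names paraphrase the draft's wording and are illustrative assumptions, not official identifiers.

```python
# Hypothetical encoding of the draft Code's systemic-risk taxonomy.
# Names paraphrase the draft text; they are illustrative, not official.
from dataclasses import dataclass
from enum import Enum, auto

class SystemicRiskCategory(Enum):
    CYBERSECURITY_THREATS = auto()
    PERSUASION_AND_MANIPULATION = auto()
    LARGE_SCALE_DISCRIMINATION = auto()
    RAPID_UNREGULATED_ADVANCEMENT = auto()

@dataclass(frozen=True)
class RiskNature:
    """The 'nature' dimensions the draft mentions: intent and novelty."""
    intentional: bool
    novel: bool

# Example: an unintentional but novel manipulation risk.
risk = (SystemicRiskCategory.PERSUASION_AND_MANIPULATION,
        RiskNature(intentional=False, novel=True))
print(risk)
```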
Targeted Commitments
The Code sets out specific commitments for AI providers. Providers of general-purpose AI models are required to ensure the following:
1. Transparency
- Document for Oversight: Maintain detailed technical documentation for submission to the AI Office or competent authorities upon request. This includes information about the model’s design, training process, testing outcomes, intended applications, and acceptable use policies.
- Inform Downstream Providers: Provide downstream AI system developers with sufficient information to understand the capabilities, limitations, and compliance requirements of the model. Documentation must include licensing terms, acceptable use policies, and distribution methods.
- Public Transparency: Encourage disclosure of certain non-sensitive information to the public to promote accountability and trust.
2. Rules Related to Copyright
- Internal Copyright Policies: Develop and enforce internal policies to adhere to EU copyright laws, ensuring all training data and outputs respect intellectual property rights.
- Upstream Compliance: Conduct due diligence when sourcing datasets from third parties, verifying compliance with copyright reservations.
- Downstream Compliance: Mitigate risks of copyright infringement by downstream users or applications, such as overfitting models on copyrighted content.
- Respect for Text and Data Mining (TDM) Exceptions: Ensure lawful access to copyrighted material and compliance with rights reservations expressed through machine-readable means, such as robots.txt files or other standards (a minimal robots.txt check is sketched after this list).
- Transparency in Copyright Measures: Publish details about how copyright compliance is achieved, including information about data sources, authorizations, and handling of complaints.
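As a concrete illustration of honouring a machine-readable rights reservation, the Python sketch below checks a site's robots.txt before fetching a page for text and data mining. The crawler name "ExampleTDMBot" and the URLs are hypothetical, and real TDM opt-out standards may extend beyond robots.txt.

```python
# Minimal sketch: check a robots.txt rights reservation before mining a page.
# "ExampleTDMBot" is a hypothetical crawler name; robots.txt is only one of
# the machine-readable opt-out mechanisms the Code contemplates.
from urllib import robotparser

def may_mine(page_url: str, robots_url: str,
             user_agent: str = "ExampleTDMBot") -> bool:
    """Return True only if the site's robots.txt allows this crawler to fetch the page."""
    parser = robotparser.RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # download and parse the robots.txt file
    return parser.can_fetch(user_agent, page_url)

if __name__ == "__main__":
    page = "https://example.com/articles/some-text"
    if may_mine(page, "https://example.com/robots.txt"):
        print(f"No reservation found; {page} may be mined.")
    else:
        print(f"Rights reserved; skipping {page}.")
```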
Some copyright compliance measures may prove contentious for AI companies. The Code requires signatories to acknowledge that using copyrighted content necessitates authorization from rights holders unless specific exceptions or limitations apply, and providers must adhere to the text and data mining (TDM) provisions outlined in the EU Copyright Directive (Point 4). Both measures are likely to face resistance from the industry.
---------------------------------------------------------------------------------------
Providers of General-Purpose AI Models with Systemic Risk must adhere to the following:
1. Technical Risk Mitigation
- Safety and Security Measures: Establish proportional safety protocols to minimize risks associated with dangerous model capabilities, such as misuse in cybersecurity or manipulation.
- Model Protection: Implement robust security measures to safeguard unreleased model assets, including weight encryption, access controls, and red-teaming for vulnerabilities (an encryption-at-rest sketch follows this list).
- Risk Mapping: Align mitigation strategies with specific systemic risk indicators and tiers of severity.
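By way of illustration, the sketch below encrypts a weights file at rest using the third-party cryptography package. It shows a single layer of the defence-in-depth the Code envisions; key custody (e.g. an HSM/KMS), access controls, and red-teaming are assumed and out of scope here.

```python
# Illustrative encryption-at-rest for unreleased model weights, using the
# third-party "cryptography" package (pip install cryptography). One layer
# only; secure key storage and access controls are assumed, not shown.
from pathlib import Path
from cryptography.fernet import Fernet

def encrypt_weights(weights_path: Path, key: bytes) -> Path:
    """Write an encrypted copy of a weights file and return its path."""
    token = Fernet(key).encrypt(weights_path.read_bytes())
    out = weights_path.with_suffix(weights_path.suffix + ".enc")
    out.write_bytes(token)
    return out

def decrypt_weights(encrypted_path: Path, key: bytes) -> bytes:
    """Decrypt weights in memory; call only inside an access-controlled service."""
    return Fernet(key).decrypt(encrypted_path.read_bytes())

if __name__ == "__main__":
    key = Fernet.generate_key()          # in production: store in an HSM/KMS
    demo = Path("model.safetensors")
    demo.write_bytes(b"\x00" * 16)       # stand-in bytes for real weights
    encrypted = encrypt_weights(demo, key)
    assert decrypt_weights(encrypted, key) == demo.read_bytes()
    print(f"Encrypted copy written to {encrypted}")
```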
2. Risk Assessment
Providers must adopt a continuous risk-assessment lifecycle to address potential systemic risks; a schematic sketch of how such a lifecycle might be recorded follows this list:
- Risk Identification: Use the taxonomy of systemic risks provided in the Code to identify risks associated with their models.
- Risk Analysis: Employ rigorous methodologies to map potential risks, categorize them by severity, and forecast their likelihood.
- Evidence Collection: Collect comprehensive evidence through model evaluations, adversarial testing, and exploratory studies to assess capabilities and limitations. Providers are also expected to undertake exploratory work, such as open-ended red-teaming by qualified third parties.
- Lifecycle Monitoring: Continuously monitor risks throughout the model’s lifecycle, including during training, deployment, and post-deployment stages.
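The following risk-register sketch is purely illustrative: its field names, severity tiers, and lifecycle stages are assumptions, not terms defined in the Code. It simply mirrors the lifecycle described above, identifying a risk against the taxonomy, analysing severity and likelihood, attaching evidence, and re-assessing at each stage.

```python
# Hypothetical risk-register entry mirroring the lifecycle described above.
# All names, tiers, and stages are illustrative assumptions.
from dataclasses import dataclass, field

SEVERITY_TIERS = ("low", "medium", "high", "critical")
LIFECYCLE_STAGES = ("training", "deployment", "post-deployment")

@dataclass
class RiskRecord:
    category: str                 # e.g. "cybersecurity threats" from the taxonomy
    severity: str                 # one of SEVERITY_TIERS
    likelihood: float             # forecast probability in [0, 1]
    evidence: list = field(default_factory=list)  # eval / red-team references
    stage: str = "training"

    def reassess(self, stage: str, severity: str, likelihood: float) -> None:
        """Record a fresh assessment as the model moves through its lifecycle."""
        assert stage in LIFECYCLE_STAGES and severity in SEVERITY_TIERS
        self.stage, self.severity, self.likelihood = stage, severity, likelihood

record = RiskRecord("cybersecurity threats", "medium", 0.1,
                    evidence=["eval-run-001", "third-party red-team report"])
record.reassess("deployment", "high", 0.2)
```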
3. Governance Risk Mitigation
Governance measures aim to ensure systemic risk ownership and accountability:
- Organizational Responsibility: Allocate responsibility for risk management at the executive and board levels.
- Independent Assessments: Facilitate external expert evaluations of systemic risks and mitigation strategies before and after deployment.
- Safety and Security Frameworks: Develop and maintain a comprehensive framework documenting risk mitigation measures, proportional to the scale of systemic risks, so that providers proactively identify and proportionately mitigate those risks.
- Incident Reporting and Transparency: Implement mechanisms for reporting serious incidents, protecting whistleblowers, and enabling public transparency around systemic risks.
Implementation & Enforcement
The development of the Code follows an iterative and consultative process, involving a wide range of stakeholders from industry, academia, and civil society. It incorporates Key Performance Indicators (KPIs) to ensure transparency and risk mitigation. The Code also recognizes the importance of supporting smaller providers, offering tailored measures to help SMEs comply with its requirements.
The AI Office is responsible for enforcing obligations for general-purpose AI model providers and supporting governance bodies within Member States in enforcing requirements for AI systems. It has powers under the AI Act to request information, evaluate models, enforce risk mitigations, recall non-compliant models, and impose fines of up to 3% of global annual turnover or €15 million, whichever is greater.
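For a sense of scale, here is a short worked calculation of that fine ceiling; the turnover figures are made up for illustration.

```python
# Worked example of the fine ceiling stated above: the greater of 3% of
# global annual turnover or EUR 15 million. Turnover figures are made up.
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    return max(0.03 * global_annual_turnover_eur, 15_000_000)

print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 60,000,000 (3% applies)
print(f"EUR {max_fine_eur(100_000_000):,.0f}")    # EUR 15,000,000 (floor applies)
```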
Next Steps
The draft Code marks the beginning of a collaborative process to refine its provisions and ensure its relevance for future technological developments. Stakeholders are invited to provide feedback, which will be used to shape the final version, expected to be released by May 2025.