The First Draft Code of Practice on Transparency of AI-Generated Content aims to provide guidance on demonstrating compliance with the transparency obligations for content generated or manipulated by AI systems laid down in Article 50(2) and (4) of the AI Act. These transparency rules will become applicable on 2 August 2026. If approved by the Commission, the final Code will serve as a voluntary tool for providers and deployers of generative AI systems to demonstrate compliance. The current text is a first draft that requires further refinement; the final version of the Code of Practice on Transparency of AI-Generated Content is expected in June 2026.
The Code includes (i) requirements for the marking and detection of AI-generated content, applicable to providers, and (ii) disclosure requirements for deepfakes and certain AI-generated text, applicable to deployers.
The First Draft Code of Practice on Transparency of AI-Generated Content sets out a number of commitments, which are further specified in measures to be implemented. Below, we provide an overview of the most important commitments and requirements, distinguishing between those applicable to providers (1) and those applicable to deployers (2).
Requirements for providers regarding the marking and detection of outputs of generative AI systems
First, providers are required to ensure that the outputs of their generative AI systems are marked with multiple layers of machine-readable marking (a simplified illustration follows the list below), including:
Information added as part of the metadata where the format allows it;
Imperceptible watermarks;
Fingerprinting or logging facilities where necessary to address deficiencies in other marking techniques; and
Other marking techniques for specific modalities (e.g. certificates where metadata embedding is not possible, or markings that remain recognisable when only part of a multimodal output is altered).
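By way of illustration only, and without suggesting that the draft Code prescribes any particular technique or format, the following Python sketch shows what layering a metadata record, a toy imperceptible watermark and a logged fingerprint over a raw pixel buffer might look like. All names, fields and the marking scheme are hypothetical.

```python
# Illustrative sketch of layered machine-readable marking; purely hypothetical,
# not an implementation of any standard referenced by the draft Code.
import hashlib
import json

def build_metadata_record(system_name: str, version: str) -> dict:
    """Layer 1: machine-readable metadata describing the content's origin."""
    return {
        "generator": system_name,
        "generator_version": version,
        "ai_generated": True,
        "marking_spec": "example-provenance/1.0",  # hypothetical identifier
    }

def embed_lsb_watermark(pixels: list[int], payload: bytes) -> list[int]:
    """Layer 2: toy imperceptible watermark writing the payload into the
    least significant bit of each 8-bit greyscale pixel value."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("payload does not fit into the pixel buffer")
    marked = pixels.copy()
    for idx, bit in enumerate(bits):
        marked[idx] = (marked[idx] & ~1) | bit
    return marked

# Usage: mark a dummy 16x16 greyscale image and log a fingerprint of the output.
pixels = [128] * 256
metadata = build_metadata_record("example-genai-system", "2026.1")
marked_pixels = embed_lsb_watermark(pixels, json.dumps({"ai": True}).encode())
fingerprint = hashlib.sha256(bytes(marked_pixels)).hexdigest()  # layer 3: logging/fingerprinting
print(metadata["marking_spec"], fingerprint[:16])
```

In practice, providers would rely on established provenance standards and robust watermarking schemes rather than a trivial least-significant-bit scheme, which is easily destroyed by re-encoding.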
Providers should also ensure that machine-readable markings cannot be removed and that the origin of content can be reliably traced throughout the entire provenance chain. In addition, providers should facilitate compliance by downstream providers by embedding appropriate marking techniques prior to the model’s placement on the market. To support deployers (see section 2), providers should also integrate default labelling functionalities into the system interface at the point of content generation.
Second, providers should implement measures to enable the detection of content as having been generated or manipulated by their AI models or systems (a minimal sketch of such a detection interface follows the list below), including by:
making available detection tools or interfaces to verify whether content is AI-generated;
implementing detection techniques prior to the model’s placement on the market, including forensic methods that do not rely on the presence of active AI marking, in order to support compliance by downstream providers;
providing human-understandable explanations as part of marking and detection results; and
promoting literacy related to AI content provenance and verification.
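Again purely as an illustration, since the Code does not define a detection API, a minimal detection helper might pair a machine-readable verdict with a plain-language explanation. The sketch below reads back the toy watermark from the marking example above; all class and function names are invented.

```python
# Illustrative detection interface for the toy LSB marking shown earlier;
# returns a machine-readable verdict plus a human-understandable explanation.
import json
from dataclasses import dataclass
from typing import Optional

@dataclass
class DetectionResult:
    ai_generated: Optional[bool]   # None means the check was inconclusive
    explanation: str               # plain-language summary for end users

def extract_lsb_payload(pixels: list[int], max_bytes: int = 32) -> bytes:
    """Read the least significant bit of each pixel back into bytes."""
    limit = min(len(pixels) // 8, max_bytes) * 8
    out = bytearray()
    for start in range(0, limit, 8):
        byte = 0
        for i in range(8):
            byte |= (pixels[start + i] & 1) << i
        out.append(byte)
    return bytes(out)

def detect(pixels: list[int]) -> DetectionResult:
    raw = extract_lsb_payload(pixels)
    try:
        marker = json.loads(raw[: raw.rfind(b"}") + 1])  # payload was a short JSON object
        if marker.get("ai") is True:
            return DetectionResult(True, "An embedded machine-readable marking indicates "
                                         "this content was generated by an AI system.")
    except (ValueError, AttributeError):
        pass
    return DetectionResult(None, "No machine-readable AI marking was found. This does not "
                                 "prove the content is human-made, as markings may be absent or stripped.")
```

Forensic detection methods that do not rely on an active marking, which the draft Code also envisages for supporting downstream compliance, would require statistical or model-based classifiers and are beyond the scope of this sketch.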
Third, providers should ensure that marking and detection techniques meet specific technical requirements, including (i) effectiveness without compromising system performance or environmental sustainability, (ii) reliability measured through established metrics (e.g. false positive/false negative detection rates and bit error rates), (iii) robustness against common alterations and adversarial attacks, (iv) interoperability across distribution channels and technological environments, and (v) advancement of the state of the art.
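The reliability metrics referred to in the draft Code are standard evaluation quantities. Purely as an illustrative aid, and using made-up evaluation counts, they can be computed as follows:

```python
# Standard reliability metrics named in the draft Code, computed from
# hypothetical evaluation counts (tp/fp/tn/fn and watermark bit strings).
def false_positive_rate(fp: int, tn: int) -> float:
    """Share of genuine (non-AI) items wrongly flagged as AI-generated."""
    return fp / (fp + tn)

def false_negative_rate(fn: int, tp: int) -> float:
    """Share of AI-generated items the detector fails to flag."""
    return fn / (fn + tp)

def bit_error_rate(embedded_bits: str, recovered_bits: str) -> float:
    """Fraction of watermark payload bits flipped after distribution or editing."""
    assert len(embedded_bits) == len(recovered_bits)
    errors = sum(a != b for a, b in zip(embedded_bits, recovered_bits))
    return errors / len(embedded_bits)

print(false_positive_rate(fp=3, tn=997))         # 0.003
print(false_negative_rate(fn=20, tp=980))        # 0.02
print(bit_error_rate("10110010", "10100010"))    # 0.125
```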
Lastly, providers should implement and maintain a compliance framework with a high-level description of the implemented measures. In addition, they should ensure appropriate testing and monitoring, provide relevant training, and cooperate with market surveillance authorities.
Requirements for deployers regarding the disclosure of deepfakes and certain AI-generated text
Deployers should implement measures to disclose the origin of content using a common taxonomy and icon. Pending the adoption of an EU-wide icon, an interim two-letter acronym (e.g. AI, KI, IA) may be used. The disclosure should be visible and clearly distinguishable at the time of first exposure, appropriately placed in light of the format and context, and should not interfere with the enjoyment of artistic, creative, or fictional works. It should also support a two-level taxonomy distinguishing between fully AI-generated and AI-assisted content. In addition, deployers should ensure that disclosure measures are accessible to all users, including through compliance with visual accessibility standards and the provision of audio cues where appropriate.
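To give a concrete but non-authoritative picture of how a deployer-side labelling component could combine the interim two-letter acronym with the two-level taxonomy, consider the following sketch; the acronym table, level names and rendering are all assumptions, as the Code defines no data format.

```python
# Illustrative disclosure label: interim two-letter acronym plus the draft
# Code's two-level taxonomy (fully AI-generated vs AI-assisted).
from dataclasses import dataclass

INTERIM_ACRONYMS = {"en": "AI", "de": "KI", "fr": "IA"}  # pending an EU-wide icon

WORDING = {
    "ai_generated": "fully AI-generated content",
    "ai_assisted": "AI-assisted content",
}

@dataclass
class DisclosureLabel:
    level: str             # "ai_generated" or "ai_assisted"
    language: str
    accessible_text: str   # spoken/audio cue for accessibility

    def render(self) -> str:
        """Visible text shown at first exposure, e.g. as an overlay or caption."""
        acronym = INTERIM_ACRONYMS.get(self.language, "AI")
        return f"[{acronym}] {WORDING[self.level]}"

label = DisclosureLabel(level="ai_generated", language="de",
                        accessible_text="This image was generated by an AI system.")
print(label.render())   # [KI] fully AI-generated content
```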
Furthermore, like providers, deployers should implement and maintain internal compliance documentation specifying their labelling practices, and should also ensure monitoring, appropriate training, and cooperation with market surveillance authorities.
Lastly, the Code of Practice addresses specific situations relating to both deepfakes and AI-generated text. For deepfakes, for example, the Code includes specific labelling measures depending on the relevant format (e.g. for images, a fixed visible icon; for recorded video, a visible icon or disclaimer; for live video, a continuous visual icon alongside an opening disclaimer; for audio, an audible disclaimer that should be repeated for content longer than 30 seconds). For AI-generated text published in the public interest, the Code of Practice specifies that reliance on the disclosure exception requires human editorial responsibility. In that case, documentation should be kept showing the identity of the person with editorial responsibility, the applicable review measures, the date of approval and a reference to the final approved version.
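For the text exception, the documentation described above could be kept as a simple structured record; the schema below is our own illustration, as the Code does not prescribe one.

```python
# Illustrative documentation record for AI-generated text published in the
# public interest under human editorial responsibility; field names are invented.
import json
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class EditorialResponsibilityRecord:
    responsible_person: str       # identity of the person holding editorial responsibility
    review_measures: list[str]    # review measures applied before publication
    approval_date: date           # date of approval
    approved_version_ref: str     # reference to the final approved version

record = EditorialResponsibilityRecord(
    responsible_person="Editor-in-chief, Example Newsroom",
    review_measures=["fact-checking", "legal review", "final editorial sign-off"],
    approval_date=date(2026, 8, 3),
    approved_version_ref="article-1234, revision 7",
)
print(json.dumps(asdict(record), default=str, indent=2))
```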