brAInfood: How Watertight are Watermarks?
Publication Date: July 2024
It is becoming increasingly difficult to distinguish AI-generated text, images, and videos from human-made content, and such content is reaching ever broader audiences. One example is deepfakes, in which real people appear to do or say things they never actually did or said.
This blurring of the line between reality and fiction is worrying, as fake information can sway public opinion, and there are broader concerns about misuse. It is therefore important to be able to recognise when content is AI-generated. In this brAInfood, you will learn how AI-generated content can be recognised and how European policymakers are trying to regulate it. Scroll down to download and read the full article.
How can AI-generated content be recognised?
AI-generated content can be identified through markers, such as watermarks, that are recognisable either by humans directly or by computers running detection software. In this brAInfood, you will read about some examples of these methods.
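To give a flavour of how machine detection can work: one family of text-watermarking schemes nudges the generating model to prefer a pseudo-randomly chosen "green list" of words, so a detector can later count how many words land on that list. The sketch below is a minimal, hypothetical illustration of this idea; the hash-based list, seeding by the previous word, and the threshold are simplifying assumptions, not the scheme of any specific tool.

```python
import hashlib

def in_green_list(prev_word: str, word: str) -> bool:
    # Pseudo-randomly assign roughly half of all words to the "green
    # list", seeded by the preceding word (a toy stand-in for the
    # keyed hash functions real schemes use).
    digest = hashlib.sha256((prev_word + "|" + word).encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    # Fraction of word transitions that land on the green list.
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(in_green_list(w1, w2) for w1, w2 in zip(words, words[1:]))
    return hits / (len(words) - 1)

def looks_watermarked(text: str, threshold: float = 0.7) -> bool:
    # Ordinary text should score near 0.5; a generator that favours
    # green-listed words pushes the score well above that, which is
    # what the detector looks for.
    return green_fraction(text) >= threshold
```

A real detector would use a proper statistical test over many tokens rather than a fixed threshold, but the principle is the same: the watermark is invisible to a human reader yet measurable by software.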
AI-generated content subject to rules
The rapid rise of generative AI has put policymakers on alert. The European AI Act imposes transparency obligations: when content such as a deepfake is AI-generated, this must be communicated clearly, accessibly, and distinctly. In this brAInfood, you will find a brief overview of the European measures; for a comprehensive overview, you can consult this publication.
With brAInfood, the Knowledge Centre Data & Society wants to provide easily accessible information about artificial intelligence. brAInfood is available under a CC BY 4.0 license, which means that you can reuse and remix this material, provided that you credit us as the source.
Cover photo by Alan Warburton via Better Images of AI