Policy monitor

European Parliament - Study on how to tackle deepfakes in European policy

The European Parliament's Panel for the Future of Science and Technology (STOA) has requested a study on how to tackle deepfakes in European policy. The study is prepared for, and addressed to, the Members and staff of the European Parliament as background material to assist them in their parliamentary work.

What: Study

Impact score: 4

For whom: Policy makers, government, government agencies

URL: https://www.europarl.europa.eu/RegData/etudes/STUD/2021/690039/EPRS_STU(2021)690039_EN.pdf

Summary

The report first assesses the current state of the art of deepfake techniques and their likely development over the next five years. It then maps the societal context in which these techniques are used and identifies the benefits, risks and impacts of deepfakes. On the legal side, the report describes the current regulatory landscape around deepfakes, identifies the remaining regulatory gaps, and sets out policy options that could address those gaps.

Regulatory landscape and gaps

The study states that the regulatory landscape around deepfakes constitutes a complex web of norms, comprising both hard and soft regulation at both the EU and Member State levels. At the European level, the study lists the most relevant policy trajectories and regulatory frameworks: the proposed AI regulatory framework, the GDPR, the copyright regime, the e-Commerce Directive, the Digital Services Act, the Audiovisual Media Services Directive, the Code of Practice on Disinformation, the Action Plan against Disinformation and the European Democracy Action Plan. For example, under the GDPR, where legitimate interests do not apply, using personal data to create and disseminate deepfakes requires the informed consent of the persons depicted; without it, the creators risk violating the GDPR.

According to the study, the current rules and regulations offer some guidance for mitigating the potential negative impacts of deepfakes. The problem, however, is that the legal route remains very challenging for victims. Several actors are typically involved in the lifecycle of a deepfake, and more often than not they act anonymously, which makes it very hard for victims to hold them accountable. Moreover, victims may lack the resources needed to start judicial proceedings, leaving them vulnerable. The study finds that platforms could play a vital role in helping victims identify malicious actors. Relatedly, the authors argue that technology providers also bear responsibility for safeguarding the positive and legal use of their technologies. The report's overall conclusion is thus that policy makers should take the different dimensions of the deepfake lifecycle into account when seeking to mitigate its potential negative impacts.

Policy options

The report further identifies various policy options for mitigating the negative impacts associated with deepfakes. Five dimensions of policy measures are distinguished, taking into account the different phases of the ‘deepfake lifecycle’:

  • The technology dimension covers policy options aimed at the technology underlying deepfakes and the actors involved in producing and providing it. Here the study recommends clarifications of, and additions to, the AI framework proposal. Relevant options are: further clarifying which AI practices should be prohibited under the AI framework and/or regulating deepfake technology as a high-risk AI system.
  • The creation dimension targets the creators of deepfakes, or in AI framework terminology 'the users of AI systems'. Here again, additional measures within the AI framework are possible: clarifying guidelines on how deepfakes must be labelled, restricting the exceptions to the labelling requirement, and banning certain applications altogether. This dimension also addresses those who use deepfake technology for malicious purposes, in other words 'the perpetrators'. These actors often hide behind anonymity and cannot easily be identified, so they cannot be expected to comply voluntarily with the labelling requirement introduced in the AI framework proposal. Policy measures may therefore include extending existing criminal law frameworks, as well as diplomatic action and international agreements under which foreign states and their intelligence agencies refrain from using deepfakes.
  • The circulation dimension covers policy options addressing the circulation of deepfakes, by formulating possible rules and restrictions on the dissemination of (certain) deepfakes. This dimension is particularly important because the dissemination and circulation of a deepfake largely determine the scale and severity of its impact. Online platforms, media and communication services play a crucial role here, and the study recommends introducing responsibilities and obligations for these platforms and other intermediaries. The policy options mainly fall within the domain of the proposed DSA and include obliging platforms and other intermediaries to have deepfake detection software in place, as well as obligations regarding labelling and take-down procedures, slowing down the speed of circulation, and so on.
  • The target dimension revolves around improving the protection of the victims of malicious deepfakes, including institutionalising support for them. The study showed that victims' rights are often protected by law but can prove very difficult to enforce. The report therefore outlines several options for improving this protection, including strengthening the capacity of data protection authorities to respond to the use of personal data in deepfakes and developing a unified approach to personality rights within the European Union.
  • The audience dimension is the final crucial dimension for policy makers seeking to limit the risks and impacts of deepfakes. The issue here is that deepfakes often transcend the individual level and can escalate to the group or even societal level. The dangers depend on the audience's response: will people believe the deepfake, disseminate it further when they receive it, and lose trust in institutions? Options for concrete measures include labelling trustworthy sources, investing in media literacy and fostering technological citizenship.

Some of the options are already covered by the proposed AI framework and the proposed DSA, others would require further specification or expansion of those frameworks, and still others go beyond the scope of these two bodies of EU regulation. The study provides a table with an overview of all the options and indicates the EU legislation, or other level of governance, into which each policy option could be incorporated. In addition to the five dimensions above, the table also lists some institutional measures.