Guest blog: EDPB-EDPS Opinion: four lessons for the AI Regulation and data protection
Summary: On June 18, 2021, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) issued a joint opinion on the proposal for an AI Regulation. In this blog post, Jenny Bergholm discusses some of the findings of EDPB/EDPS Joint Opinion 5/2021, in particular regarding its relationship with the EU data protection framework.
Author biography: Jenny Bergholm was a Research Associate at the Centre for IT & IP Law (CiTiP) of KU Leuven, holding an LL.M. from the University of Helsinki with a focus on EU and data protection law. Having worked for both the European Parliament and the European Commission in Brussels, she focused her research on data protection, cybersecurity and AI. She now works as a Case Handling Assistant at DG Competition.
Disclaimer: With this opinion we aim to contribute to the ongoing debate on AI. This contribution previously appeared on 22 July on the website of CiTiP KU Leuven. The Knowledge Centre republishes this blog (with the author's permission) because we believe it is of interest to our audience, but its content does not necessarily reflect the views of the Knowledge Centre.
BY JENNY BERGHOLM - 22 JULY 2021
On June 18, 2021, the European Data Protection Board (“the EDPB”) and the European Data Protection Supervisor (“the EDPS”) (together “EDPB/EDPS”) issued a joint opinion on the proposal for a Regulation laying down harmonised rules on artificial intelligence (“AI Regulation”, previously discussed here, here and here). This blog post aims to highlight some of the findings of EDPB/EDPS Joint Opinion 5/2021, in particular on its relationship with the EU data protection framework.
The opinion kicks off by acknowledging the importance of the proposal and the need for regulation in the field of artificial intelligence (“AI”). Following this initial recognition, it offers some very important criticism related to the use of AI systems with regard to data protection. The opinion provides much needed and well-presented guidance on the implementation of data protection principles in AI systems. This contribution is not intended to be exhaustive, but will present four carefully chosen aspects.
Lesson 1: Confirm the overlap
First, and perhaps most important, the EDPB/EDPS “strongly recommend” that the legislator include a statement confirming the applicability of EU data protection legislation to the processing of personal data within the scope of the AI Regulation. This is a welcome statement, considering how little the GDPR is mentioned in the proposal for the AI Regulation.
Lesson 2: Don’t re-invent the wheel – align risk management approaches
Second, the EDPB/EDPS dive into the proposed risk assessment mechanism. They note that the risk-based approach of the proposal should be aligned with that of the GDPR on issues related to the protection of personal data. Specifying this recommendation, the EDPB/EDPS make the very relevant point that AI system providers might not be able to assess all the risks that could become relevant, depending on the different ways of using the AI tool in question, even after it has been placed on the market and in the hands of the end user. Thus, any classification of an AI system as high-risk will “trigger a presumption of high-risk” under the data protection framework. Due to this potential lack of foreseeability and the risk it presents to data protection, the EDPB/EDPS recommend complementing the initial risk assessment required for high-risk AI systems with a “subsequent (more granular) assessment” in the form of a data protection impact assessment under the GDPR. The risk assessment should therefore be carried out not only by the provider of the high-risk AI system, but also by the user (who may simultaneously be the data controller), “considering not only the technical characteristics and the use case, but also the specific context in which the AI will operate”.
Lesson 3: Link CE-marking to data protection compliance
Third, the EDPB/EDPS point out that the proposed use of the CE marking is not linked to compliance with the data protection framework. It is stressed that even if a high-risk system fulfils the safeguards set out in the proposed regulation, this provides no guarantee that it also complies with the GDPR and other relevant data protection law. This observation underpins the recommendation to include in the conformity assessment a requirement to ensure compliance with the GDPR and with Regulation (EU) 2018/1725 on the processing of personal data by the EU institutions (“EUDPR”). This would further help to ensure compliance with the accountability principle.
Lesson 4: Calling for a general ban on intrusive forms of AI
Fourth and last, the EDPB/EDPS direct substantial criticism towards Article 5 of the proposal. Article 5 prohibits certain AI systems whose risks are regarded as unacceptable. The list is exhaustive and covers, for example, AI systems deploying subliminal techniques beyond a person’s consciousness in order to materially distort that person’s behaviour, AI systems that exploit the vulnerabilities of a specific group of persons due to their age or physical or mental disability, social scoring by AI systems used by public authorities or on their behalf, and the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement.
The EDPB/EDPS express concern that Article 5 of the proposal “risks paying lip service to the “values””. This is because the criteria used to identify AI systems of unacceptable risk limit the scope of the prohibition in a way that could render it “meaningless” in practice. Based on the risks identified, the EDPB/EDPS call for a “general ban” on the use of AI for automated recognition of human features in publicly accessible places. This call is based on the risks related to “intrusive forms of AI” that could affect human dignity, such as social scoring and biometric identification of individuals in public spaces. Facial recognition, gait, fingerprints, DNA, voice recognition, keystrokes and other biometric and behavioural signals are also included as examples, and the ban should apply in all contexts. The EDPB/EDPS thereby direct criticism towards one of the fundamental approaches of the proposal: the positive list used to identify prohibited AI systems. They state that they do not see how “this type of practice would be able to meet the necessity and proportionality requirements”, nor how fundamental rights, as interpreted and defined by EU data protection law, the Court of Justice of the European Union and the European Court of Human Rights, could be protected.
These aspects of the opinion will surely be discussed intensely during the coming negotiations, (hopefully) leading to adopted legislation in the next few years. The European Parliament has previously raised concerns with regard to the ethical aspects of AI and the civil liability regime related to AI, also noting the need to ensure consistency with the GDPR. The High-Level Expert Group on AI likewise set out important ethical requirements ensuring the right to privacy and data protection when personal data is processed by AI systems. It is not entirely clear how these recommendations have been incorporated into the proposal.
The Opinion discussed here is one in a series of recent EDPB and EDPS opinions (for example here) criticising proposals put forward by the European Commission in which the position of data protection law has been somewhat left in the shadow. Continuously stressing the need to properly implement the principles of EU data protection law by design in developing technologies is crucial to ensure that the right to data protection and privacy is not watered down to a mere rubber stamp in future legislation. Ensuring consistency and compliance with the existing data protection framework is important for foreseeability for the actors of the digital single market. It is also crucial for the functionality and consistency of EU digital law – in an EU that works for people.
ENSURESEC has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 883242.