Future of Privacy Forum – Automated Decision-Making Under the GDPR: Practical Cases from Courts and Data Protection Authorities
The Future of Privacy Forum has published a report analysing cases regarding automated decision-making under Article 22 of the General Data Protection Regulation (GDPR). The report is based on an extensive review of more than 70 cases, including 19 judgments, 50 decisions, individual opinions and general guidance from data protection authorities, and other policy documents from regulatory authorities in 18 EEA Member States, the UK and the European Data Protection Supervisor (EDPS).
What: study
Impact score: 5 (but with references to documents with impact score 1, e.g. judicial decisions)
For whom: policy makers, organisations using or developing tools based on automated decision-making, citizens
URL: https://fpf.org/wp-content/uploads/2022/05/FPF-ADM-Report-R2-singles.pdf
Summary
Article 22 of the GDPR specifically applies to decisions “based solely on automated processing, including profiling, which produces legal effects” concerning an individual or “similarly significantly affects” that individual. The Future of Privacy Forum published a report analysing case-law under the GDPR as applied to real-life cases involving Automated Decision-Making (ADM). The research is based on publicly available judicial and administrative decisions and regulatory guidelines across EU/EEA jurisdictions and the UK.
The report is divided into three parts. The first part explores the context and key elements of Article 22 and other relevant GDPR provisions that have been applied in ADM cases.
Examples of cases covered include:
- Automated grading of students
- Distribution of social benefits and tax fraud detection
- Automated credit scoring
- Algorithmic management of gig workers
- Automated assessment of job seekers’ chances for employment
- Processing biometric data for automated identification
Part two looks into how the dual threshold required by Article 22 GDPR has been interpreted and applied in practice. Two conditions must be met: the processing of personal data must be “solely automated”, and the ADM must either produce “legal effects” concerning the data subject or “similarly significantly” affect the data subject. If either condition is not met, the processing does not fall under Article 22 GDPR.
The research shows that even where courts and DPAs decided that the ADM at issue did not fall under Article 22 GDPR because it failed to meet the required criteria, they still enforced other relevant provisions of the GDPR, such as the principles of transparency, fairness, data minimisation and purpose limitation.
The third part explores three specific scenarios where individuals are more inclined to challenge ADM systems: the workplace (managing employees and contractors, and hiring processes); automated facial recognition (both in the public interest and for commercial purposes); and credit scoring. The report indicates that the GDPR protects individuals against unlawful practices in these scenarios even where Article 22 is not applicable. In addition, each section briefly refers to related policy proposals introduced by the EU to address each of these scenarios (e.g. the AI Act, the Platform Workers’ Directive, the Consumer Credit Directive), which creates potential regulatory overlap.
The report’s conclusion lays out the legal interpretation and application trends that emerged from the research and points out remaining areas of legal uncertainty that European policymakers or the CJEU may clarify in the future.
Update
The FPF published a blog post on the interplay between the GDPR and the AI Act and the lessons that can be learned from the case-law report.