
Guest blog. The GDPR and the Artificial Intelligence Regulation – it takes two to tango?

16.11.2021

Summary: On 18 June 2021, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) issued a joint opinion on the proposal for an AI Regulation. In this blog post, Jenny Bergholm discusses some findings of the EDPB/EDPS Joint Opinion 5/2021, in particular as regards its relationship with the EU data protection framework.

Author biography: Jenny Bergholm was a Research Associate at the Centre for IT & IP Law (CiTiP) of KU Leuven, holding an LL.M. from the University of Helsinki with a focus on EU and data protection law. Having worked for both the European Parliament and the European Commission in Brussels, she focused her research on data protection, cybersecurity and AI. She now works as a Case Handling Assistant at DG Competition.

With this opinion piece we want to contribute to the ongoing debate on AI. This contribution was previously published on 6 July on the website of CiTiP KU Leuven. The Knowledge Centre republishes this blog (with the author's permission) because we believe it is of interest to our audience; its content does not necessarily reflect the views of the Knowledge Centre.

The GDPR and the Artificial Intelligence Regulation – it takes two to tango?

BY JENNY BERGHOLM - 06 JULY 2021

The recently adopted proposal for an AI Regulation has already been the topic of widespread discussion, and so has the GDPR. This contribution discusses how the proposed AI Regulation inadequately addresses the risks that AI systems pose to privacy and data protection, and fails to integrate the comprehensive framework of the GDPR into the most talked-about proposal for regulation in a long time.

The recently published proposal for an EU AI Regulation (also discussed here and here) is the latest addition to the European Commission’s Digital Strategy, published in April 2021. It is the first of its kind in the world and covers many aspects of great importance for society. This contribution discusses the proposed AI Regulation, and in particular its relationship to the General Data Protection Regulation (the GDPR).

First, a few basic concepts of the AI Regulation should be introduced. If adopted, the Regulation will enjoy a broad scope: it will apply to all providers supplying AI systems within the EU internal market (Article 2 and Article 3(9) and (10)) and to all users of AI systems (Article 2) for commercial purposes (Article 3(4)).

Perhaps most importantly, the proposal distinguishes between high-risk AI systems and other AI systems. An AI system is defined as software developed with machine learning, logic- and knowledge-based, or statistical approaches, which can “generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with” for a given set of “human-defined objectives” (Article 3(1) and Annex I). An AI system is classified as high-risk if it is intended to be used as a safety component of a product, or is itself a product covered by one of the pieces of legislation listed in Annex II, or falls under one of the categories specifically mentioned in Annex III. The categories of Annex III comprise, among others, the use of biometric identification, safety components used in certain critical infrastructures, and AI systems used for educational purposes and in employment-related contexts.

Whereas certain artificial intelligence practices are directly prohibited, most emphasis is put on high-risk AI systems. These are subject to purely internal-market conformity procedures, involving notified bodies and the CE marking of conformity. Other AI systems, those not considered high-risk, are subject to transparency obligations (Article 52). High-risk AI systems that make use of techniques involving the training of models with data will need to be developed on the basis of training, validation and testing data sets that meet certain quality criteria.

This distinction matters not only for the obligations imposed, but also for the processing of personal data by AI systems. The quality criteria just mentioned concern data governance, and many of them will serve data subjects. Moreover, the GDPR plays a role in bias monitoring (Article 10(5)) and in the processing of biometric data. It is also pointed out directly that even if a system falls within the scope of the AI Regulation as a high-risk AI system, this does not mean that it is lawful from a data protection perspective (Recital 41). In their joint opinion, the EDPB and the EDPS highlight this and criticise the lack of alignment with the GDPR. They further propose that compliance with the data protection framework be made part of the assessment for CE conformity. The European Commission’s White Paper on AI had already identified risks to data protection and privacy rights, for example through the use of AI to retrace or de-anonymise data, even for data sets which do not, as such, include personal data. The High-Level Expert Group on AI likewise highlighted the need for AI systems to guarantee privacy and data protection, both for information provided by the user of a system and for data generated about the data subject while using AI-based tools.

The AI Regulation takes a risk-based approach, with the scale of risk graded from unacceptable to minimal. Risk-based approaches are also common in cybersecurity legislation, and hints thereof can be found in the 2020 Cybersecurity Strategy. From a data protection perspective, it can be noted that some risk-management elements also exist in the GDPR, such as the data protection impact assessment, as discussed by Gellert and the Article 29 Working Party. Data protection law, too, is developing in a risk-based direction, as can be seen in the recently published Standard Contractual Clauses for international data transfers, which allow a risk-based approach.

Nevertheless, the rights and obligations of the GDPR stretch further than high-risk operations alone. The issue is that even though the AI Regulation is intended to complement the GDPR, it provides very little clarity on the processing of personal data by AI systems other than high-risk systems. The proposal includes a declaration that the proposed AI Regulation does not affect the application of the GDPR, but more guidance in the legal text on how the Regulation should be applied with regard to the processing of personal data is missing. The lack of a clear reference to the GDPR and other data protection legislation has been raised by the EDPB and the EDPS in their joint opinion. This is especially the case for data processing by AI systems which are not considered high-risk. Even for high-risk AI systems, the proposal mainly considers special categories of personal data. A clear example is Article 10 of the draft Regulation, concerning data and data governance. The article regulates the training, validation and testing data sets of high-risk AI systems. It refers to data collection, but makes no reference to the rights and obligations of data subjects. Only in Article 10(5) is the GDPR mentioned: providers of systems may process special categories of personal data where strictly necessary for the purpose of ensuring bias monitoring, detection and correction. In such cases, the provider of the system needs to adopt appropriate safeguards, such as technical limitations on the re-use of data and privacy-preserving measures. However, read in the light of the GDPR, many of these safeguards are already obligatory in themselves under the GDPR and its principles of data minimisation and privacy by design and by default.

Data processing by AI systems that are not high-risk, and not related to biometrics or bias, seems to be left solely to the already existing provisions of the GDPR. These include, among others, the principles of lawfulness, fairness and transparency, purpose limitation, data minimisation, storage limitation, integrity and confidentiality, and accountability (Art. 5 of the GDPR). The GDPR has become a benchmark for privacy and data protection rights, and is one of the success stories of recent EU regulation. It is clear that the GDPR applies when personal data is processed with the help of artificial intelligence. Even if that fact is not in doubt, the relationship between the GDPR and the AI Regulation should be clarified. The (long-term) quality of the AI Regulation would benefit from clear and strong references to the principles of the GDPR, and so would the innovation community, the internal market and data subjects.

ENSURESEC has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 883242.