Artificial Intelligence Systems and the GDPR: a data protection perspective
In this information brochure, the Belgian DPA discusses the interplay and complementarity of the GDPR and the AI Act in the development and use of AI systems. The brochure provides a definition of AI systems and discusses various GDPR principles that are relevant to providers and deployers of AI systems. For each principle, the brochure also discusses related obligations from the AI Act and provides examples. Finally, the brochure provides user stories to illustrate the different principles.
What: policy-orienting document
Impact score: 3
For whom: AI developers, legal professionals and DPOs, controllers and processors under the GDPR, Belgian authorities
URL: https://www.gegevensbeschermingsautoriteit.be/publications/...
Key takeaways for Flanders
AI providers and users can use this brochure for a high-level overview of the GDPR principles relevant to them and of the associated obligations under the AI Act. They can also draw inspiration from the examples in the brochure for the practical implementation of those obligations. The Knowledge Centre Data & Society also wrote its own guide on AI and data protection in 2020, before the AI Act proposal was published.
The Belgian Data Protection Authority (hereafter “DPA”) published an information brochure with insights on data protection in the development and implementation of AI systems. The brochure addresses both the GDPR requirements that (likely) apply to AI systems and the requirements of the AI Act, which recently entered into force.
Remarkably, the brochure uses its own definition of AI systems: “computer systems specifically designed to analyze data, identify patterns and use that knowledge to make informed decisions or predictions”. The DPA lists several examples of systems falling under this definition, such as email spam filters, virtual assistants and AI-powered medical imaging analysis.
The DPA further discusses GDPR requirements that are relevant to providers and deployers of AI systems and how these requirements are complemented by provisions in the AI Act.
- Lawfulness: the DPA considers the AI Act’s prohibition of certain AI systems, such as social scoring and real-time biometric identification in public spaces for law enforcement, an addition to the legal bases required for processing personal data under the GDPR.
- Fair processing: the DPA sees the principle of fair processing reflected in the AI Act obligations focusing on bias mitigation in high-risk AI systems.
- Transparency: the DPA emphasizes the AI Act’s obligations to provide information about data use in high-risk AI systems, in particular regarding decision-making by AI systems.
- Purpose limitation and data minimization: these principles of personal data processing are, according to the DPA, further strengthened by the concept of intended purpose in the AI Act.
- Data accuracy and up-to-dateness: the data governance obligations in the AI Act, particularly regarding data quality and the avoidance of bias, complement the GDPR obligations that personal data must be accurate and kept up to date.
- Storage limitation: the AI Act contains no additional or complementary requirements on this GDPR principle, according to the DPA.
- Automated decision-making: The GDPR right not to be subject to decisions based solely on automated processing that produce legal effects is further complemented by the human oversight requirements in the AI Act, which apply to both AI providers and deployers, according to the DPA.
- Security of processing: In addition to implementing technical and organizational measures to secure the processing of personal data under the GDPR, providers are required to take robust security measures for high-risk AI systems under the AI Act. The DPA identifies several proactive measures in the AI Act, such as risk assessment and management, continuous monitoring and testing, and meaningful human oversight, to mitigate risks associated with AI systems.
- Data subject rights: The rights to access, rectification, erasure, restriction of processing and data portability in the GDPR are further complemented by the right under the AI Act to receive a clear and meaningful explanation of the role of the AI system in decision-making procedures that use certain high-risk AI systems.
- Accountability: Organizations are required to demonstrate accountability under the GDPR through several measures (policies, a documented legal basis, record keeping, DPIAs, etc.). The DPA finds that the AI Act builds upon this accountability principle through several obligations, including the risk management approach, required fundamental rights impact assessments, documentation obligations and human oversight processes.
Each requirement is further explained in the information brochure using examples and user stories to illustrate the application of the requirements to different scenarios.
Overall, this brochure provides providers and deployers of AI systems with a high-level overview of relevant requirements in the GDPR and the AI Act. The brochure also makes clear that there is significant overlap and complementarity between the obligations of the two regulations. However, it must be noted that the national authorities competent for the AI Act still need to be designated in Belgium, as in other member states, to further interpret and enforce its obligations. The exact role of the Belgian DPA in the enforcement of the AI Act is thus still up for discussion. Existing authorities should take care not to confuse providers and deployers of AI systems about which authorities will ultimately be competent for the AI Act, and not to publish guidance that is (or could become) contradictory to the guidance the competent authorities will eventually publish. Finally, harmonised technical standards will play an important role in helping AI providers comply with the obligations in the AI Act, although they are not mentioned in the brochure.