
Guest blog: 5 trends in AI Governance from the 2024 CPDP conference


The annual Computers, Privacy and Data Protection (CPDP) conference in Brussels brings together top lawmakers, regulators, lawyers, engineers, legal consultants, policymakers, academics and non-governmental organisations from Europe and around the world. The 2024 edition, held from 22 to 24 May, focused on AI governance and comprised 89 interdisciplinary panel discussions and 42 workshops, with around 1,360 participants.

Photo by Sophie Lenoir.

Trending topics at CPDP tend to resurface in European rulemaking, so those who want to stay abreast of developments in AI governance pay close attention to what is discussed here. We noted the following five trends in AI governance:

1. Citizen control over the use of their data for AI development

So-called data spaces (or ‘trusted data intermediaries’, TDIs) were a prominent topic of discussion during CPDP 2024. Initiatives such as SOLID and MyData provide individuals with digital ‘vaults’ to store data about themselves and control who gets access to those data, and under what conditions. Developing AI systems requires large amounts of (personal) data, and TDIs give individuals a more equitable negotiating position in the use of their data for such development. At the very least, AI deployers will have to explain how someone’s data will be used.
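The access-control idea behind such vaults can be sketched in a few lines of Python. All names here are hypothetical illustrations; real systems such as SOLID build on web standards and decentralised identity rather than in-memory objects:

```python
from dataclasses import dataclass, field


@dataclass
class DataVault:
    """Hypothetical personal data vault: the owner grants access per purpose."""
    data: dict
    grants: dict = field(default_factory=dict)  # requester -> allowed purposes

    def grant(self, requester: str, purposes: set) -> None:
        """The owner decides who may use the data, and for what."""
        self.grants[requester] = purposes

    def request(self, requester: str, purpose: str) -> dict:
        """Data are released only if the owner granted this exact purpose."""
        if purpose in self.grants.get(requester, set()):
            return self.data
        raise PermissionError(f"{requester} has no grant for purpose '{purpose}'")


vault = DataVault(data={"age": 34, "city": "Brussels"})
vault.grant("ai-lab.example", {"model-evaluation"})
vault.request("ai-lab.example", "model-evaluation")  # allowed
# vault.request("ai-lab.example", "ad-targeting")    # raises PermissionError
```

The point of the sketch is the asymmetry it reverses: the requester must name a purpose up front, and the individual, not the deployer, holds the switch.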

While many practical questions about TDIs remain open (such as, how do we prevent people from being coerced into giving access to their data, and how do we verify that data are used only as promised?), various European laws already refer to such systems, particularly the Data Governance Act and the Data Act. In short, TDIs may soon enter common parlance.

2. Independent algorithmic auditing

Throughout the conference, a number of independent not-for-profit organisations presented their approaches to algorithmic auditing, most prominently AlgorithmWatch, Algorithm Audit and AI Forensics. After all, transparency about the use of AI has a limited effect when most people lack the skills to evaluate what algorithms do. Independent auditors can play an important role in holding organisations that provide or deploy AI systems accountable.

Independent algorithm auditors create tools to detect biases, exchange knowledge on best practices and investigate disparate impacts of AI systems. They also publish policy advice and news reports to highlight and address malpractices. In doing so, they allow for a ‘reverse gaze’ on AI systems: not only will AI systems observe people, but people can also observe the systems.
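One common measure in such disparate-impact investigations is the ratio of selection rates between groups. A minimal sketch, with invented numbers rather than data from any real audit:

```python
def selection_rate(outcomes):
    """Share of positive decisions (True) in a list of outcomes."""
    return sum(outcomes) / len(outcomes)


def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower to the higher selection rate. Values below 0.8
    are a common red flag (the 'four-fifths rule' from US employment
    practice), though no single threshold settles a bias question."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)


# Toy audit: loan approvals for two demographic groups
group_a = [True] * 60 + [False] * 40   # 60% approved
group_b = [True] * 40 + [False] * 60   # 40% approved
ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")  # prints 0.67 -> below 0.8, flag
```

An external auditor can compute such ratios from a system's observed outputs alone, which is what makes the ‘reverse gaze’ possible without access to the model internals.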

3. Improving AI literacy in organisations deploying AI

The newly minted EU AI Act requires that organisations deploying AI systems ensure their staff have sufficient AI literacy and understand the potential risks for those to whom AI systems are applied. Several panellists at CPDP 2024 questioned whether AI literacy is currently sufficient in organisations that already use AI systems or plan to do so.

Improving basic statistical literacy was suggested as an important first step. The statistical toolbox provides a number of ways to address biases in datasets. For instance, z-tests can be used to evaluate the differences between two approaches to decision-making (e.g., one with and one without AI support).
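As an illustration of that statistical toolbox, a two-proportion z-test comparing positive-decision rates with and without AI support might look as follows (the counts are invented for the example):

```python
import math


def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Two-proportion z-test: do two decision processes differ in their
    rate of positive outcomes? Returns (z statistic, two-sided p-value)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF, via math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value


# Illustrative numbers: 520/1000 approvals with AI support vs 470/1000 without
z, p = two_proportion_z_test(520, 1000, 470, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these toy counts the difference is statistically significant at the conventional 0.05 level; whether a significant difference also signals an unfair bias is exactly the kind of judgement AI-literate staff need to make.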

Improved AI literacy among staff in organisations that use AI applications can prevent serious detrimental outcomes to (groups of) citizens, patients and consumers. It is also a measurable and verifiable performance indicator for organisational compliance.

Ana Pop Stefanija details several approaches to involving citizens. Photo by Sophie Lenoir.

4. Oversight and supervision beyond the big players

Under the General Data Protection Regulation, supervisory authorities preferred to address missteps by big players (think Google, Apple, Amazon, Meta …) to set examples for everyone else. However, when it comes to AI systems, smaller players can wreak large-scale havoc just as well. The infamous Dutch scandals around automated decision-making – the SyRI case and the Childcare Benefits case – did not involve AI produced by big players. Oversight and supervision therefore also need to consider smaller players.

In addition to the providers of AI systems, their supply chains need to be scrutinised as well. Important actors in the AI supply chain include training data centres, data brokers, and ‘foundation model’ developers, whose activities are difficult to verify after the fact, even though their mishaps can have far-reaching consequences.

Needless to say, in light of the third trend described above, supervisory authorities also have to ensure their own staff have a high level of AI literacy.

Photo provided by the author, Ine van Zeeland.

5. Synthetic data (somewhat) improve the protection of personal data

When large amounts of personal data are used to train, develop and deploy AI systems, this can create risks for people whose data are involved. Using synthetic data is a way to reduce such risks. Basically, a synthetic data set consists of fake data that have the same distribution of characteristics as a set of real data. For example, rather than using a set of data about car drivers in a certain city, a fake data set is created with the same distribution of driving patterns and/or brands owned (whichever characteristic is relevant to the use of the data).
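A minimal sketch of the idea, assuming we only want to match the marginal distributions of a toy data set (real synthetic-data generators also model the correlations between characteristics):

```python
import random
import statistics

random.seed(42)  # fixed seed so the toy example is reproducible

# Toy 'real' records: weekly driving distance (km) and car brand
real = [{"km": random.gauss(120.0, 30.0),
         "brand": random.choice("AAABBC")}  # brand A most common, C rarest
        for _ in range(2000)]

# Fit simple marginal models to the real data ...
mu = statistics.mean(r["km"] for r in real)
sigma = statistics.stdev(r["km"] for r in real)
brand_counts = {b: sum(r["brand"] == b for r in real) for b in "ABC"}

# ... and sample fresh, fake records from those fitted distributions.
# No synthetic record corresponds to any single real driver.
synthetic = [{"km": random.gauss(mu, sigma),
              "brand": random.choices(list(brand_counts),
                                      weights=list(brand_counts.values()))[0]}
             for _ in range(2000)]

syn_mu = statistics.mean(s["km"] for s in synthetic)
print(f"real mean km: {mu:.1f}, synthetic mean km: {syn_mu:.1f}")
```

Because this sketch samples each characteristic independently, it loses the relationships between them; the tension described below, where greater realism brings greater re-identification risk, arises precisely when generators preserve more of those relationships.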

Synthetic data are not perfectly private, because it may still be possible to detect a unique combination of characteristics that applies to only one individual. Even when identities are removed, people can be identified by their unique behaviour. The more a synthetic data set resembles a real data set, the higher the likelihood that real individuals can be re-identified in it. Unfortunately, for AI development and deployment a more realistic data set is also more useful. Therefore, a balance must be sought.

Not everyone is convinced of the usefulness and protection afforded by synthetic data, but they are a step forward compared to using more easily identifiable personal data, such as pseudonymous data (a real data set in which names are replaced with fake names).

Ine van Zeeland is a PhD researcher within the research group imec-SMIT at the Vrije Universiteit Brussel.

Ine studies organisational practices of data protection, with a focus on the media sector, banking sector, smart cities, and the health sector. Ine was on the Programming Committee of the 2024 CPDP conference.