Trending topics at CPDP tend to find their way into European rulemaking, so those who want to stay abreast of developments in AI governance pay close attention to what is discussed here. We noted the following five trends in AI governance:
1. Citizen control over the use of their data for AI development
So-called data spaces (or ‘trusted data intermediaries’, TDIs) were a prominent topic of discussion during CPDP 2024. Initiatives such as SOLID and MyData provide individuals with digital ‘vaults’ in which they store data about themselves and control who gets access to those data, and under what conditions. Developing AI systems requires large amounts of (personal) data, and TDIs give individuals a stronger negotiating position over the use of their data for such development. At the very least, AI deployers will have to explain how someone’s data will be used.
While many practical questions about TDIs remain open (such as how to prevent people from being coerced into granting access to their data, and how to verify that data are only used as promised), various European laws already refer to such systems, particularly the Data Governance Act and the Data Act. In short, TDIs may soon enter common parlance.
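To make the idea concrete, here is a minimal sketch of how a TDI vault might gate access to personal data: an access grant records who may use the data, for what purpose, and until when. All names and fields are hypothetical and do not reflect the actual APIs of SOLID or MyData.

```python
# Hypothetical sketch of consent-conditioned access in a TDI 'vault'.
# Illustrative only; not the API of any real data-space initiative.
from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class AccessGrant:
    grantee: str   # e.g. "hospital-research-lab"
    purpose: str   # e.g. "train-diagnostic-model"
    expires: date  # grant is void after this date


def may_access(grants: list[AccessGrant], grantee: str,
               purpose: str, today: date) -> bool:
    """Release data only under an unexpired grant matching both the
    requesting party and the declared purpose."""
    return any(g.grantee == grantee and g.purpose == purpose
               and today <= g.expires for g in grants)


grants = [AccessGrant("hospital-research-lab", "train-diagnostic-model",
                      date(2025, 12, 31))]
print(may_access(grants, "hospital-research-lab",
                 "train-diagnostic-model", date(2024, 6, 1)))   # True
print(may_access(grants, "ad-network", "profiling", date(2024, 6, 1)))  # False
```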
2. Independent algorithmic auditing
Throughout the conference, a number of independent not-for-profit organisations presented their approaches to algorithmic auditing, most prominently AlgorithmWatch, Algorithm Audit and AI Forensics. After all, transparency about the use of AI has a limited effect when most people lack the skills to evaluate what algorithms do. Independent auditors can play an important role in holding organisations that provide or deploy AI systems accountable.
Independent algorithm auditors create tools to detect biases, exchange knowledge on best practices and investigate the disparate impacts of AI systems. They also publish policy advice and news reports to highlight and address malpractice. In doing so, they enable a ‘reverse gaze’ on AI systems: not only do AI systems observe people; people can also observe the systems.
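As a hedged illustration of what such bias-detection tooling can look like (a sketch, not the actual tooling of AlgorithmWatch, Algorithm Audit or AI Forensics), the snippet below computes a disparate-impact ratio: the selection rate of the least-favoured group divided by that of the most-favoured group. The data and the 0.8 threshold (the well-known ‘four-fifths rule’) are purely illustrative.

```python
# Minimal disparate-impact check over (group, outcome) decision records.
from collections import Counter


def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: one (group label, positive outcome?) pair per individual."""
    totals: Counter[str] = Counter()
    positives: Counter[str] = Counter()
    for group, selected in decisions:
        totals[group] += 1
        positives[group] += selected
    return {g: positives[g] / totals[g] for g in totals}


def disparate_impact_ratio(decisions: list[tuple[str, bool]]) -> float:
    """Ratio of the lowest to the highest group selection rate."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())


# Hypothetical audit sample: loan approvals per group.
sample = ([("A", True)] * 60 + [("A", False)] * 40
          + [("B", True)] * 42 + [("B", False)] * 58)
ratio = disparate_impact_ratio(sample)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.70 < 0.8 flags possible bias
```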
3. Improving AI literacy in organisations deploying AI
The newly minted EU AI Act requires organisations deploying AI systems to ensure that their staff have sufficient AI literacy and understand the potential risks for those to whom AI systems are applied. Several panellists at CPDP 2024 questioned whether AI literacy is currently sufficient in organisations that already use AI systems or plan to do so.
Improving basic statistical literacy was suggested as an important first step. The statistical toolbox offers a number of ways to detect and address biases in datasets. For instance, a two-proportion z-test can be used to evaluate whether two approaches to decision-making (e.g., one with and one without AI support) produce significantly different outcome rates.
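As a minimal sketch of such a test (with hypothetical approval numbers, not data presented at CPDP), the following computes a two-sided two-proportion z-test using only the Python standard library:

```python
from math import erfc, sqrt


def two_proportion_z_test(successes_a: int, n_a: int,
                          successes_b: int, n_b: int) -> tuple[float, float]:
    """Test whether two decision procedures select at different rates."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pooled rate under the null hypothesis that both procedures are equal.
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value
    return z, p_value


# Hypothetical example: 480 of 1,000 applications approved without AI
# support versus 430 of 1,000 with AI support.
z, p = two_proportion_z_test(480, 1000, 430, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # small p: the procedures likely differ
```

Here z ≈ 2.25 and p ≈ 0.02, suggesting that the difference between the two approval rates is unlikely to be due to chance alone.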
Improved AI literacy among staff in organisations that use AI applications can prevent seriously detrimental outcomes for (groups of) citizens, patients and consumers. It is also a measurable and verifiable performance indicator for organisational compliance.