policy monitor

The Netherlands – Report on Algorithmic Risks

The Dutch Data Protection Authority drafted a report on algorithmic risks. The report provides a compact and understandable overview of the current risks and associated governance challenges of algorithm deployment in the Netherlands. It aims to raise awareness and improve knowledge of the risks associated with algorithms and AI, and of the related risk management. The report focuses on algorithms that may entail risks to individuals, groups or society as a whole, regardless of the domain or sector within which they are applied. The report also examines international frameworks and other policy initiatives on AI.

What: paper/study

Impact score: 5

For whom: citizens, companies, researchers, policymakers

URL: https://www.autoriteitpersoonsgegevens.nl/uploads/2023-07/Rapportage%20Algoritmerisico%27s%20Nederland%20-%20juli%202023.pdf

Key takeaways for Flanders/Belgium: In anticipation of the AI Act, a Directorate Coordination Algorithms (NL: “Directie Coördinatie Algoritmes”) has been set up in the Netherlands (as a branch of their data protection authority). It is this directorate that published the report under discussion. This decision could inspire Belgian policymakers to start thinking about which Belgian authorities should be involved in supervising algorithmic systems and how the various authorities will cooperate and coordinate their activities.

Summary

To increase awareness of algorithmic risks and knowledge of the related risk management, the Dutch Data Protection Authority published a report on algorithmic risks. The report focuses on algorithms that may entail risks to individuals, groups or society as a whole, regardless of the domain or sector within which they are applied. Spreading awareness and starting a dialogue on the risks of AI and algorithms is considered a crucial step in mitigating those risks. A core message of the report is that the responsible development and use of algorithms or AI applications with a societal impact is possible, provided it is accompanied by appropriate risk management and accountability practices.

Recent developments

The report begins by describing the latest general developments in the Netherlands regarding algorithmic risks. The undeniable increase in algorithm usage contrasts strongly with the significant decrease in trust in AI. Dutch experts have voiced their concerns about the political and public eagerness to develop AI as fast as possible, given that the risks involved are often difficult to mitigate. Additionally, the report states that organisations at the forefront of deploying new AI technology and new algorithmic systems and applications should be aware of the extra effort this requires of them. In order to provide or deploy useful and responsible systems and applications in society, they should engage in meaningful risk management and lead by example. Moreover, adequate regulation is needed for all kinds of algorithms: generative AI, narrow AI, manipulative AI, and so on. This part therefore concludes that all organisations applying AI or algorithms with societal impact need to make efforts towards transparency, engage with society and stakeholders, and anticipate new laws and regulations.

Use cases

The second part of the report highlights several applications of algorithms in the Netherlands.

  • One example is the “Criminality Anticipation System” used by the Dutch police to predict crimes committed in a certain region within the Netherlands. The system has been criticised because of the potential biases embedded in it, such as a focus on certain ethnicities. The report proposes increasing the level of external transparency (e.g. disclosing the variables used) to allow scrutiny and to assess the effectiveness and social benefit of the system.
  • Another example is the use of pattern recognition in the monitoring of illegal payment activities in Dutch payment systems. This recognition system puts suspected payments on hold; however, the patterns it relies on might lead to unwanted discriminatory effects. The report advises mapping the related risks to fundamental rights and being more transparent towards users.
  • The last example addressed in the report covers the application of algorithms for the detection of fraud within social services. As in the previous examples, this algorithm is susceptible to violating fundamental rights by basing its fraud detection on criteria such as profession or housing. According to the report, preventing this so-called technological chauvinism and safeguarding public values requires performing balancing exercises.

Legal frameworks and coordination point

Lastly, the report discusses key takeaways from and for existing and future legal frameworks related to the risk management of AI and algorithms. On the international level, the report focuses on the AI Act and the AI treaty that is being negotiated within the Council of Europe. Regarding the AI Act, the report points out several shortcomings of the proposed regulation, mainly the lack of external evaluation of certain high-risk AI systems (such as predictive policing or the use of AI in education); these would only be subject to self-assessment. Regarding the AI treaty, the report stresses that, because the treaty will also apply outside the EU and involve e.g. the United States and Japan, it has the potential to become an international standard for the development and use of algorithms in both the public and the private sector. On the national level, the report refers to the human rights impact assessment introduced in the Netherlands (IAMA/FRAIA). Currently, the IAMA/FRAIA is the only comprehensive assessment framework for algorithms in the Netherlands; however, its process is long and time-consuming, which may be disproportionate for simple algorithmic applications. The report therefore proposes that public organisations rank their algorithmic applications and start by assessing the most important ones.

Finally, the report clarifies the role of the Directorate Coordination Algorithms (NL: “Directie Coördinatie Algoritmes”, DCA) that has been set up in the Netherlands as a branch of the data protection authority. Its task is to coordinate the supervision of algorithms with a focus on public values and fundamental rights (e.g. preventing discrimination). It does so by mapping and signalling risks, establishing networks and intensifying coordination with other supervisory authorities. The focus in risk signalling is on identifying and analysing cross-sectoral risks and effects of algorithms and related policy and regulatory developments. The DCA network intends to involve other regulators, sector representatives, interest groups, civil society, and scientific and specialised organisations and institutes.



Take-away for Flanders & Belgium

For Flanders and Belgium, this last point regarding the DCA is especially relevant. The AI Act will require that certain authorities be made responsible for its supervision and enforcement. However, the complex Belgian division of competences will complicate this, and it will not be straightforward to set up the necessary coordination and cooperation mechanisms. The idea of establishing a (federal) coordination point could therefore be an interesting model as a first step in AI risk management in Belgium.