Netherlands - Dutch Authorities publish final advice on AI supervision
The Dutch Authority for Digital Infrastructure (“Rijksdienst Digitale Infrastructuur” or “RDI”) and the Dutch Data Protection Authority (“Autoriteit Persoonsgegevens” or “AP”) have published a final joint advice on the supervision of AI systems in the Netherlands. The authorities recommend relying on existing market surveillance authorities for the supervision of high-risk AI systems under Annex I of the AI Act, with the RDI providing coordination and support on technical aspects. The supervision of high-risk AI systems under Annex III and of prohibited AI systems would be performed by the AP and the RDI, since this type of supervision is less familiar to the existing market surveillance authorities. The AP would also take up supervision of the transparency obligations in Article 50 AI Act, in cooperation with existing authorities with overlapping competences.
What: Policy-oriented document
Impact score: 3
For whom: AI providers and users, supervisory authorities and policy makers
URL: https://www.rdi.nl/actueel/nieuws/2024/11/7/eindadvies-inrichting-ai-toezicht-nederland
The final joint advice on the supervision of AI systems under the AI Act in the Netherlands was published by the Dutch Authority for Digital Infrastructure (“Rijksdienst Digitale Infrastructuur” or “RDI”) and the Dutch Data Protection Authority (“Autoriteit Persoonsgegevens” or “AP”) in November 2024. The advice was drafted in cooperation with several other Dutch supervisory authorities and further elaborates on two earlier advice documents published by the RDI and AP (first advice and second advice).
The final advice starts by recapping the previous recommendations of the RDI and AP: the organisation of supervision should protect the public interests identified in the AI Act as effectively and efficiently as possible, and new tasks can only be taken up by authorities if resources and capacity are (made) available.
Authorities for high-risk AI systems under Annex III and prohibited AI systems
The authorities suggest that the supervision of AI systems should, in general, take the existing goals, roles and powers of current supervisors into account as much as possible. That being said, existing domain- and sector-specific supervisors that do not currently perform market surveillance may not be a good fit to supervise Annex III high-risk AI systems and prohibited AI systems, given, among other things, the scope of supervision under the AI Act (covering the entire value chain) and the type of supervision required (market surveillance). For these reasons, the authorities suggest designating the AP and the RDI, as well as the Dutch Central Bank and the Dutch Authority for the Financial Markets (“AFM”), as market surveillance authorities for these areas. These market surveillance authorities can in turn cooperate with sector- and domain-specific supervisors for the relevant AI systems where needed (e.g. in education). Under this allocation, the AFM could, for instance, enforce the ban on manipulative or exploitative practices in the financial sector. Cooperation with these main market surveillance authorities should also strengthen the powers and efforts of the sector- and domain-specific supervisors, for example by allowing them to identify priorities for market surveillance, flag specific risks or impacts of AI use, and signal relevant developments. To accomplish this, the AP and RDI suggest that a clear legal basis be created for information exchange and governance, and that the supervisors be provided with sufficient resources.
Authorities for product-specific AI systems under Annex I
With regard to high-risk AI systems under Annex I of the AI Act, i.e. AI systems that are linked to products, or are themselves products, covered by existing product harmonisation legislation, the AP and RDI suggest designating the existing competent market surveillance authorities that already supervise those products to also act as market surveillance authorities under the AI Act. The AP and RDI do note that further analysis is required to determine the capacity these market surveillance authorities will need for this supervision, particularly as the scope of existing product legislation may be more limited (e.g. in terms of the actors covered) than the scope of the AI Act.
As a starting point, each market surveillance authority should have the resources to acquire the domain-specific knowledge necessary to assess the impact of AI on its sector, while it may turn to the RDI for support on AI-specific knowledge if this is lacking in its own organisation. The RDI would thus take up a coordinating role among the Annex I market surveillance authorities, which includes removing ambiguities, ensuring full coverage, reporting in the European context and building knowledge and expertise. Finally, the RDI would also take up enforcement for AI systems that are not limited to a single product under Annex I, either by enforcing the AI Act itself or by doing so together with other market surveillance authorities.
High-risk AI systems listed in Annex I of the AI Act may also need to undergo conformity assessments, carried out by conformity assessment bodies, before they are placed on the market or put into service. Those conformity assessment bodies need to be designated by notifying authorities. The AP and RDI suggest maintaining the role of the existing notifying authorities under Annex I product legislation in designating conformity assessment bodies. If a conformity assessment body wishes to be designated only for assessments under the AI Act, the RDI would be responsible for designating it. The RDI would additionally centralise the knowledge required to designate conformity assessment bodies, act where overarching action is needed, and act where a designation is not limited to a single Annex I product or falls outside the scope of Annex I.
Supervising AI systems in which GPAI is integrated
The RDI and AP consider that the supervision of AI systems in which general-purpose AI (“GPAI”) models are integrated will generally lie with national market surveillance authorities, while the supervision of GPAI models themselves is taken up by the AI Office. This requires a certain level of expertise on GPAI technology from market surveillance authorities, as well as cooperation with the AI Office. The RDI and AP suggest that they themselves take up the coordinating role in developing that expertise to achieve efficient and effective supervision, and that they then support other market surveillance authorities when these encounter AI systems in which GPAI models are integrated.
Supervising AI systems with increased transparency obligations
Providers and deployers of certain AI systems must meet specific transparency obligations under Article 50 AI Act. These AI systems may include both high-risk and non-high-risk AI systems. With this in mind, the AP and RDI suggest that the supervision of these transparency obligations be allocated as much as possible to a single market surveillance authority. This is in part because they expect that many of the relevant AI systems will not be considered high-risk, and in part because other relevant systems may be high-risk under several of the categories in Annexes I and III. Additionally, the transparency obligations focus on preventing manipulation and deception and on informing natural persons of their interactions with AI systems, a focus that differs considerably from the rules on high-risk AI systems. The authorities therefore suggest centralising the supervision of the transparency obligations with the AP, given its role as a market surveillance authority for Annex III systems, its role as the coordinating algorithm supervisor in the Netherlands, and the link between some of the transparency obligations and the processing of personal data. As an exception, the Dutch Central Bank and the AFM would still act as supervisors where certain transparency obligations relate to AI systems provided by financial institutions, since they also act as market surveillance authorities for those institutions.
The AP and RDI recognise that the transparency obligations overlap with existing frameworks enforced by sector- and domain-specific supervisors. This will require close cooperation between the relevant authorities, for example when aligning supervision under the Digital Services Act with the transparency obligations under the AI Act (both of which can relate to disinformation spread by platform providers), or when combining supervision under consumer law or media law with the supervision of AI systems that generate synthetic content.
National governance and cooperation
The AP and RDI suggest that a mandatory national structure for cooperation and information sharing between the different authorities is needed to optimise supervisory capacity. Existing rules on market surveillance and the AI Act itself can serve as a basis for cooperation between market surveillance authorities and authorities protecting fundamental rights, but cooperation with sector- and domain-specific supervisors will require additional arrangements with their own legal basis. The Dutch supervisory authorities will continue to discuss this joint cooperation structure moving forward. In addition, they suggest that the implementing Dutch legislation should allow for a multilateral cooperation agreement for AI supervision as well as for separate bilateral cooperation arrangements between supervisory authorities. Finally, the Dutch legislator should provide a broad legal basis for information sharing between the supervisory authorities, removing possible legal barriers.
European representation, priorities and outstanding issues
The AP and RDI stress the importance of involving all market surveillance authorities, as well as sector- and domain-specific supervisors, in the European AI Board. A national secretariat should be set up, in which the AP and RDI take on a coordinating role, to prepare input for the AI Board. The advice further outlines several outstanding issues and priorities, including clarifying among supervisors the scope of the supervision of “critical infrastructure”. The authorities also suggest that at least the AP and the College for Human Rights (“College voor de Rechten van de Mens” or “CRM”) already be identified as authorities protecting fundamental rights. Finally, the advice urges the Dutch ministers to prioritise the designation of market surveillance authorities (especially for the prohibited AI practices under the AI Act), to identify the authorities protecting fundamental rights, and to lay the foundations for cooperation agreements and information exchange between supervisors.