Policy Prototyping the AI Act: human oversight - workshop report

13.09.2024

On 4 September 2024, the Knowledge Centre organised its second workshop on policy prototyping the AI Act. A group of 14 interdisciplinary professionals began drafting mock compliance measures for human oversight based on three different use cases. The final results and the related report will be presented at the end of this year.

Article 14 of the EU AI Act (“AIA”) emphasizes the importance of developing AI systems with a human-centric approach to facilitate human oversight and enable their safe deployment. More precisely, the article concerns:

  • The obligation to design and develop an AI system to enable effective human oversight;
  • The types of measures to build human oversight into an AI system by design or to operationalise it through organisational processes.

Given the importance of article 14, the Knowledge Centre decided to focus this year’s Policy Prototyping project on the human oversight requirement for high-risk AI systems in the AI Act.

In the course of this project, we aim to:

  • Examine and assess the envisaged human oversight requirements in detail;
  • Create operational documents that include prototype decision support processes and prototype instructions for human oversight;
  • Gather feedback on these human oversight requirements and their applicability.

The method

To gather feedback on the human oversight requirement, the Knowledge Centre hosted an in-person legal design workshop at the FARI Institute in Brussels. The workshop brought together 14 participants (AI professionals and experts with experience in facilitating human oversight and drafting policies) along with 4 facilitators. Over the course of a full day, they collaborated in 3 separate groups to create first versions of various prototypes. Using a step-by-step legal design approach, the participants began by clarifying the human oversight requirement for a use case concerning a high-risk application, as defined by the AI Act. This involved identifying the relevant stakeholders for each use case as well as the associated challenges and opportunities. In the next phase, they brainstormed and laid the basis for the prototype compliance documents, with a focus on the potential deployer.

Participants

A total of 18 people, including 4 facilitators from the Knowledge Centre, participated in the workshop: 3 representatives of AI developers, 2 consultants, 5 legal experts, 3 technical experts, 1 educational expert and 4 academic researchers.

  • Use case 1 – Feedback system for student papers

The first use case concerns an AI tool that provides students with feedback on their draft papers. Students can upload their drafts, after which the tool generates personalized feedback using evaluation criteria set by the professor or teacher. Students can then revise their papers based on this feedback and resubmit them for further evaluation and updated feedback, allowing for continuous improvement with each submission. The teacher only receives the final version of the paper, along with the feedback given on that last version.

  • Use case 2 – Medical device for cardiac microvascular disease

In this use case, an AI system is intended to aid the diagnosis of cardiac microvascular conditions. It concerns an image-enhancing AI system capable of imaging heart microstructures at very high resolution, with the potential for the system to also provide diagnostic functions.

  • Use case 3 – Risk indicator for residential burglaries

The last use case focuses on an AI system that predicts the risk of residential burglaries in specific zones based on several features. Using this risk prediction, the AI system provides police officers with recommended patrolling routes.

Preliminary insights from the workshop

The operationalisation of human oversight under the AI Act is highly dependent on the context in which it is applied and on the actors who must exercise that oversight and receive the related information. A thorough exploration of the target users and their possible concerns regarding an AI system is necessary when deciding on appropriate measures and preparing the accompanying documentation.

An important consideration is whether human oversight should be integrated 'by design' or 'by organisation'. 'By design' means that human oversight is integrated directly into the AI system by the AI provider, while 'by organisation' means that human oversight is delegated to the deployer/user, who has to implement appropriate procedures (as suggested by the provider). Human oversight 'by design' increases the provider's responsibilities and liability, while human oversight 'by organisation' places the burden of oversight primarily on the deployer/user of the AI system. Preferences may vary: AI providers may lean towards shifting oversight responsibilities to deployers, while users may favor built-in safeguards.
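
To make the distinction more tangible, below is a minimal, purely illustrative Python sketch of what oversight 'by design' could look like: a confidence gate shipped inside the system that routes uncertain outputs to a human reviewer before they take effect. All class names, thresholds and labels are invented for this sketch; under the 'by organisation' approach, the provider would instead document such a threshold and review procedure for the deployer to implement in its own workflows.

```python
from dataclasses import dataclass

# Hypothetical illustration only: an oversight gate built into the
# system 'by design'. All names here are invented for this sketch.

@dataclass
class Prediction:
    label: str
    confidence: float

class OversightGate:
    """Routes low-confidence outputs to a human reviewer before release."""

    def __init__(self, confidence_threshold: float = 0.85):
        self.confidence_threshold = confidence_threshold

    def review(self, prediction: Prediction) -> Prediction:
        if prediction.confidence >= self.confidence_threshold:
            # High-confidence output is released automatically.
            return prediction
        # Below the threshold, a human must confirm or override the output.
        answer = input(
            f"Model suggests '{prediction.label}' "
            f"(confidence {prediction.confidence:.0%}). Accept? [y/n] "
        )
        if answer.strip().lower() == "y":
            return prediction
        # Human overrides the model's suggestion with their own decision.
        return Prediction(label=input("Enter corrected label: "), confidence=1.0)

if __name__ == "__main__":
    gate = OversightGate()
    final = gate.review(Prediction(label="high burglary risk", confidence=0.62))
    print(f"Released decision: {final.label}")
```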

The provider must also consider any trade-offs related to human oversight measures and assess how they impact both system performance and practical outcomes. For example, participants pointed out that a balance should be sought between the benefits of AI systems, such as their speed, objectivity and efficiency, and the added value of enabling effective human oversight.

Additionally, to enable effective oversight, it is crucial to clarify the importance of individual model features. This involves providing (detailed) explanations of how specific features contribute to the model's overall output (e.g. decisions, predictions, or risk assessments). For instance, understanding why certain features are weighted more heavily than others, or why a particular set of inputs leads to a specific outcome, can help human reviewers assess the validity and fairness of the output and decide whether or not to intervene.
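
As a rough illustration of what such an explanation could look like in practice, the sketch below uses scikit-learn's permutation importance on a toy model, with feature names loosely inspired by use case 3. The data, model and feature names are all invented for illustration; a real high-risk system would require a validated explainability approach, not this toy.

```python
# Minimal sketch: surfacing feature contributions for human reviewers
# via scikit-learn permutation importance. Data and names are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["prior_incidents", "lighting_level", "vacancy_rate"]
X = rng.normal(size=(200, 3))
# Toy target that depends mostly on the first feature.
y = (X[:, 0] + 0.2 * rng.normal(size=200) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Show how much each feature drives the prediction, so a reviewer can
# judge whether the weighting looks plausible before acting on the output.
for name, importance in sorted(
    zip(feature_names, result.importances_mean), key=lambda p: -p[1]
):
    print(f"{name}: {importance:.3f}")
```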

Questions were also raised about who is best suited to perform oversight of high-risk AI systems, especially in high-stakes environments where personal biases may still come into play and influence decision-making. External oversight or multi-layered supervision was suggested as a potential solution.

Finally, the workshop illustrated the importance of multidisciplinarity. Participants from different backgrounds interacted with one another, which proved important when developing the preliminary prototypes.

Next steps

During the workshop, rough drafts of the prototypes were conceived. These drafts will now be further developed by experts from various groups in accordance with the AI Act. In autumn, the Knowledge Centre will organise feedback sessions (possibly online) that are open to the public (around mid-October to mid-November; dates to be announced). These sessions will take the form of interviews during which participants and other interested parties can give feedback on the developed prototypes and art. 14 AI Act. This should eventually lead to a report presenting the prototypes and related policy recommendations.

This blog was prepared by Hanne Schutyser, summer intern at the Knowledge Centre Data & Society.
