Report – From Policy To Practice: Prototyping The EU AI Act’s Human Oversight Requirements
After a successful first policy prototyping exercise on the transparency obligations of the AI Act (see our previous report on this), the Knowledge Centre has now turned to the human oversight obligation in Article 14 of the AI Act. In this report, we share findings on the first implementation of the obligation, as well as on the text and clarity of the provision itself.
For a live explanation of the report, you can watch our webinar.

Article 14 AI Act establishes that high-risk AI systems must be designed and developed in such a way as to allow effective human oversight. This obligation means that providers must ensure effective human supervision both through the design of the AI system and through organisational measures – to be determined by the provider and implemented by the deployers responsible for the system's use.
Structure of the report
The report opens with an introduction to the concept of policy prototyping, followed by a detailed look at the phases of the project. It then focuses on the development and evaluation of prototype compliance documents for human oversight, enriched with feedback from experts on the subject. The final section provides legal feedback on Article 14 AI Act and offers an interdisciplinary look at the implications of the AI Act.
Recommendations for human oversight
General best practices
Key findings – compliance documents:
Our findings highlight the importance of:
- Governance structure and role assignment for human oversight;
- Output tailored to the (end) user or individual performing the human oversight;
- Information regarding risks of the AI system and the (required) user profile;
- Layout and language.
The importance of combining these documents with the Instructions for Use was also emphasised.
Key findings – Article 14 AI Act
- The wide degree of flexibility in the article makes it difficult to give concrete substance to the obligation in practice.
- There is a need for examples and guidelines that offer concrete direction.
- Providers need clear criteria/benchmarks to assess whether they comply with the obligation.
- There may also be a lack of technical expertise and AI-awareness among deployers, which further complicates effective implementation.
Conclusion
The report is a tool for understanding and applying the obligation to organise human oversight. By bridging the gap between regulatory expectations and practical implementation, it provides a valuable resource for policymakers and AI professionals alike.
The call for candidates for the next policy prototyping workshop taking place in May 2025 has just been launched. See here for more information.
Contact

wannes.ooms@kuleuven.be

thomas.gils@kuleuven.be