06.10.2025

Is compliance enough? Rethinking ethics in the age of the AI Act

How high is ethical AI on your organisation’s agenda? With the arrival of the AI Act, the discussion about responsible AI seems to have shifted from ethics to compliance. What does this mean for the impact of AI, and how can ethics and law strengthen and support each other? On 23 September the ‘AI and ethics in practice’ learning community came together to reflect on these questions. The session began with two presentations, followed by a discussion. This report outlines the key insights.

Ethics and law as complementary forces

In the first presentation, Julia Straatman (digital ethics researcher at the Data School, Utrecht University) examined the relationship between ethics and law, the challenges of applying them to fast-moving technologies such as AI, and how organisations can create room for ethical reflection alongside legal compliance.

Straatman stressed that ethics and law are closely linked. Many laws were developed from ethical standpoints: rules against theft or discrimination began as moral judgements before being turned into enforceable norms. Law crystallises ethics, but it is slower to adapt, a problem that is illustrated by AI: when the AI Act was first drafted, it didn’t cover general-purpose systems like ChatGPT, which were added only later. This shows why, when the law can’t keep up, ethics is essential.

In practice, Straatman has observed a shift among governmental organisations: in the past, questions centred on ethical reflection (What is the right thing to do?), while now they are increasingly about compliance (How do we meet the requirements of the AI Act?). Compliance is important, but reducing debates to box-ticking exercises risks leaving wider ethical issues unaddressed. Legal frameworks cannot cover everything, so organisations must deliberately create space for ethical reflection.

Straatman closed her presentation with the following practical recommendations:

  • Always keep in mind the intended purpose of technology. Technology is a tool, not a goal in itself. Conversations about purpose help anchor ethical reflection.
  • Formalise ethical deliberation using structured tools (such as FRAIA). These tools create room for ethical reflection while also providing documentation that supports compliance efforts. This ensures that ethics and compliance strengthen rather than weaken each other.

Ethics and law are not competing domains but complementary forces. As technology outpaces legislation, ethics provides the necessary space to deliberate on what is socially desirable, while law crystallises these deliberations into enforceable rules. The key message: don’t let compliance overshadow ethics – keep both in play.

Reflections on ethics and regulation

The second speaker was Shazade Jameson, who currently works as a Project Officer at the UNESCO Ethics of AI Unit and leads the ‘AI Ready Flemish Public Administration’ project. Her presentation was a personal reflection on the topic and therefore not a reflection of UNESCO’s position.

Jameson started by highlighting three key challenges commonly faced in the public sector when it comes to AI implementation: technical infrastructure and capacity, legal compliance, and the ethical questions raised by the use of AI in public administration.

Jameson emphasised that ethics is often viewed as a ‘nice to have’, and not treated as a key requirement. This tendency has only increased with the rise of legislation, as regulatory frameworks tend to push ethical reflection to the background. For example, the AI Act outlines which AI practices are prohibited and identifies high-risk systems, but this legal approach doesn’t automatically ensure ethical practices. Jameson also talked about the need to consider an AI literacy baseline and how we define that, referring to UNESCO’s upcoming ‘Ethical AI Competency Framework’ to help public sector actors develop the necessary knowledge to engage with AI responsibly.

Jameson also observed that uncertainty around AI is growing across all sectors, sometimes leading to discomfort for organisations. As a response, Jameson encouraged connecting with peers to collectively explore practical implementation and reflect on deploying AI more ethically. Taking time to ask broader societal questions, such as what kind of future we want AI to help build, is essential. Jameson concluded by pointing to two UNESCO resources, the ‘Ethical Impact Assessment’ and the ‘Readiness Assessment Methodology’, which are part of the ‘Recommendation on the Ethics of Artificial Intelligence’ and available online, with updates planned later this year.

Discussion

The Q&A raised some practical and ethical challenges of AI. On measuring the impact of ethics, the speakers stressed that numbers alone are not very meaningful: what matters is how systems affect people, from users to citizens, and how ethical concerns influence policy, strategy and decision making. Procurement was also a concern: buying a system is often seen as proof of quality by employees and users, but this is not always the case. Suggestions included more independent testing of claims, stronger contracting practices, clearer guidance, and involving suppliers in ethical deliberation processes.

Concerns were also expressed about the exceptions in the AI Act, particularly the provisions allowing biometric categorisation for law enforcement. While generally banned, such exceptions risk being misused, especially in more authoritarian contexts. This points to a deeper issue: regulating AI as a market product overlooks wider questions of social justice. The discussion made clear that organisations should consider not only what the law allows but also the broader ethical consequences of how AI is deployed.

‘AI and ethics in practice’ learning community

Four times a year, the 'AI and ethics in practice' learning community exchanges experiences on a current topic. You can find more information about the next meetings on the calendar page.


About

Image by Lone Thomasky & Bits&Bäume from the betterimagesofai library. Licence: CC BY 4.0

AI attribution: This work was primarily human-created. AI was used to make stylistic edits, such as changes to structure, wording and clarity. AI was prompted for its contributions, or AI assistance was enabled. AI-generated content was reviewed and approved by the authors. The following model(s) or application(s) were used: ChatGPT-5.

Authors

Wout Vermeir
Shannen Verlee
Sultan Erdogan
Jonne van Belle