Report

What can ethics-by-design mean for responsible AI innovation? 5 key takeaways from our learning community

25.03.2025


On 11 March 2025, the ‘AI and ethics in practice’ learning community met to explore the topic of ethics-by-design. If we want to develop or implement responsible AI systems, we need more than guidelines and assessments: we need to integrate ethical principles into the initial design of AI systems and follow up on them iteratively. This use of design methodologies to operationalise ethical principles is called ethics-by-design.

We were joined by two speakers:

  • Rutger De Wilde – Researcher with the User Centered Experiences (UCE) group and lecturer in the Electronics-ICT department at Odisee University of Applied Sciences

  • Dries De Roeck – Product Manager at Helpper

In their presentations, they touched on the following questions: How can ethics be made an ongoing, iterative process? How can ethics-by-design principles be applied to AI development and design? How can we design AI systems that take ethical challenges into account?

In this report, you’ll find our key takeaways from the two presentations and the community discussion that followed.

1. To make ethics part of the design process, focus on user experience and responsible use

One way to approach ethics-by-design in AI development is from a user experience perspective: how can the user experience of AI systems be improved? By creating user-friendly interfaces for AI-driven systems, you help ensure that ethical values and requirements such as personalisation, acceptability, transparency and inclusion are taken into account. Tools such as eye-tracking software and galvanic skin response sensors can be used to analyse how users interact with an AI system, in order to improve their user experience.

For organisations using AI in their services or products, there are some general principles for the responsible use of AI: be transparent about your use of AI, verify the correctness of the output, respect intellectual property and personal and confidential information, and take responsibility for how you use AI. For generative AI specifically, you can use the flowchart below to assess whether a given use is safe.


Image: Flowchart on the safe use of generative AI (slide from Rutger De Wilde’s presentation)
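
To make these principles concrete, here is a minimal sketch (our illustration, not the flowchart above or anything from the presentation) of how the responsible-use principles could be encoded as a pre-use checklist; all names and fields are hypothetical:

```python
# Illustrative sketch only: encodes the responsible-use principles above as a
# simple pre-use checklist. It is NOT the flowchart from the presentation.
from dataclasses import dataclass

@dataclass
class GenAIUseCase:
    discloses_ai_use: bool              # Are you transparent about using AI?
    output_will_be_verified: bool       # Will a human verify correctness?
    respects_ip: bool                   # Is intellectual property respected?
    excludes_confidential_data: bool    # Is personal/confidential data kept out?
    owner_accepts_responsibility: bool  # Is someone accountable for the result?

def unmet_principles(use_case: GenAIUseCase) -> list[str]:
    """Return the principles this use case fails; an empty list means no blockers."""
    checks = {
        "Be transparent about your use of AI": use_case.discloses_ai_use,
        "Verify the correctness of the output": use_case.output_will_be_verified,
        "Respect intellectual property": use_case.respects_ip,
        "Keep personal and confidential information out": use_case.excludes_confidential_data,
        "Take responsibility for how you use AI": use_case.owner_accepts_responsibility,
    }
    return [principle for principle, met in checks.items() if not met]

# Example: a draft text generated with an LLM that still contains customer data
print(unmet_principles(GenAIUseCase(True, True, True, False, True)))
# ['Keep personal and confidential information out']
```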

2. Ethics-by-design is a balancing act between business, technology and the customer

During the design and development of AI systems, different aspects regarding business, technology and the customer need to be balanced. This balancing act is central to ethics-by-design. For example, when users of your digital platform need to re-accept the terms and conditions of your organisation, you will need to address the following questions:

  • How will we communicate this?

  • What information will we show to users?

  • What information do users need in order to accept or decline the new terms and conditions?

  • What will happen if lots of users decline our terms and conditions?

  • etc.

Fully informing users about the changes you’ve made might be the most ethical approach, but doing it in a way that causes them concern could have a negative impact on your business. So even though it might seem obvious to do the ethical thing, in practice, ‘doing’ ethics is often a trade-off.

3. Tackle ethics in AI by asking the right questions, one step at a time

Legal and ethical requirements can feel daunting for design and development teams. For many tech companies, the threat of regulatory or ethical backlash hangs over the development of AI systems.

However, ethical reflection doesn’t have to be overwhelming. It can often be broken down into manageable steps by addressing relevant questions at each phase of a project. While some initiatives may require advanced ethical reflection and strict regulatory compliance, many data-driven or AI projects can begin with the guiding questions outlined below (see image).

Image: Ethical questions to ask yourself in every data-driven or AI project (unknown source)
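
Purely as an illustration of this phase-by-phase approach (the questions in the image above are not reproduced here), a project team could keep a simple register of guiding questions per phase; the phases and questions below are hypothetical examples in the same spirit:

```python
# Hypothetical sketch: ethical reflection broken into per-phase questions.
# The phases and questions are illustrative, not those from the image above.
ETHICS_QUESTIONS = {
    "scoping": [
        "Who could be harmed or excluded by this system?",
        "Do we actually need personal data to reach this goal?",
    ],
    "design": [
        "How do we make the system's use of AI transparent to users?",
        "Which values (privacy, safety, inclusion) drive our design trade-offs?",
    ],
    "deployment": [
        "Who verifies the correctness of the system's output?",
        "Who is accountable when something goes wrong?",
    ],
}

def open_questions(phase: str, answers: dict[str, str]) -> list[str]:
    """Return the questions for a phase that still lack an answer."""
    return [q for q in ETHICS_QUESTIONS.get(phase, []) if not answers.get(q)]

# Example: the design phase has one question answered, one still open
print(open_questions("design", {
    "How do we make the system's use of AI transparent to users?": "AI label in the UI",
}))
```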

4. There are tools available to support ethics-by-design

To take this holistic approach to ethics throughout every stage of the AI lifecycle, there are several tools you can use, including:

  • The new ALLY guide from the Knowledge Centre Data & Society and FARI - AI for the Common Good Institute, which helps all kinds of organisations ask the right questions for responsible AI innovation. The guide allows companies to create an actionable, organisation-wide governance strategy for responsible AI.

  • The AI compass developed by Jelle De Schrijver of the University of Antwerp, which helps raise awareness of ethical challenges. This tool lets you identify which ethical values (privacy, safety, sustainability, etc.) are more or less important to your organisation when making ethical decisions related to AI; a toy illustration of ranking such values follows this list.
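
As a toy illustration of prioritising values (not how the AI compass itself works), one could score each value and rank them; everything below is fictional:

```python
# Purely illustrative toy: scoring and ranking ethical values. This is NOT
# how the AI compass itself works; the values and scores here are fictional.
def prioritise(scores: dict[str, int], top: int = 3) -> list[str]:
    """Return the `top` values rated most important (highest score first)."""
    return sorted(scores, key=scores.get, reverse=True)[:top]

# Fictional self-assessment (1 = barely relevant, 5 = essential)
scores = {"privacy": 5, "safety": 4, "sustainability": 2,
          "transparency": 4, "inclusion": 3}
print(prioritise(scores))  # ['privacy', 'safety', 'transparency']
```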

Other ways to identify and deal with ethical challenges are to join learning networks where experiences are shared, to validate cases in a user experience lab, or to interact with end users and other stakeholders in workshops to learn from their experiences and ideas.

5. Assign clear responsibilities for ethics-by-design

In many cases, it is unclear who is ultimately responsible for ethical issues in the design process. Often, the development team is held accountable for ethics-by-design, as they seem best placed to assess the potential impact of ethical decisions. However, this responsibility would be better placed with the customer department, or even with a dedicated ethics officer or ethics board.


The ‘AI and ethics in practice’ learning community is an initiative from the Knowledge Centre Data & Society to bring together professionals working on ethical, legal and societal aspects of data-driven and AI systems. By doing so, professionals can learn from peers and share their experiences, knowledge, worries and best practices with each other. The community gathers four times per year. If you are interested in joining this community, please contact info@data-en-maatschappij.ai.

Image: Firmbee.com via Unsplash