05.06.2025

How to develop a strategy for Responsible AI?

You want to use AI responsibly, but how do you approach this in a thoughtful and structured way? That was the central question during the ‘AI and ethics in practice’ learning community session on Tuesday, 20 May 2025. Four experts shared their perspectives on the topic. The Knowledge Centre Data & Society also presented its new guide, ALLY, which enables any organisation to develop their own strategy. Below is a summary of the session’s key insights.

5 steps to put Responsible AI into practice - Duuk Baten & John Walker (SURF & Npuls)

Speakers Duuk Baten and John Walker emphasised that ethics should be treated as a continuous organisational practice. They described AI as a ‘system technology’ – a technology that affects all domains and activities, much like electricity. AI applications are therefore not merely tools; they fundamentally change how we perceive and engage with the world.

A key challenge with many digital systems today is assigning responsibility. This is often called the ‘problem of many hands’: because so many people contribute to a system, no one feels truly responsible for its shortcomings. As a result, end users are left with little choice but to accept those shortcomings, such as hallucinations in AI outputs.

According to the speakers, taking responsibility is a practical skill, something individuals and organisations must actively cultivate. In their discussion paper, they outline five recommended steps towards responsible AI innovation: 

  1. Assess your current state and set clear ambitions, for instance using the AI Ethics Maturity Framework.
  2. Create the right environment for ethical reflection, such as by using the Polder Perspectives XR card game from Npuls.
  3. Engage all stakeholders.
  4. Hold yourself accountable by making explicit commitments, as seen in the GPT-NL initiative in the Netherlands.
  5. Reflect and share your insights.

During the Q&A, the speakers focused on how to generate organisational buy-in for responsible AI innovation. One effective approach is to work with internal ambassadors – colleagues who champion responsible AI practices and inspire others to adopt them.

The City of Ghent’s AI Strategy - Karlien Jordens (City of Ghent)

Speaker Karlien Jordens shared how the City of Ghent translates its ambition to use AI for better and more efficient public services into a concrete AI strategy.

In 2024, the city took its first steps by forming a multidisciplinary core team, including IT and data experts, as well as staff from HR and communications.

An internal learning network was launched via an online platform, which also provided training, for example on writing with AI. This led to the development of internal guidelines for AI use within city services.

The city explored new use cases, and an internal ideation process led to the selection of three pilots:

  1. A chatbot to help answer citizen enquiries
  2. Social media monitoring for mentions of Ghent
  3. An AI assistant for road maintenance

For 2025, the city plans to further refine its AI vision, clarify mandates and roles regarding AI innovation, and define a clear process for experimenting with AI.

Responsible AI is a key priority. The city is monitoring developments related to the AI Act, exploring the potential role of a Certified AI Compliance Officer (CAICO) and investigating options for sustainable AI practices.

Training and awareness will play a crucial role: broad awareness campaigns are planned for city staff, and tailored upskilling will be offered to specific roles, focusing on digital, legal and ethical competences.

During the Q&A, Jordens reiterated Ghent’s commitment to following AI Act developments closely and taking a leading role in the responsible use of AI. As the selected use cases mature, the city will communicate transparently with its residents.

An AI Strategy as Risk Management - Jens Meijen (Umaniq)

Speaker Jens Meijen shared four key principles for building a successful strategy for responsible AI innovation, along with some practical takeaways:

  1. Set realistic strategic goals. Overly ambitious objectives risk turning the strategy into a symbolic exercise or burden. Extra steps for responsible AI should be integrated into existing processes. To secure buy-in – especially at the management level – it is helpful to frame the strategy in terms of risk management, which immediately clarifies its added value.
  2. Narrow expertise alone is no longer sufficient. Even interdisciplinary teams often lack certain AI-related skills, and this applies to technical, legal and end-user roles alike. Customised training is essential for all profiles.
  3. Transparency must be meaningful. If you want to respect people’s rights and engage stakeholders, the message you communicate must be clear and understandable to your target audience. Transparency is only effective when it is truly informative.
  4. Avoid fragmentation of responsibility. When tasks are scattered, accountability often disappears. Organisations should maintain a central overview of all AI usage and provide practical guidelines on acceptable AI use for staff.

Meijen concluded that investing in people is crucial for responsible AI innovation. Ethical reflection should not be mere window dressing; it should ensure that AI innovation delivers real value. This means organisations must have insight into how AI is used and must provide actionable guidelines for its responsible use.

During the Q&A, Meijen noted that the importance of responsible AI will likely continue to grow. What is now often seen as a ‘soft’ concern may soon become a hard requirement and a central part of organisational risk management.

Develop your own strategy for Responsible AI with ALLY

The Knowledge Centre Data & Society, in collaboration with FARI – AI for the Common Good Institute, has developed ALLY, a free guide to help organisations build their own strategy for responsible AI innovation.

You can use the guide online free of charge, or contact us to arrange a tailored programme in which we co-create your organisation’s strategy and a roadmap for achieving your goals.

Join the learning community 'AI and ethics in practice'

Four times a year, the ‘AI and ethics in practice’ learning community meets to exchange experiences around a current topic. You can find more information about upcoming meet-ups on the calendar page.