23.06.2025

Find the blind spots in your use of generative AI

Introduction

Are you planning to use generative AI (GenAI) in your organisation or for your work? Do you sometimes worry about the impact of GenAI on the quality of your output or, more broadly, on our society? The rapid rise of GenAI marks a new wave in artificial intelligence, promising increased efficiency, creative augmentation, and a boost in productivity across industries. With the democratisation of GenAI applications, anyone – from individuals to large enterprises – can now harness these technologies with minimal barriers to entry. However, this accessibility also raises pressing ethical concerns, as the widespread adoption of GenAI is already transforming sectors such as media, education and creative work, often sparking a moral panic about its potential consequences for employment, the environment, privacy and intellectual property.

With all this in mind, the Knowledge Centre Data & Society developed the GenAI Blind Spots card set. The card set helps professionals uncover and navigate the ethical risks unique to GenAI technologies, and it can serve as a starting point for drafting mitigation actions to address these issues within the organisation.

In this article, we introduce the potential ethical dilemmas surrounding GenAI and how the GenAI Blind Spots card set can be used to make these ethical considerations more tangible and actionable. We outline its purpose and structure, as well as how it facilitates critical conversations on responsible GenAI implementation and use. Additionally, we share the insights from developing the GenAI Blind Spots card set during several workshops with academics and practitioners, including the one that took place at the TH/NGS conference in December 2024.

The ethical issues of GenAI

The use of GenAI raises ethical challenges on different levels, ranging from how individuals interact with these systems to how their widespread use is reshaping societal structures. On a personal level, users often face issues of overreliance, lack of critical engagement, or unawareness of how GenAI content is generated. The use of GenAI tools may result in incomplete or incorrect output (so-called hallucinations), reproduce harmful biases, or lead to subtle deskilling when users outsource creative or cognitive tasks without reflection. In professional contexts like HR, for instance, participants in our workshops discussed how delegating tasks such as sourcing candidates to GenAI risks reducing opportunities for meaningful human interaction and ethical judgement. 

On an organisational level, concerns arise around the responsible implementation of GenAI systems, especially when tools are used without proper oversight (‘shadow use’) or when sensitive data is entered into commercial models. Intellectual property, data privacy and regulatory compliance can quickly be overlooked in the rush to innovate. Beyond these internal risks, GenAI also has far-reaching societal implications: from the spread of disinformation and the rise of synthetic media to ecological costs and the flooding of online spaces with low-quality, AI-generated content. These issues highlight the importance of anticipating not just what GenAI can do, but what it should do – and under what conditions it can be responsibly implemented and used.

Towards the GenAI Blind Spots

The GenAI Blind Spots tool is a physical card set of ethical blind spots – aspects that are often overlooked – related to the responsible implementation and use of GenAI in an organisation. The blind spots range from oversights regarding the responsible use of GenAI (e.g., ‘data quality’ and ‘deliberate abuse’) to the impact of GenAI on jobs and society (e.g., ‘skills and competences’, ‘copyright & IP issues’ and ‘employment and job satisfaction’).  

The blind spots were identified based on a combination of desk research (scientific articles on the ethical issues of GenAI) and the outcomes of several co-creation workshops. In these workshops, participants discussed – with the help of the existing AI Blindspots card set – the unintended risks and societal impact of GenAI. The insights from the desk research and the workshops informed the development of the GenAI Blind Spots tool. A draft of the card set was then assessed in a workshop with people who work in human resources and wanted to explore the use of GenAI in their working practice. They evaluated and refined the usability and readability of the cards.

We learned, for example, that the ethical blind spots of GenAI are similar to those of AI in general, but that there are elements specific to GenAI systems (e.g. hallucinations) that need to be considered. Because GenAI applications are so easy to use, it becomes all the more important that individual users have the right skills, knowledge and attitudes to use GenAI responsibly. As a result, the GenAI Blind Spots card set focuses on organisations that want to implement GenAI, rather than on developers of AI systems, who were the target group of the original AI Blindspots card set. Consequently, the wording and the amount of information on the cards were simplified to increase readability and usability. In addition, the participants of the co-creation workshops found the GenAI Blind Spots tool useful for creating awareness of the ethical issues of GenAI, but of limited use for formulating concrete action points.

Each card introduces a specific blind spot by way of a question, brief description and concrete example. The card also offers a number of reflective questions to help you identify the blind spot in your own GenAI project and consider the possible mitigation strategies. As such, the cards encourage users to reflect on the possible blind spots in their organisation and stimulate them to think of a strategy or action plan to deal with them. 

Examples of GenAI Blind Spots cards

Some examples of blind spots are ‘up-to-date’, ‘ecological sustainability’ and ‘employment and job satisfaction’. Below, we further detail what these specific blind spots entail.  

The ‘up-to-date’ blind spot looks at the accuracy of the outputs of GenAI in relation to our constantly evolving society. The GenAI output might diverge substantially from reality when the application is trained on outdated information. You can therefore never be certain that the output is correct.

Another example of a blind spot is the ‘ecological sustainability’ of GenAI. This refers to the immense amounts of energy, water and rare resources used not only to train the model, but also every time the model is used. The exact amount of resources used is often unknown, mainly because providers are not transparent about these numbers. 

Finally, another example of a GenAI blind spot is ‘employment and job satisfaction’. GenAI can play a big role in professional life: some jobs might disappear and others could become less meaningful, creative or human. What will be the impact of GenAI on the tasks and roles within your organisation? How will it influence the quality and meaning of the work of your colleagues and employees? This blind spot encourages reflection on how an organisation will cope with these changes.

Below you can find the example cards corresponding to these three blind spots.

Learnings from applying the card set

As mentioned, the GenAI Blind Spots card set has only recently been developed. Still, a few learnings can already be listed based on our experiences during the creation process of the tool. 

  • The tool is good for raising awareness in an interactive way, especially for audiences with limited prior exposure to GenAI. One of the key strengths of the Blind Spots tool lies in its ability to introduce complex ethical considerations around GenAI in a low-threshold, engaging way. For professionals who are new to GenAI or have not yet deeply reflected on its societal implications, the tool serves as a valuable entry point. Its card-based format allows users to engage with ethical concerns without requiring in-depth technical or philosophical background knowledge. This makes it especially suitable for awareness-raising sessions where the goal is to spark initial reflection and dialogue.
  • The tool works best as part of a broader methodology that includes concrete follow-up actions. While the GenAI Blind Spots tool effectively initiates critical reflection, its impact is significantly enhanced when used as part of a structured workshop that guides participants beyond the initial discussion. Without a clear path towards concrete action, there's a risk that the tool will have little impact. We have found that the tool is most effective when combined with clear follow-up steps such as action plans or roadmaps. Embedding the tool in this kind of methodology helps ensure that ethical considerations are not just acknowledged, but actually translated into practices or policies.
  • The tool sparks discussion and different perspectives – it is not a checklist, so allow enough time for conversation. Rather than providing definitive answers, the tool is designed to open up space for dialogue and critical thinking. This naturally leads to the exploration of different viewpoints, future scenarios, and possible dilemmas. However, this also means that meaningful engagement with the tool takes time. It should not be seen as a checklist to be completed quickly; instead, sessions should be structured to allow for deep discussion, reflection, and collective sense-making.

Get started with the GenAI Blind Spots

The GenAI Blind Spots card set can be used in different ways: as a conversation starter, as a tool to learn about and uncover ethical issues, or in a workshop to define concrete steps for identifying and addressing them. We recommend using the cards once you already have a clear GenAI project in mind and want to explore the possible pitfalls while there is still room to make adjustments. In our experience, you get the most out of the workshop with a group of four to a maximum of eight people. Ideally, this includes the person(s) responsible for the implementation, (a representative of) key users, and representatives from any departments that may be impacted by the GenAI application.

A typical GenAI Blind Spots workshop takes about 2 hours. Start by selecting the cards that are most relevant to your project. Then, discuss with the group how each card applies and what actions might be needed to tackle the risks or challenges it raises. Each card includes guiding questions to help structure the conversation. Make sure you capture your decisions, insights, and action points along the way. The card set includes a detailed facilitation guide that offers a clear workshop structure and suggested timing. 

Download the full card set and start uncovering the blind spots in your own GenAI project. 

Conclusion

GenAI brings specific ethical challenges relating to, amongst others, misinformation, deskilling, shadow use and intellectual property. The GenAI Blind Spots card set invites you to take a step back and critically assess the ethical challenges of GenAI in your organisation. By sparking meaningful conversations and surfacing overlooked risks, the cards help teams build a shared understanding and lay the foundation for more responsible, future-proof GenAI use.

About this publication

This article was originally published in the RIOT2025 report on Generative Things, released by Stichting ThingsCon Amsterdam in June 2025. The report was curated, edited, and designed by Andrea Krajewski and Iskander Smit.

This article and the RIOT2025 report are licensed under the Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA) license. Minor textual edits have been made to adapt the content for this blog format. The design of the example cards has been updated to the latest version. All rights remain with the authors.

The cover image of this blog post is taken from the cover of the RIOT2025 report.

Footnotes

The GenAI Blind Spots card set is an adaptation of the AI Blindspots cards by the Knowledge Centre Data & Society, which in turn is based on the AI Blindspot cards of Ania Calderon, Dan Taber, Hong Qu, and Jeff Wen, developed during the Berkman Klein Center and MIT Media Lab’s 2019 Assembly program. 

For a good overview of all ethical issues relating to GenAI, we recommend the following publication:  Al-kfairy, M., Mustafa, D., Kshetri, N., Insiew, M., Alfandi, O. (2024). Ethical Challenges and Solutions of Generative AI: An Interdisciplinary Perspective. Informatics 2024, 11, 58. https://doi.org/10.3390/informatics11030058.

Authors

Jonne van Belle

Pieter Duysburgh

Willemien Laenens