
AI Blind Spots in (health)care

Introduction

This card set has been designed to get you thinking about how you can use AI responsibly and inclusively in (health)care. The cards will help you uncover your blind spots: ethical pitfalls such as bias in the data. By identifying these early on, you increase the chance of AI having a truly positive impact on the standard of care provided.

This card set focuses on AI applications that support (health)care processes, such as diagnostics. The Knowledge Centre Data & Society also has a separate card set on generative AI; use that set if you’d also like to consider the risks and challenges of GenAI.

What does the card set contain?

Each card describes a single blind spot. For each card, you’ll get a key question, the category or categories to which the blind spot belongs, a short description, a concrete example and some reflective questions. Together, these cards help you identify and address potential risks of AI applications in (health)care.

If you’d like to develop further action points based on this exercise, we recommend using the Guidance Ethics Approach. This method focuses on the ‘how’ question: how can we develop and implement an AI application in a responsible and inclusive way? This method was developed by ECP - Platform for the Information Society.

The AI Blind Spots cards

  • Application: Things to watch out for when it comes to the design, training and development of AI applications.
  • Use: Considerations when using AI applications.
  • Organisation & Society: The broader impact of AI on (health)care recipients and organisations, as well as on society as a whole.
  • Outcomes: Potential risks related to the analyses, forecasts, recommendations, etc. provided by an AI application.

Getting started

You can use the card set for different purposes:

  • to raise awareness of the things to watch out for when using AI;
  • as a guide for setting agreements on the use of AI (similar to a code of conduct);
  • to identify risks during the development and implementation of an AI application;
  • as a starting point for conversations about the potential impact of AI within your organisation.

The card set has been developed for caregivers, innovation managers, AI professionals and researchers. You can use it to reflect on blind spots and ethical dilemmas within your team. We also encourage you to involve (health)care recipients, informal carers, policymakers or administrators as this will ensure you consider a range of different, interdisciplinary perspectives. This is especially important if you want to develop a code of conduct or are thinking about using AI applications within your organisation.

About

‘AI Blind Spots in (health)care’ is a tool developed by the Knowledge Centre Data & Society (KCDS). The content of this card set is based on our own insights, our collaborations with imec-SMIT (Vrije Universiteit Brussel) and LiCalab (Thomas More University of Applied Sciences), and takeaways from workshops on AI use in (health)care that the KCDS facilitated with experts and professionals.

This tool is based on the AI Blindspot card set by Ania Calderon, Dan Taber, Hong Qu and Jeff Wen, developed during the Berkman Klein Center and MIT Media Lab’s 2019 Assembly programme. The card set was created with the help of ChatGPT 4o to improve phrasing and generate examples of blind spots.

The card set is available under a Creative Commons CC BY-NC-ND 4.0 license. 

Image: Elise Racine / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

Downloads

Ready to get started yourself? Download the materials here.

AI Blind Spots in (health)care (pdf, 459KB)
