AI Blindspots healthcare: general blindspots

The AI Blindspots healthcare cards are divided into two groups: the general blindspots and the specific blindspots. On this page, you can find the general AI Blindspots for healthcare, which can be discussed with a diverse group of people active in healthcare.

Each AI Blindspots card contains:

  • A set of questions to help you uncover this blindspot;
  • A use case that illustrates the importance of considering the blindspot;
  • A number of tools and tricks to help you detect and mitigate the blindspot.
1

Suitability

An AI system may support the care of a patient, but is it the best solution for your specific goal? By examining other options, you will get a better idea of the possible solutions, your preferences, and which one is worth investing in.

HAVE YOU CONSIDERED?

  1. Will the quality of care for the patient improve by using the AI system?
  2. Will the development and implementation of an AI system result in the greatest benefits in comparison to other solutions?
  3. Do these benefits outweigh the risks and changes for your organisation and for the patient?
  4. Will the system interfere with (1) the norms and standards of your organisation, (2) the services you offer, or (3) your intentions as a care provider?

HOW NOT TO

Deep learning is very much in vogue these days, so the board wants to invest in a deep learning application. This way, the hospital will receive long-awaited attention and recognition.

TOOLS & TRICKS

  • Q3: Cost-benefit analysis
  • Q3 & Q4: SWOT analysis
  • Q4: Discuss with stakeholders whether the AI system is compatible with the mission statement of your organisation
2

Explainability & transparency

When using an AI system in a healthcare context, the system will likely collect health data, which are personal and sensitive data. Clear communication about the benefits of the system, the data collection process, and the decision-making process is crucial to gaining trust from the envisaged users and others who will encounter the system.

HAVE YOU CONSIDERED?

  1. How will you communicate the benefits of the AI system to the envisaged users and others who will encounter the AI system?
  2. How will you explain the data collection process to them?
  3. How will you inform them about the decision-making process (incl. the underlying logic)?
  4. What difficulties can you encounter when communicating about the AI system to your target groups, and how will you cope with them?

HOW NOT TO

Although you specifically asked for it, a sales representative is unable to give details about the datasets used, other than that they are ‘massive’ and ‘free of all sorts of biases’. The model they use is proprietary and a complete black box. Some measures of uncertainty are reported along with the output, though.

TOOLS & TRICKS

  • Q1, Q2 & Q3: Use clear and accessible language (no technical terms); sit together with the development team of the system to make sure they can guarantee certain requirements with regard to transparency
  • Q2 & Q3: AI Explainability 360 (see also the sketch below)
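
One way to approach Q3 in practice: a minimal sketch of surfacing which inputs drive a model's output, as raw material for a plain-language explanation of its underlying logic. It uses synthetic data and scikit-learn's generic permutation importance rather than AI Explainability 360, and the clinical feature names are hypothetical.

```python
# Minimal sketch (synthetic data; feature names are hypothetical stand-ins
# for clinical variables): which inputs does the model rely on?
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

features = ["age", "blood_pressure", "bmi", "glucose"]
X, y = make_classification(n_samples=500, n_features=4, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: shuffle one input at a time and measure how much
# the score drops. A large drop means the model leans heavily on that input.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

A ranking like this is not an explanation by itself, but it tells you which variables your plain-language communication has to cover.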
3

Trust

The implementation of an AI system might lead to resistance as not everyone is acquainted with (digital) technologies. For some, the implementation of an AI system might feel exciting and new, while others might feel stress and distrust.

HAVE YOU CONSIDERED?

  1. How will you gain trust from your envisaged users and others who may encounter the AI system?
  2. Have you thought of an awareness-raising campaign for affected stakeholders?
  3. How will you make sure that you reach not only the ‘AI believers’ but also people who are not yet convinced of the advantages of AI?

HOW NOT TO

Only after the AI system (which automatically generates reports from consultations) was deployed did management bother to brief the staff and inform the patients about the system and its promises.

TOOLS & TRICKS

  • Interviews with possible users and others to determine their psychological barriers to using the AI system. What can you do to lower these barriers?
  • Clear communication (see card on explainability & transparency)
  • Q2 & Q3: Develop an awareness-raising campaign
  • Q1 & Q3: Trustable Technology Mark
4

Willingness

The success of an AI application partly depends on the effort and willingness of the users (i.e. doctors, nursing staff, patients and other possible users) to learn, adopt and use the application for their (daily) needs and in their medical practice. The willingness to use an AI system goes hand in hand with trust in it: when a person does not trust the system, why would he/she use it? (see also card on trust)

HAVE YOU CONSIDERED?

  1. Does the AI system address a true need according to the stakeholders?
  2. Are the stakeholders willing to adopt and use the AI system or do they feel reluctant?
  3. How will you reduce the reluctance of your stakeholders towards the AI system?

HOW NOT TO

It seems the new AI-based feature of the electronic health record software has barely been used by the staff. It turns out people don’t like to change the way they work, as they tend to be less productive while getting to know the new possibilities.

TOOLS & TRICKS

  • Q1 & Q2: Interviews with possible stakeholders
  • Q2: Find out what reservations people have before developing the AI application. By listening to them, you can make better-informed decisions and attain a higher adoption rate.
  • Q3: Clearly communicate the intended benefits of the AI system and the actions you will take to lower psychological barriers (see card on explainability and transparency, and on trust)
5

Diversity

The people (i.e. patients, nursing staff, ...) confronted with the AI technology form a diverse group. They all have different social and demographic backgrounds and different levels of digital maturity (see also card on (digital) inequality). This is the case not only at a personal level but also at an organisational level: care institutions differ widely in digital maturity and experience with IT.

HAVE YOU CONSIDERED?

  1. Can the AI system be used and interpreted by a diverse set of possible users?
  2. Does your institution have the necessary experience and capabilities to use an AI system in its daily practice?

HOW NOT TO

The company promised that the AI system had endless possibilities. What they did not say is that the user guide consists of 400 pages of technical writing, full of equations and code.

TOOLS & TRICKS

  • Q1: Create personas of possible users: what challenges are they confronted with when using the AI system?
  • Q1 & Q2: Sit together with a diverse team (i.e. IT department, management, doctors and nursing staff) to discuss the digital readiness of your organisation.
  • Q2: Set up a feasibility study together with the R&D department of your organisation; AI-Readiness Assessment

6

(Digital) Inequality

The implementation of an AI system might have a negative or unintended impact on the envisaged group of users, such as increasing existing inequalities. Not all envisaged users will have the appropriate level of digital skills and digital health literacy to use and interpret the AI system.

HAVE YOU CONSIDERED?

  1. Are there alternatives in place so persons who are not able to make use of or interpret the AI system are not left out?
  2. Has it been ruled out that the implementation of the AI system will lead to more stigmatisation and discrimination of certain groups?
  3. When implementing the AI system, will it enlarge or decrease existing inequalities because of:
    1. The availability of patient information;
    2. The differences in treatment between hospitals;
    3. The level of competition between care institutions;
    4. The effect of digitalisation on nursing staff; etc.

HOW NOT TO

The information desk has been completely replaced by virtual assistants. People only have to scan the QR code they received by email after making their appointment online. From then on, the assistant talks them through everything they need to do.

TOOLS & TRICKS

7

Trade-off

(Personal) data must be processed with respect for the privacy of the data subjects (see also card on data governance & privacy). But sometimes a trade-off between personal privacy and the public interest must be made: think, for example, of the contact-tracing apps during the COVID-19 crisis. Even though the amount of personal data that needed to be shared was strictly limited, people still shared some personal data in order to trace coronavirus infections. Furthermore, the social interests/costs of implementing an AI system sometimes need to be weighed against the economic interests of the commercial partners involved in this process.

HAVE YOU CONSIDERED?

  1. What are the benefits and risks of the AI system at (1) an individual level and (2) a collective level?
  2. Are personal data really required by the AI system in order to serve the public interest?
  3. How long will the personal data be stored?
  4. Are the data subjects adequately informed about the purposes for which their personal data will be processed?
  5. Which interests do the commercial partners have in developing/implementing this AI system? How do these relate to the social interest of providing good health care for all people?

HOW NOT TO

A new disease is spreading fast and killing many people. Doctors have noticed that the disease pattern is different for children with Down syndrome between the ages of 4 and 6. They want to use an AI system to detect which characteristics of these children are strongly correlated with this different disease pattern, because it might help to find a cure for the disease. However, the data of these children can only be pseudonymised, not anonymised (see the sketch below), and there is no time to ask for consent.

TOOLS & TRICKS

  • Q1-Q3: A proportionality exercise
  • Q4: Clearly communicate the purposes of data processing
  • Q5: Make an explicit agreement with all the involved partners about the goals of the AI system
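
To make the pseudonymised-versus-anonymised distinction in the example above concrete, here is a minimal sketch; the record, field names and salted-hash scheme are hypothetical. The point is that pseudonymisation keeps the link to the person reversible for whoever holds the key, so the data remain personal data.

```python
# Minimal sketch (hypothetical record and fields): pseudonymisation replaces
# a direct identifier with a token, but the mapping can be rebuilt.
import hashlib
import secrets

SALT = secrets.token_hex(16)  # held by the data controller

def pseudonymise(patient_id: str) -> str:
    # The same patient always maps to the same token, so records stay
    # linkable, and whoever holds the salt can re-identify them.
    # Anonymisation would remove this link entirely.
    return hashlib.sha256((SALT + patient_id).encode()).hexdigest()[:16]

record = {"patient_id": "BE-123456", "age": 5, "disease_pattern": "atypical"}
pseudonymous = {**record, "patient_id": pseudonymise(record["patient_id"])}
print(pseudonymous)
```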

8

Consent

A patient in need of care may not be fully convinced of the advantages of the AI system that will be used during his/her care process. Be aware that a person (patient, healthcare staff, …) might feel that he/she has no other option than to give consent in order to receive care or to continue his/her work. The power structure between patient and caregiver, or between management and staff, can also play a role in the freedom of consent. And what about patients who are physically or mentally unable to give their consent?

HAVE YOU CONSIDERED?

  1. What does ‘giving consent’ entail? Is a patient giving consent for the purpose of (1) optimising his/her diagnosis or treatment, and/or (2) further processing of his/her personal data (e.g. training other AI systems)?
  2. How will you explain to the affected stakeholders what ‘giving consent’ entails? Are they sufficiently informed about what they are consenting to?
  3. Can stakeholders withdraw their consent? (see the sketch at the end of this card)
  4. Will you review the way(s) in which you are asking for consent?
  5. When collecting data on grounds other than consent, the collection process may be legal but not necessarily ethical. Have you considered not collecting, storing and analysing data, even though you may be legally allowed to do so?

HOW NOT TO

As better-performing AI systems are in everyone’s interest, the hospital simply assumes consent from the patients for the use of their data to train these AI systems.

TOOLS & TRICKS
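
A minimal sketch for Q1 and Q3, assuming a hypothetical in-memory consent registry (field names and purposes are illustrative): consent is recorded per purpose and can be withdrawn, and consent for treatment is never silently reused for model training.

```python
# Minimal sketch of purpose-specific, withdrawable consent records.
# The registry, field names and purposes are hypothetical illustrations.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    patient_id: str
    purpose: str                      # e.g. "treatment" vs "model_training"
    granted_at: datetime
    withdrawn_at: datetime | None = None

    @property
    def active(self) -> bool:
        return self.withdrawn_at is None

registry: list[ConsentRecord] = []

def grant(patient_id: str, purpose: str) -> None:
    registry.append(ConsentRecord(patient_id, purpose,
                                  datetime.now(timezone.utc)))

def withdraw(patient_id: str, purpose: str) -> None:
    # Withdrawal is recorded rather than deleted, keeping an audit trail.
    for rec in registry:
        if rec.patient_id == patient_id and rec.purpose == purpose and rec.active:
            rec.withdrawn_at = datetime.now(timezone.utc)

def may_use(patient_id: str, purpose: str) -> bool:
    # Consent to treatment does NOT imply consent to model training (Q1).
    return any(r.patient_id == patient_id and r.purpose == purpose and r.active
               for r in registry)

grant("patient-1", "treatment")
print(may_use("patient-1", "model_training"))  # False: never consented
```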

9

Datafication of health data

Not all information can be quantified. One can, for example, think of mental health information or information regarding a person's perception of pain. This information is extremely interesting, but harder to measure, datafy and standardise. When trying to datafy this type of information, a choice has to be made about which parameters will/can be measured. Some parameters may be measured more easily than others because they are easier to datafy.

HAVE YOU CONSIDERED?

  1. Are there parameters on which the AI system is based that are more difficult to measure and to datafy than others? If so, how will you deal with this?
  2. Which proxies can you use to gather data about the parameters that cannot be measured directly? What are the limits/disadvantages of these proxies?

HOW NOT TO

In order to assess the risk of burn-out, a company's HR department may decide to analyse employees’ facial micro-expressions as they enter the building. However, this does not take into account differences between individuals’ facial structures. As a consequence, there is a risk that certain facial structures are arbitrarily interpreted as an indication of high risk when they are actually not.

TOOLS & TRICKS

  • A group discussion with different stakeholders (IT department, management, care professionals) on the calculability of parameters.
10

Accuracy & quality

It is important that an AI system is fed with accurate, high-quality data, so that its results and outcomes can be interpreted in a correct and adequate way. This is especially the case when the system is used to improve care or well-being, for example when examining which type of treatment will be the most appropriate for a patient.

HAVE YOU CONSIDERED?

  1. Are the data upon which the AI system is based accurate and representative of the time, place and population in which the AI system will be used?
  2. Are the data carefully entered/integrated?
  3. Is there a test phase to examine whether the AI system predicts the right symptoms/treatment?
    1. Are the appropriate performance estimates used for this test phase?
    2. Are the performance estimates unbiased?
  4. Are there control mechanisms to monitor the analysis and the outcomes of the AI system?
  5. Is there a strategy to detect and counteract users who give false information on purpose?

HOW NOT TO

A hospital uses an AI system to analyse the course of fibromyalgia patients’ pain complaints. It collects data via an app in which patients have to give an estimation of their pain experience three times a day. However, these estimations vary widely from patient to patient, and patients also often forget to submit the data.

TOOLS & TRICKS

  • Q1-Q3: Involve domain experts
  • Q2-Q5: Involve IT staff, check the system on statistical accuracy, Data Collection Bias Assessment, intermediate/prototype testing (for Q3, see also the sketch below)
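
For the test phase in Q3, a minimal sketch of an unbiased performance estimate on held-out data, using synthetic stand-in data and generic scikit-learn (all names are illustrative, not the card set's prescribed method):

```python
# Minimal sketch (assumed synthetic data): estimating a model's performance
# on a held-out test set, so the estimate is not biased by training data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Patients in the test set are never seen during training; evaluating on
# training data would overestimate performance.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = model.predict(X_test)

# For medical use, report class-specific rates, not just overall accuracy:
# sensitivity = how many ill patients are found; specificity = how many
# healthy patients are correctly cleared.
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print(f"sensitivity: {tp / (tp + fn):.2f}")
print(f"specificity: {tn / (tn + fp):.2f}")
```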
11

Impact on work(floor)

When implementing an AI system, the impact of the system on the daily practices and work of e.g. healthcare professionals must be assessed and measured.

HAVE YOU CONSIDERED?

  1. How will the AI system influence the current workflow and, more generally, the workfloor?
    1. Will it create additional workload for the envisioned stakeholders?
    2. Will it cause feelings of oppression (e.g. healthcare professionals may feel controlled by the system as it collects metrics about patients and their treatments)?
    3. Will it affect or compromise the human aspect of providing care?
  2. Are stakeholders able to give feedback with regard to the implementation of the AI system?

HOW NOT TO

A new app ensures that when a healthcare worker measures blood pressure, the patient’s blood pressure data are automatically integrated into his/her electronic health record. The app still has some bugs and therefore gives many warnings about unidentifiable values. Healthcare workers spend a lot of time analysing the origins of these warnings and entering the correct values. This takes more time than entering the input manually did.

TOOLS & TRICKS

  • Frequent inquiry (interviews/surveys) with stakeholders on, among other things, the perceived impact of the system on their daily practices and work
12

Undesirable impact

Which unintentional consequences and perverse effects could be caused by the AI system? One can, for example, think of (1) the stress and worry that users may feel because of false positives, and the unwarranted sense of joy and reassurance due to false negatives, or (2) a health application that constantly reminds its users that they are not healthy.

HAVE YOU CONSIDERED?

  1. What is the perceived negative impact of the AI system on (the experiences of) patients?
  2. What are possible unintentional negative consequences caused by the AI system (e.g. unnecessarily alarming patients because they are able to monitor their data)?

HOW NOT TO

An AI system that delineates the zones in the brain that need to be irradiated in order to treat brain tumours is widely valued for its accuracy. In the press, hospitals received positive feedback for making strong advances in the treatment of this type of cancer with the help of the AI system. After some years, the radiologists notice that patients believe that nothing can go wrong with the AI system, while there are still cases in which it is not able to accurately delineate the zone and treat the tumour.

TOOLS & TRICKS

  • Test the AI system with a small (but representative) group of patients to examine the (positive and negative) impact of the system
  • Clearly inform patients about (1) the accuracy of the AI system and (2) the meaning of the AI system’s results (see also cards on accuracy & quality and on explainability & transparency)
13

Ownership

Who should be qualified as the owner of health data? This question is the subject of a lively discussion on whether patients can remain the owners of their data and decide to whom they grant access.

HAVE YOU CONSIDERED?

  1. Who is the current owner of a patient’s health data from a legal point of view?
  2. Is the ownership affected when part of these data is stored on a health platform provided by a certain company?
  3. Is there a possibility, from a legal point of view, for patients to become the owners of their own data?
  4. If so, what actions must be taken to (1) guarantee secure ownership by patients and (2) inform patients about their role as owners and their associated responsibilities?

HOW NOT TO

During a hospital experiment, heart patients were given a wearable that collected medical data. The patients were the owners of the data. The hospital collected data about how patients, as owners, acted upon their wearable data (e.g. how many times they checked it). However, these data were stored on (and owned by) a company’s platform, and they also revealed medical data of the patients.

TOOLS & TRICKS

  • Q1 & Q2: Consult the legal department
  • Q3 & Q4: Meet with the legal and the IT department
  • Q4: Discuss with the communication and the legal department how patients will be informed about their role as owner and how this relates to their privacy

Downloads

Below, you can find 3 downloads:

  • A PDF of the AI Blindspot healthcare card set.
  • A PDF with 2 templates for using the AI Blindspots healthcare card set. With the first template, you start from an ethical dilemma and use the AI Blindspots healthcare card set (workshop methods 1 and 2). You can use the second template for the reversed brainstorm with the AI Blindspots healthcare card set (workshop method 4). A filled-in example of the templates is provided. Visit the main page of the AI Blindspots healthcare card set for more information about the methods for using the card set.
  • A PDF with a detailed guide for organising an in-house workshop on identifying AI Blindspots in a healthcare context.

The card set was adapted from ‘AI Blindspot’ by Ania Calderon, Dan Taber, Hong Qu and Jeff Wen, who developed the card set during the Berkman Klein Center and MIT Media Lab’s 2019 Assembly Program. The AI Blindspots card set is available under a CC BY 4.0 licence.

The Knowledge Centre Data & Society (Kenniscentrum Data & Maatschappij) adapted the original card set to the Flemish context, in order to support the development of trustworthy AI in Flanders. You can find that card set here.

At a later stage, the Knowledge Centre Data & Society collaborated with VIVES Hogeschool to create a version of the AI Blindspots card set specifically for the healthcare context. VIVES Hogeschool worked on this Blindspots card set within the project ‘AI in ziekenhuizen: ethische en juridische aspecten’ (‘AI in hospitals: ethical and legal aspects’), funded by ESF Vlaanderen.
