
AI Blindspots healthcare: specific blindspots

The AI Blindspots healthcare cards are divided into two groups: the general blindspots and the specific blindspots. On this page, you can find the specific AI Blindspots for healthcare, which are more relevant to specific job profiles than to healthcare providers in general.

Each AI Blindspots card contains:

  • A set of questions to help you uncover this blindspot;
  • A use case that illustrates the importance of considering the blindspot;
  • A number of tools and tricks to help you detect and mitigate the blindspot.
1. Broad discussion

The implementation of an AI system in your organisation will bring several opportunities, but also risks and challenges. To minimise the risks and challenges for all, every relevant stakeholder (this includes patients) should be involved in the discussion regarding the purpose of the AI system and its functionalities. The more diverse these stakeholders are, the better you will be able to limit and counter these risks and challenges.

Envisioned job profile(s): management

HAVE YOU CONSIDERED?

  1. Are all relevant stakeholders involved in the discussion regarding (1) the purpose of the AI system, and (2) the necessary functionalities of the AI system?

HOW NOT TO

Because the hospital wants to invest in AI, an AI working group is assembled. The hospital director is proud that this working group reflects the different departments of the hospital. However, only managers of these different departments are involved in the working group.

TOOLS & TRICKS

  • Take a sample of your patient population and involve them in the planning process.
  • Involve a diverse team of different job profiles to start a first discussion.
  • Ask them who else in the organisation should be involved in the discussion.
2. Education & sensibilisation

When implementing an AI system, the involved stakeholders may not be familiar with the technology. It is important to not only sensibilise the stakeholders about the goal, the benefits and the impact of the technology but also educate and guide them in how the AI system can be used and/or interpreted.

Envisioned job profile(s): management

HAVE YOU CONSIDERED?

  1. Is there a learning trajectory for stakeholders/users on the topic of AI (general information)?
  2. Are stakeholders/users informed about the goal, the benefits and the impact of the AI system?
  3. Is there a learning trajectory for stakeholders/users that guides them in using and interpreting an AI system?

HOW NOT TO

An innovative AI system in radiology is able to optimise the quality of CT scans that were taken with a small radiation dose. However, it turns out that only those radiologists who are convinced of the benefits of innovation and AI are using the system. The other radiologists were never informed about the benefits of the system or how to work with it.

TOOLS & TRICKS

  • Create a sensibilisation strategy together with the communication department.
  • Create a learning trajectory with clear guidelines on (1) AI in general, and (2) how to use and interpret the AI system.
3. Large selection

There are many AI systems on the market, many of which are not transparent about their strengths and weaknesses. This makes it harder (1) to examine which AI system will be the most effective for a specific situation and (2) to estimate which third parties (i.e. the developer and supplier of the AI technology) are trustworthy.

Envisioned job profile(s): management

HAVE YOU CONSIDERED?

  1. Have you compared the different developers and suppliers of the AI system? If so, which one did you feel most comfortable with?
  2. Are there specific risks related to working with the developer/supplier you want to choose?
  3. Did you estimate the return on investment of working with the developer/supplier you want to choose?

HOW NOT TO

A hospital wants to invest in AI but has little expertise at hand. It decides to work with an American company that develops medical AI apps, but it turns out that the data collected in the hospital to feed the AI app is stored on the company’s insufficiently secure platform.

TOOLS & TRICKS

  • Perform a SWOT analysis of the different developers and suppliers.
  • Perform a cost-benefit analysis.
4. Changing purpose

The data controller, i.e. the person or company that determines the purpose and means of personal data processing, has the responsibility to set the correct boundaries for personal data processing. What if the processing of personal data was approved for a specific purpose, but this purpose has changed after a certain period of time?

Envisioned job profile(s): management

HAVE YOU CONSIDERED?

  1. Do you regularly examine if the AI system is still the best and most efficient solution for the intended purpose?
  2. If the purpose has changed, did you reset the boundaries for personal data processing?

HOW NOT TO

A hospital is using, with the consent of the patients, an AI system that can predict a patient’s risk for several diseases, based on his/her medical data in the electronic health record. The system was trained by another company using a large dataset, but now the hospital wants to use its own electronic health record data to train the AI system. However, it does not inform the patients about this.

TOOLS & TRICKS

  • Q1: Look at the cost-benefit analysis and the SWOT analysis that you have prepared earlier and examine if it still fits (see card on suitability)
  • Q2: Reset the boundaries together with the DPO of the organisation
5. Availability of data

The data on which you want to build your AI system may not be available or easily accessible to you, or may not be allowed to be shared with other actors.

Envisioned job profile(s): IT

HAVE YOU CONSIDERED?

  1. Are the data you need for your AI system available and accessible to you?
  2. Are the data you want already digitised?
  3. If not, will the effort and cost to digitise the data outweigh the benefits of being able to access the data?

HOW NOT TO

Strong advances could be made in treating multiple sclerosis (MS) if an AI system could analyse the data of MS patients all over Europe. Unfortunately, the data are in many cases locked away, unfindable or unreadable by the system.

TOOLS & TRICKS

  • Follow the FAIR data principles when you are collecting/storing data.
  • Q3: Discuss the ROI of the digitisation of data together with the management
6. Contextual factors

The implementation of an AI system is preceded by a great deal of research and development in order to maximise the functioning of the system. But what if the predetermined context in which the system is to function changes and no longer corresponds to what was analysed? As a result, the system might not work as accurately as estimated.

Envisioned job profile(s): IT

HAVE YOU CONSIDERED?

  1. Do you regularly compare the training and testing data with the current situation?
  2. Do the input data and predicted values align with the expectations?
  3. If needed, do you have a plan to remedy or to phase out the use of the AI system?

HOW NOT TO

A smartphone app was developed that allows patients to test whether they are infected with the coronavirus by coughing into their smartphone. The app was trained with a large set of recordings of coughs. The app is very functional and a lot of people use it. However, when the coronavirus mutates, the app is not retrained with new data, leading the app to miss a significant number of positive cases.

TOOLS & TRICKS
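A minimal sketch of the kind of check question 1 points at: comparing the distribution of an input feature today against the training data, and raising an alert when they diverge. The feature and all values below are made up for illustration; a real system would monitor many features with proper statistical tests.

```python
# Illustrative drift check: flag when the mean of an input feature
# drifts away from what the model saw during training.
# All numbers are made-up example values, not real clinical data.
from statistics import mean, stdev

def drift_alert(training_values, current_values, threshold=2.0):
    """Return True when the current mean deviates from the training
    mean by more than `threshold` training standard deviations."""
    mu, sigma = mean(training_values), stdev(training_values)
    return abs(mean(current_values) - mu) > threshold * sigma

# Hypothetical example: average cough duration (seconds) in recordings
training = [1.1, 0.9, 1.0, 1.2, 0.8, 1.0, 1.1]
current = [2.6, 2.4, 2.8, 2.5, 2.7]   # the distribution has shifted
print(drift_alert(training, current))  # → True
```

Such a check only remedies the blindspot if it is paired with a plan for what to do when the alert fires (retrain, recalibrate or phase out the system).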

7. Standardisation

Standardisation of data is important when the data are used for statistical purposes. An ontological framework in which the characteristics of the data are defined is essential to ensure unity in the standardisation process. Standardisation will also help the sharing of data between stakeholders.

Envisioned job profile(s): management & IT

HAVE YOU CONSIDERED?

  1. Is there an ontological framework to ensure the data are interpreted in an identical way?
  2. Is it possible to share the data with other stakeholders/other parties? What must be done to make this possible?

HOW NOT TO

All Flemish hospitals want to team up to train an AI prediction tool on the basis of all electronic health records in Flanders. However, because the electronic health record has many different formats depending on the hospital and region, this plan is abandoned.

TOOLS & TRICKS

  • Do preliminary research in which you examine how other organisations working with the same kind of data structure their data.
  • Try to set up an agreement about the standardisation of data.
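An agreement on standardisation can be illustrated with a minimal sketch: records from two hypothetical hospitals, each with its own field names, are mapped onto one shared schema. All field names are invented for illustration; a real ontological framework would also pin down units, value ranges and semantics, not just names.

```python
# Illustrative only: map heterogeneous record formats onto one
# agreed shared schema. Source and field names are hypothetical.
FIELD_MAP = {
    "hospital_a": {"pat_dob": "birth_date", "sys_bp": "systolic_bp"},
    "hospital_b": {"dateOfBirth": "birth_date", "bloodPressureSys": "systolic_bp"},
}

def standardise(record, source):
    """Rename source-specific fields to the agreed shared names."""
    mapping = FIELD_MAP[source]
    return {mapping.get(key, key): value for key, value in record.items()}

a = standardise({"pat_dob": "1970-01-01", "sys_bp": 120}, "hospital_a")
b = standardise({"dateOfBirth": "1970-01-01", "bloodPressureSys": 120}, "hospital_b")
print(a == b)  # → True: both records now share one format
```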
8. Data minimisation

According to the General Data Protection Regulation (GDPR), you are not allowed to collect more (personal) data than needed for the functioning of your AI system. More data also means more data analysis, and more costs and effort to process and analyse the data.

Envisioned job profile(s): IT & management

HAVE YOU CONSIDERED?

  1. What data should be collected for the proper functioning of the AI system?
  2. How long should the data be stored for the proper functioning of the AI system?
  3. Have data been collected that are not strictly necessary for the functioning of the system?
  4. Can you comply with the data subject's rights under the GDPR?

HOW NOT TO

The hospital has set up an experiment for a group of patients with a specific disease and uses an app, developed by a company, that gathers data about a standard set of parameters. Unfortunately, this set includes data that are unnecessary for analysing the disease type of this group of patients.

TOOLS & TRICKS

  • Q1 & Q2: Interviews with domain experts
  • Q3: If necessary, set up a process wherein you decrease and prevent the collection of more personal data than needed for the functioning of the system
  • Q4: Sit together with the legal department

9. Security

Because of the sensitivity of healthcare data, an adequate security policy for the collection, storage and sharing of data with other parties is required.

Envisioned job profile(s): IT & management

HAVE YOU CONSIDERED?

  1. Do you have a data security policy?
  2. Did you assess who can access the data and who cannot, and more importantly why actors can or cannot access the data?
  3. Do you register who has accessed the data, at what time and for what purpose?
  4. Is there a procedure to report violations?
  5. Did you test whether the AI system can be hacked? Do you test the AI system periodically?

HOW NOT TO

Two doctors are experimenting with a new AI system and store the medical images that the AI system analyses on a server that is not fully secure. By doing so, they potentially expose the diagnoses, and the individuals to whom they relate, to hackers.

TOOLS & TRICKS

  • Q2 & Q3: Data Protection Impact Assessment, yearly internal audit
  • Q5: Penetration test
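Question 3 above (registering who accessed the data, when, and for what purpose) can be sketched as a minimal access register. The names and purposes are invented, and a real implementation would write to tamper-evident, access-controlled storage rather than an in-memory list.

```python
# Illustrative access register answering: who accessed which record,
# at what time, and for what purpose. Entries are hypothetical.
from datetime import datetime, timezone

access_log = []  # in practice: tamper-evident, access-controlled storage

def log_access(user, record_id, purpose):
    """Append one auditable entry per data access."""
    access_log.append({
        "user": user,
        "record": record_id,
        "purpose": purpose,
        "time": datetime.now(timezone.utc).isoformat(),
    })

log_access("dr.janssens", "patient-123", "review AI-suggested diagnosis")
print(len(access_log))  # → 1
```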
10. Human vs machine

An AI system can facilitate the human aspect of care, by giving care professionals more time to interact with patients. Yet the suggestions made by the AI system about the right diagnosis or treatments can deviate from the personal intuition, ‘gut feeling’, and evaluation of the care provider. It can be difficult for care professionals to deal with this balance between the human assessment and the machine.

Envisioned job profile(s): doctors, nursing staff & management

HAVE YOU CONSIDERED?

  1. Are the results of the AI system reviewed by a human professional?
  2. Is there a policy in place on how the assessment of healthcare professionals relates to that of the AI system?
  3. Can healthcare professionals freely report their concerns with regard to the decisions of an AI system?
  4. Who is responsible for (1) the collection, the storage, the sharing, and the analysis of the data, and (2) for choosing the right treatment and monitoring the patient?
  5. Who has the final responsibility?

HOW NOT TO

An AI system analyses a patient's tumour, compares it with its database and makes a suggestion about the zone that needs to be irradiated. Based on this suggestion, the radiologist makes the final call. One day, he follows the suggestion and overlooks an important parameter that was not submitted to the AI system and which implied that the suggestion was wrong. The patient dies because the radiation damaged a vital part of the brain. The radiologist believes the AI system is to blame.

TOOLS & TRICKS

  • Inform care professionals about the functioning of the AI system and about the limits of its performance.
  • Q2: Organise a discussion about how to deal with the 'authority' of an AI system and bundle the outcome into a clear policy.
  • Q4 & Q5: Log the choices that were made during the collection/storage/sharing/analysis of the data (e.g. by making use of the Data Collection Bias Assessment)
11. Data governance & privacy

Privacy with regard to the data subjects is crucial when making use of health data. However, privacy is not a clear-cut concept. Data can, for example, be made anonymous by removing the names of the patients. Yet, because of other parameters in the data set (which are often particularly interesting to feed AI models), it is sometimes still possible to identify patients. Furthermore, the difference between anonymisation and pseudonymisation is not always clear. And is it legitimate to use the data you have collected?

Envisioned job profile(s): IT & management

HAVE YOU CONSIDERED?

  1. What measures did you take to protect the data subjects and their data?
  2. Can you ensure the anonymity of the data subjects? If not, are they informed about this?
  3. Have you considered how proxy-data categories (e.g. shoe size for gender) can also reveal personal information?
  4. Are you complying with the General Data Protection Regulation (GDPR) and other regulations?
  5. Is it still legitimate to use the data?

HOW NOT TO

A hospital has a large electronic health record that it wants to use to train an AI system that can predict the risk of a heart attack. It removed the names of the patients in the data and replaced them with a code, so that patients who need to be informed about their heart attack risk can still be traced back. The hospital did not ask the patients for consent.

TOOLS & TRICKS

  • Q1 - Q4: Because of the sensitive character of medical data, it is a responsible strategy to always ask for consent to collect and use the data, and to at least pseudonymise these data.
  • Q4: Guide 'Artificiële intelligentie en gegevensbescherming' (Artificial intelligence and data protection); check with the ethics committee which regulations you have to comply with.
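The difference between anonymisation and pseudonymisation can be illustrated with a minimal sketch: a patient name is replaced by a keyed hash, so re-identification remains possible for whoever holds the key. That is exactly why such data count as pseudonymised, not anonymised, under the GDPR. The key handling and names below are hypothetical.

```python
# Illustrative pseudonymisation: names are replaced by keyed hashes.
# The key enables re-identification, so it must be stored separately
# and access-controlled; without such a key the mapping is one-way.
import hashlib
import hmac

SECRET_KEY = b"store-me-separately"  # hypothetical key management

def pseudonymise(patient_name):
    """Deterministic keyed hash: the same patient always gets the same code."""
    return hmac.new(SECRET_KEY, patient_name.encode(), hashlib.sha256).hexdigest()[:12]

record = {"patient": "Jan Peeters", "risk_score": 0.83}
safe = {**record, "patient": pseudonymise(record["patient"])}
print("Jan Peeters" in str(safe))  # → False
```

Note that, as the card warns, removing or hashing direct identifiers is not sufficient on its own: combinations of the remaining parameters can still single out a patient.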

Downloads

Below, you can find 3 downloads:

  • A PDF of the AI Blindspot healthcare card set.
  • A PDF with 2 templates to use the AI Blindspots healthcare card set. With the first template, you start from an ethical dilemma and use the AI Blindspots healthcare card set (workshop methods 1 and 2). You can use the second template for the reversed brainstorm with the AI Blindspots healthcare card set (workshop method 4). A filled-in example of the templates is provided. Visit the main page of the AI Blindspots healthcare card set for more information about the methods to use the card set.
  • A PDF with a detailed guide to organise an in-house workshop on the identification of AI Blindspots in a healthcare context.

The card set was adapted from 'AI Blindspot' by Ania Calderon, Dan Taber, Hong Qu and Jeff Wen, who developed the card set during the Berkman Klein Center and MIT Media Lab's 2019 Assembly Program. The AI Blindspots card set is available under a CC BY 4.0 licence.

The Kenniscentrum Data & Maatschappij (Knowledge Centre Data & Society) adapted the original card set to the Flemish context, to support the development of trustworthy AI in Flanders. You can find this card set here.

At a later stage, the Kenniscentrum Data & Maatschappij collaborated with VIVES Hogeschool to create a version of the AI Blindspots card set specifically for the healthcare context. VIVES Hogeschool worked on this Blindspots card set within the project 'AI in ziekenhuizen: ethische en juridische aspecten' (AI in hospitals: ethical and legal aspects), funded by ESF Vlaanderen.
