AI Blindspots card set 2.0: planning phase

The AI Blindspots cards are divided into three phases (planning, development and implementation). On this page, you can find the AI Blindspots of the planning phase, the phase prior to the development of your data application or AI system.

Each AI Blindspots card contains:

  • A set of questions to help you uncover this blindspot;
  • A use case that illustrates the importance of considering the blindspot;
  • A number of tools and tricks to help you detect and mitigate the blindspot.
1

Purpose

At the start of an AI project, determine the purpose of your AI system. This means involving stakeholders, experts and your team to clearly delineate the purpose and the problem your AI system will solve.

HAVE YOU CONSIDERED?

  1. Did you clearly articulate the problem and outcome you are optimizing for?
  2. Is this tool adequate to obtain this outcome?
  3. Do all involved and affected stakeholders recognize this as an important problem?
  4. Did you consider the advantages and disadvantages of your AI system for each stakeholder?
  5. How will you guarantee that your AI system stays true to its stated purpose?

HOW NOT TO

A company introduced an AI system to speed up its production process, but as an indirect result, employees lost their bonuses. How could this have been avoided? By involving the trade union as a stakeholder in the project, the company could have found a way to increase the speed without employees losing their bonuses.

TOOLS & TRICKS

2

Data balance

Data balance means that you have checked whether your data is representative, and that you have considered how you would mitigate any imbalance.
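A first representativeness check can be sketched in code. The example below is a minimal illustration, not part of the card set: it compares the distribution of a hypothetical `neighbourhood` attribute in a dataset against assumed reference shares (e.g. from a census), flagging groups that are under-represented or missing. Real balance audits require domain expertise to choose the right attributes and reference population.

```python
from collections import Counter

def balance_report(records, attribute, reference=None):
    """Compare the distribution of `attribute` in the data
    against an (optional) reference population distribution."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    report = {}
    for value, count in counts.items():
        share = count / total
        expected = reference.get(value) if reference else None
        report[value] = {"share": round(share, 3), "expected": expected}
    # Flag groups absent from the data but present in the reference
    if reference:
        for value in reference:
            report.setdefault(value, {"share": 0.0, "expected": reference[value]})
    return report

# Hypothetical example: neighbourhood representation in a dataset
records = [{"neighbourhood": "A"}] * 90 + [{"neighbourhood": "B"}] * 10
census = {"A": 0.6, "B": 0.3, "C": 0.1}  # assumed reference shares
print(balance_report(records, "neighbourhood", census))
```

A large gap between a group's share in the data and its expected share, or a group missing entirely (like "C" above), signals exactly the kind of blindspot the questions below help you uncover.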

HAVE YOU CONSIDERED?

  1. What is the minimal viable data collection you need according to domain experts?
  2. Who/What might be excluded in your data?
  3. How will limitations in your data impact the representative nature of your model and the actions your model supports?
  4. If your data is unbalanced, can you mitigate this limitation?
  5. Considering your data, can you describe the case or person where your predictions will be most unreliable?

HOW NOT TO

After the release of the massively popular Pokémon Go, several users noted that there were fewer Pokémon locations in primarily black neighbourhoods. This happened because the creators of the algorithm relied on a training set that was not diverse and did not reflect these neighbourhoods.

TOOLS & TRICKS

3

Data governance & privacy

Questions about data governance, and about the impact on the privacy of the data subjects whose personal data will be processed by the AI system, are all part of the preparation of your AI project. Determining the level of access to the data and describing the flow of information will help you protect your data subjects' rights.

HAVE YOU CONSIDERED?

  1. Can you lawfully process or reuse the data?
    1. If you reuse the data, is the purpose the same?
    2. Are appropriate contractual arrangements in place?
    3. Can you process or reuse the data on the basis of consent or other grounds?
  2. Do you gather sensitive data?
  3. Are there special regimes to protect your data?
  4. Who will have access to the (collected) data? (internally and externally)
  5. Can you comply with the data subject’s rights of the GDPR?

HOW NOT TO

A UK hospital working together with DeepMind on an AI application for the detection and diagnosis of kidney injury was found to have breached the rules on personal data. It had transferred the personal data of 1.6 million patients without adequately informing them.

TOOLS & TRICKS

4

Team composition

Know your team’s unknown knowns. It is difficult to be aware of possible (ethical) issues if you are not aware of prejudice within your team. To avoid such blindspots, it is necessary to unveil them.

HAVE YOU CONSIDERED?

  1. Did you consider bias in your team?
  2. Is your team diverse and multidisciplinary, or at least in touch with the problem area you are trying to solve?
  3. Who should you invite to help debunk mistaken assumptions within your team?

HOW NOT TO

Google’s photo-categorization software has at times mistaken black people for gorillas. The chances of this occurring would have decreased drastically if black team members had tested the service.

TOOLS & TRICKS

5

Cross boundary expertise

You may be an expert in machine learning but not in the field you apply machine learning to. This is fine if you have an expert to tell you what to look out for in terms of typical outliers, hugely important variables or common practices that may impact your data.

HAVE YOU CONSIDERED?

  1. Discussing with domain experts what the minimal viable data collection is that you need in order to allow your AI system to fulfill its purpose?
  2. Consulting an expert to understand what the impact of your algorithm should be?
  3. Which variables are essential for your problem?
  4. Asking an expert to help you assess the results of your algorithm?

HOW NOT TO

A new algorithm was meant to help diagnose who needs to be assessed for pneumonia ASAP in the ER. According to the algorithm, people with asthma do not require immediate care. Experts did not agree with this estimation, as asthma cases are treated with urgency in the ER. The experts pointed out that the algorithm's conclusion rested on a faulty assumption: in the training data, asthma patients spent the least time in the ER, so the AI system deemed them unimportant for reaching efficiency in the ER.

TOOLS & TRICKS

  • Interview or focus group with expert(s)
  • Workshop on technical and systems requirements
6

Abusability

You want to create an AI system to improve something in the world. However, if you only focus on the good it does, you may overlook the ways in which it might cause harm. It is always better to prevent than to cure. So consider what a truly malevolent party could do to or with your application.

HAVE YOU CONSIDERED?

  1. How the AI system might be used unethically?
  2. What the consequences would be if your AI system was used unethically?
  3. Who you have involved to understand the underlying social motivations and threat models?
  4. What your mitigation strategy is if your AI system is used unethically?
  5. What to do if your algorithm develops unethical behaviour?
  6. What are the key ethical principles that your AI system should exhibit?

HOW NOT TO

In 2016 Microsoft introduced Tay, a Twitter chatbot, to the world. Within 24 hours Tay had learned to post racist tweets, based on the tweets addressed to her. Microsoft therefore decided to retire her.

TOOLS & TRICKS

  • Creating scenarios to anticipate malicious and unethical uses of your system, and mapping the consequences of these scenarios onto innocent bystander personas
  • Involve experts from social sciences and law
  • Thing-Centered Design

Downloads

Below, you can find two downloads:

  • A PDF of the AI Blindspot card set.
  • A PDF with two templates for using the AI Blindspots card set. With the first template, you start from an ethical dilemma and work through it with the AI Blindspots card set (workshop methods 1 and 2). The second template supports the reversed brainstorm with the AI Blindspots card set (workshop method 4). A filled-in example of each template is provided. Visit the main page of the AI Blindspots card set for more information about the methods for using the card set.
