AI Blindspots Card Set

How can you take possible prejudices and structural inequalities into account before, during and after the development of an AI system? To help you do this, the Knowledge Centre Data & Society and Agoria developed the AI Blindspots card set, which helps you uncover potential blindspots by reflecting on decisions and actions prior to the development of your AI system.

This card set is inspired by AI Blindspot by Ania Calderon, Dan Taber, Hong Qu, and Jeff Wen, developed during the Berkman Klein Center and MIT Media Lab’s 2019 Assembly program. Their card set is available under a CC BY 4.0 Licence. The Knowledge Centre Data & Society adapted the original card set to the Flemish context in order to support the development of trustworthy AI in Flanders. Agoria then made further modifications to offer it to the whole Belgian ecosystem.


What are AI Blindspots?

AI blindspots are oversights that can occur before, during, or after the development of an AI system. They originate from biases, prejudices and structural disparities in society. The disadvantageous results of AI blindspots are hard to predict, but they can be mitigated by detecting them proactively and reacting accordingly.


Method

Each card contains:

  • A set of questions to reflect on potential blindspots
  • A use case that illustrates the importance of this blindspot
  • Tools and tricks to help detect/mitigate this blindspot


How to use the card set?

  • Read the introduction to the possible blindspots as a group.
  • Try to formulate an answer to each of the questions on the cards.
  • The tools and tricks can help you formulate an answer to these questions.
  • Read the 'how not to' case if you want to know more about the importance of identifying this blindspot.
  • At the end of the card set, there is a joker card on which you can record other potential AI blindspots you and your team detect.


Next steps

The AI Blindspots card set of the Knowledge Centre Data & Society and Agoria currently only focuses on possible blindspots that may occur prior to the development of a system. In the future, this card set will be expanded with possible blindspots for the two other phases, namely during and after the development of a system.

1

Purpose

At the start of an AI project, determine the purpose of your AI system. This includes involving stakeholders, experts and your team to clearly delineate that purpose and the problem your AI system will solve.

HAVE YOU CONSIDERED?

  1. Did you clearly articulate the problem and outcome you are optimizing for?
  2. Is this tool adequate to obtain this outcome?
  3. Do all involved and affected stakeholders recognize this as an important problem?
  4. Did you consider the advantages and disadvantages of your AI system for each stakeholder?
  5. How will you guarantee that your AI system stays true to its purpose?

HOW NOT TO

A company introduced an AI system to speed up its production process, but as an indirect result, employees lost their bonuses. How could this have been avoided? By involving the trade union as a stakeholder in the project and finding a way to increase speed without costing employees their bonuses.

TOOLS & TRICKS

2

Data balance

Data balance means that you have checked whether your data is representative, and that you have considered how you would mitigate any imbalance.

HAVE YOU CONSIDERED?

  1. What is the minimal viable data collection you need according to domain experts?
  2. Who/What might be excluded in your data?
  3. How will limitations in your data impact the representative nature of your model and the actions your model supports?
  4. If your data is unbalanced, can you mitigate this limitation?
  5. Considering your data, can you describe the case or person where your predictions will be most unreliable?

HOW NOT TO

After the release of the massively popular Pokémon Go, several users noted that there were fewer Pokémon locations in predominantly black neighbourhoods. This happened because the creators of the algorithm failed to provide a diverse training set and had not spent any time in those neighbourhoods.

TOOLS & TRICKS
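One simple trick: compare the group proportions in your data with those of the population your system will serve. A minimal sketch in Python (the pandas DataFrame, the `gender` column and the reference shares below are hypothetical stand-ins for your own data and census figures):

```python
import pandas as pd

# Hypothetical training data; replace with your own dataset.
df = pd.DataFrame({"gender": ["F", "M", "M", "M", "F", "M", "M", "M"]})

# Hypothetical reference distribution, e.g. taken from census figures.
population_share = {"F": 0.51, "M": 0.49}

# Share of each group actually present in the data.
data_share = df["gender"].value_counts(normalize=True)

# Flag groups that are strongly under-represented relative to the population.
for group, expected in population_share.items():
    observed = data_share.get(group, 0.0)
    if observed < 0.8 * expected:  # arbitrary 20% tolerance; tune to your case
        print(f"Group {group!r} is under-represented: "
              f"{observed:.0%} in the data vs {expected:.0%} in the population")
```

A check like this will not catch every imbalance (intersections of groups, for instance), but it makes questions 2 and 4 above concrete.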

3

Data governance & privacy

Questions about data governance and about the impact on the privacy of the data subjects whose personal data will be processed by the AI system are all part of the preparation of your AI project. Determining the level of access to the data and describing the flow of information will help you protect your data subjects’ rights.

HAVE YOU CONSIDERED?

  1. Can you lawfully process or reuse the data?
    1. If you reuse the data, is the purpose the same?
    2. Are appropriate contractual arrangements in place?
    3. Can you process or reuse the data on the basis of consent or other grounds?
  2. Do you gather sensitive data or not?
  3. Are there special regimes to protect your data?
  4. Who will have access to the (collected) data? (internally and externally)
  5. Can you comply with the data subjects’ rights under the GDPR?

HOW NOT TO

A UK hospital working together with DeepMind on an AI application for the detection and diagnosis of kidney injury was reprimanded by the UK data protection authority for violating the rules on personal data. It had transferred the personal data of 1.6 million patients without adequately informing them.

TOOLS & TRICKS

4

Team composition

Know your team’s unknown knowns. It is difficult to be aware of possible (ethical) issues if you are not aware of prejudice within your team. To avoid such blindspots, it is necessary to unveil them.

HAVE YOU CONSIDERED?

  1. Did you consider bias in your team?
  2. Is your team diverse and multidisciplinary, or at least in touch with the problem area you are trying to solve?
  3. Who should you invite to help bust mistaken assumptions within your team?

HOW NOT TO

Google’s photo-categorization software has at times mistaken black people for gorillas. The chances of this occurring would have decreased drastically if black team members had tested the service.

TOOLS & TRICKS

5

Cross boundary expertise

You may be an expert in machine learning but not in the field you apply machine learning to. This is fine if you have an expert to tell you what to look out for in terms of typical outliers, hugely important variables or common practices that may impact your data.

HAVE YOU CONSIDERED?

  1. Discussing with domain experts what the minimal viable data collection is that you need in order to allow your AI system to fulfill its purpose?
  2. Consulting an expert to understand what the impact of your algorithm should be?
  3. Which variables are essential for your problem?
  4. Asking an expert to help you assess the results of your algorithm?

HOW NOT TO

A new algorithm was meant to help determine who in the ER needed to be assessed for pneumonia most urgently. According to the algorithm, people with asthma did not require immediate care. Experts did not agree with this estimation, as asthma cases are treated with urgency in the ER, and stated that it was based on a faulty assumption by the AI system: in the training data, asthma patients spent the least time in the ER (precisely because they are prioritised), so the AI system deemed them unimportant for reaching efficiency in the ER.

TOOLS & TRICKS

  • Interview or focus group with expert(s)
  • Workshop on technical and systems requirements
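A discussion with domain experts is easier to ground when they can see which variables actually drive the model’s predictions. A minimal sketch using scikit-learn’s permutation importance (the model and the generated data are illustrative placeholders for your own):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative stand-in data; in practice, use your own features and labels.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = [f"feature_{i}" for i in range(5)]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does shuffling each feature hurt performance on held-out data?
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

# Present the ranking to domain experts: a surprising top feature (like
# asthma in the ER case above) is a prompt to question the training data.
for idx in result.importances_mean.argsort()[::-1]:
    print(f"{feature_names[idx]}: {result.importances_mean[idx]:.3f}")
```
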
6

Abusability

You want to create an AI system to improve something in the world. However, if you only focus on the good it does, you may overlook the ways in which it might cause harm. It is always better to prevent than to cure. So consider what a truly malevolent party could do to or with your application.

HAVE YOU CONSIDERED?

  1. How the AI system might be used unethically?
  2. What the consequences would be if your AI system was used unethically?
  3. Who you have involved to understand the underlying social motivations and threat models?
  4. What your mitigation strategy is if your AI system is used unethically?
  5. What to do if your algorithm develops unethical behaviour?
  6. Which key ethical principles your AI system should exhibit?

HOW NOT TO

In 2016 Microsoft introduced Tay, a Twitter chatbot, to the world. Within 24 hours Tay had changed: based on the tweets addressed to her, she had learned to behave like a racist Twitter user, and Microsoft decided to retire her.

TOOLS & TRICKS

  • Create scenarios to anticipate malicious and unethical uses of your system, and map the consequences of these scenarios onto innocent-bystander personas
  • Involve experts from social sciences and law

Downloads

Interested? Download the card set and get started. There are two versions:

  • Print version: QR codes for the tools & tricks guide you to this webpage.
  • Digital version: hyperlinks for the tools & tricks take you directly to the external webpages.

The card set of the Knowledge Centre Data & Society and Agoria is available under a CC BY 4.0 Licence, which means that you can reuse and remix it without asking for our permission, as long as you credit us as the original authors.