AI Blindspots card set 2.0: implementation phase

The AI Blindspots cards are divided into three phases (planning, development and implementation). On this page, you can find the AI Blindspots of the implementation phase: the phase in which your data application or AI system is finalized and put on the market.

Each AI Blindspots card contains:

  • A set of questions to help you uncover this blindspot;
  • A use case that illustrates the importance of considering the blindspot;
  • A number of tools and tricks to help you detect and mitigate the blindspot.

1. Generalization error

Between conceiving, building and deploying an AI system, conditions in the world may change or may no longer reflect the development context. As a result, the training and testing data are no longer representative or adequate.

HAVE YOU CONSIDERED?

  1. Regularly checking your training and testing data against the current situation?
  2. Ensuring a human review process for outliers?
  3. Determining if the input data and predicted values align with expectations?
  4. Planning how to ensure your model can be retired?

HOW NOT TO

Your email spam filtering system may fail to recognize spam that differs in form from the spam on which the filter was built. As a result, the spam filtering system will not work properly.

TOOLS & TRICKS
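  • Q 1 & 3: monitor for distribution shift before trusting the model's output. Below is a minimal sketch in Python, assuming SciPy is available; the feature (message length), the data and the 0.01 threshold are illustrative assumptions, not part of the card set.

    # Compare a live input feature against the training data with a
    # two-sample Kolmogorov-Smirnov test (hypothetical feature and threshold).
    import numpy as np
    from scipy.stats import ks_2samp

    def feature_drifted(train_values, live_values, alpha=0.01):
        """True if the live values are unlikely to share the training distribution."""
        statistic, p_value = ks_2samp(train_values, live_values)
        return p_value < alpha

    # Example: message lengths seen during training vs. in production.
    rng = np.random.default_rng(seed=0)
    train_lengths = rng.normal(loc=120, scale=30, size=5000)
    live_lengths = rng.normal(loc=180, scale=45, size=500)  # shifted inputs

    if feature_drifted(train_lengths, live_lengths):
        print("Input distribution has shifted; re-check the training data (Q 1).")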

2. Right to explanation & objection

AI systems can carry biases that make them subjective or imperfect. The right to an explanation of, or objection to, an algorithmic decision can mitigate inaccuracies and grant agency to the people affected.

HAVE YOU CONSIDERED?

  1. Ensuring transparency and accountability throughout the system and deployment?
  2. Offering individuals meaningful explanations for a given decision?
  3. Providing guidance on how to contest the AI decision?

HOW NOT TO

Based on algorithms, insurance rates can differ among clients, and it is sometimes unclear where this price difference comes from. Clients should be informed about the reasoning the algorithm uses to determine their premiums.

TOOLS & TRICKS
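  • Q 2: give each client a per-decision explanation. Below is a minimal sketch in Python, assuming a scikit-learn linear model; the insurance features, data and premium bands are hypothetical and only illustrate the idea (complex models may need dedicated tooling such as SHAP or LIME).

    # Explain one premium decision via the per-feature contributions of a
    # logistic regression (hypothetical features and training data).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    feature_names = ["age", "claims_last_5y", "car_value_keur"]
    X = np.array([[25, 2, 30], [45, 0, 20], [35, 1, 50],
                  [60, 0, 15], [30, 3, 40], [50, 1, 25]], dtype=float)
    y = np.array([1, 0, 1, 0, 1, 0])  # 1 = higher premium band

    model = LogisticRegression().fit(X, y)

    client = np.array([28, 2, 35], dtype=float)
    contributions = model.coef_[0] * client  # contribution to the decision score
    for name, value in sorted(zip(feature_names, contributions),
                              key=lambda item: -abs(item[1])):
        print(f"{name}: {value:+.2f}")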

3. User manual

The actual use and the context of use of an AI system determine whether harmful unintended consequences will occur. Writing a user manual to guide potential end users towards responsible use of your AI system can help in this regard.

HAVE YOU CONSIDERED?

  1. The different contexts of use of your system?
  2. Whether the use of AI is proportionate to the context?
  3. Informing users on how to make appropriate use of the AI system?
  4. How your AI system may be deployed in a situation you did not envisage?

HOW NOT TO

Camera surveillance might not be the best choice to increase security in a city; the same budget could perhaps be better spent on prevention or social outreach workers.

TOOLS & TRICKS

  • Q 1 & 3: write a user manual
  • Q 2: use participatory methods to determine proportionality
  • Q 3: consult legal expertise
  • Q 4: think of possible worst-case scenarios and look for mitigation strategies

4. Transparency

It is important not to communicate about your AI system and the decisions it makes only when your target audience asks for it. Gaining the trust of your target audience starts with communicating proactively and transparently at all times.

HAVE YOU CONSIDERED?

  1. Communicating and explaining your AI system and the decisions it makes to your target audience (the users of your AI system) and the outside world?
  2. Which moment would be ideal to communicate about your system and the decisions it makes to your target audience?

HOW NOT TO

You paid for an ethical audit and managed to mitigate all of the challenges it uncovered, but you do not communicate this publicly in any way. As a result, your AI application is met with undeserved suspicion.

TOOLS & TRICKS

5. Service reconsideration

The implementation of your AI system might cause changes in your current workflow and work profiles, as well as in your customer agreements. Being aware of these possible changes will help you anticipate them and act on them quickly.

HAVE YOU CONSIDERED?

  1. Changes in your current workflow and work profiles due to the implementation of your AI system?
  2. Evaluating the effect of your AI system on your service’s customer agreements?

HOW NOT TO

With the help of AI, support specialists can talk to customers about more advanced and difficult topics and easily solve such issues. But in some cases, customers receive a notification about problems with their account before they even know something is wrong.

TOOLS & TRICKS

6. Accountability & sign-off

As a company introducing or making use of an AI system, you must be able to explain and justify, to your partners, users and others who may interact with your AI system, the decisions and actions that were made.

HAVE YOU CONSIDERED?

  1. Who will take the final decision on whether the algorithm can be released?
  2. Who will be held accountable if something goes wrong?

HOW NOT TO

Learning platforms must be clear and explicit about why certain learning paths or options are recommended over others, so that trainees, course designers and teachers can review and update them as they see fit.

TOOLS & TRICKS

  • Trustable Technology Mark
  • Q 1: set up an independent ethical board that reviews the algorithm and the data on which it is built.
  • Q 2: log the choices you made during data collection and during the design and development of the application (e.g. by using the Data Collection Bias Assessment); a minimal sketch of such a decision log follows below.
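
A minimal sketch of such a decision log in Python, assuming an append-only JSON Lines file; the field names and file path are illustrative assumptions, not a prescribed schema:

    # Append one design/data decision to an audit log (hypothetical schema).
    import json
    from datetime import datetime, timezone

    LOG_PATH = "decision_log.jsonl"  # hypothetical location

    def log_decision(author, phase, decision, rationale):
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "author": author,
            "phase": phase,  # e.g. "data collection", "model design"
            "decision": decision,
            "rationale": rationale,
        }
        with open(LOG_PATH, "a", encoding="utf-8") as log_file:
            log_file.write(json.dumps(entry) + "\n")

    log_decision(
        author="data team",
        phase="data collection",
        decision="excluded records collected before the 2015 schema change",
        rationale="older labels are not comparable with the current ones",
    )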

7. Revisited purpose

During and after the development of your AI system, it is important to ask yourself whether your AI system is still the best solution and means for the goal you have set.

HAVE YOU CONSIDERED?

  1. Whether your AI system still serves the purpose you had in mind?
  2. How long a defined ‘ruleset’ for your AI system remains in place, or whether it evolves constantly?

HOW NOT TO

The light sensor of a no-touch soap dispenser in a lavatory was trained solely with light-skinned persons. As a result, people of color could not use the soap dispenser.

TOOLS & TRICKS

8. Perverse effects

After bringing your AI system to market, you might come across unforeseen effects of your AI technology. These perverse effects are unintended outcomes of decisions you and your team made during the planning and development phases.

HAVE YOU CONSIDERED?

  1. Why your AI system might cause perverse effects?
  2. Having an expert double-check the decisions you made during the development of your AI system?

HOW NOT TO

A common feature of social media platforms is the filtering of information based on your searches and clicks. This ‘filter bubble’ offers you a more personalized service. But the filter-bubble algorithm will also present you with false information once you have shown interest in, for example, flat-earth theories. In doing so, it reinforces some users’ belief in conspiracy theories.

TOOLS & TRICKS

  • Q 1: reverse brainstorm: think of possible perverse effects of your AI system, and then see how you will mitigate them.
  • Q 2: interview an expert.

Downloads

Below, you can find two downloads:

  • A PDF of the AI Blindspot card set.
  • A PDF with two templates for using the AI Blindspots card set. With the first template, you start from an ethical dilemma and work through the AI Blindspots card set (workshop methods 1 and 2). You can use the second template for the reverse brainstorm with the AI Blindspots card set (workshop method 4). A filled-in example of the templates is included. Visit the main page of the AI Blindspots card set for more information about the methods for using the card set.
