Category: Transparency

Tool: AI Assessment Tool

This open-source AI assessment tool from AI4Belgium, based on the European Commission's ALTAI questionnaire, can help you make your AI system more transparent, robust and trustworthy.

Tool: The Digital Ethics Compass

The Digital Ethics Compass toolkit includes a set of questions, recommendations and a workshop that teach you how to design digital products in an ethical way and how to use this as a competitive advantage.

Tool: Intelligence Augmentation design toolkit

A tool to explore the possibilities of machine learning and its advantages and disadvantages.

Tool: AI Blindspots healthcare

How can you avoid replicating societal biases, prejudices and structural disparities in your AI healthcare system? In order to help you do this, the Knowledge Centre Data & Society developed the AI Blindspots healthcare card set.

Tool: Data Cards Playbook

A playbook for ensuring transparency in data set documentation.
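
A data card is, in essence, structured documentation that travels with a data set. As a purely illustrative sketch (not the Playbook's own template; every field name and value below is an assumption), such a card could be captured like this:

```python
# Illustrative only: a minimal "data card" captured as a Python dictionary.
# The field names and values are assumptions for illustration, not the
# official Data Cards Playbook template.
data_card = {
    "name": "customer-support-tickets-2023",        # hypothetical data set
    "description": "Anonymised support tickets used to train a triage model.",
    "collection_period": "2023-01-01 to 2023-12-31",
    "collection_method": "Exported from the ticketing system, then anonymised.",
    "known_limitations": [
        "Only covers Dutch- and French-language tickets.",
        "Under-represents customers who contact support by phone.",
    ],
    "intended_use": "Training and evaluating the ticket-triage classifier.",
    "prohibited_use": "Profiling of individual customers.",
    "contact": "data-steward@example.org",           # hypothetical contact
}

# Publishing the card alongside the data set keeps its provenance and
# limitations visible to everyone who reuses the data.
for field, value in data_card.items():
    print(f"{field}: {value}")
```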

Tool: AI Blindspots 2.0

How can you take into account possible prejudices and structural inequalities before, during and after the development of your AI application? In order to help you do this, the Knowledge Centre Data & Society developed the AI Blindspots card set.

Tool: AI Blindspots 2.0 - planning phase

How can you take into account possible prejudices and structural inequalities before, during and after the development of your AI application? In order to help you do this, the Knowledge Centre Data & Society developed the AI Blindspots card set.

Tool: AI Blindspots 2.0 - development phase

How can you take into account possible prejudices and structural inequalities before, during and after the development of your AI application? In order to help you do this, the Knowledge Centre Data & Society developed the AI Blindspots card set.

Tool: AI Blindspots 2.0 - implementation phase

How can you take into account possible prejudices and structural inequalities before, during and after the development of your AI application? In order to help you do this, the Knowledge Centre Data & Society developed the AI Blindspots card set.

Tool: Dynamics of AI Principles

The AI Ethics Lab has developed a toolbox that gives an overview of the different existing sets of AI principles that you can integrate into your AI system and/or use to evaluate it.

Tool: Ethics Inc., a design game for ethical AI

In the cooperative card game Ethics Inc., you design ethically responsible AI applications together.

Tool: Method LINDDUN

The need for privacy is becoming more and more important in the development of new applications. LINDDUN is a method developed for assessing privacy within an AI application.

Tool: Product Impact Tool

Technology impacts us in many ways, from individual to societal impact and from open to closed forms of impact. The Product Impact Tool gathers all these forms of impact to help researchers and developers reflect on their technology.

Tool: Ethical Explorers Pack

Would you like to develop products that make the world a bit better? Omidyar offers a tool to help you develop responsible technology. Bonus: you also receive a guide to actually change the way your organisation develops tech.

Article: Explainable machine learning in deployment

The authors of this article interviewed 20 data scientists to examine their explainability techniques. After determining that four techniques were widely used, they described how these were most often applied and finally provided some recommendations and concerns in relation to explainability.

Article: Explainable Artificial Intelligence for Kids

The author, J.M. Alonso, offers a conceptual method for creating natural explanations for children using single classifiers.

Tool: AI Explainability 360 Open Source Toolkit

A good way to get acquainted with the idea of explainable AI is to look at IBM Research's AI Explainability 360 Open Source Toolkit. In addition, it offers multiple algorithms as a possible starting point for applying explainable AI (XAI).
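
If you want a feel for what such algorithms do before installing the toolkit, here is a minimal sketch of one common explainability technique, permutation feature importance, shown with scikit-learn for illustration rather than with the AIX360 API itself:

```python
# Minimal sketch of one common explainability technique: permutation
# feature importance. This uses scikit-learn for illustration; it is not
# the AI Explainability 360 API, which offers its own algorithms.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops:
# the bigger the drop, the more the model relies on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, importance in sorted(
    zip(X.columns, result.importances_mean), key=lambda item: -item[1]
)[:5]:
    print(f"{name}: {importance:.3f}")
```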

Tool: Guidance Ethics

The guidance-ethics approach aims to start the dialogue about the connection between ethics and technology through a workshop.

Tool: Artificial Intelligence Impact Assessment (AIIA)

The Artificial Intelligence Impact Assessment (AIIA) is a structured method to clearly identify the (social) benefits of applying an AI system. In addition, attention is paid to analysing the reliability, safety and transparency of the AI system.

Tool: AI System Ethics Self-Assessment Tool

Do you want to know how ethical your AI application is? The AI System Ethics Self-Assessment Tool helps you assess your application against four ethical principles yourself.

Tool: People+ AI Guidebook

The principles discussed in this guide are intended to assist user experience (UX) professionals and product managers with a human-centered approach to AI. This guide helps them put the user at the center of developing an AI application.

Tool: Unbias Toolkit

This toolkit, created by and for young people, is meant to share young people's online experiences with policy makers, regulators and the ICT industry.

Tool: Principles for Accountable Algorithms and Social Impact Statement for Algorithms

This tool by FAT/ML ('Fairness, Accountability and Transparency in Machine Learning') aims to promote the following ethical principles: responsibility, explainability, accuracy and accountability.

Tool: SDoC for AI / AI service FactSheets

The authors of this article state that suppliers of AI applications should create a fact sheet for each product, demonstrating that the application is compliant (an SDoC, or 'supplier's declaration of conformity'), just as is done for other products.
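
What exactly belongs in such a fact sheet is up to the supplier and the regulator. As a hedged illustration (the required fields and example values below are hypothetical, not taken from the FactSheets work), the conformity idea can be sketched as a simple completeness check:

```python
# Illustrative sketch of a conformity-style check on an AI fact sheet.
# The required fields and example values are hypothetical, not the
# official FactSheets or SDoC specification.
REQUIRED_FIELDS = [
    "intended_use",
    "training_data",
    "evaluation_results",
    "fairness_testing",
    "known_limitations",
]

def missing_fields(factsheet: dict) -> list[str]:
    """Return the required fields that are absent or left empty."""
    return [field for field in REQUIRED_FIELDS if not factsheet.get(field)]

factsheet = {
    "intended_use": "Automated triage of incoming insurance claims.",
    "training_data": "Claims handled between 2019 and 2023, anonymised.",
    "evaluation_results": "F1 = 0.87 on a held-out 2023 test set.",
}

gaps = missing_fields(factsheet)
print("Complete." if not gaps else f"Missing sections: {', '.join(gaps)}")
```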

Tool: Data Collection Bias Assessment

With this Data Collection Bias Assessment form, you make a number of choices explicit from the start of data collection so that you can discover possible biases at an early stage.
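
One of those choices is which groups the collected data should represent. As an illustrative sketch (the group labels, counts and target shares below are assumptions, not part of the form), such a check could look like this:

```python
import pandas as pd

# Illustrative sketch: compare the composition of the collected data with
# the composition you intended to reach. The group labels, counts and
# target shares are assumptions for illustration, not part of the form.
collected = pd.DataFrame(
    {"age_group": ["18-34"] * 620 + ["35-64"] * 310 + ["65+"] * 70}
)

target_shares = {"18-34": 0.35, "35-64": 0.45, "65+": 0.20}

observed_shares = collected["age_group"].value_counts(normalize=True)
for group, target in target_shares.items():
    observed = observed_shares.get(group, 0.0)
    flag = "  <-- under-represented" if observed < 0.5 * target else ""
    print(f"{group}: observed {observed:.0%}, target {target:.0%}{flag}")
```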
