Tag: Report: Tools for Ethics
A good way to get acquainted with the idea of explainable AI (XAI) is to look at IBM Research's AI Explainability 360 Open Source Toolkit. The toolkit also offers multiple algorithms as a starting point for working with XAI.
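To give a feel for what such explainability algorithms do, the toy sketch below attributes a model's score to individual features by ablating them one at a time. This is an illustration of the general perturbation-based explanation idea only, not code from AI Explainability 360; the scoring model and its weights are entirely hypothetical.

```python
def score(features):
    # Hypothetical scoring model: a simple weighted sum of features.
    weights = {"income": 0.5, "debt": -0.8, "age": 0.1}
    return sum(weights[name] * value for name, value in features.items())

def explain(features):
    """Attribute the score to each feature by measuring how much the
    score drops when that feature is set to zero (a crude ablation)."""
    base = score(features)
    contributions = {}
    for name in features:
        ablated = dict(features, **{name: 0.0})
        contributions[name] = base - score(ablated)
    return contributions

applicant = {"income": 3.0, "debt": 2.0, "age": 40.0}
print(explain(applicant))  # each feature's contribution to the score
```

Real XAI toolkits implement far more rigorous versions of this idea (for example, perturbation methods that sample many local variations of the input), but the core question is the same: which features drive this prediction, and by how much?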
The Institute for the Future (IFTF) and the Omidyar Network have developed a toolkit to help anticipate difficult and unwelcome consequences and to prevent them from occurring while you develop AI-based products and projects.
The Data Ethics Canvas provides tools to address ethical questions during the design and development of a project or product based on data, such as an AI application.
A component of designing new technologies is imagining the impact they will have on the world. To facilitate this, Artefact created the Tarot Cards of Tech. These cards can be used during team meetings to perform an informal impact assessment.
The guidance-ethics approach aims to start a dialogue about the connection between ethics and technology through a workshop.
The Artificial Intelligence Impact Assessment (AIIA) is a structured method to clearly map the (societal) benefits of an AI application. It also devotes attention to analyzing the reliability, safety and transparency of the AI system.
The Aequitas tool performs an audit for your project. This method is intended to analyze whether there are prejudices in the data and in the models you use. You can perform the audit via the desktop or online tool.
The AI Maturity Tool is intended to find out to what extent your organization is ready to get started with AI.
With the Building an Algorithm Tool you ask ethical questions during the entire AI process: the design, development, test phase and implementation of the AI application.
The creators of the Data Ethics Guide want to highlight the issues surrounding ethics and make them manageable for companies that work with AI technology. They introduce different concepts of how ethics can be researched within a company. They do this through questions that help determine the current ethical situation.
Do you want to know how ethical your AI application is? The AI Systems Ethics Self-Assessment Tool helps you assess your application against four ethical principles yourself.
The principles discussed in this guide are intended to assist user experience (UX) professionals and product managers with a human-centered approach to AI. This guide helps them put the user at the center of developing an AI application.
This toolkit created by and for young people is meant to share the online experience of young people with policy makers, regulators and the ICT industry.
This tool by FAT/ML ('Fairness, Accountability and Transparency in Machine Learning') aims to promote the following ethical principles: responsibility, explainability, accuracy and accountability.
This tool was created by and for data scientists. It consists of various methodologies that can all be applied at the start of a project. The aim is to document ethical issues and the decisions made about them, for example to provide accountability towards stakeholders.
This ethical framework is intended for government agencies. However, it can also be useful for other organizations.
The authors of this article argue that suppliers of AI applications must create a fact sheet for each product demonstrating that the application is 'compliant' (an SDoC, or 'supplier's declaration of conformity'), just as happens for other products.
The Ethics framework of Machine Intelligence Garage consists of seven principles, each with a set of questions that can lead to a better understanding of how to deal with ethics in your design.