Tool

Artificial Intelligence Impact Assessment (AIIA)

The Artificial Intelligence Impact Assessment (AIIA) is a structured method for clearly mapping the (social) benefits of an AI application. It also pays attention to analysing the reliability, safety and transparency of the AI system. The AIIA takes the form of a workshop that you go through in the first phase of a project, after which you regularly refer back to the results of that workshop as the project progresses.

This method was developed by ECP | Platform for the Information Society, a Dutch national think tank.

What you should know before reading further:

  • For whom: (project) managers, business analysts, developers
  • Process phase: problem analysis and ideation, design, development
  • Assessment unit: entire application, users, context of the AI system
  • Price: freely available

Method

The tool consists of a clear step-by-step plan, with preparation as the first step: checking whether there is a need to use the tool at all. You determine this by answering seven questions.

Steps 2 to 7 of the AIIA consist of questions that you answer alone or together with others. These questions ensure that you thoroughly discuss what the AI system will do and what its possible ethical impact is. Based on the assessment, you record the results of the conversation, so that you can reflect on the conclusions of this first workshop at a later stage and see what has changed. That later check is step 8, in which you periodically verify whether changes have been made to the system that may have ethical consequences.
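
The AIIA itself is a questionnaire and a workshop, not software, but the record-and-review loop of steps 2 to 8 can be kept in a simple structured log. The sketch below is a rough, hypothetical illustration of such a log; the names (AIIARecord, needs_review) and the 180-day review interval are assumptions, not part of the tool.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIIARecord:
    """One documented pass through the AIIA questions (steps 2-7)."""
    assessed_on: date
    answers: dict[str, str]                 # question -> agreed answer
    conclusions: list[str] = field(default_factory=list)

def needs_review(records: list[AIIARecord], system_changed: bool,
                 max_age_days: int = 180) -> bool:
    """Step 8 (illustrative): flag a re-assessment when the system has
    changed or the last documented assessment is older than max_age_days."""
    if system_changed or not records:
        return True
    age = (date.today() - records[-1].assessed_on).days
    return age > max_age_days
```

Keeping each pass as a dated record makes it straightforward to compare the latest conclusions with earlier ones and to show, at delivery, how the ethical aspects were handled over time.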


Result

You can see the tool as a way to account for the choices made with regard to values and interests. Using the AIIA from the start to the end of the project should result in the delivery of an ethically and legally responsible AI application. When the AI system is delivered, you have documentation that shows how you consciously handled the various ethical aspects that emerged during the process.

Values as mentioned in the tool:
  • Autonomy
  • Transparency
  • Human dignity
  • Responsibility
  • Fairness
  • Democracy & rule of law
  • Safety
  • Privacy
  • Sustainability & durability

Related ALTAI principles:
  • Transparency
  • Diversity, Non-discrimination & Fairness
  • Environmental & Societal wellbeing
  • Technical robustness & safety
  • Privacy & Data governance

This tool was not developed by the Knowledge Center Data & Society. We describe the tool on our website because it can help you deal with ethical, legal or social aspects of AI applications. The Knowledge Center is not responsible for the quality of the tool.