Ethical OS Toolkit

Under the slogan 'How not to regret what you built', the Institute for the Future (IFTF) and the Omidyar Network have developed a toolkit that helps you anticipate difficult and unwelcome consequences, so you can prevent them while developing AI-based products and projects.

What you should know before reading further:

  • For whom: developers, product managers, engineers
  • Process phase: design
  • System component: data usage and processing, the AI technology, entire application, users, context of AI system
  • Price indication: freely available


The makers of the toolkit have identified 8 risk zones. In addition, they have written 14 scenarios that can help you discuss the impact of your AI application, and they provide 7 strategies to help your company prepare for the AI future.


The toolkit gives you insight into the possible ethical impact of your product. Going through the scenarios keeps ethical questions central throughout the process and fosters an ethical mindset.

Values as mentioned in the tool
  • Trust, misinformation and propaganda
  • Diversity, Non-discrimination & Fairness
  • Addiction and dopamine economy
  • Privacy & Data governance
  • Economic unfairness
  • Environmental & Societal wellbeing
  • Machine ethics and algorithmic bias
  • Governmental surveillance
  • Data control and data as currency
  • Implicit trust and usability insights
  • Evil and criminal actors



Go to the tool.

This tool was not developed by the Knowledge Centre Data & Society. We describe the tool on our website because it can help you deal with ethical, legal or social aspects of AI applications. The Knowledge Centre is not responsible for the quality of the tool.