Tool

Ethical OS Toolkit

With the slogan 'How not to regret what you built' in mind, the Institute for the Future (IFTF) and the Omidyar Network have developed a toolkit that helps you anticipate difficult and unwelcome consequences so you can prevent them while developing AI-based products and projects.

What you should know before reading further:

  • For whom: developers, product managers, engineers
  • Process phase: design
  • System component: data usage and processing, the AI technology, entire application, users, context of AI system
  • Price indication: freely available

Method

The makers of the toolkit have identified 8 risk zones. Alongside these risk zones, they have written 14 scenarios to help you discuss the impact of your AI application, and they provide 7 strategies to help your company prepare for the AI future.

Result

The toolkit gives you insight into the possible ethical impact of your product. By working through the scenarios, you keep ethical questions central throughout development, which helps you build an ethical mindset.

Values as named in the tool:

  • Trust, disinformation and propaganda
  • Addiction and the dopamine economy
  • Economic and asset inequalities
  • Machine ethics and algorithmic biases
  • Government surveillance
  • Data control and data as monetary currency
  • Implicit trust and user understanding
  • Hateful and criminal actors

Related ALTAI principles:

  • Diversity, Non-discrimination & Fairness
  • Privacy & Data governance
  • Environmental & Societal wellbeing

Link

Ethical OS Toolkit

https://ethicalos.org/

This tool was not developed by the Knowledge Center Data & Society. We describe the tool on our website because it can help you deal with ethical, legal or social aspects of AI applications. The Knowledge Center is not responsible for the quality of the tool.