TOOL

LINDDUN

The need for privacy is growing and becomes more and more important in the development of new applications. But how do you set up and execute a thorough privacy assessment of your software system? LINDDUN is a method developed for assessing privacy within an AI application, based on seven categories of threats to individual privacy.

LINDDUN was created out of a collaboration between the DistriNet and COSIC research groups of KU Leuven (Belgium). Since then, DistriNet has continued to evaluate and improve the methodology.

What you should know before reading further

  • For whom: (project) managers, business analysts, developers
  • Process phase: design, development, implementation, evaluation and iteration
  • System component: complete application
  • Price indication: freely available

Method

LINDDUN is a workshop methodology in three steps. These steps help you visualise your data flows, determine the risks of using this data and discover the steps needed to mitigate those risks. LINDDUN itself is an acronym for the seven basic threats to individual privacy.

These threats are:

  • Linkability
  • Identifiability
  • Non-repudiation
  • Detectability
  • Disclosure of information
  • Unawareness
  • Non-compliance

The methodology offers four tools to use during these steps: they help you visualise the data flow and give you a taxonomy of the different kinds of threats and possible solutions.
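To make the first two steps more concrete, the sketch below shows one way to list, per element of a simple data flow diagram, which LINDDUN threat categories a workshop could review. This is an illustrative Python sketch, not part of LINDDUN itself: the per-element mapping and the names (DfdElement, enumerate_threats) are assumptions made for this example; the mapping table and threat trees on linddun.org are the authoritative reference.

# Illustrative sketch only, not official LINDDUN tooling.
# It lists, per element of a toy data flow diagram, which LINDDUN threat
# categories a workshop could walk through. The per-element mapping below is
# an assumption for demonstration; use the mapping table on linddun.org.

from dataclasses import dataclass

LINDDUN_CATEGORIES = [
    "Linkability", "Identifiability", "Non-repudiation", "Detectability",
    "Disclosure of information", "Unawareness", "Non-compliance",
]

# Hypothetical mapping from DFD element type to categories worth reviewing.
CATEGORIES_PER_ELEMENT_TYPE = {
    "external_entity": ["Linkability", "Identifiability", "Unawareness"],
    "process": LINDDUN_CATEGORIES[:5] + ["Non-compliance"],
    "data_store": LINDDUN_CATEGORIES[:5] + ["Non-compliance"],
    "data_flow": LINDDUN_CATEGORIES[:5],
}

@dataclass
class DfdElement:
    name: str
    element_type: str  # one of the keys in CATEGORIES_PER_ELEMENT_TYPE

def enumerate_threats(elements):
    """Yield (element name, threat category) pairs to discuss in the workshop."""
    for element in elements:
        for category in CATEGORIES_PER_ELEMENT_TYPE[element.element_type]:
            yield element.name, category

if __name__ == "__main__":
    dfd = [
        DfdElement("User", "external_entity"),
        DfdElement("Recommendation service", "process"),
        DfdElement("Profile database", "data_store"),
        DfdElement("User -> Recommendation service", "data_flow"),
    ]
    for element_name, category in enumerate_threats(dfd):
        print(f"{element_name}: review '{category}' threats")

The output of such an enumeration is simply a checklist per element; in the actual workshop each pair would be examined against the corresponding LINDDUN threat tree and documented together with possible mitigations.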

Result

By using this methodology you will create an overview of the state of privacy in your application, discover the different privacy threats in your system and find ways to take action where needed.

Ethical values as mentioned in the tool | Related ALTAI principles
  • Privacy | Privacy & Data governance
  • Transparency | Transparency

Links

More info and downloads of the methodology:

https://www.linddun.org



This tool was not developed by the Knowledge Center Data & Society. We describe the tool on our website because it can help you deal with ethical, legal or social aspects of AI applications. The Knowledge Center is not responsible for the quality of the tool.