What: Legislation
Impact score: 1
For who: Government, businesses, citizens
With FRAIA, the government hopes that the relevant parties can have a balanced discussion when considering whether or not to develop an algorithmic application. It should help public servants identify the risks of using algorithms and take appropriate measures to address those risks.
FRAIA comprises many questions about the topics that need to be discussed, and to which an answer must be formulated, whenever a government organization considers developing, delegating the development of, buying, adjusting, and/or using an algorithm. Examples of questions include:
- What are the public values that prompt the use of an algorithm?
- What is the legal basis for the use of the algorithm and of the targeted decisions that will be made on the basis of the algorithm?
- What type of data is going to be used as input for the algorithm and from which sources has the data been taken?
Answering these questions is legally complex and requires consultation with various internal and external stakeholders. This also makes carrying out the FRAIA a time-consuming process.
The assessment contains four steps:
- Part 1 assesses the reasons, the underlying motives, and the intended effects of the use of the algorithm.
- Part 2 highlights the algorithm itself: first, questions are asked about the data used, then about the algorithm and the conditions for using it responsibly.
- Part 3 focuses on the implementation and use of the algorithm and on how to deal with its output.
- Part 4 is a tool that assesses whether the algorithm compromises fundamental rights.
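
To make the four-part structure concrete, here is a minimal illustrative sketch in Python of how an organization might track its progress through the assessment. The part titles and sample questions are paraphrased from this section; the class and function names are hypothetical and are not part of any official FRAIA tooling.

```python
from dataclasses import dataclass, field

@dataclass
class AssessmentPart:
    """One of the four FRAIA parts, tracked as a set of open questions."""
    title: str
    questions: list[str]
    answers: dict[str, str] = field(default_factory=dict)

    def is_complete(self) -> bool:
        # A part counts as complete once every question has an answer.
        return all(q in self.answers for q in self.questions)

# Hypothetical outline of the four parts described above; the real
# FRAIA contains many more questions per part.
fraia = [
    AssessmentPart("Part 1: reasons, motives, intended effects",
                   ["What public values prompt the use of the algorithm?"]),
    AssessmentPart("Part 2: data and algorithm",
                   ["What type of data is used as input, and from which sources?"]),
    AssessmentPart("Part 3: implementation, use, and output",
                   ["How will the organization deal with the algorithm's output?"]),
    AssessmentPart("Part 4: fundamental rights check",
                   ["Does the algorithm compromise any fundamental right?"]),
]

# The assessment is only done when every part is complete.
print(all(part.is_complete() for part in fraia))  # False: nothing answered yet
```

Note that the published FRAIA is a questionnaire to be worked through and documented by people, not software; the sketch only mirrors its four-part structure.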
FRAIA must be carried out when algorithms are used to make evaluations of, or decisions about, people. Where that boundary lies in practice is, however, often unclear. For example: does a policing algorithm that predicts where and when burglaries will happen evaluate or decide about people? The government has yet to clarify this.