Guest blog: A Fundamental Rights Impact Assessment for law enforcement AI systems
04.11.2024
Throughout the year, the Knowledge Centre Data & Society offers a platform for partner organisations and other interested parties. In this guest blog, researcher Donatella Casaburo from the KU Leuven Centre for IT & IP Law (CiTiP) dives into the Fundamental Rights Impact Assessment tool on which she worked as part of the European Union-funded ALIGNER project.
Despite the numerous ethical and legal concerns raised, police and law enforcement agencies are increasingly implementing AI systems to enhance their operational capabilities. The Horizon 2020 ALIGNER project released its Fundamental Rights Impact Assessment template, an instrument that allows law enforcement agencies to identify and mitigate the risks to individuals and society posed by their AI systems, as well as to comply with Article 27 of the AI Act.
Artificial intelligence in the law enforcement domain
In the last decade, artificial intelligence (AI) applications have flourished immensely across diverse fields: from the (relatively!) innocuous spam filters, video game bots, and virtual assistants to the more concerning facial recognition systems, deepfakes and lethal autonomous weapons.
Of course, the law enforcement domain is no exception. Police and law enforcement agencies are increasingly implementing AI systems to enhance their abilities to prevent, investigate, detect and prosecute crimes, as well as to predict and preempt them. Many national police forces in the EU already use AI in their daily operations, such as for patrolling hazardous areas; gathering and analyzing data from crime scenes; identifying items, crime suspects, or victims; or forecasting which individuals or geographical areas have an increased probability of criminal activity.
Ethics and law of law enforcement AI
The actual and promised benefits of AI for police forces’ operational capabilities rarely come without a cost. Scholars, practitioners, civil society organizations and policymakers are raising numerous concerns about the possible negative effects that law enforcement AI may have on individuals and society. For instance, AI systems can be biased and reinforce discrimination; the reasoning behind an AI output can be non-explainable and hard for defendants to challenge in court; or a generalized and untargeted use of AI systems can lead to mass surveillance and deter individuals from exercising their rights and freedoms.
To guide police and law enforcement agencies and minimize the risks their use of AI may pose, two instruments are often invoked: ethics and law. Let’s take a closer look at them.
Ethics
Starting with ethics: in 2019, the independent High-Level Expert Group on AI, set up by the European Commission, published its Ethics Guidelines for Trustworthy AI. There, the experts translated the broad principles of AI ethics into seven concrete requirements that should always be met by all AI developers and users, including law enforcement agencies. The requirements are the following:
- human agency and oversight
- technical robustness and safety
- privacy and data governance
- transparency
- diversity, non-discrimination and fairness
- societal and environmental wellbeing
- accountability.
Law
As for the law, the fundamental rights of individuals enshrined in the EU Charter should always inform the development and deployment of all AI systems in the EU. In the context of law enforcement AI, the following fundamental rights are of paramount importance:
- presumption of innocence and right to an effective remedy and to a fair trial
- right to equality and non-discrimination
- freedom of expression and information
- right to respect for private and family life and right to protection of personal data.
Moreover, since August 2024, the AI Act horizontally regulates AI systems placed on the market or put into service in the EU. However, while the AI Act establishes extensive obligations for AI providers, it devotes more limited attention to AI deployers, namely the natural or legal persons using AI systems in the course of their professional activities. As a result, the AI Act largely allocates the task of ensuring an ethical and legal deployment of law enforcement AI to the instrument of the fundamental rights impact assessment. Pursuant to Article 27 of the AI Act, deployers of high-risk AI systems that are bodies governed by public law – such as police and law enforcement agencies – are obliged to perform an assessment of the impact on fundamental rights that the use of such AI systems may produce.
Undoubtedly, obliging police and law enforcement agencies to perform a fundamental rights impact assessment of the high-risk AI systems they are planning to deploy is a first step toward the objective of mitigating the risks posed by law enforcement AI. However, this step is difficult to translate into practice: law enforcement agencies need an operational instrument that can be integrated into their AI governance procedures and that allows them to comply with Article 27 of the AI Act.
The ALIGNER Fundamental Rights Impact Assessment
The Horizon 2020 project ALIGNER (Artificial Intelligence Roadmap for Policing and Law Enforcement) has just released its Fundamental Rights Impact Assessment (FRIA) template, an instrument ready to be implemented by police and law enforcement agencies that are planning to deploy high-risk AI systems for law enforcement purposes.
The ALIGNER FRIA consists of two connected and complementary templates:
- The Fundamental Rights Impact Assessment template, which helps law enforcement agencies identify and assess the risks posed by their AI systems to those fundamental rights most likely to be infringed; and
- The AI System Governance template, which helps law enforcement agencies identify the relevant ethical standards for trustworthy AI and mitigate the risks to fundamental rights.
An iteration of the ALIGNER FRIA enables police and law enforcement agencies to evaluate and mitigate the risks related to a single AI system, deployed for a single law enforcement purpose (or a set of connected purposes) in a pre-determined context of use. Pursuant to Article 27 of the AI Act, the ALIGNER FRIA needs to be performed before the AI system is deployed, so that it can inform decisions on the modalities of use; ideally, it is carried out by a multidisciplinary team including legal, operational and technical experts.
The Fundamental Rights Impact Assessment template
An example from the first template, the Fundamental Rights Impact Assessment, is shown in Figure 1 below.
The Fundamental Rights Impact Assessment template is divided into four parts; in each, a group of fundamental rights is used as the benchmark for the assessment that follows. To help and guide law enforcement agencies in their assessment, the template already lists some possible characteristics of the AI system that may have a negative impact on the fundamental right considered (‘challenge’ column). Law enforcement agencies need to explain whether and to what degree the listed challenges apply to the assessed AI system (‘evaluation’ column). Finally, law enforcement agencies need to estimate the level of risk posed by the AI system to the fundamental right considered (‘estimated risk level’ column).
Figure 2 below shows an example of a filled Fundamental Rights Impact Assessment template.
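For teams that prefer to keep their assessments in a structured, machine-readable form alongside the official document, the column layout described above can be mirrored in a simple data model. The Python sketch below is purely illustrative and not part of the ALIGNER deliverables: the class and field names (ChallengeAssessment, estimated_risk) and the three-level risk scale are assumptions chosen to mirror the ‘challenge’, ‘evaluation’ and ‘estimated risk level’ columns, not the template’s actual vocabulary.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class RiskLevel(Enum):
    # Illustrative three-level scale; the actual scale is defined in the ALIGNER FRIA handbook.
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class ChallengeAssessment:
    """One row of the Fundamental Rights Impact Assessment template (illustrative names)."""
    challenge: str                              # pre-listed characteristic that may harm the right
    evaluation: str = ""                        # whether / to what degree it applies to the system
    estimated_risk: Optional[RiskLevel] = None  # 'estimated risk level' column


@dataclass
class FundamentalRightsPart:
    """One of the four parts; each uses a group of fundamental rights as its benchmark."""
    rights_group: str
    rows: List[ChallengeAssessment] = field(default_factory=list)


# Hypothetical example row for the right to equality and non-discrimination
example_part = FundamentalRightsPart(
    rights_group="Right to equality and non-discrimination",
    rows=[
        ChallengeAssessment(
            challenge="Training data may under-represent certain population groups",
            evaluation="Partially applicable: dataset audited, residual imbalance remains",
            estimated_risk=RiskLevel.MEDIUM,
        )
    ],
)
```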
The AI System Governance template
An example from the second template, the AI System Governance, is shown in Figure 3 below.
The AI System Governance template is divided into seven parts; in each, one of the seven key requirements for trustworthy AI is used as the benchmark for the assessment that follows. To help and guide law enforcement agencies in their assessment, for each building block of the key requirement considered (‘component’ column), the template already lists some characteristics that an AI system should embed to be considered ethical (‘minimum standards to be achieved’ column). When a minimum standard can mitigate the risks posed by the deployment of the AI system, the template connects it to a previously estimated challenge (‘initial risk estimate’ column). Law enforcement agencies then need to explain how they plan to implement the minimum standard (‘additional mitigation measures implemented’ column). Finally, when the minimum standard is connected to a previously estimated challenge, law enforcement agencies need to estimate the final level of risk posed by the AI system to fundamental rights (‘final assessment’ column).
Figure 4 below shows an example of a filled AI System Governance template.
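The governance template can be sketched in the same illustrative fashion, continuing the hypothetical data model above (it reuses RiskLevel, ChallengeAssessment and example_part from the previous snippet). Again, the names GovernanceRow and linked_challenge are assumptions that merely mirror the columns described in this section, not the template’s actual structure.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Continues the previous sketch: RiskLevel, ChallengeAssessment and example_part
# are assumed to be defined there.


@dataclass
class GovernanceRow:
    """One row of the AI System Governance template (illustrative names)."""
    component: str                                           # building block of the key requirement considered
    minimum_standard: str                                    # 'minimum standards to be achieved' column
    linked_challenge: Optional[ChallengeAssessment] = None   # 'initial risk estimate' column
    mitigation_measures: str = ""                            # 'additional mitigation measures implemented' column
    final_risk: Optional[RiskLevel] = None                   # 'final assessment' column


@dataclass
class EthicsRequirementPart:
    """One of the seven parts, each benchmarked against a trustworthy-AI requirement."""
    requirement: str                                         # e.g. "diversity, non-discrimination and fairness"
    rows: List[GovernanceRow] = field(default_factory=list)


# Hypothetical example: a minimum standard mitigating the bias challenge estimated earlier
governance_example = EthicsRequirementPart(
    requirement="Diversity, non-discrimination and fairness",
    rows=[
        GovernanceRow(
            component="Avoidance of unfair bias",
            minimum_standard="Training data are audited for representativeness before deployment",
            linked_challenge=example_part.rows[0],           # risk first estimated in the FRIA sketch
            mitigation_measures="Periodic bias audits and human review of flagged outputs",
            final_risk=RiskLevel.LOW,
        )
    ],
)
```

Comparing estimated_risk with final_risk for linked rows then gives a quick picture of the residual risk that remains after the mitigation measures are applied.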
Interested in the ALIGNER Fundamental Rights Impact Assessment?
You can freely download the ALIGNER FRIA templates and their handbook from ALIGNER’s website. If you would like to know more about the ALIGNER FRIA or provide feedback, you can reach out to Donatella Casaburo (donatella.casaburo@kuleuven.be).
This is an updated version of a post originally published in the blog of the KU Leuven Centre for IT & IP Law, reflecting the changes made to the ALIGNER FRIA templates after the adoption of the AI Act.
ALIGNER has received funding from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement no. 101020574.
Author
Donatella Casaburo
Doctoral Researcher at the KU Leuven Centre for IT & IP Law
The Knowledge Centre Data & Society does not bear responsibility for the content of the guest blogs, and therefore, we will not engage in any correspondence regarding the content. In case of questions, comments, or concerns: contact the author(s)!
Our full disclaimer can be read here.
Credit cover image: Jamillah Knowles & We and AI / Better Images of AI / People and Ivory Tower AI / CC-BY 4.0