
News Recommenders and Cooperative Explainability

Confronting the contextual complexity in AI explanations

M.Z. van Drunen, PhD researcher, Institute for Information Law, University of Amsterdam

J. Ausloos, Postdoctoral researcher, Institute for Information Law, University of Amsterdam

N.M.I.D. Appelman, PhD researcher, Institute for Information Law, University of Amsterdam

N. Helberger, Distinguished University Professor of Law and Digital Technology, University of Amsterdam

Artificial Intelligence (AI) needs to be explainable. This is a key objective advanced by the European Commission (and its high-level expert group) throughout its AI policy, by the Council of Europe, and by a rapidly growing body of academic scholarship across disciplines, from computer science to communication science and law. This interest in explainability is in part fuelled by the pragmatic concern that some form of understanding is necessary for AI’s uptake (and therefore its economic success). But on a more fundamental level, there is a recognition that explainability is necessary to understand and manage the societal shifts AI triggers, and to ensure the continued agency of the individuals, market actors, regulators, and societies confronted with AI.

However, what does it mean for AI to be explainable? In this vision paper, we argue that the answers to this question must take better account of explainability’s contextual and normative complexity. For our purposes here, and without discarding the ample scholarly debate around this notion, we understand explainability as “meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing” (as can be derived from a combined reading of Articles 13(2)(f), 14(2)(g), 15(1)(h) and 22, and Recital 71, of the General Data Protection Regulation (GDPR)). As will become apparent in the following pages, explainability is relied on to perform a wide variety of functions with regard to a wide variety of actors involved in, and affected by, AI. This complexity is reflected in the patchwork of sector-specific legal frameworks, policy recommendations, ethics guidelines and self-regulatory instruments governing all kinds of decision-making processes now grouped under the common ‘AI’ denominator. Given these considerations, we argue that approaches that focus on specific AI explanations, or that treat explainable AI as a general, abstract concept, cannot fully address its inherent complexity.

That complexity is lost when explainable AI is treated as an abstract, general concept, addressed generically and without attention to the specifics of a particular sector. What is needed instead is to acknowledge this complexity head-on. This requires us to understand the normative demands and restrictions on explainability in specific contexts, the different capacities and responsibilities of the various stakeholders within these contexts, and how these can and should relate to one another. AI explainability, in other words, must be seen as part of the larger sectoral approach to governing specific technology applications and stakeholders.

To illustrate this point, we will focus on the specific AI systems that recommend news on social media and, increasingly, also on legacy media. In the context of the media, AI is often defined broadly as “A collection of ideas, technologies, and techniques that relate to a computer system’s capacity to perform tasks normally requiring human intelligence.” Where AI is used to automate an editorial activity such as news dissemination, explainability is not just important to enable accountability vis-à-vis governmental and especially non-governmental actors, but also to afford individuals the means to control their media diets and to generate the trust the media requires to fulfil its role in democracy.

Stressing the need for a more nuanced understanding of AI explainability, this vision paper draws on the concept of cooperative responsibility, as developed by Helberger, Pierson, and Poell. For the purposes of this paper, we wish to highlight three key points:

  • Cooperative responsibility addresses the idea that certain problems in complex (automated) systems (such as their impact on diversity, accountability, or privacy) are the result of the interplay of all stakeholders involved, and cannot be solved by, or attributed solely to, one actor (the so-called ‘problem of many hands’).
  • These problems cannot be fully resolved without considering the different roles, capabilities and responsibilities of all stakeholders, and how these are meaningfully divided. The (degree of) responsibility of each stakeholder depends on, among other things, their knowledge, capacities, resources, incentives, and efficiency in addressing the problem. These factors are context-dependent.
  • The responsibilities of each actor cannot be seen in isolation, but are interconnected. Platforms, for example, have a responsibility to ensure that the users whose interactions they facilitate can do their share to contribute to the realisation of public values (for example, through the way they design and explain their systems). Individuals, at the same time, have the ability to exert pressure on their governments and on platforms to secure public values, and with that also a responsibility for the realisation of those values. One critical precondition for all actors, including users, to be able to exercise their responsibility is explainability.

Following this logic, we will use Section 2 to sketch the complex web of actors involved in providing and receiving explanations of AI in news recommender systems, and Section 3 to highlight the normative complexities of AI explainability in this specific context. Section 4 will chart a path toward a more comprehensive form of cooperative AI explainability.

The views expressed in this paper are those of the authors and do not necessarily reflect the views or policies of the Knowledge Center Data & Society or CiTiP. The paper aims to contribute to the existing debate on AI.