policy monitor

International AI Governance - Paper on International AI Institutions

In this academic paper, the authors provide a comparative overview of the range of possible international institutions that have been suggested to govern the application of AI. The paper examines seven different models for international institutions for AI governance. For each model, the paper discusses its specific features, existing and underexplored examples, critiques and advantages.

What: academic paper

Impact score: 5

For whom: policymakers, researchers, other stakeholders in the AI-policy context

URL: https://www.legalpriorities.org/documents/Maas%20-%20Villalobos%20-%20International%20AI%20Institutions.pdf

Summary

This paper was written by the Legal Priorities Project (LPP) and provides a comparative overview of the discussion regarding the institutional framework for AI governance. The authors acknowledge that calls for the establishment of a (new) international AI institution represent an approach that has not yet gained global traction. Various other approaches are also being proposed by different parties, such as the application (and clarification) of already existing international standards.

In this paper the authors describe seven different governance models. Each model description covers the model's characteristics, existing (and underexplored) examples, specific AI-focused institutions suggested in the literature under the model, and their advantages and disadvantages.

A first model is the scientific consensus-building model, which mainly aims to raise awareness and create consensus on a scientific level. Important to note is that proposals under this model intend to be non-political and neutral in their scientific assessments. An existing example of a body in this model is the Intergovernmental Panel on Climate Change (https://www.ipcc.ch/), which could possibly serve as a template for a future International Panel on AI. Several AI experts have already expressed interest in the creation of a scientific AI panel to communicate risks and opportunities to policymakers or international organizations. However, critics fear that a lack of consensus concerning the future trajectory of AI makes this model not yet feasible.

The second model is the political consensus-building and norm-setting model, striving to reach political agreement on the governance of AI. By aligning national regulations, harmonization can be achieved and unified norms can be set. An informal example of this model is the G20, where a discussion on AI governance could be started quickly thanks to the group's existing alignment. Focusing on AI, an International AI Organization could serve as a public forum where nations discuss international standards. Another option is to include experts in the discussion, which would be the case in a proposed International Agency for AI. A challenge that institutions under this model face is defining the number of actors to include in the discussion: when there are too many parties, finding a consensus becomes increasingly difficult; when there are too few parties, the consensus might be seen as non-representative at the international level.

A third model is the coordination of policy and regulation, uniting the different types of policies within AI governance into a coherent whole. An international example of this kind of institution is the World Trade Organization. Experts have proposed multiple AI institutions that follow the example of the WTO by combining multilateral treaties, international standards, monitoring bodies, et cetera to address the risks and opportunities of AI. Such an institution could also be created at a regional level, for example within the European Union. One challenge that this model may need to confront is overcoming countries' limited interest in participating in international institutions focused on political and ethical standard-setting. (An exception could be the Convention on AI being negotiated within the Council of Europe.)

A fourth model (enforcement of standards and restrictions) seeks to prevent the application of hazardous technologies by setting up monitoring systems, bans, licenses, et cetera. Potential conflicts are prevented by ensuring compliance by states and enforcing international standards. An existing example is the International Atomic Energy Agency, which safeguards its standards through effective monitoring. Several proposals for “global watchdogs” of AI have been raised, aiming to establish and apply safety protocols to (certain types of) AI in order to ensure prosperity and minimize risks. Other previously mentioned proposals, like the International Agency for AI, could be expanded to include a system that controls hazardous applications. Some experts wish to go even further by creating institutions that analyze the risks of emerging technologies in order to get ahead of potential dangers. Disadvantages of this model include doubts about the feasibility of creating such an overarching institution and the need for an objective third-party assessor to spot potential risks.

The fifth model (stabilization and emergency response) focuses on preventing harm to social stability or international peace in the case of negative AI applications. Its functioning can be split into assessing risks, issuing warnings ahead of time, and mitigating harm by increasing transparency. This model has been applied in, for example, the Financial Stability Board, which has inspired a proposed Geotechnology Stability Board that would maintain a geopolitical balance regarding the use of AI. Another option is the creation of a platform where parties can jointly prepare for the possible impacts of emerging technologies. While this model has not yet received much criticism, some fear that a response to social harm might come too late in the case of an actual impact, pointing out that prevention is more effective than mitigation.

The sixth model focuses on international joint research by multiple collaborating states with a common goal. These goals can vary from the acceleration of AI development to the establishment of safeguarding measures. Examples hereof are the International Space Station and CERN, where interdisciplinary experts cooperate to forge scientific progress. This method would also be used in a proposed Multilateral AI Research Institute, a CERN for AI, et cetera. The main goals are typically to start scientific collaboration on specific topics in order to increase the available knowledge (e.g. AI safety) or to conduct more general technical research (incl. socio-economic aspects). Some authors also suggest setting up research institutions that would hold a monopoly on developing certain AI (e.g. AGI or forms of advanced AI). A possible danger of this model is that it may pull scientific experts away from other areas within AI governance where they are needed, such as the analysis of emerging AI.

The seventh model (distribution of benefits and access) aims to distribute access to and the advantages of AI to those who currently lack the benefits of similar technologies. An institution that already follows this model is Gavi, the Vaccine Alliance, which transfers knowledge and access to less-developed countries. A similar goal would be pursued by proposals made by experts, like the Frontier AI Collaborative or the International Digital Democracy Initiative. A realistic obstacle is finding a balance between rapid technological development and the distribution of knowledge, as well as finding and securing state participation in these projects.

The LPP finds that these proposals are a good step in the right direction, but remain too vague. This is why the paper concludes with a list of directions for further research in this area. One of these recommendations is to analyze the effectiveness of the proposals, namely to check whether the envisaged institutions would be able to enforce their standards, meet their objectives, etc. The relation between different proposed (and existing) institutions and the compatibility of their functions would also merit further research.