policy monitor

United States of America – Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

On 30 October 2023, President Biden signed an executive order containing an extensive list of measures that federal agencies will need to take in order to manage the risks of artificial intelligence. The order sets out eight principles and policy priorities aimed at protecting Americans from the potential risks of AI systems.

What: policy-oriented document;

Impact score: 3

For who: policymakers, civil society, federal US agencies, educational institutions

URL: https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

Summary

On 30 October 2023, President Biden issued an Executive Order (EO) with the goal of promoting the “safe, secure and trustworthy development and use of artificial intelligence”.

An executive order is a written directive signed and published by the President of the United States. It has the force of law (like regulations issued by federal agencies) but is not considered legislation. Executive orders are primarily used to manage federal government operations.

The EO on AI builds on earlier related US efforts like NIST’s AI Risk Management Framework and the Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights, as well as previous actions President Biden has taken. Our respective summaries can be found here:

Content

It is important to note that the EO does not adopt a single definition of AI systems and does not restrict itself to generative AI systems or systems leveraging neural networks. Throughout the EO, various terms such as ‘AI’, ‘AI model’, ‘AI system’, ‘generative AI’ and ‘machine learning’ are used interchangeably. Its scope can therefore be considered quite broad. Additionally, it is important to understand that the EO predominantly orders federal agencies to take certain future actions (e.g. establish guidelines, draft a report, require other parties to register certain activities), but that these actions have not yet materialized. The two most important sections of the EO are those regarding safe and secure AI and the federal government’s use of AI.

The Executive Order outlines eight guiding principles and related policy measures.

Ensure safe and secure AI technology through the development of standards, guidelines and best practices.

The first section focuses on the safety and security of AI systems.

  • Guidelines and best practices should be established, including developing a companion resource to the NIST AI Risk Management Framework for generative AI.
  • An initiative should be launched to create guidance and benchmarks for evaluating and auditing AI capabilities.
  • Guidelines should be established to enable developers of AI, especially of dual-use foundation models, to conduct AI red-teaming tests (i.e. structured testing of the system to identify flaws and vulnerabilities, often in a controlled environment). These efforts should also ensure the availability of testing environments, such as testbeds (mechanisms for testing tools and technologies such as AI and privacy-enhancing technologies, or PETs). This echoes the regime for AI regulatory sandboxes included in the EU AI Act.
  • Developers of potential dual-use foundation models must provide the Federal US Government, on an ongoing basis, with reports and records regarding e.g. the development and training of the model and its performance in AI red-teaming tests. These reporting requirements would apply to any model that was trained using a quantity of computing power greater than 10²⁶ integer or floating-point operations.
  • Companies that acquire or develop a potential large-scale computing cluster must report this acquisition or development and provide its location and amount of total computing power available. These reporting requirements would apply to any computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum computing capacity of 10²⁰ integer or floating-point operations per second. (This requirement does not appear to be limited to US territory or US organisations.)
  • Infrastructure as a Service (IaaS) Providers (defined in EO 13984) should report when a foreign person transacts with that US IaaS Provider to train a large AI model “with potential capabilities that could be used in malicious cyber-enabled activity” (which is defined through technical parameters). Moreover, IaaS Providers must also ensure that foreign resellers of US IaaS Products verify the identity of any foreign person that obtains an IaaS account from the foreign reseller (regardless of the envisaged activities on that infrastructure).
  • A public report shall be issued on best practices for financial institutions to manage AI-specific cybersecurity risks.
  • Tools will be developed to evaluate AI capabilities to generate outputs that may represent nuclear, nonproliferation, biological, chemical, critical infrastructure, and energy-security threats or hazards.
  • The potential for AI to be misused to enable the development or production of chemical, biological, radiological and nuclear (CBRN) threats should be evaluated and reports regarding this subject shall be submitted to the President. The EO also contains several measures regarding synthetic nucleic acid sequencing (without clear link to AI).
  • The EO also requires the publication of several documents regarding synthetic content, including (i) a report identifying the existing standards, tools, methods, and practices for e.g. authenticating content and tracking its provenance, labeling synthetic content or detecting synthetic content; (ii) guidance regarding the existing tools and practices for digital content authentication and synthetic content detection measures; and (iii) guidance for federal agencies for labeling and authenticating synthetic content.
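The two quantitative reporting thresholds above (for model training and for computing clusters) can be expressed as simple predicates. The threshold values come from the EO itself; the function names and inputs below are illustrative assumptions, a minimal sketch rather than any official compliance tool.

```python
# Sketch of the EO's two technical reporting thresholds.
# Threshold values (1e26 operations for training; 100 Gbit/s networking
# and 1e20 operations/second for clusters) are taken from the EO text.
# Function names and parameters are hypothetical.

def model_must_report(training_operations: float) -> bool:
    """Reporting applies to models trained with a quantity of computing
    power greater than 1e26 integer or floating-point operations."""
    return training_operations > 1e26

def cluster_must_report(network_gbit_s: float, peak_ops_per_s: float) -> bool:
    """Reporting applies to co-located clusters networked at over
    100 Gbit/s with a theoretical maximum computing capacity above
    1e20 integer or floating-point operations per second."""
    return network_gbit_s > 100 and peak_ops_per_s > 1e20

# Example: a hypothetical frontier-scale training run and a mid-size cluster.
print(model_must_report(3e26))         # True: exceeds the 1e26 threshold
print(cluster_must_report(400, 5e19))  # False: capacity is below 1e20 ops/s
```

Note that both cluster conditions must hold simultaneously: a fast network alone, or high peak compute alone, does not trigger the reporting duty as the EO describes it.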

Promote responsible innovation, competition, and collaboration

The EO lists measures regarding attracting talent and stimulating R&D, while simultaneously addressing new intellectual property questions and other issues in order to protect inventors and creators.

  • The US will establish a program to identify and attract top talent in AI and other critical and emerging technologies at universities, research institutions, and the private sector overseas. Additionally, competent authorities are allowed to use their discretionary authorities to support and attract foreign nationals with special skills in AI and other critical and emerging technologies.
  • A pilot program will be launched, by NSF, implementing the National AI Research Resource (NAIRR). This program aims to pilot an initial integration of distributed computational, data, model, and training resources to support AI-related R&D. These resources will be made available to the research community.
  • The USPTO will publish guidance regarding inventorship and the use of (generative) AI, patent eligibility and other guidance regarding the intersection of AI and IP. It will also issue recommendations to the President regarding potential executive actions relating to copyright and AI.
  • A report on the potential role of AI in research aimed at tackling major societal and global challenges, such as climate change, shall be published.
  • Competent agencies (incl. the Federal Trade Commission) are encouraged to use their authorities to promote competition in AI and related technologies, as well as in other markets (e.g. semiconductor markets).

Support American workers

The actions proposed in this regard are rather limited and include, for example:

  • A report on the labor-market effects of AI
  • The development of principles and best practices for employers to mitigate AI’s potential harms to employees’ well-being and maximize its potential benefits. Subsequently, competent agencies need to encourage the adoption of these guidelines through their existing programs.
  • The NSF shall support AI-related education and AI-related workforce development through existing programs.

Advance equity and civil rights

In this section, a variety of agencies and institutions are addressed and obliged to take various measures to prevent civil rights violations and discrimination.

  • The US Attorney General should provide (i) guidance on best practices for investigating and prosecuting civil rights violations and discrimination related to AI, and (ii) best practices for law enforcement agencies including safeguards and appropriate use limits for AI.
  • Also, a report addressing the use of AI in the criminal justice system shall be submitted to the President (regarding the use of AI in sentencing, parole, bail, etc.).
  • Agencies shall use their respective civil rights and civil liberties offices and authorities to prevent and address unlawful discrimination and other harms that result from uses of AI in Federal Government programs and benefits administration.
  • The Secretary of Labor shall publish guidance for federal contractors regarding nondiscrimination in hiring involving AI and other technology-based hiring systems.

Protect American consumers, patients, passengers and students

This section starts with a general reminder that “independent regulatory agencies are encouraged to consider using their full range of authorities to protect American consumers from fraud, discrimination, and threats to privacy and to address other risks that may arise from the use of AI.” Other actions put forward are:

  • Development of a strategic plan on responsible deployment and use of AI and AI-enabled technologies in the health and human services sector
  • Development of a strategy to determine whether AI-enabled technologies in the health and human services sector maintain appropriate levels of quality
  • Establishment of an AI safety program focused on healthcare
  • Development of a strategy for regulating the use of AI or AI-enabled tools in drug-development processes
  • Development of resources, policies, and guidance regarding safe, responsible, and nondiscriminatory uses of AI in education (incl. an AI Toolkit for education leaders)

Protect privacy and civil liberties

The EO broadly states that it aims to ensure that the collection, use and retention of data is lawful, secure and minimizes any privacy and confidentiality risks. However, the concrete actions in this regard are very limited and focus primarily on several evaluations that should be conducted and PET R&D. No actual regulatory measures are proposed.

Manage the federal government’s use of AI

The section lists various measures that intend to enhance the federal government’s internal capacity to regulate, govern and promote responsible use of AI to deliver better results for Americans. It includes:

  • Development of guidance for agencies to strengthen the effective and appropriate use of AI, advance AI innovation, and manage risks from AI in the Federal Government.
  • The requirement that each agency should designate a Chief Artificial Intelligence Officer (CAIO) who shall hold the primary responsibility for coordinating their agency’s use of AI.
  • Establishment of minimum risk-management practices for government uses of AI that impact people’s rights or safety
  • Drafting of AI strategies in order to pursue high-impact use cases
  • Development of a method for agencies to track and assess their ability to adopt AI into their programs and operations, manage its risks, and comply with Federal policy on AI.
  • Instructions for agencies regarding the collection, reporting, and publication of agency AI use cases.
  • Discouraging agencies from imposing broad general bans or blocks on agency use of generative AI. Agencies should instead limit access, as necessary, to specific generative AI services based on specific risk assessments and establish (internal) guidelines and limitations on the appropriate use of generative AI. This will be further complemented by the development of (general) guidance on the use of generative AI for work by the Federal workforce.
  • Intention to attract AI talent in the Federal Government through a variety of measures.
  • Implementation of AI training and familiarization programs for employees, managers, and leadership in technology as well as relevant policy, managerial, procurement, regulatory, ethical, governance, and legal fields.

Strengthen U.S. leadership abroad

This section highlights the US ambition to collaborate with international partners and allies on AI policy. In line with previous domestic initiatives and the EO itself, the EO stresses “encouraging international allies and partners to support voluntary commitments” and “developing common regulatory and other accountability principles for foreign nations”. This latter part is especially interesting, as the US itself has not yet defined binding accountability principles for AI.

Finally, the EO also foresees the publication of (i) an “AI in Global Development Playbook” that should incorporate the NIST AI Risk Management Framework’s principles, guidelines, and best practices into the social, technical, economic, governance, human rights, and security conditions of contexts beyond United States borders, and (ii) a “Global AI Research Agenda” to guide the objectives and implementation of AI-related research in contexts beyond United States borders. Again, given that these kinds of documents are not even available for situations within the US, it is interesting to see that it is a policy priority to instruct other countries on e.g. what research agenda they should pursue and how.

Conclusion

The EO includes an impressive list of documents that will need to be delivered by federal US agencies in the near future. Apart from some exceptions (e.g. the reporting obligation for computing clusters or the appointment of CAIOs), the EO does not contain binding regulatory proposals or initiatives. Given that 2024 is a presidential election year, it remains to be seen how many of these documents will actually be published and which future policy initiatives they may instigate.