United States of America – Tech Companies’ Voluntary AI Commitments
Under the auspices of the White House, a group of leading American AI companies – Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI – has voluntarily agreed to a set of standards promoting the principles deemed fundamental to the future of AI: safety, security and trust.
What: policy-oriented document
Impact score: 4
For whom: AI users and developers, researchers, policymakers, and regulators.
On 21 July 2023, the White House published a document signed by seven leading AI companies – Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI – containing a list of commitments the companies are making to promote the safe, secure and transparent development and use of AI technology. Although non-binding and hence unenforceable, the commitments are intended to remain in effect “until regulations covering substantially the same issues come into force”.
More specifically, the companies commit to:
Ensuring products are safe before putting them on the market, by
- Conducting internal and external red-teaming of models or systems in areas including misuse, societal risks, and national security concerns;
- Working towards sharing information across the industry and with governments, civil society, and academia on trust and safety risks, dangerous or emergent capabilities, and attempts to circumvent safeguards. This includes adopting shared standards and best practices such as the NIST AI Risk Management Framework (find our publication on the Framework here).
Building AI systems that prioritize security, by
- Investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights, the most essential part of any AI system;
- Facilitating responsible third-party discovery and reporting of issues and vulnerabilities through, for example, bug bounty programs.
Earning the public's trust, by
- Developing and deploying robust mechanisms that enable users to know if audio or visual content is AI-generated, such as a watermarking system;
- Publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use, including discussions of societal risks such as discrimination and bias;
- Prioritizing research on harmful biases, discrimination, infringement of privacy, and other societal risks attached to AI systems;
- Developing and deploying advanced AI systems that are able to help address society’s greatest challenges, such as mitigating climate change and preventing cancer.
On 12 September 2023, eight additional tech companies – Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI and Stability – joined the pledge to develop AI technology that is safe, secure and trustworthy. Read more about their commitments here.
Furthermore, the Biden administration plans to release an executive order on AI in the coming weeks and has supported bipartisan legislation on the issue. More on that here.