Policy Monitor

AI Safety Summit – The Bletchley Declaration

The Bletchley Declaration represents a pivotal moment in the global governance of AI, uniting 28 countries in recognizing both the transformative potential and the inherent risks of advanced AI technologies. This international agreement emphasizes the need for human-centric, trustworthy AI development, prioritizing safety, human rights, and sustainable growth. It highlights the urgency of addressing the unique challenges posed by frontier AI, advocating for comprehensive international cooperation and policy-making to harness AI's benefits responsibly.

What: Policy orientation document

Impact score: 2

For who: policy makers, industry, businesses

URL: https://www.gov.uk/government/...

Publication date: 01/11/2023

The Bletchley Declaration marks a seminal moment in the global discourse on AI, uniting 28 countries in a commitment to harness AI's transformative potential while addressing its inherent risks. This international agreement emphasizes the crucial need to design, develop, deploy, and use AI in ways that are safe, human-centric, trustworthy, and responsible, with a focus on advancing values such as human well-being, peace, prosperity, and sustainable development. Acknowledging AI's pervasive role in critical areas like housing, employment, transportation, education, health, and justice, the declaration frames this as a pivotal moment to advocate for the safe development of AI so that its benefits can be harnessed inclusively worldwide.

Recognizing the significant risks AI poses, the declaration calls for international cooperation to address challenges related to human rights, transparency, fairness, accountability, regulation, safety, ethics, privacy, and data protection. Special emphasis is given to the safety risks associated with 'frontier AI': highly capable AI models, including foundation models and specific narrow AIs, which present substantial risks through potential misuse or loss of control. The declaration advocates for a balanced approach to AI governance, emphasizing the role of all stakeholders (nations, companies, civil society, and academia) in ensuring AI safety and bridging the digital divide. It highlights the need for entities developing frontier AI to take strong responsibility for its safety through rigorous testing and evaluation.

The agenda set forth in the declaration focuses on identifying shared AI safety risks, building a scientific and evidence-based understanding of these risks (including evaluating model capabilities and developing new standards to support governance), and developing risk-based policies across nations. It proposes supporting an international network for scientific research on frontier AI safety and maintaining a global dialogue that contributes to broader international discussions. Looking ahead, the declaration anticipates further meetings in 2024, signaling an ongoing commitment to refining AI governance and policy development in step with the rapidly evolving technology. This comprehensive approach underlines the necessity of balancing innovation with safety and ethical considerations, aiming to maximize AI's benefits while mitigating its risks on an international scale.