09.06.2025

United Kingdom – AI Security Institute

The United Kingdom has rebranded its 'AI Safety Institute' as the 'AI Security Institute', marking a significant shift in focus from general AI ethics to national security and protection against malicious use of artificial intelligence.  

What: Administrative decision

Impact score: 2

For whom: Government, businesses, researchers

URL: https://www.gov.uk/government/...

From AI Safety to AI Security: a shift towards protection against misuse of AI 

The United Kingdom recently rebranded its “AI Safety Institute” as the “AI Security Institute” to sharpen the institute’s focus on serious security threats posed by artificial intelligence. The change, announced at the Munich Security Conference by Peter Kyle, the UK Secretary of State for Science, Innovation and Technology, reflects a shift towards prioritising national security and public protection against AI abuse and crime, and is a foundational element of the UK’s AI strategy under its broader “Plan for Change”. The new name underscores that the institute is concerned with protection against intentional malicious applications of AI rather than with general societal issues.

The change in name is accompanied by a change in focus. Although the AI Safety Institute previously addressed a wide range of topics, the government has now clarified that matters related to freedom of speech or determining what counts as bias or discrimination will no longer be part of the institute’s areas of focus. Instead, the institute will home in on the most serious risks that new AI technologies may pose, such as the creation of chemical and biological weapons, cyberattacks, and criminal activities like fraud and child sexual abuse. A criminal misuse team is being established in collaboration with the UK Home Office to research how AI might be misused for such purposes, including the generation of child sexual abuse material, an area where the government plans to introduce specific legislation. 

The Institute itself states that governments must understand advanced AI in order to make informed policy decisions and to ensure public safety and security. Therefore, the Institute aims to conduct scientifically grounded assessments of these systems both before and after their release. Their current work focuses on understanding how AI could be misused, how effective existing safety and security measures are at preventing their own circumvention, and how autonomous AI systems could become. 

Collaboration with other bodies and experts 

To succeed in its mission, the AI Security Institute actively seeks collaboration with public and private parties. The Institute will work closely with other security bodies, such as the Defence Science and Technology Laboratory (the Ministry of Defence’s science and technology organisation) and the National Cyber Security Centre, to build a strong scientific foundation for understanding and mitigating AI-related threats.

Furthermore, the announcement is accompanied by a new agreement between the UK and the leading American AI company Anthropic. Through this collaboration, the UK government and Anthropic will share insights and jointly explore how AI tools can be used securely for the benefit of society, for example in better government services or scientific research. This partnership underscores that the UK is not only looking at fending off threats, but also at responsibly harnessing AI’s capabilities to unlock its full economic potential while maintaining a robust safety framework. Moreover, the UK plans to partner with several major AI companies to promote innovation and productivity.

Conclusion 

In summary, the former AI Safety Institute has been transformed into the AI Security Institute to emphasise the security of the United Kingdom and its citizens in the AI era. The focus has shifted from general AI ethics to concrete security threats, with new teams, collaborations with defence and technology partners, and targeted research. These changes aim to ensure that artificial intelligence can continue to evolve in ways that benefit society, without malicious abuse of the technology.