05.09.2025

UK - Memorandum of Understanding between the UK and OpenAI on AI opportunities

The UK government signed a non-binding MoU with OpenAI to explore the use of frontier AI in public services, deepen collaboration with the AI Security Institute, and support infrastructure linked to its sovereign AI agenda. The MoU, which creates no legal obligations, sets out four cooperation tracks: piloting adoption in government and industry, building infrastructure (including AI Growth Zones), developing skills and engagement, and sharing technical information on risks and safeguards. OpenAI framed the UK as a priority market and pledged to expand its London presence.

What: policy-oriented document

Impact score: 3

For whom: Policy makers, vendors & startups, academia & civil society

URL: https://www.gov.uk/government/publications/memorandum-of-understanding-between-the-uk-and-openai-on-ai-opportunities

Key takeaways for Flanders: 

  • More UK pilots with frontier models (justice, education, security), which may be useful as benchmarks for pilots in Flanders.
  • EU-UK data transfers remain covered by an extended adequacy decision through 27 December 2025. After that date, the European Commission must adopt a new adequacy decision for transfers to remain covered.
  • The MoU shows how governments can use non-binding agreements to accelerate pilots and signal strategic intent without immediately committing to procurement.

The UK government signed an MoU with OpenAI to accelerate the use of AI in public services and to advance the UK’s sovereign AI ambitions. The agreement is explicitly voluntary and not legally binding; it does not prejudge procurement but establishes a framework for collaboration. The MoU identifies four main pillars: piloting AI adoption in government and the private sector, developing infrastructure aligned with the UK’s AI Opportunities Action Plan (including potential participation in AI Growth Zones), supporting skills and public engagement, and sharing technical knowledge with the UK’s newly rebranded AI Security Institute.

OpenAI’s Role and Expansion

OpenAI positions the UK as one of its top markets and has committed to expanding its London office, which already employs more than 100 staff. The company highlighted existing use of its models in government prototypes such as Humphrey and Consult (a tool to review public responses to a consultation), presenting this MoU as a deepening of its public-sector presence. For the UK, OpenAI’s willingness to invest and collaborate signals alignment with the broader growth and innovation goals set out in the AI Opportunities Action Plan.

Governance Context

The MoU comes in the wake of the AI Safety Institute’s rebranding into the AI Security Institute, which reflects a shift in emphasis from broad “safety” to a narrower focus on security risks and potential misuse of AI. Within this context, the agreement underscores the UK’s strategy of positioning itself as a global hub for AI testing and deployment, while maintaining flexibility through non-binding commitments rather than binding contracts. The UK’s security-heavy framing contrasts with the EU’s risk- and rights-based approach.

Reactions and Debate

Reactions to the MoU have been mixed. Supporters see it as a strategic step that could strengthen the UK’s global AI position, encourage investment, and create space for rapid experimentation in the public sector. Critics, however, describe the deal as legally meaningless and warn of its vagueness and of the risks of government dependency on a single US-based vendor. Concerns have also been raised about transparency, data use, and the absence of clear safeguards for accountability and worker impact. Labour unions and rights groups emphasise the need for stronger oversight and public engagement before AI adoption is scaled across critical services such as justice, security, and welfare.