The UK government signed an MoU with OpenAI to accelerate the use of AI in public services and to advance the UK’s sovereign AI ambitions. The agreement is explicitly voluntary and not legally binding; it does not prejudge procurement but establishes a framework for collaboration. The MoU identifies four main pillars: piloting AI adoption in the public and private sectors, developing infrastructure aligned with the UK’s AI Opportunities Action Plan (including potential participation in AI Growth Zones), supporting skills and public engagement, and sharing technical knowledge with the UK’s newly rebranded AI Security Institute.
OpenAI’s Role and Expansion
OpenAI positions the UK as one of its top markets and has committed to expanding its London office, which already employs more than 100 staff. The company highlighted the existing use of its models in government prototypes such as Humphrey and Consult, a tool for reviewing public responses to consultations, and presented the MoU as a deepening of its public-sector presence. For the UK, OpenAI’s willingness to invest and collaborate signals alignment with the broader growth and innovation goals set out in the AI Opportunities Action Plan.
Governance Context
The MoU comes in the wake of the AI Safety Institute’s rebranding as the AI Security Institute, which reflects a shift in emphasis from broad “safety” to a narrower focus on security risks and potential misuse of AI. Within this context, the agreement underscores the UK’s strategy of positioning itself as a global hub for AI testing and deployment while maintaining flexibility through non-binding commitments rather than enforceable contracts. The UK’s security-heavy framing contrasts with the EU’s risk- and rights-based approach.
Reactions and Debate
Reactions to the MoU have been mixed. Supporters see it as a strategic step that could strengthen the UK’s global AI position, attract investment, and create space for rapid experimentation in the public sector. Critics describe the deal as legally meaningless and warn of its vagueness and the risk of government dependency on a single US-based vendor. Concerns have also been raised about transparency, data use, and the absence of clear safeguards for accountability and impacts on workers. Labour unions and rights groups emphasise the need for stronger oversight and public engagement before AI adoption is scaled across critical services such as justice, security, and welfare.