policy monitor

United Kingdom – Guidance to civil servants on use of Generative AI

The Cabinet Office of the United Kingdom and the Central Digital and Data Office have published a guidance paper for civil servants on the proper use of generative AI systems, such as DALL-E or ChatGPT, when operating such systems on behalf of the government. The guidance aims to enable the use of generative AI so that its many opportunities can be seized, while preventing harm related to privacy risks, bias and misleading information. To that end, it sets out general principles and practicalities, together with examples of both beneficial and risky uses of generative AI.

What: policy orienting document

Impact score: 3

For whom: civil servants, government institutions

URL: https://www.gov.uk/government/publications/guidance-to-civil-servants-on-use-of-generative-ai/guidance-to-civil-servants-on-use-of-generative-ai

Key take-away for Flanders/Belgium: at both the federal and the Flemish level, an ethical framework on the use of AI by the administration is being developed. These Belgian frameworks are broader in scope than the UK guidelines, however, and are not limited to generative AI. Nevertheless, this document can serve as inspiration.


The Cabinet Office and the Central Digital and Data Office of the United Kingdom (UK) published a paper providing guidance for civil servants on the use of generative AI, such as ChatGPT or Bard. Civil servants who operate such systems in the course of their work for the government are encouraged to be cautious of the risks of generative AI, although the document also recognizes the many advantages and opportunities that generative AI offers civil servants. David Knott, the Chief Technology Officer for the UK government, stated that

“we are actively encouraging and enabling civil servants to get familiar with the technology and the benefits it could provide, but to do so in a safe and responsible manner”.

The government also stated in the guidance that it will be reviewed after six months in order to reflect new developments.


The guidance begins with a summary of the most important considerations when using generative AI, reminding civil servants that compliance with the UK General Data Protection Regulation (GDPR) remains required. It emphasizes that sensitive or classified government data should never be entered into Large Language Models (LLMs), given the risk that it could leak into the public domain. More generally, civil servants need to recognize that generative AI often lacks contextual understanding and may introduce biases, which could lead to harmful output.

In addition to those considerations, two general principles are established:

  1. Civil servants need to be aware of what information they have access to, what rights and restrictions apply to that information, and the conditions under which that information can or should be shared.
  2. Civil servants need to be mindful of any systems into which they enter information: does that information need to be entered into that system? What will be done with the information once it has left their possession? What rights are being given away by placing the information elsewhere?

Additionally, the guidance sets out three questions to keep in mind when using such technology: understanding how a question will be used by the generative AI system, acknowledging the possibility of receiving misleading answers, and understanding how LLMs function. These are also called the three Hows:

  • How will the question be used by the system?
  • How can answers from generative AI mislead the user?
  • How does generative AI operate?

Next, the paper sets out certain practicalities of using generative AI systems, such as using a work email address and disclosing that an AI programme has been consulted for a specific task (e.g. in a footnote).

In its concluding chapter, the guidance provides several examples of appropriate and inappropriate uses of generative AI in government.

  • Examples of appropriate use include: using generative AI as a research tool for background information, asking it to summarize publicly available information, asking it to write code to save time, or tasking it with textual data analysis. Even for these tasks, the three “how” questions need to be considered before using generative AI.
  • Examples of inappropriate use include letting generative AI produce written output concerning confidential data (e.g. upcoming policy changes) or inputting data for data analysis without the data owner's consent.

In this guidance paper, the Cabinet Office of the UK government confirmed its intention to use AI to improve and support its work. In addition to publishing this paper, the UK government has created a new Department for Science, Innovation and Technology (DSIT) to address upcoming issues and pave the way for a beneficial use of generative AI in the public sector.