Data Cards Playbook

Google's PAIR team has released a playbook for ensuring transparency in data set documentation. The Data Cards Playbook is a human-centred, participatory way for interdisciplinary teams to capture and document the needs of stakeholders. The data cards, on which all information is recorded, are central: the playbook provides various exercises and templates to fill in the data cards step by step during the development process. End users can also be involved in this process.

The People+AI Research (PAIR) team is a multidisciplinary team at Google that explores the human side of AI by doing research, building and providing tools, and collaborating with different communities.

What you should know before reading further:
  • For whom: everyone in an organisation who works with data (sets)

  • Process phase: entire process

  • System component: data (set) documentation

  • Price indication: free


The playbook guides you step by step through various exercises. These are divided into two parts:
  1. Ask: from the perspective of the various stakeholders, you set up the Data Cards step by step.

  2. Inspect: evaluate your Data Cards along five dimensions and identify how they can be improved. There is also an exercise to ensure that the Data Cards are understandable to everyone.

The result is a set of Data Cards that documents the data sets in a structured and human-centred way and provides insight into the quality, validity, reusability and risks of the data. The information is presented in an accessible way so that all the different stakeholders can use it. The cards help in subsequent steps of the AI project to make informed decisions, so that the system can be designed in a responsible way.
Ethical values as mentioned in the tool
ALTAI principles
  • Transparency
  • Human agency & oversight
  • Privacy & Data Governance
  • Diversity, non-discrimination & fairness



Go to the tool and discover the templates.

This tool was not developed by the Knowledge Centre Data & Society. We describe the tool on our website because it can help you deal with ethical, legal or social aspects of AI applications. The Knowledge Centre is not responsible for the quality of the tool.